Winners, dark horses, and threats for the new era of AI
AI's barrier to entry just got lower. When the Chinese AI app DeepSeek debuted on the App Store in January, it knocked OpenAI's ChatGPT off the top spot, and Nvidia lost nearly $600 billion of market cap in a single day. The reason for the market's overreaction was straightforward: DeepSeek's R1 model upended key assumptions about what it costs, in terms of investment and computing power, to develop a model. DeepSeek distilled its AI models without access to the most advanced U.S. hardware.
DeepSeek's R1 model is as efficient as, if not more efficient than, those developed by OpenAI, Anthropic, Google, and other companies, in terms of the quality of its output. It also appears to have been dramatically cheaper to develop. The topline numbers are US$6 million and about two months of training, which on the surface is dramatically different from the larger American incumbents, such as Microsoft, Amazon, Meta, and OpenAI, which have relied on cost-intensive computational power (data centers and chips) to develop their AI models.
Whether it was innovation by necessity or just smarter thinking, DeepSeek has figured out that the way to get better AI models is not just bigger infrastructure and sheer brute-force capacity, but better algorithms.
This presents a new paradigm for AI. The new model is: Algorithmic efficiency + domain-specific intelligence = better AI.
What it means for the incumbents
The new game for AI has implications for those already leading on AI, and for those who, up until now, have found it cost-prohibitive.
For the former, AI leaders must embrace hybrid approaches that combine efficient algorithms, smart compute scaling, and a strategy for enterprise AI monetization. Those with greater adaptability will win in the market, while those that cling to brute-force, GPU-heavy models risk declining investor confidence.
Geopolitically speaking, this shift puts the U.K. and Europe back in the game. China isn't the only winner—firms across Europe could now come up with models that can run as well as anything coming out of Silicon Valley.
Meanwhile, the Nvidias and Intels of the world are also poised for gains. Lower AI costs won't dampen demand for compute power, because that isn't how efficiencies work. The Jevons Paradox, originally applied to coal use in the age of the steam engine, states that efficiency gains often lead to greater overall consumption, not less.
Lower prices don't mean AI becomes a commodity, any more than Amazon became a commodity by continually lowering prices. In fact, the opposite happens:
- Lower AI costs mean more business cases suddenly become profitable.
- Lower AI costs unlock new demand from industries and applications that previously couldn't justify the investment.
- Lower AI costs increase revenue and value creation, fuelling even greater adoption.
That's the Jevons Paradox in action. AI isn't getting cheaper so companies will spend less on it—it's getting cheaper so they can spend more, faster, and in more areas than ever before.
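The Jevons dynamic can be sketched with simple arithmetic. The numbers below are hypothetical, chosen only to illustrate the mechanism: if a drop in per-unit cost unlocks enough new demand, total spending rises even though each unit is cheaper.

```python
# Illustrative Jevons Paradox arithmetic with hypothetical numbers:
# a per-unit cost drop that unlocks enough new demand raises total spend.

def total_spend(cost_per_unit: float, units_demanded: int) -> float:
    """Total spend is simply price times quantity."""
    return cost_per_unit * units_demanded

# Hypothetical assumption: halving the unit cost makes three times as
# many business cases viable (an assumed elasticity, not a measured one).
spend_before = total_spend(cost_per_unit=1.0, units_demanded=100)  # 100.0
spend_after = total_spend(cost_per_unit=0.5, units_demanded=300)   # 150.0

# Cheaper per unit, yet 50% more total spending on AI.
assert spend_after > spend_before
```

The assumption doing the work here is elasticity: the paradox only bites when demand grows faster than costs fall, which is exactly what the bullet points above argue is happening with AI.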
The companies who have been boxed out until now
One of the biggest barriers to widespread AI adoption has been cost. Many businesses have been hesitant to invest because the return on investment was difficult to justify. That's been the conversation for the last two years.
But DeepSeek just changed the game.
Computing costs today are likely 50% cheaper than they were a year ago. What cost OpenAI $25 million to train in 2023 would likely cost $12.5 million today. And those costs will continue to drop. This acceleration means our clients need new strategies. AI investments that were once too expensive, too speculative, or too difficult to justify are suddenly within reach. The businesses that move first will gain an edge.
One of our clients, a leading pet-care CPG company, had identified six LLM (large language model) use cases a few months ago. They had estimated that the cost to train and deploy each of these models into production would be anywhere from $5 million to $6 million. Since DeepSeek's announcement, they estimate this cost to be 80% cheaper. Imagine how much more this CPG company can now do with the original budget.
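The back-of-envelope math makes the point concrete. The sketch below uses the article's own figures (six use cases, a $5 million to $6 million cost each, an 80% reduction); the $5.5 million midpoint is an assumption for illustration.

```python
# Hypothetical back-of-envelope math for the CPG example above, using
# the article's figures: six use cases at roughly $5.5M each (midpoint
# of the $5M-$6M estimate), then an 80% cost reduction.

use_cases = 6
cost_before = 5_500_000                    # dollars per use case
original_budget = use_cases * cost_before  # $33M total

cost_after = cost_before // 5              # 80% cheaper -> 1/5 the cost
affordable_now = original_budget // cost_after

print(f"Use cases affordable with the original budget: {affordable_now}")
```

On those assumptions, the same budget that once covered six use cases now covers around thirty, which is the Jevons dynamic playing out at the level of a single company.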
A caveat for the next phase: Security
Recent research suggests that while DeepSeek might be efficient, it is far from secure. Early testing indicates that DeepSeek's models fail basic security tests, leaving them vulnerable to adversarial attacks and potential misuse.
This raises a crucial question: Is DeepSeek's cost-saving approach sacrificing AI security?
For enterprises, security is just as important as efficiency, especially when deploying AI in regulated industries, financial services, healthcare, or any environment where data privacy is critical. If DeepSeek's approach to cost-cutting introduces vulnerabilities, companies must weigh the risks against the benefits before adoption.
Moreover, as AI becomes embedded into infrastructure, governments and regulators may begin enforcing stricter security standards—meaning that cheaper AI models like DeepSeek's might not even be compliant in some markets.
The AI game has opened up: there are huge opportunities for new players, but businesses must look beyond the cost savings and consider whether the AI models they adopt can withstand real-world threats.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.