Tech stocks lost more than a trillion dollars in value on Monday after a Chinese company launched an open-source AI model that’s far cheaper than its US competitors.
So what? This might be AI’s Sputnik moment. In 1957, the Soviet Union’s satellite launch shocked the US. Now, markets are wondering if America is slipping in the global race for AI dominance.
Relentless growth. OpenAI and other leading US AI firms have long argued that relentlessly scaling AI models – training them on ever-larger numbers of chips – is the key to achieving AI breakthroughs.
Their push convinced Donald Trump, who in his first week in office announced $500 billion of private funding for AI infrastructure. Meanwhile, UBS predicts that the four biggest US tech firms will funnel $280 billion into AI this year. The weekend buzz around DeepSeek sparked concerns that all this spending may amount to overinvestment. Trump called it a wake-up call.
Unintended consequences. DeepSeek’s rise coincides with US restrictions on the sale of advanced chip technology to China. Those export controls may have pushed Chinese developers to achieve more with less.
Liang Wenfeng, the 40-year-old founder of DeepSeek, reportedly stockpiled Nvidia A100 chips before they were banned from export to China and used them to launch DeepSeek, later pairing them with lower-power chips that aren’t covered by the export controls.
DeepSeek released a comprehensive paper showing its workings. The result is a capable AI model trained in a similar way to OpenAI’s o1 but requiring far less computing power, thanks to ultra-efficient engineering optimisations. So much for the Silicon Valley ethos of bigger is better.
But is it really that cheap? The conspiracy theory on tech forums is that this is a Chinese “psyop” – psychological operation – designed to rattle the US. A handful of researchers have suggested that DeepSeek had access to more computing power than it reports, but there is no evidence for that.
The numbers may still be misleading. “While [DeepSeek’s] training run was very efficient, [the model] required significant experimentation and testing to work,” says Dylan Patel, chief analyst at chip consultancy SemiAnalysis. Unlike OpenAI, DeepSeek didn’t include all of the costs associated with research and development of its models in its cost analysis. Patel thinks the real training figure is much higher: “Deepseek has spent well over $500 million on GPUs over the history of the company.”
What’s more… Even if DeepSeek’s model is cheaper, more efficient and more accessible, it won’t necessarily devalue chip-makers like Nvidia in the long run. Patel thinks advancements in AI will only remind people of the value of this technology and make them want it more. That will require more chips. See the Jevons paradox: as a resource becomes cheaper to use, total demand for it tends to rise rather than fall.