
DeepSeek Topples Nvidia: $600B Market Crash Shocks Tech World!

DeepSeek’s rise triggers Nvidia’s biggest-ever $600 billion market crash

Nvidia Corp. has recorded the largest single-day market value loss in US stock market history, erasing $600 billion as investor concerns over Chinese AI startup DeepSeek rocked the semiconductor giant. Nvidia shares plummeted as much as 17% on Monday, January 27; the resulting loss far surpassed the company's previous record single-day decline of $279 billion, set in September last year.

The quality and cost efficiency of DeepSeek's models have flipped the prevailing narrative about what frontier AI must cost; even if this particular Chinese model flops in the long run, the fact that it was developed with a fraction of the financial and technological resources available to Western firms is an eye-opener.

So how much of a disruptor is it?

Well, last month DeepSeek's creators said training the V3 model required less than $6 million in computing power, using Nvidia's mid-range H800 chips (although critics say adding the costs of earlier development stages could push the eventual figure north of $1 billion). "Did DeepSeek really build OpenAI for $5 million? Of course not," Bernstein analyst Stacy Rasgon told Reuters.

But break down the available financials and it gets quite remarkable.

OpenAI's o1 charges $15 per million input tokens.

DeepSeek’s R1 charges $0.55 per million input tokens.

The pricing, therefore, absolutely blows the competition away.

And, depending on end-use cases, DeepSeek is believed to be between 20 and 50 times more affordable and efficient than OpenAI's o1 model. In fact, logical reasoning test scores are staggering; DeepSeek outperforms ChatGPT and Claude AI by seven to 14 per cent.

Dev.to, a popular online community for software developers, said it scored 92 per cent on complex problem-solving tasks, compared to 78 per cent for GPT-4.

Input tokens, by the way, are the units of text that make up a prompt or question; they are what the model reads in order to understand the context of a query or instruction.
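To make the price gap concrete, here is a minimal Python sketch of the input-token arithmetic using the list prices quoted above; the 1.5-million-token workload is a hypothetical figure chosen purely for illustration.

```python
# Rough input-token cost comparison, using the per-million list prices quoted above.
# The 1.5M-token workload is an illustrative assumption, not real usage data.

PRICES_PER_MILLION_INPUT_TOKENS = {
    "OpenAI o1": 15.00,    # USD per 1M input tokens
    "DeepSeek R1": 0.55,   # USD per 1M input tokens
}

def input_cost(tokens: int, price_per_million: float) -> float:
    """Return the cost in USD of sending `tokens` input tokens at the given rate."""
    return tokens / 1_000_000 * price_per_million

example_tokens = 1_500_000  # hypothetical workload: 1.5M input tokens

for model, price in PRICES_PER_MILLION_INPUT_TOKENS.items():
    print(f"{model}: ${input_cost(example_tokens, price):.2f}")
# OpenAI o1:  $22.50
# DeepSeek R1: $0.83  (roughly 27x cheaper on input tokens at these list prices)
```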

For context, OpenAI is believed to spend $5 billion every year to develop its models.

So, even if DeepSeek’s critics (see above) are right, it is still a fraction of OpenAI’s costs.

Such spending translates, as OpenAI chief executive Sam Altman has pointed out, into significantly greater computing capability; that the DeepSeek model appears to deliver comparable processing power on a relatively shoestring budget is an eyebrow-raiser.

Google boss Sundar Pichai went one step further, telling CNBC at Davos: "I think we should take the development out of China very seriously." And US President Donald Trump called it a "wake-up call".

Then there are the hundreds of billions of dollars that US companies have lost in this week's tech-stock rout: chip-maker Nvidia, for example, shed over $600 billion in market value, and the tech-heavy Nasdaq index finished Monday down more than three per cent, with a further drop possible depending on the upcoming earnings reports of AI giants Meta and Microsoft.

For context, Meta and Microsoft both have their own AI models, at the forefront of which are Llama and Copilot; the former is an LLM first released in February 2023, and the latter is now an integrated feature in various Microsoft 365 applications, such as Word and Excel.

While neither is, arguably, on the same technical level as OpenAI's ChatGPT, Meta and Microsoft have invested billions in AI and LLM projects, both in the US and abroad; some analysts believe big US cloud companies will spend $250 billion this year on AI infrastructure alone.

But what really makes DeepSeek special is more than the cost and technology.

It is that, unlike its competitors, it is genuinely open-source.

The R1 code is completely open to the public under the MIT License, which is a permissive software license that allows users to use, modify, and distribute software with few restrictions.

This means you can download it, use it commercially without fees, change its architecture, and integrate it into any of your existing systems.
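As a rough illustration of what that permissive licensing means in practice, the sketch below loads a small, distilled R1 checkpoint locally with the Hugging Face transformers library; the model identifier, distilled variant, and generation settings are assumptions for illustration, and the full R1 model requires far more hardware than a typical workstation.

```python
# A minimal sketch of running an MIT-licensed R1 checkpoint locally with
# Hugging Face transformers. The model id below (a small distilled R1 variant)
# and the generation settings are assumptions for illustration; the full R1
# model needs far more memory and compute.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed Hugging Face model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain what an input token is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short completion; no API key or per-token fee is involved,
# because the weights run locally under the MIT License.
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```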

DeepSeek is also faster than GPT-4, more practical and, according to many experts, better at understanding regional idioms and cultural contexts than its Western counterparts.

There is much more to consider.

How, for example, does DeepSeek affect diplomatic and military ties between China and the US (and, indeed, India), and what are the ethical problems raised by truly open-source AI models?

But what is undeniable is that China's DeepSeek is a disruptor. And experts believe China has now leapfrogged from being 18 months behind state-of-the-art US AI models to just six months behind.

Meanwhile, DeepSeek’s success has already been noticed in China’s top political circles.
