What do the world's top AI minds say about DeepSeek?
The world's leading AI experts have analyzed the speed at which China is closing the gap with the US in the AI race, as DeepSeek pioneers an open-source strategy.
Leading experts in the field of artificial intelligence (AI) have recognized the remarkable achievements of Chinese startup DeepSeek.
However, they also warn against overestimating the company's success, especially as the tech industry weighs the real-world impact of the advanced AI models DeepSeek develops at a fraction of the cost of conventional ones.

Influential figures in the AI world, including Sam Altman, CEO of OpenAI, and Andrew Ng, a scientist formerly at Baidu and Google, have praised DeepSeek's open-source approach.
The accolades come after the company launched its cutting-edge AI model, which has garnered widespread attention from the tech community. While DeepSeek is making a significant mark, questions remain about its long-term sustainability and competitiveness against industry giants.
Based in Hangzhou, capital of Zhejiang province (China), DeepSeek has shaken the global AI industry with the launch of its open-source advanced AI model DeepSeek R1. Announced on January 20, DeepSeek R1 demonstrates performance comparable to closed-source models from OpenAI – the company behind ChatGPT – but was developed with significantly lower training costs.
In addition, DeepSeek revealed that its foundational large language model, DeepSeek-V3, released just weeks earlier, cost a mere $5.6 million to train. The news raised concerns that tech companies may be overspending on graphics processing units (GPUs) for AI training. These concerns contributed to a sell-off in shares of Nvidia, a leading AI chip supplier, last week.
During an “Ask Me Anything” session on Reddit last week, OpenAI CEO Sam Altman admitted that the company has been on the wrong side of history with its closed-source strategy and needs to rethink its approach to open source.
OpenAI now tightly controls information about the training process, energy costs, and technical details of its advanced AI models.
However, Altman also emphasized that not everyone at OpenAI shares this view, and that transitioning to open source is not a top priority for the company at the moment.
Meanwhile, Andrew Ng – founder and former director of Google Brain, and former chief scientist at Baidu – said that the rise of DeepSeek and its domestic competitors is evidence that China is rapidly narrowing the gap with the US in the AI race.
"When ChatGPT launches in November 2022, the US will still be far ahead of China in the field of generative AI, but in fact, this gap has narrowed significantly in just the past two years," Andrew Ng wrote on the social networking platform X.
He stressed that with the emergence of a series of AI models from China such as Qwen, Kimi, InternVL and DeepSeek, China has clearly narrowed the gap with the US. In some areas, such as video-generating AI, there was even a time when China took the lead.
Qwen is a product of the Alibaba Group, Kimi was developed by the startup Moonshot AI, and InternVL comes from the Shanghai AI Lab, a state-backed organization. Together, these names are driving the strong rise of Chinese AI on the international stage.
Andrew Ng commented: "If the US continues to suppress the development of open source, China will dominate this area in the AI supply chain. At that time, many businesses around the world will use models that reflect Chinese values and thinking more than American ones."
Meanwhile, Shawn Kim, an analyst at the New York-based investment bank Morgan Stanley, commented that DeepSeek is gaining widespread recognition as major US technology corporations embrace the Chinese AI startup's model.
Since its launch, the advanced AI model DeepSeek R1 has attracted the attention of many large companies around the world and has been integrated into their services. For example, the American semiconductor company Nvidia has made the model available through its NIM microservice, making it easy for users to access and deploy.
Meanwhile, Microsoft, one of OpenAI's investors, has also announced support for DeepSeek R1 on the Azure cloud computing platform and GitHub.
Not to be outdone, Amazon.com has enabled customers to use Amazon Web Services (AWS) to build applications based on DeepSeek R1, expanding access to this cutting-edge AI technology.
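For readers curious what "building applications based on DeepSeek R1" can look like in practice, the sketch below shows one common pattern: calling an R1-style model through an OpenAI-compatible chat endpoint. This is a minimal illustration only; the base URL, model identifier, and environment variable are assumptions, and the hosted offerings from Nvidia, Microsoft, and Amazon each expose the model through their own SDKs and endpoints.

```python
# Minimal sketch: querying a DeepSeek R1-style model via an
# OpenAI-compatible chat endpoint. The base URL, model name, and the
# DEEPSEEK_API_KEY variable are illustrative assumptions, not a
# definitive integration guide for any specific platform.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",      # assumed endpoint
    api_key=os.environ["DEEPSEEK_API_KEY"],   # assumed credential variable
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed identifier for the R1 model
    messages=[
        {"role": "user", "content": "Summarize why open-source AI models matter."}
    ],
)

print(response.choices[0].message.content)
```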
While DeepSeek is attracting a lot of attention, some experts say the true scale and impact of the breakthrough may have been overestimated.
Yann LeCun, chief AI scientist at Meta Platforms, dismissed the notion that DeepSeek is helping China overtake the US in the AI race. “The correct understanding is that open-source models are catching up and even surpassing proprietary models,” he asserted on Threads.
Despite its buzz, DeepSeek has faced a lot of skepticism, especially around its actual costs and AI training methods. The company, spun out of founder Liang Wenfeng’s hedge fund High-Flyer Quant in May 2023, has not been fully transparent about the total cost of developing its models.
According to Professor Zheng Xiaoqing of Fudan University (China), the $5.6 million figure that DeepSeek announced for training the DeepSeek-V3 model does not take into account earlier research and testing costs.
In an interview with China's leading economic newspaper National Business Daily, he said DeepSeek's success was largely due to technical optimization, but would not have a significant impact on the AI chip market or hardware supply chain.