NVIDIA's Nightmare, But Startups' Dream? How DeepSeek's R1 Model Disrupts the AI Ecosystem
Chinese AI startup DeepSeek has disrupted the U.S. artificial intelligence ecosystem with its latest large model, whose release wiped hundreds of billions of dollars off chip leader NVIDIA's market value overnight. At the same time, smaller AI companies see DeepSeek as an opportunity to scale up.
Several AI chip companies have stated that DeepSeek's emergence represents a "huge opportunity" for them rather than a threat.
Andrew Feldman, CEO of chip startup Cerebras Systems, said, "Developers are very keen to replace OpenAI's expensive and closed models with open-source models like DeepSeek R1..."
Cerebras is one of the few challengers to NVIDIA in training AI models, offering cloud-based services through its own computing clusters. Feldman noted that the release of the R1 model drove one of the largest spikes in demand for Cerebras' services that the company has ever seen.
Feldman added, "R1 shows that [AI market] growth will not be dominated by a single company — hardware and software moats do not exist for open-source models."
The open-source R1 inference model released by DeepSeek at the end of last month rivals the best U.S. technology and has stunned global markets by achieving cutting-edge performance at a low cost.
Feldman said, "Like in the PC and internet markets, falling prices help fuel global adoption. The AI market is on a similar secular growth path."
The "Surge" of Inference Chips
Chip startups and industry experts believe that by accelerating AI's shift from the "training" phase to the "inference" phase, DeepSeek could speed the adoption of new inference chip technologies.
Inference refers to using a trained AI model to make predictions or decisions based on new information, as opposed to training, the phase in which a model is built and its parameters are learned from data.
Philip Lee, a semiconductor equity analyst at Morningstar, pointed out, "To put it simply, AI training is about building a tool, or algorithm, while inference is about actually deploying this tool for use in real applications."
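As a rough illustration of the distinction Lee describes, the sketch below uses plain PyTorch on made-up toy data (not tied to DeepSeek or any vendor's hardware): the training loop is the compute-heavy part that builds the "tool," while inference is a single, much cheaper forward pass on new input.

```python
# Minimal sketch, assuming a toy regression task (y = 2x + 1):
# "training" repeatedly updates the model's weights with gradient descent,
# while "inference" only runs a forward pass on new data, with no updates.
import torch
import torch.nn as nn

# Toy dataset the model will learn from.
x = torch.linspace(0, 1, 32).unsqueeze(1)
y = 2 * x + 1

model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Training phase: repeated forward + backward passes to fit the weights.
for _ in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Inference phase: one forward pass on unseen input, no gradients needed.
with torch.no_grad():
    prediction = model(torch.tensor([[0.5]]))
print(prediction)  # approximately 2 * 0.5 + 1 = 2.0
```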
While NVIDIA dominates the GPU market for AI training, many competitors see room for expansion in the inference space, promising greater efficiency at lower costs.
Lee added that AI training requires massive computing power, but inference can be done with less powerful chips programmed to perform a narrower range of tasks.
Many industry insiders say that as customers adopt and build on DeepSeek's open-source models, demand for inference chips and computing is rising.
Sid Sheth, CEO of AI chip startup d-Matrix, said, "[DeepSeek] has demonstrated that smaller open models can be trained to be as capable or more capable than larger proprietary models and this can be done at a fraction of the cost."
He also noted that the company has recently seen a doubling of interest from global customers in accelerating their inference plans.
Robert Wachen, co-founder and COO of AI chip manufacturer Etched, said that since DeepSeek released its inference model, dozens of companies have reached out to the startup. "Companies are [now] shifting their spend from training clusters to inference clusters."
Sunny Madra, COO of Groq, a company developing AI inference chips, said in an interview last week that as overall AI demand grows, smaller companies will have more room to expand. "As the world is going to need more tokens [a unit of data that an AI model processes], Nvidia can't supply enough chips to everyone, so it gives opportunities for us to sell into the market even more aggressively."
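For readers unfamiliar with the bracketed term, the short sketch below uses OpenAI's open-source tiktoken library purely to illustrate what a token is; DeepSeek's models use their own tokenizer, so exact counts differ, but the principle is the same: text is split into sub-word units that the model consumes and produces one at a time.

```python
# Minimal sketch, using tiktoken only as an illustration of tokenization;
# this is not DeepSeek's tokenizer, so the exact token counts will differ.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("DeepSeek's R1 model disrupts the AI ecosystem.")
print(len(tokens))         # how many tokens a model would process for this text
print(enc.decode(tokens))  # decoding round-trips back to the original string
```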