AMD Releases Its Latest AI Chip to Challenge Nvidia's Market, So Why Isn't the Market Buying It?
On June 13, local time, Advanced Micro Devices (AMD) held its "AMD Data Center and Artificial Intelligence Technology Premiere," where CEO Lisa Su unveiled the Instinct MI300 series, the fourth-generation EPYC processors, and other AI and data-center products.
The emergence of generative artificial intelligence is pushing data-center computing power to its limits. According to a new report from market research firm Technavio, the market for AI chips is expected to grow at an explosive compound annual growth rate (CAGR) of 61.51% between 2022 and 2027, reaching approximately $210.5 billion in 2027.
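As a back-of-the-envelope check on those figures, the projection can be inverted to see what 2022 base it implies. This is a sketch under an assumed reading of the report (that $210.5 billion is the 2027 market size and 61.51% the annual growth rate); the implied base is not a figure from the article.

```python
# Back-of-the-envelope check of the Technavio projection (assumed reading:
# $210.5B is the 2027 market size, 61.51% is the CAGR over 2022-2027).
cagr = 0.6151          # compound annual growth rate
size_2027 = 210.5      # projected 2027 market size, $ billions
years = 5              # 2022 -> 2027

# Compounding: size_2027 = base * (1 + cagr) ** years, so invert for the base.
implied_2022_base = size_2027 / (1 + cagr) ** years
print(f"Implied 2022 market size: ${implied_2022_base:.1f}B")  # roughly $19.2B
```

A starting point around $19 billion growing to $210 billion in five years is what an annual growth rate of roughly 61% amounts to.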
Nvidia's A100/H100 GPUs currently hold a commanding lead in AI training and machine-learning workloads, and the H100 is also key underlying hardware behind ChatGPT. AMD expects its new chips to take market share from Nvidia's red-hot H100. The products seen as the most credible challenge to Nvidia at the event were the Instinct MI300 series, comprising the MI300A and MI300X.
AMD unveils the high-memory MI300X, but the market isn't buying it: shares fell more than 3%
The MI300A packs 13 chiplets with 146 billion transistors, 24 Zen 4 CPU cores, a CDNA 3 graphics engine, and 128GB of HBM3 memory. It is reportedly the world's first APU accelerator card for AI and HPC (high-performance computing).
The MI300X targets the recently booming field of generative artificial intelligence. The chip reportedly has 153 billion transistors, memory bandwidth of 5.2TB/s, Infinity Fabric bandwidth of 896GB/s, and up to 192GB of HBM3 memory. By contrast, Nvidia's H100 AI chip carries only 80GB of memory. AMD also said the MI300X offers up to 2.4 times the HBM density and 1.6 times the bandwidth of the H100.
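AMD's claimed ratios can be sanity-checked against commonly cited spec-sheet numbers. The H100 SXM figures below (80GB HBM3, 3.35TB/s) are assumptions drawn from Nvidia's published specs, not from the article.

```python
# Sanity-check AMD's claimed 2.4x / 1.6x ratios. The H100 SXM figures
# (80GB, 3.35TB/s) are assumed spec-sheet values, not from the article.
mi300x_hbm_gb, mi300x_bw_tbs = 192, 5.2    # MI300X: HBM3 capacity, bandwidth
h100_hbm_gb, h100_bw_tbs = 80, 3.35        # H100 SXM: HBM3 capacity, bandwidth

capacity_ratio = mi300x_hbm_gb / h100_hbm_gb     # 192 / 80 = 2.4
bandwidth_ratio = mi300x_bw_tbs / h100_bw_tbs    # 5.2 / 3.35 ≈ 1.55

print(f"HBM capacity: {capacity_ratio:.1f}x, bandwidth: {bandwidth_ratio:.1f}x")
```

Against those assumed H100 numbers, the capacity ratio comes out to exactly 2.4 and the bandwidth ratio rounds to 1.6, matching AMD's claims.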
The MI300X accelerator is based on AMD's CDNA 3 architecture and uses up to 192GB of memory to handle workloads for large language models and generative AI. Lisa Su said the powerful memory capacity means AMD's new chip can run larger AI language models than Nvidia's H100: the more memory a chip has, the larger the model it can handle. "We've seen it run faster in customer workloads. We do think it's different," she said, adding that it could help tech companies cope with the rising cost of providing ChatGPT-like services.
AMD said the upcoming chip will begin sampling in the third quarter and enter mass production in the fourth quarter. But unlike past launches, when AMD pitched new chips alongside big-name customers, the company did not say who would adopt the MI300X or MI300A.
AMD's shares have doubled this year and hit a 16-month high early on June 13, but closed down 3.6% after the company presented its AI strategy.
Kevin Krewell, principal analyst at TIRIAS Research, said: "I think Wall Street may be disappointed that no (big customers) have said they will use the MI300A/X. They want AMD to say that they have replaced Nvidia in some design."
The market's reactions stand in stark contrast. Where does AMD fall short of Nvidia?
Compared with AMD's lukewarm reception, Nvidia has been basking in the limelight of late.
On June 13, Nvidia's shares closed at $410.22, up 3.9%, making it the first chipmaker with a market value of over $1 trillion. Earlier, Nvidia had disclosed better-than-expected second-quarter revenue guidance, lifting its stock to a new level. Nvidia's share price has surged 170% this year, and the company dominates the AI computing market with an 80% to 95% share.
Behind Nvidia's relentless share-price run are its standout products.
In addition to the H100 chips already shipping in volume, Nvidia announced at the end of May that its GH200 Grace Hopper superchip is in full production, with the first GH200 systems going to Google Cloud, Meta, and Microsoft to explore its generative AI capabilities. Nvidia has also launched the DGX GH200, a new AI supercomputer built from 256 GH200 Grace Hopper chips. The DGX GH200's memory is reportedly nearly 500 times that of Nvidia's current DGX A100 system, and with 256 GPUs it has 32 times as many as a DGX A100.
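The multiples quoted above can be checked with the DGX A100's baseline specs. The figures for the DGX A100 (8 GPUs, 320GB of GPU memory) and the DGX GH200's 144TB of shared memory are assumed from Nvidia's published specs rather than stated in the article.

```python
# Check the article's DGX GH200 vs DGX A100 multiples. The DGX A100's
# 8 GPUs / 320GB and the GH200's 144TB shared memory are assumed from
# Nvidia's published specs, not stated in the article.
dgx_gh200_gpus, dgx_a100_gpus = 256, 8
dgx_gh200_mem_tb, dgx_a100_mem_tb = 144, 0.320   # 320GB = 0.32TB

print(dgx_gh200_gpus // dgx_a100_gpus)              # 32x the GPU count
print(round(dgx_gh200_mem_tb / dgx_a100_mem_tb))    # 450, i.e. "nearly 500x"
```

Under those assumed baselines, 256/8 gives the 32x GPU figure exactly, and 144TB against 320GB works out to about 450x, consistent with the article's "nearly 500 times."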
Against such a strong opponent, AMD's MI300X looks somewhat underwhelming by comparison.
Although the MI300X carries more memory than the H100, Nvidia will likely offer products with matching memory specifications, so it is hard to say how long the memory advantage will last. And because high-density HBM is expensive, the MI300X has no cost advantage. AMD has not disclosed the chip's price or how it will boost sales.
Some analysts say Nvidia has few large-scale competitors. While Intel and startups such as Cerebras Systems and SambaNova Systems have competing products, Nvidia's biggest sales threat to date comes from the in-house chip efforts of Google and Amazon's cloud divisions, both of which lease their own custom chips to outside developers.
Soumith Chintala, a Meta vice president who helped create open-source AI software, said he has worked closely with AMD to make it easier for AI developers to move from a "single dominant supplier" of AI chips to other products such as AMD's. AMD says it has begun shipping large quantities of a general-purpose CPU chip called "Bergamo" to companies such as Meta, and Meta has confirmed this. Alexis Black Bjorlin, Meta's head of computing infrastructure, said the company has adopted the Bergamo chip, which targets the part of AMD's data-center business geared toward cloud providers and other large chip buyers.
Even with Meta executives' endorsement, some analysts caution that just because a company as sophisticated as Meta can get good performance from AMD's chips does not guarantee wider appeal among less mature buyers.
In a new client note, Citi chip analyst Chris Danely wrote: "While AMD's MI300 chip appears to be a big design win, we question the sustainability of the combined graphics/CPU IC given performance limitations and previous failures. While we expect AMD to continue to gain share against Intel, its Genoa ramp appears to be slower than expected."
Moreover, Nvidia's leadership in AI comes not only from its chips but also from more than a decade of providing software tools to AI researchers and learning to anticipate what they will need in chips that take years to design.
Analysts at Moor Insights & Strategy said: "Even if AMD is competitive on hardware performance, people are still not convinced that its software solutions can compete with Nvidia's."
Huatai Securities has said AMD's challenge to Nvidia's market share will not succeed overnight. On the one hand, the computing-power moat of Nvidia's GPUs and its deep presence in AI training will be hard to shake for some time; on the other, AMD's software ecosystem limits how well its chips integrate with customer systems and penetrate application scenarios.
However ambitious AMD may be, in the short term Nvidia's throne remains hard to shake. Strength decides everything, and investors are nothing if not pragmatic: without their confidence, AMD's shares will naturally struggle to take off.
Disclaimer: The views in this article are from the original author and do not represent the views or position of Hawk Insight. The content of the article is for reference, communication and learning only, and does not constitute investment advice. If it involves copyright issues, please contact us for deletion.