Broadcom, TSMC Jump as OpenAI Close to In-House AI Chip Design, Nvidia Drops
OpenAI's collaboration with Broadcom on a custom AI chip is progressing faster than expected, reducing its reliance on Nvidia's expensive GPUs and offering more tailored solutions.
The ChatGPT creator is finalizing the design for its first in-house AI chip and plans to send it to TSMC for fabrication in the coming months, a stage known as taping out, according to sources speaking to Reuters.
TSMC and Broadcom saw a 2% jump in pre-market trading following the news, while Nvidia's stock dipped into the red.
OpenAI has been working with TSMC and Broadcom on developing the chip since October, with the first unit expected to arrive by 2026. Typically, a tape-out costs tens of millions of dollars and takes around six months to produce a finished chip. If the tape-out fails, it requires a redesign, and the process must be repeated.
The chip is intended for AI model training and is seen as a strategic move to strengthen OpenAI's negotiating leverage with other chip suppliers, Reuters reports. OpenAI also aims to develop increasingly advanced processors with broader capabilities in future iterations.
If the first chip is successful, it could accelerate OpenAI's shift away from Nvidia, with mass production potentially beginning as early as this year. Additionally, companies like Apple, Google, and Amazon are developing AI-specific chips, further challenging Nvidia's dominance in AI training.
The team designing OpenAI's chip is led by Richard Ho, who joined from Alphabet's Google over a year ago and previously contributed to the development of Google's Tensor Processing Units (TPUs).
Reports suggest that TSMC will manufacture the chip on its advanced 3-nanometer process technology, and that the design uses a systolic array architecture with high-bandwidth memory (HBM), similar to Nvidia's chips, along with extensive networking capabilities.
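For readers unfamiliar with the term, a systolic array is a grid of simple multiply-accumulate cells through which operands are pulsed in lockstep, so each value is reused by many cells without repeated trips to memory. The toy Python sketch below illustrates the general scheduling idea only; it is not based on any published details of the OpenAI or Broadcom design.

```python
# Illustrative sketch only: a toy simulation of how a systolic array multiplies
# matrices. It models the time-skewed schedule in which cell (i, j) consumes the
# operand pair indexed by k at step t = i + j + k. Not an actual chip design.

def systolic_matmul(A, B):
    """Multiply two n x n matrices by stepping a grid of multiply-accumulate
    cells through time, the way a systolic array streams data diagonally."""
    n = len(A)
    # Each cell (i, j) accumulates the dot product of row i of A and column j of B.
    acc = [[0] * n for _ in range(n)]
    # Inputs are skewed so that at step t, cell (i, j) sees A[i][k] and B[k][j]
    # with k = t - i - j; the data "pulses" through the grid one step at a time.
    for t in range(3 * n - 2):
        for i in range(n):
            for j in range(n):
                k = t - i - j
                if 0 <= k < n:
                    acc[i][j] += A[i][k] * B[k][j]
    return acc

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(systolic_matmul(A, B))  # [[19, 22], [43, 50]]
```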