
OpenAI reportedly working with Broadcom to develop AI inference chips

To meet growing demand, many companies are actively exploring ways to reduce their dependence on Nvidia.

According to media reports, artificial intelligence startup OpenAI is working with Broadcom to develop a new AI chip designed specifically to run trained AI models.

OpenAI is also in preliminary talks with TSMC, the world's largest contract chip manufacturer. People familiar with the matter said that OpenAI has been exploring the development of its own chips for more than a year, but the project is still in its early stages.

Driven by the news, Broadcom shares closed up 4.2% on Tuesday at $179.24; the stock has risen 54% so far this year. TSMC's American Depositary Receipts (ADRs) closed up more than 1% on Tuesday.

OpenAI's development effort is reportedly not focused on the graphics processing units (GPUs) used to train and build generative AI models, a market that Nvidia currently dominates almost entirely. Instead, OpenAI is prioritizing inference chips, which are designed to run trained models and respond to user requests. Several analysts predict that demand for inference chips will keep growing as more technology companies use AI models to handle complex tasks.

According to people familiar with the matter, OpenAI is still exploring the possibility of building its own wafer fab, but it currently believes that developing custom chips with partners is a faster and more practical path.

Broadcom is an industry leader in designing application-specific integrated circuits (ASICs), single-purpose chips tailored to individual customers, with Google as its largest customer. Meta and TikTok parent company ByteDance are also Broadcom customers.

Last month, Broadcom CEO Hock Tan said on an earnings conference call that despite strong demand for AI training, such projects are only officially counted as customer wins once they reach mass production. He said it would not be easy for any customer to deploy such a product, so the company does not treat a proof of concept as mass production.

Developing and operating OpenAI's services requires enormous computing power, much of which relies on Nvidia chips. To meet that growing demand, many companies are actively exploring ways to reduce their dependence on Nvidia: some have expanded cooperation with other chipmakers such as AMD, while technology giants including Google and Meta have chosen to develop their own chips.

In addition, OpenAI is investing in and partnering on data centers to support future AI chip deployments. Company management has told the U.S. government of its need to build larger data centers, and CEO Sam Altman has sought financing from global investors, including some in the Middle East.

Sarah Friar, OpenAI's chief financial officer, said on Monday that this is indeed a challenge from a capital perspective: "Frankly, we are all learning in this area. Infrastructure determines the future."

