HawkInsight


UBS: NVIDIA's Revenue From Data Center Is Expected To Double By 2028


This week, NVIDIA (NVDA.US) held its GTC 2025 annual conference. In a report on the third day of the conference, UBS said NVIDIA's data center revenue is expected to at least double from 2025 to 2028, and highlighted remaining bright spots in data center capital expenditures. The report also covered the development of AI agents, Samsung's HBM4 technology, advances in humanoid robots, and the advantages of liquid cooling solutions.

UBS noted that although NVIDIA provided no specific data center capital expenditure forecasts, the conclusion drawn from the company's comments during the conference is that it expects its data center revenue to at least double from 2025 (UBS estimates $215 billion, versus a Wall Street consensus of $180 billion) to 2028, implying earnings per share of around $12 over that period.
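As a quick sanity check on the growth math, "at least doubling" over the three years from 2025 to 2028 implies a compound annual growth rate of roughly 26%. The sketch below uses UBS's $215 billion 2025 estimate cited in the report; the doubling target and the code itself are illustrative, not figures published by UBS.

```python
# Illustrative only: implied compound annual growth rate (CAGR) if data
# center revenue doubles over three years. The $215B base is UBS's 2025
# estimate from the article; the 2x target reflects the report's
# "at least double" scenario, so this is a lower bound on growth.

base_2025 = 215e9            # UBS 2025 revenue estimate, in dollars
target_2028 = 2 * base_2025  # "at least double" by 2028
years = 3                    # 2025 -> 2028

cagr = (target_2028 / base_2025) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~26.0%
```

Any outcome above the doubling scenario would, of course, imply a correspondingly higher growth rate.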

The firm also emphasized that many metrics for data center capital expenditures often overlook some of the largest clusters currently under development, including those of xAI, OpenAI, sovereign entities, and many neocloud companies.

Other highlights include: 1) NVIDIA once again pushed back in the debate over compute intensity, strongly reiterating that improvements in reasoning models actually drive compute intensity higher, since the machines must reason through problems themselves and require fast inference; 2) significant progress has been made in running these new models on cheaper hardware, but in many cases NVIDIA believes this is not the most economical approach, given the trade-offs in speed and performance; 3) NVIDIA emphasized that it is in the infrastructure business and is the only realistic choice for customers planning large-scale deployments in advance (as compared with self-developed ASIC chips).

With the participation of Randy Abrams, Head of Semiconductor Equity Research on UBS's Asia Technology Team, the report highlighted a very popular panel on AI in which executives from Meta (META.US), Microsoft (MSFT.US), ServiceNow (NOW.US), and Accenture (ACN.US) emphasized how AI agents can now build their own software workflows rather than following scripts and plans written in advance by SaaS providers. UBS also noted that, continuing the thread from the Micron Technology (MU.US) panel, Samsung introduced its SOCAMM platform and revealed some HBM4 specifications, including the use of HCB to reduce stack height by about 33% and lower thermal resistance by about 20%, both relative to TCB.

The rise of humanoid robots once again underscored the popularity of NVIDIA's software stack: the introduction of GR00T signals the accelerating development of humanoid robots while serving as a computational platform for physical AI. The Super Micro Computer (SMCI.US) session on liquid cooling for AI data centers highlighted that, over a multi-year horizon, deploying liquid cooling (versus air cooling) offers TCO advantages (including lower upfront capital expenditures) and significantly increases compute density.
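The TCO claim can be made concrete with a toy model. All figures below are hypothetical, invented purely to illustrate how higher rack density can translate into lower capital and operating cost per megawatt of IT load; none of the numbers come from the UBS report or the Super Micro session.

```python
# Hypothetical per-megawatt TCO comparison of air vs. liquid cooling.
# Every dollar figure here is made up for illustration; the article only
# asserts that liquid cooling wins on multi-year TCO (including upfront
# capex) while substantially raising compute density.

def tco_per_mw(capex, annual_opex, years=5):
    """Simple undiscounted total cost of ownership per MW of IT load."""
    return capex + annual_opex * years

# Liquid cooling's higher rack density can mean fewer racks and less
# facility buildout per MW (lower capex), plus lower cooling power (opex).
air = tco_per_mw(capex=8_000_000, annual_opex=1_500_000)     # $15.5M
liquid = tco_per_mw(capex=7_000_000, annual_opex=1_000_000)  # $12.0M

print(f"Air-cooled 5-yr TCO per MW:    ${air:,}")
print(f"Liquid-cooled 5-yr TCO per MW: ${liquid:,}")
```

The direction of the comparison, not the specific dollar amounts, is the point: whether liquid cooling actually wins depends on real capex, power prices, and the deployment horizon.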

Disclaimer: The views in this article are from the original Creator and do not represent the views or position of Hawk Insight. The content of the article is for reference, communication and learning only, and does not constitute investment advice. If it involves copyright issues, please contact us for deletion.