Nvidia-Meta Pact Signals New AI Era Of Scale—And Power Constraints

March 31, 2026

Meta taps Nvidia in multibillion-dollar AI deal, scaling data centers for training and inference as power constraints reshape priorities.

Nvidia Corporation and Meta Platforms Inc. have struck a multibillion-dollar partnership to expand AI infrastructure, spanning cloud and on-premises systems as the Facebook parent company ramps up data centers optimized for both training and inference.

AI inference is the real-time process where a trained machine learning model analyzes new data and generates decisions or content. 
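To make the distinction concrete, here is a minimal sketch of what inference means in code. The tiny linear "model" and its parameter values are hypothetical stand-ins for the large neural networks Meta actually runs; the point is that at inference time the trained parameters are fixed and simply applied to new input.

```python
# Minimal sketch of AI inference: a trained model's fixed parameters are
# applied to unseen input data to produce a decision. The linear model here
# is a hypothetical illustration, not Meta's actual architecture.

def inference(weights, bias, features):
    """Apply already-trained parameters to new input (the inference step)."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0  # e.g., a binary decision such as "show this content"

# Training happened earlier, elsewhere; these parameters are now frozen.
trained_weights = [0.8, -0.4, 0.2]
trained_bias = -0.1

new_data = [1.0, 0.5, 0.3]  # input the model has never seen before
print(inference(trained_weights, trained_bias, new_data))  # → 1
```

Training is the expensive one-time process of learning those weights; inference is the ongoing, per-request work, which is why data centers are increasingly optimized for it separately.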

Financial terms of the agreement have not been disclosed, though some analysts estimate it could be worth $50 billion or more. Meta, which also owns Instagram and WhatsApp, recently said it intends to invest up to $135 billion in artificial intelligence this year and plans to devote $600 billion to building 26 data centers and related infrastructure in the US over the next three years.

The multi-generational arrangement involves millions of Nvidia products, including central processing units (CPUs) and graphics processing units (GPUs) that compete directly with the latest-generation chips from AMD and Intel, Nvidia’s main rivals.

Among other hardware, Meta will purchase Nvidia’s current Blackwell GPUs and upcoming Rubin GPUs. The social media behemoth will also be the first company to use Nvidia’s stand-alone Grace and Vera CPUs, designed to “deliver two times the performance” of competing processors. Grace chips are currently in production, while Vera units will arrive in 2027.

Meta will use the hardware “to expand data-center capacity supporting AI training and inference workloads across its platforms,” explains Jeffrey Cooper, AI & Semiconductor Author, and Former Leader at ASML, ABB & GE. “AI growth is now constrained by power, not compute. Efficiency gains like Nvidia’s Grace CPUs directly determine how fast Hyperscalers can scale.” 

Hyperscalers are massive-scale global cloud service providers, such as Meta, Google, Amazon, and Microsoft, engineered for the efficiency needed to serve millions of users.

“My prediction,” concludes Cooper, “is that Hyperscalers will continue locking in multi-year AI chip supply. Additional long-term procurement deals are likely to be announced through 2026, as large platforms expand compute clusters.”
