Nvidia Signs Multiyear AI Chip Deal With Meta, Expands Push Into Data Center CPUs
Nvidia has secured a multiyear agreement to supply Meta with millions of artificial intelligence chips, deepening its role in powering large-scale AI infrastructure.
Meta has entered into a multiyear agreement with Nvidia to purchase millions of current and next-generation AI chips for its expanding data center operations, according to reports.
The deal highlights continued demand for advanced computing hardware as major technology companies accelerate artificial intelligence investments.
Nvidia’s Strategic CPU Assault
According to Reuters, the agreement includes Nvidia’s current Blackwell AI chips and the upcoming Rubin generation, alongside standalone deployments of Grace and next-gen Vera CPUs.
While financial terms remain undisclosed, analysts estimate the total value of the deal at up to $50 billion. The expansion into CPUs signals a direct assault on the core markets of Intel and AMD, moving Nvidia beyond its traditional GPU dominance.
Ian Buck, Nvidia’s VP of Hyperscale, noted that Grace processors deliver significant power efficiency for database tasks, with Vera expected to push these gains further. Meta has already reported “promising” early test results for Vera workloads, he added.
However, Reuters reports that Meta continues to hedge its bets, developing in-house AI silicon and exploring Google’s Tensor Processing Units (TPUs) as potential alternatives.
Driving Efficiency in Meta’s Multibillion-Dollar Infrastructure
Further details from The Verge indicate that the agreement marks the first large-scale deployment of Nvidia’s Grace CPUs as standalone processors inside Meta’s data centers. Nvidia says energy efficiency is a key factor as companies work to manage the growing power demands of generative AI infrastructure. The deal also includes plans to integrate Vera CPUs into Meta’s facilities beginning in 2027.
This spending spree is part of a massive, industry-wide wave of investment. Major tech players are currently pushing AI budgets to record highs, fueling an intense race to build the next generation of superintelligence. It’s a high-stakes balancing act: firms are committing billions to secure their lead while simultaneously battling hardware shortages and fierce competitive pressure.
Why This Matters: The Battle for the Data Center
This agreement cements Nvidia’s grip on global AI infrastructure at a time when demand for high-end silicon still far outstrips supply. By locking in a multiyear commitment from Meta, Nvidia isn’t just selling GPUs; it’s invading the data center CPU market.
For Meta, the deal signals that even as the company develops its own silicon, Nvidia’s ecosystem remains the “gold standard” for speed and scale. In the current “AI Reality Check” era, access to these chips has become the ultimate competitive weapon. Controlling the hardware is no longer just a technical detail; it is the defining factor for who leads the next phase of the AI revolution.
Beyond the Silicon
Looking ahead, this partnership sets the stage for “Personal Superintelligence” by 2028. As Meta integrates Vera CPUs and Rubin GPUs into its massive “Hyperion” superfactories, the focus will shift from simply building models to making them hyper-efficient.
The future belongs to this hybrid backbone, where specialized standalone CPUs handle the world’s data while proprietary silicon and Nvidia hardware run the world’s AI agents.
It is a world where the race for intelligence is defined not just by who has the most chips, but by who uses them with the greatest precision.