New Delhi: Meta Platforms is exploring a major shift in its artificial intelligence hardware strategy. The company is reportedly in talks to spend billions of dollars on Google’s tensor processing units (TPUs) for future data centre rollouts, signaling rising competition in the AI chip market currently dominated by Nvidia.
The discussions, first reported by The Information, come at a time when demand for compute is soaring and concerns about overreliance on a single supplier are growing across the industry.
Meta is discussing the deployment of Google TPUs in its data centres from 2027, the report stated, citing a source familiar with the deliberations. The company is also evaluating the option to rent TPUs from Google Cloud as early as next year.
The move would offer Meta an alternative to Nvidia GPUs, the preferred processors for developing and running large-scale AI models. It follows Google’s earlier deal to provide up to one million chips to Anthropic, signaling broader interest in Google’s hardware capabilities outside its own products.
Investors see any such shift from a major buyer like Meta as a strong sign that Google’s TPU program is gaining momentum.
The Wall Street Journal noted the growing hardware rivalry between Google and Nvidia, quoting Core Scientific CEO Adam Sullivan, who called it the “biggest story in the AI world right now” as both companies look to secure long-term data centre capacity.
Nvidia shares fell 3 percent on Tuesday after reports suggested Meta may diversify its compute away from the company. The chipmaker responded with a public statement on X.
Nvidia said its technology remains a full generation ahead of competitors and highlighted that its platform can support “every major AI model” across different computing environments. It added, “NVIDIA offers greater performance, versatility, and fungibility than ASICs,” referring to specialized chips such as Google’s TPUs.
The company also stressed that it continues to supply chips to Google as well.
Google first introduced TPUs more than a decade ago to improve performance for internal AI workloads across Search, YouTube and DeepMind. The chips are application-specific integrated circuits optimized for machine learning tasks.
Unlike Nvidia, Google does not sell TPUs directly to customers. Instead, it uses them for its own AI development or offers access to external businesses through Google Cloud infrastructure.
A Google spokesperson said the company is seeing growing demand for TPUs alongside Nvidia GPUs and plans to support both hardware options.
Google’s recent Gemini 3 model was trained on TPUs, a detail that has helped boost confidence in the chips’ capability for large-scale training and inference.
Meta remains one of the largest AI investors globally, continuously expanding infrastructure for its Llama model line and future AI services. Securing a diversified chip supply would give the company more flexibility and protection from shortages.
Nvidia continues to dominate the AI compute market with more than 90 percent share, but analysts say competition is now opening up more visibly. AMD has also been scaling its AI efforts but trails far behind.
A possible Meta-Google chip deal reflects a broader shift toward multi-vendor strategies across major platform companies as AI workloads expand. It also suggests that hardware decisions could influence the pace of innovation as companies push for faster and more efficient training of large models.
The outcome of these talks could have wide implications for global AI infrastructure, pricing dynamics and future hardware standards that power artificial intelligence services used by billions of people.