
Google picks veteran Amin Vahdat to lead its AI infrastructure push

Google has appointed veteran engineer Amin Vahdat as chief technologist for AI infrastructure, placing him at the center of the company's global AI scale-up. The role comes amid record capital spending and intensifying competition in the AI race. Vahdat has overseen core systems such as TPUs, the Jupiter network and the Borg orchestration system that support Google's largest AI models.

Google appoints longtime engineer Amin Vahdat as chief technologist for AI infrastructure
Updated on: Dec 11, 2025 | 02:19 PM

New Delhi: Google has expanded its leadership structure at a moment when artificial intelligence infrastructure has become one of the most competitive areas in global tech. An internal memo reviewed by Semafor shows that veteran Google engineer Amin Vahdat will take on the role of chief technologist for AI infrastructure. The position places him among a small circle of senior leaders who report directly to CEO Sundar Pichai. For a company preparing for another year of heavy capital spending on data centers and custom AI hardware, the appointment signals a shift in how Google wants to coordinate its rapidly growing AI systems.

It is striking that an engineer who shaped some of Google's foundational systems is now moving into a role at the very center of the AI race. The company expects to spend more than 90 billion US dollars on capital expenditure by the end of 2025, much of it tied to the infrastructure he will oversee.


Google makes AI infrastructure a core priority

In the memo, Google Cloud CEO Thomas Kurian wrote, “This change establishes AI Infrastructure as a key focus area for the company.” That line reflects how Google is elevating what used to be a deeply technical function into a strategic one.

Google has drawn strong praise for its Gemini 3 model in recent months; even the CEO of OpenAI described Google's rise as an emergency for the ChatGPT maker. But the company's edge is not limited to model performance. A major advantage is Google's ability to serve AI at scale across Search, YouTube, Workspace and other services. That efficiency comes from its decade-long investment in custom Tensor Processing Units and the software and hardware stack built around them.

Google DeepMind works closely with the TPU team to optimize chips for Gemini workloads. The company has also developed optical circuit switches, liquid cooling systems and routing tools to maximize throughput. These engineering efforts reduce the cost per operation, which matters when billions of queries run each day.
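To see why per-operation cost dominates at this scale, a toy back-of-envelope calculation helps. The query volume and per-query saving below are illustrative assumptions, not Google figures:

```python
# Toy illustration: even a tiny per-query saving compounds at scale.
# Both numbers are assumptions for illustration, not Google's actual figures.
queries_per_day = 5e9        # assumed daily query volume across products
saving_per_query = 0.0001    # assumed saving of 0.01 US cents per query

annual_saving = queries_per_day * saving_per_query * 365
print(f"${annual_saving:,.0f} per year")  # roughly $182 million under these assumptions
```

A hundredth of a cent per query is invisible in isolation, but under these assumed volumes it is worth on the order of nine figures a year, which is why interconnect and cooling efficiency are treated as strategic work.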

The engineer behind Google’s data center backbone

Amin Vahdat has been part of this journey for more than fifteen years. Before joining Google, he worked in academia and spent time as a research intern at Xerox PARC in the early 1990s. His research focused on networking computers efficiently at large scale, which aligned closely with Google's early challenge of serving an exploding internet.

Google hired him in 2010 to work on optical circuit switching, and over time his responsibilities grew. People familiar with his work say he was mentored by senior leaders Luiz Barroso and Urs Hölzle. In 2022, Vahdat wrote a blog post describing how his team rebuilt the company's Jupiter network, which connects systems inside its data centers. The redesign helped reduce the cost of delivering core products such as YouTube, Search and Cloud.

These upgrades later became essential for modern frontier models which require vast amounts of data to move across thousands of processors. Efficient interconnects remove bottlenecks and enable clusters of computers to act as a single system.

Vahdat has also overseen Borg, Google's job-orchestration system, which schedules millions of tasks across servers. Engineers often compare it to a large-scale puzzle in which pieces must be placed perfectly so that compute resources are used without waste. The reliability of systems like Borg directly influences the economics of AI model training and inference.
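The "puzzle" analogy can be sketched with a toy first-fit placement loop. This is purely illustrative; Borg's real placement logic involves priorities, preemption and many constraints, and every name below is hypothetical:

```python
# Illustrative only: a toy first-fit scheduler in the spirit of the
# bin-packing "puzzle" analogy. Not Borg's actual algorithm.

def first_fit(tasks, machines):
    """Place each task on the first machine with enough free CPU and RAM."""
    placements = {}
    for name, (cpu, ram) in tasks.items():
        for machine, (free_cpu, free_ram) in machines.items():
            if free_cpu >= cpu and free_ram >= ram:
                machines[machine] = (free_cpu - cpu, free_ram - ram)
                placements[name] = machine
                break
        else:
            # No machine fits: capacity exists in fragments but is wasted.
            placements[name] = None
    return placements

machines = {"m1": (4.0, 8.0), "m2": (2.0, 4.0)}   # (free CPUs, free GiB RAM)
tasks = {"web": (2.0, 4.0), "batch": (3.0, 2.0), "cache": (1.0, 4.0)}
print(first_fit(tasks, machines))
# → {'web': 'm1', 'batch': None, 'cache': 'm1'}
```

Note how "batch" goes unscheduled even though 5 total CPUs remain free across the cluster: fragmentation, not raw capacity, is the enemy, which is why real schedulers invest heavily in smarter placement.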

Energy and efficiency remain under scrutiny

In August, Google published a research paper co-authored by Vahdat. One key finding was that running a median prompt on its AI models consumed energy equivalent to watching less than nine seconds of television, and water equivalent to about five drops. Those figures were far lower than estimates from critics, who had expected much heavier usage.
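The television comparison can be sanity-checked with simple arithmetic, assuming the roughly 0.24 Wh median-prompt figure widely reported from that paper and an assumed ~100 W television power draw:

```python
# Back-of-envelope check of the TV comparison. The 0.24 Wh value is the
# median-prompt energy widely reported from Google's August paper; the
# 100 W television power draw is an assumption for illustration.
prompt_wh = 0.24              # reported median energy per prompt, in watt-hours
tv_watts = 100.0              # assumed TV power draw

seconds_of_tv = prompt_wh * 3600 / tv_watts
print(f"{seconds_of_tv:.2f} s of TV")  # ≈ 8.64 s, i.e. "less than nine seconds"
```

Under these assumptions the arithmetic lands at about 8.6 seconds, consistent with the "less than nine seconds" framing in the paper.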

A global challenge that requires coordinated leadership

Hyperscalers, Google included, face the challenge of securing enough power for massive data centers while addressing concerns from local communities that resist new projects. Companies must also calibrate demand forecasts to avoid a scenario in which expensive chips sit unused.

The memo indicates that coordinating these efforts now falls under Vahdat’s new title. For many inside the company, the move shows that Google is strengthening its core engineering identity at a time when AI infrastructure has become a deciding factor in global competition.

Vahdat’s appointment comes as Google prepares for another year of rapid AI expansion. His leadership will shape how the company builds and manages the systems that power its largest models and services.
