In a move closely watched by enterprise technology leaders, Alphabet CEO Sundar Pichai has reaffirmed Google’s commitment to spending $75 billion this year on AI infrastructure and data centers — weeks after Microsoft reportedly abandoned many of its data center projects.
“Enterprise cloud strategies for AI are no longer just about picking a hyperscaler — they’re increasingly about workload sovereignty, GPU availability, latency economics, and AI model hosting rights,” said Sanchit Gogia, CEO and chief analyst at Greyhound Research.
According to Greyhound’s research, 61% of large enterprises now prioritize “AI-specific procurement criteria” when evaluating cloud providers — up from just 24% in 2023. These criteria include model interoperability, fine-tuning costs, and support for open-weight alternatives.
“There’s a gap between what’s being built and what we can use today,” one financial services technology leader told Greyhound Research, highlighting the dissonance between hyperscaler ambitions and enterprise readiness.
“We are entering a new phase of hyperscaler evolution—one where strategies are no longer harmonized around blanket global expansion,” Gogia said. “Google’s infrastructure roadmap appears to follow a global scale-first logic, while Microsoft’s more measured approach reflects a regulatory-aware, enterprise-tethered model.”
As quoted in NetworkWorld.com
Additional comments by Greyhound Research analyst:
Google’s $75 Billion Bet: Advantage or Overreach?
At Greyhound Research, we believe Google’s reaffirmed $75 billion commitment to AI infrastructure in 2025 signals a bold, conviction-driven strategy, but one that walks a fine line between visionary and vulnerable. Investment at this scale undoubtedly positions Google as a cornerstone of AI-native computing, particularly in foundation model training, LLM inference, and custom chip design. But the sustainability of this advantage will depend not just on capacity, but on utilisation.
While Microsoft’s reported pullback on certain U.S. and European data centre expansions suggests recalibration in response to overestimated demand curves, Google appears to be leaning in, aiming to monetise its AI stack across Search, Cloud, and YouTube. However, the risk of overcapacity remains real—particularly if enterprise adoption of GenAI workloads does not match hyperscaler assumptions or if regulators impose new restrictions on energy and water use.
In one recent Greyhound Fieldnote, drawn from discussions with CIOs of global financial institutions, many expressed admiration for Google’s infrastructure ambition, but also questioned the commercial alignment between its AI infrastructure and enterprises’ readiness to consume it. “There’s a gap between what’s being built and what we can use today,” one technology leader told us.
This dissonance underscores the value of Greyhound Research’s Distributed Enterprise Blueprint, which urges organisations to decouple scale from value and instead align infrastructure decisions with distributed usage models across business units and geographies.
This scepticism is mirrored in Greyhound AI Infrastructure Pulse 2025 findings, where only 43% of enterprise respondents said they currently feel “fully ready” to operationalise large-scale AI workloads on public cloud infrastructure. The rest cited architectural mismatches, cost unpredictability, and compliance risks as barriers. In this light, Google’s advantage is long-term—but not yet guaranteed.
Enterprise Procurement in Flux: Rethinking Cloud for AI Workloads
The shift in hyperscaler positioning, with OpenAI exploring self-built infrastructure, Microsoft pausing certain regional builds, and Google going all-in, has created a sense of strategic dissonance for enterprise buyers. At Greyhound Research, we believe this will force CIOs to rethink procurement strategies not only in terms of provider preference, but also in architectural control and deployment flexibility.
Enterprise cloud strategies for AI are no longer just about picking a hyperscaler—they’re increasingly about workload sovereignty, GPU availability, latency economics, and AI model hosting rights. In one Greyhound Fieldnote from our work with a European telco deploying GenAI-powered customer service agents, the CIO noted, “Our infra planning used to be about region and cost. Now it’s about model compatibility and compute transparency.”
As hyperscalers diverge in their infrastructure paths, buyers will become more sensitive to lock-in clauses, GPU orchestration limitations, and multi-cloud portability. This will usher in a new era of AI-centric cloud procurement, where long-term value is determined not by SLAs alone, but by the level of visibility and control CIOs have over inference performance, fine-tuning options, and data locality.
At Greyhound Research, we advise clients to apply our Distributed Enterprise Blueprint to this shift—prioritising architecture that enables distributed intelligence, sovereign AI workloads, and modular deployment across hybrid and multi-cloud environments.
Supporting this, Greyhound Enterprise Cloud Strategy Pulse 2025 reports that 61% of large enterprises now prioritise “AI-specific procurement criteria” when evaluating cloud providers—up from just 24% in 2023. These criteria include model interoperability, fine-tuning costs, AI license modularity, and support for open-weight alternatives. Enterprises are waking up to the fact that the future of cloud isn’t general-purpose—it’s AI-shaped.
The Great Divergence: Hyperscaler Strategies Are No Longer Synchronized
At Greyhound Research, we believe we are entering a new phase of hyperscaler evolution—one where strategies are no longer harmonised around blanket global expansion, but are instead fracturing along lines of geography, energy policy, ecosystem control, and AI monetisation models.
Google’s infrastructure roadmap appears to follow a global scale-first logic—prioritising planetary infrastructure to support its vertically integrated AI ambitions. Meanwhile, Microsoft’s more measured approach, including regional pauses and hints of hybrid cloud prioritisation, reflects a regulatory-aware, enterprise-tethered model. And with OpenAI now reportedly exploring its own infrastructure options, we’re also seeing the rise of specialised, vertically owned compute stacks designed for a single AI tenant.
This divergence is creating downstream consequences for the entire enterprise ecosystem. In one Greyhound Fieldnote, during an advisory session with a manufacturing conglomerate exploring private GenAI for R&D innovation, the CTO flagged that hyperscaler differentiation is now “forcing us to hedge bets across three clouds, and rethink everything from power consumption models to token licensing schemes.”
This shift is borne out in Greyhound Global Infrastructure Strategy Pulse 2025, where 57% of enterprise respondents said they now view cloud infrastructure strategies as “divergent by design”—a reversal from the convergence narrative of 2019–2021. Hyperscalers are no longer building for the same markets or the same models. They are now betting on different futures: full-stack AI integration versus enterprise modularity; planetary scale versus regional trust.
This is precisely the shift addressed in Greyhound Research’s Distributed Enterprise Blueprint, which positions distribution, not just of infrastructure but of decision-making, AI governance, and compute strategy, as the defining architecture of the next decade.

Analyst In Focus: Sanchit Vir Gogia
Sanchit Vir Gogia, or SVG as he is popularly known, is a globally recognised technology analyst, innovation strategist, digital consultant and board advisor. SVG is the Chief Analyst, Founder & CEO of Greyhound Research, a Global, Award-Winning Technology Research, Advisory, Consulting & Education firm. Greyhound Research works closely with global organizations, their CxOs and the Board of Directors on Technology & Digital Transformation decisions. SVG is also the Founder & CEO of The House Of Greyhound, an eclectic venture focusing on interdisciplinary innovation.
Copyright Policy. All content contained on the Greyhound Research website is protected by copyright law and may not be reproduced, distributed, transmitted, displayed, published, or broadcast without the prior written permission of Greyhound Research or, in the case of third-party materials, the prior written consent of the copyright owner of that content. You may not alter, delete, obscure, or conceal any trademark, copyright, or other notice appearing in any Greyhound Research content. We request our readers not to copy Greyhound Research content and not republish or redistribute them (in whole or partially) via emails or republishing them in any media, including websites, newsletters, or intranets. We understand that you may want to share this content with others, so we’ve added tools under each content piece that allow you to share the content. If you have any questions, please get in touch with our Community Relations Team at connect@thofgr.com.