Nvidia-backed AI cloud provider CoreWeave is acquiring crypto miner Core Scientific for about $9 billion, giving it access to 1.3 gigawatts of contracted power to support growing demand for AI and high-performance computing (HPC) workloads.
“Crypto facilities bring power and space, but not always enterprise assurance,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “While high-density cooling and electrical capacity make them attractive for AI workloads, their design DNA – batch processing, ASIC thermal loads, limited telemetry – doesn’t translate seamlessly into AI inference or real-time model operations.”
As quoted on NetworkWorld.com in an article by Prasanth Aby Thomas, published on July 8, 2025.
Beyond the Media Quote: Our View, In Full
Pressed for time? You can focus solely on the Greyhound Flashpoints that follow. Each one distills the full analysis into a sharp, executive-ready takeaway, combining our official Standpoint with Pulse data from ongoing CXO trackers and grounding it in Fieldnotes from real-world advisory engagements.
AI Infrastructure Firms Are Reclaiming Power—Literally
Greyhound Flashpoint — CoreWeave’s approximately $9 billion acquisition of Core Scientific marks a definitive pivot in AI infrastructure strategy: from leasing compute to owning power. Per Greyhound CIO Pulse 2025, 82% of Fortune 500 CIOs in the U.S. cite “energy availability and predictability” as the top determinant of AI workload placement. This deal allows CoreWeave to bypass traditional colocation chokepoints and anchor its infrastructure on a 1.3 GW power-rich foundation spanning Texas, Oklahoma, and North Dakota. With hyperscalers scrambling to secure clean energy permits and land for new builds, CoreWeave is effectively buying its way into energy sovereignty, a competitive edge that CIOs must now account for in their multi-cloud and colocation strategies.
Greyhound Standpoint — According to Greyhound Research, CoreWeave’s vertical integration of power, land, and compute is a calculated response to the structural limitations of hyperscale cloud. This move allows CoreWeave to eliminate over $10 billion in future lease liabilities, reduce GPU delivery delays, and improve service-level transparency—directly addressing what enterprise buyers have flagged as chronic cloud supply chain risks. More critically, it creates a precedent where control over utility-scale energy becomes synonymous with leadership in AI infrastructure. CIOs must now recalibrate data centre strategies not just around Tier certifications or redundancy, but around verified power contracts, brownout protections, and energy price hedging mechanisms. Enterprise RFPs that once focused on SLAs and cost tiers must evolve to include energy mix disclosures, location-specific grid stress profiles, and capital stack transparency.
Greyhound Pulse — The Greyhound CIO Pulse 2025 shows 64% of global CIOs are actively reassessing colocation partnerships in light of GPU supply constraints and regional power instability. Of these, 71% report that energy procurement guarantees—such as priority access to substation capacity and municipal permits—have overtaken even cost per kWh as primary decision criteria. Interestingly, sectors such as autonomous manufacturing and AI-driven biotech are leading this rethink, citing performance degradation during high grid stress windows as a material business risk. Additionally, 43% of surveyed CIOs flagged that their legal and procurement teams are now involved earlier in AI infrastructure deals—tasked with evaluating energy contract clauses, not just TCO spreadsheets. This signals a strategic convergence: energy literacy is becoming a must-have competency for enterprise infrastructure teams.
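To make the shift in decision criteria concrete, the sketch below shows how an infrastructure team might weight energy procurement guarantees above cost per kWh when scoring candidate colocation sites. The criteria, weights, and site figures are illustrative assumptions, not data from the Pulse survey.

```python
# Illustrative colocation-site scoring sketch. Criteria and weights are
# hypothetical; they reflect the trend described above, where energy
# procurement guarantees now outrank cost per kWh as selection criteria.

# Weights sum to 1.0; energy guarantees carry the largest share.
WEIGHTS = {
    "energy_guarantees": 0.40,   # priority substation access, municipal permits
    "grid_stability": 0.25,      # historical brownout / grid-stress record
    "cost_per_kwh": 0.20,        # still relevant, but no longer dominant
    "latency_to_users": 0.15,
}

def score_site(site: dict) -> float:
    """Weighted score; each criterion is pre-normalised to 0..1 (1 = best)."""
    return sum(WEIGHTS[k] * site[k] for k in WEIGHTS)

sites = {
    "site_a": {"energy_guarantees": 0.9, "grid_stability": 0.8,
               "cost_per_kwh": 0.5, "latency_to_users": 0.6},
    "site_b": {"energy_guarantees": 0.4, "grid_stability": 0.6,
               "cost_per_kwh": 0.95, "latency_to_users": 0.8},
}

best = max(sites, key=lambda name: score_site(sites[name]))
print(best, round(score_site(sites[best]), 3))  # site_a 0.75
```

Under these assumed weights, the cheaper site loses to the site with stronger energy guarantees, which is precisely the reprioritisation the Pulse data describes.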
Greyhound Fieldnote — Per a recent Greyhound Fieldnote from a Fortune 500 pharmaceutical company headquartered in Illinois, the CIO was forced to stall an AI-led molecular simulation programme after two consecutive GPU lease failures from a hyperscaler due to unexpected power constraints in the host region. After an internal risk audit, the firm shifted its AI architecture to a hybrid model using a privately brokered agreement with a GPU hosting provider operating out of a converted crypto-mining site in Oklahoma. While performance improved and queue depth dropped, the transition revealed gaps in thermal zoning, air handling, and hardware telemetry—requiring $3 million in custom instrumentation upgrades. This real-world friction reinforces the growing enterprise concern: power-first infrastructure may solve availability but introduces complexity CIOs must be ready to govern.
Crypto Mining Facilities Are Being Recast as AI Supernodes—With Caveats
Greyhound Flashpoint — CoreWeave’s transformation of Core Scientific from a distressed Bitcoin miner to an AI data centre powerhouse is emblematic of a larger shift: crypto infrastructure is being reborn as the substrate of enterprise AI. However, 59% of CIOs in North America remain cautious, warning that such retrofits often carry reliability and compatibility debt (Greyhound CIO Pulse 2025). These sites were built for proof-of-work algorithms, not latency-sensitive inference workloads. While they offer ready-to-use megawatts, they require extensive retrofitting to meet enterprise-grade standards for GPU deployment, telemetry, and uptime. The future of AI infrastructure may be built on crypto’s ashes—but not without complexity.
Greyhound Standpoint — According to Greyhound Research, the pivot from mining coins to serving compute underscores a necessary but risky repurposing trend. Crypto facilities bring power and space, but not always enterprise assurance. While high-density cooling and electrical capacity make them attractive for AI workloads, their design DNA—batch processing, ASIC thermal loads, limited telemetry—doesn’t translate seamlessly into AI inference or real-time model operations. CIOs must perform rigorous due diligence on airflow schematics, environmental zoning, and remote monitoring capabilities before committing production AI pipelines to these sites. More importantly, such facilities should be treated as differentiated workload zones—ideal for batch training or test environments but unsuitable as the backbone for regulated or mission-critical inference unless comprehensively overhauled.
Greyhound Pulse — The Greyhound CIO Pulse 2025 reveals that among enterprises evaluating crypto-to-AI transitions, only 27% proceeded to production-scale rollouts. Of those, over 40% encountered thermal anomalies, unplanned brownouts, or security control gaps that necessitated costly retrofit cycles. In sectors such as fintech and insurance, these risks translated to delayed model validation and compliance audit failures. CIOs leading these transitions report median retrofit costs of 28–35% above initial projections, driven by new thermal loop designs, hardened surveillance infrastructure, and firmware-level GPU orchestration modules. While the economic promise of converting stranded mining infrastructure is compelling, the conversion from crypto to AI is neither linear nor low-friction.
Greyhound Fieldnote — Per a recent Greyhound Fieldnote from a Singapore-based online education provider, the CTO greenlit a testbed for multilingual model training inside a decommissioned crypto mining site repurposed for AI colocation. While initial benchmarks showed 3x throughput gains compared to their previous cloud instance, the facility suffered an unplanned thermal shutdown during the third training cycle. A post-mortem revealed that the site lacked dynamic airflow zoning and had not accounted for variable GPU power draws. As a result, the firm paused expansion and reverted to cloud fallback until a full site audit and cooling system redesign could be completed. This case illustrates a common pattern in enterprise AI scaling—opportunistic gains from repurposed infrastructure often come with hidden reliability trade-offs.
AI Compute Consolidation Is Redefining Access, Pricing—and Risk
Greyhound Flashpoint — CoreWeave’s merger with Core Scientific brings compute, power, and hosting under one vertically integrated roof—eliminating $10B in lease costs and consolidating GPU access at a time of global shortage. However, 68% of global CIOs fear this trend of AI infrastructure centralisation could exacerbate pricing volatility and vendor lock-in (Greyhound CIO Pulse 2025). As AI-native players like CoreWeave gain direct control over GPU supply chains and power capacity, the landscape of enterprise compute access is shifting from abundance to gated access—with price premiums increasingly tied to priority queues, energy provisioning, and contract length. This is no longer a cloud market—it’s an AI infrastructure oligopoly in the making.
Greyhound Standpoint — According to Greyhound Research, this acquisition not only expands CoreWeave’s capacity, it reshapes the enterprise AI procurement calculus. CIOs must now navigate a vendor landscape where fewer firms control greater portions of the GPU lifecycle—from silicon access to power provisioning, from orchestration software to rack-level SLA control. With CoreWeave’s investor-backed relationship with Nvidia and now 1.3 GW of owned hosting capacity, the firm can bundle GPU access with favourable energy contracts and queue guarantees—creating de facto tiered pricing for compute. For enterprise buyers, this means less room for negotiation, greater exposure to preferential treatment policies, and longer lock-in periods. CIOs must embed financial stress testing, GPU queue analytics, and dual-sourcing clauses into AI sourcing strategies or risk becoming dependent on a vertically integrated, AI-only cloud cartel.
Greyhound Pulse — The Greyhound CIO Pulse 2025 indicates that 73% of CIOs globally are forming AI Infrastructure Councils—cross-functional teams with legal, finance, and engineering participation—explicitly tasked with governing AI vendor engagement models. Among these, 54% have rewritten their AI cloud contracts to include GPU queue depth disclosures, energy uptime reporting, and failover contingency clauses. Notably, 46% of CIOs in sectors like pharma, gaming, and autonomous logistics now view GPU availability as a board-level KPI, not just an IT metric. With CoreWeave absorbing both compute and capacity, enterprise teams are elevating procurement to the same level of oversight once reserved for critical ERP and financial systems—an unmistakable sign that AI infrastructure is no longer just technical plumbing but enterprise risk capital.
Greyhound Fieldnote — Per a recent Greyhound Fieldnote from a global gaming firm headquartered in Europe, the CIO was forced to delay a next-gen graphics engine rollout after two months of unpredictability in GPU reservation confirmations from a single vendor. An internal review revealed that resource allocations had been reprioritised for larger enterprise clients—despite prior commitments. In response, the firm deployed a GPU diversification policy that split high-density training jobs across two secondary colocation providers and retained a burst-capable in-house cluster of A100s for inference. While this model increased infrastructure overhead by 11%, it delivered near-zero latency and 99.95% uptime for real-time workloads. This episode reflects a growing trend: enterprises are willing to absorb extra costs to reduce compute dependency risk and preserve strategic autonomy in the age of AI consolidation.
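The diversification policy described in this Fieldnote can be sketched as a simple workload router: training jobs spill across the two secondary colocation providers in order, while latency-sensitive inference stays pinned to the in-house cluster. Provider names, capacities, and the failover order below are hypothetical, offered only to illustrate the pattern.

```python
# Hypothetical sketch of a GPU diversification policy: high-density training
# jobs are split across two secondary colocation providers, while an in-house
# burst cluster is reserved for inference. Names and capacities are
# illustrative, not taken from the Fieldnote.

from dataclasses import dataclass, field

@dataclass
class Provider:
    name: str
    free_gpus: int
    roles: set = field(default_factory=set)  # {"training"} and/or {"inference"}

def place_job(job_type: str, gpus_needed: int, providers: list):
    """Route a job to the first eligible provider with capacity; None if full."""
    for p in providers:
        if job_type in p.roles and p.free_gpus >= gpus_needed:
            p.free_gpus -= gpus_needed
            return p.name
    return None  # no capacity anywhere: queue the job or escalate procurement

fleet = [
    Provider("colo_west", 64, {"training"}),     # secondary colo provider 1
    Provider("colo_east", 48, {"training"}),     # secondary colo provider 2
    Provider("inhouse_a100", 16, {"inference"}), # burst-capable in-house cluster
]

print(place_job("training", 40, fleet))   # lands on colo_west
print(place_job("training", 40, fleet))   # colo_west now full -> colo_east
print(place_job("inference", 8, fleet))   # pinned to the in-house cluster
```

The design choice mirrored here is the trade-off the Fieldnote reports: carrying spare capacity across multiple providers raises overhead, but it keeps any single vendor's reprioritisation from stalling the whole pipeline.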

Analyst In Focus: Sanchit Vir Gogia
Sanchit Vir Gogia, or SVG as he is popularly known, is a globally recognised technology analyst, innovation strategist, digital consultant and board advisor. SVG is the Chief Analyst, Founder & CEO of Greyhound Research, a global, award-winning technology research, advisory, consulting and education firm. Greyhound Research works closely with global organisations, their CxOs and their Boards of Directors on technology and digital transformation decisions. SVG is also the Founder & CEO of The House Of Greyhound, an eclectic venture focusing on interdisciplinary innovation.