IBM’s Q2 2025 Results – Buyer Confidence Grows, But Field-Level Complexity Persists




IBM closed Q2 2025 with $17.0 billion in revenue, up 8% year-on-year and 5% in constant currency—a result that beat Wall Street expectations on every major metric: revenue, profit, and free cash flow. The company raised its full-year free cash flow guidance to above $13.5 billion and reaffirmed its target of 5%+ revenue growth at constant currency.

And yet, IBM’s stock fell 7.6% after the results—the steepest single-day decline in over a year. That drop wasn’t about fundamentals. It was about expectations. Investors are no longer reacting to beat-or-miss headlines. They’re asking a harder question: Is this growth scalable and strategic, or episodic and hardware-led?

The Software segment is where this tension is most visible. IBM reported $7.39 billion in Software revenue—strong on the surface, but about $40 million short of consensus. That miss, though modest, came despite double-digit growth in Red Hat and Automation. The drag? Transaction Processing, which declined 2% in constant currency despite a blockbuster z17 mainframe cycle. It signalled a gap: IBM can sell hardware, but can it monetise the software that’s supposed to ride on top?

Meanwhile, Adjusted EBITDA rose 16%, and gross margin expanded by 230 basis points to 60.1%. IBM attributes this to a richer mix of AI-linked software and consulting deals—and internal productivity gains from 70+ AI workflows automated under Client Zero. This matters. Unlike AI theatre, Client Zero is AI applied to IBM’s own operations, delivering $3.5 billion in run-rate savings. For CIOs, it’s a template: AI not as a pilot, but as margin.

And that’s the deeper story this quarter. IBM is executing—but not theatrically. There are no splashy consumer GPT demos or trillion-token models here. Instead, IBM is building out a policy-aware, AI-integrated, enterprise-grade stack—across infrastructure, automation, and services. The market isn’t punishing IBM for lack of ambition. It’s pushing IBM to prove that its AI momentum isn’t just landing big deals, but creating durable, software-led ARR.

Greyhound Standpoint – For enterprise buyers, the takeaway is simple: this is no longer the IBM of 2015, chasing cloud relevance. Nor is it the IBM of 2020, over-indexed on Red Hat. This is a company now operating at the intersection of hybrid cloud, secure AI, and embedded orchestration. The financials say IBM is stable. The backlog says it’s winning new clients. The AI productivity gains say it’s learning. But the open question—as investors reminded us—is whether that adds up to a sustained inflection, or another slow build.

IBM’s regional performance in Q2 2025 reflected a familiar pattern: strength in the Americas and EMEA, and persistent drag in APAC. But beneath the top-line numbers, the story was more nuanced—and increasingly relevant for enterprise buyers calibrating their vendor risk by region.

In the Americas, IBM reported 7% year-on-year revenue growth for Q2 2025. According to IBM, this region accounted for over half of the company’s incremental revenue growth in the quarter. The company highlighted strong demand for IBM Z systems in U.S. federal and financial services, with continued uptake of Red Hat and Consulting services across Canada and Brazil.

We at Greyhound Research believe this performance reflects IBM’s ability to execute full-stack engagement across its most mature client base. The strength in the Americas is not simply transactional—it points to deeper client entrenchment where infrastructure, software, and services are being bought as a system. For CIOs in North America, IBM is increasingly being perceived not just as a supplier, but as a platform partner embedded in long-horizon technology strategy.

In Europe, Middle East, and Africa (EMEA), IBM reported 8% year-on-year growth. The company attributed this to strong IBM Z upgrades in Germany and the UK, as well as a recovering Red Hat pipeline across France, the Nordics, and Southern Europe. IBM also disclosed that some Consulting renewals in the region were delayed due to extended procurement cycles and pricing renegotiations, particularly in Spain and Italy.

We at Greyhound Research believe that IBM’s AI governance stack—particularly watsonx.governance—is gaining meaningful traction in EMEA, especially in public sector accounts across Scandinavia. In these markets, regulatory posture is not optional—it’s a buying determinant. IBM’s early wins here position it well in a region that demands explainability, audit trails, and lifecycle policy enforcement. However, the delays in Consulting renewals reveal ongoing friction in aligning IBM’s pricing and GTM approach with complex government procurement cycles.

In Asia-Pacific (APAC), IBM posted a 3% revenue decline at constant currency. The company cited broad-based softness due to delayed infrastructure procurement in Japan and slower-than-expected Consulting renewal momentum in Australia and South Korea. At the same time, IBM pointed to early signs of Power11 adoption across ASEAN—particularly in SAP RISE transformation projects—and flagged increased AI interest from telecom clients across the region.

We at Greyhound Research believe APAC remains IBM’s most uneven market—marked by promising demand signals but persistent execution drag. Buyers in countries like Singapore and South Korea are beginning to explore IBM’s stack for edge-native AI inferencing use cases, especially in telecom. However, momentum still hinges on IBM’s ability to unify regional GTM and simplify post-sale ownership. In its current state, the APAC pipeline is less a flywheel and more a collection of starts and stops—promising, but far from predictable.

Greyhound Standpoint – For global CIOs, the takeaway is this: IBM’s growth is increasingly regionalised, and execution consistency varies. While the Americas and parts of Europe show strong platform pull-through, APAC remains a market where IBM is still rebuilding pipeline credibility. Buyers should factor this into implementation timelines and support expectations—especially for cross-regional rollouts or partner-led delivery.

IBM’s Q2 results revealed a top line that looks uniformly healthy—10% Software growth, 14% Infrastructure growth, and 3% growth in Consulting—but underneath that surface, the drivers and drag factors couldn’t be more different.

IBM reported Software revenue of $7.4 billion, representing an 8% year-on-year increase at constant currency. According to IBM’s earnings call, Red Hat grew 14% during the quarter, led by strong adoption of OpenShift and virtualization offerings. IBM noted that virtualization bookings exceeded $300 million, and that OpenShift’s annual recurring revenue (ARR) surpassed $1.7 billion, marking continued strength in hybrid cloud platforms.

We at Greyhound Research believe that this growth reflects increasing stickiness of IBM’s hybrid Kubernetes architecture within large enterprise workloads. The expansion of OpenShift’s ARR suggests that IBM is now successfully embedding Red Hat deeper into multi-cloud automation workflows. Furthermore, we are hearing from CIOs and infrastructure buyers that Red Hat Virtualization is being considered as a strategic hedge against VMware, especially in light of the architectural and licensing uncertainty following Broadcom’s acquisition. While IBM has not made this comparison directly, buyer behaviour indicates that Red Hat’s virtualization stack is quietly filling gaps left by incumbents.

IBM reported that its Automation segment grew 14% year-on-year, matching the growth rate of Red Hat. IBM credited this momentum to early traction from its HashiCorp integration, which contributed significantly to new deal wins during the quarter. The company noted that Terraform and Vault, now being bundled with Red Hat Ansible, helped drive a twofold increase in automation bookings during Q2, and a threefold increase in pipeline volume for the second half of the year.

We at Greyhound Research believe this performance reflects more than just portfolio uplift—it signals that IBM is assembling a modern hybrid automation control plane. With HashiCorp’s declarative tooling layered into Red Hat’s automation stack, IBM is positioning itself to challenge fragmented, best-of-breed tooling with a tightly integrated, platform-first alternative. This will matter to CIOs looking to rationalise automation across infrastructure, security, and orchestration domains—especially in multi-cloud environments where consistency and policy control are top of mind.
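To make the control-plane idea concrete, below is a minimal, illustrative sketch of the secrets-to-automation handoff such a stack implies: a script pulls a credential from HashiCorp Vault (via the open-source hvac Python client) and feeds it into an Ansible playbook run. The hostnames, secret paths, playbook name, and variable names are assumptions for illustration only—they do not reflect IBM’s packaging or any specific product integration.

```python
# Minimal sketch: fetch a credential from HashiCorp Vault and hand it to an
# Ansible playbook run. All hostnames, paths, and names are illustrative.
import os
import subprocess

import hvac  # open-source HashiCorp Vault client for Python

# Authenticate to Vault using a token provided by the platform team.
vault = hvac.Client(
    url="https://vault.example.internal:8200",
    token=os.environ["VAULT_TOKEN"],
)

# Read a credential from the KV v2 secrets engine.
secret = vault.secrets.kv.v2.read_secret_version(path="automation/db")
db_password = secret["data"]["data"]["password"]

# Simplified handoff for illustration: pass the credential to an Ansible
# playbook at execution time rather than storing it in the playbook itself.
subprocess.run(
    [
        "ansible-playbook",
        "provision_app.yml",
        "--extra-vars",
        f"db_password={db_password}",
    ],
    check=True,
)
```

The design point of this pattern is that credentials and rotation policy stay in Vault, while the automation layer consumes them only at execution time—exactly the kind of separation a unified control plane is meant to enforce.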

IBM reported that its Data sub-segment grew 7% year-on-year in Q2, driven by ongoing adoption of watsonx.data and governance tooling. The company pointed to increased traction in AI governance, as well as deeper integration of data foundation services under the watsonx stack. However, growth in the broader Software segment was held back by Transaction Processing, which IBM said declined 2% at constant currency.

Per IBM, Transaction Processing revenue typically lags IBM Z hardware deployments by one to two quarters, and a rebound is expected as z17 capacity expansions convert into software renewals.

We at Greyhound Research believe this continued divergence between infrastructure sales and Transaction Processing monetisation presents a strategic friction for IBM’s software narrative. While the company is right to point out historical lag patterns, today’s buyer behavior is different—modernisation timelines are longer, compliance layers are heavier, and many Z customers are deferring Transaction Processing upgrades until GenAI use cases are fully validated. The market isn’t questioning whether Transaction Processing will recover. It’s questioning whether Transaction Processing will evolve.

IBM reported that its Infrastructure revenue reached $4.1 billion in Q2, up 11% year-on-year at constant currency, with growth driven almost entirely by the launch of the IBM z17 platform. IBM confirmed that IBM Z revenue surged 67%, boosted by strong client demand and continued adoption across regulated sectors. The company highlighted that its Telum II processor is now executing over 450 billion AI inferences per day, with sub-millisecond latency—a performance metric designed to support transactional AI use cases in real time.

IBM also announced the upcoming Spyre accelerator, designed to run watsonx Assistant and watsonx Code Assistant for Z natively on the mainframe. This, the company says, will further integrate GenAI capabilities into the IBM Z platform, closing the loop between model execution and transactional workloads.

We at Greyhound Research believe this evolution signals a broader shift in how the mainframe is being positioned—not as a legacy compute platform, but as a native AI inference layer embedded at the heart of regulated workflows. In industries like banking, insurance, and telecom—where latency, compliance, and data gravity matter—this ability to execute AI models on-platform, without GPU offloading or architectural handoffs, is fast becoming a differentiator. If IBM continues to simplify development and orchestration on Z, it could move from niche relevance to architectural centrality in hybrid AI stacks.

IBM reported that Hybrid Infrastructure revenue grew 19% in Q2 2025, while Distributed Infrastructure declined 17%, a drop attributed to timing around the launch of Power11. In its earnings call, IBM cited early signs of customer momentum for Power11, particularly in SAP RISE-certified workloads and in AI workloads requiring container-native inferencing. The company reiterated that Power11, delivered via PowerVS, is currently the only non-x86 platform offered by a hyperscaler that is SAP RISE-certified—a positioning that IBM believes will help expand its relevance in performance-intensive cloud environments.

We at Greyhound Research believe this capability makes Power11 more than a niche refresh—it’s a signal that IBM is serious about carving a place in the hybrid-native infrastructure stack. For enterprises navigating cost, latency, and compliance trade-offs across on-prem and cloud landscapes, Power11 presents a differentiated alternative to commodity x86 and hyperscaler lock-in. Especially for SAP-intensive clients or those pursuing AI adjacent to ERP, this could be a strategic wedge—if IBM can deliver the ecosystem maturity and operational support to match.

IBM reported that Consulting revenue reached $5.3 billion in Q2, roughly flat year-on-year at constant currency. Within the segment, Intelligent Operations increased by 2%, while Strategy and Technology declined by 2%, according to the company’s prepared remarks. IBM noted that Consulting bookings fell 18% year-on-year, but was quick to point out a 13% rise in net new client logos and the addition of more than 200 new clients in the first half of the year. The most notable metric: over 20% of Q2 signings were GenAI-led, and generative AI work now accounts for 10% of total Consulting revenue and 17% of its backlog.

We at Greyhound Research believe this mix shift is revealing. The softness in Strategy and Technology suggests that clients are still cautious about committing to large-scale transformation programs. Budget holders are leaning into AI projects with faster ROI, often tied to operational automation and customer service enhancements. The Consulting business is clearly adapting—GenAI-led engagements now carry higher margins and shorter cycles, and IBM appears to be using these as a wedge to build broader client relationships. But to sustain this shift, IBM will need to scale delivery quality and reference wins faster than it has in past waves of Consulting reinvention. The momentum is real. The translation to predictable revenue is what comes next.

The $32 billion consulting backlog is showing signs of improved quality—shorter contract durations, quicker conversion to billings, and a higher-margin mix driven by AI services. IBM’s GTM teams are leaning into this pull-through motion, using GenAI as a wedge into higher-value client relationships.

Greyhound Standpoint – For CIOs, the signal is clear: IBM’s product lines are moving at different speeds. Red Hat, Automation, and z17 are showing tangible enterprise lift—with Power11 now entering the conversation. But Software monetisation still lags infrastructure delivery, and Consulting remains in transition from legacy programs to AI-led services. The pieces are working, but orchestration is everything. CIOs should engage IBM across stack boundaries—with a sharp eye on integration, licensing friction, and roadmap alignment between business units.

IBM’s cumulative generative AI book of business crossed $7.5 billion this quarter—up from $6 billion in Q1—marking its strongest sequential expansion since the launch of watsonx. More importantly, this wasn’t theoretical pipeline growth. IBM added $1.5 billion in net new generative AI signings in Q2 alone. Of the quarter’s new signings, over $1 billion came from Consulting; Software has now contributed more than $1.5 billion to the cumulative book. Red Hat, watsonx.data, and watsonx.governance were the key levers driving this momentum.

Where competitors are racing to scale foundation models for open-ended tasks, IBM is playing a different game: building auditable, orchestrated AI tailored to enterprise governance and lifecycle constraints. It’s not about training the biggest model—it’s about where and how that model runs, and whether it can be secured, monitored, and reused.

That strategy is now showing up in client deployments. UPS is using watsonx.ai to orchestrate predictive delivery flows in its logistics chain. Nestlé has deployed watsonx Assistant to automate supply chain operations. Verizon is using watsonx Code Assistant to streamline network engineering workflows. These aren’t experiments—they’re embedded, ROI-driven transformations.

At Think 2025, IBM extended the capabilities of watsonx by introducing no-code agent builders in Orchestrate. This allows business users—without coding experience—to configure and manage domain-specific AI agents. IBM now offers over 150 of these prebuilt agents across HR, finance, procurement, and IT. Crucially, all of them are orchestrated and governed through watsonx.governance, ensuring policy control, versioning, and regulatory alignment from day one.

Underneath this application layer, IBM has been strengthening the AI execution environment. RHEL AI is positioned as the secure developer platform for fine-tuning open models. OpenShift AI, meanwhile, serves as the runtime backbone for inferencing across hybrid deployments. This integration matters—not just because it improves performance, but because it ties AI into the operating fabric of enterprises’ existing hybrid estates.
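To illustrate what “inferencing across hybrid deployments” means from the application side, here is a hedged sketch of a client calling a model endpoint served from an OpenShift AI cluster over plain HTTPS. The endpoint URL, token handling, and payload schema are assumptions for the example—the exact request format depends on the serving runtime configured on the cluster—but the point stands: the consuming code is the same whether the cluster runs on-premises or in a public cloud.

```python
# Illustrative client call to a model endpoint served from an OpenShift AI cluster.
# The URL, token, and payload schema below are assumptions for this sketch; the
# actual request format depends on the serving runtime configured on the cluster.
import os

import requests

ENDPOINT = "https://models.apps.cluster.example.com/v2/models/claims-summariser/infer"
TOKEN = os.environ["MODEL_API_TOKEN"]

payload = {
    "inputs": [
        {
            "name": "text",
            "shape": [1],
            "datatype": "BYTES",
            "data": ["Summarise the attached claims correspondence for the adjuster."],
        }
    ]
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())

# Because the call is plain HTTPS against the cluster's route, the same client
# code runs unchanged against an on-prem or cloud-hosted OpenShift cluster.
```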

Internally, IBM’s Client Zero initiative has become a showcase for how AI scales in the real world. The company has automated more than 70 workflows across finance, procurement, HR, and IT, delivering $3.5 billion in productivity gains to date. That number is expected to exceed $4.5 billion by end-FY25. These internal playbooks are not just stories for investor decks—they’re being turned into repeatable client offerings.

Just as critical is IBM’s position in the broader ecosystem. Rather than going head-to-head with hyperscalers, IBM is integrating across the board—embedding watsonx into AWS, Azure, Salesforce, Oracle, and other platforms. The intent is clear: to position watsonx as the connective tissue for enterprise AI, capable of governing and orchestrating models, agents, and workflows across an increasingly fragmented hybrid environment.

Greyhound Standpoint – For CIOs, the takeaway is this: IBM isn’t chasing AI headlines. It’s building an enterprise-native AI stack—one that is tightly coupled with infrastructure, rich in policy, and grounded in execution. For organisations navigating regulated environments, distributed operations, and high-stakes workloads, IBM’s strategy may not be loud, but it is built to last.

Across the 800-plus enterprise technology leaders surveyed in Greyhound CIO Pulse 2025, one theme is sounding louder than the rest: enterprises no longer buy AI on narrative. They buy it on impact. And impact now means proximity, not platform. A full 68% of respondents said they are actively prioritising AI workloads that run closer to their core systems of record—not in remote cloud regions—due to rising sensitivity around latency, operating costs, and policy enforcement.

IBM’s z17 and Power11 platforms are aligning with this shift—but alignment is not the same as adoption. The ability to run AI inference directly within the mainframe, on Telum II, is gaining conceptual traction among CIOs in financial services, telecom, and public sector. Likewise, Power11—with its SAP RISE certification and container-native inferencing—is surfacing as a credible alternative in hybrid deployment evaluations, particularly where non-x86 performance and workload locality are strategic drivers.

According to Greyhound Fieldnotes, however, this conceptual alignment is being undercut by fragmented execution. In practice, CIOs aren’t evaluating IBM’s products in isolation. They’re assessing IBM’s ability to show up as an integrated, accountable partner across their AI infrastructure, orchestration, and automation estate. And too often, that cohesion breaks down.

When asked about platform priorities, 71% of CIOs in Greyhound Pulse said they are actively consolidating AI, automation, and orchestration into a unified control plane—a single layer that governs both model behaviour and workflow execution. IBM, on paper, has the architecture: watsonx for model orchestration, Red Hat for container management and compliance, and HashiCorp for secrets, state, and automation. But according to Greyhound Fieldnotes, buyers still experience these capabilities as a loosely coupled collection—not a cohesive, license-clear platform.

In Europe, especially France and the Nordics, CIOs consistently called out watsonx.governance as the most technically mature AI governance tool among major enterprise vendors. In banking and government settings, where audit trails and policy configuration matter more than model scale, this feature set is a differentiator. But as one European CIO put it bluntly: “The product is right. The packaging is not.” Greyhound Fieldnotes reinforce that enthusiasm routinely gives way to hesitation—driven by SKU sprawl, overlapping modules, and unclear deployment tiers.

In Southeast Asia, particularly Singapore and Malaysia, CIOs report success with watsonx Orchestrate in procurement and HR automation. But they also note that post-pilot delivery falters—largely due to fractured accountability between Red Hat, Consulting, and watsonx teams. In multiple enterprise accounts, we documented escalation loops just to align architectural guidance with ownership responsibilities. That’s not a product flaw. That’s a coordination gap.

In Australia and the UAE, visibility—not viability—is the limiting factor. CIOs in these markets appreciate IBM’s stack vision but tell us they still struggle to translate that vision into tailored architecture without relying on global teams parachuted in from abroad. These are markets with board-level urgency for AI, yet IBM’s local pre-sales and solutioning depth still trails buyer intent.

In the U.S. federal market, Greyhound Fieldnotes show real momentum for z17. Agencies evaluating Telum II for confidential inferencing are focused on scenarios where GPU offloading is impractical or non-compliant—especially in civilian defence and intelligence workflows. IBM’s value proposition here is unique. But even among friendly accounts, integration clarity remains a pain point. CIOs want to know whether Spyre, OpenShift AI, and watsonx workloads will operate seamlessly in hybrid zones like AWS GovCloud and Azure Government. Confidence in compute is not the same as confidence in orchestration.

Greyhound Standpoint – What unifies these buyer stories is not doubt in IBM’s AI ambition—it’s concern over delivery fragmentation. Enterprises aren’t just buying platforms. They’re buying predictability. They want IBM to collapse complexity, surface ownership, rationalise SKUs, and follow through with local expertise. The potential is no longer in question. The architecture is well-understood. The early proof points are on the table. But the margin of success now lies in IBM’s ability to stitch these parts together in the field—with consistency and without friction. The interest is real. The stakes are high. And as it stands, the burden of execution still sits squarely with IBM.

For CIOs and enterprise architects evaluating IBM after this quarter, the message is layered—but actionable.

First, z17 and Power11 are no longer just performance upgrades. They’re turning into nodes for AI execution. With Telum II and the upcoming Spyre accelerator, z17 now enables native AI inference directly on core transactional systems—without sending sensitive data off-platform. For organisations in financial services, telecom, and government, that isn’t a spec sheet feature—it’s a policy-compliant architecture. Meanwhile, Power11 is entering production cycles with SAP RISE certifications, container-native capabilities, and a credible path to multi-cloud inferencing via PowerVS.

Second, IBM’s control plane is starting to take shape. With the HashiCorp integration now underway, and Red Hat and watsonx fully aligned under the same enterprise stack, buyers can begin testing IBM’s platform as a cohesive automation and orchestration layer. The pieces are finally interoperable. But they will only deliver value if IBM simplifies licensing models across agents, runtimes, and hybrid data planes—and gives enterprise IT teams a single pane of ownership from pre-sales through support.

Third, CIOs should begin reframing IBM Consulting from a delivery engine to a design partner. With over 75,000 GenAI-certified professionals, a $1 billion+ GenAI book of business, and a maturing catalogue of industry-specific AI blueprints, IBM is in a position to co-develop—not just implement—enterprise AI strategy. This is particularly important for buyers navigating overlapping clouds, complex compliance postures, and workflow-centric transformation. The Consulting backlog is growing, the quality of that backlog is improving, and the productivity differential on GenAI projects is starting to show up in the margins.

Above all, buyers must treat IBM as what it’s becoming—not what it once was. This is no longer a legacy vendor repositioning itself for relevance. It’s a systems player building a vertically integrated AI stack from chip to orchestration, with governance in the middle. That stack is not yet seamless. But it’s visible. And for enterprises that value trust, policy, and repeatability over speed alone, it may be the most strategically aligned AI platform on the market today.

IBM’s Q2 2025 results reinforce that this is no longer a company in recovery mode. It is replatforming—methodically—around a full-stack AI and automation strategy that spans silicon, software, and services. The z17 launch didn’t just refresh a hardware cycle. It reframed the mainframe as an inferencing substrate. Power11 isn’t just chasing x86 parity. It’s carving space as a non-hyperscaler option for regulated AI workloads. And watsonx is no longer an ambition—it’s showing up in contracts, deployments, and client pilots. For a company often dismissed as lagging behind the AI curve, this quarter was a line in the sand.

But execution alone won’t buy IBM a re-rating—because it isn’t what’s missing. What’s missing is coherence at scale. Software growth is real in parts—Red Hat, Automation, Data—but inconsistent when viewed as a whole. Transaction Processing continues to dilute software narrative strength. Consulting is writing AI deals, yes, but those deals haven’t matured into consistent revenue lift. And while the $7.5 billion GenAI number sounds impressive, there’s still no clarity on how much of that becomes recurring—and when. What IBM has proven is that it can sell. What it now has to prove is that it can sustain.

Internally, IBM’s transformation is visible and valuable. Client Zero is a blueprint for enterprise AI done right—automating over 70 workflows, delivering billions in savings, and creating reference architectures with measurable ROI. But in the market, that credibility often gets lost in the shuffle. Buyers tell us they still experience IBM in fragments: separate motions from Red Hat, Consulting, watsonx, and Infra teams, with overlapping roadmaps and disjointed ownership. The result isn’t rejection—it’s hesitation. And in this market, hesitation costs deals.

Strategically, IBM is building something unique. It is not chasing scale for its own sake. It is not racing into chatbots or web-scale LLMs. It is architecting an enterprise AI operating layer—one that is policy-aware, lifecycle-managed, and embedded into systems of record. That approach is resonating in telcos, banks, governments, and large industrials who don’t want to just bolt AI onto a workflow—they want to rebuild the workflow around intelligence. But to fully capitalise, IBM must do three things fast: unify its go-to-market, simplify its licensing, and prove that its cross-stack execution can be repeatable across industries.

The company is no longer trying to reinvent itself. It has. What comes next is proving that reinvention delivers not just product wins, but platform-wide pull. That it can turn deployments into platforms. That it can turn bookings into margin. That it can show up not as a collection of parts—but as a system that works.

The flywheel is turning. The architecture is credible. The clients are listening. Now IBM must compound.

Analyst In Focus: Sanchit Vir Gogia

Sanchit Vir Gogia, or SVG as he is popularly known, is a globally recognised technology analyst, innovation strategist, digital consultant and board advisor. SVG is the Chief Analyst, Founder & CEO of Greyhound Research, a Global, Award-Winning Technology Research, Advisory, Consulting & Education firm. Greyhound Research works closely with global organizations, their CxOs and the Board of Directors on Technology & Digital Transformation decisions. SVG is also the Founder & CEO of The House Of Greyhound, an eclectic venture focusing on interdisciplinary innovation.


