AI Coding Tools: Debunking the Productivity Myth

According to a new study from the research nonprofit METR, experienced developers can take 19% longer to complete tasks when using popular AI assistants such as Cursor Pro and Claude, challenging the tech industry’s prevailing narrative about AI coding tools.

Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research, warned that organizations risk “mistaking developer satisfaction for developer productivity,” noting that most AI tools improve the coding experience through reduced cognitive load but don’t always translate to faster output, especially for experienced professionals.

Gogia argued this represents “a vital corrective to the overly simplistic assumption that AI-assisted coding automatically boosts developer productivity,” suggesting enterprises must “elevate the rigour of their evaluation frameworks” and develop “structured test-and-learn models that go beyond vendor-led benchmarks.”

“The 19% slowdown observed among experienced developers is not an indictment of AI as a whole, but a reflection of the real-world friction of integrating probabilistic suggestions into deterministic workflows,” Gogia explained, emphasizing that measurement should include “downstream rework, code churn, and peer review cycles—not just time-to-code.”

Gogia recommended enterprises adopt a “portfolio mindset: deploying AI copilots where they augment cognition (documentation, boilerplate, tests), while holding back in areas where expertise and codebase familiarity outweigh automation.” He advocated treating AI tools “not as a universal accelerator but as a contextual co-pilot” that requires governance and measurement.

As quoted in InfoWorld, in an article by Gyana Swain published on July 11, 2025.

CIOs Must Rethink AI Coding Investments Beyond Perceived Velocity Gains

Greyhound Standpoint – According to Greyhound Research, the METR study offers a vital corrective to the overly simplistic assumption that AI-assisted coding automatically boosts developer productivity. Rather than viewing this as a refutation of AI’s value, CIOs and CTOs should treat it as a call to elevate the rigour of their evaluation frameworks. The key message is not that AI tools don’t work, but that they work differently depending on context, experience level, and task type.

The 19% slowdown observed among experienced developers using AI tools like Cursor and Claude is not an indictment of AI as a whole, but a reflection of the real-world friction of integrating probabilistic suggestions into deterministic workflows. Enterprises must now develop structured test-and-learn models that go beyond vendor-led benchmarks. These should include longitudinal telemetry tracking, A/B testing across developer personas, and measurement of downstream rework, code churn, and peer review cycles—not just time-to-code.
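
To make the idea concrete, here is a minimal sketch of what such an A/B readout could look like, written in Python against a hypothetical in-house dataset; every record, field name, and number below is illustrative, and the calculation is a deliberate simplification, not METR’s methodology. The point is the shape of the measurement: total effort (initial coding plus downstream rework) compared per developer persona, rather than raw time-to-code.

# Minimal A/B readout for an AI-coding pilot: compares total effort
# (initial coding time plus downstream rework) per developer persona.
# All records, field names, and numbers below are hypothetical.
from statistics import mean
from collections import defaultdict

# Each record: (persona, arm, coding_hours, rework_hours)
tasks = [
    ("senior", "ai",      6.2, 1.9), ("senior", "control", 5.0, 0.6),
    ("senior", "ai",      7.1, 2.2), ("senior", "control", 5.8, 0.8),
    ("junior", "ai",      4.0, 1.1), ("junior", "control", 6.5, 0.9),
    ("junior", "ai",      3.7, 1.3), ("junior", "control", 6.1, 1.0),
]

totals = defaultdict(list)
for persona, arm, coding, rework in tasks:
    totals[(persona, arm)].append(coding + rework)  # effort, not just time-to-code

for persona in sorted({p for p, _ in totals}):
    ai, control = mean(totals[(persona, "ai")]), mean(totals[(persona, "control")])
    delta = (ai - control) / control * 100
    print(f"{persona}: AI {ai:.1f}h vs control {control:.1f}h ({delta:+.0f}%)")

Even this toy readout surfaces the pattern the study describes: once rework is counted, the same tool can slow one persona while accelerating another, which is exactly what a single averaged velocity number would hide.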

Greyhound Research believes the most successful organisations will adopt a portfolio mindset: deploying AI copilots where they augment cognition (documentation, boilerplate, tests), while holding back in areas where expertise and codebase familiarity outweigh automation. AI tools should be seen not as a universal accelerator but as a contextual co-pilot—one that can offload mental strain, but also introduce oversight burdens that experienced developers are uniquely positioned to detect and resolve.

Developer Perception of AI Productivity Needs Harder Validation Metrics

Greyhound Standpoint – According to Greyhound Research, the stark gap between developer perception (believing AI makes them faster) and actual observed performance (being 19% slower) reveals a deeper failure in enterprise evaluation practices: mistaking developer satisfaction for developer productivity. Most AI tools today improve the experience of coding—by reducing effort, cognitive load, and decision fatigue—but that does not always translate into faster or higher-quality output, especially in the hands of seasoned professionals.

This divergence is not unique to the METR study. Similar patterns have been observed in field data from GitClear (41% higher churn for AI-generated code), Uplevel (higher bug rates with Copilot), and even Atlassian’s own 2025 Developer Experience study, which found that while 99% of developers believe AI tools save time, those gains are often cancelled out by inefficiencies elsewhere in the workflow. The key takeaway is that organisations must move beyond surface-level developer surveys and implement robust instrumentation of how AI tools impact task completion, code quality, and team throughput.
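
As one example of such instrumentation, churn can be approximated from version-control history alone. The sketch below is a rough Python proxy, assuming it runs inside a local git checkout and relying only on standard git log --numstat output; it tallies lines added to each file in one window against deletions from the same files in the following window, and should be read as an approximation of the rewrite-rate idea rather than GitClear’s actual methodology.

# Crude code-churn proxy from git history: lines added to a file in one
# period that are followed by deletions from the same file soon after.
# Assumes a local git checkout; this approximates "rework", it is not
# GitClear's exact churn definition.
import subprocess
from collections import defaultdict

def numstat(since, until):
    """Sum lines added/deleted per file for commits in [since, until)."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", f"--until={until}",
         "--numstat", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = defaultdict(lambda: [0, 0])
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit():  # skips binary "-" entries
            stats[parts[2]][0] += int(parts[0])     # lines added
            stats[parts[2]][1] += int(parts[1]) if parts[1].isdigit() else 0
    return stats

written = numstat("4 weeks ago", "2 weeks ago")   # code written then...
reworked = numstat("2 weeks ago", "now")          # ...touched again since

added = sum(a for a, _ in written.values())
churned = sum(min(written[f][0], reworked[f][1]) for f in written if f in reworked)
print(f"~{churned / max(added, 1):.0%} of recently written lines were reworked")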

Crucially, CIOs must realise that developer belief in AI’s utility—while important for adoption—can lead to misplaced confidence and silent technical debt. The correct approach is to frame AI as a cognitive accelerator that requires governance. Greyhound Research recommends redefining developer KPIs to include dimensions like peer review friction, incident rates post-deploy, and learning curve benefits for junior engineers. Without this multidimensional lens, enterprises risk mistaking comfort for competence and velocity for value.
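
One way to operationalise that multidimensional lens is a scorecard that refuses to report velocity in isolation. The sketch below is a hypothetical shape for such a record, with invented field names and guardrail thresholds rather than a published Greyhound framework: a time-to-code gain only counts when review friction, post-deploy incident rates, and churn stay within bounds.

# Hypothetical multidimensional developer-KPI record: velocity is only
# reported alongside the quality signals that can silently offset it.
# Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AIToolScorecard:
    team: str
    time_to_code_delta_pct: float      # perceived "speed" dimension
    review_rounds_per_pr: float        # peer review friction
    incidents_per_100_deploys: float   # post-deploy incident rate
    churn_rate_pct: float              # near-term rewrite of new code

    def verdict(self) -> str:
        # Made-up guardrails: a speedup only "counts" if quality holds.
        quality_ok = (self.review_rounds_per_pr <= 2.0
                      and self.incidents_per_100_deploys <= 5.0
                      and self.churn_rate_pct <= 25.0)
        if self.time_to_code_delta_pct < 0 and quality_ok:
            return "genuine productivity gain"
        if self.time_to_code_delta_pct < 0:
            return "velocity gain offset by quality regressions"
        return "no measurable gain: revisit where the tool is deployed"

# Example: a 12% coding speedup undone by review friction and churn.
print(AIToolScorecard("payments", -12.0, 3.1, 7.5, 30.0).verdict())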

If AI Productivity Gains Are Misjudged, Enterprises Must Recalibrate Fast

Greyhound Standpoint – According to Greyhound Research, the industry’s assumption that AI coding tools unequivocally enhance developer productivity is not so much wrong as it is premature and context-insensitive. The METR study does not suggest that AI tools lack value—it shows that when measured in realistic, high-skill environments, their benefits are uneven and often offset by new types of friction. This nuance is essential. We are not witnessing the collapse of AI promise; we are witnessing its recalibration.

If enterprises continue to assume linear productivity gains from AI without measuring unintended complexity—like integration overhead, code review burden, and prompt engineering loops—they risk investing in tooling that generates output but diminishes control. For CIOs, this is a governance issue as much as a capability issue. It demands a shift in mindset: from deploying AI to “go faster” to deploying AI to “go better, where it makes sense.”

Greyhound Research urges caution not in adoption, but in overgeneralisation. GitHub, Microsoft, and others continue to show productivity lift in structured tasks, new codebases, or for less experienced developers. However, these benefits do not extrapolate universally—especially not to deep domain experts maintaining mature code. If productivity assumptions are proven incorrect at scale, the competitive implication won’t be obsolescence of AI tooling, but a retreat toward smarter orchestration strategies. Winners will be those who pair AI with developer telemetry, maintain feedback loops, and foster a culture where automation is interrogated, not idolised.

Analyst In Focus: Sanchit Vir Gogia

Sanchit Vir Gogia, or SVG as he is popularly known, is a globally recognised technology analyst, innovation strategist, digital consultant and board advisor. SVG is the Chief Analyst, Founder & CEO of Greyhound Research, a Global, Award-Winning Technology Research, Advisory, Consulting & Education firm. Greyhound Research works closely with global organizations, their CxOs and the Board of Directors on Technology & Digital Transformation decisions. SVG is also the Founder & CEO of The House Of Greyhound, an eclectic venture focusing on interdisciplinary innovation.


