Access Without Accountability: The Dangerous Experiment Unfolding in India’s AI Economy

Reading Time: 16 minutes


In recent weeks, three of the world’s most powerful AI platforms – OpenAI (ChatGPT Go), Google (Gemini), and Perplexity – quietly dropped a bombshell in India. They made their premium AI services free. To the public, it feels like a gift. To investors, it looks like user acquisition on steroids. But to those watching closely, it marks something else entirely.

This is not just a rollout. It is a realignment.

India is no longer a passive consumer of global tech. It has become the frontline in a new race. This is not just about adoption. It is about behavioral influence, training data, and investor validation. The platforms know this. The telcos know this. And slowly, regulators are starting to catch on.

At Greyhound Research, we have seen this before. These are moments when access masks deeper asymmetries. These AI giveaways are not about generosity. They are a new kind of land grab. The prize is not revenue today. The prize is training signals, usage patterns, and early lock-in at a national scale. In India, that means hundreds of millions of real-world prompts across languages, devices, and use cases. The more granular the prompt, the more valuable the insight. The more frictionless the access, the more invisible the trade.

Behind the zero-price tag is a full-stack negotiation between telecom carriers and platform providers. Google has bundled Gemini Pro into Jio plans. Airtel has done the same with Perplexity Pro. OpenAI has bypassed telcos for now, but only to fast-track a direct user relationship. In every case, the logic is clear. Use free access to collect engagement, train models, shape habits, and inflate growth metrics. This is not consumer-scale access. This is infrastructure-scale influence.

And telecoms are not just distribution partners. They are becoming digital power brokers. What used to be a channel is now a filter. If your first AI assistant comes preloaded with your data plan, what happens to platform neutrality? What happens to choice? When Jio users default to Gemini and Airtel users default to Perplexity, the illusion of discovery disappears. What replaces it is quiet, commercial gatekeeping.

What makes this moment more dangerous is its timing. India’s policy framework for AI remains nascent. The Digital Personal Data Protection Act is not yet operational. No AI-specific regulation exists. Enterprise governance maturity is also far behind the curve. Sensitive data is already flowing into external models with little clarity on ownership, reuse, or jurisdiction. This is not a hypothetical risk. It is already happening. CIOs are waking up to discover that internal pitch decks, customer contracts, and compliance documents are being used to train the future of intelligence.

Meanwhile, valuations climb. Metrics soar. Platforms gain ground. And no one stops to ask the question. At what cost?

This is not a market expansion. It is a behavioral reset. It is a reprogramming of how people search, learn, write, and make decisions. This is AI as soft power. It is not being embedded through governance or education. It is entering lives through telecom billing and bundling. Once installed, it becomes second nature. It becomes the first interface to knowledge, to productivity, and to identity itself.

To enterprise leaders watching from the sidelines, this is not just a consumer story. The platforms entering your employees’ pockets today will enter your enterprise stack tomorrow. The tools your teams play with at home become the defaults they reach for at work. What is free today becomes embedded tomorrow. It happens without oversight, without policy, and without anyone knowing what is being learned from your organization’s data.

This dossier is a warning. AI platforms are not just distributing access. They are embedding themselves into national digital behavior. They are rewriting norms around trust, neutrality, how value is created, and who captures it. They are reshaping intelligence in their image, using your users, your data, and your bandwidth.

India is not the endgame. It is the prototype. Brazil, Indonesia, and Nigeria are next. The strategy is portable. The silence is global.

This is not the first time India has been the proving ground for a global digital agenda. We saw this during the Free Basics controversy, when telcos and platforms tried to curate the open internet into commercial silos. We saw it with zero-rating debates, Aadhaar’s biometric infrastructure, and UPI’s transformation of financial identity. India has always been the sandbox. Now, it is the signal generator.

Free AI is the next frontier in that playbook. This time, it is not about how we access information, but how we author it. It is not about which sites load faster, but whose language becomes default. When AI platforms train on Indian prompts and export those learnings to shape global cognition, who decides what is local and what becomes universal?

The stakes are cultural. The costs are structural. The urgency is now. This is not about what AI can do. It is about who it learns from. And whether we have a say in how that learning is used.

Let us call this what it is. This is not a mass-access play. It is a momentum machine. The free AI push in India is not about generosity or global inclusion. It is a calibrated move to capture usage signals, build behavioral datasets, and inflate platform valuations.

Platforms like Perplexity are racing to demonstrate scale, engagement, and diversity of training input before their next funding rounds. For Google and OpenAI, the stakes are even higher. They are defending global positioning and revenue futures by locking in users now, before policy and competitors catch up.

India is not just a market. It is a multiplier. It provides model feedback, monetization narratives, and investor signals. Every new user, every prompt, and every uploaded file adds weight to a platform’s valuation story. India’s scale and linguistic diversity are being harvested as training fuel and strategic proof.

This is also the first time multiple AI players have launched free or subsidized pro-level services at a national scale. They are reaching users directly through consumer channels and telecom plans. These are not promotional stunts. They are foundational moves. They are shaping how AI gets introduced into society, not by discovery, but by preloading.

In a recent Greyhound Fieldnote, we at Greyhound Research discussed with the CIO of a large bank how junior analysts had been uploading pitch decks into free AI tools, unaware they were training external models. The risk came to light only after a partner flagged similar phrasing in a public demo. No one had noticed. No policy existed. No opt-out had been set.

What few enterprises realize is that this is not just a licensing gap. It is shadow AI. Employees are not installing rogue software. They are relying on free public models to generate insights, create content, summarize legal drafts, and even process client data. This is not just a security problem. It is a cognitive risk surface. It creates blind spots in decision-making. And the consequences are rarely visible until they surface publicly.

The real risk is not model accuracy. It is organizational amnesia. Enterprises are not tracking what data is leaving, what logic is being accepted, or what assumptions are being baked into daily operations. When a prompt becomes embedded in a workflow, it becomes policy without ever being debated.

There is also a cost to waiting. Some CIOs believe they can delay until the next budget cycle or until regulation forces their hand. But by then, usage patterns will be entrenched. User behaviors will be primed. Retraining employees will be harder. Cleaning up leakage will be more expensive. And explaining these risks to regulators will come with higher stakes.

This is soft lock-in. Not through contracts, but through convenience. Not through pricing, but through priming. When platforms own the interface to intelligence, they own the context. And when that interface is free, it is easier to accept it without challenge.

Greyhound Standpoint – At Greyhound Research, we see this as more than a market push. This is an infrastructure-level reset. Platforms are not simply expanding access. They are reconfiguring the rules of value exchange. In this new paradigm, users trade data for convenience. Enterprises, meanwhile, inherit exposure without visibility. This is not a neutral shift. It is an aggressive restructuring of how language, loyalty, and leverage are captured in the AI economy.

India once stood firm on the principle of net neutrality. It rejected Free Basics. It refused zero-rating. It chose openness over platform control. But that was the internet. What happens when intelligence becomes the product, and it arrives not through discovery, but bundling?

We are now entering a two-tier AI structure, divided by telecom plan.

If you are a Jio user, you get Google’s Gemini Pro, preloaded and subsidized. If you are with Airtel, you get Perplexity Pro embedded in your mobile and broadband experience. What was once an open choice is now a preselected path. Platform preference is being dictated by telecom allegiance.

This is not just commercial bundling. It is epistemic sorting. It is telecom identity deciding your first brush with machine intelligence. It shapes what you ask, what you learn, and how you form trust.

In this emerging model, telcos are no longer passive carriers. They are active filters of platform access. They choose who gets distribution, visibility, and default status. They negotiate terms that give AI providers instant reach and brand credibility. In return, platforms offer white-glove integration and data capture at scale.

Google’s Gemini Pro bundle with Jio is not just a growth hack. It is a calculated land-and-expand model, targeting youth and mobile-first users. Airtel’s national rollout of Perplexity Pro flips the same switch, this time with a smaller AI player betting on first-mover advantage through telecom loyalty.

This creates a systemic risk. If platform adoption is defined by telco alignment, not by user need or model quality, we end up with AI monocultures in disguise. One telco, one AI. One plan, one worldview.

These early interactions with AI are not neutral. They influence how users learn to ask questions, what formats they trust, and what types of responses they normalize. This is not just behavioral shaping. It is cognitive scaffolding. If your first few hundred AI interactions happen inside one platform’s model, that model becomes your epistemic lens. It does not just answer your questions. It starts to frame your worldview.

As AI becomes the interface to search, writing, and decision support, the line between editorial and inference begins to blur. Users may not know where the answer ends and the algorithm begins. This is especially true for first-time AI users who trust the default, unaware of what alternatives exist.

For enterprises, the exposure is closer than most CIOs imagine. With mobile devices doubling as both personal and professional tools, these pre-bundled AI assistants often enter enterprise workflows under the radar. No one signed a contract. No one saw the terms. But the model is now in the room, listening, generating, and learning from prompts tied to real business operations.

The concern is not just consumer disempowerment. It is structural control. If a telco contract determines your access to intelligence, then platform neutrality becomes meaningless. The gate has moved. The gatekeeper is now your network provider.

This reshapes the AI opportunity in India into something narrower, more guided, and more commercially contained. It rewards players with reach, not necessarily responsibility. And it risks locking out local innovation before it reaches scale.

Greyhound Standpoint – Telcos are no longer pipes. They are power brokers. India must now confront what it means when a data plan also decides your first AI assistant. Platform neutrality can no longer be defined by traffic alone. It must evolve to include access, algorithmic exposure, and ethical distribution of intelligence.

This new wave of AI is moving faster than policy can keep up, and far faster still than enterprise governance. The rollout of free, pro-tier AI tools across India has stress-tested the country’s regulatory stack, and the cracks are starting to show.

India’s Digital Personal Data Protection (DPDP) Act is still in its rollout phase. AI-specific regulation remains aspirational. In the meantime, global platforms are operating at scale, training on Indian prompts, generating cross-border outputs, and redefining data jurisdiction in real time. There is no mature enforcement mechanism. There is no structured oversight. The guardrails are being retrofitted after rollout.

For enterprises, this is not just a regulatory gap. It is a compliance multiplier. Every new AI tool accessed by an employee introduces new flows of prompt data, generated content, and potential IP leakage. Most of these tools operate in black-box mode. Their terms of service change frequently. Their data handling policies are vague by design. And few enterprises have mapped how their internal controls extend to free-tier AI interactions.

This is not hypothetical. Prompt data is already crossing borders, leaving no audit trail. Generated outputs are entering workflows without validation. And legal teams are discovering platform usage only after external incidents expose it.

Most enterprises still assume that model training is a separate event from user interaction. But many free-tier tools treat every prompt as potential fuel. That means proprietary logic, confidential syntax, and unique business phrasing can quietly become part of the platform’s learning base. And once it is learned, it cannot be unlearned.

Regulated sectors face an amplified version of this risk. When AI-generated content is used in documentation tied to public safety, financial decisions, or legal compliance, even a minor misalignment between platform policy and internal protocols can escalate into regulatory conflict.

Over time, AI tools start to change their voice. The answers that once sounded consistent with the company tone can begin to feel slightly off. Models evolve in the background, learning new patterns, adjusting their style, and sometimes losing touch with what was once approved. What slips through first is not accuracy, but alignment: the subtle tone, the phrasing, and the judgment. And before long, what once saved time begins to create quiet risk in customer communication and regulatory submissions.

In a recent Greyhound Fieldnote, a compliance officer at a pharmaceutical firm shared how a team began using a bundled AI tool to generate clinical reporting drafts. Legal and regulatory teams had never reviewed the tool’s data policy. The terms of service were discovered only after an external agency flagged output similarities with another trial response.

Another blind spot is exit risk. Enterprises may allow free AI usage during periods of exploration or resourcing gaps. But if these tools become embedded in workflows and then suddenly pivot to paywalls, change access terms, or go offline, the business impact can be immediate. Productivity loss, continuity disruption, and compliance exposure follow.

This also raises a deeper sovereignty question. Most of the AI tools currently scaling in India are foreign-built, trained on non-local contexts, and governed by external legal frameworks. India still lacks scale in public models, localized compute, or developer ecosystems. The risk is not just dependency. It is asymmetry.

Audit teams are now being asked to sign off on AI usage with no visibility into what was used, by whom, with what prompts, and under which jurisdictions. That is not audit readiness. That is a liability event waiting to happen.

Greyhound Standpoint – Compliance is no longer a checklist. It is a design function. At Greyhound Research, we urge enterprise leaders to treat AI adoption with the same controls as financial systems, not because of what the models can do, but because of what users might unknowingly reveal to them. If your teams cannot audit what they use, where it sends data, or how it learns from their prompts, then that tool is not free. It is a deferred cost.

There is a myth in the enterprise playbook that being early means being smart. But with AI, early adoption without internal maturity is not leadership. It is exposure.

Every organization is under pressure to say yes to AI. Boards want to hear about pilots. Vendors offer proof-of-concepts. Employees experiment with tools on the side. But very few leaders are asking the harder question: do we even need this yet?

The best AI outcomes do not come from the fastest adopters. They come from the most prepared ones. Organizations that pause to assess readiness, define boundaries, and build governance before tools are introduced tend to scale AI more safely, more credibly, and with fewer reputational scars.

Saying no is not anti-innovation. It is often the most strategic decision a leader can make.

Most free-tier AI tools today are built with capabilities that far exceed the average enterprise’s governance readiness. These models can draft policy, summarize contracts, and simulate tone of voice. Yet the organizations using them often lack even basic AI usage protocols. This gap is not operational. It is existential.

The real test is not whether your teams can use AI. It is whether your enterprise can absorb its consequences. Have you defined a list of approved tools and use cases? Do you have training protocols to help employees understand data sensitivity and model limitations? Are your legal teams aligned on how prompt data interacts with contracts and audit requirements? Do you have an exit plan if a tool changes access terms or becomes incompatible with your compliance stack?

In service firms and regulated industries, using AI-generated outputs without proper oversight can lead to unintentional breaches of client contracts. If a response is co-authored by a public AI tool that was never disclosed or approved, your firm could be exposed to liability or reputational damage, even if the work was technically accurate.

In a recent Greyhound Fieldnote, a digital leader at a consumer goods firm shared how their customer service team began using a GenAI tool to draft responses to product queries. At first, it seemed efficient. But weeks later, inconsistencies emerged. The responses relied on outdated knowledge and unverified sources. No one had checked the tool’s training cut-off or default content policy.

Shadow governance is the illusion of control. A few guidelines on a wiki, or a disclaimer at the bottom of an email, are not substitutes for enforced guardrails. Real governance shows up in audits, escalation logs, and cross-functional ownership.

The deeper risk is this: free AI tools lower the friction to experimentation. But friction exists for a reason. It creates pause. It creates deliberation. It forces teams to ask whether the tool fits the process or if the process is being reshaped around the tool.

Leadership today requires the ability to say yes selectively and say no decisively. Not because AI is risky, but because not every organization is ready to govern it.

Greyhound Standpoint – Saying no to new technology is also a form of leadership. At Greyhound Research, we have seen that the organizations that benefit most from AI are not the first to adopt but the first to pause, prepare, and draw hard lines between curiosity and capability. Governance is not a reaction. It is a precondition.

The enterprises that get AI right are rarely the ones running the flashiest models. They are the ones that know when to slow down, draw boundaries, and enforce them. Governance is not a plug-in. It is a habit, one that must be trained, tested, and shared across every team that touches data or makes decisions.

Right now, most organizations are still flying blind. Employees are experimenting with AI tools that no one has approved. Policies exist, but they are either too thin to matter or so dense that no one reads them. Legal, compliance, and tech leaders often operate in parallel universes. Prompts flow freely, outputs circulate unchecked, and no one really knows what the models are learning. What feels like agility is actually unmanaged risk wearing a productivity badge.

To bring order to this chaos, four steps help anchor the process.

1. Discovery starts with seeing what is already happening. Every company needs a clear picture of which tools are in use, who is using them, and for what. Browser plug-ins, mobile apps, and telco-bundled assistants all count. Relying on employees to self-declare will never work. Real discovery means digital detective work: analyzing network logs, cloud traffic, and endpoints to surface the truth. A minimal sketch of what such a scan can look like follows this list.

2. Categorization follows naturally. Once you know what exists, decide what stays, what gets fenced off for testing, and what needs to be blocked. The goal is not to kill curiosity; it is to stop it from mutating into exposure.

3. Policy turns good intentions into rules that people can live by. List which tools are approved, what data is off-limits, and how any AI-generated material is verified before it reaches a client or regulator. Teach employees not only how to use AI but also when to walk away from it. Governance that no one understands is not governance at all.

4. Control is where it all becomes real. Every interaction should leave a trail—prompts, outputs, user IDs—so that compliance teams can trace incidents and learn from them. Without logs, it is impossible to tell accident from intent. Auditability might sound bureaucratic, but it is the only thing standing between transparency and denial.
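To make steps 1 and 4 concrete, here is a minimal sketch of the kind of scan referenced above, assuming an egress proxy log in a simple CSV format (timestamp, user, host) and a short watchlist of consumer AI domains. Every path, field, and domain in it is an illustrative assumption rather than a reference to any specific product; the point is the pattern: match outbound traffic against a known list, count it per user, and leave a structured trail that compliance teams can query later.

```python
# Minimal sketch: surface shadow-AI usage from an egress proxy log and keep an
# auditable record of each hit. The log format (CSV: timestamp,user,host), the
# domain watchlist, and the file paths are all illustrative assumptions.
import csv
import json
from collections import Counter
from datetime import datetime, timezone

# Hypothetical list of consumer AI endpoints to watch for; extend it to match
# whatever tools your own discovery exercise is concerned with.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "www.perplexity.ai",
}

def scan_proxy_log(log_path: str, audit_path: str) -> Counter:
    """Count AI-service hits per user and append one JSON audit record per hit."""
    hits: Counter = Counter()
    with open(log_path, newline="") as log, open(audit_path, "a") as audit:
        for row in csv.DictReader(log):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[row.get("user", "unknown")] += 1
                audit.write(json.dumps({
                    "observed_at": row.get("timestamp"),
                    "logged_at": datetime.now(timezone.utc).isoformat(),
                    "user": row.get("user"),
                    "destination": host,
                }) + "\n")
    return hits

if __name__ == "__main__":
    # Placeholder paths; point these at your own proxy export and audit store.
    summary = scan_proxy_log("proxy_egress.csv", "ai_usage_audit.jsonl")
    for user, count in summary.most_common(10):
        print(f"{user}: {count} AI-service requests")
```

In practice, the per-user summary feeds the categorization step, while the structured trail gives audit and compliance teams something concrete to review when an incident surfaces.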

Telecom-bundled AI tools deserve special mention. They slip into workplaces through personal phones and unmanaged devices. Without mobile-device management, these assistants operate outside enterprise control. A SIM card should never be a backdoor. The same rules that guard the firewall must extend to every handset that touches company data.

Model drift adds another layer of risk. AI models evolve quietly, and a tool approved last quarter can start answering differently today. Tone changes, facts shift, and outputs drift just enough to sound off-brand or out-of-policy. Left unchecked, this slow shift can turn a once-safe tool into a compliance liability.
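One way to catch this quiet shift is a routine regression check: replay the prompts that were reviewed when a tool was approved and compare today's answers with the ones that were signed off. The sketch below is illustrative only; the baseline file, the threshold, and the crude text-similarity measure are assumptions, and ask_model stands in for however the sanctioned tool is actually invoked. A low similarity score is a trigger for human review, not a verdict.

```python
# Minimal drift check: replay approved prompts and flag answers that have
# moved too far from the responses recorded at approval time. The baseline
# file shape, the threshold, and the simple similarity ratio are assumptions.
import json
from difflib import SequenceMatcher
from typing import Callable

DRIFT_THRESHOLD = 0.6  # below this similarity, route the case to human review

def check_drift(baseline_path: str, ask_model: Callable[[str], str]) -> list[dict]:
    """Return one report entry per approved prompt, marking likely drift."""
    with open(baseline_path) as f:
        # Expected shape: [{"prompt": "...", "approved_answer": "..."}, ...]
        baselines = json.load(f)
    report = []
    for case in baselines:
        current = ask_model(case["prompt"])
        similarity = SequenceMatcher(None, case["approved_answer"], current).ratio()
        report.append({
            "prompt": case["prompt"],
            "similarity": round(similarity, 3),
            "drifted": similarity < DRIFT_THRESHOLD,
        })
    return report

if __name__ == "__main__":
    # A canned stand-in keeps the sketch self-contained; swap in the real call.
    canned = {"Summarise our refund policy.": "Refunds are issued within 14 days of a valid request."}
    for entry in check_drift("approved_prompts.json", lambda p: canned.get(p, "")):
        print(entry)
```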

Vendor complexity compounds the problem. Many AI products hide a web of third-party plug-ins and APIs. The front end might look compliant, while the back end routes data through unfamiliar jurisdictions. Each integration needs its own review, because every hidden dependency is a potential leak.

In a recent Greyhound Fieldnote, a large Indian retailer shared how it confronted this directly. After a few close calls with unapproved AI use, the CIO built an internal sandbox where employees could test prompts safely. Everything was logged and monitored in real time. Far from stifling creativity, the approach had the opposite effect. Employees experimented more freely, compliance incidents fell, and management finally had a clear view of where AI added value and where it simply added noise.

The biggest truth is that governance cannot live in one department. The CIO cannot own it alone. Legal, HR, risk, and business leaders all have a piece of it. When no one owns AI risk, it multiplies quietly until it explodes publicly.

Greyhound Standpoint – Governance is not a policy document. It is a daily behavior. At Greyhound Research, we see that the companies scaling AI responsibly are not the loud adopters but the disciplined ones. They weave AI hygiene into onboarding, project planning, and performance reviews. The best AI system is not the one that dazzles in demos. It is the one your enterprise can trust without looking over its shoulder.

India is not just another growth market for generative AI. It is the global testbed. It is where platforms trial scale, fine-tune language models, and shape user behavior in one of the world’s most data-rich, price-sensitive, and digitally complex environments.

Free access is not a gift. It is a strategy. It is a lever to gather prompt data, train global models, capture early loyalty, and create soft dependencies long before monetization begins. It is designed to enter quietly, integrate deeply, and scale invisibly.

The platforms understand this. The telecoms are playing along. The regulators are still catching up.

Free tools are not free from risk. When generative AI becomes a bundled service in telecom plans, when the first digital assistant you meet is pre-decided by your carrier, and when critical knowledge is accessed through proprietary inference engines, then platform neutrality is no longer a traffic question. It is a cognitive one.

This is the new frontier of digital power. And in the absence of hard policy, it will be governed by soft alliances and commercial interests.

Enterprises cannot afford to mistake exposure for innovation. If you do not know how a tool handles prompt data, where its models are trained, or what its exit terms are, then you are not innovating. You are offloading control.

Innovation without governance is not leadership. It is a risk passed downstream. It is trust outsourced to unseen models. It is compliance retrofitted after the breach.

Real innovation rarely rewards speed for its own sake. The organizations that will matter most in the next phase of India’s AI journey are the ones that move with purpose. The loud experiments will get attention, but the disciplined ones will endure. Progress does not come from constant motion; it comes from control. What feels like restraint in the short term is often what makes scale sustainable in the long run.

What happens in India will define how the world’s next billion users experience AI. If free AI becomes the default without accountability, it will not just change business models. It will reshape language, trust, and truth itself. The challenge before India is not adoption but authorship. Who writes the next chapter of intelligence—the platforms or the people?

It is not enough to embrace AI. We must embed it with accountability. Because if you cannot govern it, you cannot scale it. And if you cannot scale it safely, then access is not innovation. It is exposure.

Analyst In Focus: Sanchit Vir Gogia

Sanchit Vir Gogia, or SVG as he is popularly known, is a globally recognised technology analyst, innovation strategist, digital consultant and board advisor. SVG is the Chief Analyst, Founder & CEO of Greyhound Research, a Global, Award-Winning Technology Research, Advisory, Consulting & Education firm. Greyhound Research works closely with global organizations, their CxOs and the Board of Directors on Technology & Digital Transformation decisions. SVG is also the Founder & CEO of The House Of Greyhound, an eclectic venture focusing on interdisciplinary innovation.

Copyright Policy. All content contained on the Greyhound Research website is protected by copyright law and may not be reproduced, distributed, transmitted, displayed, published, or broadcast without the prior written permission of Greyhound Research or, in the case of third-party materials, the prior written consent of the copyright owner of that content. You may not alter, delete, obscure, or conceal any trademark, copyright, or other notice appearing in any Greyhound Research content. We request our readers not to copy Greyhound Research content and not to republish or redistribute it (in whole or in part) via email or in any media, including websites, newsletters, or intranets. We understand that you may want to share this content with others, so we’ve added tools under each content piece that allow you to share the content. If you have any questions, please get in touch with our Community Relations Team at connect@thofgr.com.

