EU Probes X Over AI Data Use: What It Means for Enterprises

Reading Time: 6 minutes

Elon Musk’s X is facing a regulatory probe in Europe over its alleged use of public posts from EU users to train its Grok AI chatbot – an investigation that could set a precedent for how companies use publicly available data under the bloc’s privacy laws.

“There’s a noticeable chill sweeping across enterprise boardrooms,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “With Ireland’s data watchdog now formally probing X over its AI training practices, the lines between ‘publicly available’ and ‘publicly usable’ data are no longer theoretical.”

Eighty-two percent of technology leaders in the EU now scrutinize AI model lineage before approving deployment, according to Greyhound Research.

In one case, a Nordic bank paused a generative AI pilot mid-rollout after its legal team raised concerns about the source of the model’s training data, Gogia said.

“The vendor failed to confirm whether European citizen data had been involved,” Gogia said. “Compliance overruled product leads and the program was ultimately restructured around a Europe-based model with fully disclosed inputs. This decision was driven by regulatory risk, not model performance.”

“This probe could do for AI what Schrems II did for data transfers: set the tone for global scrutiny,” Gogia said. “It’s not simply about X or one case – it’s about the nature of ‘consent’ and whether it survives machine-scale scraping. Regions like Germany and the Netherlands are unlikely to sit idle, and even outside the EU, countries like Singapore and Canada are known to mirror such precedents. The narrative is shifting from enforcement to example-setting.”

As quoted in ComputerWorld.com

Ireland’s AI Scraping Probe May Shift Enterprise Risk Perception

Greyhound Flashpoint – There’s a noticeable chill sweeping across enterprise boardrooms. With Ireland’s data watchdog now formally probing X over its AI training practices, the lines between “publicly available” and “publicly usable” data are no longer theoretical. The Greyhound CIO Pulse 2025 report shows that 82% of technology leaders in the EU now say they actively interrogate model lineage before approving production deployment. This is less about compliance theatre and more about real legal exposure.

Greyhound Standpoint – At Greyhound Research, we see this investigation as more than a regulatory sidebar — it’s part of a growing pattern of discomfort around how data is being appropriated in the name of AI. Models that were once admired for their performance are now being scrutinised for their origin stories. Particularly in industries governed by strict rules — finance, public health, government services — we’re observing procurement teams revise their approach. It’s becoming clear: no proof of provenance, no deal.

Greyhound Pulse Insights – The Greyhound CIO Pulse 2025 report shows that seven out of ten AI buyers in Europe now insist on training data disclosure in every RFP. Just two years ago, this number was below 40%. What changed? A slew of legal warnings, public missteps, and mounting concern about shadow datasets — all of which are prompting buyers to play defence.

Greyhound Fieldnotes – Per a recent Greyhound Fieldnote from a Nordic bank, a generative AI pilot was paused mid-rollout after legal teams flagged uncertainty around the origin of training data. The vendor failed to confirm whether European citizen data had been involved. Compliance overruled product leads, and the programme was ultimately restructured around a Europe-based model with fully disclosed inputs. This decision was driven by regulatory risk, not model performance.

Global Regulators Could Follow Ireland’s Lead on AI Data Use

Greyhound Flashpoint – What happens in Dublin won’t stay in Dublin. Ireland’s move could easily shape how regulators in California, Berlin, or São Paulo rethink consent in the age of AI. The Greyhound Regulator Pulse 2025 report shows that 64% of global privacy officers are actively watching EU developments to guide their own policy scaffolding. It’s not a matter of if this spreads — just how fast.

Greyhound Standpoint – From where we stand at Greyhound Research, this probe could do for AI what Schrems II did for data transfers: set the tone for global scrutiny. It’s not simply about X or one case — it’s about the nature of “consent” and whether it survives machine-scale scraping. Regions like Germany and the Netherlands are unlikely to sit idle, and even outside the EU, countries like Singapore and Canada are known to mirror such precedents. The narrative is shifting from enforcement to example-setting.

Greyhound Pulse Insights – The Greyhound Sector Pulse 2025 report shows that nearly 60% of General Counsels across G20 nations are drafting internal AI policies using EU precedents as anchor references. Of those, 71% anticipate that local regulators will eventually mirror European positions on training data rights and disclosure obligations. This suggests Ireland’s actions are already functioning as a global signal.

Greyhound Fieldnotes – Per a recent Greyhound Fieldnote from an Asian government IT department, an AI-driven digital ID programme came under review when it emerged that biometric data was being used for training without citizen consent. Given the project’s national importance, this led to a formal investigation. The issue was not functionality but the lack of declared reuse terms: a powerful example of how silent repurposing can spark a public crisis.

Enterprises Must Demand Transparent AI Training Disclosures

Greyhound Flashpoint – There’s a pivot underway. Conversations are shifting from what a model can do to where it learned to do it. The Greyhound CIO Pulse 2025 report shows that 68% of enterprise buyers now insist on clarity around data origin and legal basis before signing off on AI tools. No one wants to be tomorrow’s headline because of yesterday’s lazy due diligence.

Greyhound Standpoint – In our advisory work at Greyhound Research, we tell clients that AI isn’t just a product — it’s a supply chain. You wouldn’t deploy hardware without knowing where the components came from. Why treat your models differently? At a minimum, enterprise buyers must ask for five things: clarity on source (public vs proprietary), geography of origin, evidence of legal basis for use (consent or equivalent), proof of auditability, and update logs. Any vendor who balks at these should raise alarms.

Greyhound Pulse Insights – The Greyhound Vendor Risk Pulse 2025 report shows that 61% of European procurement leaders now include indemnity clauses that hold vendors directly accountable for training data violations. This marks a near tripling from 2022. The message is clear: enterprises are tightening contracts to reflect growing regulatory and reputational risk.

Greyhound Fieldnotes – Per a recent Greyhound Fieldnote from an advisory engagement with an EU-headquartered pharmaceutical major, contract talks with a U.S.-based AI vendor stalled over undisclosed use of Reddit and Twitter data. The vendor refused to certify GDPR alignment, and legal flagged the ambiguity as non-compliant. The deal progressed only after the vendor submitted to an independent audit and presented a redrafted training disclosure. What began as a rollout discussion became a governance checkpoint.

Analyst In Focus: Sanchit Vir Gogia

Sanchit Vir Gogia, or SVG as he is popularly known, is a globally recognised technology analyst, innovation strategist, digital consultant and board advisor. SVG is the Chief Analyst, Founder & CEO of Greyhound Research, a Global, Award-Winning Technology Research, Advisory, Consulting & Education firm. Greyhound Research works closely with global organizations, their CxOs and the Board of Directors on Technology & Digital Transformation decisions. SVG is also the Founder & CEO of The House Of Greyhound, an eclectic venture focusing on interdisciplinary innovation.

Copyright Policy. All content contained on the Greyhound Research website is protected by copyright law and may not be reproduced, distributed, transmitted, displayed, published, or broadcast without the prior written permission of Greyhound Research or, in the case of third-party materials, the prior written consent of the copyright owner of that content. You may not alter, delete, obscure, or conceal any trademark, copyright, or other notice appearing in any Greyhound Research content. We request that our readers not copy Greyhound Research content and not republish or redistribute it (in whole or in part) via email or in any media, including websites, newsletters, or intranets. We understand that you may want to share this content with others, so we’ve added tools under each content piece that allow you to share it. If you have any questions, please get in touch with our Community Relations Team at connect@thofgr.com.

