The Irish Data Protection Commission (DPC) has initiated a formal inquiry into how personal data from publicly accessible posts on the X social media platform is processed for artificial intelligence training.
The investigation specifically targets the use of EU and European Economic Area (EEA) users' data for developing the Grok large language models (LLMs).
Grok, developed by Elon Musk's xAI company, powers an AI chatbot integrated directly into the X platform. The DPC's investigation centres on personal information derived from public posts made by European users, which was previously controlled by X Internet Unlimited Company (XIUC). This entity, formerly Twitter International Unlimited Company, rebranded on 1 April 2025 as part of Musk's ongoing platform transformation.
Expanding Regulatory Authority Under the AI Act
The inquiry takes on additional significance given the DPC's expanded role under the EU Artificial Intelligence Act. As Ireland's designated competent authority and fundamental rights authority under that landmark legislation, the DPC now holds substantial powers and responsibilities beyond traditional data protection: monitoring compliance with AI-specific rules, enforcing obligations on technology companies, and safeguarding fundamental rights potentially affected by AI systems.
This case represents one of the first major tests of the DPC's enhanced mandate and could establish important precedents for implementing AI regulation across the European Union.
Data Protection in the Age of Generative AI
Large language models like Grok require massive datasets for training, raising novel questions about consent and data usage. The investigation highlights the growing tension between rapid AI development and European data protection principles, particularly whether public posts may be treated as freely available training data or whether explicit consent is required to repurpose them.
Transatlantic Tensions and Corporate Consequences
This investigation occurs against a backdrop of heightened transatlantic tensions over tech regulation. The Trump administration has frequently criticised the EU's regulatory approach toward American tech companies, characterising hefty fines against US tech giants as discriminatory and akin to additional taxation on American enterprises.
Elon Musk, an influential adviser to President Trump, has similarly expressed concerns about European policies he views as restrictive to online speech and innovation. This inquiry could reignite tensions between Brussels and Washington, complicating ongoing trade discussions as the US seeks lighter regulatory burdens for American technology firms operating in Europe.
Historical Context and Potential Outcomes
The inquiry follows earlier legal proceedings in which the DPC sought restrictions on X's processing of EU user data for AI development. Those proceedings concluded after X agreed to stop processing personal data collected without explicit user consent. The DPC has established itself as a significant regulatory force, having previously issued nearly €3 billion in fines to Meta, whereas X had faced only a single, relatively minor €450,000 fine, issued in 2020.
Broader Implications for AI Governance
As nations and regional blocs develop divergent approaches to regulating artificial intelligence, companies with global operations face increasing compliance challenges.
For X and other social media platforms, the investigation underscores the need to carefully consider data governance strategies and potentially adopt region-specific approaches to AI development that respect local regulatory frameworks.
As this investigation unfolds, it will likely provide valuable insights into how European regulators intend to balance innovation with the protection of fundamental rights in the rapidly evolving field of artificial intelligence.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.