The European Commission has opened formal proceedings against X under the Digital Services Act (DSA), focusing on the integration of its generative AI tool, Grok, and the operation of its recommender systems in the EU. On paper, this is another DSA enforcement step against a designated Very Large Online Platform (VLOP). In substance, however, it may prove to be one of the first major tests of how the DSA governs generative AI embedded directly into platform infrastructure.

At the centre of the investigation lies the question: did X properly assess and mitigate the systemic risks created by deploying Grok at scale in the EU? If not, the legal and financial consequences could be significant, including fines of up to 6% of global annual turnover. But the deeper issue is structural rather than punitive. The global outrage surrounding Grok highlights the urgency of addressing the harms at stake and invites scrutiny of whether the DSA’s systemic risk framework is adequately calibrated to capture these emerging forms of AI-enabled platform harm.

From Reactive Moderation to Structural Responsibility

As a VLOP, X is subject to obligations under Articles 34 and 35 DSA. These provisions require platforms not merely to react to illegal content, but to identify, assess and mitigate systemic risks stemming from the design and functioning of their services. Those risks include the dissemination of illegal content, negative effects on fundamental rights, gender-based violence, harms to minors and serious consequences for users’ physical and mental well-being. Crucially, where new functionalities significantly alter a platform’s risk profile, the DSA requires an ad hoc risk assessment prior to deployment.

The Commission’s concern appears to be that Grok’s integration, including its text and image generation capabilities and its incorporation into recommender systems, materially changed X’s systemic risk landscape. If generative AI was introduced without a robust prior assessment of foreseeable harms, this would strike at the core logic of the DSA.

The shift embodied in the DSA is important. Under the earlier E-Commerce Directive model, liability largely hinged on notice and takedown: platforms were not liable for illegal content if they removed it expeditiously once notified. The DSA retains elements of that regime but adds a new layer of structural responsibility. It no longer suffices to remove harmful content after the fact; platforms must scrutinise how their design choices may generate or amplify harm in the first place.

Generative AI as a Foreseeable Risk Multiplier

Since 2024, Grok has been embedded into X in ways that enable users to generate text and images and to receive contextual AI-driven responses to posts. Reports have highlighted instances in which the tool generated or facilitated the dissemination of manipulated sexually explicit images, including deepfake-style content disproportionately targeting women and minors. Concerns have been raised that, in certain circumstances, such outputs could amount to illegal content under EU law, potentially even child sexual abuse material.

From a DSA perspective, the decisive issue is not whether every harmful output was intended or avoidable. It is whether the risks were foreseeable and whether they were adequately assessed and mitigated before large-scale deployment. The generation of non-consensual pornographic imagery is not an obscure or unpredictable edge case in the context of generative AI. It is a well-documented and widely discussed risk. That predictability matters legally.

The integration of generative capabilities into recommender systems compounds the issue. The DSA recognises that systemic harm is often a function not only of content creation but of amplification. Recommender systems shape visibility, engagement, and virality. If Grok-generated content can be surfaced, promoted, or algorithmically boosted, the risk is no longer confined to isolated outputs. It becomes structural.
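
To see why amplification, rather than generation alone, is the structural concern, consider a deliberately simplified toy model, written in Python. Every number and name here is invented for illustration and bears no relation to X’s actual systems; the point is only that a feed ranked purely by predicted engagement will hand synthetic content a share of impressions far beyond its share of uploads whenever that content provokes stronger reactions.

```python
import random

# Toy model only: 1,000 posts, of which 5% are synthetic. We assume (purely
# for illustration) that synthetic posts attract somewhat higher predicted
# engagement on average.
random.seed(0)
posts = (
    [{"synthetic": False, "engagement": random.gauss(1.0, 0.2)} for _ in range(950)]
    + [{"synthetic": True, "engagement": random.gauss(1.5, 0.2)} for _ in range(50)]
)

# An engagement-maximising ranker simply surfaces the highest-scoring posts.
feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)[:100]

share = sum(p["synthetic"] for p in feed) / len(feed)
print(f"Synthetic posts: 5% of uploads, {share:.0%} of the top-100 feed")
```

Under these invented assumptions, synthetic material ends up several times over-represented in what users actually see, which is precisely the interaction between generation and amplification that Articles 34 and 35 are meant to capture.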

The Commission’s extension of proceedings to examine X’s recommender systems, particularly in light of a shift toward a Grok-based model, reflects this concern. The DSA’s risk-based approach is designed precisely to capture such interactions between content generation and algorithmic amplification.

Deepfakes and the Gender Dimension

AI has exposed and amplified longstanding gender biases in technology, largely rooted in the predominance of men in computer science degrees, tech development teams, and among investors and entrepreneurs. Algorithmic and AI systems additionally reproduce and amplify errors and biases embedded in training datasets, a problem exacerbated by the cost of, and limited access to, high-quality training data (T. Margoni et al 2025). Consequently, AI outputs may make gender biases more visible and pervasive.

A central element of the controversy concerns the creation and dissemination of pornographic deepfakes, including both explicit images and videos of individuals engaged in sexual acts, alongside intimate material such as nude images. These forms of content have profound impacts on victims, who experience severe, interrelated psychological, social, and economic harms, such as reputational damage, anxiety, and depression (A. Diel et al 2025). These harms are, however, not evenly distributed. Women and girls are disproportionately targeted by non-consensual pornographic imagery, raising acute concerns relating to dignity, privacy, equality and protection against gender-based violence. Notably, over 95% of all AI-generated deepfakes circulating online are pornographic in nature, with almost all victims being female (Home Security Heroes 2024). Beyond harms to individuals, non-consensual pornographic deepfakes therefore also fuel misinformation and gender-based violence online.

The rapid and low-cost online dissemination of deepfakes, facilitated through online platforms such as X or Reddit, significantly exacerbates harms to individuals. Platforms amplify the content’s reach and exposure, worsening reputational and psychological impacts.

The ability to secure swift removals of such content is therefore crucial for victims. In this regard, the DSA’s liability and accountability rules are relevant in curbing the spread of deepfakes. Under Article 6, hosting providers lose their liability exemption where they have actual knowledge of illegal content and fail to remove or disable access to it expeditiously; sufficiently precise user notices or orders from national authorities can confer such knowledge. By imposing potential liability and sanctions, this approach strengthens incentives for platforms to ensure the swift removal of non-consensual deepfakes, thereby playing a key role in curbing the dissemination of such material.

Other DSA requirements can further help tackle the spread of non-consensual pornographic deepfakes. Article 14 requires hosting providers to disclose information on their content moderation policies and tools, while Article 16 mandates effective notice-and-action mechanisms through which users can report illegal content. Section 5 of the Regulation additionally imposes further due diligence obligations on VLOPs, explicitly requiring such platforms to consider effects on fundamental rights and gender-based violence when conducting systemic risk assessments and implementing mitigation measures. Importantly in this context, Recital 87 clarifies that VLOPs used for disseminating pornographic content must ensure that victims of non-consensual imagery can effectively exercise their rights through rapid processing of notices and removals, thereby effectively providing a “fast lane” for redress (N. Krack 2024).
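
For concreteness, the minimum content of a valid notice under Article 16(2) can be sketched as a simple data structure. The following is a purely illustrative Python sketch with invented field names; the DSA prescribes what a notice must contain, not any particular format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Article16Notice:
    """Illustrative sketch of the minimum elements of a DSA Art. 16(2) notice."""
    explanation: str               # substantiated reasons why the content is illegal (Art. 16(2)(a))
    exact_location: str            # exact electronic location, e.g. the URL(s) (Art. 16(2)(b))
    notifier_name: Optional[str]   # name and email of the notifier (Art. 16(2)(c)),
    notifier_email: Optional[str]  # which may be withheld for certain sexual-abuse offences
    good_faith_statement: bool     # confirmation of bona fide belief in accuracy (Art. 16(2)(d))

# Under Art. 16(3), a sufficiently precise and substantiated notice of this
# kind gives rise to the "actual knowledge" that removes the Article 6
# liability exemption if the platform then fails to act expeditiously.
```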

In principle, therefore, the DSA contains the conceptual tools to address AI-enabled sexual harms at scale. Whether those tools are applied rigorously and effectively enforced in practice is another matter. By merely harmonising procedural aspects of content moderation, the DSA may lack teeth and be unable to guarantee effective enforcement for victims. Victims are burdened with the time-consuming and often overwhelming task of submitting requests for each instance of non-consensual pornographic content online. Given the volume of requests to be processed by platforms, particularly large VLOPs, flagged content may not always be swiftly removed (see this US study on X). Moreover, even when material is taken down, it may be rapidly re-uploaded and proliferated across other platforms, undermining the effectiveness of removal efforts from a victim’s perspective.

Overall, while the DSA represents a significant step toward protecting victims of non-consensual deepfakes, it may be insufficient to fully address the scale and speed of these harms in practice.

Fragmentation and Centralisation

The Artificial Intelligence Act introduces transparency obligations for certain AI-generated content, which may assist moderation and traceability, but it does not centrally frame generative AI risks through a gender-based violence lens. The Gender-Based Violence Directive criminalises specific forms of non-consensual sexually explicit deepfakes, yet it does not cover all variants of intimate image manipulation and leaves Member States room to extend protection.

At the same time, the General Data Protection Regulation may apply where personal data are processed without a legal basis; copyright law may be implicated where source materials are misappropriated; national criminal law may intervene where thresholds of illegality are met; and national personality and image rights regimes may apply where individuals’ likenesses or personal attributes are depicted without consent. Deepfake harms therefore sit at the intersection of multiple regimes, none of which is fully comprehensive on its own (N. Krack 2022).

The Commission is not alone in probing Grok. French authorities have reportedly conducted a raid on X’s Paris office. In the United Kingdom, Ofcom, acting under the Online Safety Act, and the Information Commissioner’s Office (ICO) have initiated investigations. Ireland’s Data Protection Commission has opened a probe into Grok’s sexual AI imagery, and civil society organisations have sought urgent judicial relief in Amsterdam.

The Grok case exposes these regulatory fault lines in practice. The non-consensual generation and circulation of sexualised deepfakes do not fit neatly within a single legal category. Instead, they simultaneously trigger platform governance under the DSA, data protection scrutiny concerning unlawful processing of personal and biometric data, potential criminal liability for image-based abuse, and, in some cases, media or copyright concerns. Each regime isolates a different dimension of harm, yet none captures the phenomenon in its entirety. The result is not merely regulatory overlap, but structural fragmentation in the face of a technologically integrated harm.

Within the EU, the DSA attempts to mitigate fragmentation by centralising oversight of VLOPs in the hands of the European Commission, as the EU-wide meta-regulator. Once formal proceedings are opened, national Digital Services Coordinators are relieved of competence in relation to the suspected infringements. The Grok case therefore tests whether this centralised model can deliver consistent and effective enforcement, particularly where AI tools blur the boundaries between content moderation, algorithmic design and product development. This has important implications for the future of AI and data regulation, which has seen the proliferation of national competent bodies and authorities (see here and here).

Despite fragmentation and the recent national regulatory proposals targeting deepfakes, there are nonetheless ample legal bases for action. The key challenge rather remains ensuring effective enforcement of existing regulatory regimes to meaningfully reduce harms to victims (N. Krack 2024; S. Karttunen 2026), particularly as regards the timely and permanent removal of such content from online platforms. Due to the inherently cross-border nature of deepfakes online, policy efforts should be directed towards strengthening international cooperation and existing enforcement mechanisms to better protect victims.

A Defining Moment for AI-Enabled Platforms

As generative systems become embedded into platform architecture, the line between “content” and “infrastructure” begins to blur. AI tools no longer merely host or rank speech; they help produce, reshape and amplify it. When integrated into recommender systems, search, advertising or creator tools, generative models can alter the dynamics of visibility and virality at scale.

For other VLOPs, the signal is clear. AI-powered summarisation tools, synthetic image generators, conversational assistants or AI-curated feeds may all materially change a platform’s risk profile. If they affect how illegal or harmful content is generated or amplified, they likely trigger renewed obligations to assess and mitigate risks to fundamental rights, minors and gender equality. Risk assessment cannot be treated as a static compliance formality; it must be integrated into product development when AI systems are deployed or significantly updated.
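
What integrating risk assessment into product development could mean in engineering terms can be sketched, purely hypothetically, as a release gate: a rollout is blocked until every systemic risk category listed in Article 34 has a documented mitigation. The Python below is an invented illustration, not a description of any platform’s actual process; the risk categories mirror those named earlier in this piece.

```python
from dataclasses import dataclass, field

# The Art. 34 systemic risk categories, as summarised above.
SYSTEMIC_RISKS = (
    "illegal_content",
    "fundamental_rights",
    "gender_based_violence",
    "protection_of_minors",
    "physical_and_mental_wellbeing",
)

@dataclass
class RiskAssessment:
    feature: str
    mitigations: dict[str, str] = field(default_factory=dict)  # risk -> documented mitigation

    def gaps(self) -> list[str]:
        """Risk categories for which no mitigation has been documented yet."""
        return [risk for risk in SYSTEMIC_RISKS if risk not in self.mitigations]

def may_deploy(assessment: RiskAssessment) -> bool:
    # Block the rollout until every risk category has a documented mitigation.
    return not assessment.gaps()

# A new generative feature with no completed assessment cannot ship.
new_feature = RiskAssessment(feature="generative image replies")
assert not may_deploy(new_feature)
print("Outstanding risk categories:", new_feature.gaps())
```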

If the Commission pursues the case against X rigorously, it could clarify that embedding generative AI into core functionalities automatically triggers further risk assessment under the DSA. If enforcement remains narrow or superficial, the credibility of the DSA as a forward-looking governance framework may suffer. Either way, the Grok investigation is a pivotal test of whether Europe’s digital rulebook (which is under revision) can keep pace not only with platform scale, but with the speed and transformative force of AI integration.
