1. Background

As the European Union’s digital rulebook continues to expand rapidly, EU institutions have intensified their efforts to streamline and better coordinate existing legislation – a trend reflected, most recently, in the European Commission’s proposal for a Digital Omnibus Regulation.

Against this backdrop, a study commissioned by the European Parliament’s Committee on Industry, Research and Energy (the “Study”)[1] examines how the AI Act interacts with the broader EU digital regulatory ecosystem. Rather than providing a simple inventory of overlaps, its purpose is to assess whether the combined application of these frameworks produces uncertainty, duplicative obligations or uneven burdens that could fragment the internal market. Taking a forward-looking approach, the Study offers a set of short-, medium- and long-term recommendations on how to reduce regulatory friction while safeguarding the Union’s commitment to rights, trust and safety.

Indeed, while the AI Act has quickly assumed the role of a cornerstone in the European Union’s governance of artificial intelligence, it remains only one element within a much broader regulatory ecosystem that includes, among others, the General Data Protection Regulation (GDPR), the Data Act, the Digital Services Act (DSA), the Digital Markets Act (DMA), the Cyber Resilience Act (CRA) and a wider set of EU digital policy frameworks. Each of these instruments was designed with a distinct policy objective, yet when viewed collectively they raise a more complex question: whether this expanding body of legislation operates as a coherent system or whether its cumulative effect risks placing Europe’s AI ecosystem at a competitive disadvantage.

2. Regulatory logic of the AI Act and emerging tensions

The AI Act is structured around a risk-based approach: it prohibits certain AI practices, imposes detailed obligations on AI systems based on their “risk category”, introduces transparency duties for specific use cases and establishes a dedicated regime for general-purpose AI models, including those with systemic impact. Much of this architecture draws on the European Union’s product safety framework, yet it also incorporates newer concepts such as fundamental rights impact assessments, traceability obligations and post-market monitoring.

As the Study notes, this combination gives rise to several tensions. Extending product safety tools to areas such as fundamental rights protection is not straightforward, as these domains rely on more qualitative and context-dependent assessments. The regulation also applies to both providers and deployers, creating chains of responsibility that may be difficult to navigate for SMEs or non-specialist users.[2] The classification of high-risk systems, based on annexes and partially subjective assessments of harm potential, can lead to ambiguity and divergent interpretations across Member States. And although the AI Act includes some pro-innovation mechanisms, the overall framework remains weighted towards risk mitigation, with prescriptive requirements that can become demanding when combined with obligations arising from other digital laws.

3. Strategic implications of the broader EU digital framework

The Study then widens the lens. When the AI Act is assessed alongside other regulations such as the GDPR, the DSA, the DMA, the Data Act and the CRA, the issue is no longer whether each individual obligation is justified, but how these frameworks interact when applied simultaneously to the same actors and use cases. In fact, companies often find themselves navigating several parallel assessment and governance processes, many of which address comparable risks but rely on different procedural logics. Managing these overlapping obligations can complicate development cycles, slow the deployment of new systems and narrow the space for experimentation, while also creating differences in how compliance is approached across the Union.

According to the Study, this cumulative burden is felt unevenly across the market. Startups and SMEs, which typically have limited capacity to absorb complex regulatory obligations, are more exposed to these pressures. By contrast, larger firms, typically equipped with more structured compliance functions and broader operational resources, tend to be better placed to manage such obligations. However, this difference does not diminish the importance of robust safeguards; rather, it underscores the need to ensure that regulatory frameworks support innovation across the full spectrum of actors in the European AI landscape.

4. Recommendations for a more integrated digital framework

To address these challenges, the Study proposes a set of recommendations designed to introduce greater coherence into the European Union’s digital regulatory landscape.

In the short term, it calls for closer cooperation among supervisory authorities and for clearer, joint guidance on how overlapping obligations should be interpreted. It also suggests using mutual recognition mechanisms, for example by allowing a well-documented data protection impact assessment to satisfy part of the analysis required for a fundamental rights impact assessment. Greater harmonisation of sandbox procedures would likewise help reduce fragmentation across Member States.

In the medium term, the Study recommends targeted legislative adjustments to clarify roles within the AI value chain, streamline overlapping obligations in areas such as fundamental rights and cybersecurity, and reinforce mechanisms that ensure the effective exercise of individual rights in AI-mediated environments.

Looking further ahead, the Study invites the Union to reflect on the structure of its digital regulatory architecture as a whole. Consolidation and simplification, together with a clearer strategic alignment between instruments, could help create a framework that preserves the Union’s constitutional values while offering more agile and predictable pathways for innovation.

5. The AI Act–DSA interface

The Study pays particular attention to the intersection between the AI Act and the DSA, noting that online platforms increasingly rely on AI systems to structure, filter and moderate user activity. As AI becomes embedded in recommender systems, content moderation workflows, detection tools and risk assessment processes, the two frameworks inevitably converge. The Study stresses that this relationship is not accidental: the DSA regulates the services through which information circulates, while the AI Act governs many of the systems that make those services function.

A first point of interaction concerns the area of risk assessment. Under the DSA, very large online platforms (VLOPs) and very large online search engines (VLOSEs) must identify, analyse and mitigate systemic risks linked to their services, including the dissemination of illegal content, the spread of disinformation and the effects of algorithmic amplification. At the same time, the AI Act requires providers and deployers of certain systems to carry out risk management procedures and, where applicable, fundamental rights impact assessments. According to the Study, these two sets of obligations operate in adjacent domains but follow different methodologies, raising practical questions about whether, and to what extent, one assessment may satisfy the requirements of the other. This lack of alignment is a potential source of friction for platforms that rely heavily on automated tools.

A second area where the two instruments intersect is transparency. The DSA imposes detailed obligations on platforms to explain the functioning of their recommender systems and to provide users with meaningful information about how content is curated. The AI Act, by contrast, focuses on technical transparency, requiring documentation, data governance, logging and traceability for certain AI systems. The Study notes that, in practice, platforms may have to prepare separate sets of documents to comply with both frameworks, even when the underlying system is the same. This risks generating duplicated work and, in some cases, inconsistent disclosures.

Content moderation provides a further point of interaction. Many of the automated tools used to detect or prioritise illegal or harmful content fall within the scope of the AI Act, particularly when they rely on general-purpose or high-risk AI components. At the same time, the DSA requires VLOPs and VLOSEs to implement robust processes to mitigate systemic risks, many of which rely extensively on AI-enabled moderation and detection systems. The Study points out that platforms must therefore ensure that their moderation tools are sufficiently effective to meet DSA standards while also complying with the technical and organisational safeguards mandated by the AI Act.

Finally, the Study highlights a governance challenge. The AI Act establishes an oversight structure centred on the AI Office, national supervisory authorities and notified bodies, while the DSA relies on Digital Services Coordinators and, for VLOPs and VLOSEs, on the direct supervisory powers of the European Commission. Although neither instrument formally requires cross-regime coordination, the Study identifies a concrete risk of divergent oversight in practice. This is because AI systems used by platforms for functions such as content moderation, recommender systems or systemic risk mitigation may simultaneously fall under DSA obligations and the AI Act’s risk-based requirements.

6. Towards a more integrated AI Act–DSA framework

Through this lens, the interplay between the AI Act and the DSA illustrates a broader challenge identified by the Study: the two instruments were designed to pursue distinct policy objectives and operate through different regulatory logics, yet they increasingly converge in practice as AI systems become integral to core platform functions. This convergence makes it essential to ensure clear guidance and workable compliance processes for operators subject to both regimes.

In the short term, improved coordination and shared guidance could ease the practical burdens faced by platforms that must comply with both regimes. Over the medium term, targeted legislative clarifications may be necessary to reconcile overlapping obligations, particularly where AI systems underpin essential DSA functions. In the longer run, the Study invites the Union to reflect more deeply on the structure of its digital governance architecture as a whole, with a view to greater coherence, simplification and strategic consolidation. The AI Act–DSA interface thus becomes a compelling example of where this evolution is most urgently needed, and where a more integrated regulatory design could meaningfully strengthen the effectiveness and predictability of EU digital policy.

7. The reverse perspective: evaluating the DSA’s interaction with other EU digital laws

A further dimension to this discussion emerges from a recent development at EU level. On 17 November 2025, the European Commission published its Report on the application of Article 33 of the DSA and the interaction of that Regulation with other legal acts (the “Report”), covering more than fifty legislative instruments across areas such as data protection, audiovisual media, consumer protection, intellectual property and cybersecurity. The Commission concluded that, in most cases, the DSA and surrounding frameworks operate in mutually reinforcing ways, either by building on one another or by applying in parallel. At the same time, the Report identified a number of provisions where regulatory overlaps would benefit from closer coordination in order to ensure clarity and consistency of application.

The Commission also assessed the designation threshold for VLOPs and VLOSEs, confirming that the current criteria remain appropriate in a rapidly evolving digital environment. Taken together, these findings echo several of the concerns raised in the Study, particularly the need for greater alignment between horizontal and sectoral rules. They also illustrate a complementary analytical path: whereas the Study examines the AI Act through the lens of the wider digital regulatory landscape, the Commission approaches the issue from the opposite direction by evaluating how the DSA interfaces with other EU laws.

Significantly, the Report confirms that the AI Act is among the EU legal acts most frequently associated with potential overlaps or ambiguities in the application of the DSA. In particular, it notes that this complex interplay affects domains such as dark patterns, recommender systems, product safety and content moderation – all areas in which both the DSA and the AI Act establish parallel obligations and supervisory mechanisms. As a result, stakeholders repeatedly stressed the need for clearer operational guidance to determine the respective scope and precedence of each framework, warning that fragmented duties, inconsistent enforcement and uncertainty in the prioritisation of rules risk undermining the effectiveness of both instruments.

Taken together, the perspectives offered in the Study and in the Report converge on a shared message: as the EU’s digital rulebook continues to expand, ensuring coherence, coordination and clarity across instruments will be essential to maintaining legal certainty and effective enforcement.

 

[1] Hans Graux, Krzysztof Garstka, Nayana Murali, Jonathan Cave and Maarten Botterman, Interplay between the AI Act and the EU Digital Legislative Framework (Policy Department for Transformation, Innovation and Health, Directorate-General for Economy, Transformation and Industry, Study PE 778.575, October 2025).

[2] Ibid., 8.
