
The AI Omnibus: what is settled and what is still in play

At a glance

A legislative package, known as the "AI Omnibus", is currently under negotiation in Brussels and is expected to be finalised by July 2026. It proposes a small set of targeted amendments to the EU AI Act, which otherwise remains in effect. The amendments include extending compliance deadlines for high-risk AI systems and certain transparency requirements, and introducing a ban on harmful synthetic content. This article examines six key areas under consideration, what appears largely settled, and next steps.

Regardless of the final outcome, organisations should prioritise developing robust approaches to risk-tiering their AI systems and documenting their governance, risk management, and controls frameworks. These capabilities will be critical for ensuring compliance with existing obligations and for implementing the detailed standards and guidance as they become available.

Important note: This article reflects the state of play as of 30 March 2026. The AI Omnibus is still being negotiated, and the final text may differ from what is described here.

What is the AI Omnibus, and what is it not?

The EU AI Act came into force in August 2024 with multiple phased implementation deadlines, a number of which are already applicable. The prohibitions of the most dangerous AI practices have been applicable since February 2025. The rules for providers of general-purpose AI (GPAI) models followed in August 2025. The next, and most extensive, set of obligations, covering high-risk AI systems and transparency requirements, is due to take effect in August 2026.

However, late last year it became apparent that the supporting ecosystem of technical standards, guidance and supervisory infrastructure was not going to be ready in time for the August 2026 deadline.

Rather than letting the obligations take effect before both organisations and regulators are equipped to meet them, the Commission proposed the AI Omnibus. The proposal makes a set of targeted amendments: pushing back selected compliance timelines, clarifying some supervisory arrangements, and refining a limited number of requirements to make them more proportionate and more supportive of innovation. In response to recent market developments, it is now also likely to introduce a new ban on AI-generated intimate imagery without consent.

Since the Commission published the AI Omnibus proposal in November 2025, both the EU Parliament and member states’ governments (i.e. the EU Council) have set out their separate views on what the final text should look like. In April, the three bodies will begin negotiations to finalise the text, aiming to reach an agreement by May or June 2026.

This timeline would allow the AI Omnibus to become law before the AI Act's original August 2026 deadline takes effect (see Figure 1). This is crucial for giving organisations legal certainty about their compliance deadlines. This article examines six key areas addressed by the AI Omnibus.

Before doing so, however, it is important to emphasise that the AI Omnibus does not repeal the AI Act or fundamentally alter the core requirements. The AI Act’s original risk-based classification system, existing prohibited practices, transparency obligations, and requirements for high-risk AI systems remain unchanged. [See EU AI Act: Forging a strategic response]


Figure 1 - AI Omnibus finalisation timeline (estimated)

High-risk AI systems: compliance deadlines are set to move back

This is one of the most significant changes in the package. The AI Act imposes detailed requirements on high-risk AI systems, such as those used in recruitment, credit scoring, law enforcement, healthcare, product safety, or critical infrastructure. These extensive requirements cover areas including data quality and management, technical documentation, human oversight, and accuracy and robustness.

Under the current timetable, they are due to take effect on 2 August 2026 for most high-risk AI systems (Annex III), and 2 August 2027 for high-risk AI systems embedded in products already covered by other EU safety legislation (Annex I). These deadlines are now almost certain to be extended, and there is broad consensus on the new dates (see Figure 2).


Figure 2 - Changes to compliance deadline for high-risk AI systems

The expectation is that this delay will provide sufficient time for the technical standards currently being developed by European Standardisation Organisations1 to be finalised. However, it remains unclear what happens if they are not ready in time, which is still a possibility given the slow pace of progress so far.

The Commission has the legal power to issue its own technical specifications as a fallback, alongside AI Act guidelines. No such contingency plans have been set out so far, but this is an area that warrants monitoring. Further delays to the compliance deadlines themselves, however, are highly unlikely, in our view.

KEY TAKEAWAY

The requirements for high-risk AI systems are not changing. Only the timeline is shifting. Organisations should treat this additional time as an opportunity to build effective AI governance and compliance frameworks that will better position them to implement emerging standards and guidance, and to be ready when the obligations take effect.

Short grace period for marking and labelling AI-generated content

The AI Act sets out transparency rules to ensure people know when they are interacting with AI or viewing AI-generated or manipulated content. The simpler requirements, such as informing users that they are interacting with AI, remain on track, and will be applicable from August 2026.

The AI Omnibus proposes to adjust the deadline for a more significant and technically challenging obligation: marking AI-generated content in a machine-readable format so that it can be more easily detected. This could involve, for example, embedding digital watermarks or metadata that automated systems can read, rather than just attaching a visible label.
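To make the machine-readable requirement concrete, below is a minimal sketch of one possible approach: embedding a provenance tag in PNG metadata using Pillow. The field names are hypothetical, and production systems are more likely to adopt emerging provenance standards and watermarking that survive re-encoding, so treat this as an illustration of the concept rather than a compliant implementation.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(input_path: str, output_path: str, generator: str) -> None:
    """Embed an illustrative machine-readable provenance tag in a PNG's metadata."""
    image = Image.open(input_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # hypothetical field name
    metadata.add_text("generator", generator)  # e.g. the model or tool used
    image.save(output_path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    """Read the illustrative tag back, as an automated detector might."""
    return Image.open(path).text.get("ai_generated") == "true"
```

A visible label can be lost to a simple screenshot, and plain metadata is stripped on re-encoding; this is why the obligation targets machine-readable, robust marking, and why the forthcoming Code of Practice (discussed below) matters.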

The delay will only apply to AI systems already on the market before 2 August 2026. Any new generative AI product launched on or after that date will need to comply from day one, a distinction that has direct implications for product release planning.

The final deadline for existing systems has not been settled, but negotiations currently point to a date between November 2026 and February 2027.


Figure 3 - Changes to compliance deadline for marking and labelling AI-generated content

To support this requirement, the Commission’s AI Office is developing a voluntary Code of Practice on the marking and labelling of AI-generated content. This Code aims to provide organisations with practical guidance on complying with these transparency rules. A second draft was published in March, with the final version expected by mid-2026. Although voluntary, the Code is likely to become the benchmark against which regulators will assess compliance in practice.

KEY TAKEAWAY

Whichever date prevails, it is not far off, and implementing the technical requirements will take time and effort. While existing AI systems may receive a limited extension, new generative AI products launched after 2 August 2026 will need to comply from day one. Organisations should therefore treat content marking as an active priority and integrate compliance deadlines into their go-to-market plans.

A new ban on AI-generated intimate imagery without consent

There is now strong support to introduce a new outright ban on AI systems that create or manipulate sexually explicit images of real, identifiable people without their consent. This covers so-called “nudification” tools and other forms of deepfake intimate imagery. The ban is also set to cover AI-generated Child Sexual Abuse Material (CSAM).

The issue has gained particular political urgency following widely reported incidents in which mainstream AI tools were exploited to generate non-consensual imagery. In response, the Commission launched a formal investigation into a Very Large Online Platform (VLOP) under the Digital Services Act (DSA).

This new prohibition would sit in the AI Act’s most serious category of banned AI practices, carrying the highest available penalties: fines of up to €35 million or 7% of global annual turnover, whichever is higher. An exception is expected for AI systems with effective technical safeguards that prevent misuse. The details of what these safeguards entail have not been fully defined yet. Early indications suggest it will not be sufficient to have safety measures in place at launch; safeguards must reliably prevent misuse and remain effective on an ongoing basis.
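As a quick worked illustration of how that penalty cap scales with company size (the figures below are hypothetical):

```python
def max_prohibited_practice_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for prohibited AI practices: the higher of
    EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a firm with EUR 2 billion global turnover, 7% is EUR 140 million,
# which exceeds the EUR 35 million floor.
print(f"{max_prohibited_practice_fine(2_000_000_000):,.0f}")  # 140,000,000
```

For smaller firms the €35 million floor binds; for large firms the 7% turnover cap dominates.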

The timeline for the new prohibition to take effect has not been fully settled, but some proposals suggest a February 2027 deadline.

KEY TAKEAWAY

Given the strong support across all EU institutions, this ban is likely to feature in the final AI Omnibus. Organisations developing, deploying, or distributing generative AI systems capable of producing realistic imagery of people should review their safety measures to ensure they consistently prevent misuse, both pre- and post-market. They must also ensure they have clear procedures and adequate resources to promptly investigate and act on misuse reports.

The AI Office: a bigger remit, boundaries still being drawn

The Commission’s AI Office already serves as the EU-level supervisor for providers of GPAI models under the AI Act. For AI systems, however, supervision remains largely the responsibility of national regulators in each EU member state. The AI Omnibus proposes to change this, expanding the AI Office’s role to take on supervision of certain AI systems at EU level.

Under current proposals, two categories of AI system would come under the AI Office’s direct oversight:

  1. AI systems that qualify as, or are embedded within, designated VLOPs and Very Large Online Search Engines (VLOSEs) under the DSA. For instance, AI systems used to drive content recommendations on a social media platform or to generate AI overviews in search results.
  2. AI systems built on GPAI models from the same organisation or group. For example, a company might offer a GPAI model and sell a distinct AI system built on it to banks, retailers, or other businesses. Currently, the AI Office would supervise the GPAI model, while national regulators in buyers' countries would supervise the AI system. Under the AI Omnibus proposal, supervision of both the GPAI model and system would fall to the AI Office.

There is support for these proposals, in principle. The key outstanding question is where to draw the boundaries of the AI Office’s remit and how its powers will interact with those of national authorities.

Member states’ governments have also proposed that certain sensitive sectors be carved out from EU-level supervision. For example, AI systems used by financial institutions, law enforcement agencies, and border management authorities, amongst others, could remain under the oversight of their existing national regulators.

There is also an ongoing discussion about the extent to which national regulators should retain a role in supervising AI systems within VLOPs and VLOSEs alongside the AI Office.

KEY TAKEAWAY

For providers of GPAI models and systems, centralised EU supervision could simplify regulatory engagement, replacing multiple national regulators with a single point of contact. However, organisations purchasing and deploying AI systems will need to consider how the AI Office’s supervisory approach applies to their systems, and what it means for their AI Act compliance and vendor management. In some cases, particularly in carved-out sectors, organisations may need to engage with both national regulators and the AI Office. All organisations should monitor the final boundaries as negotiations conclude.

Registration of lower-risk AI systems in high-risk areas: obligation likely to be simplified, not removed

Under the AI Act, AI systems used in areas such as recruitment, credit scoring, or access to other essential services are designated as high-risk. However, not all AI systems in these areas will necessarily qualify. Organisations can assess their AI systems and conclude that they fall below this threshold. This would be the case, for example, if the system performs only a narrow procedural task, such as document sorting, and does not directly shape individual decisions. AI systems below the threshold do not need to meet the full high-risk requirements. They do, however, still require registration in a public EU database.

The Commission wanted to eliminate the registration obligation altogether, arguing it imposes an unnecessary burden on organisations. However, negotiations currently indicate the registration requirement will remain, albeit in a simplified form.

What is not being eased is the assessment itself. Organisations operating in high-risk areas will still need to determine whether their AI systems meet the threshold, and to document that reasoning thoroughly.
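To illustrate what a well-documented assessment might look like in practice, below is a minimal sketch of a structured assessment record. The derogation grounds are loosely modelled on Article 6(3) of the AI Act, but all field names and logic here are hypothetical and are no substitute for legal analysis.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative derogation grounds, loosely based on Article 6(3) of the AI Act;
# the exact legal criteria should be taken from the Act itself.
DEROGATION_GROUNDS = {
    "narrow_procedural_task": "Performs only a narrow procedural task",
    "improves_prior_human_activity": "Improves the result of a completed human activity",
    "pattern_detection_only": "Detects decision patterns without replacing human assessment",
    "preparatory_task": "Performs a preparatory task to a relevant assessment",
}

@dataclass
class HighRiskThresholdAssessment:
    """Hypothetical record for documenting why an Annex III system
    is assessed as falling below the high-risk threshold."""
    system_name: str
    annex_iii_area: str   # e.g. "recruitment", "credit scoring"
    grounds: list[str]    # keys from DEROGATION_GROUNDS
    rationale: str        # the evidence a regulator would query
    assessed_by: str
    assessed_on: date = field(default_factory=date.today)

    def is_below_threshold(self) -> bool:
        # At least one recognised ground must apply, with supporting rationale.
        return bool(self.grounds) and all(g in DEROGATION_GROUNDS for g in self.grounds)

assessment = HighRiskThresholdAssessment(
    system_name="CV document sorter",
    annex_iii_area="recruitment",
    grounds=["narrow_procedural_task"],
    rationale="Sorts incoming CVs by file type and completeness; does not rank or score candidates.",
    assessed_by="AI governance team",
)
assert assessment.is_below_threshold()
```

The value of a structured record like this is auditability: the grounds relied on, the supporting rationale, and the sign-off are captured in one place.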

KEY TAKEAWAY

While a simpler registration form will reduce administrative burden, the underlying assessment remains unchanged and is the critical compliance step. If a regulator queries why an organisation assessed an AI system as falling below the high-risk threshold, the rigour and evidence behind that assessment will matter. Organisations should therefore ensure their assessment processes are robust, well-documented, and defensible.

General AI literacy obligation: likely to be softened, but not for high-risk systems

The AI Act currently places a general obligation on all providers and deployers of AI systems to ensure a sufficient level of AI literacy in their organisations (Article 4). This obligation has been in effect since February 2025. However, the requirement is broadly phrased, and many stakeholders have argued that it is too imprecise to be meaningfully implemented or enforced.

The Omnibus is likely to at least soften the general AI literacy requirement. Current discussions range from replacing organisations’ duty to “ensure” AI literacy with a lighter obligation to “support the improvement of” it, to removing the obligation altogether. The Commission could also be tasked with issuing practical guidance, and governments could take on a larger role in promoting AI skills across the economy.

However, it is important to distinguish this general AI literacy obligation from the separate, more specific requirements that apply to high-risk AI systems. The AI Act’s high-risk framework includes detailed obligations around human oversight, and staff competence and training for individuals involved in operating or overseeing high-risk AI systems. These requirements are not affected by the Omnibus and will apply in full once the high-risk deadlines take effect.

KEY TAKEAWAY

Any softening of the general AI literacy duty does not diminish the need for organisations to invest in AI skills and competence. For those operating high-risk AI systems, specific obligations remain firmly in place. Beyond the AI Act’s legal requirements, without an appropriate level of AI literacy, organisations will not be able to deploy AI responsibly, compliantly, and in line with their risk appetite.

Next steps for the AI Omnibus

Final negotiations on the AI Omnibus will start in earnest in April, with all sides aiming to reach agreement before the summer. Whether that timetable holds will depend on how quickly the remaining points of disagreement can be resolved.

But negotiations have moved at pace so far and, barring any late unforeseen issues, the AI Omnibus, with its extended compliance timelines, is likely to become law before August 2026. We will provide further analysis as the shape of the final text becomes clear.


Figure 4 - AI Act Timelines: current vs. AI Omnibus proposed amendments

Footnotes:

1. Once finalised and approved by the Commission, these will be known as 'harmonised standards'. Although industry-led and voluntary, once their reference is published in the EU Official Journal, conformity with the harmonised standards will provide a presumption of compliance with the relevant obligations of the AI Act.