Europe’s Digital Rulebook Is Being Rewritten
And the consequences will shape the next decade of AI, privacy, and power
The European Commission’s Digital Omnibus package is the most significant shift in EU digital governance since the GDPR. Behind the neutral language of “simplification” sits a profound political recalibration: a deliberate loosening of AI and data protections that only a few years ago were held up as the global gold standard.
At the heart of the proposal is a pair of moves that would have been unthinkable during the passage of the AI Act. First, high-risk AI obligations are pushed back to December 2027. Second, companies are granted sweeping permission to use personal data, including sensitive categories, under the banner of “legitimate interest”. That dramatically reduces the need for consent and softens one of the GDPR’s core protections. Civil society groups warn this could open the door to profiling and discrimination on a scale previously constrained by European law.
Equally stark is the ability for firms to simply declare that their high-risk AI systems are low risk. That bypasses mandatory registration and transparency requirements that were designed to keep the public and regulators informed about where powerful systems are being deployed. Removing this accountability mechanism rewires the intent of the AI Act itself. What was meant to be a predictable, rules-based framework becomes a self-assessment exercise.
The Commission frames the package as regulatory hygiene. Much of the narrative focuses on reducing red tape, rationalising overlapping instruments, and easing burdens on SMEs. Some of this is legitimate. Europe’s regulatory architecture has become dense, sometimes contradictory, and occasionally slow to adapt. But this revision goes far beyond tidying up the edges. It is a strategic pivot.
Two forces sit behind it. One is geopolitical competition. Large US tech companies lobbied aggressively, arguing that the AI Act and GDPR were anti-innovation, and their case was reinforced by Mario Draghi's 2024 report, which bluntly warned that European competitiveness was deteriorating under the weight of regulation. The other is internal political pressure. Faced with slow growth and lagging AI capability, officials are increasingly convinced that Europe risks falling further behind without regulatory relief.
Yet the backlash is equally strong. More than 120 organisations, including Amnesty International and Noyb, describe the package as the largest rollback of digital rights in EU history. They argue it risks normalising mass data processing without consent, weakening key GDPR principles like data minimisation and accountability, and enabling more intrusive AI practices such as biometric surveillance. These are not fringe voices; they are the same organisations that helped shape the original privacy framework.
The proposals still require parliamentary and member-state approval. That means nothing is final. But the direction of travel is unmistakable. Europe is shifting from a rights-first model to a growth-first model. The question is whether that shift is temporary pragmatism or the start of a permanent rebalancing of European digital governance.
The trade-off is now unavoidable. Europe wants to compete in AI and attract investment. But it also built the most protective digital rights regime on the planet. Reconciling those two ambitions was always going to be difficult. The Digital Omnibus package makes clear which side the current Commission believes must give way.
As AI becomes the infrastructure of modern life, this debate will define Europe’s technological identity:
A continent that once set the rules for the world is now deciding how many of them it is willing to keep.