Everywhere at Once: The Digital Omnibus and the Missing Protection for Workers


When Marshall McLuhan said that in the electric age “you have to be everywhere at once,” he was describing a world in which technology dissolves boundaries and forces constant participation. Workers today inhabit precisely this condition: their data moves across systems they cannot see; their actions are continuously monitored; and their digital traces circulate independently of them. In such an environment, individuals cannot meaningfully control where their data goes or how it is used. If that is the context, deregulating the use of technology and data is hardly a neutral approach.

This deregulatory approach sits at the heart of the European Commission’s Digital Omnibus proposals. Both the horizontal package revising the wider digital regulatory landscape and the dedicated Omnibus on Artificial Intelligence (AI) are presented as technical adjustments designed to “simplify,” “streamline,” and “reduce burdens.” Yet the substance points in a different direction: deregulation in the name of competitiveness, often at the expense of privacy, workers’ rights, and fundamental rights. In this blogpost I analyse several of the amendments with implications for employment and worker protection. The Commission claims to maintain high standards of protection while simultaneously enabling a more frictionless economy of personal data. Its own explanatory memorandum concedes that the current framework is perceived as an obstacle to global AI competitiveness, and proposes to remove bureaucratic hurdles while insisting that personal data and fundamental rights will remain protected. The tension between these contrasting objectives is not incidental; it is structural, and the “Everywhere at Once” approach, I argue, shapes the whole design of the Omnibus proposals. The present analysis is divided into two parts. In Part 1, I analyse the Digital Omnibus Proposal and its impact on the General Data Protection Regulation (GDPR). In Part 2, I analyse the Digital Omnibus on AI and its impact on the Artificial Intelligence Act (AI Act).

At the end of this analysis, I will reflect on whether we should ever have expected the GDPR and AI Act, both instruments rooted in consumer-oriented logic, to offer robust protection to workers in the first place. Perhaps the real issue is not only that deregulation is harmful, but that in the domain of employment we continue to rely on tools never designed for the specific vulnerabilities of workers.

Part 1: Digital Omnibus
Deregulating the digital economy while claiming high standards

The Commission frames the Digital Omnibus1 as a way to ensure that the digital economy can “do its job” through more cost-effective, innovation-friendly regulation. But most amendments point in one direction: lowering protections for individuals by making data protection flexible and optional in practice. The explanatory memorandum argues that the accumulation of rules has harmed competitiveness. The proposed response, according to the explanatory memorandum, is:

  • More “innovation-friendly” implementation;

  • Reduced administrative burden;2

  • Regulatory clarity yet “without undermining predictability, policy goals, and high standards”.3

The tension of “everywhere at once” is clear: the proposal attempts to justify lowering protections as long as it benefits business activity, particularly in the AI sector. Several amendments are framed as “clarifications”, notably regarding training AI systems with personal data. But beneath the language of certainty lies a redefinition of what counts as personal data, and that definition is fundamental because it determines the scope of the GDPR itself.

The new Article 4(1) GDPR definition: A controller-relative concept of personal data

Article 3(1)(a) amends Article 4(1) GDPR (defining what counts as personal data and thus the overall scope of the GDPR) and adds:

“Information relating to a natural person is not necessarily personal data for every other person or entity, merely because another entity can identify that natural person. Information shall not be personal for a given entity where that entity cannot identify the natural person to whom the information relates, taking into account the means reasonably likely to be used by that entity. Such information does not become personal for that entity merely because a potential subsequent recipient has means reasonably likely to be used to identify the natural person to whom the information relates”4

Information is not personal data for an entity if that entity cannot identify the person, even if another entity could. This makes the very notion of personal data flexible, subjective, and dependent on the controller’s intentions or capabilities. The vague criterion (“means reasonably likely to be used”) opens the door to arbitrary interpretation. This new definition of personal data is designed to allow AI developers to train models on massive datasets while claiming the data is “not personal for them”. Processors and AI providers would thus be able to store employees’ personal data (provided by employers when deploying AI at work) and use it to train algorithmic management tools without the GDPR’s provisions applying.

This amounts to a structural weakening of the GDPR, essentially codifying a loophole that enables extensive AI training without full data protection obligations. It also weaponises ECJ case law (amongst others, the Scania decision)5 to justify a controller-relative definition: only if you can directly identify a data subject will you be considered a data controller under the revised GDPR. This approach creates significant accountability gaps, particularly in employment contexts where employers routinely rely on external providers and processors. In this regard, it remains to be seen whether this controller-relative definition could be exploited by employers in the employer-employee-AI provider “triangle” to claim that employees’ data is not personal data.

Sensitive data: the door is open for AI training

Article 3(3)(a) amends Article 9(2) GDPR (which lists the exceptions to the prohibition on processing special categories of data) and adds:

‘(k) processing in the context of the development and operation of an AI system as defined in Article 3, point (1), of Regulation (EU) 2024/1689 or an AI model, subject to the conditions referred to in paragraph 5.6

(l) processing of biometric data is necessary for the purpose of confirming the identity of a data subject (verification), where the biometric data or the means needed for the verification is under the sole control of the data subject.’7

As a condition for processing sensitive data under Article 9(2)(k), the proposal, at Article 3(3)(b), also adds to Article 9 GDPR that:

For processing referred to in point (k) of paragraph 2, appropriate organisational and technical measures shall be implemented to avoid the collection and otherwise processing of special categories of personal data. Where, despite the implementation of such measures, the controller identifies special categories of personal data in the datasets used for training, testing or validation or in the AI system or AI model, the controller shall remove such data. If removal of those data requires disproportionate effort, the controller shall in any event effectively protect without undue delay such data from being used to produce outputs, from being disclosed or otherwise made available to third parties

Article 9(2)(k) introduces a significant departure from the GDPR’s core principles by allowing the processing of sensitive data, explicitly prohibited under Article 9(1), for the development and operation of AI systems. This includes some of the most intrusive categories of data: racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic and biometric data, health data, and information on sex life or sexual orientation. Controllers are required to adopt “appropriate organisational and technical measures,” yet the proposal offers no indication of what those measures should entail, leaving the requirement devoid of substance. Even more troubling is the clause allowing controllers to keep sensitive data if its removal would involve “disproportionate effort.” This open-ended escape hatch effectively neutralises the necessity requirement and invites overcollection.

The proposal further asserts that, in such cases, controllers must “protect the data from being used to produce outputs.” But this is technically and conceptually implausible. Given the notorious black-box nature of modern AI systems, even developers seldom understand how or why a model generates specific outputs. Expecting controllers to prevent sensitive data from influencing outputs – while simultaneously allowing that data to be used in training – misunderstands how machine learning works. Once data enters the training pipeline, its influence is irretrievable; you cannot unmix it.
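
This point can be made concrete with a deliberately simplified sketch (purely illustrative: the data, names, and toy model below are hypothetical and are not drawn from the proposal or from any real system). Even in a minimal model trained by gradient descent, every record nudges the same shared parameters; deleting the record afterwards leaves the trained model untouched, and only retraining from scratch (or experimental “machine unlearning” techniques) can approximate removal.

```python
# Illustrative toy model only: every training record nudges the same shared
# weights, so a single record's influence cannot be isolated or deleted
# after training has taken place.

def train(records, lr=0.1, epochs=200):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in records:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + 2.718281828459045 ** (-z))  # sigmoid
            err = p - y
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Suppose the last record encodes a special category (e.g. a health flag).
dataset = [([1.0, 0.0], 1), ([0.0, 1.0], 0), ([1.0, 1.0], 1)]
model = train(dataset)

dataset.pop()          # "removing" the sensitive record after the fact...
print(model)           # ...leaves the already-trained parameters untouched
print(train(dataset))  # only full retraining yields a model without it
```

The same logic applies, at vastly greater scale, to production models with billions of parameters, which is why an obligation to “protect the data from being used to produce outputs” after training is so hard to take seriously.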

Taken together, these provisions amount to the de facto legalisation of sensitive data processing for AI training under permissive and weakly defined conditions. Far from ensuring safeguards, Article 9(2)(k) marks a clear move toward unbounded liberalisation of AI development – in relation to the categories of data that carry the highest risks for discrimination, exploitation, and harm.

The new Article 9(2)(l) would instead enable employers, on a systematic basis, to use biometric systems to control access to the workplace and for related verification purposes. The provision claims to limit risks by requiring that “the means needed for the verification is under the sole control of the data subject.” But even if employers cannot technically access or view workers’ biometric data, this safeguard misses the core problem. Normalising biometric verification in the workplace already creates a disproportionate form of monitoring. The power imbalance inherent in employment means that workers cannot meaningfully refuse biometric systems that become conditions of entry, attendance, or task assignment. Employers can exert pressure, formal or informal, simply through the expectation that workers use these systems, regardless of whether the employer ever touches the underlying biometric data. The issue is not only what employers can access, but what they can mandate. A regime that frames biometric use as safe merely because the employer cannot see the raw data ignores how control operates in workplaces. It legitimises invasive technologies whose effects do not depend on employers having direct access to the biometric information. The result is a structural green light for the expanded, routine use of biometrics at work, under the guise of “sole control” that workers cannot realistically exercise.
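
For readers unfamiliar with the architecture the provision seems to presuppose, the sketch below shows, in hypothetical and simplified form, what “sole control of the data subject” might look like in practice: matching happens on a device the worker holds, and only a yes/no result ever reaches the employer’s systems. The criticism above stands regardless: the employer never touches the biometric template, yet can still mandate its use as a condition of access.

```python
# Hypothetical sketch of "match on a device the worker controls": the
# biometric template never leaves the worker's badge or phone, and the
# employer's access-control system only receives a boolean result.

def verify_on_workers_device(captured_scan: bytes, stored_template: bytes) -> bool:
    # Runs locally on the worker's device; real systems use fuzzy matching
    # rather than exact byte comparison.
    return captured_scan == stored_template

def employer_gate(verified: bool) -> str:
    # The employer sees only the outcome, yet can still make clocking in,
    # door access, or task assignment conditional on the scan taking place.
    return "access granted" if verified else "access denied"

print(employer_gate(verify_on_workers_device(b"scan", b"scan")))
```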

Automated Decision-Making: the more the better?

Article 3(7) replaces the current text of Article 22(1) and (2) GDPR (the right not to be subject to solely automated decision-making) with:

1. A decision which produces legal effects for a data subject or similarly significantly affects him or her may be based solely on automated processing, including profiling, only where that decision:

(a) is necessary for entering into, or performance of, a contract between the data subject and a data controller regardless of whether the decision could be taken otherwise than by solely automated means;8

(b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject's rights and freedoms and legitimate interests; or

(c) is based on the data subject's explicit consent.

The proposed revision of Article 22 makes fully automated decision-making (ADM) easier to deploy. Under the new Article 22(1)(a), a decision can be automated if it is deemed necessary for the performance of a contract, even when a human could perform the same task. This change effectively normalises ADM across key domains, including employment. The proposal aligns with the interests of companies and service providers that prefer ADM to human oversight.

From a critical perspective, the reinterpretation of the “necessity” test in Article 22(1)(a) is particularly troubling. Recital 38 states that the mere fact that a human could perform the same task does not prevent ADM use.9 In practice, this means that controllers can deploy ADM whenever they judge it “necessary” for contract performance – a broad and highly subjective standard. In employment contexts, this may open the door to organisations running more automated management systems. Workers will still enjoy the safeguards provided for by Article 22(3) GDPR, namely “at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision”; yet with this new formulation, Article 22(1) significantly shifts power toward controllers and their interest in embedding automation as the default rather than the exception.

One positive note: the reinforced DPIA obligations in Article 35 GDPR

A positive development in the Digital Omnibus is the emphasis on clarifying which processing activities are considered high-risk, particularly in employment contexts. Article 3(9) tasks the European Data Protection Board (EDPB) with transmitting to the Commission a list of processing operations that qualify as high-risk, thereby making a Data Protection Impact Assessment (DPIA) under Article 35 mandatory in the identified circumstances. This could improve legal certainty and strengthen safeguards in areas such as employment, where the use of AI and other algorithmic systems may have significant implications for workers’ rights.

The amendments to Article 35 further reinforce this framework. Paragraphs 4, 5, and 6 are replaced to empower the EDPB to prepare lists of processing operations that either require or do not require a DPIA, as well as a common template and methodology for conducting DPIAs. Taken together, these measures provide for a more consistent approach to identifying high-risk data processing and ensuring that appropriate safeguards are applied.

Part 2: Digital Omnibus on AI
Lowering standards and burdens for AI providers

In its explanatory memorandum, the Commission notes that it conducted a series of public consultations to identify challenges in implementing the AI Act’s provisions. The feedback highlighted potential obstacles that could delay or complicate the effective application of certain AI rules.10 To address these issues, the proposal introduces simplification measures aimed at ensuring a timely, smooth, and proportionate implementation of the AI Act. To accelerate deployment, it proposes:

  • Fewer requirements for SMEs;

  • Lighter post-market monitoring;

  • Reduced registration duties if the provider self-certifies the system is not high risk;

  • New guidelines from the Commission to simplify compliance;

  • Reduced data protection obligations (thanks to the Digital Omnibus amendments).

In the following subsections of this Part 2, I analyse some of the main proposed changes to the AI Act relevant to employers and employees.

AI literacy (Article 4 AI Act): from mandatory requirement to soft encouragement

The current AI Act, at Article 4, requires providers and deployers to ensure adequate AI literacy. AI literacy refers to the set of competencies that enables individuals to understand, critically evaluate, and effectively use artificial intelligence technologies in their personal and professional lives. It includes the ability to recognise AI-driven decisions, understand their potential impacts, and make informed choices about interacting with AI systems. The Digital Omnibus proposal, at Article 1(4), replaces this obligation with a broad duty for Member States and the Commission merely to “encourage” literacy efforts.

This change removes any direct responsibility from those who design, sell, or deploy AI systems and shifts it to a general, non-binding encouragement at the policy level. AI literacy is thus no longer guaranteed. By lowering standards in this domain, the Commission effectively chooses to leave workers, consumers, and citizens less informed and more vulnerable to opaque, potentially harmful AI systems.

Bias detection: (yet another) normalisation of sensitive data processing

The Digital Omnibus inserts a new Article 4a into the AI Act, entitled “Processing of personal data for bias detection and mitigation”. This provision allows providers of high-risk AI systems to process special categories of personal data (such as racial or ethnic origin, political opinions, religious beliefs, trade union membership, health data, or information on sex life or sexual orientation) to the extent necessary to ensure bias detection and correction, in accordance with Article 10(2), points (f) and (g), of the AI Act. To my understanding, the proposal appears to draw inspiration from earlier work by Van Bekkum and Zuiderveen Borgesius.11

Article 4a explicitly ties this processing to a set of safeguards and mandatory conditions:

  • Necessity: Bias detection and correction cannot be effectively fulfilled by processing other data, including synthetic or anonymised data.

  • Technical safeguards: Sensitive data must be subject to state-of-the-art security and privacy-preserving measures, including pseudonymisation and limitations on re-use (a minimal illustration of pseudonymisation follows this list).

  • Access and confidentiality: Only authorised personnel may access the data, under strict controls and documentation, to prevent misuse.

  • No transmission to third parties: Sensitive data must not be transmitted, transferred, or otherwise accessed by external parties.

  • Deletion: Sensitive data must be deleted once bias has been corrected or the retention period ends, whichever comes first.

  • Documentation: Records of processing activities must justify why the processing of sensitive data was necessary and why the same objective could not be achieved with other data.
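
By way of illustration only – Article 4a does not prescribe any particular technique, and the field names below are hypothetical – pseudonymisation typically means replacing direct identifiers with tokens whose linking key is held separately and under access control. A minimal sketch:

```python
# Hypothetical sketch of keyed-hash pseudonymisation. The secret key must be
# stored separately from the data and under strict access control; without
# it, the token cannot be linked back to the worker.
import hmac
import hashlib

SECRET_KEY = b"kept-apart-from-the-training-data"

def pseudonymise(worker_id: str) -> str:
    return hmac.new(SECRET_KEY, worker_id.encode(), hashlib.sha256).hexdigest()

record = {"worker_id": "EMP-00123", "trade_union_member": True}
record["worker_id"] = pseudonymise(record["worker_id"])
print(record)  # the sensitive attribute now travels with a token, not a name
```

It is worth recalling that pseudonymised data remains personal data under the GDPR, since whoever holds the key can re-identify the worker; the safeguard reduces exposure but does not take the data outside the Regulation’s scope.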

While these conditions appear comprehensive on paper, the practical effect is limited. The first condition – requiring sensitive data to be “the only option” to prevent bias – offers no guidance on how necessity should be demonstrated or verified. In practice, a provider can simply claim that bias prevention requires sensitive data, and that claim alone suffices to justify processing. Even the other safeguards (security, confidentiality, deletion, and documentation) do not prevent over-collection or misuse if the necessity condition is interpreted broadly.

When read together with the new Article 9(2)(k) GDPR in the Digital Omnibus, the implications are stark. Sensitive data can be processed based on a pre-emptive statement that it is needed for bias detection and risk mitigation, effectively creating a wide exception to the original Article 9(1) GDPR prohibition. In other words, Article 4a, in combination with Article 9(2)(k), establishes a framework in which the use of highly sensitive personal data for AI training and deployment is largely legitimized – based primarily on a provider’s assertion rather than demonstrable necessity – raising serious concerns about overreach, accountability, and the protection of fundamental rights.

Self-certification: a big loophole in high-risk classification

Article 1(6) of the Digital Omnibus on AI replaces Article 6(4) of the AI Act, introducing a provision whereby a provider who considers that an AI system listed in Annex III is not high-risk must simply document that assessment before placing the system on the market or putting it into service. Upon request, this documentation must be provided to national competent authorities.

From a critical perspective, this represents a significant exception for providers. By allowing providers to self-certify that their systems are not high-risk, the Digital Omnibus effectively removes these systems from the full scope of AI Act obligations. This means that AI tools used for monitoring, management, or other impactful purposes could entirely evade regulatory oversight, unless they fall under the limited prohibitions in Article 5 or the transparency duties for limited-risk systems in Article 50 – requirements that are far weaker than those applied to high-risk AI.

In practice, this self-certification mechanism substantially reduces the enforceable scope of the AI Act. Providers can develop and deploy a wide range of systems without meaningful checks or accountability, undermining the Regulation’s goal of safeguarding fundamental rights, worker protections, and consumer trust in AI technologies.

Conclusion: everywhere at once, nowhere protected

In conclusion, it is worth reflecting on whether we should ever have expected the GDPR and AI Act – both instruments rooted in consumer-oriented logic – to provide robust protection to workers. The Digital Omnibus exemplifies the risks of relying on such frameworks. By relaxing safeguards, enabling self-certification, and normalising the use of sensitive data, it shifts risk onto workers while leaving employers and providers with extensive discretion. As we deregulate technology in the name of enabling market efficiency, and continue linking GDPR and AI Act provisions to labour law protections, the effect is direct: working conditions themselves are affected. Lowered obligations on literacy, bias mitigation, data minimisation, and human oversight do not simply impact abstract rights. They shape how workers are managed, monitored, and assessed.

The conclusion is that, perhaps, we should stop treating consumer-oriented data protection and AI regulation as proxies for labour law protections. As I argued in Lost in Translation: Is Data Protection Labour Law Protection?, this conflation creates a systemic “gap” where workers are exposed, and existing frameworks are interpreted in ways that reduce, rather than secure, protections at work.12 What is needed instead is a frank, targeted discussion on regulating data at work – one that transcends any specific employment context, platform, or non-standard form of work – and establishes strong, enforceable rules on what an algorithmic boss can and cannot be.13 Only through labour law can we ensure that technological progress does not come at the expense of workers’ rights, agency, and dignity. Continuing on the path of consumer/technology law will persistently bring downward compromises, as the Digital Omnibus bluntly shows.


  • 1Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL amending Regulations (EU) 2016/679, (EU) 2018/1724, (EU) 2018/1725, (EU) 2023/2854 and Directives 2002/58/EC, (EU) 2022/2555 and (EU) 2022/2557 as regards the simplification of the digital legislative framework, and repealing Regulations (EU) 2018/1807, (EU) 2019/1150, (EU) 2022/868, and Directive (EU) 2019/1024 (Digital Omnibus) {SWD(2025) 836 final}
  • 2Digital Omnibus, Explanatory Memorandum, Section 1, Page 1. See also Recitals 1 to 4 on the Regulation’s goals.
  • 3European Council, Conclusions, EUCO 12/25, Brussels, 26 June 2025, paragraph 30
  • 4Cf also with Recital 27 of the proposal on the new notion of personal data.
  • 5See: Case C-319/22 Gesamtverband Autoteile-Handel e.V. v Scania CV AB, ECLI:EU:C:2023:837 and Case C-413/23 P, EDPS v SRB, ECLI:EU:C:2025:645.
  • 6Cf with Recital 33 of the Digital Omnibus Proposal
  • 7Cf with Recital 34 of the Digital Omnibus Proposal
  • 8Cf with clarification at Recital 38 ‘(…) This means that the fact that the decision could also be taken by a human does not prevent the controller from taking the decision by solely automated processing. When several equally effective automated processing solutions exist, the controller should use the less intrusive one.’
  • 9Cf with clarification at Recital 38 ‘(…) This means that the fact that the decision could also be taken by a human does not prevent the controller from taking the decision by solely automated processing. When several equally effective automated processing solutions exist, the controller should use the less intrusive one.’
  • 10Digital Omnibus on AI, Page 1.
  • 11Marvin Van Bekkum and Frederik Zuiderveen Borgesius, ‘Using Sensitive Data to Prevent Discrimination by Artificial Intelligence: Does the GDPR Need a New Exception?’ (2023) 48 Computer Law & Security Review 105770.
  • 12Michele Molè, ‘Lost in Translation: Is Data Protection Labour Law Protection?’ (2025) 45 Comparative Labor Law & Policy Journal 553.
  • 13Michele Molè, ‘Commodified, Outsourced Authority: A Research Agenda For Algorithmic Management at Work’ (2024) 17 Italian Labour Law e-Journal 169.