From Rule-making to Rule-shaping
Dec 02, 2025
The EU's Evolving AI Governance Strategy
As governments race to regulate artificial intelligence, the European Union has long positioned itself as the world’s standard-setter. Yet in late 2025, the bloc signaled a notable shift: several high-risk compliance requirements under the EU AI Act may now be delayed to 2027 as part of a broader “Digital Omnibus” effort to streamline rules and reduce regulatory friction. This recalibration raises important questions. Is the EU refining its approach to enable innovation, or softening its stance in response to industry pressure? And how will global standards interact with these evolving regulatory timelines?
Note: This article is informational and does not constitute legal advice. Organizations should consult legal counsel for AI Act compliance decisions.
Background: What the EU AI Act Is and Why It Matters
The EU AI Act is the first comprehensive attempt to regulate artificial intelligence at scale, categorizing systems by risk level and imposing corresponding obligations. High-risk systems - such as those used in critical infrastructure, employment, education, law enforcement, or essential public services - face stringent requirements around data governance, transparency, human oversight, robustness, and security.
The Act has been influential beyond European borders. Its structure - tiered risk categories, mandatory documentation, and enforcement mechanisms - serves as a reference point for other jurisdictions and international bodies seeking to balance innovation with responsible governance.
Because AI development is global and decentralized, a single region’s regulations inevitably influence global practices. Firms operating internationally often conform to the strictest regimes to standardize their compliance processes, effectively exporting those norms abroad. Thus, any change in the EU’s posture has ripple effects across multinational companies, startups, and national policymakers.
What the Digital Omnibus Does: Delays, Simplifications, and Business Concerns
In November 2025, the European Commission introduced a package of regulatory adjustments collectively referred to as the Digital Omnibus. Among these proposals is a deferral of several high-risk AI obligations to 2027, paired with simplification measures aimed at reducing compliance burdens - a move that followed strong lobbying from technology firms and some member states seeking more time and flexibility.
Supporters argue that the extension reflects practical realities:
- Many high-risk system developers struggle to meet documentation and testing requirements in their current form.
- Smaller companies in particular struggle to scale compliance infrastructure without compromising competitiveness.
- Technical standards referenced by the law are still under development, leaving ambiguity about how certain obligations should be implemented.
The Commission frames these adjustments as constructive refinements rather than retreats. The message: the EU remains committed to strong AI governance but recognizes the need to ensure regulations remain feasible, especially as technologies evolve faster than anticipated.
Crucially, the Digital Omnibus remains subject to the EU’s ordinary legislative process. Organizations should treat the proposal as a signal to fine-tune, not pause, their AI Act compliance programs.
Risks and Critiques: Does Relaxing Timelines Dilute Protections?
Critics contend that postponing high-risk requirements creates a protection gap at a moment when AI systems are being adopted at unprecedented speed. If obligations around transparency, robustness, or oversight are delayed, communities may be exposed to harms the legislation was meant to prevent.
Civil society groups and some policymakers raise concerns about:
- Delayed safeguards in sensitive domains, such as hiring, healthcare, social benefits, and public administration, where algorithmic decisions can materially affect people’s lives.
- Reduced accountability, as companies may feel less urgency to establish governance frameworks for risk assessment, bias evaluation, and human oversight.
- Signaling effects, in which other jurisdictions may perceive the EU’s adjustments as permission to deprioritize regulation or delay their own enforcement.
There is also a distributional concern. Vulnerable groups - including low-income communities, migrants, and historically marginalized populations - may be most exposed to opaque or error-prone AI systems in public services or employment, precisely where stronger safeguards are needed.
Some legal scholars note that the delay could introduce fragmentation. Member states already vary in their preparedness for implementation, and a postponed timeline may widen gaps in national-level enforcement or oversight.
The central question is whether the EU can strike a balance that maintains its leadership in responsible innovation while not overburdening industry. The answer will depend on how quickly institutions, companies, and standards bodies move to fill any temporary gaps with practical governance measures.
Complementary Governance: Role of International Standards and Global Coordination
While the EU refines the AI Act’s rollout, global standards bodies are becoming increasingly influential. The International AI Standards Summit, held in Seoul under the auspices of ISO, IEC, and ITU, produced a joint declaration calling for internationally aligned, standards-based governance as a complement to national regulations. The “Seoul Statement” emphasizes safe, inclusive, and interoperable AI, particularly for countries and organizations with fewer resources for bespoke regulatory frameworks.
International standards can support the EU’s evolving regulatory posture in several ways:
- Providing technical clarity where the AI Act remains principle-based but lacks detailed implementation instructions, for example through standards on risk management, transparency documentation, and robustness testing.
- Enabling interoperability for businesses operating in multiple jurisdictions, reducing the cost of complying with divergent rules and creating a more level playing field for SMEs.
- Supporting capacity building for countries and organizations with fewer regulatory or technical resources, helping reduce global inequality in AI safety and governance.
By integrating standards from bodies like ISO and IEC into conformity-assessment schemes and internal compliance programs, the EU and industry may be able to bridge gaps created by postponed regulatory obligations while ensuring continued progress toward robust AI governance.
This synergy between regulation and standards reflects a broader shift: rather than relying solely on statutory rules, the EU is edging toward a more adaptive model where global standards, iterative updates, and phased obligations work together to shape responsible innovation.
What Organizations Should Do Now
For organizations developing or deploying AI in or into the EU, the signal from these developments is not to slow down, but to keep maturing governance deliberately:
- Keep AI Act readiness on track
- Maintain inventories of AI systems, with clear mapping to risk levels.
- Begin or continue building documentation, data governance, and human-oversight processes for likely high-risk systems.
- Align with emerging standards
- Track relevant ISO/IEC and ITU AI standards and integrate them into your internal controls where feasible.
- Use standards as a practical blueprint for risk management, evaluation, and documentation.
- Focus on high-impact use cases and vulnerable groups
- Prioritize governance for systems affecting employment, healthcare, credit, social services, and law enforcement.
- Involve affected stakeholders where possible and monitor for unintended impacts on marginalized communities.
- Build cross-functional governance
- Establish or reinforce AI risk committees that bring together legal, compliance, engineering, product, and ethics perspectives.
- Regularly update your AI risk register and scenario-plan for different enforcement timelines.
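The inventory and controls steps above can be sketched as a lightweight internal register. This is a minimal illustration, not a compliance tool: the `AISystem` fields, the control names, and the `high_risk_gaps` helper are assumptions for the sketch, though the four risk tiers mirror the Act's tiered structure.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers mirroring the AI Act's risk-based structure."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    purpose: str
    risk_tier: RiskTier
    owner: str
    # Free-text control labels; real programs would track these more formally.
    controls: list = field(default_factory=list)

def high_risk_gaps(inventory, required_controls):
    """Return high-risk systems missing any of the required controls."""
    gaps = {}
    for system in inventory:
        if system.risk_tier is RiskTier.HIGH:
            missing = [c for c in required_controls if c not in system.controls]
            if missing:
                gaps[system.name] = missing
    return gaps

# Hypothetical inventory entries for illustration only.
inventory = [
    AISystem("cv-screener", "rank job applicants", RiskTier.HIGH, "HR",
             controls=["technical documentation"]),
    AISystem("chat-faq", "answer product questions", RiskTier.LIMITED, "Support"),
]

required = ["technical documentation", "human oversight", "data governance"]
print(high_risk_gaps(inventory, required))
# -> {'cv-screener': ['human oversight', 'data governance']}
```

Even a simple register like this makes the scenario planning above concrete: as timelines shift, the same gap report shows which high-risk systems still need documentation, oversight, or data-governance work.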
The organizations best positioned for whatever final form the AI Act takes will be those that treat this period not as a regulatory pause, but as time to deepen responsible-AI practices anchored in widely recognized standards and principles.
References
- European Commission and media coverage of the Digital Omnibus proposal on AI Act timelines and simplifications.
- Analyses by legal and policy firms on impacts of delayed high-risk obligations and interactions with technical standards.
- ISO / IEC / ITU communications on the International AI Standards Summit and the Seoul Statement on global AI standards and cooperation.