The Responsible AI Governance Patchwork Is Becoming a System
Dec 23, 2025
Responsible AI policy is no longer just a debate about values. It is increasingly a debate about implementation: who must do what, by when, under which oversight body, and with what evidence. Over the last year, three signals stand out - New York’s new frontier-model law, UNESCO’s readiness work with the Philippines, and the EU’s insistence on keeping the AI Act timeline intact. Together, they suggest the global “patchwork” of AI governance is evolving into something closer to a system: different jurisdictions, but increasingly similar policy plumbing.
This matters for anyone building or deploying AI at scale. A world of converging governance mechanisms rewards organizations that treat Responsible AI as a repeatable operational capability - not a one-off compliance scramble.
From principles to infrastructure: what changed in 2024-2026
For several years, Responsible AI was dominated by soft law: ethics principles, high-level declarations, and voluntary guidance. Those still matter, but they now sit alongside an expanding layer of governance infrastructure:
- Phased legal obligations that start with the most harmful or highest-risk uses and expand over time
- Institutional capacity - offices, regulators, and enforcement pathways built to receive reports and act on them
- Evidence requirements - documentation and artifacts that translate “trustworthy AI” from aspiration to auditable practice
The EU AI Act illustrates this shift. The European Commission has publicly rejected calls to pause implementation (“no stop the clock”), with key obligations phasing in on a known schedule - general-purpose AI (GPAI) obligations starting in August 2025 and high-risk AI obligations from August 2026. (Reuters)
At the same time, subnational jurisdictions are moving. New York’s law is not simply a statement of intent - it creates concrete reporting deadlines and houses oversight within a named state agency. (The Wall Street Journal)
Finally, capacity-building is becoming part of governance, not an afterthought. UNESCO’s AI readiness work with the Philippines is a model of how governments can translate ethical commitments into an actionable national strategy grounded in institutional and technical reality. (UNESCO)
Three governance moves that signal maturation: New York, EU, UNESCO-Philippines
New York: publish safety protocols, report critical incidents, stand up oversight
In December 2025, New York Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act. (Governor Kathy Hochul) The law targets large AI developers and requires two policy-relevant capabilities:
- Transparency about safety protocols - developers must create and publish information about their safety practices (Governor Kathy Hochul)
- Rapid incident reporting - qualifying “critical incidents” must be reported to the state within 72 hours of determining that an incident occurred (Governor Kathy Hochul)
Just as important as the obligations is the governance design. Reporting rules are only meaningful if there is a place for reports to go and a mandate to act. New York’s approach creates an oversight office within the New York State Department of Financial Services tied to enforcement and public reporting. (The Wall Street Journal)
European Union: “no stop the clock” on a phased compliance machine
The EU AI Act is built as a phased system: different requirements activate on different dates. Reuters reported in July 2025 that the European Commission confirmed it would proceed as scheduled and rejected requests to slow down. (Reuters)
Institutionally, the EU has also stood up an AI Office within the European Commission to support implementation and governance of the AI Act. (European Commission)
UNESCO and the Philippines: readiness as a governance accelerator
While New York and the EU illustrate enforcement trajectories, UNESCO’s work with the Philippines highlights another pillar of Responsible AI: governance readiness.
UNESCO reports it completed an AI Readiness Assessment with the Philippine government, intended to support ethical AI governance and inform the country’s national strategy work. (UNESCO) In practice, readiness work fills a critical gap: laws and principles can fail if institutions lack the skills, processes, data governance, and procurement controls to implement them.
Convergence points: transparency, incident reporting, lifecycle risk management
Even across very different governance contexts, a set of common mechanisms is emerging.
Transparency is shifting from marketing to documentation
New York’s requirement to publish safety protocol information pushes transparency toward concrete artifacts. (Governor Kathy Hochul) In the EU context, transparency is embedded in a broader risk-based structure that expects documentation and accountability measures to scale with risk. (European Commission)
The key change is that transparency is less about “trust us” and more about “show your work.”
Incident reporting is becoming the accountability trigger
A major operational feature of the RAISE Act is the 72-hour reporting requirement after an incident is determined. (Governor Kathy Hochul) Reporting obligations create an internal forcing function: organizations must define incident thresholds, detection pathways, escalation roles, and evidence retention policies. Without those, a deadline is just a liability.
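To make that forcing function concrete, here is a minimal sketch of an internal incident record, assuming a Python implementation. The class, field names, and severity tiers are illustrative inventions - the only element borrowed from the law as described above is the 72-hours-from-determination clock, and real thresholds must come from the statute and your counsel, not this sketch.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum

# Hypothetical severity tiers. Real thresholds must come from the
# applicable statute and your own risk assessment, not this sketch.
class Severity(Enum):
    OBSERVATION = "observation"   # logged internally, no external duty assumed
    REPORTABLE = "reportable"     # assumed to start a regulatory clock
    CRITICAL = "critical"         # assumed to start the fastest clock

@dataclass
class AIIncident:
    summary: str
    severity: Severity
    determined_at: datetime       # when the org determined an incident occurred
    evidence_refs: list[str] = field(default_factory=list)  # retained artifacts

    def reporting_deadline(self, window_hours: int = 72) -> datetime | None:
        """Deadline measured from determination time, mirroring the
        72-hours-from-determination structure described above."""
        if self.severity in (Severity.REPORTABLE, Severity.CRITICAL):
            return self.determined_at + timedelta(hours=window_hours)
        return None

# Usage: a critical incident determined at 9:00 must be reported within 72 hours.
incident = AIIncident(
    summary="Model produced prohibited output in production",
    severity=Severity.CRITICAL,
    determined_at=datetime(2025, 12, 23, 9, 0),
)
print(incident.reporting_deadline())  # 2025-12-26 09:00:00
```

Notice that the clock starts at determination, not detection - which is exactly why incident thresholds and escalation roles need to be defined before the first deadline arrives.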
Risk management frameworks are the interoperability layer
When rules vary across jurisdictions, organizations need a stable internal backbone. The NIST AI Risk Management Framework (AI RMF 1.0) is a widely used voluntary structure designed to help organizations manage AI risks across the lifecycle, organized around four core functions: Govern, Map, Measure, Manage. (NIST) NIST also maintains an AI RMF Playbook to translate the framework into suggested actions. (NIST)
The practical takeaway: laws will differ, but risk controls can be standardized. A mature Responsible AI program uses a framework like NIST AI RMF as the control plane, then maps jurisdiction-specific obligations onto it.
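As a sketch of what “map once, reuse everywhere” can look like, the snippet below models a crosswalk from external obligations to internal controls tagged with AI RMF functions. The four function names come from AI RMF 1.0; the control IDs, regime labels, and obligation names are invented for illustration.

```python
# Function names are from NIST AI RMF 1.0; everything else here
# (control IDs, regime labels, obligation names) is illustrative.
RMF_FUNCTIONS = {"GOVERN", "MAP", "MEASURE", "MANAGE"}

# Internal controls, each tagged with the RMF function it serves.
CONTROLS = {
    "CTRL-IR-01":   {"function": "MANAGE", "name": "Incident detection and escalation"},
    "CTRL-DOC-01":  {"function": "GOVERN", "name": "Published safety-protocol artifact"},
    "CTRL-RISK-01": {"function": "MAP",    "name": "Use-case risk classification"},
}
assert all(c["function"] in RMF_FUNCTIONS for c in CONTROLS.values())

# External obligations mapped once onto internal controls, then reused.
OBLIGATIONS = {
    ("NY-RAISE", "72h critical-incident reporting"):        ["CTRL-IR-01"],
    ("NY-RAISE", "safety-protocol transparency"):           ["CTRL-DOC-01"],
    ("EU-AI-Act", "high-risk obligations (from Aug 2026)"): ["CTRL-RISK-01", "CTRL-DOC-01"],
}

def controls_for_regime(regime: str) -> set[str]:
    """All internal controls needed to satisfy one regime's obligations."""
    return {c for (r, _), ctrls in OBLIGATIONS.items() if r == regime for c in ctrls}

print(sorted(controls_for_regime("EU-AI-Act")))  # ['CTRL-DOC-01', 'CTRL-RISK-01']
```

The design point is directionality: obligations point at controls, so adding a new regime means adding rows to the mapping, not rebuilding the controls themselves.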
The hard part: definitions, thresholds, and cross-border coordination
As governance becomes real, the difficult problems stop being philosophical and start being technical and legal:
- What counts as a “critical incident”?
- Who qualifies as a “large AI developer” or “frontier model” developer under a given regime?
- How do you coordinate across jurisdictions when obligations and definitions differ?
New York’s law shows how quickly state-level policy can set expectations for developers operating nationally. (The Wall Street Journal) The EU shows how a large market can enforce timelines that reshape global product planning. (Reuters) UNESCO shows how governance expands not only through enforcement, but through capacity and strategy-building that spreads norms internationally. (UNESCO)
What organizations should do now: build a single control plane for many regimes
Organizations that treat every new rule as a bespoke fire drill will lose time and credibility. A better approach is to build a Responsible AI “control plane” - one set of internal capabilities that can satisfy many regimes with minimal marginal effort.
Five practical moves:
- Adopt a lifecycle framework as your backbone (for example, NIST AI RMF) (NIST)
- Define incident taxonomy and escalation paths now (especially where fast reporting is required) (Governor Kathy Hochul)
- Treat “safety protocol transparency” as an artifact, not a press release - see the sketch after this list (Governor Kathy Hochul)
- Map obligations to controls once, then reuse (EU timelines plus local requirements) (Reuters)
- Invest in readiness across the org (skills, procurement, data governance, accountability) (UNESCO)
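On the third move, a transparency “artifact” can be as simple as a versioned, machine-readable record. The structure below is a hypothetical sketch - none of these fields is prescribed by the RAISE Act or any other regime discussed here. The point is that an artifact is versioned, scoped, and auditable in a way a press release is not.

```python
from dataclasses import dataclass
from datetime import date

# Every field below is a hypothetical placeholder; the actual required
# contents of a published safety protocol are set by the applicable law.
@dataclass(frozen=True)
class SafetyProtocolArtifact:
    version: str                 # versioned, not re-announced
    effective: date
    scope: str                   # which models or systems the protocol covers
    evaluation_summary: str      # what testing occurred before release
    incident_contact: str        # where reports and questions go
    change_log: tuple[str, ...]  # auditable revision history

artifact = SafetyProtocolArtifact(
    version="1.2.0",
    effective=date(2025, 12, 23),
    scope="Frontier model family X (hypothetical)",
    evaluation_summary="Red-team and benchmark results, summarized internally",
    incident_contact="safety-reports@example.com",
    change_log=("1.0.0 initial publication", "1.2.0 added evaluation summary"),
)
```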
Conclusion: the patchwork is becoming a system
The direction of travel is clear. Responsible AI is entering its “systems phase”: compliance timelines, oversight bodies, reporting duties, and readiness methodologies that operationalize ethics into repeatable practice.
New York’s RAISE Act shows how incident reporting and safety transparency can become baseline expectations. (The Wall Street Journal) The EU AI Act shows that large jurisdictions may not slow down to accommodate industry discomfort, with major obligations already calendared for 2025 and 2026. (Reuters) UNESCO’s readiness work shows that global governance is also built through capacity and strategy, not only enforcement. (UNESCO)
For builders and deployers, the winning strategy is to stop thinking in terms of “this regulation” and start thinking in terms of “this capability.” Build the capability once, map it everywhere.
References
- New York RAISE Act: NY Governor press release (Governor Kathy Hochul)
- New York RAISE Act: NY DFS press release (Department of Financial Services)
- New York RAISE Act coverage (WSJ) (The Wall Street Journal)
- EU AI Act timeline confirmation (Reuters, July 2025) (Reuters)
- European Commission AI Office: press release and policy page (European Commission)
- UNESCO Philippines AI Readiness Assessment and related materials (UNESCO)
- NIST AI RMF and AI RMF 1.0 PDF (NIST)