When AI Goes Wrong
Dec 11, 2025
Why U.S. Attorneys General Are Forcing a New Era of Accountability
Generative AI systems have grown exponentially in reach and capability, but so have concerns about their reliability, safety, and potential for harm. This tension burst into the policy arena when a coalition of U.S. state attorneys general (AGs) issued formal warnings to leading AI companies - Microsoft, Meta, Google, and Apple - over reports of harmful or “delusional” AI outputs, especially in interactions with vulnerable users. Their intervention signals a major turning point: AI governance in the United States is moving from voluntary guidelines toward enforceable accountability expectations. The AGs’ actions reflect a broader global trend in which governments are no longer content to rely on industry goodwill and are instead seeking regulatory mechanisms that match AI’s societal impact.
The legal warning: What prompted AG scrutiny of generative AI outputs
The AGs’ letters were prompted by mounting evidence that generative AI systems can produce not only inaccurate or misleading content but also harmful advice or behaviors when users are distressed, mentally ill, or otherwise vulnerable. According to reporting from Reuters, the coalition expressed alarm at cases where models responded with dangerous content delivered with strong apparent confidence, making them seem trustworthy even when they were wrong1. This convergence of high user trust and high system uncertainty represents a significant safety risk.
State AGs - often the first movers in consumer protection and technology oversight - are now positioning themselves as key actors in mitigating AI harm. Their warnings request detailed information on company safety practices, risk mitigations, and transparency mechanisms. These include questions about model evaluation methodologies, data governance, guardrails, and whether independent reviewers have been granted access to safety documentation. The message is clear: industry self-regulation is no longer sufficient.
Why harmful and “delusional” outputs raise regulatory red flags
Generative AI’s tendency to produce hallucinations is not new, but the context has changed. Widespread deployment means these errors now reach millions, including individuals who may rely on AI tools in moments of crisis. When systems output advice that appears authoritative - medical, legal, emotional, or otherwise - they can influence real-world decisions.
Regulators view this not merely as an accuracy issue but as a form of consumer protection risk.
- False or misleading representations fall within consumer protection law.
- Failure to mitigate foreseeable harms may constitute negligence.
- Opaque system behavior undermines autonomous and informed user decision-making.
The result is a growing legal consensus: AI companies must anticipate how their tools will be used in sensitive contexts, not simply react to misuse.
This aligns with global policy frameworks, such as the EU AI Act, which imposes risk-based controls for high-impact AI systems. Likewise, the Council of Europe’s Framework Convention on AI emphasizes safeguards built around human rights, democratic values, and the rule of law.
The emerging model of state-driven AI oversight
While the U.S. lacks a centralized federal AI law, state attorneys general are filling the regulatory gap. Their evolving model includes:
1. Leveraging existing consumer protection statutes
AGs can invoke unfair or deceptive practices laws to demand transparency about system risks and limitations.
2. Pushing for independent evaluation
Their warnings echo calls across the policy landscape for third-party scrutiny of AI models, which may include bias testing, robustness analysis, and red-team reports.
3. Increasing legal exposure for unsafe deployment
By establishing clear expectations and documenting company responses, AGs create pathways for future enforcement.
4. Catalyzing legislative movement
State action often precedes and pressures federal policymaking. These warnings may serve as early templates for national accountability standards.
Implications for developers, deployers, and high-risk sectors
The AG interventions are poised to reshape how companies build and deploy AI, especially in domains where safety is paramount.
For developers:
- Pre-release testing and robust evaluations will likely become mandatory expectations.
- Documentation of data sources, limitations, and risk mitigations may become part of legal compliance.
- Systems aimed at general audiences may need safeguards specifically for vulnerable users.
For deployers:
- Vendor selection increasingly involves assessing safety and auditability.
- Organizations may face liability for deploying models without adequate oversight.
For high-risk sectors:
Healthcare remains illustrative. In parallel to the AG actions, the NAACP has urged equity-first standards to address risks of biased or misleading outputs in medical AI. Reuters reports that the NAACP is pushing for oversight reforms that center equity, accountability, and community impact in system design2.
For consumers:
Greater regulatory scrutiny may lead to clearer disclosures, safer user flows, and improved reporting pathways for harmful AI behavior.
How this pressure aligns with global regulatory trends
The AG warnings mirror a global movement toward structured and ethics-driven AI oversight.
European Union
The EU AI Act implements a risk-tiered approach that mandates transparency, human oversight, and testing obligations for high-risk systems.
UNESCO
UNESCO’s AI Readiness Assessment for the Philippines highlights the importance of embedding ethics into national governance approaches and building institutional capacity for oversight3.
India
India’s call for an ethical AI alliance reinforces the global shift toward multi-stakeholder governance - including government, academia, and civil society4.
United States
Without a comprehensive federal AI law, the U.S. is evolving through distributed oversight: state AGs, federal agencies (FTC, CFPB, HHS OCR), and civil rights groups collectively shaping norms.
The global pattern is unmistakable: responsible AI is being codified into expectations, not aspirations.
Conclusion: A new accountability era is emerging
The letters from U.S. state attorneys general represent a pivotal shift in AI oversight. Industry arguments for flexibility and innovation now meet a regulatory reality in which safety, transparency, and accountability are treated as core system requirements. As generative AI embeds itself deeper into society, regulators are making clear that trust must be earned through verifiable, auditable, and rights-respecting practices.
This may be remembered as the moment U.S. AI governance moved decisively toward enforceable oversight—aligning with global momentum and setting the stage for a more accountable AI ecosystem.
References
- Reuters — “Big Tech warned over AI ‘delusional’ outputs by US attorneys general.” https://www.reuters.com/business/retail-consumer/microsoft-meta-google-apple-warned-over-ai-outputs-by-us-attorneys-general-2025-12-10/ β©
- Reuters — “NAACP pressing for ‘equity-first’ AI standards in medicine.”
https://www.reuters.com/business/healthcare-pharmaceuticals/naacp-pressing-equity-first-ai-standards-medicine-2025-12-11/ β© - UNESCO — “AI Readiness Assessment Report: Anchoring Ethics in AI Governance in the Philippines.”
https://www.unesco.org/en/articles/unesco-ai-readiness-assessment-report-anchoring-ethics-ai-governance-philippines β© - Times of India — “India needs ethical AI alliance with all states: Industries minister.”
https://timesofindia.indiatimes.com/city/chennai/india-needs-ethical-ai-alliance-with-all-states-industries-min/articleshow/125896214.cms β©