Responsible AI Blog

From the EU AI Act to NIST and OECD guidelines, we monitor the evolving landscape of Responsible AI governance - and what it means for your organization.

From Principles to Practice

Nov 27, 2025

How NIST’s Expanding Guidance Pushes AI Risk Management Forward

Summary

The rapid expansion of generative AI, complex AI supply chains, and emerging attack techniques have put unprecedented pressure on regulators and industry alike. The National Institute of Standards and Technology (NIST) responded by building on its 2023 AI Risk Management Framework (AI RMF) with additional guidance - including a Generative AI Profile and complementary cybersecurity and supply-chain resources. Taken together, these materials help organizations move beyond abstract ethics principles toward concrete risk management practices, a critical shift as AI systems grow more powerful and more widely deployed.

Overview of the AI RMF and Its Role in Governance

NIST released the AI Risk Management Framework (AI RMF) 1.0 in January 2023 as a voluntary, sector-agnostic framework to help organizations manage risks associated with AI systems across their lifecycle, from design through deployment and decommissioning. The framework is explicitly non-regulatory and intended for flexible use across sectors, focusing on characteristics such as trustworthiness, reliability, transparency, and accountability in AI systems.

At its core, the AI RMF organizes AI risk management into four primary functions: Govern, Map, Measure, and Manage. These functions guide organizations in establishing governance structures, identifying and assessing system risks, measuring performance and impact, and managing or mitigating risk across the AI lifecycle.

As AI capabilities and use cases - especially generative models - have evolved, NIST has complemented the original AI RMF with profiles, implementation resources, and technical guidance that make the framework more actionable in practice.

New and Emerging NIST Guidance: Generative AI, Supply-Chain, and Cybersecurity Alignment

Rather than replacing the AI RMF, NIST has been extending it with additional documents that address contemporary risks:

  • Generative AI Profile (NIST AI 600-1)
    This companion resource helps organizations apply the AI RMF functions and categories specifically to generative AI systems such as large language models and image generators. It organizes risks and mitigations around issues like harmful or deceptive content, hallucinations, data leakage, and prompt-based attacks, offering a practical lens for applying the framework to generative models.

  • AI supply-chain risk management (AI-SCRM) emphasis
    NIST and other stakeholders increasingly highlight that AI systems depend on complex supply chains involving training data, model weights, third-party libraries, and external services. Building on existing NIST work on Cybersecurity Supply Chain Risk Management (C-SCRM), recent guidance and industry interpretations of the AI RMF emphasize provenance, data sourcing, third-party vetting, and dependency management. This helps organizations assess and mitigate upstream and downstream risks tied to components they do not fully control.

  • Integration with cybersecurity and privacy frameworks
    From its initial release, the AI RMF was intended to complement existing NIST cybersecurity and privacy standards, such as the NIST Cybersecurity Framework and NIST Special Publication 800-53. Subsequent NIST publications, workshops, and implementation resources continue to clarify how AI-specific risks can be managed using familiar security and privacy controls. For organizations already using NIST guidance, this makes it easier to embed AI risk management into existing governance structures rather than building entirely new processes.

These developments reflect an understanding that AI risk is multidimensional: it involves bias and fairness, but also security, supply-chain integrity, data protection, system reliability, and resilience.
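One concrete supply-chain practice the guidance points toward is verifying the provenance and integrity of model artifacts received from third parties. The sketch below is illustrative only: it assumes artifacts ship with a simple JSON manifest of SHA-256 digests (a hypothetical format, not a NIST-defined schema) and checks each file against it.

```python
import hashlib
import json
from pathlib import Path


def sha256_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming to handle large weights."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_manifest(manifest_path: Path, artifact_dir: Path) -> dict[str, bool]:
    """Check each artifact listed in the manifest against its expected digest.

    The manifest format ({"weights.bin": "<sha256>", ...}) is an assumed,
    illustrative convention for this sketch.
    """
    manifest = json.loads(manifest_path.read_text())
    return {
        name: sha256_file(artifact_dir / name) == expected
        for name, expected in manifest.items()
    }
```

In practice, integrity checks like this would sit alongside license review, vendor vetting, and signed attestations rather than replace them.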

Why This Evolving Guidance Matters: Use Cases and Emerging Risks

NIST’s ongoing work to refine and contextualize the AI RMF matters because it helps organizations navigate real-world challenges arising from modern AI deployments:

  • Risks unique to generative AI
    Generative models can hallucinate false information, leak sensitive data, generate harmful or biased content, or be manipulated via adversarial prompts. The Generative AI Profile offers structured ways to identify such risks, connect them to AI RMF categories, and select appropriate mitigations such as content filtering, red-teaming, and human oversight.

  • Supply-chain vulnerabilities
    When training data, model components, or services come from external providers, organizations may lack visibility into provenance, licensing, or latent vulnerabilities. Incorporating supply-chain risk concepts into AI governance supports due diligence on data sources, third-party models and APIs, and development tools, reducing the likelihood of data poisoning, backdoors, or other systemic exposures.

  • Regulatory and compliance readiness
    As governments around the world consider or implement AI-specific regulations, organizations that already follow structured risk management practices - particularly those aligned with widely recognized frameworks like NIST’s - will be better positioned to demonstrate due care, respond to audits, and adapt to new legal requirements.

  • Operational resilience and accountability
    By tying AI risk management to governance processes, documentation, and continuous monitoring, organizations can reduce the likelihood of catastrophic failures, legal liabilities, and reputational harms. The AI RMF structure helps clarify roles and responsibilities across compliance, security, engineering, and product teams.

In short, NIST’s expanding set of materials turns AI risk management from a purely theoretical discussion into a more practical, operational discipline, especially important for generative-AI and other high-impact use cases.
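To make the mitigations above less abstract, here is a minimal sketch of an output gate combining two of them: automated content filtering and human-in-the-loop review for high-impact uses. The blocklist, use-case labels, and `Decision` type are all hypothetical illustrations; production systems would use trained classifiers and policy engines, not keyword matching.

```python
from dataclasses import dataclass

# Illustrative blocklist and use-case labels for this sketch only;
# real deployments would rely on trained safety classifiers.
BLOCKED_TERMS = {"ssn", "credit card number"}
HIGH_IMPACT_USES = {"medical_advice", "loan_decision"}


@dataclass
class Decision:
    allowed: bool
    needs_human_review: bool
    reason: str


def gate_output(text: str, use_case: str) -> Decision:
    """Apply a content filter, then a human-in-the-loop rule for high-impact uses."""
    lowered = text.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return Decision(False, False, f"blocked term: {term}")
    if use_case in HIGH_IMPACT_USES:
        return Decision(True, True, "high-impact use case: route to reviewer")
    return Decision(True, False, "passed automated checks")
```

Logging each `Decision` would also give the documentation trail that the Measure and Manage functions call for.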

Recommendations for Organizations: How to Apply NIST’s AI RMF in Practice

For organizations looking to operationalize the AI RMF and related guidance, the following steps provide a pragmatic starting point:

  1. Create an AI system inventory and risk map
    Catalog existing and planned AI systems, including their purposes, data sources, and dependencies. Use the Map function to classify systems by potential impact and to identify key risk areas (e.g., bias, safety, security, privacy, supply chain).

  2. Integrate AI risk into existing security and privacy programs
    Where possible, align AI controls with established cybersecurity and privacy practices rather than creating entirely separate processes. Leverage existing policies, technical controls, and governance forums to address AI-specific risks.

  3. Adopt generative-AI–specific safeguards where relevant
    For systems that use generative models, draw from the Generative AI Profile and similar resources to implement measures such as prompt-input restrictions, output filtering, robust logging, human-in-the-loop review for high-impact decisions, and red-teaming.

  4. Strengthen AI supply-chain due diligence
    Apply supply-chain risk management concepts to data, models, and external services. This may include vendor questionnaires, technical assessments, contractual requirements regarding security and provenance, and ongoing monitoring of third-party components.

  5. Establish governance structures and clear accountability
    Use the Govern function of the AI RMF to define who is responsible for AI risk across the organization. Clarify roles for policy setting, oversight, implementation, and independent review.

  6. Continuously monitor, measure, and manage risks
    Treat AI risk as dynamic. Implement ongoing evaluation and monitoring processes to detect model drift, performance degradation, new vulnerabilities, or unintended harms, and feed these insights back into governance and design.

By embedding these practices into day-to-day operations, organizations can reduce risk while still realizing the benefits of AI innovation.
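Step 1 above can be sketched as a lightweight inventory record plus a toy "Map"-style classification pass. The fields, impact tiers, and heuristics here are illustrative assumptions, not AI RMF terminology; a real inventory would live in a governance tool with richer metadata.

```python
from dataclasses import dataclass, field


@dataclass
class AISystem:
    """One entry in an AI system inventory (fields are illustrative)."""
    name: str
    purpose: str
    data_sources: list[str]
    third_party_components: list[str]
    impact: str = "low"  # assumed tiers: "low" | "medium" | "high"
    risk_areas: set[str] = field(default_factory=set)


def map_risks(system: AISystem) -> AISystem:
    """Toy mapping step: tag risk areas from simple, assumed heuristics."""
    if system.third_party_components:
        system.risk_areas.add("supply chain")
    if any("pii" in source.lower() for source in system.data_sources):
        system.risk_areas.add("privacy")
    if system.impact == "high":
        system.risk_areas.add("safety")
    return system
```

Even a simple pass like this gives governance, security, and product teams a shared view of which systems need deeper assessment first.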

Broader Impact: AI RMF as a Reference Point in Global AI Governance

Although the AI RMF is voluntary and U.S.-based, it is increasingly referenced by industry groups, standards bodies, and policymakers as a useful structure for AI risk management. Its emphasis on flexibility and alignment with existing security and privacy frameworks makes it attractive for organizations operating across multiple jurisdictions.

In that sense, the AI RMF and its companion documents act as a bridge between high-level ethical principles and concrete regulatory requirements. They do not replace emerging AI laws or sector-specific rules, but they offer a practical way to organize governance efforts and demonstrate that an organization is actively managing AI risks.

In a world where AI is increasingly embedded in critical infrastructure, healthcare, finance, and public services, NIST’s ongoing work on the AI RMF marks an important step toward making risk management a core, foundational element of responsible AI deployment.

References

  • NIST ITL, “AI Risk Management Framework,” n.d.
  • NIST, “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST AI 600-1),” 2024.
  • I.S. Partners, “NIST AI RMF 2025 Updates: What You Need to Know,” 2025.
  • GovCIO Media & Research, “NIST Says Risk Management is Central to Generative AI Adoption,” Nov. 19, 2025.
  • LogicGate, “Understanding the NIST AI Risk Management Framework,” Sep. 4, 2025.
  • Wiz, “AI Risk Management: A TL;DR,” Jan. 31, 2025.
  • Campus Technology, “NIST Proposes New Cybersecurity Guidelines for AI Systems,” Aug. 19, 2025.


Ready to Build an AI-Ready Organization?

Your business already has the data - now it’s time to unlock its potential.
Partner with Accelerra to cultivate a culture of AI & Data Literacy, align your teams, and turn insights into confident action.

Start Learning Today
Contact Sales Team