Governance · March 2026

Understanding the NIST AI Risk Management Framework

A practical guide to the NIST AI Risk Management Framework (AI RMF), a voluntary resource for organizations looking to manage AI risks and build trustworthy AI systems. This article breaks down the framework's structure, core functions, and how it complements standards like ISO 42001.


What Is the NIST AI RMF?

The NIST AI Risk Management Framework (AI RMF 1.0), published in January 2023 by the National Institute of Standards and Technology, is a voluntary framework designed to help organizations manage the risks of AI systems throughout their lifecycle. Developed at the direction of the National Artificial Intelligence Initiative Act of 2020, the AI RMF provides a structured, flexible approach to AI risk management that works across industries, use cases, and organization sizes.

Unlike prescriptive regulations, the AI RMF is voluntary, rights-preserving, non-sector-specific, and use-case agnostic. Organizations can adopt it in varying degrees based on their resources, risk tolerance, and the complexity of their AI systems. It is not a compliance checklist. It is a set of practices and outcomes that help organizations think critically about AI risk and act on it.

Why AI Risk Is Different

Traditional software risk management frameworks were not built for the unique challenges AI introduces. AI systems can be trained on data that changes over time, sometimes significantly and unexpectedly. They are frequently complex, making it difficult to detect and respond to failures when they occur. And they are inherently socio-technical, meaning their risks emerge not just from the technology itself but from the interplay between technical design, human behavior, and the social context in which they operate.

Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable outcomes for individuals and communities. The AI RMF addresses this by providing a structured way to anticipate, assess, and manage these risks before they manifest at scale.

The Seven Characteristics of Trustworthy AI

At the heart of the AI RMF is a definition of what trustworthy AI looks like. The framework identifies seven characteristics that AI systems should exhibit, and these characteristics inform every aspect of risk assessment and management:

  • Valid and Reliable: The system performs consistently and accurately under expected conditions, and its outputs can be trusted.
  • Safe: The system does not endanger human life, health, property, or the environment under normal or reasonably foreseeable conditions.
  • Secure and Resilient: The system withstands adversarial attacks and continues functioning or fails safely under unexpected conditions.
  • Accountable and Transparent: There is clear ownership of AI decisions, and the processes behind those decisions are visible to relevant stakeholders.
  • Explainable and Interpretable: The system's outputs and decision logic can be understood by those who need to act on them, at a level appropriate to the audience.
  • Privacy-Enhanced: The system protects personal data and respects individual privacy throughout its lifecycle.
  • Fair, with Harmful Bias Managed: The system is designed and monitored to minimize unfair bias and inequitable outcomes.

These characteristics are not independent toggles. They interact with each other, and organizations must balance tradeoffs between them. A highly secure but opaque system may sacrifice explainability. A privacy-enhanced system under data-sparse conditions may lose accuracy. The framework recognizes that trustworthiness is a spectrum, and the right balance depends on context.
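One illustrative way to surface those tradeoffs is a per-system scorecard. The sketch below is this article's own construction, not a NIST artifact: it lists the seven characteristics and flags the ones scored below an agreed floor, so tradeoff discussions start from the weakest dimensions.

```python
from enum import Enum


class Characteristic(Enum):
    """The seven trustworthy-AI characteristics named in the AI RMF."""
    VALID_AND_RELIABLE = "valid and reliable"
    SAFE = "safe"
    SECURE_AND_RESILIENT = "secure and resilient"
    ACCOUNTABLE_AND_TRANSPARENT = "accountable and transparent"
    EXPLAINABLE_AND_INTERPRETABLE = "explainable and interpretable"
    PRIVACY_ENHANCED = "privacy-enhanced"
    FAIR_BIAS_MANAGED = "fair, with harmful bias managed"


def weakest_characteristics(
    scores: dict[Characteristic, int], floor: int = 3
) -> list[Characteristic]:
    """Return characteristics scored below the floor (hypothetical 1-5 scale).

    A missing score counts as 0, so unassessed characteristics are flagged too.
    """
    return [c for c in Characteristic if scores.get(c, 0) < floor]
```

A system scoring 4 everywhere except a 2 on explainability would return only that characteristic, making the security-versus-explainability tradeoff described above explicit rather than implicit.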

The AI RMF Core: Four Functions

The operational heart of the framework is the AI RMF Core, organized into four functions: Govern, Map, Measure, and Manage. These functions provide outcomes and actions that enable organizations to address AI risks in practice.

Govern: Setting the Foundation

The GOVERN function cultivates a culture of risk management and establishes cross-cutting governance for AI. It is the only function that applies across all stages of AI risk management and enables the other three functions to work effectively.

Key outcomes include:

  • AI policies, processes, and procedures are in place, transparent, and implemented effectively
  • Accountability structures ensure the right teams and individuals are empowered and trained for AI risk management
  • Workforce diversity and inclusion are prioritized in risk management activities
  • Organizational teams are committed to a culture that considers and communicates AI risk
  • Policies address AI risks arising from third-party software, data, and supply chain issues

Strong governance is the connective tissue between technical AI development and organizational values. Without it, risk management becomes ad hoc and inconsistent.

Map: Understanding Context and Framing Risk

The MAP function establishes the context needed to frame risks related to a specific AI system. In practice, AI actors in charge of one part of the development process often lack full visibility into other parts, making it difficult to anticipate the full range of impacts.

Key outcomes include:

  • The intended purpose, legal context, and deployment settings for the AI system are documented
  • The AI system is categorized by task, method, and capability
  • Expected benefits and costs are benchmarked and understood
  • Risks and benefits are mapped across all components, including third-party software and data
  • Impacts to individuals, groups, communities, and society are characterized

The MAP function should produce enough contextual knowledge for an informed go/no-go decision about whether to proceed with designing, developing, or deploying an AI system. It is the foundation for everything that follows.
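As a rough sketch of how MAP outputs could feed that decision, consider a minimal context record with a go/no-go gate. The field names and gating rule are assumptions for illustration, not part of the framework:

```python
from dataclasses import dataclass, field


@dataclass
class SystemContext:
    """Hypothetical record of MAP-stage context for one AI system."""
    purpose: str                  # documented intended purpose
    deployment_setting: str       # where and how the system will run
    category: str                 # task, method, and capability
    impacts: list[str] = field(default_factory=list)           # characterized impacts
    unresolved_risks: list[str] = field(default_factory=list)  # mapped but unowned risks


def go_no_go(ctx: SystemContext) -> bool:
    """Toy gate: proceed only when purpose and setting are documented
    and no mapped risk remains without an owner or mitigation path."""
    return bool(ctx.purpose and ctx.deployment_setting) and not ctx.unresolved_risks
```

In a real program the gate would be a human decision informed by this record, not an automated boolean; the point is that MAP's outputs are structured enough to support one.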

Measure: Quantifying and Monitoring Risk

The MEASURE function employs quantitative, qualitative, or mixed-method tools to analyze, assess, benchmark, and monitor AI risk. It takes the risks identified in MAP and applies rigorous testing and evaluation.

Key outcomes include:

  • Appropriate methods and metrics are identified and applied, starting with the most significant risks
  • AI systems are evaluated against each trustworthy characteristic (validity, safety, security, fairness, privacy, explainability, and more)
  • Mechanisms for tracking identified risks over time are in place
  • Feedback about the efficacy of measurement itself is gathered and assessed

AI systems should be tested before deployment and regularly while in operation. Where tradeoffs between trustworthy characteristics arise, measurement provides a traceable basis for management decisions, whether that means recalibration, mitigation, or removal of the system entirely.
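A minimal sketch of tracking a measured metric over time, assuming a pre-deployment baseline and a tolerance chosen by the organization (both hypothetical parameters, not NIST-prescribed):

```python
from statistics import mean


def has_drifted(metric_history: list[float], baseline: float,
                tolerance: float = 0.05, window: int = 5) -> bool:
    """Flag a monitored metric (e.g. an accuracy or fairness score) whose
    recent average has moved more than `tolerance` from its baseline.

    Uses the last `window` measurement cycles so a single noisy
    reading does not trigger a management response on its own.
    """
    recent = metric_history[-window:]
    return abs(mean(recent) - baseline) > tolerance
```

A drift flag like this is one input to the MANAGE decisions that follow: recalibrate, mitigate, or remove the system.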

Manage: Acting on Risk

The MANAGE function allocates resources to the risks that have been mapped and measured. It encompasses risk treatment, response, recovery, and communication plans.

Key outcomes include:

  • AI risks are prioritized, responded to, and managed based on impact, likelihood, and available resources
  • Strategies to maximize benefits and minimize negative impacts are planned and documented
  • Third-party AI risks are regularly monitored with controls applied
  • Post-deployment monitoring plans are implemented, including mechanisms for incident response, recovery, and decommissioning

Risk response options include mitigating, transferring, avoiding, or accepting risks. The MANAGE function also requires that mechanisms be in place to supersede, disengage, or deactivate AI systems that demonstrate outcomes inconsistent with their intended use.
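The prioritization and deactivation logic above can be sketched with a simple impact-times-likelihood score. The 1-5 scales, the scoring formula, and the ceiling value are illustrative assumptions; organizations set their own:

```python
RESPONSES = ("mitigate", "transfer", "avoid", "accept")  # the four response options


def prioritize(risks: list[dict]) -> list[dict]:
    """Order risks by impact x likelihood (each assumed on a 1-5 scale),
    highest first, so resources go to the largest exposures."""
    return sorted(risks, key=lambda r: r["impact"] * r["likelihood"], reverse=True)


def needs_deactivation(risk: dict, ceiling: int = 20) -> bool:
    """Signal when a risk score reaches the organization's ceiling and the
    system should be superseded, disengaged, or deactivated."""
    return risk["impact"] * risk["likelihood"] >= ceiling
```

The ceiling check is the programmatic analogue of the framework's requirement that mechanisms exist to deactivate systems behaving inconsistently with their intended use.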

How the AI RMF Complements ISO 42001

Organizations sometimes ask whether they need both the NIST AI RMF and ISO/IEC 42001. The short answer: they serve different but complementary purposes.

ISO 42001 is a certifiable management system standard. It defines the requirements for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS). It tells you what structures and processes your organization needs to have in place.

The NIST AI RMF is a risk-focused practice guide. It defines the characteristics of trustworthy AI and provides a detailed methodology for identifying, measuring, and managing risks at the individual AI system level.

In practice, the two frameworks reinforce each other:

  • ISO 42001's Clause 6 (Planning) calls for AI risk assessment. The NIST AI RMF's MAP and MEASURE functions provide a detailed methodology for conducting those assessments.
  • ISO 42001's Clause 8 (Operation) requires AI-specific controls. The NIST AI RMF's trustworthiness characteristics inform what those controls should address.
  • ISO 42001's Clause 9 (Performance Evaluation) requires monitoring and auditing. The NIST AI RMF's MEASURE and MANAGE functions provide specific metrics and tracking mechanisms.
  • The NIST AI RMF's GOVERN function aligns directly with ISO 42001's requirements for leadership commitment, policies, and accountability structures.

At DevPro, we use the NIST AI RMF as a core input to our ISO 42001-compliant governance implementations. The framework's trustworthiness characteristics shape our risk assessment methodology, and its four-function model informs how we structure operational controls and monitoring programs.

AI Actors: Who Is Responsible?

The AI RMF introduces the concept of AI actors, defined as those who play an active role in the AI system lifecycle. This includes organizations and individuals that design, develop, deploy, evaluate, or use AI systems.

The framework emphasizes that AI risk management is not the responsibility of a single team. It requires diverse and multidisciplinary perspectives, including views from actors outside the organization. Effective governance distributes responsibility across:

  • Executive leadership that takes responsibility for decisions about AI risks
  • Technical teams that design, build, and test AI systems
  • Governance and compliance teams that set policies and monitor adherence
  • End users and affected communities whose feedback informs risk assessment and system improvement

Getting Started with the AI RMF

Adopting the AI RMF does not require implementing every subcategory on day one. The framework is designed to be applied incrementally:

  • Start with GOVERN: Establish the governance structures, policies, and accountability needed to support the other three functions.
  • Move to MAP: For your highest-risk AI systems, define context, document intended use, and characterize potential impacts.
  • Apply MEASURE: Select metrics and testing approaches for the most significant risks identified in MAP.
  • Implement MANAGE: Prioritize risks, allocate resources, and put monitoring and response plans in place.

The process is iterative. Each cycle through the functions strengthens your organization's ability to manage AI risk and builds toward the kind of operational maturity that frameworks like ISO 42001 formalize.

DevPro's AI governance consulting helps organizations operationalize both the NIST AI RMF and ISO 42001, turning frameworks into working governance systems that scale with your AI portfolio.


Sources: NIST, "Artificial Intelligence Risk Management Framework (AI RMF 1.0)" (January 2023); National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283); ISO/IEC 42001:2023; NIST AI RMF Playbook.