AI Governance

NIST AI RMF 1.0 Explained: The Four Functions Every AI Program Needs

16 min read | Updated March 22, 2026

Bottom Line Up Front

The NIST AI RMF (AI 100-1) organizes AI risk management into four core functions: Govern, Map, Measure, and Manage. Released in January 2023 by the National Institute of Standards and Technology, the framework provides voluntary, risk-based guidance that applies across industries, AI system types, and organizational sizes. Eighty-eight percent of organizations deploy AI, but only 21% report mature AI governance [McKinsey 2025; Deloitte 2026].

Eighty-eight percent of organizations now use AI in at least one business function [McKinsey State of AI 2025]. Only about one in five have a mature governance framework to manage what those systems produce [Deloitte State of AI in the Enterprise, 8th Edition, 2026]. The math is straightforward: roughly four out of five organizations deploying AI systems are doing so without a structured method for identifying, measuring, or managing the risks those systems create.

NIST published the answer in January 2023. The AI Risk Management Framework (AI 100-1) gives organizations four functions, nineteen categories, and 72 subcategories for building an AI risk program from the ground up. It is voluntary, sector-agnostic, and designed to scale from a three-person startup to a federal agency. Three years after release, the framework has become the reference point for U.S. AI governance, the baseline for state-level AI legislation in Colorado and Texas, and the foundation that maps directly to both the EU AI Act and ISO/IEC 42001.

Four functions determine whether your AI program is a documented governance system or a collection of policies gathering dust. Start with the one most organizations skip entirely.

What Makes NIST AI RMF Different from Other AI Governance Frameworks?

The NIST AI RMF stands apart from competing frameworks because it was built as a risk management system, not a compliance checklist. ISO/IEC 42001, published in December 2023, provides a certifiable management system with 38 controls across 10 clauses and 4 annexes [ISO/IEC 42001:2023]. The EU AI Act, entering full enforcement in August 2026, imposes binding legal obligations with penalties reaching 35 million euros or 7% of global revenue [EU AI Act Art. 99]. The NIST framework does neither of these things. Instead, it provides a voluntary, adaptable structure that organizations use to build AI governance tailored to their risk tolerance, sector requirements, and organizational maturity. Organizations already implementing the NIST AI RMF have approximately 60-70% of the foundation needed for EU AI Act compliance, and NIST publishes official crosswalk documents mapping its functions to both ISO 42001 and the EU AI Act [NIST AI RMF Crosswalk Documents].

Voluntary Does Not Mean Optional

Colorado’s AI Act (SB 24-205) grants an explicit affirmative defense to organizations accused of algorithmic discrimination. The defense requires proof that the organization discovered and cured the violation, plus documented compliance with a “nationally or internationally recognized risk management framework” for AI. The statute names the NIST AI RMF directly [Colorado SB 24-205 Section 6-1-1703]. Texas TRAIGA (the Texas Responsible Artificial Intelligence Governance Act) follows the same model. When voluntary frameworks become the standard against which courts measure “reasonable care,” voluntary adoption becomes a strategic imperative.

Framework Architecture

The framework divides into two parts. Part 1 describes AI risks, the characteristics of trustworthy AI systems (valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed), and the organizational context for risk management. Part 2 presents the four core functions: Govern, Map, Measure, and Manage. Each function contains categories and subcategories that organizations implement based on their specific AI risk profile [NIST AI 100-1]. Govern cuts across the other three functions, forming the connective tissue that holds the entire program together.

Download NIST AI 100-1 and the AI RMF Playbook from nist.gov. Read Part 1 for organizational context and Part 2 for the four functions. Map your existing AI governance activities to the framework’s categories before building new processes. Identify gaps where no current activity exists. Prioritize Govern function implementation first, as it establishes the decision-making structure the other three functions depend on.
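
The gap-mapping step can start as something as simple as a script. The sketch below is illustrative only, assuming a hypothetical record of existing governance activities keyed by AI RMF category ID; it is not an official NIST tool, and the activity names are invented.

```python
# Minimal gap-analysis sketch: which AI RMF categories have no
# existing activity mapped to them? Category IDs follow the AI RMF
# Core (GOVERN 1-6, MAP 1-5, MEASURE 1-4, MANAGE 1-4).

AI_RMF_CATEGORIES = [
    *(f"GOVERN {i}" for i in range(1, 7)),
    *(f"MAP {i}" for i in range(1, 6)),
    *(f"MEASURE {i}" for i in range(1, 5)),
    *(f"MANAGE {i}" for i in range(1, 5)),
]

# Hypothetical inventory of what the organization already does,
# keyed by the AI RMF category each activity satisfies.
existing_activities = {
    "GOVERN 1": ["Model risk policy v2.1"],
    "MAP 1": ["Intake questionnaire for new AI projects"],
    "MEASURE 2": ["Quarterly model validation reports"],
}

def find_gaps(activities: dict[str, list[str]]) -> list[str]:
    """Return AI RMF categories with no mapped activity."""
    return [c for c in AI_RMF_CATEGORIES if not activities.get(c)]

if __name__ == "__main__":
    for category in find_gaps(existing_activities):
        print(f"GAP: no current activity maps to {category}")
```

Running the sketch against a real activity inventory produces the prioritized gap list that the Govern-first implementation sequence below works through.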

Govern: The Function Every AI Program Needs First

Only 7% of organizations have achieved advanced AI governance maturity with real-time policy enforcement, according to the 2026 AI Risk and Readiness Report [Cybersecurity Insiders 2026]. The Govern function addresses this gap directly. GOVERN is the only cross-cutting function in the NIST AI RMF, meaning it does not operate in sequence with Map, Measure, and Manage. It operates through all three simultaneously. Govern establishes the organizational policies, processes, roles, and culture that determine how AI risk decisions get made, who makes them, and what accountability structures apply when those decisions produce harm. Without Govern, the other three functions produce documentation without authority. Risk assessments accumulate, metrics get collected, and remediation plans get written, but nobody has the mandate to enforce them [NIST AI 100-1 Section 5].

What Govern Covers

The Govern function contains six categories (GOVERN 1 through GOVERN 6) covering policies and procedures, accountability structures, workforce diversity and AI expertise, organizational culture around risk, stakeholder engagement, and oversight of third-party AI systems. GOVERN 1 requires documented policies for AI risk management. GOVERN 4 addresses whether the organization’s culture supports surfacing AI risks without retaliation. GOVERN 6 targets third-party and shadow AI systems that operate outside formal procurement channels [NIST AI 100-1 GOVERN 1-6].

Why Organizations Skip Govern

Most teams start with Map or Measure because those functions produce tangible artifacts: risk inventories, bias metrics, performance benchmarks. Govern produces governance documents, committee charters, and escalation procedures. These feel bureaucratic until an AI system produces a discriminatory output and the organization discovers that no documented process exists for who decides the remediation path, who communicates to affected parties, or who reports to regulators.

Establish an AI governance committee with representatives from legal, risk, engineering, and the business unit deploying AI systems. Document a charter specifying decision authority for AI risk acceptance, risk remediation timelines, and incident escalation paths. Assign a named individual (not a team) as the accountable executive for AI risk. Create a policy requiring AI system inventory registration before deployment. Review and update governance documents annually or when regulatory requirements change.
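
One way to make the registration policy enforceable rather than aspirational is to encode it as a deployment gate. The sketch below is a minimal illustration under assumed field names (none come from the framework text); in practice the check would live in a CI/CD pipeline or procurement workflow.

```python
from dataclasses import dataclass

# Hypothetical governance record for one AI system; field names are
# illustrative, mirroring the checklist above.
@dataclass
class GovernanceRecord:
    system_name: str
    accountable_executive: str   # a named individual, not a team
    charter_approved: bool       # governance committee has signed off
    registered_in_inventory: bool

def may_deploy(record: GovernanceRecord) -> bool:
    """Enforce the policy: no deployment without inventory
    registration, committee approval, and a named executive."""
    return (
        record.registered_in_inventory
        and record.charter_approved
        and bool(record.accountable_executive.strip())
    )
```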

How Does the Map Function Identify AI Risks Before They Materialize?

Fifty-one percent of organizations reported at least one negative AI-related incident in the past twelve months [McKinsey State of AI 2025]. The Map function exists to reduce that number by forcing organizations to understand their AI systems before measuring or managing anything. MAP establishes context: what AI systems exist, what purposes they serve, who they affect, and what risks they carry based on their specific deployment environment. This is where the framework departs from generic risk assessment. MAP does not ask “what are your AI risks?” in the abstract. It asks “what does this specific system do, in this specific context, affecting these specific populations, with these specific failure modes?” The answers produce a risk profile that is unique to each AI system [NIST AI 100-1 Section 6].

The Five MAP Categories

MAP contains five categories (MAP 1 through MAP 5). MAP 1 establishes the intended purpose and context of the AI system. MAP 2 categorizes the system itself, including its capabilities and methods. MAP 3 examines the benefits and costs of the AI system across stakeholders. MAP 4 maps risks and benefits for all components of the system, including third-party software and data. MAP 5 characterizes the impacts to individuals, groups, communities, organizations, and society [NIST AI 100-1 MAP 1-5].

Context Changes Everything

An AI model that recommends products on an e-commerce site carries different risks than the same model architecture recommending treatment options in a hospital. The Map function forces this distinction. A natural language processing model used for internal document summarization has a different risk profile than the same model deployed in a customer-facing chatbot handling insurance claims. Mapping requires organizations to document the specific deployment context, data sources, affected populations, and failure consequences for each system individually. This is why AI risk assessment under the NIST framework scales linearly with the number of AI systems: each system gets its own MAP analysis.

Build a system-level AI inventory capturing: system name, vendor or internal team, deployment date, intended purpose, data sources, affected populations, and deployment context. For each system, document the specific risks it creates based on its use case, not generic AI risks. Classify each system’s risk tier based on the severity and likelihood of potential harms to the affected population. Update the MAP analysis when the system’s purpose, data sources, or deployment context changes.
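
A minimal sketch of what one such inventory record might look like follows, assuming hypothetical field names and an illustrative severity-times-likelihood scoring rule (the framework does not prescribe a tiering formula; assessors supply the 1-5 scores).

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

# Hypothetical system-level MAP record; one per AI system.
@dataclass
class AISystemRecord:
    system_name: str
    owner: str                      # vendor or internal team
    intended_purpose: str
    deployment_context: str         # e.g., "internal", "customer-facing"
    data_sources: list[str] = field(default_factory=list)
    affected_populations: list[str] = field(default_factory=list)
    severity: int = 1               # 1 (negligible) to 5 (severe)
    likelihood: int = 1             # 1 (rare) to 5 (near-certain)

def classify(record: AISystemRecord) -> RiskTier:
    """Tier by severity x likelihood: an illustrative rule,
    not a NIST-mandated formula."""
    score = record.severity * record.likelihood
    if score >= 15:
        return RiskTier.HIGH
    if score >= 6:
        return RiskTier.MEDIUM
    return RiskTier.LOW
```

The same record gets re-scored whenever purpose, data sources, or deployment context change, which is what keeps the MAP analysis current.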

Measure: Quantifying AI Risk with Metrics That Matter

Organizations managed an average of four AI-related risks in 2025, up from two in 2022, indicating growing awareness but limited measurement maturity [McKinsey State of AI 2025]. The Measure function converts the qualitative risk profiles produced by MAP into quantifiable, trackable metrics. MEASURE establishes the methods, tools, and benchmarks organizations use to evaluate AI systems against the trustworthiness characteristics defined in Part 1 of the framework. Validity, reliability, fairness, bias, security, privacy, and transparency all require specific measurement approaches. A bias metric suitable for a hiring algorithm differs from one suitable for a content recommendation engine. MEASURE requires organizations to select, validate, and document the measurement methodologies appropriate to each AI system’s risk profile [NIST AI 100-1 Section 7].

What Gets Measured

MEASURE contains four categories (MEASURE 1 through MEASURE 4). MEASURE 1 requires appropriate methods and metrics for quantifying AI risks. MEASURE 2 focuses on evaluating AI systems for trustworthiness characteristics. MEASURE 3 addresses mechanisms for tracking identified risks over time. MEASURE 4 covers gathering and assessing feedback about the efficacy of the measurement approaches themselves [NIST AI 100-1 MEASURE 1-4].

The Measurement Gap

Most organizations measure AI performance (accuracy, throughput, latency) but not AI risk (bias rates, privacy exposure, explainability scores). Performance metrics tell you the system works. Risk metrics tell you the system works safely. The Measure function closes this gap by requiring both dimensions. The Deloitte 2026 report found that 73% of organizations cite data privacy as their top AI-related concern, yet fewer than a quarter regularly measure AI risk maturity [Deloitte State of AI in the Enterprise, 8th Edition, 2026]. The distance between “concerned about” and “actively measuring” defines the Measure function’s value.

Define metrics for each AI system aligned to its specific risk profile. At minimum, track: accuracy and error rates by demographic subgroup (fairness), data access and retention patterns (privacy), model explanation capability (transparency), and system uptime and failure modes (reliability). Establish measurement frequency: quarterly for low-risk systems, monthly for high-risk systems. Document measurement methodologies so auditors and regulators see exactly how you derived each metric. Compare results against baselines established during MAP.
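
To make the fairness item concrete, the sketch below computes accuracy by demographic subgroup and the gap between the best- and worst-served groups. It is one common fairness signal, not the framework's prescribed metric, and it assumes each evaluation record carries a group key, a ground-truth label, and a prediction.

```python
from collections import defaultdict

def accuracy_by_subgroup(records: list[dict]) -> dict[str, float]:
    """records: [{"group": ..., "label": ..., "prediction": ...}, ...]"""
    totals: dict[str, int] = defaultdict(int)
    correct: dict[str, int] = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        correct[r["group"]] += int(r["label"] == r["prediction"])
    return {g: correct[g] / totals[g] for g in totals}

def max_accuracy_gap(records: list[dict]) -> float:
    """Spread between best- and worst-served subgroups. A widening
    gap is a risk metric, not a performance metric."""
    acc = accuracy_by_subgroup(records)
    return max(acc.values()) - min(acc.values())
```

Tracking the gap quarterly (or monthly for high-risk systems) against the baseline set during MAP is what turns a one-time fairness audit into a MEASURE 3 monitoring mechanism.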

How Does the Manage Function Close the AI Risk Loop?

Sixty-eight percent of organizations describe their AI governance as reactive or still developing [Cybersecurity Insiders AI Risk and Readiness Report 2026]. The Manage function transforms reactive postures into proactive ones. MANAGE takes the risks identified by MAP and quantified by MEASURE and applies specific treatments: accept, transfer, mitigate, or avoid. This is where governance produces outcomes. MANAGE covers risk prioritization, resource allocation for remediation, communication to stakeholders when risks materialize, and continuous monitoring of treated risks to confirm the treatment holds over time. The function completes the risk management loop, connecting back to GOVERN for decision authority and to MAP for context updates when conditions change [NIST AI 100-1 Section 8].

The Four MANAGE Categories

MANAGE contains four categories (MANAGE 1 through MANAGE 4). MANAGE 1 addresses risk prioritization and resource allocation based on impact assessments. MANAGE 2 covers risk treatment strategies and their implementation plans. MANAGE 3 focuses on managing risks and benefits from third-party AI components, including vendor risk management and supply chain considerations. MANAGE 4 requires that risk treatments, including response, recovery, and communication plans for identified risks, are documented and monitored regularly [NIST AI 100-1 MANAGE 1-4].

Third-Party AI Risk

MANAGE 3 deserves special attention. Most organizations consume AI through vendor products, APIs, and embedded models rather than building from scratch. Microsoft reports that 80% of Fortune 500 companies use active AI agents [Microsoft Security Blog, February 2026]. When the AI system producing risk sits inside a third-party product, the organization deploying it still bears the governance responsibility. MANAGE 3 requires organizations to evaluate, monitor, and document third-party AI risks with the same rigor as internally built systems. This aligns directly with enterprise AI governance principles that extend accountability beyond internal development teams.

Build a risk treatment register for every AI system in your inventory. For each identified risk, document: the treatment decision (accept, mitigate, transfer, avoid), the rationale, the responsible party, the implementation timeline, and the verification method. Assign risk owners at the business-unit level, not only in IT or security. Establish a cadence for reviewing treatment effectiveness: quarterly for high-risk systems, semi-annually for low-risk systems. Create an AI incident response playbook specifying notification timelines, escalation paths, and communication templates for affected stakeholders.
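
A treatment register entry can be as lightweight as the sketch below, with hypothetical field names aligned to the checklist above; the overdue check is the kind of query a quarterly review cadence would run.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept"
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    AVOID = "avoid"

# Hypothetical risk treatment register entry; one per identified risk.
@dataclass
class TreatmentEntry:
    system_name: str
    risk_description: str
    treatment: Treatment
    rationale: str
    risk_owner: str              # business-unit owner, not only IT
    due: date                    # implementation timeline
    verification_method: str
    verified: bool = False

def overdue(register: list[TreatmentEntry], today: date) -> list[TreatmentEntry]:
    """Entries past their timeline without a verified treatment."""
    return [e for e in register if not e.verified and e.due < today]
```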

Implementing the NIST AI RMF: From Document to Operating System

The GenAI Profile (NIST AI 600-1), released in July 2024, extended the base framework to address twelve risks specific to generative AI, including hallucination, data poisoning, intellectual property infringement, and CBRN information synthesis [NIST AI 600-1]. In December 2025, NIST released the draft Cybersecurity Framework Profile for AI (NIST IR 8596), mapping AI security controls to the CSF 2.0 functions [NIST IR 8596]. Then in February 2026, NIST launched the AI Agent Standards Initiative to develop governance standards for autonomous AI agents [NIST CAISI, February 2026]. Each release builds on the AI RMF’s four-function architecture. Organizations that implement the base framework today position themselves to absorb these extensions without restructuring their governance program.

The Implementation Sequence

Start with Govern. Establish the decision-making structure before generating data. Then Map your existing AI systems. Most organizations discover systems they did not know existed during this phase, particularly shadow AI deployments adopted by business units without formal approval. Measure follows Map because you need to know what you are measuring before selecting metrics. Manage follows Measure because you need quantified risk data before making treatment decisions. The full cycle takes most organizations three to six months for an initial pass, with ongoing iteration as new AI systems enter the environment.

Mapping to Other Frameworks

Organizations operating under multiple regulatory requirements benefit from the NIST AI RMF’s alignment properties. GOVERN maps to ISO 42001 Clauses 5 (Leadership) and 6 (Planning). MAP aligns with EU AI Act Articles 9 (Risk Management System) and 13 (Transparency). MEASURE corresponds to ISO 42001 Clause 9 (Performance Evaluation) and EU AI Act Article 10 (Data Governance). MANAGE maps to ISO 42001 Clause 10 (Improvement) and EU AI Act Articles 62-73 (Market Surveillance and Enforcement). NIST publishes official crosswalk documents for both mappings, reducing duplicate compliance effort for organizations subject to multiple standards [NIST AI RMF Crosswalk Documents].

| NIST AI RMF Function | ISO 42001 Alignment | EU AI Act Alignment |
|---|---|---|
| GOVERN | Clause 5 (Leadership), Clause 6 (Planning) | Art. 9 (Risk Management), Art. 17 (Quality Management) |
| MAP | Clause 8 (Operation), Annex A Controls | Art. 9 (Risk Management), Art. 13 (Transparency) |
| MEASURE | Clause 9 (Performance Evaluation) | Art. 10 (Data Governance), Art. 15 (Accuracy) |
| MANAGE | Clause 10 (Improvement) | Art. 62-73 (Surveillance), Art. 73 (Serious Incident Reporting) |

Build a single AI governance program using the NIST AI RMF as the base layer. Add ISO 42001 controls where certification is required. Layer EU AI Act requirements on top for AI systems deployed in or affecting EU residents. Use the official NIST crosswalk documents to map control activities across frameworks and eliminate duplicate effort. Start the GenAI Profile (AI 600-1) gap analysis for any generative AI systems in production. Monitor NIST IR 8596 for the final Cybersecurity Framework Profile for AI, expected in 2026.
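
For teams tracking obligations across frameworks, the crosswalk can live as a simple lookup structure. The sketch below transcribes the table above; the clause and article labels come from this article, not from the official NIST crosswalk documents, which remain the authoritative mapping.

```python
# Crosswalk lookup keyed by AI RMF function, transcribed from the
# alignment table above (illustrative; verify against the official
# NIST crosswalk documents before relying on it).
CROSSWALK = {
    "GOVERN":  {"iso_42001": ["Clause 5", "Clause 6"],
                "eu_ai_act": ["Art. 9", "Art. 17"]},
    "MAP":     {"iso_42001": ["Clause 8", "Annex A"],
                "eu_ai_act": ["Art. 9", "Art. 13"]},
    "MEASURE": {"iso_42001": ["Clause 9"],
                "eu_ai_act": ["Art. 10", "Art. 15"]},
    "MANAGE":  {"iso_42001": ["Clause 10"],
                "eu_ai_act": ["Art. 62-73", "Art. 73"]},
}

def obligations(function: str) -> list[str]:
    """All mapped ISO clauses and EU AI Act articles for one
    AI RMF function."""
    entry = CROSSWALK[function]
    return entry["iso_42001"] + entry["eu_ai_act"]
```

Tagging each control activity with the AI RMF function it implements, then expanding via a lookup like this, is how one governance program satisfies three frameworks without triplicating evidence.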

The NIST AI RMF is the most practical starting point for AI governance in 2026. The framework’s four-function structure (Govern, Map, Measure, Manage) converts abstract AI risk into documented, measurable, and auditable processes. Organizations that implement it gain a governance foundation that maps to ISO 42001 and satisfies roughly 60-70% of EU AI Act requirements. With state-level AI laws in Colorado and Texas granting affirmative defenses to organizations demonstrating NIST AI RMF compliance, the framework has moved from voluntary best practice to de facto legal protection. Build the governance structure first. The rest follows.

Frequently Asked Questions

What is the NIST AI RMF and who published it?

The NIST AI Risk Management Framework (AI 100-1) is a voluntary, risk-based guidance document published by the National Institute of Standards and Technology in January 2023 for managing AI system risks across the full AI lifecycle. The framework applies to all AI system types, all sectors, and all organizational sizes, making it the broadest U.S. AI governance standard available [NIST AI 100-1].

What are the four core functions of the NIST AI RMF?

The four core functions are Govern, Map, Measure, and Manage, with Govern serving as a cross-cutting function that operates through the other three simultaneously. Govern establishes policies and accountability. Map identifies AI systems and their contexts. Measure quantifies risks using appropriate metrics. Manage applies risk treatments and monitors their effectiveness [NIST AI 100-1].

How does the NIST AI RMF compare to ISO 42001?

The NIST AI RMF provides flexible, voluntary risk management guidance while ISO/IEC 42001 establishes a certifiable management system with 38 formal controls across 10 clauses. Organizations needing third-party certification choose ISO 42001. Organizations seeking adaptable risk guidance start with NIST AI RMF. The two frameworks are complementary, and NIST publishes an official crosswalk document mapping its functions to ISO 42001 clauses [NIST AI RMF Crosswalk Documents].

Does the NIST AI RMF help with EU AI Act compliance?

Organizations implementing the NIST AI RMF satisfy approximately 60-70% of EU AI Act requirements, particularly for risk management (Article 9), transparency (Article 13), and data governance (Article 10). Gaps remain in conformity assessments, CE marking, incident reporting timelines, and specific penalty structures that the voluntary NIST framework does not address [NIST AI RMF Crosswalk Documents, EU AI Act].

Is NIST AI RMF compliance legally required?

The NIST AI RMF is voluntary at the federal level, but state-level AI laws create strong adoption incentives. Colorado’s AI Act (SB 24-205) and Texas TRAIGA both grant affirmative defenses to organizations that demonstrate compliance with nationally recognized AI risk management frameworks, naming the NIST AI RMF explicitly [Colorado SB 24-205 Section 6-1-1703]. This transforms voluntary adoption into a documented legal protection strategy.

What is the NIST GenAI Profile (AI 600-1)?

NIST AI 600-1, published in July 2024, extends the AI RMF specifically for generative AI by identifying twelve risks unique to or worsened by generative AI systems, including hallucination, data poisoning, CBRN information synthesis, and intellectual property infringement. The GenAI Profile maps each risk to the existing Govern, Map, Measure, and Manage functions and provides suggested mitigations [NIST AI 600-1].

How long does NIST AI RMF implementation take?

Initial implementation of the four core functions takes most organizations three to six months, depending on the number of AI systems in the inventory and the maturity of existing governance structures. Organizations with established risk management programs (SOC 2, ISO 27001) complete the process faster because existing control frameworks provide transferable governance infrastructure.

What is the NIST AI Agent Standards Initiative?

NIST launched the AI Agent Standards Initiative in February 2026 through its Center for AI Standards and Innovation (CAISI) to develop governance standards for autonomous AI agents capable of independent action. The initiative focuses on three pillars: industry-led standards development, open-source protocol creation, and research into AI agent security and identity [NIST CAISI, February 2026].

Get The Authority Brief

Weekly compliance intelligence for security leaders. Frameworks decoded. Audit strategies explained. Regulatory updates analyzed.

Need hands-on guidance? Book a free technical discovery call to discuss your compliance program.

Discipline in preparation. Confidence in the room.

Josef Kamara
CPA · CISSP · CISA · Security+

Former KPMG and BDO. Senior manager over third-party risk attestations and IT audits at a top-five global firm, and former technology risk leader directing the IT audit function at a Fortune 500 medical technology company. Advises growth-stage SaaS companies on SOC 2, HIPAA, and AI governance certifications.