The federal Chief Artificial Intelligence Officer reads OMB Memorandum M-25-21 once. The deliverables are clear. A CAIO designated within 60 days. An AI Governance Board convened within 90 days. A public AI Strategy within 180 days. A high-impact AI determination process documented within 365 days. A use case inventory submitted annually. Pre-deployment testing for high-impact systems. Ongoing monitoring. Appeals workflows. The list is long, the deadlines are real, and the memo does not say how to produce any of these artifacts in a way that survives an Inspector General review.
The framework that does is the NIST AI Risk Management Framework 1.0, published as NIST AI 100-1 in January 2023. Four functions: Govern, Map, Measure, Manage. The federal AI program leads who are delivering successfully picked NIST AI RMF 1.0 (NIST AI 100-1) as their operating system early. The ones still struggling treated M-25-21 as a checklist and the framework as theory.
The honest read on the federal landscape: agencies reported 3,611 AI use cases across 56 submitting agencies in the 2025 consolidated inventory, roughly double the 1,757 reported in the 2024 cycle. That growth is the demand signal. The crosswalk that follows is what an AI Governance Board should build this quarter.
NIST AI RMF 1.0 (NIST AI 100-1) is the operating system OMB Memorandum M-25-21 quietly assumes federal agencies are already running. The memo specifies the deliverables: a Chief Artificial Intelligence Officer (CAIO), an AI Governance Board, an annual use case inventory, a high-impact determination process, pre-deployment testing, ongoing monitoring, and a public AI strategy. The four AI RMF functions of Govern, Map, Measure, and Manage produce the evidence each obligation requires.
The M-25-21 Architecture in Ninety Seconds
OMB Memorandum M-25-21, “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust,” was issued April 3, 2025. It rescinded the Biden-era M-24-10 and replaced the prior bifurcation between “rights-impacting” and “safety-impacting” AI with a single category: high-impact AI. The companion procurement memo, OMB Memorandum M-25-22, replaced M-24-18 on the same date. Both memos implement Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” signed January 23, 2025, which rescinded the prior EO 14110.
The memo applies to all Executive Branch agencies, including independent regulatory agencies as defined in 44 U.S.C. § 3502(1) and § 3502(5). It does not cover National Security Systems. The Intelligence Community is exempt from §§ 4(a) through (b) and from inventory requirements. Some requirements are limited to agencies covered by the Chief Financial Officers Act under 31 U.S.C. § 901(b).
The Deadlines That Drive the Operating Model
The Consolidated Table of Actions on page 25 of M-25-21 sets the cadence:

- 60 days (June 2, 2025): designate or retain a CAIO under § 3(a)(i).
- 90 days (July 2, 2025): convene the AI Governance Board, for CFO Act agencies, under § 3(a)(ii).
- 180 days (September 30, 2025): CFO Act agencies publish a public AI Strategy under § 2(a) and submit the agency Compliance Plan to OMB under § 3(b)(ii), on a two-year cadence through 2036.
- 270 days (December 29, 2025): generative AI policies and updated internal policies under §§ 3(b)(iii) and (iv).
- 365 days (April 3, 2026): document implementation of minimum risk management practices for high-impact AI use cases under § 4(a)(i).
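The cadence is simple date arithmetic from the April 3, 2025 issuance date. A minimal sketch, useful for building an internal compliance calendar (the obligation labels are this sketch's own shorthand for the sections cited above):

```python
from datetime import date, timedelta

# M-25-21 was issued April 3, 2025; the Consolidated Table of Actions
# expresses each deadline as a day offset from issuance.
ISSUED = date(2025, 4, 3)

DEADLINES = {
    "Designate or retain CAIO (§ 3(a)(i))": 60,
    "Convene AI Governance Board (§ 3(a)(ii))": 90,
    "Publish AI Strategy / submit Compliance Plan (§§ 2(a), 3(b)(ii))": 180,
    "Generative AI and updated internal policies (§§ 3(b)(iii)-(iv))": 270,
    "Document minimum risk practices, high-impact AI (§ 4(a)(i))": 365,
}

for obligation, days in DEADLINES.items():
    due = ISSUED + timedelta(days=days)
    print(f"{due.isoformat()}  ({days:>3}d)  {obligation}")
# 2025-06-02, 2025-07-02, 2025-09-30, 2025-12-29, 2026-04-03
```

Running the offsets confirms the calendar dates in the table above, including the September 30, 2025 landing of the 180-day deadline.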
The High-Impact AI Definition
The high-impact AI definition in M-25-21 § 5 covers AI with an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on civil rights, civil liberties, or privacy; access to education, housing, insurance, credit, employment, and other programs; access to critical government resources or services; human health and safety; critical infrastructure or public safety; or strategic assets or resources. Section 6 lists categories of AI presumed to be high-impact, which streamlines the determination for common use case patterns. Detailed determination methodology is covered in high-impact AI classification under the federal framework.
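The determination logic described above is a two-step test: check the § 6 presumed categories first, then apply the § 5 principal-basis outcome test. A minimal sketch of that decision order; the function name, boolean inputs, and the omission of the rebuttal process are this sketch's simplifications, not language from the memo:

```python
def is_high_impact(presumed_category: bool,
                   principal_basis: bool,
                   significant_effect: bool) -> bool:
    """Two-step high-impact test sketched from M-25-21 §§ 5-6.

    presumed_category:  use case falls in a § 6 presumed category
    principal_basis:    AI output is a principal basis for the decision
    significant_effect: decision has legal, material, binding, or
                        significant effect on a § 5 interest
    """
    if presumed_category:
        # § 6 presumption attaches (rebuttal process not modeled here).
        return True
    # Otherwise, § 5 requires both prongs of the outcome test.
    return principal_basis and significant_effect

# A fraud-scoring system presumed under § 6 is high-impact regardless
# of the outcome-test inputs.
print(is_high_impact(True, False, False))   # True
```

The ordering matters operationally: the § 6 check is cheap and streamlines triage, while the § 5 outcome test requires the per-use-case analysis that Map produces.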
NIST AI RMF 1.0 in Federal Context
NIST AI RMF 1.0 organizes risk management into four functions. Each one produces a category of evidence the M-25-21 record will require.
Govern produces the standing committee structure, accountability charter, AI use policy, and third-party oversight regime. In federal context, Govern is where the CAIO charter, AI Governance Board terms of reference, and the CFO Act agency’s AI Strategy live. NIST AI 100-1 Section 5 organizes Govern into six categories addressing organizational policies, accountability, workforce, third-party engagement, and processes for tracking risks and incidents.
Map produces the per-system context analysis: intended purpose, deployment environment, affected populations, and identified risks. In federal context, Map is where the high-impact determination memo for each AI use case originates. NIST AI 100-1 Section 6 organizes Map into five categories covering context establishment, AI system categorization, capability and risk identification, third-party considerations, and impact characterization.
Measure produces the quantitative and qualitative evidence: pre-deployment test plans, validity and bias evaluations, fairness metrics by demographic subgroup, and security testing. Measure is where the AI Impact Assessment required by M-25-21 § 4(b)(ii) lives, along with the pre-deployment testing required by § 4(b)(i). NIST AI 100-1 Section 7 organizes Measure into four categories covering metrics selection, evaluation, tracking, and feedback.
Manage produces the risk treatment register, ongoing monitoring evidence, incident response artifacts, and human-oversight design. Manage is where the continuous monitoring program required by § 4(b)(iii) lives, along with human oversight, appeals and remedies, and end-user feedback obligations under §§ 4(b)(iv) through (vii). NIST AI 100-1 Section 8 organizes Manage into four categories covering risk prioritization, treatment, third-party monitoring, and risk treatment communications.
The Crosswalk: M-25-21 Obligations Mapped to AI RMF Functions
The center of gravity for any agency AI program is the table that maps each memo obligation to the framework function that produces the required evidence.
| M-25-21 Obligation | Section | AI RMF Function | What the Auditor Will Request |
|---|---|---|---|
| Designate or retain a CAIO | § 3(a)(i) | Govern | CAIO appointment letter, position description, written authorities including high-impact determination authority |
| Convene the AI Governance Board | § 3(a)(ii) | Govern | Board charter, member roster covering IT, cybersecurity, data, budget, legal, privacy, civil rights, civil liberties; meeting cadence; dated minutes |
| Develop and publish AI Strategy | § 2(a) | Govern | Public AI Strategy URL, maturity assessment, plans for infrastructure, data quality, workforce, and governance |
| Submit and post Compliance Plan | § 3(b)(ii) | Govern | Compliance plan submission record, OMB template completion, two-year resubmission cadence |
| Update internal policies | §§ 3(b)(iii)-(iv) | Govern | Revised IT and acquisition policies, generative AI acceptable use policy |
| Maintain AI use case inventory | § 3(b)(v) | Govern + Map | Annual inventory submission, public version, per-use-case context documentation |
| Determine high-impact AI use cases | § 4(a) | Map | Written determination memo per use case, evidence of CAIO independent review, criteria mapped to § 6 presumed-high-impact categories |
| Conduct pre-deployment testing | § 4(b)(i) | Measure | Test plan reflecting real-world conditions, alternative test methodology where source code is unavailable, mitigations plan |
| Complete AI Impact Assessment | § 4(b)(ii) | Measure + Map | Documented assessment covering purpose, data quality, civil-rights impacts, reassessment schedule, costs, independent reviewer comments, signed risk acceptance |
| Conduct ongoing monitoring | § 4(b)(iii) | Manage | Monitoring schedule, drift and adverse-impact detection evidence, mitigation actions taken |
| Human training and oversight | §§ 4(b)(iv)-(v) | Manage + Govern | Operator training records, fail-safe design documentation, named human accountable for risk acceptance |
| Remedies, appeals, end-user feedback | §§ 4(b)(vi)-(vii) | Manage | Appeals workflow, public feedback channel, evidence of incorporated feedback |
| Procure AI consistent with M-25-22 | M-25-22 §§ 3-4 | Map + Measure + Manage | Performance-based contract clauses, IP and data-rights terms, vendor performance monitoring, M-25-21 compliance flow-down clauses |
Govern is the connective tissue of the program, not the first item on a checklist: Map, Measure, and Manage produce the evidence, and Govern produces the signature authority that makes the evidence count. Three operational implications follow from the table. Skip Govern and the other three functions produce evidence with no authority behind it. Map is the function the use case inventory and the high-impact determination both pull from; build Map first or every other artifact lacks context. Measure and Manage are joint owners of the AI Impact Assessment and the ongoing monitoring program; treating them as a single working group reduces hand-off failures.
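The crosswalk is easy to keep as a queryable structure rather than a static table. An illustrative sketch with a subset of the rows above; the helper name `obligations_for` is this sketch's own, not from either document:

```python
# Subset of the M-25-21 → AI RMF crosswalk, mirroring the table above.
# Each entry: (obligation, M-25-21 section, evidence-producing functions).
CROSSWALK = [
    ("Designate or retain a CAIO", "3(a)(i)", {"Govern"}),
    ("Convene the AI Governance Board", "3(a)(ii)", {"Govern"}),
    ("Maintain AI use case inventory", "3(b)(v)", {"Govern", "Map"}),
    ("Determine high-impact AI use cases", "4(a)", {"Map"}),
    ("Conduct pre-deployment testing", "4(b)(i)", {"Measure"}),
    ("Complete AI Impact Assessment", "4(b)(ii)", {"Measure", "Map"}),
    ("Conduct ongoing monitoring", "4(b)(iii)", {"Manage"}),
    ("Remedies, appeals, end-user feedback", "4(b)(vi)-(vii)", {"Manage"}),
]

def obligations_for(function: str) -> list[str]:
    """List the M-25-21 obligations whose evidence a function produces."""
    return [name for name, _section, fns in CROSSWALK if function in fns]

print(obligations_for("Map"))
# ['Maintain AI use case inventory', 'Determine high-impact AI use cases',
#  'Complete AI Impact Assessment']
```

A structure like this lets a program office answer both directions of the audit question: which function owns an obligation, and which obligations break if a function is understaffed.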
A Worked Example: One High-Impact Determination
The mechanics of the crosswalk become concrete in a single use case. The example that follows is illustrative, constructed directly from the M-25-21 § 6 presumed-high-impact categories. A mid-tier federal benefits agency deploys an AI model that scores incoming benefits applications for likelihood of fraudulent submission. High-likelihood scores route the application to a fraud investigator. Low-likelihood scores route to standard adjudication. Investigators retain authority over the final determination but rely on the score to prioritize caseloads.
Step one is Map 1.1, define purpose and context. The system informs which applications a human investigator examines first in a benefits adjudication context. Document the deployment environment, data sources, and the applicant population affected. Step two is applying M-25-21 § 6 presumed-high-impact categories. Subsection (m) explicitly covers the ability to apply for or adjudication of requests for critical federal services, processes, and benefits, and the detection of fraudulent use of government services. The use case touches both. Presumption attaches.
Step three is the § 5 outcome test. The score is a fraud-risk number that prioritizes investigator attention. Does it serve as a principal basis for a decision with material effect on the applicant’s access to a benefit? The decision to delay an application during fraud review materially affects access. The score is a principal basis for that delay even when a human investigator retains final authority. M-25-21 footnote 27 makes clear that human oversight does not defeat the high-impact determination. Step four is Map 5.1 and 5.2, likelihood and severity. Score errors disproportionately affecting a protected class would create civil-rights exposure under Title VI and the agency’s nondiscrimination authorities.
Step five is the CAIO determination. The CAIO signs a written determination that the use case is high-impact, citing § 6(m) presumed categories and the § 5 outcome test. Documentation goes into the central waiver and determination tracking system per § 4(a)(iii) and (iv). A summary is published on the agency website per § 4(a)(iv). Step six triggers the minimum practices: pre-deployment testing of fraud-score performance by demographic subgroup under Measure 2.11; AI Impact Assessment with privacy and civil-rights analysis under Measure 2.6 and Map 5.1; ongoing monitoring of false-positive rates by subgroup under Manage 4.1; an appeals process integrated with existing benefits-determination appeals under Manage 4.3; operator training for fraud investigators on score interpretation under Govern 3.2.
The resulting evidence package is a CAIO-signed determination memo, a board-reviewed AI Impact Assessment, a pre-deployment test report with subgroup analysis, a monitoring plan with quarterly reassessment cadence, an appeals procedure, and a public summary entry on the agency M-25-21 page. That is the audit-ready file.
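The monitoring step in the example above, tracking false-positive rates by subgroup, can be sketched as a simple quarterly check. Everything here is hypothetical: the record shape, the subgroup labels, and the 1.25 disparity-ratio threshold are this sketch's assumptions, not thresholds from M-25-21 or NIST AI 100-1:

```python
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (subgroup, flagged_as_fraud, actually_fraud).

    FPR per subgroup = flagged non-fraud / all non-fraud applications.
    """
    fp = defaultdict(int)   # flagged but not actually fraud
    neg = defaultdict(int)  # all non-fraud applications
    for group, flagged, fraud in records:
        if not fraud:
            neg[group] += 1
            if flagged:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

def disparity_alert(rates, max_ratio=1.25):
    """Alert when the worst subgroup FPR exceeds the best by max_ratio."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo > 0 and hi / lo > max_ratio

# Toy quarter of adjudicated, non-fraud applications from two subgroups.
rates = false_positive_rates([
    ("A", True, False), ("A", False, False), ("A", False, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", False, False),
])
print(rates)                   # {'A': 0.25, 'B': 0.5}
print(disparity_alert(rates))  # True: 0.5 / 0.25 = 2.0 > 1.25
```

An alert like this is the kind of drift and adverse-impact detection evidence the § 4(b)(iii) monitoring file should contain, paired with the mitigation action taken in response.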
Practical Implementation: What “Done” Looks Like
Four operational artifacts carry the program.
The AI use case inventory is the canonical source. OMB collects it via standardized template and the agency posts a public version. Updates are at least annual; agencies are encouraged to update on an ongoing basis under § 3(b)(v). The 2024 consolidation captured 2,133 use cases across 41 agencies. The 2025 consolidation captured 3,611 across 56. Feed every new system into the inventory before any other artifact is produced.
The CAIO charter is the signature authority document. SES level for CFO Act agencies; GS-14 or above for non-CFO Act agencies. Position must report close enough to the Deputy Secretary or equivalent to participate in executive decision-making per § 3(a)(i). The charter must explicitly enumerate the ten CAIO responsibilities in §§ 3(a)(i)(A) through (J), including the non-delegable waiver authority in § 4(a)(ii). Notification to OMB is required within 30 days of any vacancy or change. Detail on the role expectations is covered in the OMB M-25-21 Compliance Guide.
The Governance Board structure provides the cross-functional check. Chair at Deputy Secretary level. Vice-chair is the CAIO. Mandatory representation from IT, cybersecurity, data, budget, statistics, legal counsel, privacy, civil rights, and civil liberties under § 3(a)(ii)(B). Add procurement, customer experience, program evaluation, and program-office implementers when relevant. Quarterly minimum cadence; monthly during the first year of M-25-21 stand-up.
The evidence archive is what an Inspector General or OMB review will request. The CAIO appointment letter and position description. The Governance Board charter, roster, and dated minutes from the last four meetings. The full AI use case inventory plus per-system Map documentation. For every high-impact determination: the written memo with criteria, the AI Impact Assessment, the pre-deployment test report, the monitoring plan, the appeals process, and the operator training record. The agency AI Strategy and the Compliance Plan submitted to OMB. The generative AI acceptable use policy. The central waiver and determination tracker. Public summaries of determinations and waivers published on the agency website per § 4(a)(iv).
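The per-use-case portion of that archive lends itself to an automated completeness check. A minimal sketch; the artifact keys paraphrase the evidence package described above, and the checklist structure is this sketch's own convention rather than an OMB template:

```python
# Artifacts a reviewer would expect on file for each high-impact use case,
# paraphrased from the evidence archive described above.
REQUIRED_HIGH_IMPACT_ARTIFACTS = {
    "determination_memo",         # CAIO-signed, criteria cited
    "impact_assessment",          # § 4(b)(ii)
    "predeployment_test_report",  # § 4(b)(i)
    "monitoring_plan",            # § 4(b)(iii)
    "appeals_process",            # §§ 4(b)(vi)-(vii)
    "operator_training_record",   # § 4(b)(iv)
}

def missing_artifacts(filed: set[str]) -> set[str]:
    """Return the artifacts a reviewer would find absent from the file."""
    return REQUIRED_HIGH_IMPACT_ARTIFACTS - filed

print(missing_artifacts({"determination_memo", "impact_assessment"}))
```

Running a check like this against the central tracker before each Governance Board meeting turns the evidence archive from a scramble-at-audit-time exercise into a standing agenda item.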
Where Contractors Selling AI to Government Fit
OMB Memorandum M-25-22 applies to AI systems and services acquired by Executive Branch agencies on solicitations issued on or after September 30, 2025, and to options or extensions exercised after that date under § 2(c). Contractors selling AI to government must absorb six obligations into their delivery model.
1. Performance-based contract clauses: Statements of Objectives, Performance Work Statements, Quality Assurance Surveillance Plans, and contract incentives tied to measurable outcomes under § 4(b)(iii).
2. High-impact disclosure: solicitations for systems likely to host high-impact use cases must communicate documentation and transparency requirements that flow down under § 4(c)(i).
3. Intellectual property and data rights: contracts must permanently prohibit the use of non-public agency inputs and outputs to further train publicly or commercially available AI absent express agency consent under § 3(d)(iv). This is the line that disqualifies many off-the-shelf SaaS AI products in their default configuration.
4. Independent evaluation and testing rights: vendors must provide access for agency evaluation, and contracts must not prohibit agencies from internally disclosing how the vendor conducts testing under § 4(d)(iii)(E).
5. Vendor lock-in protections: knowledge transfer, data and model portability, and transparent licensing and pricing under § 4(d)(iii)(C).
6. Performance monitoring and rollback: contractors must support agency-defined quarterly or biannual evaluation cadences and may be required to roll back versions that fail performance standards under § 4(d)(iii)(F).

A vendor who cannot map its product documentation to AI 100-1 categories will struggle to satisfy the agency's M-25-22 evidence requirements regardless of how good the underlying model is. Vendor risk treatment is covered in detail in the AI vendor risk assessment guide.
The CAIOs winning at M-25-21 picked an operating system early. NIST AI RMF 1.0 (NIST AI 100-1) is the only one Congress, OMB, and NIST have all aligned around. Run the four functions, produce the evidence artifacts the auditor will request, and the memo’s deliverables fall out the back end. Treat the framework as theory and the program defends itself one ad-hoc memo at a time.
Frequently Asked Questions
What is the relationship between NIST AI RMF and OMB M-25-21?
NIST AI RMF 1.0 (NIST AI 100-1) is a voluntary risk management framework published by NIST in January 2023. OMB Memorandum M-25-21, issued April 3, 2025, sets mandatory governance requirements for federal agencies implementing AI. The four AI RMF functions of Govern, Map, Measure, and Manage produce the evidence each M-25-21 obligation requires, which is why federal AI program leads use the framework as their operating model.
Is OMB M-24-10 still in effect?
No. OMB M-24-10 was rescinded April 3, 2025 and replaced by M-25-21. The Biden-era bifurcation between “rights-impacting” and “safety-impacting” AI is no longer current policy. Agency documents that still cite M-24-10 should be updated on the next revision cycle.
Which agencies must implement M-25-21?
All Executive Branch agencies, including independent regulatory agencies under 44 U.S.C. § 3502(1) and § 3502(5). National Security Systems are not covered. The Intelligence Community is exempt from §§ 4(a) through (b) and from inventory requirements. Some requirements are limited to CFO Act agencies under 31 U.S.C. § 901(b).
What is high-impact AI under M-25-21?
High-impact AI is AI with an output that serves as a principal basis for decisions with legal, material, binding, or significant effect on civil rights, civil liberties, or privacy; access to education, housing, insurance, credit, or employment; access to critical government resources; human health and safety; critical infrastructure; or strategic assets. The single category replaced the prior "rights-impacting" and "safety-impacting" split.
What is the deadline for documenting minimum risk management practices?
Three hundred and sixty-five days from April 3, 2025, which is April 3, 2026. M-25-21 § 4(a)(i) requires every covered agency to document implementation of the minimum risk management practices for high-impact AI use cases by that date. Subsequent compliance plans are due every two years through 2036.
Who must designate a Chief AI Officer?
Every covered agency under M-25-21 § 3(a)(i). The CAIO must be at SES level for CFO Act agencies and at GS-14 or above for non-CFO Act agencies. The position must report close enough to the Deputy Secretary or equivalent to participate in executive decision-making.
How does AI use case inventory reporting work?
OMB collects the inventory through a standardized template at least annually. Each agency posts a public version on its website. The 2024 consolidation captured 2,133 use cases across 41 agencies; the 2025 consolidation captured 3,611 across 56. The inventory is the canonical source for high-impact determinations and downstream evidence.
What does M-25-22 require of AI vendors selling to federal agencies?
Performance-based contract clauses, high-impact disclosure obligations, IP and data rights that prohibit using non-public agency inputs to further train commercial AI, independent evaluation and testing rights, vendor lock-in protections, and performance monitoring with rollback authority. M-25-22 applies to solicitations issued on or after September 30, 2025.