Federal AI Governance

OMB M-25-21 Compliance Guide: The New Federal AI Governance Framework

· 10 min read · Updated May 2, 2026

Bottom Line Up Front

OMB M-25-21, issued April 3, 2025, replaces M-24-10 and implements EO 14179. It requires every agency to designate a Chief AI Officer, establish an internal AI governance board, publish an annual AI use case inventory, and apply enhanced oversight to high-impact AI systems whose outputs materially affect rights, services, safety, or sensitive federal resources. Twenty-four major departments must publish public AI strategies by approximately October 2025.

The conventional take on Office of Management and Budget (OMB) M-25-21 is that the Trump administration ripped out the Biden-era guardrails and told agencies to move fast. That reading is wrong, and acting on it will leave your agency exposed.

M-25-21 does not abandon AI governance. It restructures it. The bifurcated “rights-impacting” and “safety-impacting” categories from M-24-10 created compliance confusion, inconsistent agency interpretations, and governance theater around low-risk systems. M-25-21 collapses those two categories into a single, sharper standard: high-impact AI. For the systems that actually matter, the oversight requirements are no weaker. They are more precisely targeted.

What follows is the framework analysis every federal compliance officer, Chief AI Officer (CAIO), and agency counsel needs before the 180-day public strategy deadline arrives in October 2025.

Comply with OMB M-25-21 by completing five actions: designate a Chief AI Officer with agency-wide governance authority, stand up an internal AI governance board, classify every AI system against the “high-impact” standard (materially affects rights, services, safety, or sensitive resources), maintain a public annual AI use case inventory, and publish a public AI strategy within 180 days of issuance. The companion memorandum M-25-22 governs how agencies procure AI.

OMB M-25-21 Compliance Requirements: What Changed from M-24-10

M-24-10 organized agency obligations around two separate AI categories: rights-impacting AI and safety-impacting AI. Each category carried its own risk management checklist. In practice, agencies spent significant effort debating which category a system fell into before any substantive governance work began.

M-25-21 eliminates that categorization debate. One standard now governs: high-impact AI. A system qualifies if its outputs materially affect individual rights, access to government services, personal safety, or sensitive federal resources. The governance obligations attach to the impact, not to the administrative label.

The policy goal behind the change is stated directly in M-25-21: foster AI innovation, advance AI governance, and promote responsible AI use. The ordering matters. Innovation comes first because the prior framework’s administrative overhead was itself a barrier. Governance comes second because accountability for high-impact systems is non-negotiable.

Three compliance timeline triggers agencies should track now:

  • CAIO designation: required at all covered agencies
  • Internal AI governance board: required at all covered agencies
  • Public AI strategy: 24 major departments, 180 days from April 3, 2025 (approximately October 2025)

The audit fix. Audit every AI system currently classified under M-24-10’s rights-impacting or safety-impacting categories. Apply the M-25-21 high-impact definition to each. Some systems will move out of enhanced oversight scope. Some will stay in. Document the reclassification rationale before the public strategy deadline.
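The reclassification audit above can be sketched as a short script. This is a minimal illustration, not an official tool: the field names, the `materially_affects` flag, and the tier labels are assumptions chosen for the example, and the `scope_change` flag marks the systems whose reclassification rationale the memo's deadline work would need to document.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    old_category: str         # M-24-10 label: "rights-impacting", "safety-impacting", or "none"
    materially_affects: bool  # M-25-21 test: outputs materially affect rights,
                              # services, safety, or sensitive federal resources

def reclassify(system: AISystem) -> dict:
    """Map an M-24-10 classification to the M-25-21 high-impact standard."""
    new_tier = "high-impact" if system.materially_affects else "standard"
    return {
        "system": system.name,
        "m_24_10": system.old_category,
        "m_25_21": new_tier,
        # Flag systems leaving enhanced-oversight scope so the
        # reclassification rationale gets documented before publication.
        "scope_change": system.old_category != "none" and new_tier == "standard",
    }

inventory = [
    AISystem("benefits-screener", "rights-impacting", True),
    AISystem("policy-summarizer", "rights-impacting", False),
]
for record in map(reclassify, inventory):
    print(record)
```

Running the sketch over the two sample systems shows the pattern the audit should surface: the benefits screener stays high-impact, while the policy summarizer exits enhanced oversight and gets flagged for a documented rationale.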

The High-Impact AI Classification Framework

High-impact AI under M-25-21 captures systems whose outputs materially affect four domains: individual rights, access to government services, personal safety, and sensitive federal resources. “Materially affect” is the operative threshold. Informational or administrative AI systems that advise human decision-makers without directly determining outcomes sit in a different risk tier.

The practical test is outcome proximity. An AI system that screens benefits eligibility and produces a determination that a caseworker typically accepts without review is high-impact. An AI system that summarizes policy documents for an analyst to read and interpret is not. The distinction turns on whether the AI output is effectively the decision, or one input among many.
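The outcome-proximity test can be stated as a simple predicate. The parameter names here are illustrative, not drawn from the memorandum; the point is that classification turns on two questions, not a category label.

```python
def is_high_impact(output_is_determination: bool,
                   meaningful_human_review: bool) -> bool:
    """Outcome-proximity test: a system is high-impact when its output is
    effectively the decision, i.e. it produces a determination that is
    typically accepted without meaningful human review."""
    return output_is_determination and not meaningful_human_review

# Benefits-eligibility screener: determination accepted without review.
assert is_high_impact(True, False)
# Policy summarizer: one input among many for a human analyst.
assert not is_high_impact(False, True)
```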

M-25-21 requires enhanced oversight for high-impact systems, which includes pre-deployment testing, ongoing performance monitoring, human oversight mechanisms, and documentation of the system’s role in the decision chain. These are not new concepts, but their application is now scoped to systems where the risk justifies the cost.

One area where M-25-21 takes a notably different posture from M-24-10: the explicit encouragement of open-source AI and code reuse. Agencies are directed to remove barriers to AI adoption, not add them.

The audit fix. Map every active AI system to the four high-impact domains. For each system that touches rights, services, safety, or sensitive federal resources, document the human oversight mechanism and the pre-deployment testing record. Systems with no documentation should be treated as non-compliant until the record is built.

Chief AI Officer and Governance Board Requirements

M-25-21 requires every covered agency to designate a Chief AI Officer. The CAIO is not a symbolic appointment. The role carries accountability for agency-wide AI governance, coordination with the interagency CAIO Council, and oversight of the public AI strategy publication requirement.

The interagency CAIO Council is coordinated by OMB and exists to drive cross-agency consistency. That structure matters for compliance officers in agencies that share data, systems, or infrastructure with other departments. Governance gaps at one agency become shared risk across the council.

Internal AI governance boards are required at the agency level. The board functions as the institutional check on AI deployment decisions, high-impact classifications, and the use case inventory process.

The CAIO and governance board requirements together address the accountability gap that produced M-24-10’s uneven implementation. When no single official owns AI governance outcomes, accountability diffuses and documentation lags. M-25-21 creates a named owner and an institutional body. Auditors will look for both.


The CAIO requirement reflects a governance philosophy shift that goes beyond M-25-21. Federal AI governance is converging toward the CISO model: a designated officer with cross-functional authority, board-level visibility, and a public accountability record. Agencies that treat the CAIO as a compliance checkbox will spend the next two years retrofitting real authority into a hollow title.

The audit fix. Confirm the CAIO designation is documented, current, and has a clear reporting line. Verify the internal AI governance board has a charter, a membership roster, and a meeting record. If the CAIO position is vacant or the board is inactive, treat both as material compliance gaps requiring immediate escalation.

AI Use Case Inventory Under M-25-21

The AI use case inventory requirement carries forward from M-24-10. Agencies must maintain and publish an annual inventory of AI use cases. The public nature of the requirement is intentional: it creates external accountability and enables the interagency CAIO Council to identify duplication, share solutions, and flag emerging risks across the federal enterprise.

M-25-21’s inventory requirement operates alongside the high-impact classification framework. Each inventoried system should carry a classification status: high-impact or not. That classification drives the governance tier applied to the system. An inventory that lists AI systems without classification assessments does not satisfy the governance structure M-25-21 builds.

Annual publication means the inventory is a living document. Systems are added, retired, and reclassified. The governance board should own the inventory update process. The CAIO should sign off on the published version.

The audit fix. Pull the current AI use case inventory and verify every listed system carries a high-impact classification determination. Add any AI systems not yet in the inventory. Set a governance board review cycle timed to the annual publication requirement. The CAIO should review and approve the final published version.
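The inventory check described above is mechanical enough to script. This is a minimal sketch under stated assumptions: the CSV column names (`system`, `classification`) and the two tier labels are hypothetical, since actual inventory schemas vary by agency.

```python
import csv
import io

VALID_TIERS = {"high-impact", "standard"}

def audit_inventory(csv_text: str) -> list[str]:
    """Return the names of inventoried systems that lack a valid
    high-impact classification determination."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        row["system"]
        for row in reader
        if (row.get("classification") or "").strip().lower() not in VALID_TIERS
    ]

sample = """system,classification
benefits-screener,high-impact
policy-summarizer,standard
chat-assistant,
"""
print(audit_inventory(sample))  # → ['chat-assistant']
```

Systems the check returns are the ones the governance board review cycle should resolve before the CAIO signs off on the published inventory.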

AI Procurement Under M-25-22

OMB M-25-22 is the procurement companion to M-25-21. Where M-25-21 governs how agencies use and govern AI internally, M-25-22 governs how agencies buy it. The two memoranda work as an integrated framework, not as independent compliance tracks.

For compliance officers, any AI system acquired through federal procurement after M-25-22’s issuance should be evaluated against both frameworks before deployment. M-25-22 sets the standards vendors must meet and the contract terms agencies should require. M-25-21 determines the governance tier the system enters once deployed.

M-25-21’s encouragement of open-source AI and code reuse intersects directly with procurement. Agencies are directed to consider open-source options and to reuse existing government code where feasible. A procurement process that defaults to proprietary commercial solutions without evaluating open-source alternatives does not align with the framework.

The interagency dimension matters here too. If another agency has already built and deployed an AI system for a similar use case, M-25-21’s code reuse encouragement suggests evaluating that solution before developing or procuring a new one. The CAIO Council is the coordination mechanism for identifying those opportunities.

The audit fix. Review any AI procurement currently in progress or planned for the next 12 months. Confirm each acquisition is evaluated against both M-25-21 governance requirements and M-25-22 procurement standards. Document whether open-source and code reuse options were considered.

| Dimension | M-24-10 (Biden, March 2024) | M-25-21 (Trump, April 2025) |
| --- | --- | --- |
| Risk categories | Two categories: rights-impacting and safety-impacting | Single category: high-impact AI |
| Classification scope | Broader; more systems captured across two buckets | Narrower; focused on material impact to rights, services, safety, or sensitive resources |
| CAIO requirement | Required | Required; interagency CAIO Council coordination added |
| AI governance board | Required | Required |
| AI use case inventory | Required, annual, public | Carried forward, annual, public |
| Open-source AI | Not emphasized | Explicitly encouraged; code reuse directed |
| AI procurement governance | Addressed within M-24-10 | Separate companion memorandum M-25-22 |
| Public AI strategy | Not required | Required; 24 major departments, 180-day deadline |
| Governing executive order | EO 14110 (Biden, October 2023) | EO 14179 (Trump, January 2025) |

OMB M-25-21 is a governance restructuring, not a governance rollback. Agencies that reclassify their AI systems under the high-impact standard, appoint a functioning CAIO, activate their governance board, and publish a credible AI strategy by October 2025 will be compliant and positioned well for whatever oversight attention federal AI draws in 2026. Agencies that treat the simplified risk category as a signal to reduce oversight will have a problem the first time a high-impact system produces a bad outcome with no governance record behind it.

Frequently Asked Questions

What is the OMB M-25-21 compliance guide framework?

OMB M-25-21 establishes the federal AI governance framework under the current administration. It requires CAIO designation, internal AI governance boards, annual public AI use case inventories, enhanced oversight for high-impact AI, and public AI strategy publication within 180 days for 24 major departments.

What replaced the rights-impacting and safety-impacting categories?

M-25-21 replaces M-24-10’s bifurcated structure with a single “high-impact AI” standard. A system qualifies if its outputs materially affect individual rights, access to government services, personal safety, or sensitive federal resources.

When is the public AI strategy deadline?

Twenty-four major federal departments must publish public AI strategies within 180 days of the April 3, 2025 issuance. That deadline falls approximately in October 2025.

What does the Chief AI Officer do under M-25-21?

The CAIO holds agency-wide accountability for AI governance, oversees the AI use case inventory and classification process, and participates in the interagency CAIO Council coordinated by OMB.

How does M-25-22 relate to M-25-21?

M-25-22 governs AI procurement as the companion memorandum to M-25-21. M-25-22 governs acquisition; M-25-21 governs use and oversight after deployment. Any AI system a federal agency acquires and deploys must satisfy both frameworks.

Does M-25-21 require open-source AI?

M-25-21 does not mandate open-source AI, but it explicitly encourages open-source options and code reuse as part of the innovation and barrier-removal directives. Agencies are expected to evaluate open-source alternatives before acquiring proprietary solutions.

Is the AI use case inventory still required?

Yes. The annual public AI use case inventory carries forward from M-24-10. Each inventoried system should include a high-impact classification determination.

Which executive order does M-25-21 implement?

M-25-21 implements EO 14179, “Removing Barriers to American Leadership in AI,” signed January 2025. EO 14179 revoked Biden’s EO 14110 and directed OMB to issue new AI governance guidance.

Subscribe to The Authority Brief for next week’s analysis.

Discipline in preparation. Confidence in the room.

Josef Kamara, CPA, CISSP, CISA, Security+

Former KPMG and BDO. Senior manager over third-party risk attestations and IT audits at a top-five global firm, and former technology risk leader directing the IT audit function at a Fortune 500 medical technology company. Advises growth-stage SaaS companies on SOC 2, HIPAA, and AI governance certifications.

The Authority Brief

One compliance analysis per week from Josef Kamara, CPA, CISSP, CISA. Federal and private compliance, written for practitioners.