Every AI risk assessment I have reviewed in the last two years shares the same structural flaw. The organization catalogs the AI systems. Lists the vendors. Documents the use cases. Then applies the same risk scoring methodology built for IT infrastructure: likelihood times impact, on a five-by-five heat map. The scoring produces a number. The number produces a color. The color produces a false sense of coverage.
AI risk does not map onto traditional IT risk matrices. A customer churn prediction model scoring a protected zip code 34% higher than comparable accounts creates regulatory exposure no firewall addresses. A clinical AI tool hallucinating medication dosages threatens patient safety in ways no access control mitigates. The EU AI Act requires providers of high-risk AI to implement lifecycle risk management systems [EU AI Act Art. 9]. The NIST AI RMF structures this process into four functions purpose-built for these threats: Govern, Map, Measure, and Manage [NIST AI 100-1].
The framework shift matters because auditors are already checking. Organizations that treat AI risk as an appendix to their IT risk register collect examination findings. Those that structure assessment around the four NIST functions produce the evidence auditors recognize.
An AI risk assessment identifies, analyzes, and treats risks specific to AI systems: bias, hallucination, data provenance, and decision accountability. The NIST AI RMF 1.0 [NIST AI 100-1] structures the process into four functions: Govern, Map, Measure, and Manage. The EU AI Act [Art. 9] and ISO 42001 [Clause 6.1] mandate lifecycle assessment for high-risk systems.
What AI Risk Assessment Means Under NIST AI RMF
AI risk assessment differs from traditional IT risk assessment in scope, method, and lifecycle. IT risk assessment evaluates threats to infrastructure: unauthorized access, data breaches, system downtime. AI risk assessment evaluates threats from the model itself: biased outputs, fabricated data, privacy violations in training sets, and decisions no human reviewed.
The NIST AI Risk Management Framework (AI RMF 1.0) [NIST AI 100-1], published January 2023, is the industry standard for structuring this process. It is voluntary for the private sector but rapidly becoming the standard of care for AI liability defense. Federal agencies and government contractors align with it under executive order.
AI Risk Assessment vs AI Impact Assessment
Three related terms create confusion. They address different questions:
- AI risk assessment: Identifies and quantifies threats to the organization from AI systems. Focuses on likelihood and severity of harm. Structured by NIST AI RMF [NIST AI 100-1].
- AI impact assessment: Evaluates how an AI system affects individuals, communities, and society. Required under ISO 42001 Clause 6.1.4 [ISO/IEC 42001:2023]. Extends beyond organizational risk to societal impact.
- Algorithmic impact assessment: A broader public-interest evaluation introduced by the AI Now Institute in 2018. Focuses on transparency and public accountability. PwC and the Canadian government have published operational templates [PwC 2024].
Most organizations need all three. Start with the risk assessment (NIST AI RMF). Layer in the impact assessment (ISO 42001) when pursuing certification. Use algorithmic impact assessments for public-facing systems where transparency obligations apply.
The Three Frameworks Driving Mandatory Assessment in 2026
Three frameworks converge in 2026 to make AI risk assessment a regulatory expectation, not a best practice:
| Framework | Authority | Risk Assessment Requirement | Enforcement |
|---|---|---|---|
| EU AI Act | EU Regulation 2024/1689 | Article 9: lifecycle risk management system for high-risk AI [EU AI Act Art. 9] | August 2, 2026 (high-risk provisions). Fines up to EUR 15M or 3% global turnover. |
| NIST AI RMF 1.0 | NIST AI 100-1 (Jan 2023) | Four functions: Govern, Map, Measure, Manage [NIST AI 100-1] | Voluntary. Mandatory for US federal agencies. Standard of care for liability. |
| ISO/IEC 42001:2023 | ISO | Clause 6.1: risk identification, analysis, evaluation, treatment, and impact assessment [ISO 42001 Cl. 6.1] | Certification-based. Increasingly required in enterprise procurement. |
California adds a fourth regulatory layer. The CCPA’s automated decision-making technology (ADMT) risk assessment regulations, finalized September 2025, require businesses to conduct risk assessments before deploying AI for significant consumer decisions. Compliance with the risk assessment provisions began January 1, 2026 [CPPA 2025].
Download NIST AI 100-1 and the NIST AI RMF Playbook from the NIST AI Resource Center. Map your current AI inventory against the EU AI Act Annex III high-risk categories. For each system flagged as high-risk, confirm you have documented risk assessment evidence addressing all four NIST functions before August 2, 2026.
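For a first pass at the Annex III mapping, a keyword triage over the inventory flags obvious candidates. The sketch below is a heuristic only; the keyword lists are illustrative, not drawn from the regulation, and final high-risk classification requires legal review of the actual Annex III text.

```python
# First-pass triage of an AI inventory against EU AI Act Annex III
# high-risk categories. Keyword matching is a starting heuristic only;
# the keyword lists are illustrative, not drawn from the regulation.
ANNEX_III_KEYWORDS = {
    "biometrics (Annex III 1)": ["biometric", "face recognition", "emotion"],
    "education (Annex III 3)": ["admission", "exam", "grading"],
    "employment (Annex III 4)": ["hiring", "resume", "recruit", "promotion"],
    "essential services and credit (Annex III 5)": ["credit", "loan", "creditworthiness"],
}

def triage(system_description: str) -> list[str]:
    """Return candidate Annex III categories for one inventory entry."""
    desc = system_description.lower()
    return [cat for cat, words in ANNEX_III_KEYWORDS.items()
            if any(w in desc for w in words)]

print(triage("Resume screening model ranking inbound applicants"))
# ['employment (Annex III 4)']
```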
The Four Functions of NIST AI RMF
The NIST AI RMF organizes risk management into four core functions [NIST AI 100-1]. Unlike the NIST Cybersecurity Framework’s six functions (Identify, Protect, Detect, Respond, Recover, Govern), the AI RMF uses four: Govern, Map, Measure, and Manage. Govern sits at the center, cross-cutting all activities. The remaining three execute sequentially.
Govern: Organizational Risk Culture and Accountability
Govern establishes who owns AI risk, what policies direct AI use, and how the organization allocates resources for AI risk management [NIST AI 100-1, GOVERN 1.1-1.7]. This function runs continuously across all stages.
The first governance question every auditor asks: who is the designated accountable party for AI risk? Assigning AI governance to the CISO alone fails because AI risk extends beyond security into bias (HR), transparency (Legal), accuracy (Product), and privacy (DPO). Organizations establishing a cross-functional AI steering committee, as recommended in ISO 42001 Clause 5.1 [ISO 42001 Cl. 5.1], cover all risk domains.
GOVERN subcategories to address:
- GOVERN 1.1: Legal and regulatory requirements for AI are identified and documented.
- GOVERN 1.3: Processes for AI risk management are established and integrated into broader enterprise risk.
- GOVERN 1.5: Ongoing monitoring mechanisms are in place for deployed AI systems.
- GOVERN 4.1: Organizational practices foster a culture of AI risk awareness.
Map: Context, Stakeholders, and Harm Identification
Map establishes the operational context for each AI system [NIST AI 100-1, MAP 1.1-1.6]. This is where you answer: what does this system do, who does it affect, and what happens when it fails?
MAP 1.1 requires documenting the intended purpose of each AI system. MAP 1.5 requires defining organizational risk tolerances for AI. MAP 5.1-5.2 address the identification of affected communities and stakeholders whose input should inform the assessment.
The Map function is where shadow AI becomes a governance crisis. UpGuard’s 2025 survey found over 80% of workers use unapproved AI tools [UpGuard 2025]. Gartner’s 2025 cybersecurity leadership survey puts the figure at 69% of organizations with confirmed or suspected prohibited GenAI use [Gartner 2025]. You cannot map what you have not inventoried.
Measure: Quantifying Risk Likelihood and Severity
Measure uses quantitative, qualitative, or mixed methods to assess, benchmark, and monitor AI risk [NIST AI 100-1, MEASURE 1.1-4.2]. This function answers: how bad is the risk, and how often does it occur?
NIST defines seven characteristics of trustworthy AI. Each characteristic becomes a measurement axis during assessment:
| Characteristic | Risk Domain | Measurement Example |
|---|---|---|
| Valid and Reliable | Inaccurate outputs, hallucination | Accuracy rate across test datasets, hallucination frequency per 1,000 outputs |
| Safe | Physical or psychological harm | Incident reports, harm severity scoring |
| Secure and Resilient | Adversarial attacks, data poisoning | Red team results, prompt injection test pass rate |
| Accountable and Transparent | Decision opacity, ownership gaps | Explainability score, RACI matrix completeness |
| Explainable and Interpretable | Black-box decision making | Feature importance documentation, decision audit trail |
| Privacy-Enhanced | PII in training data, data leakage | Data lineage audit, PII detection scan results |
| Fair with Harmful Bias Managed | Discrimination across protected classes | Disparate impact ratio, demographic parity metrics |
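The fairness row is the one most often left unmeasured. As a minimal sketch of one metric from that row, the disparate impact ratio divides the protected group's selection rate by the reference group's; values below 0.8 commonly flag adverse impact under the four-fifths rule. The data below is hypothetical.

```python
from collections import Counter

def disparate_impact_ratio(outcomes: list[tuple[str, bool]],
                           protected: str, reference: str) -> float:
    """Selection rate of the protected group divided by that of the
    reference group. Values below 0.8 commonly flag adverse impact."""
    totals, positives = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        positives[group] += selected
    rate = lambda g: positives[g] / totals[g]
    return rate(protected) / rate(reference)

# Hypothetical hiring-screener outcomes: (group, was_advanced)
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 25 + [("B", False)] * 75)
print(disparate_impact_ratio(outcomes, protected="B", reference="A"))  # 0.625
```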
The NIST Generative AI Profile (AI 600-1), published July 2024, adds 12 risks specific to generative AI: confabulation, harmful bias, CBRN information generation, data privacy violations, intellectual property infringement, information integrity erosion, and six others [NIST AI 600-1]. If your organization uses ChatGPT, Copilot, or any other generative model, the GenAI Profile supplements the base framework.
Manage: Treatment, Monitoring, and Documentation
Manage takes the risks identified in Map and quantified in Measure, then applies treatment: mitigate, transfer, accept, or avoid [NIST AI 100-1, MANAGE 1.1-4.2]. This function also establishes incident response protocols for AI-specific failures.
Treatment options mirror traditional risk management with AI-specific adaptations:
- Mitigate: Retrain the model on debiased data. Add guardrails to prevent hallucinated outputs. Implement human-in-the-loop review for high-stakes decisions.
- Transfer: Purchase AI-specific liability insurance. Contractually allocate risk to the AI vendor through updated service agreements.
- Accept: Document the residual risk and obtain formal acceptance from the AI steering committee for low-impact use cases.
- Avoid: Decommission the AI system when residual risk exceeds organizational tolerance.
MANAGE 4.1 requires documenting post-deployment monitoring plans. MANAGE 4.2 requires mechanisms for stakeholders to report AI system issues. Both feed into the incident response process.
For each AI system in your inventory, create a one-page risk summary documenting: (1) the GOVERN owner and steering committee approval date, (2) the MAP context including intended purpose, affected stakeholders, and risk tolerance, (3) the MEASURE results against each of the seven trustworthiness characteristics, and (4) the MANAGE treatment decision with residual risk acceptance signature. Store these summaries in your AI risk register.
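If those one-page summaries feed the register, a machine-readable schema keeps them consistent. A sketch of one entry follows; the field names are our own convention, not prescribed by NIST AI 100-1.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskRegisterEntry:
    system_name: str
    govern_owner: str                 # an individual, not a department
    approval_date: date               # steering committee sign-off
    intended_purpose: str             # MAP 1.1
    affected_stakeholders: list[str]  # MAP 5.1-5.2
    risk_tolerance: str               # MAP 1.5
    measure_results: dict[str, str]   # one entry per trustworthiness characteristic
    treatment: str                    # mitigate | transfer | accept | avoid
    residual_risk_accepted_by: str    # MANAGE sign-off

entry = RiskRegisterEntry(
    system_name="resume-screener-v2",
    govern_owner="J. Rivera",
    approval_date=date(2026, 1, 15),
    intended_purpose="Rank inbound applications for recruiter review",
    affected_stakeholders=["job applicants", "recruiting team"],
    risk_tolerance="low",
    measure_results={"Fair with Harmful Bias Managed": "disparate impact ratio 0.91"},
    treatment="mitigate",
    residual_risk_accepted_by="AI Steering Committee",
)
```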
Cross-Framework Compliance: NIST AI RMF, EU AI Act, and ISO 42001
Organizations operating across jurisdictions need a single AI risk assessment process satisfying all three frameworks simultaneously. The good news: significant overlap exists. A thorough NIST AI RMF implementation covers approximately 80% of EU AI Act Article 9 requirements and aligns directly with ISO 42001 Clause 6.1.
EU AI Act Article 9: The Mandatory Risk Management System
Article 9 of the EU AI Act requires providers of high-risk AI systems to implement a risk management system as a “continuous iterative process planned and run throughout the entire lifecycle” [EU AI Act Art. 9(1)]. The system must:
- Identify and analyze known and reasonably foreseeable risks to health, safety, or fundamental rights [EU AI Act Art. 9(2)(a)]
- Estimate and evaluate risks under intended use and reasonably foreseeable misuse [EU AI Act Art. 9(2)(b)]
- Evaluate risks arising from post-market monitoring data [EU AI Act Art. 9(2)(c)]
- Adopt targeted risk management measures to address identified risks [EU AI Act Art. 9(2)(d)]
The NIST AI RMF’s Map and Measure functions align directly to Article 9(2)(a) and 9(2)(b). The Manage function covers 9(2)(d). Post-market monitoring under Article 72 maps to NIST’s GOVERN 1.5 and MANAGE 4.1.
ISO 42001 Clause 6.1: Planning for Risks and Opportunities
ISO/IEC 42001:2023 Clause 6.1 requires organizations to determine AI-specific risks and opportunities, then plan actions to address them [ISO 42001 Cl. 6.1]. The clause breaks into four sub-requirements:
- 6.1.1 General: Identify risks and opportunities related to your AI management system’s intended outcomes.
- 6.1.2 AI risk assessment: Identify risks linked to AI systems. Analyze consequences for the organization, individuals, and society. Evaluate likelihood and impact against risk criteria.
- 6.1.3 AI risk treatment: Select controls from Annex A (39 controls). Document a Statement of Applicability (SoA).
- 6.1.4 AI system impact assessment: Assess the impact on individuals and society, not the organization alone. This requirement extends beyond traditional risk assessment into societal impact territory.
Organizations pursuing ISO 42001 certification will find the NIST AI RMF provides the operational methodology Clause 6.1 requires. NIST gives you the “how.” ISO 42001 gives you the “what must be documented.” The EU AI Act gives you the “or else.”
The Cross-Walk Table
| Assessment Activity | NIST AI RMF | EU AI Act | ISO 42001 |
|---|---|---|---|
| Establish governance and accountability | GOVERN 1.1-1.7 | Art. 9(1), Art. 17 (QMS) | Cl. 5.1, Cl. 5.3 |
| Inventory AI systems and define context | MAP 1.1-1.6 | Art. 9(2)(a), Annex III classification | Cl. 6.1.1 |
| Identify and analyze risks | MAP 3.1-3.5, MAP 5.1-5.2 | Art. 9(2)(a), 9(2)(b) | Cl. 6.1.2 |
| Quantify and measure risk | MEASURE 1.1-4.2 | Art. 9(2)(b), Art. 15 (accuracy) | Cl. 6.1.2 (evaluate) |
| Select and implement controls | MANAGE 1.1-4.2 | Art. 9(2)(d), Art. 9(4) | Cl. 6.1.3, Annex A |
| Assess societal impact | MAP 5.1-5.2 | Art. 27 (FRIA) | Cl. 6.1.4 |
| Continuous monitoring | GOVERN 1.5, MANAGE 4.1 | Art. 72 (post-market monitoring) | Cl. 9.1 |
Build one assessment process, not three. Use the NIST AI RMF four-function structure as your operational backbone. Map each NIST subcategory to the corresponding EU AI Act article and ISO 42001 clause using the cross-walk table above. When an auditor requests evidence for any single framework, your documentation serves all three.
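The same cross-walk can be encoded so each evidence artifact is tagged for all three frameworks in one step. A minimal sketch, with references copied from the table above:

```python
# Cross-walk from the table above, keyed by assessment activity. Tag each
# evidence artifact once and it answers requests under any framework.
CROSSWALK = {
    "measurement": {
        "nist": "MEASURE 1.1-4.2",
        "eu_ai_act": "Art. 9(2)(b), Art. 15",
        "iso42001": "Cl. 6.1.2",
    },
    "societal_impact": {
        "nist": "MAP 5.1-5.2",
        "eu_ai_act": "Art. 27 (FRIA)",
        "iso42001": "Cl. 6.1.4",
    },
    # ...the remaining five activities follow the same shape.
}

def evidence_tags(activity: str) -> str:
    refs = CROSSWALK[activity]
    return (f"NIST {refs['nist']} | EU AI Act {refs['eu_ai_act']} | "
            f"ISO 42001 {refs['iso42001']}")

print(evidence_tags("measurement"))
# NIST MEASURE 1.1-4.2 | EU AI Act Art. 9(2)(b), Art. 15 | ISO 42001 Cl. 6.1.2
```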
Building Your AI Risk Register
The AI risk register is the central artifact auditors request. It documents every AI system, its risk profile, treatment decisions, and residual risk. Without it, your AI risk assessment exists as a concept, not evidence.
Risk Categories Every Register Needs
Structure your register around the seven NIST trustworthiness characteristics [NIST AI 100-1], extended with operational categories:
- Bias and fairness risk: Disparate impact on protected groups. Measure using demographic parity, equalized odds, or calibration metrics.
- Accuracy and hallucination risk: Fabricated outputs, incorrect citations, factual errors. Measure using benchmark accuracy, hallucination rate per output batch.
- Privacy risk: PII in training data, data leakage through model outputs, re-identification risk. Reference NIST AI 600-1 GenAI Profile risk category 8 [NIST AI 600-1].
- Security risk: Prompt injection, model extraction, data poisoning, adversarial inputs. Reference OWASP Top 10 for LLMs 2025 [OWASP LLM Top 10 2025].
- Accountability risk: No designated owner for model decisions. No audit trail for outputs.
- Transparency risk: Affected individuals unaware AI influenced the decision. Black-box models with no explainability documentation.
- Third-party and supply chain risk: Vendor API dependencies, model provenance unknown, training data sourcing undocumented. NIST AI 600-1 addresses this as value chain and component integration risk [NIST AI 600-1].
Scoring Methodology Beyond Heat Maps
Qualitative heat maps (red/yellow/green) give executives a visual summary. They fail auditors. A scoring methodology needs quantifiable inputs.
Apply a two-axis scoring model:
- Likelihood: Score 1-5 based on historical incident data, model complexity, and exposure surface. A customer-facing LLM with no guardrails scores 5. An internal reporting tool with human review scores 2.
- Impact: Score 1-5 based on regulatory exposure, financial consequence, and affected population size. A hiring screener affecting thousands of applicants scores 5. An internal summarization tool scores 2.
Risk score = Likelihood x Impact. Systems scoring 15-25 require immediate treatment. Systems scoring 8-14 require documented monitoring. Systems scoring 1-7 require annual review. Document the scoring rationale. Auditors verify the methodology, not the score.
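The two-axis model reduces to a short function. A minimal sketch, hard-coding the tier boundaries stated above; adjust them to match your documented risk criteria.

```python
def risk_score(likelihood: int, impact: int) -> tuple[int, str]:
    """Two-axis score. Tier boundaries (15-25, 8-14, 1-7) match the
    treatment thresholds described above."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be scored 1-5")
    score = likelihood * impact
    if score >= 15:
        tier = "immediate treatment"
    elif score >= 8:
        tier = "documented monitoring"
    else:
        tier = "annual review"
    return score, tier

# Customer-facing LLM, no guardrails, large affected population:
print(risk_score(5, 5))  # (25, 'immediate treatment')
# Internal summarization tool with human review:
print(risk_score(2, 2))  # (4, 'annual review')
```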
The Shadow AI Problem
Your risk register is incomplete if it only covers sanctioned AI systems. The 2026 International AI Safety Report, authored by over 100 experts across 30 countries, identifies risks emerging after deployment as a primary governance challenge [International AI Safety Report 2026].
Shadow AI, the unauthorized use of AI tools by employees, is the single largest gap in most risk registers. IBM’s 2025 data shows shadow AI incidents account for 20% of all breaches and carry a cost premium: $4.63 million versus $3.96 million for standard breaches [IBM 2025]. The Allianz Risk Barometer 2026 ranks AI as the #2 global business risk, up from #10 in 2025, the largest single-year jump in the survey’s history [Allianz 2026].
Run a shadow AI discovery sprint before building your risk register. Survey every department head on AI tool usage. Scan network traffic for API calls to OpenAI, Anthropic, Google, Midjourney, and other AI endpoints. Cross-reference SaaS procurement records against known AI vendors. Add every discovered system to the register with a “discovered, unassessed” status. Assign an owner and complete the initial assessment within 30 days of discovery.
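The network-scan step can start as a simple log filter. A minimal sketch follows; the endpoint list is illustrative and should be extended from your own egress logs and procurement records.

```python
# Illustrative AI API endpoints; extend from egress logs and SaaS records.
AI_ENDPOINTS = {
    "api.openai.com": "OpenAI",
    "api.anthropic.com": "Anthropic",
    "generativelanguage.googleapis.com": "Google",
}

def discover_shadow_ai(log_lines: list[str], sanctioned: set[str]) -> list[dict]:
    """Flag AI vendors seen in DNS/egress logs but absent from the
    sanctioned-tool inventory; each hit enters the register as
    'discovered, unassessed'."""
    hits = []
    for line in log_lines:
        for domain, vendor in AI_ENDPOINTS.items():
            if domain in line and vendor not in sanctioned:
                hits.append({"vendor": vendor, "domain": domain,
                             "status": "discovered, unassessed",
                             "evidence": line.strip()})
    return hits

logs = ["2026-02-03 10:12 host42 -> api.anthropic.com:443"]
print(discover_shadow_ai(logs, sanctioned={"OpenAI"}))
# [{'vendor': 'Anthropic', 'domain': 'api.anthropic.com',
#   'status': 'discovered, unassessed', ...}]
```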
What Auditors Check in an AI Risk Assessment
Assessment quality matters more than assessment existence. Auditors reviewing AI risk programs look for specific evidence artifacts, not general policies. Here is what the examination focuses on.
The Evidence Package
A complete AI risk assessment produces these deliverables:
- AI system inventory: Every AI system cataloged with purpose, owner, deployment date, data sources, affected populations, and risk classification. Export as a structured spreadsheet or database record.
- Risk register: Each system scored on likelihood and impact with documented treatment decisions and residual risk acceptance. Signed by the AI steering committee or designated accountable party.
- Assessment methodology documentation: The scoring criteria, the risk categories evaluated, the data sources used, and the frequency of reassessment. NIST AI RMF Playbook subcategories provide the template.
- Treatment evidence: For each mitigated risk, the specific control implemented and the evidence of implementation. Retraining logs, guardrail configurations, human review protocols, vendor contract amendments.
- Monitoring dashboards or reports: Ongoing performance metrics showing model accuracy, bias metrics, incident counts, and drift detection results over time.
Common Assessment Failures
From reviewing AI governance programs across regulated industries, these failures appear repeatedly:
- Inventory gap: The risk register covers 5 approved AI systems. The network scan reveals 23. Shadow AI accounts for 78% of the actual AI footprint.
- Assessment-without-measurement: The register lists risks as “medium” with no supporting data. No accuracy benchmarks. No bias testing results. No red team findings. The word “medium” is not evidence.
- Static assessment: The risk assessment was performed once at deployment. No reassessment after model updates, data pipeline changes, or regulatory shifts. NIST AI RMF and EU AI Act Article 9 both require continuous, lifecycle-spanning assessment.
- Missing societal impact: The assessment covers organizational risk. It ignores impact on individuals and communities. ISO 42001 Clause 6.1.4 explicitly requires this broader assessment [ISO 42001 Cl. 6.1.4].
- No designated owner: The register lists “IT Department” as the risk owner. No individual name. No steering committee charter. No accountability chain. GOVERN 1.3 requires established, identifiable ownership [NIST AI 100-1].
Before your next AI governance review, run this pre-audit checklist: (1) Confirm every AI system in the inventory has an individual human owner, not a department. (2) Verify each risk score has quantitative supporting evidence, not qualitative labels. (3) Check the assessment date on every register entry and flag any older than 12 months. (4) Validate a societal impact assessment exists for every system classified as high-risk. (5) Confirm the AI steering committee reviewed and signed the register within the last quarter.
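Most of that checklist automates cleanly against register entries. A sketch under assumed field names (our own, not mandated by any framework):

```python
from datetime import date, timedelta

def pre_audit_findings(entry: dict, today: date) -> list[str]:
    """Run the five checklist items against one register entry
    (a dict with illustrative field names)."""
    findings = []
    if not entry.get("owner") or "department" in entry["owner"].lower():
        findings.append("no individual human owner")
    if entry.get("evidence") in (None, "", "low", "medium", "high"):
        findings.append("score lacks quantitative evidence")
    if today - entry["assessed_on"] > timedelta(days=365):
        findings.append("assessment older than 12 months")
    if entry.get("high_risk") and not entry.get("impact_assessment"):
        findings.append("missing societal impact assessment")
    if today - entry.get("committee_review", date.min) > timedelta(days=92):
        findings.append("no committee review in the last quarter")
    return findings

entry = {"owner": "IT Department", "evidence": "medium",
         "assessed_on": date(2024, 11, 1), "high_risk": True}
print(pre_audit_findings(entry, today=date(2026, 2, 1)))
# all five findings fire for this entry
```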
AI risk assessment under NIST AI RMF is not a documentation exercise. It is an operational discipline requiring inventory visibility, quantitative measurement, cross-framework alignment, and continuous monitoring. Organizations with a signed risk register covering all four NIST functions, mapped to EU AI Act Article 9 and ISO 42001 Clause 6.1, hold a defensible position in any regulatory examination. Organizations without one hold a liability.
Frequently Asked Questions
What is an AI risk assessment?
An AI risk assessment is a systematic process to identify, analyze, and treat risks specific to AI systems, including bias, hallucination, data provenance issues, security vulnerabilities, and accountability gaps. The NIST AI RMF 1.0 [NIST AI 100-1] organizes the process into four functions: Govern, Map, Measure, and Manage.
What are the four functions of the NIST AI RMF?
The four functions are Govern (establish AI risk culture and accountability), Map (identify context, stakeholders, and potential harms), Measure (quantify risk likelihood and severity against seven trustworthiness characteristics), and Manage (treat risks and establish monitoring) [NIST AI 100-1]. Govern operates continuously across all activities.
How does the EU AI Act require AI risk assessment?
Article 9 of the EU AI Act requires providers of high-risk AI systems to implement a risk management system running throughout the entire product lifecycle [EU AI Act Art. 9]. High-risk categories are defined in Annex III and include employment, credit, education, healthcare, and law enforcement AI. Non-compliance carries fines up to EUR 15 million or 3% of global annual turnover [EU AI Act Art. 99].
What is the difference between AI risk assessment and AI impact assessment?
AI risk assessment identifies threats to the organization from AI systems, focusing on likelihood and severity. AI impact assessment evaluates how an AI system affects individuals and society, extending beyond organizational risk. ISO 42001 Clause 6.1.4 requires both [ISO 42001 Cl. 6.1.4]. The EU AI Act’s Fundamental Rights Impact Assessment (FRIA) under Article 27 adds a human rights dimension for deployers of high-risk systems.
Is the NIST AI RMF mandatory?
The NIST AI RMF is voluntary for the private sector [NIST AI 100-1]. Federal agencies and government contractors align with it under executive orders. It is becoming the standard of care for AI liability defense in the US, meaning courts and regulators reference it when evaluating whether an organization exercised reasonable diligence in managing AI risk.
How often should AI risk assessments be conducted?
Both NIST AI RMF and EU AI Act Article 9 require continuous, lifecycle-spanning assessment, not annual point-in-time reviews [NIST AI 100-1] [EU AI Act Art. 9(1)]. Reassess after every model update, training data change, deployment scope expansion, or regulatory shift. At minimum, conduct a formal review quarterly for high-risk systems and annually for lower-risk systems.
What are the main categories of AI risk?
NIST AI RMF identifies seven trustworthiness characteristics as risk categories: validity and reliability, safety, security and resilience, accountability and transparency, explainability and interpretability, privacy, and fairness with managed bias [NIST AI 100-1]. The NIST GenAI Profile (AI 600-1) adds 12 risks specific to generative AI, including confabulation, intellectual property infringement, and CBRN information generation [NIST AI 600-1].
Does ISO 42001 require AI risk assessment?
ISO/IEC 42001:2023 Clause 6.1.2 requires organizations to identify and analyze AI-related risks, assess consequences for the organization, individuals, and society, and evaluate likelihood and impact against defined risk criteria [ISO 42001 Cl. 6.1.2]. Clause 6.1.3 requires selecting controls from Annex A (39 controls) and documenting a Statement of Applicability. Clause 6.1.4 adds a mandatory AI system impact assessment covering societal effects.