
ISO 42001 Explained


Bottom Line Up Front

ISO/IEC 42001:2023 is the first certifiable international standard for AI management systems. The standard's 10 clauses and 42 Annex A controls provide the governance architecture organizations need to manage AI risk, satisfy regulators, and demonstrate responsible AI practices to customers and auditors.

Your organization runs three ML models in production. One scores credit applications. One predicts customer churn. One screens resumes for your hiring pipeline.

The VP of Engineering owns the infrastructure. The data science team owns the models. Legal drafted an AI ethics statement eighteen months ago.

Nobody owns the governance system connecting all three.

The EU AI Act enters enforcement in August 2026. Article 9 requires a documented risk management system for every high-risk AI deployment [EU AI Act Art. 9]. Two of your three models qualify.

Without a management system, you lack the audit trail, the risk registers, and the lifecycle documentation regulators expect. Seventy-six percent of organizations plan to pursue an AI audit or certification within the next 24 months [Protecht Group 2025].

ISO/IEC 42001:2023 provides the management system framework. Published in December 2023, it is the first certifiable international standard for AI governance. This guide breaks down the standard’s 10-clause structure, its 42 Annex A controls, the integration path with ISO 27001, and the certification process for organizations building an AI management system from the ground up.


What ISO 42001 Requires: The 10-Clause Framework

ISO 42001 follows the Annex SL high-level structure shared by ISO 27001, ISO 9001, and other management system standards [ISO 42001:2023 Clause 1]. The 10 clauses divide into three tiers: foundational definitions (Clauses 1-3), governance architecture (Clauses 4-7), and operational execution (Clauses 8-10).

Every clause builds on the one before it. Skip a tier and the system collapses during audit.

Clauses 4-5: Governance Foundation

Clause 4 defines the organizational context for your AI management system [ISO 42001:2023 Clause 4]. The standard requires you to identify internal and external issues affecting AI outcomes, map interested parties (regulators, customers, affected individuals), and define the AIMS scope.

A healthcare AI system serving EU patients has a different scope boundary than an internal chatbot serving a 200-person sales team. The scope statement drives every subsequent decision.

Clause 5 establishes leadership accountability [ISO 42001:2023 Clause 5]. Top management demonstrates commitment by publishing an AI policy, assigning roles and responsibilities, and allocating resources.

The AI policy is not a principles document. It defines which AI activities the organization performs (development, deployment, or both), the governance objectives for those activities, and the risk appetite for AI-related decisions.

The distinction matters. Most organizations have an AI ethics statement. Few have an AI policy meeting Clause 5 requirements.

The ethics statement says “we value fairness.” The Clause 5 policy states who approves new AI deployments, what risk thresholds trigger escalation, and how the organization monitors AI system performance against defined objectives.

Clauses 6-7: Planning and Support Infrastructure

Clause 6 introduces the AI-specific risk assessment methodology [ISO 42001:2023 Clause 6]. Unlike ISO 27001’s focus on information security risks, ISO 42001 requires organizations to assess risks related to AI system outputs, bias, transparency, and societal impact. The clause also mandates an AI system impact assessment for each system within scope: a documented evaluation of potential consequences to individuals, groups, and societies affected by the AI system’s decisions.

Clause 7 addresses the support infrastructure [ISO 42001:2023 Clause 7]. Three requirements stand out.

First, competence: the organization demonstrates that its personnel possess the skills to develop, deploy, and monitor AI systems. Second, awareness: all relevant staff understand the AI policy, their contribution to the AIMS, and the consequences of nonconformity. Third, documented information: the organization maintains records sufficient to demonstrate conformity during audit.

Clauses 8-10: Operations and Continuous Improvement

Clause 8 operationalizes the planning from Clause 6 [ISO 42001:2023 Clause 8]. Organizations implement the AI risk treatment plan, execute AI system impact assessments on a defined schedule, and control operational processes for AI system development and deployment.

This clause is where governance meets engineering. Documentation of model training, validation, testing, and deployment decisions lives here.

Clause 9 requires performance evaluation through monitoring, measurement, internal audit, and management review [ISO 42001:2023 Clause 9]. Internal audits follow the same protocol as ISO 27001: planned intervals, qualified auditors, documented findings, and corrective action tracking. Management review evaluates whether the AIMS achieves its objectives and directs resource adjustments.

Clause 10 closes the loop with continual improvement [ISO 42001:2023 Clause 10]. Nonconformities trigger root cause analysis and corrective action. The organization evaluates whether systemic changes to the AIMS prevent recurrence.

This is the Plan-Do-Check-Act cycle applied to AI governance: not a one-time implementation, but an ongoing operational discipline.

Map each clause to a responsible owner in your organization. Assign Clauses 4-5 to your AI governance committee chair or CISO. Assign Clause 6 risk assessment to your risk management lead.

Assign Clauses 8-10 to your AI operations team lead. Document ownership in a RACI matrix before starting implementation. The auditor asks “who owns this clause?” during every Stage 2 assessment.
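Before building the full RACI matrix, the ownership map can start as something as simple as the sketch below. The role titles are illustrative assumptions drawn from the examples above, not requirements of the standard:

```python
# Illustrative ISO 42001 clause-ownership map. Role titles are
# hypothetical examples; substitute whatever roles fit your organization.
CLAUSE_OWNERS = {
    "Clause 4 (Context)": "AI governance committee chair",
    "Clause 5 (Leadership)": "CISO",
    "Clause 6 (Planning)": "Risk management lead",
    "Clause 7 (Support)": "CISO",
    "Clause 8 (Operation)": "AI operations team lead",
    "Clause 9 (Performance evaluation)": "AI operations team lead",
    "Clause 10 (Improvement)": "AI operations team lead",
}

def unowned_clauses(owners: dict) -> list:
    """Clauses with no assigned owner -- the auditor's first question."""
    return [clause for clause, owner in owners.items() if not owner]
```

An empty result from `unowned_clauses` is the minimum bar before scheduling a Stage 2 assessment.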

The 42 Annex A Controls: What Auditors Examine

Annex A of ISO 42001 contains 42 control objectives organized into nine domains (A.2 through A.10) [ISO 42001:2023 Annex A]. These controls function identically to ISO 27001’s Annex A: the organization selects applicable controls, documents justifications for any exclusions, and implements controls proportionate to identified risks. The Statement of Applicability (SoA) records these decisions and becomes the auditor’s primary reference document.

AI Policies and Internal Organization (A.2-A.3)

Domain A.2 requires a documented AI policy addressing development and use of AI systems [ISO 42001:2023 Annex A, A.2]. The policy aligns with business requirements, ethical considerations, and applicable regulations. Domain A.3 establishes the internal organizational structure: defined roles, allocated responsibilities, and segregation of duties for AI governance activities.

The audit checkpoint here is specificity. A generic “responsible AI” policy fails. The auditor examines whether the policy addresses your specific AI use cases, defines decision-making authority for AI system approvals, and references the risk appetite established in Clause 5.

A 50-person SaaS company deploying one customer-facing ML model needs a different policy than a financial institution running 40 AI models across lending, fraud detection, and trading.

AI System Lifecycle and Data Governance (A.6-A.7)

Domain A.6 covers the AI system lifecycle from initial design through decommissioning [ISO 42001:2023 Annex A, A.6]. The standard requires documented processes for each phase: requirements definition, data collection and preparation, model development, testing, deployment, monitoring, and retirement.

Traceability across phases is the key requirement. The auditor follows a single AI system from concept to production and verifies documentation exists at every transition.

Domain A.7 addresses data governance with two focus areas: quality and provenance [ISO 42001:2023 Annex A, A.7]. Organizations define data quality requirements for each AI system, implement processes to verify data meets those requirements, and maintain provenance records documenting data sources, transformations, and lineage. For organizations already tracking data lineage under data protection requirements, A.7 extends existing processes rather than creating new ones.

Transparency and Third-Party Management (A.8-A.10)

Domain A.8 governs transparency: what information the organization discloses to individuals affected by AI system decisions [ISO 42001:2023 Annex A, A.8]. The disclosure requirements vary by risk level. A recommendation engine requires different transparency than an automated credit scoring system.

The standard requires organizations to define disclosure policies and implement processes to deliver information in an accessible format.

Domains A.9 and A.10 address AI system use controls and third-party relationships [ISO 42001:2023 Annex A, A.9-A.10]. A.9 requires monitoring of AI system performance in production, including drift detection and outcome evaluation. A.10 establishes requirements for managing AI systems provided by or to third parties: vendor assessments, contractual obligations, and ongoing oversight.

If your organization uses a third-party AI model (from a cloud provider, for instance), A.10 requires documented due diligence on the provider’s AI governance practices.

Create a Statement of Applicability listing all 42 Annex A controls. For each control, document three things: applicability to your scope (yes/no with justification), current implementation status (not started, partial, full), and the evidence artifact supporting the control.

Model your SoA on the ISO 27001 format if you already maintain one. The SoA is the first document the certification auditor requests.
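As a rough sketch, each SoA entry can be modeled as a small record capturing the three items above. The field names and sample controls below are illustrative, not prescribed by the standard:

```python
from dataclasses import dataclass

# Hypothetical Statement of Applicability record; field names and the
# example entries are illustrative, not mandated by ISO 42001.
@dataclass
class SoAEntry:
    control_id: str     # e.g. "A.6.2"
    applicable: bool    # in scope? (justify any exclusion)
    justification: str
    status: str         # "not started" | "partial" | "full"
    evidence: str       # artifact supporting the control

def open_items(soa: list) -> list:
    """Applicable controls not yet fully implemented."""
    return [e.control_id for e in soa
            if e.applicable and e.status != "full"]

soa = [
    SoAEntry("A.2.2", True, "AI policy in scope", "full", "AI-policy-v2.pdf"),
    SoAEntry("A.6.2", True, "Lifecycle in scope", "partial", "MLOps runbook"),
    SoAEntry("A.9.4", False, "No third-party AI customers", "n/a", ""),
]
```

Repeating this for all 42 controls produces the gap list that drives the implementation roadmap.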

ISO 42001 and ISO 27001: The Integration Advantage

Organizations with an existing ISO 27001 certification hold a structural advantage. Both standards share the Annex SL high-level structure, identical clause numbering (4-10), and the same Plan-Do-Check-Act operational model [ISO 27001:2022, ISO 42001:2023]. Integration reduces documentation effort, consolidates internal audit programs, and delivers 30-50% cost savings on certification [Advisera 2025].

Structural Alignment Through Annex SL

Annex SL mandates the same high-level clause structure for every ISO management system standard [ISO Annex SL]. Clause 4 (Context), Clause 5 (Leadership), Clause 6 (Planning), Clause 7 (Support), Clause 8 (Operation), Clause 9 (Performance Evaluation), and Clause 10 (Improvement) appear in both ISO 27001 and ISO 42001 with near-identical requirements.

The differences live in the domain-specific extensions: ISO 27001 adds information security risk treatment; ISO 42001 adds AI system impact assessment and AI-specific risk treatment. Organizations also running SOC 2 programs find the Annex SL structure maps cleanly to Trust Services Criteria, creating a unified governance layer across all three frameworks.

| Dimension | ISO 27001 | ISO 42001 |
|---|---|---|
| Scope | Information security management | AI management systems |
| Annex A Controls | 93 controls across 4 domains | 42 controls across 9 domains |
| Risk Focus | Confidentiality, integrity, availability | AI outcomes, bias, transparency, societal impact |
| Impact Assessment | Not required | AI system impact assessment required |
| Certification Cycle | 3-year cycle, annual surveillance | 3-year cycle, annual surveillance |
| Typical Cost Savings | Baseline | 30-50% reduction with existing ISO 27001 |

Practical Integration: Where the Two Systems Merge

Start with your existing ISMS documentation. Expand the Clause 4 context analysis to include AI-specific interested parties: data subjects affected by AI decisions, AI model vendors, and regulators enforcing AI-specific legislation. Extend the Clause 6 risk assessment methodology to cover AI risks: model bias, data drift, explainability gaps, and unintended downstream effects.

Clause 7 support processes transfer directly. Your existing competence framework, awareness training program, and document control procedures serve both management systems. Add AI-specific competence requirements (ML engineering, data science, AI ethics) to the existing skills matrix rather than building a parallel system.

Run combined internal audits. An auditor examining Clause 9 performance evaluation checks both the ISMS and AIMS in a single engagement. The shared clause structure means one audit schedule, one audit team, one set of findings, and one management review.

Organizations running separate audits for each standard double their audit burden for no additional governance value.

Conduct a gap assessment between your existing ISO 27001 ISMS and ISO 42001 requirements. Focus on three areas where ISO 42001 extends beyond ISO 27001: AI system impact assessment (Clause 8), AI-specific Annex A controls (A.5 through A.10), and data provenance requirements (A.7). Build your implementation plan around closing these specific gaps rather than rebuilding from scratch.

ISO 42001 and Regulatory Compliance: EU AI Act and NIST AI RMF

ISO 42001 does not exist in a regulatory vacuum. Two frameworks define the current AI governance environment: the EU AI Act (binding regulation with enforcement deadlines) and the NIST AI Risk Management Framework (voluntary guidance with federal adoption).

ISO 42001 maps to both. Neither mapping is complete. Understanding the overlaps and gaps determines whether certification delivers regulatory value or creates a false sense of compliance.

Mapping ISO 42001 to the EU AI Act

The EU AI Act and ISO 42001 share 40-50% overlap in high-level requirements [Vanta 2025]. ISO 42001’s risk assessment process (Clause 6) maps to the EU AI Act’s risk management system requirements under Article 9. The AI system impact assessment maps to the conformity assessment obligations for high-risk systems.

Annex A data governance controls (A.7) align with Article 10’s data quality and data governance requirements [EU AI Act Art. 10].

The gaps matter more than the overlaps. The EU AI Act requires specific technical documentation formats (Article 11), explicit human oversight mechanisms (Article 14), and accuracy metrics for high-risk systems (Article 15) [EU AI Act Art. 11, 14, 15]. ISO 42001 addresses these topics at a governance level but does not prescribe the specific technical implementations the Act requires.

Certification demonstrates a systematic approach to AI governance. It does not substitute for Article-by-Article compliance analysis.

The European standards body CEN-CENELEC is developing harmonized standard prEN 18286 to formally bridge ISO 42001 and the EU AI Act. Until finalized, organizations should treat ISO 42001 as a governance foundation and layer EU AI Act-specific controls on top.

Alignment with NIST AI RMF

The NIST AI Risk Management Framework organizes AI governance into four functions: Govern, Map, Measure, and Manage [NIST AI RMF 1.0]. ISO 42001 Clauses 4-5 align with the Govern function. Clauses 6-8 map across Map, Measure, and Manage.

NIST publishes an official crosswalk document linking ISO 42001 clauses and controls to NIST AI RMF categories and subcategories [NIST AI RMF Crosswalk].

The critical difference: ISO 42001 is certifiable. The NIST AI RMF is not. Organizations serving U.S. federal agencies adopt the NIST AI RMF as the risk methodology and use ISO 42001 as the management system wrapper.

Organizations in EU markets implement ISO 42001 for certification value and reference NIST AI RMF for risk assessment depth. The two frameworks complement each other. Implementing both creates a governance program stronger than either one alone.

Build a three-column compliance mapping matrix. Column one: ISO 42001 control reference. Column two: corresponding EU AI Act article. Column three: corresponding NIST AI RMF subcategory.

Identify every row where an EU AI Act requirement has no matching ISO 42001 control. Those gaps become your supplementary compliance workstream. Prioritize gaps affecting any AI system classified as high-risk under the EU AI Act before the August 2026 enforcement date.
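A minimal sketch of that matrix follows. The rows shown reflect the overlaps and gaps discussed above and are illustrative only, not an official crosswalk:

```python
# Illustrative three-column compliance mapping. A None in the iso42001
# column marks an EU AI Act requirement with no matching control --
# a candidate for the supplementary compliance workstream.
MAPPING = [
    {"iso42001": "Clause 6", "eu_ai_act": "Art. 9",  "nist": "GOVERN"},
    {"iso42001": "A.7",      "eu_ai_act": "Art. 10", "nist": "MAP"},
    {"iso42001": None,       "eu_ai_act": "Art. 11", "nist": None},
    {"iso42001": None,       "eu_ai_act": "Art. 14", "nist": None},
]

def supplementary_workstream(rows: list) -> list:
    """EU AI Act articles with no matching ISO 42001 control."""
    return [r["eu_ai_act"] for r in rows if r["iso42001"] is None]
```

Running `supplementary_workstream(MAPPING)` surfaces the articles that need controls layered on top of the AIMS.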

ISO 42001 Certification: Timeline, Cost, and Process

Certification follows the same two-stage audit model used by ISO 27001 and every other Annex SL management system standard. Stage 1 reviews documentation and readiness. Stage 2 assesses implementation effectiveness.

The certificate is valid for three years with annual surveillance audits at 12-month intervals.

Implementation Timeline by Starting Point

Organizations building from scratch typically complete implementation in 6-12 months [Sprinto 2025]. The timeline breaks into four phases: gap assessment (4-6 weeks), documentation development (8-12 weeks), implementation and evidence collection (8-16 weeks), and internal audit plus management review (4-6 weeks). Organizations with an existing ISO 27001 certification compress this to 4-6 months by reusing governance documentation and extending existing processes.

The bottleneck is rarely documentation. The bottleneck is the AI system impact assessment required under Clause 8. Most organizations have never formally assessed the societal and individual impacts of their AI systems.

Building the methodology, executing the assessments, and documenting the results consumes the largest block of implementation time.

Certification Costs by Organization Size

Total investment spans consulting, implementation, and certification audit fees. Small organizations (under 50 employees) budget $15,000-$40,000. Mid-sized organizations (50-500 employees) budget $30,000-$80,000.

Large enterprises (500+ employees) budget $60,000-$200,000 or more, depending on the number of AI systems in scope [Sprinto 2025, Advisera 2025]. Certification body audit fees alone range from $5,000-$25,000 for the initial certification, with annual surveillance audits at 30-40% of the initial fee.

Organizations already holding ISO 27001 certification achieve 30-50% cost savings through documentation reuse, shared internal audit programs, and combined management reviews. The cost equation favors organizations treating ISO 42001 as an ISMS extension rather than a standalone project.
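The three-year audit-fee arithmetic above reduces to a one-liner: the initial certification fee plus two annual surveillance audits at 30-40% of that fee. The dollar figures below are illustrative:

```python
def three_year_audit_cost(initial_fee: float,
                          surveillance_rate: float = 0.35) -> float:
    """Initial certification audit plus two surveillance audits
    (years 2 and 3) priced as a fraction of the initial fee."""
    return initial_fee + 2 * surveillance_rate * initial_fee

# A hypothetical $10,000 initial audit at the 30-40% surveillance range:
low = three_year_audit_cost(10_000, 0.30)   # roughly $16,000
high = three_year_audit_cost(10_000, 0.40)  # roughly $18,000
```

Run the same function against each certification body's quote to compare total three-year cost rather than initial fees alone.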

Selecting an Accredited Certification Body

Accreditation matters. The ANSI National Accreditation Board (ANAB) accredits certification bodies to issue ISO 42001 certificates in the United States [ANAB 2025]. Schellman holds the first ANAB accreditation for ISO 42001.

A-LIGN and SGS also hold ANAB accreditation. Internationally, BSI (which certified KPMG Australia as the first ISO 42001 organization globally) and Bureau Veritas operate accredited programs.

Select a certification body with demonstrated AI domain expertise, not one adding ISO 42001 to an existing catalog without qualified auditors. The standard requires auditors who understand ML model lifecycles, data governance pipelines, and AI risk assessment methodologies. Ask prospective certification bodies how many ISO 42001 audits they have completed and what AI-specific qualifications their audit teams hold.

Start with a formal gap assessment against all 42 Annex A controls. Score each control on a maturity scale: 0 (not addressed), 1 (informal), 2 (documented), 3 (implemented and monitored). Use the gap scores to build a prioritized implementation roadmap.

Address all controls scored 0 or 1 first. Request proposals from at least two ANAB-accredited certification bodies. Compare audit team qualifications, AI domain experience, and total three-year certification cost including surveillance audits.
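A minimal sketch of the maturity scoring and prioritization, with hypothetical control IDs and scores:

```python
# Hypothetical gap-assessment scores on the 0-3 maturity scale described
# above: 0 not addressed, 1 informal, 2 documented, 3 implemented/monitored.
SCORES = {"A.2.2": 3, "A.5.2": 1, "A.6.2": 2, "A.7.3": 0, "A.8.2": 2}

def prioritized_roadmap(scores: dict) -> list:
    """Controls scored 0 or 1, lowest maturity first."""
    gaps = [(cid, s) for cid, s in scores.items() if s <= 1]
    return [cid for cid, _ in sorted(gaps, key=lambda pair: pair[1])]
```

Here the roadmap puts the unaddressed control ahead of the informal one; documented and monitored controls wait for a later phase.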

ISO 42001 converts AI governance from a principles document into an auditable management system. Organizations deploying high-risk AI systems face a binary choice: build the governance infrastructure now at implementation cost, or retrofit it under regulatory pressure at three times the expense. The standard’s Annex SL alignment with ISO 27001 makes the integration path practical for any organization already running an ISMS.

Frequently Asked Questions

What is ISO 42001?

ISO/IEC 42001:2023 is the first international standard for AI management systems (AIMS). Published in December 2023, it provides a certifiable framework of 10 clauses and 42 Annex A controls for organizations developing, providing, or using AI systems. The standard covers AI policy, risk assessment, lifecycle management, data governance, transparency, and third-party oversight [ISO 42001:2023].

How does ISO 42001 differ from NIST AI RMF?

ISO 42001 is a certifiable management system standard with prescribed clauses and auditable controls. The NIST AI Risk Management Framework is a voluntary guidance document organized around four functions: Govern, Map, Measure, and Manage. ISO 42001 provides the governance structure; NIST AI RMF provides the risk methodology.

NIST publishes an official crosswalk mapping the two frameworks [NIST AI RMF Crosswalk]. Many organizations implement both.

How long does ISO 42001 certification take?

Implementation takes 6-12 months for organizations starting from scratch. Organizations with existing ISO 27001 certification typically complete implementation in 4-6 months by reusing governance documentation and extending existing risk assessment processes [Sprinto 2025]. The certification audit itself requires 3-10 days depending on organization size and AI system scope.

What does ISO 42001 certification cost?

Total costs range from $15,000-$40,000 for small organizations (under 50 employees) to $60,000-$200,000+ for large enterprises. Certification body audit fees alone range from $5,000-$25,000. Organizations with existing ISO 27001 certification achieve 30-50% cost savings through documentation reuse and integrated audit programs [Advisera 2025].

Does ISO 42001 certification satisfy EU AI Act requirements?

ISO 42001 certification demonstrates a systematic approach to AI governance but does not automatically satisfy EU AI Act requirements. The two frameworks share 40-50% overlap in high-level requirements [Vanta 2025]. Organizations should implement ISO 42001 as a governance foundation and conduct a separate gap analysis against EU AI Act Articles 9-15 for high-risk AI systems.

Which organizations have achieved ISO 42001 certification?

Notable certified organizations include KPMG Australia (first globally, via BSI), IBM (Granite 4.0 models, via Schellman), Anthropic (January 2025), KPMG U.S. (November 2025), Changi Airport Singapore (February 2025, via SGS), and Cornerstone Galaxy (December 2025). Adoption is accelerating as the EU AI Act enforcement deadline approaches.

How does ISO 42001 relate to ISO 27001?

Both standards share the Annex SL high-level structure with identical clause numbering (4-10) and the Plan-Do-Check-Act operational model. Organizations integrate the two systems by extending their existing ISMS documentation, expanding risk assessments to cover AI-specific risks, and running combined internal audits. Integration delivers 30-50% cost savings compared to standalone implementations [Advisera 2025].

What are the 42 Annex A controls in ISO 42001?

Annex A contains 42 control objectives organized into nine domains: AI Policies (A.2), Internal Organization (A.3), Resources for AI Systems (A.4), Assessing Impacts (A.5), AI System Lifecycle (A.6), Data for AI Systems (A.7), Information for Interested Parties (A.8), Use of AI Systems (A.9), and Third-Party and Customer Relationships (A.10) [ISO 42001:2023 Annex A]. Organizations document applicability and implementation status in a Statement of Applicability.

Get The Authority Brief

Weekly compliance intelligence for security leaders and technology executives. Frameworks decoded. Audit strategies explained. Regulatory updates analyzed.

Discipline in preparation. Confidence in the room.

Josef Kamara, CPA, CISSP, CISA, Security+

Former KPMG and BDO. Senior manager over third-party risk attestations and IT audits at a top-five global firm, and former technology risk leader directing the IT audit function at a Fortune 500 medical technology company. Advises growth-stage SaaS companies on SOC 2, HIPAA, and AI governance certifications.