AI Governance | The Library

Executive frameworks for managing the technical risk of generative AI and automated systems. We align organizational AI deployment with the NIST AI RMF 1.0 to support safety, algorithmic accountability, and regulatory compliance in the age of agentic AI.

AI Governance

AI Bias Auditing: Compliance Requirements Across Three Jurisdictions

State-level AI laws in the United States more than doubled from 49 to 131 in a single year [Stanford AI Index 2025]. Federal agencies issued 59 AI regulations in 2024, up from 25 the year...


NIST AI RMF 1.0 Explained: The Four Functions Every AI Program Needs

Eighty-eight percent of organizations now use AI in at least one business function [McKinsey State of AI 2025]. Fewer than one in five have a mature governance framework to manage what those systems produce [Deloitte...


Singapore Agentic AI Governance Framework: Four Dimensions of Trust

Every AI governance conversation in 2026 starts with the EU AI Act. That is the wrong starting point. Europe built a compliance machine: 113 articles, six risk tiers, penalties up to EUR 35 million. It...


Colorado AI Act (SB 205): Compliance Playbook

Colorado's AI Act (SB 205) takes effect June 30, 2026, making it the first US state law requiring deployers of high-risk AI systems to implement risk management policies, impact assessments, consumer notifications, and appeal processes...


US State AI Laws 2026: The Multi-State Compliance Map

All 50 states introduced over 1,200 AI bills in 2025 alone, with Colorado, Texas, California, Illinois, and New York enacting laws covering algorithmic discrimination, transparency, training data disclosure, and frontier model safety. No federal AI...


NIST AI RMF Affirmative Defense: Compliance as Protection

Colorado SB 205 and Texas TRAIGA grant affirmative defenses to organizations accused of algorithmic discrimination by high-risk AI systems. Claiming the defense requires two prongs: proof of violation discovery and cure, plus documented compliance with...


AI Agent Audit Trails: Logging Autonomous Decisions

AI agent audit trails require five logging layers beyond traditional application logs: decision logs, tool invocation logs, delegation and authority logs, memory and context logs, and inter-agent communication logs. The EU AI Act Article 12...
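The five logging layers above could be captured as structured, append-only records. The sketch below is purely illustrative, assuming nothing from the guide itself: the `LogLayer` enum, the `AgentAuditEvent` dataclass, and its field names are hypothetical, not a schema the article defines.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from enum import Enum


class LogLayer(Enum):
    """The five agent-specific logging layers named in the guide."""
    DECISION = "decision"
    TOOL_INVOCATION = "tool_invocation"
    DELEGATION = "delegation"
    MEMORY_CONTEXT = "memory_context"
    INTER_AGENT = "inter_agent"


@dataclass
class AgentAuditEvent:
    """One append-only audit record; field names are illustrative."""
    agent_id: str
    layer: LogLayer
    action: str
    detail: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        # Serialize the record with the enum flattened to its string value,
        # so log lines stay greppable and machine-parseable.
        record = asdict(self)
        record["layer"] = self.layer.value
        return json.dumps(record, sort_keys=True)


# Example: logging a tool call made by an autonomous agent.
event = AgentAuditEvent(
    agent_id="agent-7",
    layer=LogLayer.TOOL_INVOCATION,
    action="called search_api",
    detail={"query": "vendor risk", "result_count": 12},
)
print(event.to_json())
```

One JSON line per event, tagged by layer, is enough to reconstruct an agent's decision chain after the fact; a production system would additionally need tamper-evidence and retention controls, which this sketch omits.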


Agentic AI Risk Assessment: The 5-Layer Evaluation Framework

Agentic AI risk assessment evaluates five dimensions absent from traditional AI risk: autonomy, delegation, tool use, persistence, and multi-agent coordination. Organizations applying IT risk matrices to autonomous agents miss the categories causing the most damage....
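The five dimensions listed above lend themselves to a simple scoring pass. This is a minimal sketch under assumed conventions (1-5 ratings, unweighted mean); the function name and scale are hypothetical, not the framework's actual method.

```python
# The five agentic risk dimensions named in the guide's teaser.
DIMENSIONS = (
    "autonomy",
    "delegation",
    "tool_use",
    "persistence",
    "multi_agent_coordination",
)


def agentic_risk_score(scores: dict) -> float:
    """Average 1-5 ratings across all five dimensions.

    Raises ValueError if any dimension is unscored or out of range,
    so a dimension cannot silently be skipped.
    """
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"unscored dimensions: {missing}")
    for dim in DIMENSIONS:
        if not 1 <= scores[dim] <= 5:
            raise ValueError(f"{dim} score {scores[dim]} outside 1-5")
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)


print(agentic_risk_score({
    "autonomy": 4,
    "delegation": 3,
    "tool_use": 5,
    "persistence": 2,
    "multi_agent_coordination": 3,
}))  # 3.4
```

Forcing every dimension to be scored is the point: the teaser's warning is precisely that traditional IT risk matrices leave these categories blank.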


Multi-Agent System Governance: When Agents Manage Agents

Multi-agent system governance addresses accountability and failure containment when AI agents orchestrate, delegate to, and supervise other agents. Three governance models (hierarchical, federated, marketplace) carry distinct risk profiles mapped to OWASP Agentic Top 10 threats...


EU AI Act Human Oversight: Article 14 Compliance for High-Risk AI Systems

The greatest risk in high-risk AI is not the algorithm. It is the human approving the algorithm's output without reading it. A 2025 systematic review of 35 studies involving 19,774 participants confirmed what practitioners already...
