ISO 42001 as the governance foundation for agentic AI

The standard was built for this moment

When ISO 42001 was published in 2023 as the world's first international standard for AI management systems, it was designed to provide organisations with a structured, auditable and scalable approach to governing the development, deployment and operation of AI systems across their business. It arrived at exactly the right moment – just as AI systems were beginning to transition from tools that assist humans to systems that act on behalf of humans.

The emergence of agentic AI – autonomous systems capable of goal-directed action, multi-step task execution and real-world interaction with limited human oversight – tests the limits of every existing governance framework. But ISO 42001, properly implemented, is well-suited to serve as the governance foundation for organisations navigating this transition. Here is how.

The management system principle

ISO 42001 is a management system standard, which means it follows the same high-level structure as ISO 9001 (quality), ISO 27001 (information security) and ISO 14001 (environmental management). This structure is deliberate. It ensures that AI governance is not an isolated compliance exercise but is embedded in organisational strategy, leadership accountability, resource allocation, operational processes and continuous improvement cycles.

This matters enormously for agentic AI governance because the risks associated with autonomous systems cannot be managed through one-off technical controls or periodic reviews. They require continuous monitoring, adaptive risk management and clear accountability chains – exactly what a management system provides. The NIST AI Risk Management Framework, which the Berkeley Agentic AI Risk-Management Standards Profile builds upon, is structured around four functions: Govern, Map, Measure and Manage. ISO 42001 operationalises all four.

How ISO 42001 maps to agentic AI governance requirements

The Govern function in the NIST AI RMF requires that policies, processes and practices for mapping, measuring and managing AI risks are in place, transparent and effectively implemented – and that accountability structures ensure the right people are empowered and responsible. ISO 42001 Clauses 4 (context), 5 (leadership) and 6 (planning) directly address these requirements. They require organisations to understand the internal and external factors that affect AI governance, establish leadership commitment and accountability at the highest organisational level and develop AI objectives and risk treatment plans that are documented and measurable.

For agentic AI specifically, this means your ISO 42001 implementation should explicitly address: the scope of autonomous decision-making authority granted to AI systems, the human oversight mechanisms required for different levels of agent autonomy, the escalation pathways for anomalous agent behaviour and the shutdown and intervention procedures that must be in place before deployment.
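As a minimal sketch of how that documentation might be made machine-checkable, autonomy tiers and escalation rules could be encoded as explicit policy data. All names here (the tier labels, the example agent and actions) are illustrative assumptions, not terminology prescribed by ISO 42001:

```python
from dataclasses import dataclass
from enum import Enum

class AutonomyLevel(Enum):
    """Illustrative tiers of agent decision-making authority."""
    ADVISE_ONLY = 1        # agent recommends; a human executes
    ACT_WITH_APPROVAL = 2  # agent acts only after explicit human sign-off
    ACT_AND_REPORT = 3     # agent acts within scope, logging for review

@dataclass
class AgentPolicy:
    """Documents the authority granted to one agent, in the spirit of
    Clause 6 planning: scope, oversight level and escalation contact."""
    agent_id: str
    level: AutonomyLevel
    permitted_actions: set[str]
    escalation_contact: str  # who is alerted on anomalous behaviour

    def requires_human_approval(self, action: str) -> bool:
        if action not in self.permitted_actions:
            return True  # out-of-scope actions always escalate to a human
        return self.level in (AutonomyLevel.ADVISE_ONLY,
                              AutonomyLevel.ACT_WITH_APPROVAL)

# Hypothetical example: an invoice-processing agent with narrow authority
policy = AgentPolicy("invoice-agent-01", AutonomyLevel.ACT_AND_REPORT,
                     {"read_invoice", "flag_duplicate"},
                     "ops-oncall@example.com")
```

The point of a structure like this is that the scope of autonomous authority becomes an auditable artefact rather than a paragraph in a policy document.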

The Map function in the NIST AI RMF requires that context is established and understood – including the intended purposes and deployment settings of AI systems, the risk tolerances of the organisation and the impacts to individuals and society. ISO 42001 Clause 6.1 (actions to address risks and opportunities) and Clause 8.4 (AI system impact assessment) are the implementation vehicles for this. A properly executed AI impact assessment under ISO 42001 should include analysis of the specific risks that arise from agentic capabilities – the velocity problem, the oversight subversion risk, the multi-agent interaction dynamics – not just the conventional AI risks of bias and data quality.

The Measure function requires that appropriate methods and metrics are identified and applied to evaluate AI systems for trustworthy characteristics. ISO 42001 Clause 9 (performance evaluation) addresses this through requirements for monitoring, measurement, analysis and evaluation. For agentic systems, this means building measurement processes that go beyond task completion metrics to include behavioural monitoring, anomaly detection and periodic red-team evaluation.
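A crude illustration of behavioural monitoring beyond task-completion metrics: compare the distribution of an agent's recent actions against an established baseline and flag new or sharply over-represented action types for review. This is a sketch under assumed inputs (action-frequency counters), not a substitute for dedicated monitoring tooling:

```python
from collections import Counter

def detect_behavioural_drift(baseline: Counter, window: Counter,
                             ratio_threshold: float = 3.0) -> list[str]:
    """Return action types in the recent window whose frequency deviates
    sharply from the baseline distribution, or that never appeared in the
    baseline at all -- both warrant human review."""
    anomalies = []
    total_base = sum(baseline.values()) or 1
    total_win = sum(window.values()) or 1
    for action, count in window.items():
        base_rate = baseline.get(action, 0) / total_base
        win_rate = count / total_win
        # Flag never-before-seen actions, or rates far above baseline
        if base_rate == 0 or win_rate / base_rate > ratio_threshold:
            anomalies.append(action)
    return anomalies
```

In a Clause 9 monitoring process, output from a check like this would feed the analysis and evaluation records the standard requires, alongside anomaly detection and periodic red-team results.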

The Manage function requires that AI risks are prioritised, responded to and managed – and that post-deployment monitoring plans are implemented. ISO 42001 Clause 8 (operational controls) and Clause 10 (improvement) address the operational and corrective dimensions of this. Critically, an ISO 42001 management system requires documented incident response procedures, which for agentic AI must specifically include the scenarios where an agent behaves unexpectedly, acquires unauthorised access or exhibits the kinds of emergent behaviours the Berkeley paper identifies.
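One way to make such procedures concrete is to enumerate the incident scenarios and bind each to a documented first response. The categories and response names below are illustrative assumptions; an organisation's actual procedures would come from its own Clause 8 operational controls:

```python
from enum import Enum, auto

class IncidentKind(Enum):
    """Illustrative agentic-AI incident categories."""
    UNEXPECTED_BEHAVIOUR = auto()   # agent acts outside its documented scope
    UNAUTHORISED_ACCESS = auto()    # agent acquires access it was not granted
    EMERGENT_INTERACTION = auto()   # unplanned multi-agent dynamics
    NEAR_MISS = auto()              # caught before impact; still recorded

# Hypothetical mapping from incident kind to documented first response
FIRST_RESPONSE = {
    IncidentKind.UNEXPECTED_BEHAVIOUR: "pause_agent_and_notify_owner",
    IncidentKind.UNAUTHORISED_ACCESS: "revoke_credentials_and_isolate",
    IncidentKind.EMERGENT_INTERACTION: "suspend_affected_agents",
    IncidentKind.NEAR_MISS: "log_for_periodic_review",
}

def first_response(kind: IncidentKind) -> str:
    """Every defined incident kind must resolve to a documented response."""
    return FIRST_RESPONSE[kind]
```

An exhaustive mapping like this makes gaps visible: if a new incident category is added without a response, the procedure fails loudly rather than silently.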

The certification dimension

ISO 42001 is a certifiable standard. Independent third-party certification provides external validation that an organisation's AI management system meets the requirements of the standard – through a rigorous audit process that examines not just documentation but actual implementation.

For boards and executives, certification serves several important functions. It provides assurance that AI governance commitments are being fulfilled in practice, not just on paper. It creates accountability by subjecting governance processes to external scrutiny. And it provides a credible signal to regulators, customers, partners and investors that the organisation takes AI governance seriously.

As AI regulation accelerates globally – with the EU AI Act now in force, Singapore's AI governance frameworks evolving and disclosure requirements emerging across multiple jurisdictions – ISO 42001 certification is likely to transition from a competitive differentiator to a baseline expectation for organisations deploying AI in high-stakes contexts.

Implementation priorities for agentic AI

For organisations that are either beginning an ISO 42001 implementation or seeking to extend an existing implementation to cover agentic AI, the Berkeley paper provides detailed guidance on the specific controls and considerations that should be incorporated. These include: defining agent autonomy levels and documenting the authority and tool access granted to each level; establishing role-based permission management systems with minimum privilege principles; building human oversight checkpoints with clear trigger conditions; developing shutdown and intervention procedures that are tested, not just documented; and creating feedback and incident reporting mechanisms that capture near-misses as well as actual incidents.
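The role-based permission control in that list can be sketched as a deny-by-default check: a tool call is allowed only if it has been explicitly granted to the agent's role. The roles and tool names here are hypothetical, and a production system would enforce this at the tool-invocation boundary rather than in application code:

```python
# Minimum-privilege grants: anything not listed is denied by default
PERMISSIONS: dict[str, set[str]] = {
    "triage-agent": {"search_tickets", "draft_reply"},
    "billing-agent": {"read_invoice"},
}

def authorise(role: str, tool: str) -> bool:
    """Deny-by-default authorisation: unknown roles and ungranted
    tools are both refused, satisfying the minimum-privilege principle."""
    return tool in PERMISSIONS.get(role, set())
```

Deny-by-default is the essential design choice: extending an agent's authority requires an explicit, reviewable change to the grant table, which is exactly the kind of documented decision an ISO 42001 audit can examine.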

None of these are technically complex. All of them require organisational commitment and disciplined implementation. That is precisely what a management system standard is designed to support and sustain over time.

Relevant frameworks: ISO 42001 (Clauses 4–10) | NIST AI RMF (all four functions) | Berkeley Agentic AI Profile: Govern 1.2, 1.4, Map 1.1, 1.5, Measure 1.1, Manage 1.3
