Mapping your AI risk landscape: What the NIST AI RMF requires and why it matters

Risk mapping is not a compliance exercise

One of the most common governance failures in AI deployment is treating risk mapping as a box-ticking exercise – a document produced to satisfy a policy requirement, filed and revisited only when something goes wrong. For agentic AI systems, this approach is not merely inadequate. It is actively dangerous.

The Map function of the NIST AI Risk Management Framework exists to ensure that organisations understand the context in which their AI systems operate, the risks those systems generate and the potential impacts on individuals, communities and the organisation itself – before deployment, not after. The UC Berkeley Agentic AI Risk-Management Standards Profile, which builds on the NIST AI RMF, provides detailed supplementary guidance on how to execute the Map function for agentic systems specifically. The guidance reveals just how much most organisations are currently missing.

Understanding context: Map 1.1

Map 1.1 is one of the high-priority subcategories in the Berkeley framework. It requires that the intended purposes, potentially beneficial uses, applicable laws and norms, and prospective deployment settings of the AI system are understood and documented – including the specific user populations, potential positive and negative impacts, assumptions about system purposes and the metrics that will be used to assess performance.
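Captured as a structured record rather than free-form prose, that documentation requirement becomes checkable. The sketch below is a minimal, hypothetical Python scaffold; the field names are our shorthand for the items Map 1.1 lists, not a schema from the NIST or Berkeley documents.

```python
# Hypothetical scaffold for a Map 1.1 context record. The field names
# are our shorthand for the items Map 1.1 lists, not an official schema.
from dataclasses import dataclass


@dataclass
class Map11ContextRecord:
    system_name: str
    intended_purposes: list[str]           # what the system is for
    beneficial_uses: list[str]             # potentially beneficial uses
    applicable_laws_and_norms: list[str]   # legal and normative context
    deployment_settings: list[str]         # prospective deployment settings
    user_populations: list[str]            # specific user populations
    potential_impacts: list[str]           # positive and negative impacts
    assumptions: list[str]                 # assumptions about system purposes
    performance_metrics: list[str]         # metrics used to assess performance
    reviewed_by: str = ""                  # who signed off on the record
```

Even this much structure makes gaps visible: an empty list becomes a prompt for analysis rather than a blank that nobody notices.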

For agentic AI, the Berkeley paper identifies eight distinct risk categories that should be assessed under Map 1.1: discrimination and toxicity; privacy and security; misinformation; malicious actors and misuse; human-computer interaction; loss of control; socioeconomic and environmental harms; and AI system safety, failures and limitations. Each of these categories contains specific subcategories of risk that are particular to agentic systems and that would not be captured in a risk assessment designed for conventional AI.

The loss of control category alone – which includes self-proliferation, self-modification, self-exfiltration, agentic misalignment, deceptive behaviour and scheming, reward hacking, collusion and long-term planning and goal pursuit – represents a risk landscape that most corporate risk frameworks are not currently equipped to assess. Executives should ask: does our current AI risk assessment process capture any of these? And if the answer is no, what does that mean for our governance posture?
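The categories and subcategories above lend themselves to an explicit assessment checklist, which makes that executive question answerable rather than rhetorical. A minimal sketch, assuming nothing beyond the lists in this section; the identifiers are our abbreviations of the paper's terms.

```python
# Hypothetical checklist scaffold built from the eight Map 1.1 risk
# categories and the loss-of-control subcategories named above. The
# identifiers are our abbreviations of the paper's terms, not a schema.
from enum import Enum


class RiskCategory(Enum):
    DISCRIMINATION_AND_TOXICITY = "discrimination and toxicity"
    PRIVACY_AND_SECURITY = "privacy and security"
    MISINFORMATION = "misinformation"
    MALICIOUS_ACTORS_AND_MISUSE = "malicious actors and misuse"
    HUMAN_COMPUTER_INTERACTION = "human-computer interaction"
    LOSS_OF_CONTROL = "loss of control"
    SOCIOECONOMIC_AND_ENVIRONMENTAL = "socioeconomic and environmental harms"
    SYSTEM_SAFETY_AND_FAILURES = "AI system safety, failures and limitations"


LOSS_OF_CONTROL_SUBCATEGORIES = [
    "self-proliferation",
    "self-modification",
    "self-exfiltration",
    "agentic misalignment",
    "deceptive behaviour and scheming",
    "reward hacking",
    "collusion",
    "long-term planning and goal pursuit",
]


def unassessed(assessment: dict) -> list:
    """Return the Map 1.1 categories the assessment has not yet covered."""
    return [c for c in RiskCategory if c not in assessment]


# An assessment that covers only conventional AI risks leaves most of
# the agentic landscape unexamined:
partial = {RiskCategory.PRIVACY_AND_SECURITY: "covered by existing ISMS"}
print(len(unassessed(partial)))  # 7
```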

Cumulative risk at scale

One of the most important principles in the Berkeley framework's Map guidance is the requirement to assess the cumulative impact of actions performed at scale. The paper is explicit: individual actions that appear low-risk in isolation may pose significant risk when executed repeatedly by autonomous agents at scale.

This is a materially different risk assessment approach from what most organisations apply. Conventional risk assessment evaluates the risk of a single event or decision. Agentic AI risk assessment must evaluate the risk of a pattern of decisions – including the aggregate effects of many individually low-consequence actions that together constitute a significant impact.

An agent that makes minor pricing errors in 0.1% of transactions might appear to pose negligible risk when each transaction is considered individually. At scale – processing millions of transactions – that error rate produces consequences that are neither minor nor negligible. Risk tolerances must be set with this cumulative dimension in mind.
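The arithmetic is worth making explicit. In the sketch below, the error rate, transaction volume and average loss per error are invented figures for illustration, not data from any deployment.

```python
# Illustrative arithmetic only: the error rate, volume and per-error
# loss below are invented figures, not data from any deployment.

def cumulative_exposure(error_rate: float, volume: int, loss_per_error: float):
    """Expected aggregate impact of many individually low-consequence errors."""
    expected_errors = error_rate * volume
    return expected_errors, expected_errors * loss_per_error


errors, exposure = cumulative_exposure(
    error_rate=0.001,      # 0.1% of transactions mispriced
    volume=5_000_000,      # transactions processed by the agent
    loss_per_error=12.50,  # assumed average loss per mispriced transaction
)
print(f"{errors:,.0f} expected errors, {exposure:,.2f} expected aggregate loss")
# 5,000 expected errors, 62,500.00 expected aggregate loss
```

Each individual transaction sits comfortably inside a conventional per-event tolerance; the aggregate plainly does not, which is why the tolerance must be set on the pattern, not the event.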

Defining risk tolerances: Map 1.5

Map 1.5 – another high-priority subcategory in the Berkeley framework – requires that organisational risk tolerances are determined and documented. For agentic systems, the framework recommends establishing multiple tiers of risk below the intolerable threshold, to provide adequate response time as a system approaches unacceptable risk levels.

The Berkeley paper identifies specific risk tolerance considerations that are particular to agentic AI: unauthorised access and privilege escalation, lack of adherence to instructions and control, the cumulative risk of low-risk actions at scale, rapid capability emergence and the correlated failure risk of systems sharing underlying models or data. Risk tolerances should be set for each of these dimensions, not just for overall system performance.
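One way to make tiered, per-dimension tolerances concrete is a simple register that flags escalation well before the intolerable threshold is reached. In the sketch below, the five dimensions follow the Berkeley paper's list above, but the tier names, risk scores and thresholds are invented for illustration.

```python
# Hypothetical tolerance register. The five dimensions follow the
# Berkeley paper's list above; the tier names, scores and thresholds
# are invented for illustration.

TIERS = ["acceptable", "elevated", "critical", "intolerable"]

# Each dimension gets its own tiered thresholds on a 0-1 risk score,
# buying response time before the intolerable level is reached.
TOLERANCES = {
    "unauthorised_access":   (0.05, 0.15, 0.30),
    "instruction_adherence": (0.10, 0.25, 0.40),
    "cumulative_scale_risk": (0.10, 0.20, 0.35),
    "capability_emergence":  (0.05, 0.10, 0.20),
    "correlated_failure":    (0.10, 0.20, 0.30),
}


def tier_for(dimension: str, risk_score: float) -> str:
    """Return the tier an observed risk score falls into for one dimension."""
    for tier, threshold in zip(TIERS, TOLERANCES[dimension]):
        if risk_score < threshold:
            return tier
    return TIERS[-1]


# A rising score moves through tiers, triggering escalation early:
print(tier_for("capability_emergence", 0.12))  # critical
```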

ISO 42001 Clause 6.1 requires that organisations identify AI risks and determine their response – accept, treat, transfer or avoid – based on documented criteria. The risk tolerance framework established under Map 1.5 provides the criteria that should underpin those decisions. Without explicit risk tolerances, risk treatment decisions become ad hoc and inconsistent.

Characterising impacts: Map 5.1

Map 5.1 requires that the likelihood and magnitude of identified impacts are assessed, including both beneficial and harmful impacts, across individual users, communities, organisations and society. For agentic AI, this assessment must account for the agent's specific properties – its autonomy level, authority, causal impact capacity and environmental scope – because these properties directly determine the magnitude of potential harm.

The Berkeley paper recommends a dimensional governance approach to this assessment: rather than classifying an AI system as simply high-risk or low-risk, organisations evaluate it across multiple dimensions, including autonomy (from no autonomy to full autonomy), authority (the range of actions the agent can perform and its level of integration with other systems), causal impact (from observation only to comprehensive environmental control) and environmental complexity (from simulated to physical). A system's position on each of these dimensions shapes its risk profile and the governance controls it requires.
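To illustrate what a dimensional profile might look like in practice, here is a minimal sketch. The four dimensions follow the paper's description above; the discrete levels and the mapping from profile to controls are our own illustration, not the paper's method.

```python
# Hypothetical dimensional profile. The four dimensions follow the
# Berkeley paper's description above; the discrete levels and the
# mapping to controls are our own illustration, not the paper's method.
from dataclasses import dataclass
from enum import IntEnum


class Autonomy(IntEnum):
    NONE = 0
    SUGGESTS_ACTIONS = 1
    ACTS_WITH_APPROVAL = 2
    FULL = 3


class Authority(IntEnum):
    READ_ONLY = 0
    SINGLE_SYSTEM = 1
    INTEGRATED = 2
    BROAD = 3


class CausalImpact(IntEnum):
    OBSERVATION_ONLY = 0
    REVERSIBLE_ACTIONS = 1
    COSTLY_TO_REVERSE = 2
    COMPREHENSIVE_CONTROL = 3


class Environment(IntEnum):
    SIMULATED = 0
    SANDBOXED = 1
    PRODUCTION = 2
    PHYSICAL = 3


@dataclass
class DimensionalProfile:
    autonomy: Autonomy
    authority: Authority
    causal_impact: CausalImpact
    environment: Environment

    def required_controls(self) -> list[str]:
        """Governance responses keyed to the profile, not a single label."""
        controls = []
        if self.autonomy >= Autonomy.ACTS_WITH_APPROVAL:
            controls.append("human oversight checkpoints")
        if self.causal_impact >= CausalImpact.COSTLY_TO_REVERSE:
            controls.append("pre-action review for hard-to-reverse steps")
        if self.environment >= Environment.PRODUCTION:
            controls.append("continuous monitoring with rollback")
        return controls
```

Two systems with the same overall "risk label" can sit at very different points on these axes and therefore warrant different controls, which is what the binary classification obscures.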

This multidimensional approach is more demanding than binary risk classification. It is also more accurate and it produces governance responses that are proportionate rather than generic.

Making risk mapping operationally useful

For the Map function to serve its purpose, the outputs of risk mapping need to be directly connected to the governance controls, evaluation frameworks and monitoring processes that follow. A risk map that identifies priority risks but is not connected to Measure or Manage activities is a document, not a governance tool.

ISO 42001 provides the management system structure that ensures this connection. The standard requires that risk assessments inform risk treatment plans, that risk treatment plans are implemented through operational controls and that controls are monitored and reviewed. When properly implemented, this creates a closed loop between risk identification and risk management – which is exactly what agentic AI governance requires.
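As a concrete illustration of that closed loop, the sketch below links a mapped risk to its treatment decision, operational controls and monitoring metrics. The record structure and names are hypothetical, not an ISO 42001 artefact.

```python
# Hypothetical closed-loop record linking a mapped risk to its
# treatment, controls and monitoring. The structure and names are
# ours, not an ISO 42001 artefact.
from dataclasses import dataclass, field


@dataclass
class MappedRisk:
    risk_id: str
    description: str
    treatment: str  # accept | treat | transfer | avoid (ISO 42001 Clause 6.1)
    controls: list = field(default_factory=list)            # operational controls (Manage)
    monitoring_metrics: list = field(default_factory=list)  # evaluated under Measure

    def is_operational(self) -> bool:
        """A risk map entry is a governance tool only once it is wired to
        controls and monitoring; otherwise it is just a document."""
        return self.treatment == "accept" or (
            bool(self.controls) and bool(self.monitoring_metrics)
        )


risk = MappedRisk(
    risk_id="LOC-03",
    description="agent self-modification of its own task queue",
    treatment="treat",
    controls=["immutable task queue", "change approval gate"],
    monitoring_metrics=["queue-mutation attempts per day"],
)
assert risk.is_operational()
```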

Relevant frameworks: NIST AI RMF (Map 1.1, 1.5, 2, 3, 5.1) | ISO 42001 Clauses 6.1, 8.4 | Berkeley Agentic AI Profile: Map function (all sections)
