The AI supply chain: A governance gap most boards are ignoring

Your AI system is not just your AI

When an organisation deploys an agentic AI system, it is typically deploying a complex assembly of components built by multiple parties: a foundation model from one provider, a deployment framework from another, third-party data sources, external APIs, pre-trained specialist models, software libraries and cloud infrastructure. Each of these components introduces potential risks – and the organisation deploying the system is ultimately accountable for the aggregate risk of the whole, regardless of where each component originated.

This is the AI supply chain governance challenge, and it is one that the UC Berkeley Agentic AI Risk-Management Standards Profile identifies as a high-priority area under the NIST AI RMF's Govern function. For most organisations, it represents a significant gap between the scope of their current AI governance frameworks and the actual boundaries of their AI risk exposure.

What the supply chain encompasses

The Berkeley paper, drawing on research by Sheh and Geappen, identifies the AI supply chain as encompassing: the data used to train and fine-tune models; the models themselves; the software frameworks and libraries used to deploy and orchestrate them; the cloud infrastructure on which they run; the third-party APIs and tools that agents are given access to; and any external agents or agentic systems with which the deployed system interacts.

Each layer of this supply chain is a potential source of risk. Training data can introduce biases, vulnerabilities or harmful content into model behaviour. Pre-trained models from external providers may have capabilities, limitations or alignment properties that are not fully disclosed or well understood. Software libraries may contain security vulnerabilities. External APIs may change their data formats or behaviour in ways that cause agents to misinterpret their outputs. And third-party agents – increasingly common in multi-agent architectures – introduce risks that depend on the governance standards of the organisations that developed them.

NIST AI RMF Govern 6 and the ISO 42001 response

The NIST AI RMF addresses supply chain governance under Govern 6, which requires that policies and procedures are in place to address AI risks and benefits arising from third-party software, data and other supply chain issues. The Berkeley paper's guidance on Govern 6.1 is specific: governance mechanisms for agentic AI must account for risks arising from interactions with external agents, and oversight cannot be limited to individual agent behaviour – it must also monitor the health and safety of the agent's interactions with external agentic systems and tools.

ISO 42001 provides the management system scaffolding for implementing these requirements. Clause 8 (operational planning and control) requires that controls are implemented for risks identified in the planning phase, which for supply chain risks means: documented assessment of all third-party components used in AI systems; defined criteria for supplier and component selection that include AI-specific risk considerations; ongoing monitoring of third-party components for changes that could affect system behaviour or risk profile; and procedures for responding to supply chain incidents, including component vulnerabilities or supplier failures.

The AI bill of materials

One of the practical governance tools recommended in the Berkeley paper for supply chain transparency is the AI Bill of Materials (AIBOM) – a formal record of the components used to build, train, test and deploy an AI system, modelled on the Software Bill of Materials (SBOM) concept that is now widely adopted in software security governance.

An AIBOM enables organisations to understand what their AI systems are made of, to assess the risk profile of each component and to respond efficiently when a vulnerability or incident affecting one component needs to be addressed across all systems that depend on it. Without an AIBOM, organisations are managing AI supply chain risk in the dark – they may not know which of their systems are affected by a disclosed vulnerability in a commonly used model or library until the consequences are already visible.
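To make the idea concrete, the sketch below shows a minimal AIBOM as a plain data structure, together with the lookup it enables: given a disclosed vulnerability in one component, list every deployed system that depends on it. The field names, component categories and system names are illustrative assumptions, not a standard AIBOM schema.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One entry in an AI Bill of Materials.
    Field names are illustrative, not a standard schema."""
    name: str      # e.g. a model, library, API or dataset
    kind: str      # "model" | "library" | "api" | "dataset" | "infra"
    version: str
    supplier: str

@dataclass
class AIBOM:
    """The bill of materials for one deployed AI system."""
    system_name: str
    components: list[Component] = field(default_factory=list)

def systems_affected(aiboms: list[AIBOM], component_name: str) -> list[str]:
    """Given a vulnerability disclosed in a named component, return
    every deployed system whose AIBOM includes that component."""
    return [b.system_name for b in aiboms
            if any(c.name == component_name for c in b.components)]

# Hypothetical example: two deployments sharing one foundation model.
boms = [
    AIBOM("claims-triage-agent", [
        Component("acme-foundation-llm", "model", "2.1", "Acme AI"),
        Component("orchestrator-lib", "library", "0.9.4", "open source"),
    ]),
    AIBOM("research-assistant", [
        Component("acme-foundation-llm", "model", "2.1", "Acme AI"),
    ]),
]
print(systems_affected(boms, "acme-foundation-llm"))
```

In practice an AIBOM would record far more (licences, data provenance, access permissions), but even this minimal record answers the question that matters most during an incident: which of our systems are exposed?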

For boards requesting evidence of AI governance maturity, asking for an AIBOM for each significant AI deployment is a reasonable and practical starting point.

Intellectual property and autonomous action

An often-overlooked dimension of AI supply chain governance is intellectual property risk. Agentic AI systems acting autonomously may take actions that infringe third-party intellectual property rights – reproducing copyrighted content, using proprietary data in training or fine-tuning or generating outputs that incorporate protected material without attribution or licence.

The Berkeley paper recommends implementing content filtering to reduce this risk and exercising particular caution with systems that continuously learn from their environments. For organisations in content-intensive industries – media, publishing, research, professional services – this risk warrants explicit risk assessment and documented controls.

Monitoring for supply chain changes

The Berkeley paper's guidance on Govern 1.5 identifies several events that should trigger a comprehensive re-evaluation of the risk management plan, including integration of new supply chain components, removal of existing components and changes to entities in the supply chain. This is a governance requirement that few organisations have yet built into their AI operational processes.

In practice, it means establishing a monitoring process that tracks material changes in the supply chain of deployed AI systems – model updates, API changes, infrastructure changes, new third-party integrations – and triggering risk review processes when those changes occur. Without this, an organisation may find that the AI system it deployed and assessed six months ago has quietly changed in ways that invalidate its earlier risk assessment.
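The core of such a monitoring process is a simple comparison: snapshot the component versions at the time of the last risk assessment, snapshot them again periodically, and flag every material difference as a trigger for review. The sketch below illustrates this under the assumption that each snapshot is a component-to-version mapping; the component names are hypothetical.

```python
def supply_chain_changes(assessed: dict[str, str],
                         current: dict[str, str]) -> list[str]:
    """Compare the component->version snapshot taken at the last risk
    assessment against the current snapshot. Each returned event is a
    material change that should trigger a risk review (per Govern 1.5:
    components added, removed or changed)."""
    events = []
    for name, version in current.items():
        if name not in assessed:
            events.append(f"ADDED: {name} {version}")
        elif assessed[name] != version:
            events.append(f"CHANGED: {name} {assessed[name]} -> {version}")
    for name in assessed:
        if name not in current:
            events.append(f"REMOVED: {name}")
    return events

# Hypothetical snapshots: a model update and a new third-party tool
# have occurred since the last assessment.
assessed = {"acme-foundation-llm": "2.1", "payments-api": "v3"}
current = {"acme-foundation-llm": "2.2", "payments-api": "v3",
           "calendar-tool": "1.0"}
for event in supply_chain_changes(assessed, current):
    print(event)  # each event warrants a risk review
```

Real deployments would feed this comparison from provider release notes, dependency manifests and API changelogs rather than hand-maintained dictionaries, but the governance logic is the same: no material change passes without a documented review decision.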

A practical starting point

For executive teams seeking to address AI supply chain governance as a near-term priority, three steps are worth immediate attention. First, commission a supply chain inventory for each significant AI deployment, documenting all components, their origins and their access permissions. Second, assess whether existing supplier risk management processes adequately cover AI-specific supply chain risks, including model vulnerabilities, data provenance and autonomous action risks. Third, establish a process for monitoring material supply chain changes and triggering risk reviews when they occur.

The AI supply chain is not a technical issue to be delegated entirely to IT and procurement. It is a governance issue that carries enterprise-level risk. Boards that treat it as such will be better positioned to manage it effectively.

Relevant frameworks: NIST AI RMF (Govern 6, Govern 1.5) | ISO 42001 Clauses 6.1, 8.4, 8 | Berkeley Agentic AI Profile: Govern 6.1, Govern 1.5
