AI Governance, Ethics, and Legal Considerations
Artificial intelligence is moving faster than most organisations' governance frameworks. The legal and regulatory landscape is solidifying rapidly — the EU AI Act is now in force, sector regulators are issuing AI-specific guidance, courts are beginning to address AI liability, and investors and standards bodies are expecting organisations to demonstrate accountable AI governance.
Most leaders do not yet have a working understanding of what this means for their organisation, their personal liability, or their ESG and compliance obligations. This course fills that gap. It is not a technical course on how AI works — it is a governance, ethics, and legal course on what responsible AI deployment requires of decision-makers.
Content covers:
- The AI governance landscape: what the EU AI Act requires, how it classifies AI systems by risk level, and what prohibited and high-risk applications mean in practice.
- AI ethics principles and how they translate from philosophy to operational policy: fairness, transparency, accountability, human oversight, and non-discrimination.
- The legal dimensions of AI deployment, including liability for AI-caused harm, intellectual property issues in AI-generated content, data protection and privacy obligations under the GDPR and equivalent frameworks, and contractual risk in AI vendor relationships.
- AI and human rights: algorithmic discrimination, automated decision-making and the right to explanation, and AI's intersection with labour rights.
- The ISO/IEC 42001 AI management system standard and what it requires organisations to establish.
- Connecting AI governance to ESG disclosure obligations and investor expectations.
- What boards and audit committees should be asking about AI use within their organisations.
The course draws on current case law, regulatory guidance, and real-world incident case studies to ground governance principles in practical reality. Participants leave with a clear framework for evaluating their organisation's AI governance maturity and a structured approach to the decisions they need to make. The course is suitable for board members, general counsel, compliance officers, risk managers, HR directors, technology leaders without deep governance backgrounds, and any senior manager responsible for overseeing or approving AI deployments.