In the ever-evolving technology landscape, 2023 has been marked by the dazzling rise of AI, crowned as the buzzword of the year. Groundbreaking advancements, spearheaded by tools such as ChatGPT and Google Bard, have not only captured the public imagination but also signalled a seismic shift in organisational efficiency and in the risks organisations must manage.
The Call for a Pause and for Governance
A pivotal moment came in March 2023, when more than 30,000 signatories, including prominent technology leaders, called for a temporary halt to the development of AI systems more powerful than GPT-4. This bold initiative, rooted in a sober assessment of AI's potential risks, urged policymakers and AI developers to collaborate on robust AI governance mechanisms. The proposal emphasised the need to oversee and track high-risk AI systems, explore watermarking technologies to distinguish real content from AI-generated content, implement stringent auditing regimes, and enforce AI-specific risk management.
Regulatory responses to AI have been brewing for some time. The European Union, typically a step ahead, has introduced the comprehensive AI Act, which imposes hefty penalties for non-compliance. Similarly, financial regulators in the US and the UK have equated AI model governance with other critical risk management processes, and the White House's Blueprint for an AI Bill of Rights hints at more stringent AI regulation to come.
The essence of AI governance lies in its ability to prevent harm and foster trust. Organisations should embrace a “do no harm” principle, acknowledging the potential impacts of AI across all societal segments throughout its lifecycle. Trustworthy AI, as defined by the EU AI Act and NIST’s AI Risk Management Framework (AI RMF), encompasses legal compliance, technical robustness, ethical soundness, and various other attributes like reliability, security, accountability, transparency, privacy, fairness, and the management of harmful bias.
Practical Steps for Implementing AI Governance
Identification: It’s crucial to know and document your AI systems. This includes understanding the context, development details, monitoring information, risks and impacts, and change management processes. Resources like the NIST AI RMF and the EU AI Act can provide valuable guidance.
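For a concrete flavour of what such documentation might look like, here is a minimal Python sketch of an inventory record. The field names and the example entry are illustrative assumptions, not a schema prescribed by the NIST AI RMF or the EU AI Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory; fields mirror the documentation
    areas above: context, development details, monitoring, risks and impacts,
    and change management. All field names are illustrative."""
    name: str
    owner: str                      # accountable business owner
    purpose: str                    # intended use and deployment context
    model_type: str                 # e.g. "LLM", "credit-scoring classifier"
    training_data_sources: list[str] = field(default_factory=list)
    known_risks: list[str] = field(default_factory=list)       # harms and impacts
    monitoring_plan: str = ""       # how the system is observed in production
    change_log: list[str] = field(default_factory=list)        # change-management trail
    last_reviewed: date | None = None

# Example entry for a hypothetical customer-service chatbot
chatbot = AISystemRecord(
    name="Support Chatbot",
    owner="Customer Operations",
    purpose="Answer routine customer queries",
    model_type="LLM (third-party API)",
    known_risks=["hallucinated answers", "leakage of personal data"],
)
```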
Risk Assessment: Assessing each AI system’s risks is vital for understanding potential harms and the controls required. This means considering data classification, functional importance, and the specific way AI is used. The EU AI Act and other frameworks offer guidance on categorising risk levels and identifying prohibited AI uses.
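The sketch below shows one illustrative way to translate those factors into a risk tier. The tiers loosely echo the EU AI Act’s categories, but the inputs, thresholds, and prohibited-use list are assumptions for demonstration rather than the Act’s legal tests.

```python
# Illustrative risk-tiering helper; not a legal classification under the EU AI Act.
PROHIBITED_USES = {"social scoring", "real-time biometric surveillance"}

def risk_tier(use_case: str, data_classification: str, business_critical: bool) -> str:
    """Assign a rough tier from the use case, data sensitivity, and importance."""
    if use_case in PROHIBITED_USES:
        return "prohibited"
    if data_classification in {"special category", "confidential"} or business_critical:
        return "high"
    if data_classification == "internal":
        return "limited"
    return "minimal"

print(risk_tier("customer support chatbot", "internal", business_critical=False))  # limited
```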
Implement and Assess Controls: Controls should be implemented across all AI lifecycle stages, tailored to identified risks. This includes policy drafting, ethical assessments, data governance, risk management, model reviews, and clear deployment strategies. The EU AI Act’s conformity assessment is a critical step for high-risk AI systems.
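As a rough illustration of tailoring controls to the lifecycle, a checklist keyed by stage might look like the following. The stage names and control titles are examples drawn from the paragraph above, not a mandated list.

```python
# Illustrative control checklist keyed by lifecycle stage.
LIFECYCLE_CONTROLS = {
    "design":  ["AI policy sign-off", "ethical impact assessment"],
    "data":    ["data governance review", "bias evaluation of training data"],
    "build":   ["model validation report", "documented risk assessment"],
    "deploy":  ["conformity assessment (high-risk systems)", "deployment approval"],
    "operate": ["monitoring plan in place", "incident response procedure"],
}

def outstanding_controls(completed: set[str]) -> dict[str, list[str]]:
    """Return the controls not yet evidenced, grouped by lifecycle stage."""
    return {
        stage: [c for c in controls if c not in completed]
        for stage, controls in LIFECYCLE_CONTROLS.items()
        if any(c not in completed for c in controls)
    }

print(outstanding_controls({"AI policy sign-off", "data governance review"}))
```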
Ongoing Monitoring: Continuous monitoring of AI systems in production is essential, encompassing control reassessment, regular reviews, incident tracking, and risk identification. Proactive incident reporting and effective communication strategies are key.
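One simple illustration of production monitoring is a drift check that records an incident when a tracked metric moves beyond tolerance. The metric, baseline, and 5% tolerance below are assumptions for demonstration, not values taken from any framework.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitoring")

def check_drift(metric_name: str, baseline: float, current: float, tolerance: float = 0.05) -> bool:
    """Flag an incident when a monitored metric drifts beyond tolerance (illustrative)."""
    drift = abs(current - baseline) / baseline
    if drift > tolerance:
        log.warning("Incident: %s drifted %.1f%% from baseline; review required",
                    metric_name, drift * 100)
        return True
    log.info("%s within tolerance (%.1f%% drift)", metric_name, drift * 100)
    return False

check_drift("answer accuracy", baseline=0.92, current=0.85)  # triggers an incident record
```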
Ultimately, risk managers must draw on existing frameworks to govern AI effectively while adapting to AI’s unique challenges. By identifying, assessing, controlling, and monitoring AI systems, organisations can harness AI’s benefits while mitigating its risks. Given the rapid pace of generative AI, staying agile and forward-looking on regulatory compliance is crucial for effective AI governance.