Organisations are increasingly aware of the new risks and ethical considerations that artificial intelligence (AI) and machine learning (ML) pose for their business, and of the need to update existing enterprise risk frameworks and governance processes accordingly. At Deloitte, we have been advising organisational leaders across industries on AI governance. In this blog post, we highlight a few common challenges we have encountered in these engagements.
1. Regulations and consumer expectations may vary across countries. Multi-national organisations often wish to define a single universal policy, but there is an inherent conflict between the desired clarity and the complexity of real-world jurisdictional differences in both definition and interpretation. Having compiled a database of laws, regulations, and guidance, we have found that adopting the lowest common denominator can make good sense in some areas for the sake of simplicity; an example would be a ban on the commercial use of facial recognition technology. In other areas, however, a blanket policy could materially reduce the scope of innovation activities in certain contexts and jurisdictions, to the detriment of the organisation's competitive opportunity. In addition to regulatory expectations, organisations must be cognisant of consumer expectations: studies show that what consumers find acceptable in one country may not be acceptable in another.
It is also important to recognise the rapid development of regulatory frameworks for AI. This requires a nimble approach, not only to policy maintenance, but also to the continuous management of the models in use within an organisation.
2. Careful consideration should be given to what governance should be mandatory and standardised versus what should be discretionary and specific to the use case. It is often difficult to identify the key areas that should be mandatory and standardised across the organisation, as opposed to those that need to be considered on a case-by-case basis. This was especially true in one of our projects with a US-based global bank. We inventoried their AI systems and identified which risks are likely to be consistent across use cases, such that a more uniform approach is possible. For example, we recommended a standardised process for unintended bias testing, information security, and robustness and reproducibility; a minimal sketch of what a standardised bias test might look like follows below. However, ethical and legal risks, including defining fairness for each use case, are highly variable and should be considered case by case. This approach links back to our first observation about simplicity versus complexity in governance.
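To make the idea of a standardised test concrete, here is a minimal sketch of one common bias metric, the demographic parity difference: the gap in positive-outcome rates between groups. The data, group labels, and the 0.2 threshold are purely illustrative assumptions on our part; a real organisational standard would cover several metrics agreed with legal and compliance teams.

```python
# A minimal sketch of a standardised bias test: demographic parity difference.
# Group labels, data, and the threshold below are illustrative, not a policy.
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-outcome rate between any two groups.

    predictions: iterable of 0/1 model decisions
    groups: iterable of group labels (same length), e.g. a protected attribute
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    preds  = [1, 0, 1, 0, 0, 1, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap, rates = demographic_parity_difference(preds, groups)
    print(f"positive rates by group: {rates}, gap: {gap:.2f}")
    # An organisation-wide standard might block release if the gap exceeds
    # an agreed threshold (0.2 here is purely illustrative).
    assert gap <= 0.2, "bias test failed: investigate before deployment"
```

The value of standardisation here lies less in the particular metric than in the fact that every team runs the same test, against the same threshold, before deployment.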
3. A clear and well-understood approach to managing third-party technologies is essential. When using AI-as-a-service (AIaaS), the most consequential risks, such as ethical considerations and fairness, typically remain with the organisation, whilst technology risks such as availability and scalability are transferred to the provider. The key differences between AIaaS and building your own AI are that 1) you may have no insight into the model itself, 2) you may have no insight into the datasets used to train the AI, and 3) you may have no insight into how your own organisation's data will be used within the AI and handled in general. Ensuring that your governance framework is capable of adequately addressing these limitations is an essential element of a comprehensive approach. For example, it is important to consider the privacy implications, especially if the AIaaS is being trained on customer data. Organisations should also ensure that contractual obligations require appropriate controls, such as fairness testing and explainability assessments; the sketch below illustrates one way to test a vendor model you cannot inspect.
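Because you may have no insight into the model itself, such testing often has to treat the vendor's service as a black box. One simple approach is counterfactual probing: send paired requests that differ only in a sensitive attribute and compare the outcomes. In this sketch, the score() function, its input fields, and the attribute values are hypothetical stand-ins for whatever the vendor's API actually exposes.

```python
# A minimal sketch of black-box counterfactual testing of a third-party
# scoring service. score() is a hypothetical stand-in for the vendor call.

def score(applicant: dict) -> float:
    # Replace with the real vendor call, e.g. an HTTP request such as
    #   requests.post(VENDOR_URL, json=applicant).json()["score"]
    return 0.6 + (0.1 if applicant["income"] > 50_000 else 0.0)

def counterfactual_gap(applicant: dict, attribute: str, values: list) -> float:
    """Largest change in the vendor's score when only `attribute` is varied."""
    scores = [score({**applicant, attribute: value}) for value in values]
    return max(scores) - min(scores)

if __name__ == "__main__":
    applicant = {"income": 62_000, "tenure_years": 4, "gender": "female"}
    gap = counterfactual_gap(applicant, "gender", ["female", "male"])
    print(f"score gap when varying gender only: {gap:.3f}")
```

A non-zero gap does not by itself prove unfairness, but systematic gaps across many probes are exactly the kind of evidence that contractual fairness-testing clauses should entitle you to raise with the vendor.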
4. Training and awareness across the three Lines of Defence (LoDs) is vital to effective governance. Training tends to focus on the first LoD and the front-line strategy teams, especially in defining standards and guardrails for the organisation. However, it is important that all three Lines of Defence understand their governance and control responsibilities and are supported with frameworks appropriate to their roles. We have found significant opportunities to help the second and third lines enhance their roles in governing their organisation's AI usage, complemented with appropriate training and tooling.
5. Tools and methods for monitoring and testing are critical. Many organisations, especially in financial services, have been experimenting with AI. When there were only a handful of AI systems in production, it was feasible to implement ad hoc, manual testing to ensure ongoing alignment with the AI's design and purpose. However, with many organisations now scaling their AI systems, particularly following the accelerated digitalisation brought on by the pandemic, risk management can no longer be ad hoc. AI governance must scale with the volume of AI systems planned for production.
Systems with frequent retraining present an especially problematic area where monitoring is crucial. For example, at a large UK bank, monitoring of a forecasting model relied on manual maintenance of the monitoring metrics. This was unsustainable, and a significant future risk in the face of rapid shifts in input data, such as those caused by a temporary policy change. We recommended putting in place automated monitoring of key metrics on input data and model robustness, notifying the relevant owners automatically of any changes outside expected boundaries; a simple sketch of this pattern follows.
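As an illustration, here is a minimal sketch of such an automated check, in which the feature name, the 3-sigma threshold, and the notification target are illustrative assumptions: each new batch of inputs is compared against reference statistics, and the model owner is alerted when the batch drifts beyond the agreed boundary.

```python
# A minimal sketch of automated input monitoring: compare each new batch of
# a feature against reference statistics and notify an owner on drift.
# The feature name, threshold, and notify() target are illustrative.
import statistics

def notify(owner: str, message: str) -> None:
    # Placeholder: in practice this would page or email the model owner.
    print(f"ALERT to {owner}: {message}")

def check_feature(name, reference, current, z_threshold=3.0):
    """Alert if the current batch mean drifts beyond z_threshold
    standard deviations of the reference distribution."""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    z = abs(statistics.mean(current) - mu) / sigma
    if z > z_threshold:
        notify(owner="model-owner@example.com",
               message=f"{name}: batch mean drifted {z:.1f} sigma from reference")
    return z

if __name__ == "__main__":
    reference = [100, 102, 98, 101, 99, 103, 97, 100]
    current = [130, 128, 131, 129]  # e.g. after a temporary policy change
    check_feature("monthly_spend", reference, current)  # triggers an alert
```

In practice this would run as part of the scoring or retraining pipeline and cover model outputs and robustness metrics as well as inputs, but the principle is the same: boundaries are defined once, and breaches are routed to a named owner rather than discovered by hand.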
Summary
AI governance is a relatively new and fast-moving topic. It requires engagement from experts in regulation, digital risk, governance, third-party technology, data ethics, and AI/ML. As organisations update their enterprise risk frameworks to address the new risks and ethical considerations of AI/ML, they may face unique challenges for which there is limited guidance on best practice. Our hope is that by sharing these key learnings, we will help organisations manage AI risks so that they can innovate with confidence.