AI TEACHES ITSELF. BUT CAN IT LEARN RISK INTELLIGENCE?
The Situation
A financial services company had a problem. It faced increased risk exposure from its artificial intelligence (AI) due to inconsistent monitoring, risk identification, governance, and documentation of multiple applications across its business units.
The problem had to be addressed: these issues potentially exposed the company to poor customer experiences; negative brand image; and legal, regulatory, and compliance violations.
How was this happening? Its AI models and applications were generating results quickly, sometimes within a few hours. And by design, AI models learn and make algorithmic adjustments to optimize their own performance.
The organization’s executives realized that they didn’t have a robust mechanism to manage the risks and ensure the AI algorithms operated within the guardrails the company intended. Further, information on vendor AI models was limited, constraining the ability to identify risks.
The company wanted help managing existing AI risks and developing a rigorous process for monitoring emerging ones. But to do that and perform risk assessments quickly, it had to expand its data science, statistical, and risk management capabilities.
Deloitte helped the company conduct in-depth analysis of 60+ AI models owned by different business functions to capture a clear risk profile of each one.
Deloitte’s AI governance and risk management specialists collaborated with the company’s data science team to review the AI models and develop a risk assessment. Each AI model was reviewed against Deloitte’s AI risk dimensions and aligned with Deloitte’s Trustworthy AI™ framework.
Recommended steps included ways to address and mitigate issues while enabling responsible AI model development and deployment. The governance plans also defined ongoing monitoring and the training needed to manage AI risks.
But more importantly, we helped our client make AI more human. We recommended that humans keep watch on the AI algorithms. We made sure that the learning behind each AI model was transparent and that guardrails and accountability existed if and when an AI model produced unexpected outcomes.
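To make the idea of guardrails and ongoing human oversight concrete, here is a minimal, hypothetical sketch of how a routine check might flag a model whose behavior drifts outside agreed limits and route it to a human reviewer. It is not the client's actual monitoring system; the model name, thresholds, and review step are illustrative assumptions.

```python
# Hypothetical sketch: routine guardrail check for a deployed AI model.
# Model names, thresholds, and the review step are illustrative assumptions,
# not the monitoring system described in this case study.

from dataclasses import dataclass, field
from statistics import mean


@dataclass
class GuardrailPolicy:
    max_score_drift: float    # allowed change in mean prediction score vs. baseline
    min_approval_rate: float  # floor on the share of spot-reviewed outcomes deemed acceptable


@dataclass
class ModelCheck:
    model_name: str
    baseline_mean_score: float
    recent_scores: list = field(default_factory=list)
    recent_approvals: list = field(default_factory=list)  # booleans from human spot reviews

    def violations(self, policy: GuardrailPolicy) -> list:
        """Return a list of guardrail breaches for this monitoring window."""
        issues = []
        drift = abs(mean(self.recent_scores) - self.baseline_mean_score)
        if drift > policy.max_score_drift:
            issues.append(f"score drift {drift:.3f} exceeds limit {policy.max_score_drift}")
        approval_rate = sum(self.recent_approvals) / len(self.recent_approvals)
        if approval_rate < policy.min_approval_rate:
            issues.append(f"approval rate {approval_rate:.0%} below floor {policy.min_approval_rate:.0%}")
        return issues


def route_to_human_review(check: ModelCheck, issues: list) -> None:
    # Placeholder for the accountability step: in practice this would notify
    # the model owner named in the governance plan or open a review ticket.
    print(f"[REVIEW NEEDED] {check.model_name}: " + "; ".join(issues))


if __name__ == "__main__":
    policy = GuardrailPolicy(max_score_drift=0.05, min_approval_rate=0.90)
    check = ModelCheck(
        model_name="credit-offer-ranker",        # hypothetical model
        baseline_mean_score=0.62,
        recent_scores=[0.70, 0.71, 0.69, 0.72],  # scores have drifted upward
        recent_approvals=[True, True, False, True],
    )
    issues = check.violations(policy)
    if issues:
        route_to_human_review(check, issues)
    else:
        print(f"{check.model_name}: within guardrails")
```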
AI DIDN'T JUST GET MORE INTELLIGENT. IT GOT MORE HUMAN.
The Impact
Thanks to Trustworthy AI, our team delivered:
200+ AI models identified: providing a better understanding of the definition, taxonomy, and application of AI across the company
60+ AI models assessed: identifying the risks associated with AI use and deployment
Trust in AI adoption: creating confidence via a defined operating model and rigorous structure to manage AI responsibly
Models for the future: setting up systems to survey AI models and algorithms as they learn over time, not merely on day zero or day one
Contacts:
Cory Liepold
Principal
Deloitte & Touche LLP
cliepold@deloitte.com
+1 612-397-4168
Satish Iyengar
Senior Manager
Deloitte & Touche LLP
siyengar@deloitte.com
+1 704-877-3162
RISK INTELLIGENCE THAT'S ANYTHING BUT ARTIFICIAL.