
Do bots understand risk?

A financial institution addresses what AI does when no one is looking

AI TEACHES ITSELF. BUT CAN IT LEARN RISK INTELLIGENCE?

The Situation

A financial services company had a problem: rising risk exposure from its artificial intelligence (AI), driven by inconsistent monitoring, risk identification, governance, and documentation across multiple applications in its business units.

It had to be addressed. The issues potentially exposed the company to poor customer experiences; a negative brand image; and legal, regulatory, and compliance violations.

How was this happening? The company's AI models and applications were generating results quickly, sometimes within a few hours. And by their nature, AI models learn and make algorithmic adjustments to optimise their own performance.

The organisation's executives realised that they didn't have a robust mechanism to manage the risks and ensure the AI algorithms operated within the guardrails the company intended. Further, information on vendor AI models was limited, which constrained the ability to identify risks.

The company wanted help managing existing AI risks and developing a rigorous process for keeping watch on emerging ones. But to do that, and to perform risk assessments quickly, it had to expand its data science, statistical, and risk management capabilities.

The Solve

Deloitte helped the company conduct an in-depth analysis of 60+ AI models owned by different business functions, capturing a clear risk profile for each model.

Deloitte's AI governance and risk management specialists collaborated with the company's data science team to review the AI models and develop a risk assessment. Each AI model was reviewed against Deloitte's AI risk dimensions and aligned with Deloitte's Trustworthy AI™ framework.
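In practice, a review like this produces a structured risk profile for every model. The Python sketch below illustrates the idea with hypothetical dimension names and a simple 1-to-5 reviewer scale; the actual Trustworthy AI™ dimensions and scoring approach are not detailed in this case.

    # A minimal sketch, assuming hypothetical risk dimensions and a 1-5
    # reviewer scale; this is not Deloitte's actual scoring methodology.
    from dataclasses import dataclass, field

    # Illustrative dimensions only, not the framework's real taxonomy.
    RISK_DIMENSIONS = ("fairness", "transparency", "reliability", "security", "privacy")

    @dataclass
    class ModelRiskProfile:
        model_name: str
        business_unit: str
        # Each dimension scored 1 (low risk) to 5 (high risk) by reviewers.
        scores: dict[str, int] = field(default_factory=dict)

        def overall_tier(self) -> str:
            """Collapse dimension scores into a coarse risk tier."""
            worst = max(self.scores.get(d, 1) for d in RISK_DIMENSIONS)
            return {1: "low", 2: "low", 3: "medium", 4: "high", 5: "high"}[worst]

    profile = ModelRiskProfile(
        model_name="credit-line-recommender",  # hypothetical model
        business_unit="retail-banking",
        scores={"fairness": 4, "transparency": 3, "reliability": 2,
                "security": 2, "privacy": 3},
    )
    print(profile.overall_tier())  # -> "high"

Scoring every model the same way is what lets dozens of models across business units be compared on one risk scale.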

Recommended steps included ways to address and mitigate issues while enabling responsible AI model development and deployment. The governance plans also defined ongoing monitoring and the training needed to manage AI risks.

But more importantly, we helped our client make AI more human. We recommended that humans keep watch on the AI algorithms. We made sure that the learning behind each AI model was transparent and that guardrails and accountability existed if and when an AI model produced unexpected outcomes.
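What might a guardrail with a human in the loop look like? The sketch below is a minimal illustration, assuming a hypothetical model score and expected band; anything outside the band is escalated to a person rather than acted on automatically.

    # A minimal sketch of an output guardrail with human escalation,
    # assuming a hypothetical score and expected band; real guardrails
    # would be tailored to each model's domain and regulatory constraints.
    from dataclasses import dataclass

    @dataclass
    class GuardrailResult:
        approved: bool
        reason: str

    def check_output(score: float, lower: float, upper: float) -> GuardrailResult:
        """Pass outputs inside the expected band; escalate everything else."""
        if lower <= score <= upper:
            return GuardrailResult(approved=True, reason="within expected range")
        return GuardrailResult(
            approved=False,
            reason=f"score {score:.2f} outside [{lower}, {upper}]; routed to human review",
        )

    # Example: a pricing model expected to quote rates between 0.02 and 0.15.
    result = check_output(score=0.31, lower=0.02, upper=0.15)
    if not result.approved:
        print("Escalating:", result.reason)  # a person, not the model, decides

The point of the design is accountability: the model never gets the last word on its own anomalies.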

AI DIDN'T JUST GET MORE INTELLIGENT. IT GOT MORE HUMAN.

The Impact

Thanks to Trustworthy AI, our team:

  • Built an understanding of how AI applications can generate outcomes devoid of business context when left unchecked
  • Created a consistent classification of, and approach to, AI algorithms and techniques (one way to encode such a taxonomy is sketched after this list)
  • Met or exceeded industry benchmarks for AI governance capabilities set by peer organisations
  • Improved AI safeguards, transparency, and confidence across businesses, with policies that determine who is responsible for the output of AI system decisions
  • Put in place an agile, targeted operating model to manage AI adoption responsibly, with appropriate governance and controls
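As a concrete illustration of the classification point above, the sketch below shows one way a shared taxonomy might be encoded, using hypothetical technique families and model names; the company's actual classification scheme is not described in this case.

    # A minimal sketch of a shared model taxonomy, using hypothetical
    # technique families and model names; the company's actual
    # classification scheme is not described in this case.
    from enum import Enum

    class TechniqueFamily(Enum):
        SUPERVISED = "supervised learning"
        UNSUPERVISED = "unsupervised learning"
        NLP = "natural language processing"
        RULES = "rules-based / expert system"

    # One shared registry so every business unit labels models the same way.
    MODEL_REGISTRY = {
        "credit-line-recommender": TechniqueFamily.SUPERVISED,
        "transaction-clusterer": TechniqueFamily.UNSUPERVISED,
        "complaint-triage-bot": TechniqueFamily.NLP,
    }

    for name, family in MODEL_REGISTRY.items():
        print(f"{name}: {family.value}")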

200+ AI models identified: providing a better understanding of the definition, taxonomy, and application of AI across the company

60+ AI models assessed: identifying the risks associated with AI use and deployment

Trust in AI adoption: creating confidence via a defined operating model and rigorous structure to manage AI responsibly

Models for the future: setting up systems to survey AI models and algorithms as they learn over time, not merely on day zero or day one (a minimal monitoring sketch follows)
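One common way to survey a model as it learns is to compare its live score distribution against a day-zero baseline. The sketch below uses the population stability index (PSI), a standard drift signal; the metric choice, threshold, and data here are illustrative assumptions, not the monitoring design actually deployed.

    # A minimal drift-monitoring sketch using the population stability
    # index (PSI); the metric, threshold, and data are illustrative
    # assumptions, not the monitoring design actually deployed.
    import numpy as np

    def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
        """Population stability index between a baseline and a live sample."""
        edges = np.histogram_bin_edges(expected, bins=bins)
        e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
        a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
        # Floor empty buckets at a tiny probability to avoid log(0).
        e_pct = np.clip(e_pct, 1e-6, None)
        a_pct = np.clip(a_pct, 1e-6, None)
        return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)  # day-zero score distribution
    today = rng.normal(0.4, 1.2, 10_000)     # distribution after the model adapts
    drift = psi(baseline, today)
    # A common rule of thumb: PSI above 0.25 warrants a re-review.
    print(f"PSI = {drift:.3f}", "-> flag for review" if drift > 0.25 else "-> ok")

When the index crosses the threshold, the model is flagged for human re-review rather than being left to adjust itself unobserved.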

RISK INTELLIGENCE THAT'S ANYTHING BUT ARTIFICIAL.