
For many organisations ‘becoming digital’ has involved large-scale automation of repetitive processes, and Artificial Intelligence (AI) has played a growing role in increasing the breadth of coverage. However, it brings new risks and governance challenges that continue to act as a barrier to scaling, and has prompted many to question the legitimacy of the role of technology in organisations and society at large. How can we find the balance between innovation, control and sustainability?

In order to navigate these risks you must:

  • Understand your current ‘AI risk’ exposure,
  • Ensure AI outcomes are validated for both efficacy and ethics,
  • Put in place governance that supports the maintenance of these outcomes,
  • Ensure you are ready for crises when they occur.

Explore our AI Risk Management Framework

How can we help you?

 

We pride ourselves on hands-on experience at all levels of our data science capability, but we are not content simply to follow industry trends. Instead, we collaborate with academic and public policy forums to push the AI risk conversation forwards, while our suite of accelerators expedites the execution of our AI risk engagements and proves out internal research endeavours, ensuring our solutions remain both practical and current.

Strong governance practices around AI enable organisations to innovate
with confidence whilst reducing the risks of complex technology. This is
a crucial step for businesses looking to build, deploy and maintain
trustworthy AI, consistently and at scale.


AI Inventorisation

The absence of a universally agreed definition of AI in the enterprise
makes targeted application of risk management activities a significant
challenge. Services include:

  • Building a consistent definition of AI to be applied across the organisation.
  • Capturing AI model usage throughout the enterprise.

 

Risk and Control Governance Review

Engagement with risk and control functions should happen early in the AI
lifecycle, to help ensure potential issues are identified and addressed
up-front and a ‘control by design’ approach is taken. Services include:

  • Identification of the potential impact on existing risks and controls.
  • Development of a framework to cover factors such as accountability, reliability, fairness and ethics.

 

Operating Model Design & Implementation

Robust operating models are essential for the governance of AI models:
the impact of AI is pervasive and requires input from multiple
stakeholders (Technology, Operations, Model, etc.). Governance
requirements should be commensurate with the level of risk for each use
case, to encourage innovation and direct focus to higher-risk areas.

 

Governance Technology

Automation is required to scale AI services while ensuring appropriate processes and controls are in place. Services include:

  • Design and implementation of ongoing model management tooling as
    part of model validation and control execution to give comfort around
    models that are in the live environment.

Model risk is traditionally considered in financial use cases, where a
firm's model may produce an incorrect estimate, resulting in inadequate
performance or losses.

The breadth of AI system applications has extended significantly beyond
this traditional scope, surfacing a variety of new risks to be managed.

In response to these risks, model validation seeks to ensure that a
model behaves predictably, as expected, and solves the business
problem posed to it.

 

Independent Model Validation

Independent model validation leveraging an established framework and experience in model design, deployment and validation. A scorecard expresses validation results to a business audience, with more detailed findings documented and communicated in an iterative fashion. Services include:

  • Shallow model validation, where the primary burden of evidence is on the model owners.
  • In-depth model validation, where Deloitte-owned analysis of code, data, etc. provides independent evidence generation.
  • T-shaped model validation, where a mixture of in-depth and shallow
    review is performed, based on risk appetite and areas of concern.

 

Model Validation Training

Training of existing AI/validation teams in our model validation approach. Services include:

  • ‘Chauffeured’ model validation to support hands-on real-world learning of AI model validation techniques.
  • Training workshops for senior leaders and AI practitioners with
    initial assessment of the current state of maturity of internal AI
    models and validation process.
  • Privacy training to augment existing training packages with training solutions and ‘train the trainer’ packages.

 

Model Validation Governance

Setup support for a Model Validation Centre of Excellence, providing organisational structure, strategy, governance and management of AI environments. Services include:

  • Top-down design covering Oversight, Framework, Operating Model and Tools, Data and Technology.
  • Operating model components include Stakeholder Buy-In, Effective
    Prioritisation, Benefits Realisation, Controlled Development, Risk and
    Compliance, Delivery at Scale.

As data-driven techniques are increasingly applied across industries,
AI systems have an unprecedented scale of impact on our lives, often
with unforeseen or unintended societal and individual implications.

The objective is to ensure that the use of AI does not lead to biased
or unfair outcomes, is well-governed, and works as intended, in the
interest of consumers and markets.

 

AI Ethics Framework

Robust model ethics require a well understood and measurable definition of a firm’s ethical position. Services include:

  • Confirmation of the prioritised ethical principles and values of the organisation (e.g. equal opportunity in employment).
  • Benchmarking against Deloitte’s ethics framework (based on the existing regulatory environment, cultural environment, etc.).

 

Ethical Governance

Operating models must reflect a firm’s ethical policy if the framework is to be effective. Services include:

  • Organisation structure setup (e.g., AI ethics CoE), strategy, governance and management of AI environments.
  • Redesign of top-down structure with ethical focus of Oversight, Framework, Operating Model and Tools, Data and Technology.

 

Independent Ethical Validation

Existing models should be validated against applicable regulations and a firm’s ethical principles. Services include:

  • Identification of any issues or shortcomings in the AI products with respect to the ethical principles.
  • Reporting any ethical trade-offs (e.g. bank branch safety vs.
    privacy) and assessing their acceptability with key stakeholders,
    including customers, the board, leadership, and employees.

AI Ethics Awareness and Training

Establishing an ethical culture requires understanding of the key risks and issues presented by AI. Services include:

  • Roundtable workshops to discuss emerging topics and issues with Deloitte SMEs.
  • Bespoke training workshops for senior leaders, analysing ethical trade-offs and framework considerations.
  • AI practitioner engagement, focusing on quantitative/technical ramifications of ethical considerations.

Instantiate processes in the development and live environments to
manage the impact of potentially erroneous outcomes, including
reputation and crisis management.

 

Crisis Preparation

Crises are an inevitable part of business operation, and organisations
must be ready to respond to the diverse challenges that an AI crisis
presents. Services include:

  • Execution of fire-drill scenarios with clients, simulating a
    crisis event within AI, for example the emergence of an unethical or
    poorly performing model.
  • Engagement models include rapid model validation tests on simulated data, communications and decisioning.

 

Reputation Monitoring

Reputation monitoring is both supported by AI innovation and made more
important by it. Ensuring organisations continually analyse their media
footprint is critical to maintaining brand value. Services include:

  • Monitoring newsfeeds, Twitter, and other media for changes in a
    firm’s brand sentiment, searching for critical events, for example the
    emergence of unethical behaviour in a chatbot model.


Rapid Response

Should crises occur, expert resources are not always available to
analyse root causes and closely related issues at the speed that the
market demands. Services include:

  • Operating Model Redesign.
  • AI Model Validation.
  • Ethical and Responsible Validation.

The speed at which AI solutions and their associated business
processes can change makes auditability and traceability challenging.
This can result in errors that manifest at a previously unprecedented
scale and speed.

Through the EMEA Centre for Regulatory Strategy (ECRS), Deloitte
continues to make a leading investment in the area of regulatory change,
which remains a major challenge for the financial services
industry.

Thought Leadership

By drawing together regulatory specialists with practitioners from
Deloitte’s Risk Advisory, Strategy Consulting and other relevant areas
to understand and advise on regulatory change, we focus on the
strategic, business model and aggregate impacts of regulation.


Horizon Scanning

We maintain strong relationships with regulators, central banks,
standard setters, finance ministries and major industry trade bodies,
allowing the ECRS to provide insights from the forefront of regulation.


Global Insight

While the focus of the EMEA group is on local regulation, the ECRS works
closely with its Deloitte counterparts in centres in the US and Asia
Pacific, tapping into regulatory and remediation trends across the
globe.