Internal Audit hot topic: Digital risk - artificial intelligence

Digital risk - artificial intelligence is a new topic in this year’s Deloitte IT Internal Audit hot topics 2023 survey results.


Why is it important?

Organisations’ adoption of Artificial Intelligence (AI) is increasing, as they seek to capture the opportunities associated with deploying AI. High-priority focus areas include increased productivity, superior decision-making capabilities and new revenue streams.

As AI systems improve in functionality and fall in cost, the rate of AI adoption is expected to continue to increase. However, with opportunity comes risk: AI systems have the potential to facilitate decisions which lead to sub-optimal outcomes, or to make unethical decisions which are not aligned with an organisation’s values.

Organisations must also consider existing regulatory requirements for handling personal data, and the regulatory environment specific to AI is developing rapidly. Growing public awareness of AI, and the associated reputational risk, is driving organisations to develop a deeper understanding of where and how they are deploying AI.

Organisations should consider their compliance position alongside their desire to unlock more value from data across the organisation.

The European Commission’s widely anticipated Artificial Intelligence Act (AI Act) is expected to be passed into law in late 2023 or early 2024.

What’s new?

  • there has been a notable increase in the adoption of AI technology, as well as an increase in the complexity of algorithmic decision-making by AI systems
  • there is increasing awareness from the public of the ethical risks associated with AI, which is leading to organisations taking steps to understand their use of AI systems and putting in place processes to ensure AI is used responsibly
  • more “off-the-shelf” AI solutions are available in the market, such as CV screening tools
  • some solutions can act as a “black box” whereby the underlying algorithms are not readily accessible to the organisation using them
  • decision-making over the trade-offs associated with AI use (such as privacy versus personalisation) is being defined, and accountability for these decisions is being discussed at board level
  • to ensure transparency of customer-impacting decision-making, some organisations are turning to a human-centric AI approach; “human in the loop” AI is becoming commonplace in high-risk areas
  • the regulatory landscape for AI is emerging. Various guidance and frameworks exist (e.g. UK government, Alan Turing Institute, ICO, FCA, NIST, EU regulators), but the approach to regulation is still being worked through globally
  • the widely anticipated finalisation of the European Commission’s Artificial Intelligence Act (AI Act) is expected in 2023; once passed into law, it is positioned to function as a new global standard for AI regulation. It aims to introduce a common regulatory and legal framework for artificial intelligence, applying to all sectors and to all types of AI.


What should Internal Audit be doing?

The adoption rate of emerging technologies, such as AI, and their use cases are different for each company. Internal audit teams need to determine an appropriate response in the context of the organisation's risk appetite and rate of adoption.

See Image 1, in which we provide a blueprint of key building blocks to be considered by internal audit teams in their review of the use of AI in the organisation.

Image 1: Enterprise strategic and operational building blocks for AI adoption

Internal Audit should prepare its teams to engage with and assess emerging risks as AI adoption becomes more pervasive. We have summarised some of the key actions internal audit teams should consider:

  1. training and awareness: familiarise and train the team on the basics of AI, including how it works, the distinct types of AI, and the potential risks and benefits of using AI. This will help them understand the context in which AI is being used within the organisation. The team should stay up to date with developments in AI, the related risks and best practices to continually improve the internal audit team's knowledge and skills. External subject matter experts (SMEs) may be brought in to help assess the risks of complex AI systems
  2. alignment with AI strategy: develop a clear understanding of the organisation's AI strategy, including its objectives and the processes in place for developing and implementing AI applications
  3. governance, controls, and accountability: identify and understand the key risks and relevant governance in place within the organisation, including ownership and accountability. Work with the technology and risk functions to support the use of appropriate controls over the use of AI, and any underlying tools and technologies used to support AI
  4. AI inventory: understand the range of AI applications and systems being used within the organisation, including how they work and the processes and controls that are in place to ensure their proper functioning
  5. validation: periodically plan and independently validate that AI systems being used are in line with appropriate control standards, monitoring requirements, and ethical principles commensurate to the organisation’s objectives and risk appetite
  6. regional regulatory environments: with varying guidance from each regulatory body and government, and a rapidly changing legal and regulatory landscape for AI, a global organisation should consider the regional context for each AI deployment. It is important to determine which controls and governance should be standardised across the organisation and which should be discretionary.

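To illustrate action 4 above, an AI inventory can start as a simple structured register of systems with accountable owners, risk tiers and oversight status, which the audit team can then query. The sketch below is a hypothetical minimal example: the field names, risk-tier scale and screening rule are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    """One entry in a hypothetical AI inventory register (illustrative fields)."""
    name: str
    owner: str                  # accountable business owner
    purpose: str
    risk_tier: str              # assumed scale: "high", "medium", "low"
    human_in_loop: bool         # is a human reviewing customer-impacting decisions?
    regions: list = field(default_factory=list)  # where the system is deployed

def needs_review(inventory):
    """Flag high-risk systems that lack human-in-the-loop oversight."""
    return [s.name for s in inventory
            if s.risk_tier == "high" and not s.human_in_loop]

inventory = [
    AISystem("cv-screening", "HR", "shortlist job applicants",
             "high", human_in_loop=False, regions=["UK", "EU"]),
    AISystem("chat-routing", "Operations", "route support tickets",
             "low", human_in_loop=False, regions=["UK"]),
]

print(needs_review(inventory))  # -> ['cv-screening']
```

In practice such a register would live in a governance tool rather than code, but even a lightweight version makes the validation step (action 5) concrete: the audit team can test whether every high-risk entry has an owner, appropriate controls and the required oversight.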
Internal Audit should work closely with the business to understand the extent of AI use and assess whether risk management processes are fit for purpose.

In conclusion, the increasing adoption of AI brings new opportunities for organisations to improve productivity and decision-making and to create new revenue streams. However, it also brings new risks, including the potential for sub-optimal and unethical decisions that are not aligned with an organisation's values. As the public becomes more aware of the ethical risks associated with AI, organisations are taking steps to understand their use of AI systems and ensure they are used responsibly.

Internal Audit is well positioned to ensure organisations have the confidence to pursue the opportunities AI presents.

This article has expanded on excerpts from the Digital Risk – Artificial Intelligence hot topic as noted in the Deloitte UK’s annual report on Information Technology Hot topics for Internal Audit for 2023.

