The increasing role of AI in financial services:

Considering AI and ML in your Audit and Assurance Policy

Financial services firms are increasingly focusing on how they can use artificial intelligence (AI) to drive strategy and improve business models. As AI becomes more central to the business, links between AI and directors’ remuneration and key performance indicators are increasingly prevalent in disclosures to investors and in Annual Reports, but may not be subject to assurance or considered as part of the statutory audit.

Ethical use of AI

Banks, insurers and payment providers are natural users of AI and machine learning (ML), as they are able to amass high volumes of data which give valuable insights into risk and customer behaviour. As AI and ML start to be used to reduce costs, improve pricing and accelerate growth, many firms are developing frameworks to make sure this data is used in ethical and appropriate ways. These might cover issues around fairness, explainability and robustness, as well as how appropriate oversight of AI is ensured. Firms should consider whether they have defined a set of ethical AI principles and the extent to which they wish to state their commitment to these principles in their public disclosures, and may wish to consider how the OECD AI Principles or the EU’s Ethics Guidelines for Trustworthy AI can help them shape these.

For firms subject to the Senior Managers and Certification Regime, the individuals it covers need to be as confident in the use of AI and ML by the business as they are in traditional models and human decisions. More broadly, Deloitte research has found that Boards are “over optimistic” about their oversight of technology, and emphasises that “Boards need to be vigilant and self-critical in fast-changing areas.”

This is particularly important where the use of AI and ML can affect customer outcomes: without the correct controls, processes and oversight, it can cause detriment by exacerbating existing inappropriate biases in data and leading to unfair decision making or pricing. The UK Financial Conduct Authority remains focussed on how the use of AI can benefit consumers, whilst remaining aware of the risks and the need for consumer confidence.

As well as ensuring that metrics presented around pricing, growth and costs are robust, Boards may want to seek assurance that the frameworks for ethical use of data developed by the business, and in some cases shared publicly, are being adhered to.

Regulatory interest

Whilst use of AI and ML is most extensively discussed in reporting by motor insurers, its use is increasing across the sector, including within banking. In its recent consultation paper, the Bank of England reiterated its view that use of AI and ML introduces unique risks, as well as amplifying existing risks associated with the use of models. The paper also introduces an expanded definition of a model, which could affect firms’ existing use of automated decision making.

In the face of new and increased risks, claims made about the use of AI and its centrality to strategy and the business model should be addressed within the Audit and Assurance Policy, and Boards should consider how assurance can be obtained over the operation of artificial intelligence and machine learning where it is material to the business.

Looking forward

As financial services firms continue to face cost pressures and seek to innovate, the use of AI and ML will grow. Firms need to balance technological progress against the need to maintain the trust and confidence of consumers. Assurance can help firms report on their use of AI and ML in a responsible and robust way, giving confidence to Boards and consumers that the benefits are accurately captured and that deployment is delivering equal or better outcomes for consumers.

For more information, or to discuss how Assurance can give confidence in use of AI and ML, contact our expert teams.