Generative AI and the financial sector

Navigating the opportunities and risks

The generative AI genie is well and truly out of the bottle.

With the release of products such as OpenAI’s ChatGPT and Google’s Bard, cutting-edge AI is now in the hands of everyday users more than ever before.

The promised benefits of generative AI are vast, ranging from faster generation of business reports and improved querying of large internal document repositories, to dynamic learning experiences and automated feedback on our work. A wide range of consumer and commercial applications is already being built on top of the latest generative AI models, and generative AI is fast becoming part of the fabric of our everyday lives.


The risks of generative AI in the financial sector

However, we also need to be alert to the risks and potential harms posed by generative AI, especially in the financial sector where the consequences of errors and misuse can be severe, both for consumers and businesses.

The financial sector relies on trust, and must remain reliable and secure if it is to continue providing valuable services to its clients. It is also highly regulated, which places additional responsibility and risk on companies to ensure compliance and avoid costly errors that could otherwise result in large fines and further sanctions.

These requirements sit uneasily with some of the known limitations of today’s cutting-edge generative AI models, including hallucinations (outputs that are not factually correct), bias, information leakage, privacy and security concerns, legal risks, and more.

For consumers in the financial sector, such as individuals who hold bank accounts, mortgages or investments, security and privacy are undoubtedly key concerns. There are, for example, numerous reported instances of confidential commercial information being input into generative AI models and subsequently leaked to other users. Although, to our knowledge, nothing similar has yet been reported for individual bank account details, it is certainly a plausible scenario. It is, of course, perhaps unlikely, and certainly discouraged, that users would enter key personal information such as bank details into widely available consumer generative AI products, but this may become more commonplace as models are incorporated into customer services and online banking applications.

Perhaps even more concerning are potential applications of generative AI that bypass the security measures protecting individual accounts. For example, it is reportedly possible to gain access to an individual’s bank account by cloning their voice using one of the many voice-generation tools that can now mimic a voice from only a short audio sample.

This is a problem not only for consumers, but also for businesses. Imagine, for example, that you work in a finance department and receive an email from your CFO or CEO asking you to transfer a large sum of money. You would probably challenge this before carrying out the instruction. However, if someone were to call you posing as the CFO or CEO, using a convincing AI-generated clone of their voice, you might have little doubt and execute the instruction. Scenarios like this will become increasingly common, and we need to ensure we can successfully counter these types of attack.


Navigating generative AI risks whilst exploiting new opportunities

At Deloitte, we help clients across the financial sector to manage their risks, including those posed by the adoption of new technologies such as generative AI.

For example, one of the main safeguards currently used to protect customers in the financial sector is the know-your-customer (KYC) check. Scans of identity documents such as passports and driving licences are now widely used to verify customers in a secure and reliable manner. However, recent advances in AI mean that image generation models are increasingly able to create convincing fakes of the documents required to pass KYC checks. Deloitte is actively researching ways to distinguish between AI-generated fakes and genuine ID scans. In this instance, we may not need to replace existing security measures entirely: instead, we can augment them with additional checks, creating complementary layers of security similar to the multi-factor authentication (MFA) now routinely used when logging into banking apps and email accounts. Similar approaches can also help protect against the voice cloning attacks described above.
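
To make the layering idea concrete, here is a minimal sketch of how independent KYC verification layers might be combined, so that a convincing fake has to defeat every check rather than just the document scan. The check names, evidence fields and thresholds are purely illustrative assumptions, not a description of any production system:

```python
from typing import Callable, Dict, List, Tuple

# Each layer inspects the submitted evidence and returns pass/fail.
Check = Callable[[Dict], bool]

def document_forgery_check(evidence: Dict) -> bool:
    # Stand-in for a model that scores ID scans for signs of
    # AI generation or tampering (lower score = more authentic).
    return evidence.get("forgery_score", 1.0) < 0.5

def liveness_check(evidence: Dict) -> bool:
    # Stand-in for e.g. a short selfie-video liveness challenge.
    return bool(evidence.get("liveness_passed"))

def registry_check(evidence: Dict) -> bool:
    # Stand-in for a lookup against an authoritative records source.
    return bool(evidence.get("registry_match"))

def verify_customer(
    evidence: Dict, checks: List[Check]
) -> Tuple[bool, List[Tuple[str, bool]]]:
    """Approve only if every independent layer passes: a convincing
    fake must now defeat all checks, not just the document scan."""
    results = [(check.__name__, check(evidence)) for check in checks]
    return all(passed for _, passed in results), results

if __name__ == "__main__":
    evidence = {"forgery_score": 0.12, "liveness_passed": True, "registry_match": True}
    approved, detail = verify_customer(
        evidence, [document_forgery_check, liveness_check, registry_check]
    )
    print(approved, detail)
```

The design mirrors MFA: each layer may be fallible on its own, but an attacker must now simultaneously forge a document, pass a liveness challenge and match an authoritative record.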

And whilst generative AI undoubtedly poses risks within the financial sector, it also presents new opportunities. Within KYC, for example, we have been exploring the use of generative AI to speed up the checking and validation of certain documents, particularly checks that are still performed manually and therefore incur higher costs. Previous algorithms were often too rigid, generating large numbers of false negatives due to simple typos or variations in the spelling of names. Using generative AI to infer whether entries such as “Rose” and “Rosemary”, or “Stnaford Street” and “Stanford St”, in fact refer to the same entity can help solve some of these challenges. We can also use generative AI to help complete data that is missing from forms or entered in the wrong field: for example, by evaluating all parts of an address as a single entity rather than line by line, which can reduce the number of mismatches.
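
As a rough illustration of how such a matching step might look in code, the sketch below simply hands the model two strings, or two complete address blocks, and asks for a single match/no-match judgement. The `llm_complete` wrapper and the prompt wording are our own assumptions; in practice it would wrap whichever hosted or locally deployed model an organisation uses:

```python
def llm_complete(prompt: str) -> str:
    """Hypothetical wrapper around a generative model call.
    Replace with a real client for a hosted or locally deployed model."""
    raise NotImplementedError("plug in your model client here")

def same_entity(a: str, b: str) -> bool:
    """Ask the model whether two strings plausibly refer to the same
    real-world entity, tolerating the typos and abbreviations that
    would trip up rigid rule-based matchers."""
    prompt = (
        "Do the following two strings refer to the same real-world entity? "
        "Account for typos, abbreviations and formatting differences. "
        f"Answer YES or NO only.\nA: {a}\nB: {b}"
    )
    return llm_complete(prompt).strip().upper().startswith("YES")

# Example usage (illustrative, from the scenarios above):
#   same_entity("Stnaford Street", "Stanford St")  -> expected True
#   same_entity("14 Stanford St\nLondon SE1",
#               "Flat 2, 14 Stanford Street, London")  -> whole-address match
```

A judgement such as “Rose” versus “Rosemary” would clearly need supporting context (dates of birth, addresses and so on) rather than the names alone, so in practice the prompt would carry the full record rather than a single field.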


Getting the balance right

As with all new technologies, generative AI poses both risks and opportunities to organisations across the financial sector.

With the current hype at almost fever pitch, it may be tempting to deploy generative AI solutions at speed, but whilst this may deliver some value, it also carries significant risks.

Organisations across the financial sector should certainly be experimenting with generative AI, but they should also proceed with caution and seek to balance opportunity with risk: for example, by identifying initial use cases in low-risk, high-reward areas, and by experimenting not only with proprietary models but also with open-source models that can be deployed locally to reduce privacy and data leakage risks. This is also a hugely uncertain and fast-moving area of development, and organisations should remain flexible and adaptable in their approach wherever possible, to avoid implementations that quickly become outdated.

Deloitte has worked at the intersection of AI and the financial sector for over a decade, and has specialist teams that develop, test and deploy AI solutions into highly regulated environments.

If you would like to find out more about the range of generative AI services that Deloitte can offer to the financial sector, please get in touch.