
Generative AI and Fraud – What are the risks that firms face?

The development of Generative Artificial Intelligence (“Generative AI”) offers exciting opportunities for companies to build automated systems and controls that protect themselves and consumers from financial crime. In recent months, however, there has been extensive discussion of the risks Generative AI poses across industries, given its potential to be deployed by malicious actors to commit fraud in many different forms. At the same time, these developments present opportunities to detect, prevent and mitigate fraud.

In this blog, we touch on some of the fraud risks that AI presents and offer some considerations for firms to mitigate those risks.

Introduction to AI-enabled Identity Fraud


Much of the controversy around AI has focused on advances in Generative AI, which can be used to create new media, including audio, code, images, text and video, and to generate human-like responses in chat conversations. Leading Generative AI products include the GPT family (including ChatGPT) and similar large language models, and these programs are rapidly expanding their capacity to produce audio, visual media and complex documentation on a simple request. The ability to create audio, video, images and documents that appear genuine to a human recipient presents fraudsters with new opportunities to commit fraud, in the ways we discuss below.

Deepfakes & Voice Spoofing


Generative AI programs such as VALL-E, DALL-E and Midjourney can be used to clone vocal patterns, create audio files, and fabricate photo and video media. Regula, a global developer of forensic devices and identity verification solutions, published survey findings indicating that 37% of organisations globally have experienced some form of deepfake voice fraud attempt. The first reported case of AI-facilitated fraud came in 2019, when scammers used an AI-generated voice clip of an energy group CEO to direct the CEO of its UK subsidiary to release EUR 230,000 to a fictitious Hungarian supplier.

Voice recognition as a means of identity verification for accessing bank accounts may no longer prove as secure as it has previously been, as AI advances in its ability to mimic speech patterns and defeat vocal identification measures.
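
As a minimal sketch of why this matters (the embeddings, threshold and function names below are our illustrative assumptions, not any vendor's API), many voice verification flows reduce to comparing a caller's voiceprint to an enrolled one against a fixed similarity threshold; a sufficiently good AI clone that scores above that threshold is accepted just like the genuine customer.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two voiceprint embeddings (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(enrolled: np.ndarray, sample: np.ndarray,
                   threshold: float = 0.85) -> bool:
    # The static threshold is the weak point: a high-quality AI voice clone
    # that lands above it passes, which is why firms increasingly pair voice
    # biometrics with liveness prompts or out-of-band confirmation.
    return cosine_similarity(enrolled, sample) >= threshold
```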

Email Phishing


Fraudsters typically use email phishing to manipulate individuals into making transactions that reward the fraudster, or into offering up personal data that facilitates fraud. It involves a fraudster sending an email that purports to be from a genuine source, requesting that the recipient provide security data either directly or by clicking a link to a bogus website. Cambridge-based cybersecurity firm Darktrace has warned that, since the release of ChatGPT, it has seen an increase in the linguistic complexity, volume of text, punctuation and sentence length of suspicious emails targeting its customers, together with a decrease in manipulative emails that rely on tricking victims into clicking malicious links. From this switch in methods, Darktrace estimates that Generative AI, whether ChatGPT or other tools, is being used to construct increasingly sophisticated phishing scams, and that cybercriminals may be redirecting their focus towards social engineering scams that exploit user trust.
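
As a rough illustration of the kind of signals Darktrace describes (the feature set, baseline figures and two-standard-deviation cut-off below are our assumptions, not Darktrace's method), a defender can score an inbound email's linguistic features against a baseline built from known-legitimate mail and flag outliers for review:

```python
import re

def linguistic_features(body: str) -> dict:
    """Crude linguistic features of an email body."""
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    words = body.split()
    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "punctuation_density": sum(ch in ",;:()" for ch in body) / max(len(body), 1),
    }

def is_outlier(features: dict, baseline: dict, tolerance: float = 2.0) -> bool:
    """Flag an email whose features sit more than `tolerance` standard
    deviations from the baseline mean; baseline maps name -> (mean, std)."""
    return any(
        abs(features[name] - mean) > tolerance * std
        for name, (mean, std) in baseline.items()
    )

# Baseline estimated (here, invented) from a corpus of legitimate mail.
baseline = {"word_count": (120, 60),
            "avg_sentence_length": (15, 5),
            "punctuation_density": (0.02, 0.01)}
email = "Dear colleague, further to our discussion, please review the attached."
print(is_outlier(linguistic_features(email), baseline))
```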

The damage resulting from successful phishing scams (e.g. loss of customer data) can harm reputation and market position and attract regulatory scrutiny. Growing phishing sophistication also poses greater challenges for those developing defences and training staff to recognise these attacks.

Synthetic Identity Fraud


Synthetic identity fraud is a type of identity theft in which criminals combine real and fake personal information to create a new, fictitious identity, which can then be used for various identity-related schemes, such as obtaining credit or goods.

This is the fastest-growing form of financial crime in the United States and now costs US financial institutions billions of dollars annually. It is particularly pernicious because it is hard to detect with traditional fraud detection methods. Firms are increasingly turning to advanced fraud detection and prevention technologies, themselves incorporating AI, to identify patterns and anomalies that signify fraudulent activity.
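
As a minimal sketch of that approach (the features, figures and contamination rate below are invented for illustration, not a production model), an unsupervised detector such as scikit-learn's IsolationForest can surface applications whose attributes sit apart from the legitimate population:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)
# Hypothetical per-application features: credit-file age (months),
# applications sharing the same SSN/phone, and address tenure (months).
legitimate = rng.normal([60, 1, 48], [20, 1, 24], size=(500, 3))
synthetic = rng.normal([6, 8, 3], [3, 2, 2], size=(10, 3))
X = np.vstack([legitimate, synthetic])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = detector.predict(X)  # -1 marks an outlier worth manual review
print(f"{(flags == -1).sum()} of {len(X)} applications flagged")
```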

Document Forgery Fraud


Fraudulently produced documents present a real risk to auditors and third parties looking to verify information about a corporate entity. Generative AI programs can potentially create bank statements, accounting documentation, Board minutes and other documents required during an audit, due diligence exercise or other evidence-based process, with far less effort than traditional forgery methods and with greater apparent authenticity. This is exacerbated by new “fraud-as-a-service” offerings, in which experienced fraudsters with a suite of skills and technologies offer their services to others seeking to undertake a particular form of fraud.

A number of AI systems promote the ability to identify documents forged by traditional methods (freehand simulation, tracing or simple electronic manipulation) and to detect false accounting. It may be only a matter of time before Generative AI becomes sophisticated enough to generate identity, banking and corporate documents that are almost entirely indistinguishable from genuine ones to human reviewers.
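
One long-established screen for false accounting, which tooling in this space commonly builds on, is a Benford's-law test of leading-digit frequencies. The sketch below is a simplification; the chi-squared-style score is a cue for closer review rather than proof of fraud, and any cut-off is left to the reviewer's judgement.

```python
import math
from collections import Counter

def benford_score(amounts: list[float]) -> float:
    """Deviation of leading-digit frequencies from Benford's law;
    a large score flags a ledger for closer review, not proof of fraud."""
    digits = [next(ch for ch in str(abs(a)) if ch in "123456789")
              for a in amounts if a]
    if not digits:
        return 0.0
    counts = Counter(digits)
    n = len(digits)
    return sum(
        (counts[d] - n * math.log10(1 + 1 / int(d))) ** 2
        / (n * math.log10(1 + 1 / int(d)))
        for d in "123456789"
    )

# Round-number invoices cluster on one leading digit and score badly.
print(benford_score([900.0, 950.0, 980.0, 910.0] * 25))
```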

What can firms do to protect themselves?


The forms of fraud that have emerged from the adoption of AI are complex, and this continues to be a quickly evolving area. In light of the types of fraud explored above, we set out below some key considerations for firms as fraudsters' use of AI becomes more widespread:

  • Consider if your fraud risk management framework is robust enough to resist novel modes of fraud enabled by Generative AI or whether outside assistance is required.
  • Reflect on your fraud risk management systems and consider if there are areas of vulnerability in which the forms of AI-enabled fraud set out above could occur.
  • Identify areas of financial controls (e.g. payments) vulnerable to senior manager override and ensure that secure systems are in place to prevent irregular transactions (see the sketch after this list).
  • Understand whether there are particular identity verification requirements in your business model that could be evaded through AI-generated content.
  • Assess the level of training and awareness staff need to minimise vulnerability to increasingly sophisticated phishing scams.
  • Review identity verification procedures and consider whether they are sophisticated enough for the size and nature of your customer and supplier base.
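
To illustrate the payments point above, a minimal sketch of a dual-authorisation release rule follows (the threshold, field names and callback flag are illustrative assumptions, not a reference implementation); the aim is that no single instruction, however senior or convincing its apparent source, can release funds alone.

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000  # illustrative limit; set per your payments policy

@dataclass
class PaymentRequest:
    amount: float
    beneficiary: str
    requested_by: str
    approvals: set[str] = field(default_factory=set)
    callback_verified: bool = False  # payee confirmed via a known, out-of-band channel

def can_release(payment: PaymentRequest) -> bool:
    """Release funds only with two approvers independent of the requester,
    plus out-of-band verification for large payments. This blunts both
    senior-manager override and deepfaked 'CEO' instructions."""
    independent = payment.approvals - {payment.requested_by}
    if len(independent) < 2:
        return False
    if payment.amount >= APPROVAL_THRESHOLD and not payment.callback_verified:
        return False
    return True

# Example: a large instruction 'from the CEO' still needs two other
# approvers and a verified callback before release.
req = PaymentRequest(230_000, "supplier", "ceo", approvals={"cfo", "ceo"})
print(can_release(req))  # False: one independent approver, no callback
```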

Should you wish to discuss this topic further, or require support in considering the fraud risks posed by AI and assurance services relating to your Control Framework, please don’t hesitate to get in touch with our Fraud Governance, Risk and Control team here. Should you wish to assess your future with AI technologies further, please contact our AI Assurance team here.