Generative AI - Risks and controls

This follows Deloitte’s previous blog on generative AI risk and ethical considerations.

Over the past few months there has been extensive discussion across industry about the capabilities of generative AI technology and its potential to revolutionise the way we work. Whilst it is clear that these technologies can deliver significant efficiency gains, there are also risks to consider.

In this blog we touch on some of the risks and controls for firms to consider as they embark on their journey to generative AI adoption.

Fraud

It is without question that foundation models such as the GPT family and BERT have the potential to accelerate the pace at which firms can research, produce, and document content. The capabilities of these and other models could, however, be exploited by those seeking to commit fraud, for example by falsifying invoices or transactions, or even by creating fake identities. Firms relying on inputs from third parties to support business decisions (e.g. in the insurance or lending sectors) should consider whether they have sufficient controls in place to identify whether the evidence provided is genuine.
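
As a hedged illustration of what such a control might look like, the sketch below applies simple consistency checks to flag suspect invoices for human review. The Invoice fields, the approved-vendor registry and the tolerance are assumptions made for the example, not a production design.

```python
# Illustrative sketch only: simple consistency checks that can route
# suspect invoices to human review. Field names and the vendor registry
# are assumptions for the example.
from dataclasses import dataclass


@dataclass
class Invoice:
    vendor: str
    line_items: list[tuple[str, float]]  # (description, amount)
    stated_total: float


APPROVED_VENDORS = {"Acme Supplies Ltd", "Northwind Traders"}


def flag_for_review(inv: Invoice) -> list[str]:
    """Return a list of reasons this invoice should be reviewed by a human."""
    flags = []
    if inv.vendor not in APPROVED_VENDORS:
        flags.append("vendor not in approved registry")
    if abs(sum(amount for _, amount in inv.line_items) - inv.stated_total) > 0.01:
        flags.append("line items do not sum to stated total")
    return flags


suspect = Invoice("Acme Supplies Ltd", [("consulting", 900.0)], stated_total=1200.0)
print(flag_for_review(suspect))  # ['line items do not sum to stated total']
```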

Reputational

Many generative AI systems do not yet incorporate ethics into their decision making, and their outputs depend on the data on which they have been trained. This can lead to an AI system behaving in unexpected ways, including producing outputs that are not aligned with an organisation’s own ethical principles. Firms should consider the extent to which they have transparency over the training data used and the level of testing conducted to identify potential issues such as bias or discrimination.
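
One way to probe for bias, sketched below under stated assumptions, is a counterfactual test: fill the same prompt template with different demographic terms and compare the tone of the outputs. The generate() function is a hypothetical wrapper around the model under test, and the crude lexicon-based scorer merely stands in for a proper evaluation method.

```python
# Minimal sketch of a counterfactual bias probe. `generate` is a
# hypothetical wrapper around the model under test; the lexicon scorer
# is a deliberately crude stand-in for a real evaluation method.
from statistics import mean

NEGATIVE_WORDS = {"aggressive", "unreliable", "risky", "untrustworthy"}


def generate(prompt: str) -> str:
    """Placeholder: wire this to the model being tested."""
    raise NotImplementedError


def negativity(text: str) -> float:
    """Fraction of words drawn from a (toy) negative lexicon."""
    words = text.lower().split()
    return sum(w in NEGATIVE_WORDS for w in words) / max(len(words), 1)


def counterfactual_gap(template: str, groups: list[str], runs: int = 20) -> float:
    """Vary only the group term in otherwise identical prompts; compare mean tone."""
    scores = {
        g: mean(negativity(generate(template.format(group=g))) for _ in range(runs))
        for g in groups
    }
    return max(scores.values()) - min(scores.values())


# Usage (once generate() is wired up): flag gaps above a tolerance.
# gap = counterfactual_gap("Describe a typical {group} loan applicant.",
#                          ["younger", "older"])
# assert gap < 0.02, f"Potential bias in outputs: gap={gap:.3f}"
```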

Financial

Algorithms and machine learning models have already been adopted by financial institutions to support trading, investment and credit decisions. As firms consider adopting generative AI technology, there is an increased risk that unidentified flaws or inadequate data could result in financial losses. To mitigate this risk, it is important to test AI systems extensively prior to deployment and to consider the need for human oversight in higher-risk areas. Firms should also consider establishing monitoring controls and alerts to identify whether the AI system is performing in a manner that was not originally intended.
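
As a minimal sketch of what such a monitoring control might look like, the example below tracks a rolling window of a per-decision output metric and raises an alert when it drifts from a baseline established during validation. The window size and z-score threshold are illustrative assumptions, not recommendations.

```python
# Minimal sketch of a runtime drift monitor: alert when the rolling mean
# of a model output metric moves too far from its validation baseline.
from collections import deque
from statistics import mean


class DriftMonitor:
    def __init__(self, baseline_mean: float, baseline_std: float,
                 window: int = 50, z_threshold: float = 3.0):
        self.baseline_mean = baseline_mean
        self.baseline_std = baseline_std
        self.z_threshold = z_threshold
        self.recent = deque(maxlen=window)

    def observe(self, value: float) -> bool:
        """Record one output metric; return True when an alert should fire."""
        self.recent.append(value)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet
        z = abs(mean(self.recent) - self.baseline_mean) / max(self.baseline_std, 1e-9)
        return z > self.z_threshold


# Simulated shift in production scores (e.g. predicted default probabilities).
monitor = DriftMonitor(baseline_mean=0.05, baseline_std=0.01)
stream = [0.05] * 60 + [0.12] * 60
for i, score in enumerate(stream):
    if monitor.observe(score):
        print(f"ALERT at observation {i}: output drift, escalate for human review")
        break
```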

Regulatory

Recently there has been a surge in activity by governments and regulatory bodies focussed on ensuring firms have adequate AI governance structures and controls in place, with some countries now banning the use of certain generative AI technologies outright. Failure to comply with these requirements could expose firms to significant regulatory fines (e.g. up to 6% of global annual turnover under the proposed EU AI Act). Firms need to consider whether their development or use of generative AI technology will fall within the scope of these new requirements and put appropriate plans in place to ensure they are ready to comply. At the core of many of these regulations is the need for appropriate governance and controls over AI development and usage.
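
In practice, such governance often begins with an inventory of AI systems and their risk classifications. The sketch below shows one minimal shape such a record might take; the field names and risk tiers are assumptions made for illustration, loosely echoing the risk-based approach of the proposed EU AI Act rather than any prescribed schema.

```python
# Illustrative sketch only: a minimal record for an internal AI system
# inventory, one common building block of AI governance. Field names and
# tiers are assumptions, not a regulatory schema.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class AISystemRecord:
    name: str
    business_owner: str
    intended_purpose: str
    risk_tier: RiskTier
    training_data_sources: list[str] = field(default_factory=list)
    human_oversight_required: bool = True


record = AISystemRecord(
    name="credit-memo-drafter",
    business_owner="Lending Operations",
    intended_purpose="Draft first-pass credit memos for human review",
    risk_tier=RiskTier.HIGH,
    training_data_sources=["internal credit files", "vendor LLM pre-training (opaque)"],
)
print(record.name, record.risk_tier.value)  # credit-memo-drafter high
```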

Privacy & Technology
 

Major questions persist around the presence of personal data within the datasets used to train generative AI systems. Opacity around what data was collected, for what purpose, and how it is used is likely to increase risks for firms creating generative AI systems or utilising their outputs. Firms will need to consider how to navigate these fluid challenges as they implement data privacy controls, including, for example, new policies covering data retention and data subject access requests relating to prompts and outputs processed by the generative AI system. The resilience of the underlying technology and cloud infrastructure will also need further consideration, particularly for firms with large numbers of employees and/or clients adopting generative AI.
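
By way of a hedged example, one common privacy control is to redact obvious personal identifiers from prompts before they are retained in logs. The sketch below uses simple regular expressions as a stand-in; a real deployment would rely on a dedicated PII-detection service, and the patterns shown are illustrative assumptions.

```python
# Minimal sketch of a pre-logging redaction control: strip obvious
# personal identifiers from prompts before retention. The patterns are
# illustrative; production systems need a proper PII-detection service.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK_PHONE": re.compile(r"(?<!\d)(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "NI_NUMBER": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
}


def redact(prompt: str) -> str:
    """Replace matched identifiers with typed placeholders before retention."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt


print(redact("Customer john.smith@example.com rang from 07700 900123 about..."))
# -> "Customer [EMAIL] rang from [UK_PHONE] about..."
```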

Generative AI can also create new cybersecurity attack vectors and adversarial uses that are difficult to predict. For instance, convincingly synthesised voices, images and video pose risks to biometric security systems such as facial recognition.

Legal

There has been much debate recently about the legal ownership of content produced by generative AI. Laws and legal interpretations may differ across jurisdictions, and firms should consider the extent to which they will own the intellectual property rights over any content produced. Early engagement with legal experts will reduce the risk of disputes over ownership in future years. Firms should also seek to identify any other current or future litigation risks relating to the use of this AI technology, including, for example, those arising under the proposed EU AI Liability Directive.

Next steps

Generative AI systems are clearly complex in nature, and navigating the potential risks can be challenging. We set out below some key takeaways for firms to consider as generative AI usage becomes more widespread:

  • Identify where generative AI could be used internally or by third parties such as clients, suppliers or other key stakeholders.
  • Determine whether this technology presents any new or incremental risks or regulatory obligations to your firm.
  • Define your firm’s risk appetite for generative AI adoption and develop related policies and procedures.
  • Assess the completeness and adequacy of the design of existing controls, including whether additional policies and procedures are required.
  • Remediate any control gaps identified to ensure the risks associated with generative AI are adequately mitigated.

Please read our recent report on the “implications of generative AI on business” for more insights on this topic.

Should you wish to discuss this topic further, or require support in considering the risks posed by AI and the necessary enhancements to your control framework, please don’t hesitate to get in touch with our AI Assurance team.