Safety first – the critical role of risk management and governance in Gen-AI adoption

Embracing the Generative AI Revolution in Financial Services – Recommendations for Future Success (Part 2 of 5)

While the transformative potential of Generative AI (Gen-AI) should not be underestimated, the financial services sector faces unique challenges in ensuring its safe and responsible adoption. Accustomed to managing vast quantities of sensitive data, financial services firms are more than familiar with stringent regulation and the critical risks that come with it. Building a robust risk management framework, one that ensures data privacy and security and fosters trust and transparency, is critical to successfully implementing Gen-AI. This article – the second in a five-part series on the topic of embracing Gen-AI in financial services – explores key strategies for mitigating the risks associated with Gen-AI, highlighting the importance of good governance, cross-functional collaboration and proactive security measures. This piece is informed by insights from the deployment of Deloitte’s own proprietary Gen-AI solution, PairD1.

Building future-proof foundations for Gen-AI through best practice risk management


To ensure the adoption of Gen-AI is handled responsibly, financial institutions must embed risk management strategies directly into the development and deployment phases of their projects. Rather than addressing risks reactively, institutions need to adopt a proactive, forward-thinking approach that anticipates potential issues before they arise. This requires firms to ‘shift-left’ on risk, by:

  • Embedding risk management tools earlier in the process: begin by addressing risks at the design and deployment stages of Gen-AI systems. This means incorporating real-time bias detection, ethical guardrails, diverse training datasets, and ‘human-in-the-loop’ review processes right from the outset. For example, in a Gen-AI-powered marketing tool, build bias mitigation mechanisms into workflows to prevent biased content generation.
  • Employing universal guardrails for tech-agnostic risk mitigation: develop a comprehensive AI governance policy that applies across all Gen-AI tools, regardless of the vendor or underlying technology used. By addressing data privacy, security, access controls and usage limitations together, and by establishing universal guardrails that transcend individual technologies, institutions can ensure the ethical handling of AI-driven processes and data across different platforms and vendors.
  • Breaking down silos with joint governance: a successful risk management framework requires cross-functional collaboration between IT, security, legal, compliance and lines of business. Establishing a Gen-AI steering committee will help institutions create overarching AI governance policies, helping them to review use cases and monitor emerging risks more effectively.
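
To make the first point concrete, a pre-publication guardrail with human-in-the-loop routing could be sketched as follows. This is purely illustrative: the flagged-term list is a placeholder, and a production system would use a vetted bias/toxicity classifier rather than keyword matching.

```python
from dataclasses import dataclass

# Illustrative flag list only; a real deployment would call a vetted
# bias/compliance classifier, not match keywords.
FLAGGED_TERMS = {"guaranteed returns", "risk-free", "only for young professionals"}

@dataclass
class ReviewResult:
    approved: bool
    needs_human_review: bool
    reasons: list

def guardrail_check(generated_copy: str) -> ReviewResult:
    """Screen Gen-AI marketing copy before it is published."""
    hits = [term for term in FLAGGED_TERMS if term in generated_copy.lower()]
    if hits:
        # Route flagged content to a human reviewer instead of auto-publishing.
        return ReviewResult(approved=False, needs_human_review=True, reasons=hits)
    return ReviewResult(approved=True, needs_human_review=False, reasons=[])
```

The key design point is that the check sits in the generation workflow itself, so biased or non-compliant content is intercepted at the design stage rather than after publication.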

Prioritising data privacy and security for Gen-AI


Data privacy and security are both essential in the heavily regulated world of financial services, where sensitive customer and client data must be treated with extreme care. Hence, institutions need to strike a balance between leveraging the power of Gen-AI and ensuring strict compliance with regulatory standards, customer trust, and data protection. When seeking to rapidly scale their Gen-AI operations, therefore, firms should consider the following:

  • Managing external risks: modern hyperscale solutions allow institutions to flex their infrastructure dynamically as demand on their systems grows – a valuable capability for Gen-AI, given the significant processing loads it involves. However, as institutions explore the use of ‘hyperscalers’, including cloud-based AI services, they should ensure sensitive data remains within permissible geographic boundaries, implementing robust data loss prevention measures to protect their holdings.
  • Secure data processing: firms should also ensure that their Gen-AI processing adheres to industry best practices and is fully compliant with relevant regulations in the jurisdictions they are operating in. This extends both to encryption and secure data management.
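
One simple way to operationalise the geographic-boundary point is a residency gate that runs before any data leaves the firm's environment. The region names below are placeholders for illustration only.

```python
# Illustrative data-residency gate; region identifiers are placeholders.
PERMITTED_REGIONS = {"eu-west-1", "eu-central-1"}

def assert_data_residency(dataset_region: str, target_region: str) -> None:
    """Block any transfer that would move regulated data outside the
    permitted geographic boundary, before a hyperscaler call is made."""
    if target_region not in PERMITTED_REGIONS:
        raise ValueError(
            f"Transfer to {target_region} blocked: outside permitted boundary")
    if dataset_region not in PERMITTED_REGIONS:
        raise ValueError(
            f"Dataset in {dataset_region} is not in a permitted region")
```

Failing closed in this way means a misconfigured pipeline raises an error rather than silently exporting sensitive data.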

And, beyond managing data risk, we believe firms should consider these additional risk mitigation measures to ensure the success and compliance of their Gen-AI programmes:

  • Leveraging experience to accelerate adoption: institutions need to learn from their journeys as they go, accelerating their adoption of secure Gen-AI systems by leveraging past assessments, learnings and pre-approved solution patterns, particularly as they relate to regulation. This will help deliver a faster track to full compliance. Maintaining a repository of security controls and best practices that meet FS-specific regulatory requirements can also help streamline the approval and deployment of future Gen-AI solutions.
  • Building secure AI systems by design: financial institutions should adopt a multi-layered approach to the construction of all AI-based systems. This means restricting data access to authorised personnel, encrypting data – both at rest and in transit – to prevent unauthorised access, as well as deploying systems to detect suspicious activity in real-time. Regular audits will also ensure ongoing compliance and security.
  • Employee training and awareness: for institutions, it is essential to cultivate a culture of security around Gen-AI, with regular training on data privacy, security best practices and the specific risks posed by these tools to ensure employees are fully security-aware. They should also establish clear and enforceable policies to govern the secure handling and use of sensitive data internally.
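
The ‘secure by design’ layering described above – restricted access plus real-time detection of suspicious activity – could be sketched as an authorisation check that audits every access attempt. The role model and action names here are hypothetical.

```python
import logging
from functools import wraps

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("genai.audit")

# Hypothetical role-to-action mapping, for illustration only.
AUTHORISED_ROLES = {"fetch_customer_record": {"analyst", "compliance"}}

def requires_role(action):
    """Layered control: check authorisation and audit every access attempt,
    so suspicious activity is visible to monitoring in real time."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, role, *args, **kwargs):
            allowed = role in AUTHORISED_ROLES.get(action, set())
            audit_log.info("user=%s action=%s allowed=%s", user, action, allowed)
            if not allowed:
                raise PermissionError(f"{user} ({role}) may not {action}")
            return fn(user, role, *args, **kwargs)
        return wrapper
    return decorator

@requires_role("fetch_customer_record")
def fetch_customer_record(user, role, customer_id):
    # In production this would read from a store encrypted at rest.
    return {"customer_id": customer_id}
```

Denied attempts still land in the audit log, which is what allows downstream systems to flag suspicious access patterns rather than just block them silently.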

Trust and transparency – the essential pillars of responsible Gen-AI adoption


One of the other key challenges to successful Gen-AI adoption is how to build trust with users and stakeholders. The algorithmic processes driving Gen-AI models often operate as ‘black boxes’, lacking simple means of explanation. By building transparency and explainability into their Gen-AI tools, institutions can mitigate this challenge, helping AI become more trusted and more widely adopted within their organisations. Other useful measures include:

  • Illuminating the ‘black box’: it is essential for firms to ensure their AI outputs are as explainable and transparent as possible. For example, a Gen-AI-powered loan application system should provide not only an approval or denial but also indicate the key factors influencing that decision, whether an assessment of the customer’s credit score, income or other financial indicators.
  • Setting clear expectations: institutions should also communicate clearly and openly concerning the limitations, capabilities and risks associated with their Gen-AI systems. Users need to understand how the system operates and what data it uses, as well as where it can be best utilised. For example, when deploying a Gen-AI chatbot for a financial services customer service use case, make it abundantly clear to customers that they are interacting with an AI agent, and provide accessible information about the data it was trained on and its intended purpose for those wishing to know more.
  • Collaborative engagement: it is also important to involve users from the start of the process, engaging them both at the design and development stages. Conducting workshops and gathering feedback on prototype AI interfaces will help to ensure alignment with user expectations and needs. Iterative feedback loops within the AI systems themselves will also allow users to report issues, flag biases and suggest improvements as they go.
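
The loan-decision example in the first bullet could be sketched as a toy scorer that returns the factors behind each decision alongside the decision itself. The weights and thresholds below are illustrative, not a real credit policy.

```python
def loan_decision(applicant):
    """Toy explainable scorer: returns a decision plus the factors behind it.
    Weights and thresholds are illustrative, not a real credit policy."""
    factors = {
        "credit_score": (applicant["credit_score"] - 600) / 100,  # above ~600 helps
        "income": (applicant["income"] - 30000) / 30000,          # above ~30k helps
        "existing_debt": -applicant["existing_debt"] / 20000,     # debt counts against
    }
    score = sum(factors.values())
    # Rank factors by how strongly they influenced the outcome.
    ranked = sorted(factors.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return {
        "approved": score > 0.5,
        "score": round(score, 2),
        "key_factors": [name for name, _ in ranked[:2]],  # surfaced to the customer
    }
```

Because the contribution of each factor is computed explicitly, the same structure that produces the decision also produces the explanation – the output is never just a bare yes/no.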

Bottom line, transparency is not only a collaborative effort, but also the cornerstone of responsible AI adoption. It is essential to emphasise explainability and clear communication, especially when it comes to the different adoption profiles firms will encounter. For example, the rollout of PairD revealed distinctly different adoption profiles between user groups.

More junior staff displayed a greater willingness to experiment and explore PairD's capabilities, while senior users gravitated towards immediate, task-oriented applications that offered clear time and cost savings. This underscored the need for tailored onboarding strategies and targeted communications that addressed the specific needs and concerns of different users. And, by providing regular updates, soliciting user feedback, and fostering collaboration, we built trust from the ground up across all key user groups.

Ensuring fairness and performance through monitoring and governance


Gen-AI models are not static. Their performance can degrade over time, particularly as data patterns evolve, or new biases emerge. Continuous monitoring of these systems is therefore essential for maintaining the fairness and accuracy of inferences over time.

Performance tracking – through the use of dashboards to monitor key performance indicators (KPIs) such as model accuracy, fairness and bias detection – can help in this regard, as can automated ‘bias alerts’ set up to notify relevant teams when performance dips or biases are detected. Together, these measures ensure that issues are picked up and addressed promptly.
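
An automated ‘bias alert’ of this kind could be sketched as a disparity check in the style of the four-fifths rule: flag any group whose approval rate falls below a set fraction of the best-performing group's. The group labels and threshold here are illustrative.

```python
def bias_alert(outcomes, threshold=0.8):
    """Four-fifths-rule style check: flag any group whose approval rate
    falls below `threshold` x the best group's rate. Group labels and the
    0.8 threshold are illustrative."""
    # Approval rate per group (outcomes are 1 = approved, 0 = denied).
    rates = {g: sum(v) / len(v) for g, v in outcomes.items() if v}
    best = max(rates.values())
    alerts = [g for g, r in rates.items() if best and r / best < threshold]
    return rates, alerts
```

Wired into a monitoring dashboard, a non-empty `alerts` list would notify the relevant team for investigation, while `rates` feeds the fairness KPI itself.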

Supporting the fairness of systems doesn’t end here, however, as firms also need to develop robust governance frameworks to guide the responsible deployment of Gen-AI. These frameworks contribute to assurance, ensuring AI systems remain transparent, fair and aligned with all relevant organisational and regulatory standards. Key elements of an AI governance framework include:

  • Ethical guidelines: clear principles for ethical AI development, including fairness, transparency and accountability.
  • Model monitoring: continuous monitoring processes to dynamically assess model performance, fairness and compliance.
  • Feedback channels: dedicated channels for collecting feedback from users and stakeholders to identify and address potential issues.
  • Model retraining and fine-tuning: ensuring that models are modified and updated as needed to address performance drift, mitigate bias and incorporate new data.
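
The last two elements – continuous monitoring feeding into retraining – can be connected by a simple drift trigger. As a minimal sketch (the five-point tolerance is an assumed policy value, not a standard):

```python
def needs_retraining(baseline_accuracy, recent_accuracies, max_drop=0.05):
    """Flag performance drift: recommend retraining when rolling accuracy
    falls more than `max_drop` below the validated baseline. The 0.05
    tolerance is illustrative, not a standard threshold."""
    recent = sum(recent_accuracies) / len(recent_accuracies)
    return (baseline_accuracy - recent) > max_drop
```

In practice this check would run on a schedule against live evaluation data, turning the governance requirement for retraining from a periodic manual review into an automated, auditable trigger.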

In conclusion, we believe that financial institutions can safely embrace the transformative potential of Gen-AI by building strong foundations in risk management, data security, and governance. By embedding risk considerations into the AI development process, ensuring transparency and fairness and fostering a culture of security, institutions can mitigate the unique risks posed by Gen-AI while maintaining trust with their customers and stakeholders.

Yet making Gen-AI safer for financial institutions is not just a question of addressing today’s risks, but also of anticipating future challenges. As AI evolves, institutions must remain agile, continuously refining their risk management frameworks and governance practices to stay ahead of the curve. Striking the correct balance between innovation and safety is critical, and so, by adopting a proactive approach, financial services institutions can set themselves up to successfully navigate the challenges and opportunities presented by Gen-AI. Having laid the foundations for success, firms can then turn their attention to scaling their Gen-AI ambitions. Our next article looks at enterprise adoption strategies for Gen-AI, laying out a pathway to successful scaling.

______________________________________________________________________________

References:

1 Developed by Deloitte’s AI Institute, PairD is an internal Generative AI platform designed to help the firm’s people with day-to-day tasks, including drafting content, writing code and carrying out research safely and securely. The tool is also able to create project plans, give project management best practice advice and suggest task prioritisation.