While the transformative potential of Generative AI (Gen-AI) should not be underestimated, the financial services sector faces unique challenges in ensuring its safe and responsible adoption. Used to managing vast quantities of sensitive data, financial services firms are more than familiar with stringent regulations and critical risks. Building a robust risk management framework, one that ensures data privacy and security and fosters trust and transparency, is critical to successfully implementing Gen-AI. This article – the second in a five-part series on the topic of embracing Gen-AI in financial services – explores key strategies for mitigating the risks associated with Gen-AI, highlighting the importance of good governance, cross-functional collaboration and proactive security measures. This piece is informed by insights from the deployment of Deloitte’s own proprietary Gen-AI solution, PairD1.
To ensure the adoption of Gen-AI is handled responsibly, financial institutions must embed risk management strategies directly into the development and deployment phases of their projects. Rather than addressing risks reactively, institutions need to adopt a proactive, forward-thinking approach that anticipates potential issues before they arise. This requires firms to ‘shift-left’ on risk, by:
Data privacy and security are both essential in the heavily regulated world of financial services, where sensitive customer and client data must be treated with extreme care. Hence, institutions need to strike a balance between leveraging the power of Gen-AI and maintaining regulatory compliance, customer trust and data protection. When seeking to rapidly scale their Gen-AI operations, firms should therefore consider the following:
And, beyond managing data risk, we believe firms should consider these additional risk mitigation measures to ensure the success and compliance of their Gen-AI programmes:
One of the other key challenges to successful Gen-AI adoption is how to build trust with users and stakeholders. The algorithmic processes driving Gen-AI models often operate as ‘black boxes’, lacking simple means of explanation. By building transparency and explainability into their Gen-AI tools, institutions can mitigate this challenge, helping AI become more trusted and more widely adopted within their organisations. Other useful measures include:
Bottom line, transparency is not only a collaborative effort, but also the cornerstone of responsible AI adoption. It is essential to emphasise explainability and clear communication, especially when it comes to the different adoption profiles firms will encounter. For example, the rollout of PairD revealed distinctly different adoption profiles between user groups.
More junior staff displayed a greater willingness to experiment and explore PairD's capabilities, while senior users gravitated towards immediate, task-oriented applications that offered clear time and cost savings. This underscored the need for tailored onboarding strategies and targeted communications that addressed the specific needs and concerns of different users. And, by providing regular updates, soliciting user feedback, and fostering collaboration, we built trust from the ground up across all key user groups.
Gen-AI models are not static. Their performance can degrade over time, particularly as data patterns evolve, or new biases emerge. Continuous monitoring of these systems is therefore essential for maintaining the fairness and accuracy of inferences over time.
Performance tracking, through the use of dashboards to monitor key performance indicators (KPIs) such as model accuracy, fairness and bias detection, can help in this regard, as can automated ‘bias alerts’ set up to notify relevant teams when performance dips or biases are detected. Both also help ensure that issues are picked up and addressed promptly.
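To make the idea of an automated ‘bias alert’ concrete, the sketch below shows one minimal way such a check might work. It is an illustrative example only, not PairD’s or any firm’s actual implementation: it compares a model’s positive-prediction rates across two hypothetical user groups and raises a flag when the gap exceeds a chosen tolerance (a simple demographic-parity style check). The group data and the 10% tolerance are assumptions for illustration.

```python
# Illustrative sketch of an automated "bias alert" (assumed example,
# not a production implementation): compare positive-prediction rates
# across two groups and flag the model for review when the gap is too wide.

def positive_rate(predictions):
    """Share of predictions in the list that are positive (1)."""
    return sum(predictions) / len(predictions)

def bias_alert(preds_group_a, preds_group_b, tolerance=0.1):
    """Return the demographic-parity gap between two groups and whether
    it breaches the tolerance, triggering an alert to the relevant team."""
    gap = abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))
    return {"gap": round(gap, 3), "alert": gap > tolerance}

# Hypothetical monitoring snapshot: group A receives positive outcomes
# 70% of the time, group B only 40% of the time.
result = bias_alert([1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
                    [1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
```

In practice such a check would run continuously against live inference logs and feed a dashboard, with the threshold and fairness metric chosen to suit the use case and applicable regulation.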
Supporting the fairness of systems doesn’t end here, however, as firms also need to develop robust governance frameworks to guide the responsible deployment of Gen-AI. These frameworks contribute to assurance, ensuring AI systems remain transparent, fair and aligned with all relevant organisational and regulatory standards. Key elements of an AI governance framework include:
In conclusion, we believe that financial institutions can safely embrace the transformative potential of Gen-AI by building strong foundations in risk management, data security, and governance. By embedding risk considerations into the AI development process, ensuring transparency and fairness, and fostering a culture of security, institutions can mitigate the unique risks posed by Gen-AI while maintaining trust with their customers and stakeholders.
Yet making Gen-AI safer for financial institutions is not just a question of addressing today’s risks, but also of anticipating future challenges. As AI evolves, institutions must remain agile, continuously refining their risk management frameworks and governance practices to stay ahead of the curve. Striking the correct balance between innovation and safety is critical, and so, by adopting a proactive approach, financial services institutions can set themselves up to successfully navigate the challenges and opportunities presented by Gen-AI. Having laid the foundations for success, firms can then turn their attention to scaling their Gen-AI ambitions. Our next article looks at enterprise adoption strategies for Gen-AI, laying out a pathway to successful scaling.
______________________________________________________________________________
References:
1 Developed by Deloitte’s AI Institute, PairD is an internal Generative AI platform designed to help the firm’s people with day-to-day tasks, including drafting content, writing code and carrying out research safely and securely. The tool is also able to create project plans, give project management best practice advice and suggest task prioritisation.