Artificial intelligence (AI) technologies are rapidly transforming today’s business models, and emerging Generative AI and advanced applications are presenting new opportunities and possibilities for AI in finance and accounting. In the second part of our series about AI in finance and accounting, we explore ways to manage emerging AI risks and how to implement a trustworthy AI framework for success.
A blog post by Beth Kaplan, Katie Glynn, Court Watson, Oz Karan, Madeline Mitchell
Artificial intelligence (AI) and machine learning technologies are rapidly transforming today’s controllership business models, and this new generation of AI capabilities has the potential to play a critical role in the future of finance. In the first part of our series on this new frontier in AI, we explored the building blocks and various applications of AI and Generative AI in finance and accounting, as well as their possible implications across businesses. However, understanding what AI is—and is not—is only the beginning of successfully implementing it in the finance function. To implement a meaningful AI strategy, it is critical to know the emerging risks around AI and Generative AI, as well as a possible framework for implementing AI that sets up an AI strategy for success.
As finance and accounting professionals begin to incorporate AI capabilities and enhance their processes with AI, changes driven by this evolution and rapid adoption of AI and Generative AI technologies call for reimagining governance processes, mechanisms, and operational controls. This starts with understanding emerging risks and then incorporating a trustworthy framework that can drive AI policy and a strategy for success.
Managing AI risks
Adopting AI and Generative AI technologies introduces numerous risks. While many more will likely emerge, some of the more common current risks include:
Privacy: Models are built on data sharing and may require specific consent for data used (confidential information, personally identifiable information) and require bespoke data handling processes.
Regulatory permissibility: Emerging and inconsistent regulation may result in the allowable use of AI in one jurisdiction being impermissible in others, or it may require additional bias testing or reporting.
Amplification of biases: Models trained on data that contains inherent biases can amplify those biases in their outputs.
Safe usage: This risk concerns how and where large language models (LLMs) are used, such as to generate autonomous actions for machinery on a factory floor.
Responsible applications: This risk arises from the various use cases that will likely be contemplated, such as using LLMs for heightened automated cyberthreats.
Sovereignty: AI models trained on specific data sets may be subject to sovereignty or residency regulations and required to run only in data centers within the relevant jurisdiction.
Lack of certifications: As LLMs are increasingly used for insights, advice, or expert information, they may become subject to future certification requirements or regulation.
Managing these and other AI risks that are likely to emerge is possible through a framework and AI policy, but it is crucial to understand these risks so governance mechanisms can be built into an AI policy.
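To make the risk categories above concrete, here is a minimal sketch of how they might be captured in a simple risk register so that each risk can be assigned an owner and mitigating controls. All names and structures here are illustrative assumptions, not part of any specific framework or tool.

```python
from dataclasses import dataclass, field

# Hypothetical risk register sketch: one entry per risk category named
# in the text. Field names and structure are illustrative only.
RISK_CATEGORIES = [
    "privacy",
    "regulatory_permissibility",
    "amplification_of_biases",
    "safe_usage",
    "responsible_applications",
    "sovereignty",
    "lack_of_certifications",
]

@dataclass
class RiskEntry:
    category: str
    description: str
    owner: str = "unassigned"           # accountable role or team
    controls: list = field(default_factory=list)  # mitigating controls

def build_register() -> dict:
    """Seed a register with one empty entry per risk category."""
    return {c: RiskEntry(category=c, description="") for c in RISK_CATEGORIES}

# Example: attach a governance control to the privacy risk.
register = build_register()
register["privacy"].controls.append("consent tracking for PII in training data")
```

Even a lightweight register like this gives governance routines something to review on a recurring cadence, which is the point of building these risks into an AI policy rather than handling them ad hoc.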
A strong AI risk management framework puts trust at the core of AI operations. It contemplates the AI life cycle stages, regulatory jurisdictions, adjacent programs, control frameworks, and governance cadences needed to manage AI risk and establish trust in AI capabilities for internal and external stakeholders. The first step to bringing this framework to life is implementing an enterprise AI policy, which serves as the foundation for effective, responsible, and ethical AI practices. To illustrate what a trustworthy framework looks like in practice, consider the governance routines that would make up an enterprise AI policy.
Examples of the AI framework within the enterprise AI policy
AI tracking and inventory
Life cycle standards
Risk assessment and measurement
Regulatory and functional alignment
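The governance routines above can be sketched as fields on a single inventory record: tracking and inventory, life cycle standards, risk measurement, and regulatory alignment each map to attributes that a recurring review cadence can check. The record structure, field names, and the 90-day review window below are assumptions for illustration, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date

# Assumed life cycle stages; an actual policy would define its own.
LIFE_CYCLE_STAGES = ("ideation", "development", "validation", "deployment", "retirement")

@dataclass
class AIInventoryRecord:
    """Illustrative inventory entry supporting AI governance routines."""
    model_name: str
    use_case: str
    life_cycle_stage: str        # life cycle standards
    risk_rating: str             # risk assessment and measurement
    jurisdictions: tuple         # regulatory and functional alignment
    last_reviewed: date          # governance cadence

    def needs_review(self, today: date, max_age_days: int = 90) -> bool:
        """Flag records whose periodic governance review is overdue."""
        return (today - self.last_reviewed).days > max_age_days

# Example record for a hypothetical finance model.
record = AIInventoryRecord(
    model_name="invoice-classifier",
    use_case="accounts payable coding",
    life_cycle_stage="deployment",
    risk_rating="medium",
    jurisdictions=("US",),
    last_reviewed=date(2023, 1, 15),
)
```

A review cadence then reduces to iterating the inventory and surfacing any record where `needs_review` is true, so no deployed model silently drifts out of governance.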
Comprehensive AI risk management principles serve as the cornerstone of sound AI practices. Deloitte’s Trustworthy AI™ framework provides the foundation for a sustainable, safe, and responsible AI use environment and risk management program, built on pillars spanning fairness and impartiality, transparency and explainability, responsibility and accountability, robustness and reliability, privacy, and safety and security.
Getting started with a Generative AI or broader AI implementation strategy is a complex process, given the risks involved and the rapid pace of change in the marketplace. The following considerations and leading practices for incorporating an AI framework can assist with implementing Generative AI technology to help optimize finance and accounting processes.
AI implementation checklist
AI strategy
Guidance and training
Risk framework and governance
Licensing and permissions
Monitoring
To understand more about the new frontier in AI technologies, Generative AI applications, and possible opportunities for AI in finance and accounting, read Part I in our series: Exploring Generative AI in finance.
To hear our panel discussion about the new era of AI opportunities in finance and accounting, listen to our webcast on demand: A new frontier: Exploring artificial intelligence in finance.