
Artificial intelligence in the boardroom

Five considerations for managing risk in the AI era

AI is redefining the way organizations govern and manage risk. Boardrooms and their directors are at the forefront of this AI evolution. Discover five actions board members can take to drive responsible oversight and strategic advantage as they prepare for the future of AI.

AI and the new risk management landscape

Artificial intelligence (AI) types and applications are proliferating across industries, from machine learning and Generative AI (GenAI) to agentic systems and physical AI. As use cases have grown, so have the risks AI creates. For boards, the AI era has exposed new challenges in governance and risk management. Most boards (72%) report having one or more committees responsible for risk oversight, and more than 80% have one or more risk management experts, according to a Deloitte survey.¹ AI deserves the same attention and investment that boards already devote to managing other kinds of business risk.



AI security risks can compromise sensitive data, biased outputs can raise compliance problems, and irresponsible deployment of AI systems can have crosscutting ramifications for the enterprise, consumers, and society at large. Given the impact, boards can serve a vital role in helping the organization address AI risks.

Five ways board members can prepare for an AI future

Being an advocate and guide for AI risk management means, in part, asking the right questions, and that requires AI literacy. Board members can build this literacy through traditional methods, such as bringing in speakers and subject matter specialists and pursuing independent learning through classes, lectures, and reading.

A large language model (LLM) could also help in this regard, as an LLM-enabled application can summarize and help explain, in natural language, the complexities of AI’s functions, limits, and capabilities.
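
For illustration only, the sketch below shows one way such an LLM-enabled explainer might be wired up for a board support team. It assumes the OpenAI Python SDK and an API key available in the environment; the model name, prompts, and function name are placeholders rather than a recommendation of any particular provider or design.

    # Illustrative sketch: a small "AI explainer" that turns a technical AI
    # risk topic into a plain-language briefing for board members.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
    # the model name and prompts are placeholders, not recommendations.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def explain_for_the_board(topic: str) -> str:
        """Ask an LLM for a concise, non-technical explanation of an AI concept."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {
                    "role": "system",
                    "content": (
                        "You brief corporate board members. Explain AI concepts in "
                        "plain language, note the key risks, and keep it under 200 words."
                    ),
                },
                {"role": "user", "content": f"Explain: {topic}"},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(explain_for_the_board("model hallucination and its governance implications"))

In practice, a tool like this would sit alongside, not replace, briefings from subject matter specialists; its value is in giving directors a fast, plain-language starting point for asking better questions.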

AI literacy in the boardroom is important, but fluency in the C-suite matters even more. Board members are in a position to urge executives to build their AI fluency. As the power and allure of AI grow and use cases multiply, business leaders need knowledge of and familiarity with the technology to responsibly shape AI programs.

Decisions around AI safety, security, accountability, and all the other factors that shape AI risk management flow from a baseline understanding of what AI is and what it can do. As stewards of the enterprise, board members can reinforce the importance of AI fluency.

Board members often hail from fields steeped in finance and business management. This background allows them to be informed leaders on fiscal and competitiveness issues. Given that AI is a technical and complex field raising its own collection of hurdles and risks, boards may look to expand their in-house subject matter knowledge by recruiting an AI professional to the board.

Such a person should bring experience as an operational AI leader with a track record of implementing successful AI projects in similar organizations. A professional with operational AI experience can provide the insight boards need for oversight and governance.

Governance isn’t an ad hoc exercise; boards need to implement controls that guide the responsible use of AI. Boards may stand up subcommittees to oversee vital enterprise activities, such as audits, succession planning, and risk management related to finance and operations. AI governance can be supported with a similar tactic.

The lexicon, capabilities, risks, and trajectory of AI are all in flux as the technology matures. A subcommittee or dedicated group is well positioned to stay focused and informed on this complex, fast-changing technology. Boards could also extend existing subcommittees’ mandates to include AI components. For example, the audit committee could include planning for algorithmic auditing.

Given their role, board members aren’t directly working with AI, but they are important stakeholders with essential responsibilities. As enterprise leadership and lines of business explore how AI can be a productivity enhancer and innovation driver, the board can take a higher-level, big-picture view of AI programs and focus on guiding the enterprise in the responsible deployment and management of AI.

Here, it’s helpful to use a framework for assessing risk and its impact on compliance and governance (for example, Deloitte’s Technology Trust Ethics framework). Such a framework can help board members make clear-eyed evaluations and steer the organization toward the most valuable uses of AI.

The AI landscape is changing quickly. In addition to the steps above, board members can turn to advisors who are already developing the tactics and standards for AI governance and oversight. While risk management is always a moving target, literacy, professional experience, dedicated attention, and a vision for the future can help boards better position and guide their organizations in the era of AI.

Want more insights?

Our continuing five-action perspective series is just one way we’re staying ahead of the curve to help organizations navigate the fast-evolving world of AI. For the latest research, market trends, and specialist commentary on the technologies shaping our future—including more articles from this series—subscribe to the Deloitte AI Institute newsletter.

Endnotes

¹ J. H. Caldwell, Global risk management survey, 12th edition, Deloitte Insights, 2021.

The Deloitte AI Institute

We collaborate with academic groups, startups, entrepreneurs, innovators, mature AI product leaders, and visionaries to explore AI risks, policies, ethics, and use cases. Access our full body of work and join our live events for more.
