CCO Council Summit: How to Boost Confidence in AI Through Risk Management

Deloitte’s Clifford Goss outlines three key reasons for organizations to boost confidence in AI: risk management, regulatory compliance, and a competitive edge.

Chief compliance officers (CCOs) can bring a unique perspective to the initial stand-up and ongoing enhancement of AI risk management programs, given their experience navigating disparate regulatory jurisdictions and collaborating across functions, says Clifford Goss, Ph.D., a partner and AI Regulatory, Risk and Forensic leader with Deloitte & Touche LLP.

“CCOs can play a critical role because they’ve dealt with other situations in the past involving multiple regulatory jurisdictions that require bringing together people from different silos,” Goss says. “While AI risks and risk management programs continue to evolve, the need to establish a customized risk management framework is not really a new concept. It plays to the strength of the CCO, who can bring value to the table by offering a perspective early on and throughout the process with respect to what’s going to work for the enterprise and how to right-size the program for its current and anticipated AI use environment and regulatory environment.”

Speaking during a custom panel at the Wall Street Journal CCO Council Summit, Goss laid out three reasons why organizations may need a confidence boost around AI use.

“The first is centered around risk,” he says. “We are now beyond the early days of innovative AI technology such as generative AI. Many organizations have already adopted some of this technology and are seeing the benefits. They are also seeing the risks.”

These risks include the unethical use of data, data leakage, cyber risks, and misuse, each of which can drive financial loss and reputational damage. To mitigate these risks, organizations need strong AI risk management programs to give them the confidence to implement and use AI.

The second reason relates to regulation.

“We’re living in a world of patchwork global regulations,” Goss observes, asking the audience to consider mandates such as the EU AI Act, other national AI regulations, and potentially dozens of local and state rules in the U.S., alongside industry standards. “Many of these regulations have serious penalties for non-compliance, so organizations need the confidence to know that they can use their AI, generative AI, and agents without falling out of compliance.”

The last reason to boost confidence relates to competitive edge. Organizations that can quickly and creatively use AI and generative AI will likely outpace their competitors. However, they may hold back from deploying these technologies if they are not confident the outputs are accurate and reliable.

To build trust in AI models, both internally and externally, organizations should establish a resilient AI risk management program.

“One often overlooked component of resilience is the ability for the program to be flexible and adaptive to change, be it technological or regulatory,” Goss notes. “For example, for the last few years, everything was about generative AI, and now suddenly we’re talking about agentic AI and multi-agent systems. This [type of shift] introduces new risks. The ability to adapt the program to identify new risks and put in place new controls to comply with any changes in regulations is very important.”
