Getting ahead of the risks of artificial intelligence

Leading AI adopters manage potential risks and achieve better outcomes

Susanne Hupfer

Risk concerns are holding back AI adoption—and research suggests that actively managing AI risk boosts the technologies’ benefits to the organization.

When Deloitte’s recent State of AI in the Enterprise study asked AI adopters about their organization’s top adoption challenges, “managing AI-related risks” topped the list—tied with integration and data challenges, and on par with implementation concerns.1 Yet while concern is high, action to mitigate risks is lagging: Fewer than one-third of adopters practice more than three AI risk management activities,2 and fewer than four in 10 report that their organization is “fully prepared” for the range of AI risks that concern them.

To investigate whether actively managing AI risks has any tangible benefit, we compared two groups of AI adopters that approach those risks differently: Risk Management Leaders (11%) undertake more than three AI risk management practices and align their AI risk management with their organization’s broader risk management efforts, while Risk Management Dabblers (51%) undertake up to three AI risk management practices but are not aligning them with broader risk management efforts.3

The Leaders believe AI has greater strategic importance to their business: 40% see AI as “critically important” to their business today, versus only 18% of the Dabblers—and within two years, those numbers are expected to rise to 63% and 36%, respectively. A strong focus on actively managing AI risks appears to pay off in several ways. The Risk Management Leaders:

  • Report lower levels of concern about a range of potential risks of AI, such as AI failures affecting business, backlash from customers, negative employee reactions, potential job losses, lack of transparency, and ethical issues
  • Are less likely to report that their organization is slowing its adoption of AI technologies because of emerging risks—as the figure illustrates, 58% of the Dabbler group reports this, versus only 41% of the Leader group
  • Are establishing bigger leads over competitors: 46% of the Leaders report that AI helps them establish a “significant lead” over their competition, versus just 20% of the Dabblers

Implications for executives

We believe AI adopters would do well to emulate the Risk Management Leaders:

Take a proactive approach to AI risks. Consider what risk management activities your organization is undertaking for AI, and whether there are others you could put into place.

Integrate AI risk management. Consider aligning AI risk management with your organization’s broader risk management efforts and expanding the focus of your risk management specialists to include AI.

For their part, AI solution providers may be able to improve their competitive positioning by incorporating risk management into their offerings. We suggest sharpening your risk management game: Regularly audit and test your AI systems—and certify that you do—to help ensure accuracy, regulatory compliance, and lack of bias. By reducing risks for your customers, you can be better positioned to build their trust.4

By actively managing potential AI risks, adopters and providers alike should improve their chances of being able to forge ahead and capitalize on AI.

Technology, Media & Telecommunications

Deloitte’s Technology, Media & Telecommunications (TMT) industry practice brings together one of the world’s largest groups of specialists, respected for helping shape many of the world’s most recognized TMT brands—and helping those brands thrive in a digital world.

Cover image by: Viktor Koen

  1. Beena Ammanath, David Jarvis, and Susanne Hupfer, Thriving in the era of pervasive AI: Deloitte’s State of AI in the Enterprise, 3rd Edition, Deloitte Insights, July 14, 2020.

  2. We studied the kinds of AI risk management activities that some organizations practice, such as keeping a formal inventory of AI implementations; ensuring that vendors provide unbiased AI solutions; auditing and testing AI systems to check for accuracy, regulatory compliance, and lack of bias; charging a single executive with oversight of AI risks; training developers to recognize and resolve ethical issues; establishing a board or policies to guide AI ethics; and collaborating with external parties around AI ethics.

  3. The remaining 38% of adopters occupy the middle ground between the Leaders and Dabblers. Their answers generally fall in between those of the two groups, so we do not focus on them here.

  4. Deloitte’s Trustworthy AI Framework outlines a way that organizations can think about developing ethical safeguards across six critical dimensions.

