Why internal audit should act now to support the establishment of an effective AI governance and risk management program

Talking points

  • As AI adoption accelerates, governance in many companies is still catching up.
  • Internal audit teams should act as an AI catalyst—promoting the adoption of a robust AI governance program.
  • By evaluating adherence to leading practices, internal audit teams can provide assurance that AI governance, risk management, and controls are well designed and functioning effectively.

Artificial intelligence (AI) has quickly shifted from a boardroom buzzword to a strategic imperative for many organizations. As executive teams double down on the latest AI innovations, companies are accelerating experiments with chatbots, agentic systems, and advanced analytics—aiming to boost quality, cut costs, and unlock new growth. Yet, while these pilots race ahead, many organizations are just beginning to establish AI governance frameworks—or, in some cases, have yet to get started—opening themselves up to serious risks and consequences.

This gap between the rapid deployment of AI and the slower pace of governance development presents internal audit organizations with a unique and timely opportunity to step in to add immediate value to their companies. Internal audit can serve as the seatbelt for a company that already has the accelerator to the floor with its AI pilot programs. This emerging role for internal audit involves elevating risk conversations and embedding assurance early in the AI deployment process. 

In this blog, we’ll take a closer look at internal audit’s role as a catalyst in helping the enterprise appropriately govern AI expansion. We’ll also explore leading practices teams can follow to get up to speed on effective AI governance.

Guidance for AI governance

Let’s begin with some general guidance for enterprises starting their AI governance journey. When establishing AI governance, companies should: 

  • Inventory with intent. Go beyond formal projects; look for robotic process automation (RPA) bots that use large language models (LLMs), Software as a Service (SaaS) integrations, and experimental pilot projects.
  • Consider all types of risks. Evaluate policies against the five pillars of AI assurance: transparency, fairness, privacy and security, reliability, and accountability. 
  • Vary testing techniques. Combine traditional control testing with new methods: adversarial prompting, boundary/stress tests, and data-lineage tracing.
  • Recognize the risks posed by agentic AI. Autonomous agents, which plan, act, and learn with minimal human oversight, can potentially introduce more risk than, say, a Q&A chatbot. Agentic AI controls should therefore include goal alignment, guardrails, audit trails, and human-in-the-loop overrides to effectively mitigate this higher level of risk.
  • Don’t forget that culture counts. In our experience, effective governance is 20% policy and 80% behavior. The organization should foster the mindset that responsible AI is everyone’s job.
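To make the "vary testing techniques" point above concrete, here is a minimal sketch of a boundary/stress test. Everything in it is hypothetical: `classify_refund_request` stands in for a real model endpoint, and the $500 auto-approval policy is invented for illustration. The idea is simply that an auditor probes edge cases (boundary values, invalid inputs) and checks each decision against the stated control expectation.

```python
# Hypothetical boundary/stress test of an AI control.
# classify_refund_request is a stand-in for a real model endpoint;
# the policy (refunds over $500 require human review) is invented.

def classify_refund_request(amount: float) -> str:
    """Toy model stub: auto-approve small refunds, escalate large or invalid ones."""
    if amount < 0 or amount != amount:  # negative or NaN input
        return "escalate"
    return "auto-approve" if amount <= 500 else "escalate"

def run_boundary_tests() -> list[str]:
    """Probe edge cases an auditor might try; return any policy violations."""
    cases = [0.0, 499.99, 500.0, 500.01, -1.0, 1e12, float("nan")]
    violations = []
    for amount in cases:
        decision = classify_refund_request(amount)
        # Control expectation: anything outside 0..500 must be escalated.
        expected = "auto-approve" if 0 <= amount <= 500 else "escalate"
        if decision != expected:
            violations.append(f"amount={amount}: got {decision}, expected {expected}")
    return violations
```

An empty violations list means the control held across the boundary cases; in practice the same pattern extends to adversarial prompts and data-lineage checks, with the case list drawn from the organization's own risk scenarios.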

Internal audit’s changing role

Given the rapid rise of AI adoption across today’s companies, the internal audit organization can no longer afford to take a passive, wait-and-see approach to governance. Instead, it should step up as a proactive catalyst to help navigate both the opportunities and risks of AI. Specifically, internal audit should: 

  • Raise awareness of AI risks by identifying and communicating emerging threats and vulnerabilities to the organization.
  • Champion the establishment of a robust AI governance program to promote responsible innovation.
  • Evaluate the design and test the effectiveness of the governance program to confirm it’s working as intended.

Practical steps that can have a major impact

AI governance should be grounded in core governance principles but also allow for today’s rapid pace of innovation. The goal for the enterprise is to set clear AI guardrails—enabling responsible deployment without unnecessary roadblocks.

Here are four practical steps internal audit teams can take to help the organization establish its AI governance framework:

  1. Validate the AI landscape: Inventory use cases, data flows, and model types—including “shadow” projects—and compare them to what the organization tracks. Example result: A quick-start survey uncovers a marketing team’s generative pretrained transformer (GPT) content generator running without approval.
  2. Assess the governance framework: Compare existing policies to leading practices (e.g., NIST AI RMF, ISO 42001). Identify missing guardrails for ethics, bias, security, and accountability. Example result: Governance gap analysis shows the company policy lacks practical monitoring expectations for critical AI applications.
  3. Test AI control design and operating effectiveness: Validate that AI controls (e.g., data quality, model monitoring, change management, human oversight) work in real life—before and after deployment. Example result: For an agentic AI that autonomously reprices e-commerce SKUs, the internal audit team might simulate out-of-range pricing scenarios to confirm that escalation triggers work as intended.
  4. Advocate and elevate: Present findings to senior leadership, recommend remediation, and track progress—not just as an annual exercise, but on a continuous basis. Example result: Audit dashboards presented to the audit committee highlight risk trends and the remediation status of open audit issues involving AI governance.
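The repricing scenario in step 3 can be sketched in a few lines. This is a hypothetical illustration, not a real system: the ±20% price-move band, the SKU figures, and the escalation behavior are all invented to show what "simulating out-of-range scenarios" might look like in a test harness.

```python
# Hypothetical guardrail for an agentic repricing system (step 3 illustration).
# The 20% price-move band and the test scenarios are invented for this sketch.

BAND = 0.20  # maximum relative price change allowed without human review

def apply_reprice(current: float, proposed: float) -> tuple[str, float]:
    """Accept in-band price moves; escalate out-of-range or invalid proposals."""
    if current <= 0 or proposed <= 0:
        return ("escalate", current)  # invalid data: hold price, alert a human
    change = abs(proposed - current) / current
    if change > BAND:
        return ("escalate", current)  # human-in-the-loop override point
    return ("applied", proposed)

def simulate_out_of_range() -> list[str]:
    """Audit simulation: confirm escalation triggers fire where expected."""
    scenarios = [(10.0, 11.0), (10.0, 13.0), (10.0, 0.5), (10.0, -4.0)]
    results = []
    for current, proposed in scenarios:
        status, final = apply_reprice(current, proposed)
        results.append(f"{current}->{proposed}: {status}, final price {final}")
    return results
```

In an actual audit, the point of such a simulation is to confirm that the production escalation path (alerting, price hold, human approval) fires for every out-of-range scenario, not just that the arithmetic is correct.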

Moving forward

AI innovation won’t wait for perfect governance, but strong governance is essential—so internal audit should act now. Start by piloting an AI governance review and a deep-dive assessment on a high-risk use case. Embed AI risks into the annual audit plan and use data analytics to monitor emerging issues. Partner with risk owners, compliance, and technology to encourage practical controls. By acting early, internal audit can help the organization capture AI’s benefits while managing risk and meeting stakeholder expectations.

What role can Deloitte play?

Deloitte can advise you on internal audit’s expanding AI role. Our Audit & Assurance professionals have extensive experience advising finance executives and internal audit teams in establishing effective AI safeguards, risk management procedures, and internal controls that balance oversight and innovation. For more information, visit our website, or reach out to us directly. 

The services described herein are illustrative in nature and are intended to demonstrate our experience and capabilities in these areas; however, due to independence restrictions that may apply to audit clients (including affiliates) of Deloitte & Touche LLP, we may be unable to provide certain services based on individual facts and circumstances.

This publication contains general information only and Deloitte is not, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This publication is not a substitute for such professional advice or services, nor should it be used as a basis for any decision or action that may affect your business. Before making any decision or taking any action that may affect your business, you should consult a qualified professional advisor. Deloitte shall not be responsible for any loss sustained by any person who relies on this publication.

Get in touch

Ryan Hittner

United States
Audit & Assurance Principal

Ryan is an Audit & Assurance principal with more than 15 years of management consulting experience, specializing in strategic advisory to global financial institutions focusing on banking and capital markets. Ryan co-leads Deloitte's Artificial Intelligence & Algorithmic practice, which is dedicated to advising clients in developing and deploying responsible AI, including risk frameworks, governance, and controls related to Artificial Intelligence (“AI”) and advanced algorithms.

Ryan also serves as deputy leader of Deloitte's Valuation & Analytics practice, a global network of seasoned industry professionals with experience encompassing a wide range of traded financial instruments, data analytics, and modeling. In this role, Ryan leads Deloitte's Omnia DNAV Derivatives technologies, which incorporate automation, machine learning, and large datasets. Ryan previously served as a leader in Deloitte’s Model Risk Management (“MRM”) practice and has extensive experience providing a wide range of model risk management services to financial services institutions, including model development, model validation, technology, and quantitative risk management. He specializes in quantitative advisory across asset classes and risk domains such as AI and algorithmic risk, model risk management, liquidity risk, interest rate risk, market risk, and credit risk.

Ryan serves as a trusted advisor to CEOs, CFOs, and CROs in solving risk management and financial risk issues. He has worked with several of the top 10 US financial institutions, leading quantitative teams that address complex risk management programs, typically involving process reengineering. Ryan also leads Deloitte’s initiatives focusing on ModelOps and cloud-based solutions, driving automation and efficiency within the model/algorithm lifecycle. Ryan received a BA in Computer Science and a BA in Mathematics & Economics from Lafayette College.
Media highlights and perspectives

  • First Bias Audit Law Starts to Set Stage for Trustworthy AI, August 11, 2023 – In this article, Ryan was interviewed by the Wall Street Journal, Risk and Compliance Journal, about New York City Local Law 144-21, which went into effect on July 5, 2023.
  • Perspective on New York City local law 144-21 and preparation for bias audits, June 2023 – In this article, Ryan and other contributors share the new rules coming for the use of AI and other algorithms in hiring and other employment decisions in New York City.
  • Road to Next, June 13, 2023 – In the June edition, Ryan sat down with PitchBook to discuss the current state of AI in business and the factors shaping the next wave of workforce innovation.

Michael Schor

United States
Internal Audit Market Offering Leader

Michael is a partner in Deloitte & Touche LLP’s Audit & Assurance practice, where he focuses on advising domestic and international clients on matters of internal controls, including information technology, regulatory matters, risk management issues, and control and compliance management processes. Michael leads Deloitte’s Internal Audit practice and has more than 20 years of experience advising Deloitte’s largest clients on the various elements of the internal audit lifecycle, including risk assessments, audit planning and execution, and reporting to audit committees and key executive stakeholders. In addition to providing risk-based services to market-leading clients, Michael leads Deloitte’s efforts around the modernization of key third-line functions—focusing on both how internal audit functions execute and what they focus on—and is known for his work with mid-market and super-regional banks.

The Pulse Blog

Subscribe to receive timely perspectives on trending audit and assurance topics.