
Building trustworthy AI

Risk-ready innovation for modern controllership

Artificial intelligence is rapidly transforming controllership. But as AI adoption accelerates, so do regulatory expectations and risk considerations. Explore how finance leaders can implement trustworthy AI governance frameworks that balance innovation, compliance, and value in today’s evolving risk environment.

A blog post by Travis Behan, Derek Snaidauf, Ryan Hittner, and Beth Kaplan

From journal entry automation to generative AI (GenAI)-powered flux analysis, AI is rapidly moving from experimental use cases to the core of controllership operations. AI presents tremendous opportunities to increase efficiency, improve insight generation, and enhance decision-making. However, as AI adoption accelerates, so does the complexity of managing its risks.

For controllership leaders, the challenge is no longer whether to adopt AI, but how to do so responsibly. Building trustworthy AI programs requires balancing innovation with governance, enabling organizations to harness value while protecting financial integrity, regulatory compliance, and stakeholder confidence.

As AI becomes embedded in financial processes, organizations face a new category of risks that extend beyond traditional technology and operational considerations.

AI systems are increasingly attractive targets for cyberattacks, particularly because they rely on large data sets and interconnected environments. At the same time, sensitive financial and organizational data may be exposed through inference risks, in which a model inadvertently reveals confidential information contained in its training data or prompts.

Beyond security concerns, organizations should also consider ethical and reputational risks. AI models can unintentionally enable biased or discriminatory outcomes if data or algorithms are flawed. Similarly, AI-driven tools can amplify disinformation or surveillance capabilities at scale, creating governance and public trust challenges.

Another growing concern is overreliance on AI outputs. While AI can significantly enhance productivity, excessive dependence without proper oversight may result in inaccurate or unsafe decisions. Additionally, poorly aligned AI objectives may conflict with broader organizational goals, values, or human judgment, further reinforcing the need for strong governance frameworks.

Regulatory focus on AI risk management and governance is expanding globally, creating new expectations for finance and accounting functions. Adoption rates underscore why regulators are paying close attention. Recent polling of finance and accounting professionals indicates that more than 80% expect AI-powered tools—such as AI agents and GenAI chatbots—to become standard components of the finance technology arsenal in the near future. Additionally, more than half of organizations report they are already deploying agentic AI or other advanced AI technologies.1

Regulatory bodies globally are intensifying their focus on AI risk management and governance. As of 2025, multiple jurisdictions have introduced or proposed regulations designed to address risk and establish standards for AI transparency, accountability, and responsible development and use.2

For controllership teams, this multifaceted and evolving risk landscape highlights the importance of embedding governance and risk management into AI adoption strategies and financial control environments from the outset.

Regulators are also providing targeted guidance relevant to financial reporting and auditing. Agencies including the Public Company Accounting Oversight Board (PCAOB) and the Securities and Exchange Commission (SEC) are highlighting expectations and issuing guidance on AI adoption in financial reporting, auditing, and disclosure practices.

PCAOB and SEC guidance

The PCAOB emphasizes several critical considerations for organizations deploying AI within financial reporting and audit environments:

  • Maintaining appropriate human oversight over AI-generated outputs
  • Ensuring auditability and transparency of AI-generated content
  • Protecting data security and privacy throughout AI development and usage

The SEC has also signaled increased scrutiny in several areas:

  • Preventing “AI-washing” or overstating AI capabilities in disclosures or investor communications
  • Strengthening risk disclosures related to AI usage and dependencies
  • Establishing AI-focused regulatory task forces to monitor emerging risks

Together, these developments reinforce the expectation that organizations treat AI governance as a core element of financial risk management.

Effective AI governance should not slow innovation. Instead, leading organizations are adopting what can be described as a "Goldilocks" approach: governance frameworks that are neither overly restrictive nor too permissive.

When designed effectively, AI governance programs provide clarity, confidence, and control—enabling organizations to move faster, make more strategic AI investments, and unlock sustainable business value.

Core principles for building an effective AI governance framework

To support responsible AI adoption while maintaining operational agility, organizations should consider several foundational principles.

Focus on speed-to-value

Starting with targeted use cases and lightweight governance processes helps organizations build early success and stakeholder buy-in. Demonstrating measurable value builds momentum to encourage broader adoption and investment.

Take a risk-based approach

Not all AI applications carry the same level of risk. Governance efforts should prioritize high-impact or high-risk use cases while enabling streamlined and fast-tracked approval processes for lower-risk initiatives.
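
As a simple illustration, a governance team might score each use case on a few risk dimensions and map the result to a review path. The sketch below is hypothetical: the scoring dimensions, weights, and tier cutoffs are assumptions for illustration, not a prescribed framework.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    financial_impact: int   # 1 (low) to 5 (high), e.g., effect on reported balances
    autonomy: int           # 1 (human-reviewed) to 5 (fully automated decisions)
    data_sensitivity: int   # 1 (public data) to 5 (confidential financial data)

def governance_tier(uc: AIUseCase) -> str:
    """Map a use case to a review path based on a simple composite risk score."""
    score = uc.financial_impact + uc.autonomy + uc.data_sensitivity
    if score >= 12:
        return "Tier 1: full governance review (model validation, committee approval)"
    if score >= 8:
        return "Tier 2: standard review (documented controls, periodic testing)"
    return "Tier 3: fast-tracked approval (self-attestation, lightweight monitoring)"

for uc in [
    AIUseCase("GenAI flux-analysis commentary", 4, 2, 4),
    AIUseCase("Journal entry anomaly flagging", 5, 3, 5),
    AIUseCase("Internal meeting-notes summarizer", 1, 1, 2),
]:
    print(f"{uc.name}: {governance_tier(uc)}")
```

A tiering mechanism like this keeps low-risk pilots moving quickly while concentrating scarce review capacity on the use cases that touch reported financial results.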

Design for flexibility and scalability

AI risk landscapes evolve rapidly. Governance frameworks need to be nimble, allowing organizations to refine policies, controls, and oversight structures as technologies and regulations evolve.

Commit to continuous improvement

Organizations should regularly measure AI performance, evaluate governance effectiveness, and invest in enhancing technology capabilities and workforce skills to improve processes and stay ahead of emerging risks.

Establishing effective AI governance across the three lines

A successful AI governance program requires clear accountability across the three lines model, ensuring that risk management and oversight responsibilities are embedded throughout the organization.

First line: Business and operational teams

The first line plays a critical role in implementing and monitoring AI solutions. Responsibilities often include automating validation and monitoring processes through continuous testing and stress testing. These teams also establish thresholds and key performance indicators to refine AI models and ensure reliable outputs.
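
To make that concrete, the sketch below shows one way a first-line team might compare rolling KPI values against thresholds and raise alerts. The KPI names, thresholds, and data are illustrative assumptions, not a standard.

```python
# Minimal sketch of first-line output monitoring against thresholds.
from statistics import mean

THRESHOLDS = {
    "flux_explanation_accuracy": 0.95,          # share of AI flux comments confirmed by reviewers
    "journal_entry_false_positive_rate": 0.10,  # flagged entries later cleared as valid
}

def evaluate_kpis(observations: dict[str, list[float]]) -> list[str]:
    """Compare rolling KPI averages against thresholds and return alerts."""
    alerts = []
    for kpi, values in observations.items():
        avg = mean(values)
        limit = THRESHOLDS[kpi]
        breached = avg < limit if "accuracy" in kpi else avg > limit
        if breached:
            alerts.append(f"{kpi}: rolling average {avg:.2f} breaches threshold {limit:.2f}")
    return alerts

# Example run with synthetic monitoring data
print(evaluate_kpis({
    "flux_explanation_accuracy": [0.97, 0.94, 0.91],
    "journal_entry_false_positive_rate": [0.08, 0.14, 0.12],
}))
```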

A key risk mitigation strategy at this level is workforce upskilling. Providing targeted AI training and awareness programs helps employees understand both the benefits and limitations of AI tools, promoting responsible usage.

Risk mitigation technique: Upskill the workforce with targeted training and awareness programs.

Second line: Risk and compliance functions

Risk and compliance teams provide oversight by reviewing model documentation, governance dashboards, and risk assessments. They are responsible for defining AI risk strategies, including taxonomies, risk appetite, key risk indicators, and testing frameworks.

Blending AI outputs with human review is a critical risk mitigation approach. Establishing clear documentation standards and transparency requirements ensures AI users can understand and appropriately challenge AI-generated insights when necessary.
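
The following sketch illustrates one possible human-in-the-loop pattern, in which only high-confidence AI output is auto-accepted and everything else is routed to a preparer. The field names and confidence cutoff are assumptions for illustration only.

```python
# Minimal sketch of blending AI output with human review.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIResult:
    item: str           # e.g., an account flagged during flux analysis
    explanation: str    # model-generated narrative
    confidence: float   # model's self-reported confidence, 0.0 to 1.0

@dataclass
class ReviewedResult:
    result: AIResult
    reviewer: str | None
    disposition: str
    reviewed_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def route_for_review(result: AIResult, confidence_cutoff: float = 0.9) -> ReviewedResult:
    """Auto-accept only high-confidence output; everything else goes to a preparer."""
    if result.confidence >= confidence_cutoff:
        return ReviewedResult(result, reviewer=None, disposition="auto-accepted, documented")
    return ReviewedResult(result, reviewer="assigned preparer", disposition="pending human review")

decision = route_for_review(
    AIResult("Accrued liabilities variance", "Driven by one-time legal reserve", 0.72)
)
print(decision.disposition)  # "pending human review"
```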

Risk mitigation technique: Blend AI results and human review. Set standards for clear, accessible documentation for all AI users.

Third line: Internal audit

Internal audit functions provide independent assurance by evaluating AI governance frameworks, model performance, and compliance with internal and regulatory expectations, and by sharing results and data with audit teams. This includes creating audit trails that document approvals, feedback, and reporting activities.
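
As one illustration, an audit trail can be kept as an append-only log in which each entry references a hash of the previous one, making later tampering easier to detect. The event fields and hashing approach below are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of an append-only, hash-chained audit trail.
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Chain each entry to the previous one so tampering is detectable on review."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, actor: str, action: str, detail: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else ""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "detail": detail,
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

trail = AuditTrail()
trail.record("controller", "approval", "Approved GenAI flux-analysis model v1.2 for month-end close")
trail.record("preparer", "feedback", "Flagged two AI explanations overridden during review")
trail.record("internal audit", "report", "Quarterly AI governance assessment issued")
```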

Promoting a culture of continuous learning is essential for internal audit teams, enabling them to stay current with AI technologies and associated risks while supporting responsible AI adoption.

Risk mitigation technique: Promote continuous learning to encourage responsible AI use.

Moving toward trustworthy AI

AI is reshaping controllership, offering unprecedented opportunities to improve efficiency, enhance analytics, and strengthen financial decision-making. However, realizing these benefits requires a deliberate focus on governance, risk management, and regulatory alignment.

By adopting a balanced governance approach, controllership leaders can build trustworthy AI programs that accelerate innovation while mitigating risk and protecting organizational integrity. Controllership functions that succeed in this approach can help position finance as a strategic driver of trustworthy enterprise-wide AI transformation.
 

1 Deloitte, “Next-gen controllership: AI and emerging tech’s impact on finance,” July 2025.

2 IAPP, Global AI Law and Policy Tracker, last updated 2025.
