
Boards Must Take Control of AI

Employees seem to embrace AI, but what opportunities and risks does the use of AI entail for the company – and what are the responsibilities of the board? 

The parallel to the “iPhone moment” is highly relevant when considering the transformative impact artificial intelligence (AI) is having on business. When first experiencing AI in practice, one quickly realises that a return to previous working methods is out of the question. AI represents a paradigm shift that requires a strategic and responsible approach from both company management and the board of directors. 

Despite AI’s impressive ability to generate content and solve complex tasks, there is a clear need for training in the proper use of AI in order to manage the risks the technology represents. Information created by AI must always be reviewed by people with relevant expertise in the applicable field. This is more critical than ever, as AI systems can generate inaccurate, misleading, or even fabricated information. 

Risks and opportunities 

The survey “Now Decides Next: Generating a New Future”, conducted by Deloitte in March 2025 among 2,000 senior executives globally (170 of them in the Nordics), highlights how companies are using AI to create business value. 

One key finding was a declining level of interest in AI among senior management and boards, while interest among employees continues to increase. If this trend continues without the board understanding its role and increasing its engagement, it may result in employees adopting AI tools in ways that increase the company’s risk exposure beyond its accepted risk appetite. 

AI creates significant opportunities for both employees and companies, but these opportunities also introduce new risks. In 2026, AI assurance is expected to become one of the most important areas the board must control in order to reduce risk to acceptable levels. 

An important task for the board 

The board of directors has a statutory responsibility for the management of the company. With the introduction of new legal requirements — such as the EU AI Act, GDPR, and national data protection legislation — the board must ensure that the company operates in accordance with applicable regulations governing the use of AI. 

To meet these obligations, companies need clear processes and policies for AI usage that are consistently followed. International standards have been developed to help organisations reduce risk. One such standard is ISO/IEC 42001, which specifies requirements for an artificial intelligence management system and provides a framework for establishing, implementing, maintaining, and continually improving AI governance.

Governance and control 

The board should actively assess the company’s AI governance framework. Key questions include: 

  • What information security frameworks has the company implemented? 

  • Has the company adopted ISO/IEC 42001? 

  • Are current AI practices compliant with applicable laws and regulations? 

Failure in this area can have serious consequences. Sensitive data may be stored in AI systems or external databases for future use, and deleting such data — as well as verifying its complete removal — can be both challenging and costly. The consequences may include significant fines, reputational damage, and loss of trust among employees and customers. 

Review of AI models 

A systematic review should be conducted of all AI models used in the company, whether they are internally developed or based on external solutions such as ChatGPT. The board must ensure that both the models and their use comply with data protection regulations, including GDPR and other relevant requirements. 
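One practical way to support such a review is to keep an internal register of every AI system in use, together with the facts needed to assess its compliance. The sketch below is a minimal, hypothetical illustration in Python; the AiSystemEntry structure and its field names are our own assumptions, not a format prescribed by ISO/IEC 42001, the EU AI Act, or GDPR.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    @dataclass
    class AiSystemEntry:
        """One row in a hypothetical internal register of AI systems."""
        name: str                        # e.g. "Customer support chatbot"
        provider: str                    # internal team or external vendor
        business_owner: str              # person accountable for the use case
        personal_data_categories: List[str] = field(default_factory=list)
        legal_basis_documented: bool = False      # is the GDPR basis recorded?
        last_review: Optional[date] = None        # date of the latest compliance review

        def review_overdue(self, today: date, max_age_days: int = 365) -> bool:
            """True if the entry has never been reviewed or the review cycle has lapsed."""
            return self.last_review is None or (today - self.last_review).days > max_age_days

    # Example: list the systems whose compliance review is overdue.
    register = [
        AiSystemEntry(name="Payroll analytics pilot", provider="External SaaS vendor",
                      business_owner="HR Director",
                      personal_data_categories=["salary", "benefits"]),
    ]
    print([entry.name for entry in register if entry.review_overdue(date.today())])

A register of this kind gives the board a concrete artefact to request and question, rather than relying on ad hoc assurances from management.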

Consider the following scenario: an employee uses a generative AI service to analyse sensitive data such as payroll information, employee benefits, or insurance plans within a corporate group. Such use could constitute a clear breach of multiple laws and regulations, and the data involved could potentially be retained within the AI service. 
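A simple technical control can make this scenario less likely. The following is a minimal, hypothetical pre-submission check in Python; the screen_prompt function and its pattern list are illustrative assumptions only, not a substitute for a proper data loss prevention tool or the company's own data classification rules.

    import re

    # Deliberately small, illustrative patterns for obviously sensitive content.
    SENSITIVE_PATTERNS = {
        "payroll figure": re.compile(r"\b(salary|payroll)\b.*\d", re.IGNORECASE),
        "Finnish personal identity code": re.compile(r"\b\d{6}[-+A]\d{3}[0-9A-Z]\b"),
        "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    }

    def screen_prompt(text: str) -> list:
        """Return the names of the sensitive patterns found in a prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

    prompt = "Compare the salary of 4 800 EUR against our benefits plan for employee 131052-308T"
    hits = screen_prompt(prompt)
    if hits:
        # Block the request and route it to an approved, internally governed channel instead.
        print("Blocked before sending to an external AI service:", hits)

Such a check does not replace policy or training, but it illustrates how a company can translate its risk appetite into an operational guardrail.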

Third-party assurance

The board should also consider whether the company should commission an independent third-party review. Such a review can assess whether the organisation complies with ISO/IEC 42001, the EU AI Act, and applicable national legislation, and whether risk-reducing measures have been implemented correctly and effectively.

Trust through control

To maintain control over the risks the company assumes in today’s AI landscape, the board should think in three steps:

  1. Establish clear governance and accountability for AI
  2. Ensure compliance with laws, standards, and internal policies
  3. Continuously monitor, review, and improve AI usage

Only through structured oversight and informed engagement can boards build trust — both internally and externally — while enabling their organisations to realise the full value of artificial intelligence.

Contacts

Alex Clarke
AI & Data Analytics Lead, Audit & Assurance
alex.clarke@deloitte.fi

Jere Lehtioksa
Manager, Tax & Legal
jere.lehtioksa@deloitte.fi

Harri Metso
Partner, Tax & Legal
harri.metso@deloitte.fi
