
AI and Risk Management in Ireland

Colm McDonnell


Advances in technology continue to bring substantial, accelerated change to our day-to-day lives. The latest shift in the technology sector is Artificial Intelligence (AI): a wave of technology with data at its core. There is no single agreed definition of AI; however, it is generally understood as enabling a computer system to execute tasks independently, without requiring human intelligence.

AI is already a significant factor in many transformational business and technical applications, typically combining data analytics and machine learning (ML). Although Irish financial institutions are concerned about disruption from newer entrants utilising AI, they have realised that its long-run potential and advantages are significant. Globally, financial services entities have started to learn, understand and utilise AI to its full potential and reap business benefits, and this is beginning to gain traction within Ireland.

Irish hedge funds, broker-dealers and other firms are turning to AI for higher uncorrelated returns and to optimise trade execution. Both public and private sector institutions may use these technologies for regulatory compliance, surveillance, data quality assessment and fraud detection.

Some Irish banks are using AI platforms to deliver more personalised customer interaction. However, the limited availability of accurate data in sufficient quantity and quality, coupled with inadequate knowledge of the risks accompanying AI, has delayed the widespread adoption of AI in the financial sector. These obstacles to operationalisation have already begun to be removed, and I would recommend that organisations do not wait for perfect data, or they will never commence.

Challenges to AI adoption in the financial services industry

There are a number of challenges to the widespread adoption of AI technology in the Irish financial sector:

  • Conflicting views on where AI should be applied;
  • The dependence of AI decisions on data of high quality and sufficient quantity;
  • The many processing layers of AI that evolves over time, which make auditability and traceability challenging; and
  • Understanding AI and its implications for the specific use case.

These challenges can begin to be addressed through an embedded risk management framework for AI.


Embedding AI in a risk management framework

It is important to identify and manage AI-related risks and controls; Deloitte's AI Risk Management Framework (RMF) equips Irish organisations with a mechanism to do so.

Below are key considerations for AI risks covered in the framework:

  • AI is continuously evolving and learning; the constantly changing data set makes it difficult to discover the built-in bias of the model, which can lead to inefficient outcomes.
  • Dependencies on legacy systems may introduce data compatibility risks, as is often the case in Irish financial services organisations.
  • Regulatory and compliance: the multiple hidden decision-making layers employed by AI can create comprehension difficulties and present problems for regulation, both internally and externally, in the Irish environment.
  • Large-scale adoption of AI can pose a significant cultural challenge due to new potential regulatory and ethical concerns.
  • The risk of not defining appropriate responsibilities and accountability across the AI lifecycle.
  • An increased risk if the organisation tends towards clustering, which may leave the underlying algorithm disproportionately sensitive to certain data.
  • A 'black-box' adaptation will not produce clarity on liability between vendors, users and operators in certain circumstances of damage.

For Irish organisations to construct an effective risk management process, their proposed AI strategies should first be aligned with their respective risk appetites. Before assessing the impact of AI risk, consistent assessment benchmarks should be established for all use cases. One support for this element of the risk management framework is the Risk Management Life Cycle (RMLC).

Risk Management Life Cycle (RMLC)

As a precursor to the framework, it is necessary to establish a robust RMLC, under which AI risk treatment activities are revisited at regular intervals. The conceptual RMLC has four key stages, presented below:

  1. Identify
  2. Assess
  3. Control
  4. Monitor and report



Identify

At this stage the business needs to review its governance and approach to identifying risk, and implement an all-inclusive, continuous approach to risk identification.


Assess

For Irish entities operating in a developing AI ecosystem, existing risk appetite assessment benchmarks are often inadequate for a qualitative analysis of AI use cases. As AI models evolve over time, organisations need a persistent, dynamic approach to reviewing their risk exposure, both 'top-down' and 'bottom-up'.


Control

The control process should consider how the AI solution interacts with various stakeholders and what the potential touch points are. AI solutions should be monitored and tested regularly beyond the development stage.

Monitoring and reporting

A monitoring and reporting structure should ensure that the AI solution performs in accordance with the specific use case. Reporting should cover both the technical performance and the business operational outcomes achieved by the model.
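To make this concrete, the monitoring check described above can be sketched in a few lines of Python. This is purely illustrative: the metric names, the accuracy threshold and the approval-rate band are hypothetical stand-ins for the consistent assessment benchmarks the framework asks a firm to establish, not part of Deloitte's framework itself.

```python
from dataclasses import dataclass

@dataclass
class MonitoringReport:
    """Outcome of one monitoring cycle for a deployed AI model."""
    accuracy: float          # technical performance metric
    approval_rate: float     # business operational outcome
    within_tolerance: bool   # does the model stay inside agreed benchmarks?

def monitor_model(predictions, actuals, approvals,
                  min_accuracy=0.90, approval_band=(0.40, 0.60)):
    """Compare live model behaviour against pre-agreed benchmarks.

    `min_accuracy` guards technical performance; `approval_band`
    guards a business outcome (e.g. the share of cases approved).
    Both thresholds are illustrative assumptions.
    """
    correct = sum(p == a for p, a in zip(predictions, actuals))
    accuracy = correct / len(predictions)
    approval_rate = sum(approvals) / len(approvals)
    ok = (accuracy >= min_accuracy
          and approval_band[0] <= approval_rate <= approval_band[1])
    return MonitoringReport(accuracy, approval_rate, ok)
```

A report with `within_tolerance=False` would then feed back into the Identify and Assess stages, rather than being treated as a one-off alert.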

What are regulators most likely to require?

Understanding the use of AI is attracting increased attention from regulators. Irish financial services companies planning to adopt, or already using, AI can expect the level of scrutiny to increase. Deloitte has identified a few key regulatory areas related to the policies and procedures an Irish firm should consider when adopting an AI solution.


Governance, oversight and accountability

Various challenges and difficulties arise where the proposed application of AI does not fit the current governance framework:

  • It is essential to have adequate governance in place, along with an effective RMF, and to train the members of the governance committee on the risks associated with the AI model; and
  • The firm needs to document all procedures and controls relating to manual 'kill switches' and 'exit chutes', so that AI can be disabled as soon as a behavioural discrepancy is detected.
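A minimal sketch of the kill-switch control described above might look as follows. The class, its trip threshold and the fallback path are all hypothetical illustrations of the principle, assuming a callable model and a documented manual fallback; a real implementation would sit inside the firm's own control framework.

```python
class KillSwitch:
    """Wraps an AI decision function with a manual 'kill switch':
    once enough behavioural discrepancies are reported, all further
    decisions are routed to a documented fallback procedure."""

    def __init__(self, model, fallback, max_discrepancies=3):
        self.model = model                      # AI decision function
        self.fallback = fallback                # manual / rules-based fallback
        self.max_discrepancies = max_discrepancies
        self.discrepancies = 0
        self.disabled = False

    def report_discrepancy(self):
        """Called by monitoring when model behaviour deviates."""
        self.discrepancies += 1
        if self.discrepancies >= self.max_discrepancies:
            self.disabled = True                # trip the switch

    def decide(self, case):
        # Once tripped, every decision takes the fallback path.
        if self.disabled:
            return self.fallback(case)
        return self.model(case)
```

The design point is that disabling is one-way and auditable: the switch trips automatically on reported discrepancies, and re-enabling would be a deliberate, documented governance decision rather than a code path.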

Capability and engagement of control

There are various risks associated with the control of ML-based systems once operationalised due to their continuous learning and adapting behaviour.

  • It is essential to provide all staff with adequate resources and expert training in order to fully understand the risks associated with the AI solutions adopted.
  • Risk and compliance procedures must accommodate the key components of the AI application, to assist in determining risk controls aligned with the firm's risk appetite.

Documentation and audit trails

Documentation and audit trails should include all documentation related to the development and implementation process for AI, and logging and monitoring should be aligned with auditing standards.

  • Complete documentation of the testing and approval procedures needs to be captured, including all results and how the AI model is expected to meet requirements prior to implementation.
  • A tracking and management system needs to be in place to ensure that any discovered issues are handled in line with Irish auditing standards.

Third-party risk and outsourcing

Corporate Ireland has transformed, engaging technology providers in response to an expanding regulatory environment and a dynamic, challenging market. The following useful practices should be followed when working with third-party AI service providers:

  • An AI model designed or deployed by an external vendor should undergo the same meticulous testing and monitoring procedures as if it were developed in-house; and
  • A robust continuity plan should be in place to maintain operations if the third-party AI solution ceases operation or malfunctions.


General Data Protection Regulation for AI

It should also be remembered that AI solutions which process personal data will be required to comply with the General Data Protection Regulation (GDPR) and the e-Privacy Regulation (e-PR). The e-PR complements the GDPR, concerning itself with electronic communications and the rights to confidentiality, data/privacy protection and cookies.

It is essential that AI complies with these regulations, maintaining all data in a way that safeguards the information. In general, Irish companies will be expected to adopt regulatory principles and procedures for algorithmic accountability and auditability, while adhering to automated decision-making and profiling requirements.

Regulating AI

Understanding the risks and benefits of AI is a challenge for FS institutions, regulators and supervisors. The attention that AI is gaining has made it imperative for regulators and supervisors to explore its uses. The major areas of concern that regulators and policy makers need to consider are FS stability, potential network and herding effects, and cybersecurity. The challenge regulators face is maintaining the balance between encouraging technological innovation and protecting customers, market integrity and financial stability.

Determining the right balance has become difficult because of the pace at which the technology is evolving and the pace at which new regulations need to be designed and implemented. There are a number of explanations for this: regulators endeavour to catch up with advancing technologies, and the technology itself may not be fully compatible with existing traditional regulatory structures [1].

Irish policy makers are already under pressure, as there is substantial investment in the AI industry, which is strongly affecting Irish finance and the economy. From an AI perspective, regulatory guidance will play a vital role in understanding supervisors' expectations in terms of a risk management approach. Irish regulators and FS companies alike will need to overcome national and sectoral borders and work with a wide range of counterparties to develop policies and provide effective solutions to emerging risks, without losing sight of broader policy and ethical concerns.


Despite some challenges to adoption, AI can become central to Irish FS firms providing better services and adopting customer-centric approaches for the delivery of tailored solutions. However, AI in the Irish financial landscape is in its infancy, which gives FS institutions an opportunity to learn and understand the benefits and risks associated with AI through an RMLC.

It is essential to understand that AI is a two-way learning process, where the board and senior management need to understand the controls and functionality, while specialists consider the risk and regulatory perspective. Irish regulators will look to identify potential risks and unexpected consequences associated with the adoption of AI in finance.

Ultimately, it is about striking the right balance in Ireland between supporting technological innovation and safeguarding customers, market integrity and financial stability. It is important that regulators and industry work together to design appropriate policies that address the cross-border and cross-sectoral, societal and ethical implications of large-scale AI implementation.


This article was written by Colm McDonnell, Partner in Risk Advisory, and first appeared in 'Finance Dublin.'
