
Artificial Intelligence called to Duty in the General Insurance sector

Understanding and managing risks to provide good customer outcomes

Linda Hedqvist

At a glance
 

  • Artificial Intelligence (AI) presents a significant opportunity for general insurers (GI firms) to improve underwriting and reduce operational costs in areas such as claims management or pricing. But insurers need to place customers at the heart of their AI strategy to ensure successful implementation.
  • This means insurers will need to establish strong safeguards and controls to ensure the delivery of good customer outcomes and compliance with the Consumer Duty (Duty) when deploying AI applications.
  • This article highlights how UK insurers should look at their AI strategies and systems through the lens of the Duty, exploring some of the key challenges and actions for firms.
  • To ensure good customer outcomes, insurers need to address issues around data quality and completeness, bias and fairness, and model drift.
  • Moreover, insurers should strengthen their controls and governance of AI systems used in pricing, underwriting and claims management, paying particular attention to the customer outcomes of AI-driven pricing, and making sure they support customers properly even when claims management or customer service is partially automated.
  • Placing good customer outcomes at the heart of the AI strategy will enable insurers to obtain a competitive advantage, gain the trust of customers and regulators, and mitigate the risk of setbacks in the months and years ahead.

Who this blog is for: Board members and senior executives of UK GI firms who work on technology, AI, regulatory affairs, risk, and compliance.


Context
 

The increasing availability of AI models provided by third parties has accelerated insurers’ adoption of the technology. Many firms are now using, or experimenting with, AI1 - especially its Machine Learning (ML) subset2 - for pricing, customer support and claims management. However, without the necessary safeguards in place, the use of AI can lead to poor customer outcomes, and supervisors globally are therefore developing their expectations around AI-related risks.

The UK’s regulatory expectations around AI are also unfolding. UK regulators will follow a risk-based, context-specific and proportionate approach to regulating AI, and have been asked to publish their approach to AI supervision by April 2024. We also expect regulators to provide detailed guidance in early 2025. But, in the short to medium term, the UK Government’s AI strategy will rely on existing regulatory frameworks. The Duty is a case in point: in the absence of a formal regulatory approach to AI, it provides the FCA with “hooks” to take action where firms’ use of AI systems results in poor customer outcomes. Most importantly, delivering good customer outcomes should be central to insurers building out their AI capabilities, which need to be underpinned by appropriate controls, monitoring, governance and risk management to identify and mitigate the risk of customer harm.

In this article, we highlight how UK GI firms can look at their AI systems through the lens of the Duty. In particular, we draw on two key use cases of AI/ML by insurers - pricing and claims management - to explore possible challenges and actions for firms in light of their responsibilities under the Duty.


The current UK approach to AI regulation, and how it relates to the Consumer Duty
 

Although the FCA does not specify in its Duty guidance exactly how insurers should think about their use of AI in the context of the Duty, all the Duty’s cross-cutting rules and outcomes apply. For example, insurers need to act in good faith by designing products and services that offer fair value and meet customers’ needs in their target markets. For insurers that use AI/ML in underwriting and pricing, this could mean considering whether algorithms can amplify or embed bias, and whether any foreseeable harm could be avoided. Similarly, the Duty requires firms to put themselves in their customers’ shoes when considering whether their communications provide customers with the right information at the right time. Here, insurers using AI when interacting with customers need to make sure that the information is still tailored to customers’ needs and helps them achieve their financial goals, even if this is done via a chatbot.

To start considering AI/ML in the context of the Duty, insurers should review the UK Government’s policy paper, which outlines key principles to guide and inform the responsible development and use of AI. These include safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. These principles summarise the key risks of AI/ML for firms and are consistent with the Duty in many ways. Insurers that want to progress with their AI pilots ahead of a formal UK regulatory approach should reflect on how these principles, as well as the Duty, apply to their use cases.

When it comes to accountability and governance, for example, a key requirement of the Duty is that insurers’ governance (e.g., controls and key processes) should be set up to enable the identification of poor customer outcomes. Insurers need to be able to demonstrate how their business models, culture and actions remain focused on achieving good customer outcomes. These considerations should underpin a firm’s AI strategy and are also key ingredients in evidencing full compliance with the Duty. Similarly, Boards have to sign off an annual report assessing whether their organisation is delivering outcomes consistent with the Duty. Having an awareness of, and the ability to challenge, risks to customer outcomes posed by AI systems will be key in this regard.

The UK Government also recently published its response to its AI White Paper consultation, setting out additional guidance3 and key questions that regulators should consider when implementing its AI principles (see our comprehensive summary here). Some of the questions the Government poses to regulators will also be relevant to firms in the context of Duty compliance, including for example:

  • How would you describe a fair outcome of AI use […]? How can you clearly communicate this description? What evidence or information is required to assess whether AI systems are being used fairly […]?
  • What is an appropriate level of transparency and explainability for different AI systems […] given potential security risks identified? Could certain forms of transparency exacerbate security risks?
  • How might evaluations of fairness change in the context of AI technologies compared to when decisions are made by humans, or non-AI software?

Below we take a closer look at two insurance-specific use cases and how insurers might want to think about them in the context of the Duty. We close with a list of key actions for insurers on their journey to make the most of AI/ML.


Insurance-specific AI use cases
 

A. AI in pricing and underwriting – more accurate pricing and risk monitoring
 

Several UK GI firms either currently use or intend to use ML tools to enhance the speed and accuracy of underwriting processes, including pricing. For example:

  • insurers use ML models to determine technical prices for policies by processing and correlating large volumes of data, improving the precision of customers’ risk profiles and enabling more granular risk assessments;
  • ML may also be used to optimise rates. This includes the use of demand and retention models where ML methods aid in the exploration of data points; and
  • ML telematics algorithms used in the motor sector allow for more granular pricing based on data shared by customers.

UK GI firms have been under the regulatory spotlight in recent years regarding the fairness and transparency of their pricing practices. We expect insurers’ increasing use of opaque ML techniques for pricing purposes to amplify existing regulatory concern in this area, particularly following the introduction of the Duty. The FCA will be particularly wary of any potential exclusion of some customer cohorts as a result of more granular premium pricing. In the Duty guidance, the FCA specifically cites using algorithms within products or services in ways that could lead to consumer harm as an example of not acting in good faith4. This might apply where algorithms embed or amplify bias, leading to outcomes that are systematically worse for certain customers – unless the differences in outcome can be justified. As pricing is a core use case for AI in general insurance, and price and value is one of the Duty’s key outcomes, it is crucial that GI firms can demonstrate fairness across groups of customers when using AI applications.

Key challenges:

1. Explaining how AI/ML pricing models avoid poor consumer outcomes: regulators are concerned about AI/ML models introducing or reinforcing bias in modelling, which could lead to unfair pricing. For example, poor-quality AI/ML training datasets, or a lack of controls to prevent model drift in unprecedented situations, could lead to irrelevant, inaccurate or biased model outputs, causing potential discrimination and customer harm. The Duty is very clear in its expectation that firms need to ensure their products provide fair value to all groups of customers, and that behavioural biases should not be exploited through pricing.

To mitigate this risk, firms need to have strong data and pricing governance frameworks in place. This includes reinforcing controls, monitoring and MI around models’ data inputs and outputs to ensure customers with protected or vulnerability characteristics are not discriminated against. Firms will need to be able to demonstrate that their fairness assessment is adapted to the product sold and the intended target market (the ICO’s work on dataset, design and outcome fairness can provide a helpful starting point for firms to develop their own fairness explanations).
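
To make these controls concrete, the sketch below illustrates two of the monitoring ideas discussed above for a tabular pricing model: detecting input drift with the Population Stability Index (PSI) and tracking average quoted premiums across customer cohorts. The feature names, thresholds and synthetic data are illustrative assumptions only, not a prescribed implementation.

```python
# A minimal sketch, assuming a tabular pricing model: (1) input drift
# detection via the Population Stability Index (PSI) and (2) a simple
# premium-disparity check across customer cohorts. Feature names,
# thresholds and data are illustrative only.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time feature sample (`expected`) and a live
    sample (`actual`). Rule of thumb: PSI > 0.2 suggests material drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid division by, or log of, zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def premium_disparity(premiums: np.ndarray, groups: np.ndarray) -> dict:
    """Mean quoted premium per cohort, as first-pass fairness MI. Material
    gaps between cohorts with similar risk profiles need investigating."""
    return {g: float(premiums[groups == g].mean()) for g in np.unique(groups)}

# Illustrative usage with synthetic data
rng = np.random.default_rng(0)
train_ages = rng.normal(45, 12, 10_000)  # feature distribution at training time
live_ages = rng.normal(52, 12, 10_000)   # same feature in production: shifted
print(f"PSI for 'age': {psi(train_ages, live_ages):.3f}")  # > 0.2 flags drift

quotes = rng.normal(500, 60, 1_000)
cohort = rng.choice(["A", "B"], size=1_000)
print(premium_disparity(quotes, cohort))
```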

The Duty also emphasises the need for firms to safeguard consumers’ data privacy.5 UK regulators may review how firms and their third party (TP) providers collect, manage, and use customer data in their AI systems. GI firms will be required to have sufficiently detailed documentation of the data used by AI/ML models to prevent data protection breaches and support model explainability.

2. AI expertise: GI firms also need to invest in enhancing AI expertise to be able to, where relevant, develop, maintain and challenge any new AI-driven pricing models in line with the Duty. While this is true across many insurance functions, the Financial Reporting Council and the Government Actuary’s Department recently highlighted a lack of the technical skills needed to handle advanced AI/ML techniques, especially in actuarial functions. This can lead to overreliance on a small number of key individuals and to key person risk. Where firms deploy AI/ML in the pricing process, they need to provide actuaries with the appropriate training and tools to guard against possible customer harm caused by the models. This should also extend to the independent risk, compliance and internal audit functions, which will play a key role in providing assurance that pricing processes and policies are fit for purpose (especially where insurers build their own models). Only with the appropriate expertise will GI firms be able to demonstrate that their AI systems comply with the Duty.

B. AI in claims management: are your systems smart enough to help your customers?

AI is already widely used by GI firms in claims management as it increases speed, reduces cost and can improve the customer experience. Common use cases of AI in claims handling processes include:

  • enhancing customer support with chatbots helping customers to file a claim;
  • claims triage, including reading reports and identifying, sorting and prioritising complex issues by urgency;
  • claims adjudication, with telematics or AI object recognition technologies providing estimated pay-outs; and
  • fraud management, for example where an AI system scans social media to determine the location of the policyholder at the time of the loss event.

The FCA expects firms to remove unnecessary barriers in the claims management process to ease the consumer journey and provide fair value – whether claims are managed by AI or by humans. The FCA will pay particular attention to claims settlement times, as pointed out in its warning and 2023 portfolio letter to GI firms (targeting the health and motor sectors specifically). AI represents a promising way to improve claims settlement timelines but can also contribute to poor customer outcomes. The FCA will, for example, expect firms to ensure that the use of AI does not make claims processes more burdensome or complex for customers. Under the Duty, a complex claims process that deters customers from pursuing claims is an example of poor practice. Firms therefore need to ensure that increased settlement efficiency from AI is not achieved at the expense of deteriorating outcomes for certain customers.

Key challenges:

1. Humans in the loop:6 where firms use automated systems (e.g., chatbots), the FCA stresses that firms should provide appropriate options to help customers. For example, a GI firm providing an online chatbot to support claims without any access to a real customer agent could produce poor outcomes, especially for vulnerable customers who might not be able to navigate the chatbot easily. Firms should test their customer process maps, distinguishing journeys that can be safely managed through high degrees of automation from those that require human contact. Firms also need a process through which customers can complain and challenge the outcomes they receive.
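
As a simple illustration of such a process map, the sketch below shows a hypothetical escalation policy for a claims chatbot. The signals, thresholds and routing rules are assumptions for illustration only; the point is that any single trigger hands the customer to a human rather than keeping them contained in the bot.

```python
# A minimal sketch of an escalation policy for a claims chatbot, assuming
# the conversation state exposes a few simple signals. Signal names,
# thresholds and routing rules are hypothetical, not a prescribed design.
from dataclasses import dataclass

@dataclass
class ChatState:
    failed_intents: int           # times the bot could not parse the request
    vulnerability_flag: bool      # customer shows characteristics of vulnerability
    claim_complexity: str         # "simple" | "complex", from triage
    customer_requested_human: bool

def route(state: ChatState) -> str:
    """Decide whether the journey can stay automated or must be handed to a
    human agent. Any single trigger escalates: the design errs on the side
    of human contact rather than containment."""
    if state.customer_requested_human:
        return "human_agent"      # never trap customers in the bot
    if state.vulnerability_flag:
        return "human_agent"      # flexible support under the Duty
    if state.claim_complexity == "complex" or state.failed_intents >= 2:
        return "human_agent"
    return "automated_flow"

# Example: a simple claim with one misunderstood message stays automated
print(route(ChatState(1, False, "simple", False)))  # -> automated_flow
print(route(ChatState(0, True, "simple", False)))   # -> human_agent
```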

In the claims management back office, the PRA and the FCA have also warned about “automation bias”, i.e. where humans confirm decisions made by AI systems without providing appropriate challenge. This is especially relevant where AI systems are used in claims triage and adjudication. To tackle this, firms could involve dedicated, experienced case officers for sensitive or complex cases. Humans in the loop should have an active role in overseeing the model and its output, and their ability to understand and challenge the model should be tested as part of the governance process and continuously improved through appropriate training.

2. Identifying vulnerable customers to prevent foreseeable harm: in its Financial Lives survey, the FCA found that 77% of people surveyed felt the burden of keeping up with domestic bills and credit commitments had increased between July 2022 and January 2023. Moreover, cost-of-living pressures led 13% of insurance policyholders to cancel or reduce their policy cover from mid-2022. Insurers should build adequate processes to identify vulnerable customers and adjust chatbot suggestions accordingly; this could include changes to the information provided and adjustments to claims settlement times. Delayed settlements or unexpected premium increases caused by poorly monitored AI systems, or a lack of second line oversight in the claims management chain, can have a disproportionate impact on vulnerable customers, leading to further financial difficulty. Under the Duty, firms are expected to respond flexibly to the needs of customers with characteristics of vulnerability, whether by providing support through different channels or by adapting their usual approach.

Actions for firms – what practical steps should insurers take?
 

Table 2. Actions for firms

Review and control the datasets used as inputs for AI models

  • Carry out reviews and tests of the quality and comprehensiveness of data (see the sketch after this entry).
  • Ensure controls are put in place to limit the types of inference AI models can make based on customer data (e.g., limit inferences of data listed under GDPR Article 9(2)).
  • Consider if “federated learning”7 solutions could help increase the volume of training data and quality of the model outputs.
  • Invest in resources to improve data quality and lineage (e.g., by hiring a full-time data quality officer).
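
A minimal sketch of such a dataset review is shown below, assuming the policy data sits in a pandas DataFrame. The column names, missingness threshold and list of protected fields are illustrative assumptions.

```python
# A minimal sketch of dataset quality checks before model training:
# missingness, duplicates, and fields that should not feed the model.
# Columns and thresholds are hypothetical illustrations.
import pandas as pd

PROTECTED = {"ethnicity", "religion", "health_data"}  # e.g. GDPR Art. 9 fields

def review_dataset(df: pd.DataFrame, max_missing: float = 0.05) -> list[str]:
    """Return a list of findings: excessive missingness, duplicate rows,
    and protected columns present in the training data."""
    findings = []
    for col, rate in df.isna().mean().items():
        if rate > max_missing:
            findings.append(f"{col}: {rate:.1%} missing (limit {max_missing:.0%})")
    dupes = int(df.duplicated().sum())
    if dupes:
        findings.append(f"{dupes} duplicate rows")
    banned = PROTECTED & set(df.columns)
    if banned:
        findings.append(f"protected fields present: {sorted(banned)}")
    return findings

# Illustrative usage
df = pd.DataFrame({"age": [34, None, 51], "postcode": ["AB1", "AB1", "CD2"],
                   "ethnicity": ["x", "y", "z"]})
for finding in review_dataset(df):
    print(finding)
```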

Review contractual relationships and information exchange flows with TPs in light of the Duty
  • Enhance information workflows with TP providers to ensure data of sufficient quality is used for training.
  • Define relevant metrics to monitor that the model is used according to its intended purpose.
  • Consider reviewing existing contractual arrangements with TPs to reflect the points above.

Due diligence over third-party AI providers
  • Ensure the TP model was trained on data that is representative of the firm’s customer base, where relevant.
  • Agree information-sharing arrangements, e.g. notification of material changes to the model that might impact customer outcomes.
  • Establish the division of responsibility for controls and testing between the insurance firm and the TP AI provider.

Enhance governance arrangements and data quality and lineage processes
  • Set governance responsibilities around the use of AI and data quality oversight, particularly from a Duty perspective.
  • Ensure the appropriate allocation of responsibilities for the use of AI across senior managers and Board members, and consider how this is captured in statements of responsibility where applicable.8
  • Carry out a Data Protection Impact Assessment to identify risks of data misuse and possible privacy breaches.

Model testing and assurance

  • Develop tests to assess the reliability and functionality of AI models under business-as-usual conditions and under stress (see the sketch below).
  • Develop specific audit trails for the AI systems in use.
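
As an illustration, the sketch below stress-tests a pricing model by shocking each numeric input and checking that quoted premiums do not move implausibly. The scikit-learn-style predict interface, shock sizes and tolerance are assumptions for illustration, not a prescribed test suite.

```python
# A minimal stress-test sketch for a pricing model, assuming a
# scikit-learn-style `model.predict` interface. Shock sizes and the
# tolerance are illustrative; real tests would reflect risk appetite.
import numpy as np

def stress_test(model, X: np.ndarray, shock: float = 0.10, tol: float = 0.25) -> bool:
    """Shock each numeric input by +/-10% and fail if any premium moves by
    more than 25%, i.e. the model is overly sensitive to small, plausible
    changes in its inputs."""
    base = model.predict(X)
    for j in range(X.shape[1]):
        for s in (1 - shock, 1 + shock):
            Xs = X.copy()
            Xs[:, j] *= s
            if np.abs(model.predict(Xs) / base - 1).max() > tol:
                return False
    return True

# Illustrative usage with a trivial stand-in model
class FlatModel:
    def predict(self, X):  # premium = 300 + sum of features
        return 300 + X.sum(axis=1)

X = np.random.default_rng(2).uniform(1, 10, size=(100, 3))
print("stress test passed:", stress_test(FlatModel(), X))
```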

Build risk-based inventories of AI models to structure AI risk management processes and prepare for potential Model Risk Management requirements for insurers9
  • Develop a definition of AI models in relevant policies.
  • Develop an inventory of in-house and TP AI models and solutions used by the firm.
  • Develop risk scoring criteria and an assessment methodology for AI models to increase transparency in documentation and lifecycle tracking (firms can leverage EU AI Act10 principles here; see the sketch below).
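
The sketch below shows what a risk-based inventory entry might look like, with a crude additive risk score used to prioritise oversight. The fields, scoring rubric and example models are illustrative assumptions rather than a standard schema.

```python
# A minimal sketch of a risk-based AI model inventory, loosely inspired by
# EU AI Act risk tiers as the entry above suggests. All fields and the
# scoring rubric are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    owner: str               # accountable senior manager
    provider: str            # "in-house" or the TP vendor
    use_case: str            # e.g. "pricing", "claims triage"
    customer_facing: bool
    uses_personal_data: bool
    last_validated: str      # ISO date of last independent validation

    def risk_score(self) -> int:
        """Crude additive score used to prioritise oversight; firms would
        define their own criteria and weightings."""
        return (2 * self.customer_facing
                + 2 * self.uses_personal_data
                + (self.use_case in {"pricing", "claims adjudication"}))

inventory = [
    ModelRecord("motor-pricing-gbm", "Chief Pricing Officer", "in-house",
                "pricing", True, True, "2024-01-15"),
    ModelRecord("claims-doc-ocr", "Head of Claims", "VendorX",
                "claims triage", False, True, "2023-11-02"),
]
# Review the riskiest models first
for m in sorted(inventory, key=ModelRecord.risk_score, reverse=True):
    print(m.name, m.risk_score())
```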

Monitor customer outcomes and track fairness

  • Introduce MI to track the outcomes provided to customers through AI models.
  • Monitor differential outcomes between AI and non-AI systems used for pricing and claims management, to ensure that consumers with the same risk profiles receive explainable outcomes from a risk perspective (see the sketch after this entry).
  • Develop MI and controls to identify how AI models can amplify bias (e.g., discriminatory practices, unfair pricing and exclusion).
  • Define an approach to assessing fairness of AI model outputs adapted to the firm’s customer base and portfolio.
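
A minimal sketch of such differential-outcome MI follows, comparing claim settlement times between AI-handled and human-handled claims within one risk band. The data is synthetic, and the significance test and thresholds are illustrative assumptions.

```python
# A minimal sketch of differential-outcome MI: compare claim settlement
# times between AI-triaged and human-triaged claims within one risk band.
# Synthetic data; the test and thresholds are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic settlement times in days for a single risk band
ai_days = rng.gamma(shape=2.0, scale=3.0, size=400)     # AI-triaged claims
human_days = rng.gamma(shape=2.0, scale=4.0, size=400)  # human-triaged claims

# Flag a material, statistically significant gap for investigation
t_stat, p_value = stats.ttest_ind(ai_days, human_days, equal_var=False)
gap = ai_days.mean() - human_days.mean()
if p_value < 0.01 and abs(gap) > 1.0:  # thresholds are illustrative
    print(f"Investigate: {gap:+.1f} day gap between channels (p={p_value:.4f})")
```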

Ensure that the existing skillset of the firm is sufficient to deliver the AI strategy over the medium term
  • Make sure that consideration of AI-specific skillset is included in the hiring and training strategy, especially for actuarial functions.
  • Provide adequate training around AI risk to the Board to enable effective challenge.

Ensure customers have adequate support and alternative avenues to challenge the outcomes of AI models and, where necessary, to interact with humans
  • Ensure customers unable to benefit from AI chatbots (e.g., those who are not digitally active or are unwilling to share data) are not excluded and do not have to pay a disproportionate cost to reach the same outcomes as other customers.

Ensure the adequate identification of customers’ risk characteristics

  • Test the reliability of chatbots in identifying customers with characteristics of vulnerability (see the sketch below).
  • Reinforcement learning can help to increase a model’s ability to spot vulnerable customers and reduce incorrect profiling.
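
One way to test chatbot reliability here is to run a labelled set of transcripts through the vulnerability classifier and track recall as MI, since missed vulnerable customers are the costly error. In the sketch below, the keyword stub is a hypothetical stand-in for a firm’s real classifier; the transcripts and cues are invented for illustration.

```python
# A minimal sketch: evaluate a (hypothetical) vulnerability classifier on
# labelled transcripts and report recall. The keyword stub stands in for a
# firm's real model; transcripts and cues are invented for illustration.
def detect_vulnerability(transcript: str) -> bool:
    """Placeholder keyword classifier; a real system would be far richer."""
    cues = ("carer", "bereavement", "can't afford", "disability")
    return any(c in transcript.lower() for c in cues)

# Labelled test cases: (transcript, is_vulnerable)
cases = [
    ("I am a full-time carer and missed the renewal email", True),
    ("Following a bereavement I need to change the policyholder", True),
    ("My hands shake and I find the website very hard to use", True),
    ("Please update my address to 12 High Street", False),
]
true_positives = sum(detect_vulnerability(t) for t, label in cases if label)
recall = true_positives / sum(1 for _, label in cases if label)
print(f"Recall on vulnerable cases: {recall:.0%}")  # third case is missed
```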

Consider using the FCA Digital Sandbox
  • Test AI systems and products in a safe environment.

Keep scanning the regulatory horizon
  • Ensure oversight of AI regulations and evolving supervisory expectations, both in the UK and globally.


Conclusion
 

AI-related technological breakthroughs present a great opportunity for insurers – they could significantly improve operational efficiency and reduce cost. But AI systems need to be underpinned by appropriate controls to mitigate the risk of harm to customers. Now is the right time to develop strong safeguards around the use of AI, both to ensure the delivery of good customer outcomes under the Duty and to anticipate future UK supervisory approaches to AI. Placing good customer outcomes at the heart of the AI strategy will enable firms to obtain a competitive advantage, gain the trust of customers and regulators, and mitigate the risk of setbacks. Firms that implement these safeguards will be well positioned to use their AI/ML systems to demonstrate compliance with the Duty and to monitor customer outcomes more effectively.

___________________________________________________________

References:
 

1 For the purpose of this insight, we will use the PRA/FCA definition of AI in DP5/22: “AI is the simulation of human intelligence by machines, including the use of computer systems, which have the ability to perform tasks that demonstrate learning, decision-making, problem solving, and other tasks which previously required human intelligence”.

2 “ML is a subfield within AI, […]” which “refers to a set of statistical algorithms applied to data for making predictions or identifying patterns in data”; it is “a methodology whereby computer programmes build a model to fit a set of data that can be utilised to make predictions, recommendations or decisions without being programmed explicitly to do so” - Bank of England, PRA, and FCA, “Discussion Paper 5/22: Artificial Intelligence and Machine Learning”, 2022, link; and “Machine Learning in UK financial services”, 2022, link

3 Department for Science, Innovation and Technology, Implementing the UK’s AI Regulatory Principles: Guidance for Regulators, 2024, available at: https://assets.publishing.service.gov.uk/media/65c0b6bd63a23d0013c821a0/implementing_the_uk_ai_regulatory_principles_guidance_for_regulators.pdf.

4 FG22/5 Final non-Handbook Guidance for firms on the Consumer Duty, paragraph 5.12, 2022, available at: https://www.fca.org.uk/publication/finalised-guidance/fg22-5.pdf

5 It notably refers to the ICO’s guidance to ensure sound data use in the context of AI.

6 “The measures in place to ensure a degree of human intervention/involvement with a model before a final decision is made” as per Bank of England, PRA, and FCA, “Discussion Paper 5/22: Artificial Intelligence and Machine Learning”, 2022, link

7 “Decentralized Machine Learning framework that can train a model without direct access to users’ private data” as per Deloitte, Federated Learning and Decentralized Data, 2022, link

8 Especially as discussions are ongoing on the relevance of introducing an SMF responsible for AI as part of the SMCR review.

9 In the Policy Statement 6/23 on MRM principles for banks, feedback regarding the applicability of the MRM principles to AI/ML models indicated that both firms and the PRA were aligned on the mutual benefits of the proposed principles and their applicability to AI/ML models.

10 Provisional agreement on the AI Act: EU Council, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending certain Union legislative acts, February 2024, available at: https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf