Who this blog is for: Board members and senior executives of UK GI firms who work on technology, AI, regulatory affairs, risk, and compliance.
The increasing availability of AI models provided by third parties has accelerated insurers’ adoption of the technology. Many firms are now using, or experimenting with, AI1 - especially its Machine Learning (ML) subset2 - for pricing, customer support and claims management. However, without the necessary safeguards in place, the use of AI can lead to poor customer outcomes, and supervisors globally are therefore developing their expectations around AI-related risks.
The UK’s regulatory expectations around AI are also unfolding. UK regulators will follow a risk-based, context-specific and proportionate approach to regulating AI, and have been asked to publish their approach to AI supervision by April 2024; we also expect them to provide detailed guidance in early 2025. In the short to medium term, however, the UK Government’s AI strategy will rely on existing regulatory frameworks. The FCA’s Consumer Duty (the Duty) is a case in point: in the absence of a formal regulatory approach to AI, it provides the FCA with “hooks” to take action where firms’ use of AI systems results in poor customer outcomes. Most importantly, delivering good customer outcomes should be central to insurers building out their AI capabilities, and those capabilities need to be underpinned by appropriate controls, monitoring, governance and risk management to identify and mitigate the risk of customer harm.
In this article, we highlight how UK GI firms can look at their AI systems through the lens of the Duty. In particular, we rely on two key use cases of AI/ML by insurers to explore possible challenges and actions for firms in light of their responsibilities under the Duty: pricing and claims management.
Although the FCA does not specify in its Duty guidance exactly how insurers should think about their use of AI in the context of the Duty, all the Duty’s cross-cutting rules and outcomes apply. For example, insurers need to act in good faith by designing products and services that offer fair value and meet customers’ needs in the target markets. For insurers which use AI/ML in underwriting and pricing, this could mean thinking about whether algorithms can amplify or embed bias, and whether any foreseeable harm could be avoided. Similarly, the Duty requires firms to put themselves in their customers’ shoes when considering whether their communications provide customers with the right information, at the right time. Here, insurers using AI when interacting with customers need to make sure that the information is still tailored to their needs and helps them achieve their financial goals, even if this is done via a chatbot.
To start considering AI/ML in the context of the Duty, insurers should review the UK Government’s policy paper, which outlines some key principles to guide and inform the responsible development and use of AI. These include safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, as well as contestability and redress. While these principles summarise the key risks of AI/ML for firms, they are consistent with the Duty in many ways. Insurers that want to progress with their AI pilots ahead of a formal UK regulatory approach should reflect on how these principles apply to their use cases, as well as the Duty.
When it comes to accountability and governance, for example, a key requirement of the Duty is that insurers’ governance (e.g., controls and key processes) should be set up to enable the identification of poor customer outcomes. Insurers need to be able to demonstrate how their business models, culture and actions remain focused on achieving good customer outcomes. These considerations should underpin a firm’s AI strategy and are also key ingredients in evidencing full compliance with the Duty. Similarly, Boards have to review and approve, at least annually, a report assessing whether their firm is delivering good outcomes consistent with the Duty. An awareness of, and the ability to challenge, the risks that AI systems pose to customer outcomes will be key in this regard.
The UK Government also recently published its response to its White Paper and set out additional guidance3 and key questions that regulators should consider when implementing its AI principles (see our comprehensive summary here). Some of the questions the Government poses to regulators will also be relevant to firms in the context of Duty compliance.
Below we take a closer look at two insurance-specific use cases and how insurers might want to think about them in the context of the Duty. We close with a list of key actions for insurers in their journeys to make the most of AI/ML.
Several UK GI firms either currently use or intend to use ML tools to enhance the speed and accuracy of underwriting processes, including pricing.
UK GI firms have been under the regulatory spotlight in recent years regarding the fairness and transparency of their pricing practices. We expect insurers’ increasing use of opaque ML techniques for pricing to amplify existing regulatory concern in this area, particularly following the introduction of the Duty. The FCA will be particularly alert to the potential exclusion of some customer cohorts as a result of more granular premium pricing. In the Duty guidance, the FCA specifically cites the use of algorithms within products or services in ways that could lead to consumer harm as an example of not acting in good faith4. This might apply where algorithms embed or amplify bias, leading to outcomes that are systematically worse for certain customers, unless the differences in outcome can be justified. As pricing is a key AI use case in general insurance, and price and value is one of the Duty’s four outcomes, it is crucial that GI firms can demonstrate fairness across groups of customers when using AI applications.
Key challenges:
1. Explaining how AI/ML pricing models do not lead to poor consumer outcomes: regulators are concerned about AI/ML models introducing or reinforcing bias in modelling, which could lead to unfair pricing. For example, poor-quality training datasets, or a lack of controls to prevent model drift in unprecedented situations, could lead to irrelevant, inaccurate or biased model outputs, causing potential discrimination and customer harm. The Duty is clear in its expectation that firms must ensure their products provide fair value to all groups of customers, and that behavioural biases should not be exploited through pricing.
To mitigate this risk, firms need strong data and pricing governance frameworks. This includes reinforcing controls, monitoring and MI around models’ data inputs and outputs to ensure customers with protected or vulnerability characteristics are not discriminated against. Firms will also need to demonstrate that their fairness assessment is appropriate to the product sold and the intended target market (the ICO’s work on dataset, design and outcome fairness can provide a helpful starting point for firms to develop their own fairness explanations).
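As a concrete illustration, the MI described above could include simple drift and cohort-fairness checks on pricing model outputs. The sketch below is a minimal example, assuming hypothetical column names, thresholds and alerting hooks; real monitoring would be calibrated to the firm’s book and embedded in its governance framework.

```python
# Illustrative sketch only: monitoring a pricing model's outputs for drift
# and cohort-level fairness. Column names and thresholds are hypothetical.
import numpy as np
import pandas as pd

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference (training-time) and a live score distribution."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf          # catch values outside the reference range
    e_frac = np.clip(np.histogram(expected, cuts)[0] / len(expected), 1e-6, None)
    a_frac = np.clip(np.histogram(actual, cuts)[0] / len(actual), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

def cohort_premium_ratios(quotes: pd.DataFrame, cohort_col: str) -> pd.Series:
    """Mean quoted premium per cohort, relative to the overall mean (1.0 = parity)."""
    return quotes.groupby(cohort_col)["quoted_premium"].mean() / quotes["quoted_premium"].mean()

# Example MI run (hypothetical data and thresholds):
# psi = population_stability_index(train_scores, live_scores)
# if psi > 0.2:  # a commonly used warning level; calibrate to your own book
#     ...escalate to the model owner before the next pricing release...
# ratios = cohort_premium_ratios(live_quotes, cohort_col="age_band")
# ...flag cohorts whose relative premium departs materially from 1.0 for human review...
```

Checks like these do not prove fairness on their own, but they generate the evidence trail that allows a firm to justify (or correct) differences in outcomes between customer groups.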
The Duty also emphasises the need for firms to safeguard consumers’ data privacy.5 UK regulators may review how firms and their third party (TP) providers collect, manage, and use customer data in their AI systems. GI firms will be required to have sufficiently detailed documentation of the data used by AI/ML models to prevent data protection breaches and support model explainability.
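To make that documentation expectation tangible, a firm might keep a structured record of each dataset feeding its models. The following is a minimal sketch with hypothetical fields, not a regulatory template; the right level of detail would follow from the firm’s data protection and explainability needs.

```python
# Illustrative sketch: a minimal record of the data used by an AI/ML model,
# to support explainability and data protection reviews. Fields are hypothetical.
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    name: str
    source: str                      # internal system or third-party provider
    lawful_basis: str                # UK GDPR lawful basis for processing
    fields_used: list[str]
    contains_special_category: bool  # triggers additional data protection controls
    retention_period: str
    last_quality_review: str         # e.g. ISO date of the last data quality check

motor_claims_history = DatasetRecord(
    name="motor_claims_history_v3",
    source="internal claims platform",
    lawful_basis="legitimate interests",
    fields_used=["claim_amount", "claim_type", "vehicle_age"],
    contains_special_category=False,
    retention_period="7 years",
    last_quality_review="2024-01-15",
)
```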
2. AI expertise: GI firms also need to invest in enhancing AI expertise to be able to, where relevant, develop, maintain and challenge any new AI-driven pricing models in line with the Duty. While this is true across many insurance functions, the Financial Reporting Council and the Government Actuary’s Department recently highlighted a lack of the technical skills needed to handle advanced AI/ML techniques, especially in actuarial functions. This can lead to overreliance on a small number of individuals and create key person risk. Where firms deploy AI/ML in the pricing process, they need to provide actuaries with the appropriate training and tools to guard against possible customer harm caused by the models. This should also extend to the independent risk, compliance and internal audit functions, which will play a key role in providing assurance that pricing processes and policies are fit for purpose (especially where insurers build their own models). Only with the appropriate expertise will GI firms be able to demonstrate that their AI systems comply with the Duty.
AI is already widely used by GI firms in claims management, as it increases speed, reduces cost and can improve the customer experience. Common use cases include automated customer interaction (e.g., via chatbots) and AI-assisted claims triage and adjudication, both discussed below.
The FCA expects firms to remove unnecessary barriers in the claims management process to ease the consumer journey and provide fair value, whether claims are managed by AI or by humans. The FCA will pay particular attention to claims settlement times, as pointed out in its warning and 2023 portfolio letter to GI firms (targeting the health and motor sectors specifically). AI is a promising way to improve claims settlement timelines, but it can also contribute to poor customer outcomes. The FCA will, for example, expect firms to ensure that the use of AI does not make claims processes more burdensome or complex for customers; under the Duty, a claims process so complex that it deters customers from pursuing claims is an example of poor practice. Firms therefore need to ensure that settlement efficiency gained through AI is not achieved at the expense of deteriorating outcomes for certain customers.
Key challenges:
1. Humans in the loop:6 where firms use automated systems (e.g., chatbots), the FCA stresses that firms should provide appropriate options to help customers. For example, a GI firm offering an online claims chatbot with no access to a real customer agent could generate poor outcomes, especially for vulnerable customers who may struggle to navigate the chatbot. Firms should test customer journey maps, distinguishing between journeys that can safely be managed with high degrees of automation and those that require human contact. Firms also need a process whereby customers can complain and challenge the outcomes they receive.
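In practice, this often reduces to explicit routing rules around the chatbot. The sketch below is illustrative only; the session attributes and escalation criteria are assumptions that a firm would define and test for its own journeys.

```python
# Illustrative routing sketch: a chatbot journey that always keeps a route to a
# human agent open. Attributes and thresholds are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class ChatSession:
    failed_intents: int           # turns where the bot could not resolve the query
    customer_requested_human: bool
    vulnerability_flag: bool      # set by upstream vulnerability identification
    claim_complexity: str         # "simple" | "complex" (hypothetical labels)

def route(session: ChatSession) -> str:
    """Decide whether the journey stays automated or is handed to a human."""
    if session.customer_requested_human or session.vulnerability_flag:
        return "human_agent"      # never trap these customers in the bot
    if session.failed_intents >= 2 or session.claim_complexity == "complex":
        return "human_agent"
    return "chatbot"
```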
Regarding the claims management back office, the PRA and the FCA have also warned about “automation bias”, i.e. where humans confirm decisions made by AI systems without providing appropriate challenge. This risk is especially relevant where AI systems are used in claims triage and adjudication. To tackle it, firms could assign dedicated, experienced case officers to sensitive or complex cases. Humans in the AI loop should have an active role in overseeing the model and its outputs; their ability to understand and challenge the model should be tested as part of the governance process and continuously improved through appropriate training.
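One way to operationalise active human oversight is to gate automated triage decisions so that low-confidence or sensitive cases always reach a reviewer, with the model’s rationale recorded so it can be challenged rather than rubber-stamped. The following is a hedged sketch; the thresholds, claim types and output fields are hypothetical.

```python
# Illustrative sketch of countering automation bias in claims triage: the model
# proposes, but low-confidence or sensitive cases require active human review.
# Thresholds and the "sensitive" definition are assumptions to calibrate locally.

SENSITIVE_CLAIM_TYPES = {"bodily_injury", "total_loss", "disputed_liability"}

def triage_decision(model_label: str, model_confidence: float,
                    claim_type: str, claim_amount: float) -> dict:
    needs_review = (
        model_confidence < 0.90
        or claim_type in SENSITIVE_CLAIM_TYPES
        or claim_amount > 25_000
    )
    return {
        "proposed_outcome": model_label,
        "status": "pending_human_review" if needs_review else "auto_processed",
        # Surface the model's rationale so reviewers can challenge it,
        # a direct mitigant against automation bias.
        "review_note": f"model confidence {model_confidence:.0%}",
    }
```

A complementary control is to sample a proportion of auto-processed cases for retrospective quality assurance, so human oversight also covers the decisions the model handles alone.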
2. Identifying vulnerable customers to prevent foreseeable harm: in its Financial Lives survey, the FCA found that 77% of people surveyed felt the burden of keeping up with domestic bills and credit commitments had increased between July 2022 and January 2023, and that cost-of-living pressures had led 13% of insurance policyholders to cancel or reduce their policy cover from mid-2022. Insurers should build adequate processes to identify vulnerable customers and adjust chatbot suggestions accordingly; this could include changes to the information provided and adjustments to claims settlement times. Delayed settlements or unexpected premium increases caused by poorly monitored AI systems, and a lack of second-line oversight in the claims management chain, can have a disproportionate impact on vulnerable customers, leading to further financial difficulty. Under the Duty, firms are expected to respond flexibly to the needs of customers with characteristics of vulnerability, whether by providing support through different channels or by adapting their usual approach.
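A simple, illustrative starting point is to flag possible vulnerability against the FCA’s four drivers (health, life events, resilience and capability) and let those flags adapt the journey. The field names and adjustments below are hypothetical assumptions; real identification would draw on the firm’s vulnerability framework and the FCA’s FG21/1 guidance.

```python
# Illustrative sketch: flagging possible vulnerability using the FCA's four
# drivers and adapting the claims journey. Fields and rules are hypothetical.
def vulnerability_flags(customer: dict) -> list[str]:
    flags = []
    if customer.get("health_condition"):
        flags.append("health")
    if customer.get("recent_life_event"):          # e.g. bereavement, job loss
        flags.append("life_event")
    if customer.get("missed_premium_payments", 0) >= 2:
        flags.append("low_resilience")
    if customer.get("digital_confidence") == "low":
        flags.append("low_capability")
    return flags

def adapt_journey(flags: list[str]) -> dict:
    return {
        "offer_human_support": bool(flags),        # alternative to the chatbot
        "prioritise_settlement": "low_resilience" in flags,
        "simplified_communications": "low_capability" in flags,
    }
```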
Key actions for insurers:

- Review and control the datasets used as inputs for AI models
- Review contractual relationships and information exchange flows with TPs in light of the Duty
- Carry out due diligence over third-party AI providers
- Enhance governance arrangements and data quality and lineage processes
- Undertake model testing and assurance
- Build risk-based inventories of AI models to structure AI risk management processes and prepare for potential Model Risk Management requirements for insurers9 (see the sketch after this list)
- Monitor customer outcomes and track fairness
- Ensure that the existing skillset of the firm is sufficient to deliver the AI strategy over the medium term
- Ensure customers have adequate support and alternative avenues to challenge the outcomes of AI models and, where necessary, interact with humans
- Ensure the adequate identification of customers’ risk characteristics
- Consider using the FCA Digital Regulatory Sandbox
- Keep scanning the regulatory horizon
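On the model inventory action above, a minimal sketch of a risk-based inventory entry might look as follows. The fields and risk tiering are assumptions for illustration, not a prescribed Model Risk Management format.

```python
# Illustrative sketch, assuming hypothetical fields: a risk-based AI model
# inventory entry of the kind that could support future MRM requirements.
from dataclasses import dataclass

@dataclass
class AIModelRecord:
    model_id: str
    business_use: str             # e.g. "motor pricing", "claims triage"
    owner: str                    # accountable individual or function
    risk_tier: int                # 1 = highest customer impact; drives oversight depth
    customer_facing: bool
    last_validation: str          # date of last independent validation
    fairness_monitoring: bool     # cohort outcome monitoring in place?

inventory = [
    AIModelRecord("PRC-001", "motor pricing", "Head of Pricing", 1, True, "2024-02-01", True),
    AIModelRecord("CLM-014", "claims document OCR", "Claims Operations", 3, False, "2023-11-20", False),
]
# The risk tier then determines the intensity of validation, monitoring and sign-off.
```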
AI-related technological breakthroughs present a great opportunity for insurers: they could materially improve operational efficiency and reduce cost. But any AI system needs to be underpinned by appropriate controls to prevent harm to customers. Now is the right time to develop strong safeguards around the use of AI, both to ensure the delivery of good customer outcomes under the Duty and to anticipate future UK supervisory approaches to AI. Placing good customer outcomes at the heart of the AI strategy will enable firms to gain a competitive advantage, earn the trust of customers and regulators, and mitigate the risk of setbacks. Firms that implement these safeguards will then be well positioned to use their AI/ML systems to demonstrate compliance with the Duty and monitor customer outcomes more effectively.
___________________________________________________________
1 For the purpose of this insight, we will use the PRA/FCA definition of AI in DP5/22: “AI is the simulation of human intelligence by machines, including the use of computer systems, which have the ability to perform tasks that demonstrate learning, decision-making, problem solving, and other tasks which previously required human intelligence”.
2 “ML is a subfield within AI […]” which “refers to a set of statistical algorithms applied to data for making predictions or identifying pattern[s] in data”; it is “a methodology whereby computer programmes build a model to fit a set of data that can be utilised to make predictions, recommendations or decisions without being programmed explicitly to do so”. Bank of England, PRA and FCA, Discussion Paper 5/22: Artificial Intelligence and Machine Learning, 2022, link; and Machine Learning in UK financial services, 2022, link.
3 HM Government, Department for Science, Innovation and Technology, Implementing the UK’s AI Regulatory Principles, 2024, available at: https://assets.publishing.service.gov.uk/media/65c0b6bd63a23d0013c821a0/implementing_the_uk_ai_regulatory_principles_guidance_for_regulators.pdf
4 FG22/5 Final non-Handbook Guidance for firms on the Consumer Duty, paragraph 5.12, 2022, available at: https://www.fca.org.uk/publication/finalised-guidance/fg22-5.pdf
5 The Duty guidance notably refers to the ICO’s guidance on ensuring sound data use in the context of AI.
6 “The measures in place to ensure a degree of human intervention/involvement with a model before a final decision is made” as per Bank of England, PRA, and FCA, “Discussion Paper 5/22: Artificial Intelligence and Machine Learning”, 2022, link
7 “Decentralized Machine Learning framework that can train a model without direct access to users’ private data” as per Deloitte, Federated Learning and Decentralized Data, 2022, link
8 Especially as discussions are ongoing on the relevance of introducing an SMF responsible for AI as part of the SMCR review.
9 In Policy Statement 6/23 on MRM principles for banks, feedback on the applicability of the principles to AI/ML models indicated that firms and the PRA were aligned on the benefits of applying the proposed principles to AI/ML models.
10 Provisional agreement on the AI Act: EU Council, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending certain Union legislative acts, February 2024, available at: https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf