Artificial intelligence: Swiss policyholders expect transparency and monitoring, but also see opportunities

Zurich, 1 April 2026

Artificial intelligence (AI) is already an established part of daily life in Switzerland. A look at the use of AI in the insurance sector reveals a nuanced picture: more than a third of policyholders take a cautious view of AI, while just under a fifth see AI as an opportunity for better services. The latest Deloitte survey shows that transparency around the use of AI, as well as human monitoring, are particularly important for broad acceptance.

The survey conducted by consultancy firm Deloitte among 1,291 policyholders across Switzerland at the beginning of 2026 shows that, while 71 per cent of the population use AI at least every month, there is a more cautious attitude to the use of AI in the insurance industry. 36 per cent express a critical view, while 18 per cent are positive about the use of AI.

“The survey reveals a significant difference,” says Marcel Thom, Insurance Lead at Deloitte Switzerland. “AI has arrived in many people’s daily lives. However, attitudes are more varied in the insurance sector, where customers place particular emphasis on fairness, transparency and human monitoring. This sends a clear signal to the industry as to how trust in the use of AI can be increased.”

Human monitoring remains a key factor

The study highlights a notable contrast: only 14 per cent of respondents use AI on a daily basis, yet 85 per cent assume that insurance firms are already using AI. At the same time, just over half (51 per cent) of respondents fear that AI may lead to unfair decisions. This is less a sign of any fundamental rejection of AI than an indication of legitimate concerns about how potentially flawed decisions by automated systems are handled. This scepticism is particularly apparent with regard to automated decisions: 61 per cent of respondents reject solely AI-based decision-making on the acceptance or rejection of an application.

At the same time, the survey shows that trust in various stakeholders when it comes to AI remains modest: 14 per cent have confidence in the supervisory authorities’ ability to monitor AI effectively, while 13 per cent trust the technology providers and 17 per cent the insurance firms. Furthermore, they have clear expectations: 86 per cent believe that important decisions should be made by people, while 85 per cent want to be informed transparently when AI is used.

Supportive AI is rated much more positively than fully automated decisions

Acceptance is highly dependent on the application. Supportive AI is rated much more positively than decision-making AI (see below). “Our work with insurance firms has taught us that trusted AI needs clear rules and responsibilities. And the survey shows that customers expect transparency and monitoring. Specifically, this means that companies must design their AI systems in such a way that policyholders are informed transparently, humans make the important decisions, and regular checks are performed,” says Madan Sathe, AI Lead for Insurance and Partner Financial Crime & Forensics.

High degree of acceptance (supportive AI):

  • 62 per cent accept AI translating a policy into simple language
  • 58 per cent accept AI tips on loss prevention
  • 57 per cent accept the use of AI to support the detection of insurance fraud

Low degree of acceptance (decision-making AI):

  • 61 per cent reject AI making decisions on acceptance/rejection
  • 41 per cent reject AI determining risk classification and premiums
  • 37 per cent reject AI using driving data to structure premiums

Consent for the use of data partly depends on the purpose

Customers are willing to share data, but only if the benefit of this data disclosure is clear. 56 per cent would share photos and supporting documentation when making claims. However, the level of willingness falls sharply in more sensitive areas such as health or driving data. “The survey also reveals that, for many customers, there is not always enough transparency as to what their data will be used for,” says Thom.

Three factors are crucial to acceptance of AI among policyholders: 73 per cent are keen to see human checking in critical cases, 67 per cent want transparent disclosure of the use of AI, and 59 per cent would like simple objection processes.

Clear ways forward for responsible use of AI

The study clearly indicates that transparency in relation to AI use is an essential ingredient for its acceptance and for trust in AI. The insurance industry should disclose where AI is used, for what purposes and how it ensures fairness. Policyholders continue to regard human monitoring as crucial when important decisions are made. At the same time, fairness standards must be defined and adhered to. Collaboration between insurance firms, regulators and technology providers is key to developing standards for the responsible use of AI.

“The study is not a wake-up call against AI, but a useful guide to its responsible use,” adds Thom. “Customers do not reject AI outright. The crucial factor is that AI is used transparently, clearly and responsibly. Insurance firms that take these expectations seriously can boost trust among policyholders and advance the use of AI in a targeted way.”

About the study

The study “Deloitte insurance firms and AI 2026” was conducted in February and March 2026. 1,291 Swiss policyholders aged 18 to 79 took part in the survey. The sample is representative of German-speaking Switzerland (71 per cent), French-speaking Switzerland (25 per cent) and Italian-speaking Switzerland (4 per cent). The study explores AI use in everyday life, attitudes to AI at insurance firms, the acceptance of specific applications, trust in various stakeholders, and the conditions for acceptance of AI.