Artificial intelligence is becoming part of everyday life in Switzerland. But when it comes to insurance, acceptance is conditional. Our survey of 1,291 policyholders reveals the path forward: customers will embrace AI when it explains rather than decides, when humans remain accountable, and when decisions are transparent. These aren't obstacles to AI adoption. They're the conditions that make it legitimate.
Artificial intelligence has moved well beyond the early-adopter phase. In Deloitte's survey, 71% of respondents say they already use AI at least monthly, and around 85% believe insurers in Switzerland are already using it. Yet familiarity does not translate into automatic approval. Only 18% express a positive view of AI in insurance, while 36% remain critical and many others are neutral. This gap between everyday familiarity and approval matters: widespread use has not yet produced widespread trust.
The survey also suggests that the industry's AI opportunity is not primarily about novelty. Customer interaction with insurers still happens largely through established channels such as email and phone. The strategic task is therefore not simply to add more technology. It is to improve the journeys customers already use, and to do so in a way that is understandable and clearly beneficial.
The strongest support is found where AI acts as an assistant. Around 62% accept the use of AI to translate policy wording into simpler language and to illustrate what is covered and what is not. Around 58% support AI-generated tips to help prevent losses. Around 57% accept the use of AI to identify suspicious patterns in claims and flag cases for deeper review.
The pattern is clear. Customers are receptive when AI explains, guides, supports or highlights. They are far more comfortable with AI that reduces complexity than with AI that determines outcomes. For insurers, this is an important leadership signal. The fastest route to credible AI adoption is to focus first on use cases that improve comprehension, service quality and prevention.
[Infographic: Customer Expectations & Transparency Demands — around 85% assume insurers already use AI, 86% demand human decision-making, around 85% want transparency about AI use]
The picture changes once AI moves from support to decision-making. A clear majority (61%) reject the idea that AI should decide on the acceptance or rejection of an application. Around 41% reject AI-led risk classification and premium setting. Around 37% reject the use of driving data to influence premiums or discounts.
This should not be misread as a wholesale rejection of automation. It is a rejection of opaque, high-stakes automation without visible human accountability. More than half of respondents, 51%, believe that AI in insurance can increase the risk of unfair decisions. And 86% say that important decisions should ultimately be made by a human. For underwriting, pricing and claims, human-in-the-loop is therefore not just a governance preference. It is a condition for legitimacy in the eyes of customers.
Customers do not only want AI to be well governed in the background. They want to experience that governance directly. Around 85% want to be informed when AI is being used. Three conditions stand out as especially important for broader acceptance: human review in critical cases, supported by 73% of respondents; transparent disclosure of where AI is used, supported by 67%; and simple objection or appeal processes, supported by 59%.
This has direct implications for executive teams and boards. AI governance cannot remain an internal control framework that customers never see. It needs a customer-facing expression: clear disclosure, understandable explanations, identifiable responsibility and a credible route to challenge outcomes where the stakes are high.
The survey shows a pragmatic willingness to share data - but only when the value exchange is obvious. Around 56% of respondents would share photos or receipts in the context of a claims notification. By contrast, willingness falls materially when the data becomes more personal or when its future use is less clear, such as health or driving data.
Customers are not rejecting data-driven insurance as a matter of principle. They are asking for proportionality, clarity of purpose and a direct customer benefit. Insurers should therefore frame data propositions around tangible value creation, not around technical possibility. The more consequential or sensitive the data use, the stronger the case for transparency and restraint.
Perhaps the most important strategic finding is that trust in the responsible use of AI remains limited across the insurance landscape. Only 17% of respondents express trust in insurers in this context, 14% in supervisory authorities and 13% in technology providers. No stakeholder can assume that trust will be delegated to them automatically.
For insurers, this creates both a challenge and an opportunity. Trust cannot be imported from regulation or outsourced to technology partners. It must be earned through consistent choices in design, communication, governance and customer treatment. In this sense, AI in insurance is not only a technology agenda. It is a leadership agenda.
For boards and executive teams of Swiss insurers, the implications are clear. AI should first be scaled where it explains, supports and prevents - before it is asked to decide. Human accountability should remain explicit in underwriting, pricing and claims. Transparency should be built into the customer journey, not added as an afterthought. Personal data should be used only where purpose and customer benefit are clear. And trust should be treated as a strategic asset with visible executive ownership.
The survey is not a warning against AI. It is a clear signal about the conditions under which AI will be accepted in Swiss insurance. Customers see the opportunity. But they also expect transparency, fairness and control. Insurers that take these expectations seriously can do more than deploy AI responsibly. They can strengthen trust, improve relevance and turn AI into a source of lasting competitive advantage.
Deloitte survey "Insurance and AI 2026", Switzerland, n=1,291 insured individuals aged 18-79.
We are grateful to Finn Wagner, Norbert Grimm and Rebecca Roj for their valuable inputs to this report.