Artificial intelligence in Swiss health insurance: From cautious acceptance to strategic responsibility

Artificial intelligence (AI) has entered everyday life in Switzerland. For a majority of the population, AI is no longer an abstract future concept but a familiar companion – used regularly, sometimes daily, and increasingly without conscious reflection. Yet when AI enters the highly sensitive domain of healthcare, attitudes change. Expectations are greater, trust becomes conditional, and the margin for error narrows significantly.

A recent Deloitte survey of more than 1,000 Swiss respondents offers a differentiated picture of how AI is perceived in healthcare, in particular in the context of health insurers. The results are neither euphoric nor dismissive. Instead, they point to a critical transition phase: AI is broadly accepted, but only under clearly defined conditions.

For boards and executive teams of Swiss health insurers, this situation creates both an opportunity and a responsibility.

AI is widely used, yet its impact in healthcare is still perceived as limited

Three out of four respondents to the survey already use AI at least monthly, and almost half use it weekly or even daily. Despite this high level of familiarity, only a small minority perceive AI as having a strong impact on the Swiss healthcare system. More than half of the population believes that AI currently plays little or no meaningful role in healthcare delivery.

This gap between widespread general use and limited perceived impact in healthcare is important. It signals that widespread adoption alone, by providers or by users, does not translate into visible value. From a strategic perspective, this means that experimentation is no longer sufficient. What matters now is relevance – tangible, comprehensible, and measurable impact.

Acceptance depends on where AI is applied

Attitudes towards AI in healthcare differ markedly depending on the application.

Administrative and operational applications enjoy overwhelming support. Using AI for faster processing of benefit claims, fraud detection, and customer service automation is widely perceived as sensible and desirable. In these areas, AI is seen as a tool that improves efficiency without threatening human judgment.

The picture changes when AI moves closer to medical or individual decision-making. Applications such as personalised recommendations, automated eligibility decisions, or algorithmic assessments of care pathways are viewed far more critically. Here, neutrality and scepticism are the dominant attitudes towards the use of AI.

The message is unambiguous: AI is welcome as an assistant – but not as an autonomous decision-maker.

Human oversight remains non-negotiable

More than 70% of respondents to the survey explicitly state that AI in healthcare is acceptable only if humans retain final responsibility. An overwhelming majority of people disapprove of fully automated decisions without human involvement.

This expectation has profound governance implications. It means that “human-in-the-loop” models are not merely best practice – they are a societal requirement. Boards and executive teams must therefore ensure that AI governance frameworks clearly define accountability, escalation mechanisms, and decision rights. Moreover, companies that master the interaction between AI and humans have a clear advantage.

In healthcare, legitimacy is as important as efficiency.

Trust is unevenly distributed across actors

Trust in AI-enabled healthcare varies significantly depending on who deploys it.

Medical professionals and hospitals enjoy the highest level of trust. Health insurers, in contrast, face a trust deficit, and large technology companies are viewed with the greatest scepticism of all. Concerns about data misuse, commercial exploitation, and opaque algorithms are widespread.

For health insurers, this trust gap is particularly relevant. As major data holders and system orchestrators, insurers are expected to act responsibly, transparently, and in the broader interest of the healthcare system – and not merely as cost managers.

Trust, once lost, is difficult to regain. Trust, once earned, becomes a strategic asset. 

Cost reduction dominates expectations, but creates narrative risk

When individuals are asked what they expect from AI in health insurance, the most frequent answer is clear: lower system costs. Faster processes and operational efficiency follow closely behind. Improvements in patient experience or care quality, while recognised as valuable, are of secondary concern.

This presents a narrative dilemma. While cost efficiency is undoubtedly important, an overly narrow efficiency-focused AI narrative risks reinforcing existing scepticism. If AI is perceived primarily as a tool to reduce benefits or tighten controls, acceptance may erode rather than grow.

Successful AI strategies will therefore need a balanced value proposition, combining efficiency with quality, fairness, and service improvement.

Transparency and choice are decisive acceptance factors

Two conditions stand out as being almost universally supported by the respondents:

First, people want to be clearly informed when AI is used. Second, they want the ability to decide whether and how their data is used.

These expectations go beyond regulatory compliance. They reflect a broader societal demand for autonomy and respect. Transparency is not a communication exercise: it is a structural requirement. Opt-in mechanisms, explainable models, and clear communication will increasingly differentiate credible AI adopters from the rest.

AI in healthcare is a societal issue, not just a technological one

Perhaps the most important insight from the survey is that AI in healthcare is not seen as a purely technical matter. Ethical considerations, fairness, accountability, and governance play a central role in public attitudes.

People expect leadership, not hype-chasing. They expect competence – not experimentation without guardrails. And they expect results – not visions.

For boards and executive teams, this elevates AI from an IT topic to a critical leadership agenda item. 

What this means for Swiss health insurers

The findings suggest a clear strategic direction:

  • AI should be deployed where it creates visible value and reduces friction.
  • Human oversight must remain central.
  • Trust must be actively managed, not assumed.
  • Transparency and choice must be embedded by design.
  • Efficiency gains must be balanced with fairness and quality.

Adopting AI calls for mid- and long-term strategic thinking: the journey is a marathon, not a sprint.

Swiss health insurers are well positioned for this journey. They operate in a highly regulated, trust-based system and have long-standing experience in balancing solidarity, efficiency, and quality. AI, if governed wisely, can strengthen this model rather than undermine it.

The next phase in AI adoption is not about whether it will be used in Swiss health insurance – but about how responsibly, transparently, and credibly it will become embedded.

And that, ultimately, is a leadership decision.
