Many Swiss companies have already started their journey of AI risk management, with regulated industries leading the way. While current risk management practices suffice for today's needs, they must evolve to meet future demands. As competition increases and the pressure to innovate and scale AI initiatives grows, companies must advance their risk management and data management maturity. Success hinges not just on understanding the requirements, such as balancing human oversight with automation, but also on implementing strategies to increase maturity effectively and at scale. Key success factors include refining existing risk assessment processes, establishing clear AI ownership and embedding robust AI governance from the outset.
Deloitte recently conducted interviews with Swiss-based companies across various industries to understand their approach to AI innovation combined with effective AI risk management. Our findings indicate that all industries have started their AI journey and recognise the importance of AI risk management to guide their efforts. The EU AI Act is primarily used as a reference while companies await the Swiss AI regulatory proposal, expected in 2025. Regulated industries have an advantage due to their experience with specific regulatory oversight and guidance.
Incorporating AI into existing risk practices is particularly evident in companies with a strong risk management focus, where AI risk management benefits from robust existing risk frameworks. Other sectors, which are also beginning their AI journey, cannot leverage established risk processes as effectively. This adaptive approach means most companies have already started assessing the EU AI Act and are preparing for the forthcoming Swiss legislation.
Although AI adoption is advancing in Swiss companies, most AI initiatives are still in the Proof of Concept (PoC) or internal experimental phases. This trend is not unique to Switzerland. According to Deloitte’s State of AI in the Enterprise, Wave Three report, which surveyed 2,770 respondents across 14 countries, 68 per cent of respondents indicated that their organisations had moved 30 per cent or fewer of their generative AI experiments fully into production.
Current risk management practices are often suitable for today’s AI initiatives with limited risk exposure. However, as innovation progresses and companies become more willing to deploy AI use cases with higher risk profiles, these practices will need to evolve. Companies must prepare for a more holistic and well-adapted risk management approach to scale AI innovation quickly and effectively while addressing all levels of risk.
The Swiss companies we interviewed are keen to innovate with AI, focusing on maximising value creation and cost efficiency. However, they will need to adjust their risk management strategies.
While Deloitte’s global survey focused specifically on how organisations are adopting GenAI, rather than AI in general, the results offer an interesting insight into how companies approach these initiatives. Although companies are confident in their technology and infrastructure for implementing GenAI, their confidence in risk and governance is significantly lower: only 23 per cent of respondents felt their existing policies adequately prepared them to scale GenAI initiatives.
Given the heightened competition and increased pressure to innovate and scale, a “one-size-fits-all” risk approach is not feasible. Companies need a flexible, sliding-scale or hybrid approach between lean and expanded risk management. For very high-risk AI use cases, an expanded risk approach with strict, regular audits and monitoring is necessary. For high-risk initiatives, the choice between a lean and an expanded approach should be context-dependent: for example, AI used in HR evaluations or on medical data requires a more expanded approach because of data sensitivity and regulatory concerns. Integrating technical experts is crucial to ensure compliance and effective risk management. Each AI use case needs appropriate guidance and a robust risk management framework, as AI tools and technologies still carry inherent risks and can result in first-mover advantages or disadvantages.
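To make the sliding-scale idea concrete, the short Python sketch below maps a use case’s risk tier and context (for example, use of HR or medical data) to a lean or expanded risk-management track. The tiers, attributes and control lists are illustrative assumptions only, not a prescribed framework or a Deloitte methodology.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    HIGH = "high"
    VERY_HIGH = "very_high"


@dataclass
class AIUseCase:
    name: str
    tier: RiskTier
    uses_sensitive_data: bool = False   # e.g. HR evaluations, medical data
    regulated_domain: bool = False      # e.g. financial services, healthcare


def select_risk_approach(use_case: AIUseCase) -> dict:
    """Illustrative sliding-scale triage for an AI use case.

    Very high-risk cases always get the expanded track (strict, regular
    audits and monitoring); high-risk cases are decided by context, such as
    data sensitivity or regulatory exposure; everything else stays lean.
    """
    if use_case.tier is RiskTier.VERY_HIGH:
        approach = "expanded"
        controls = ["regular independent audits", "continuous monitoring",
                    "technical expert review", "human oversight"]
    elif use_case.tier is RiskTier.HIGH and (
            use_case.uses_sensitive_data or use_case.regulated_domain):
        approach = "expanded"
        controls = ["pre-deployment assessment", "technical expert review",
                    "periodic monitoring"]
    else:
        approach = "lean"
        controls = ["standard intake assessment", "self-attestation"]
    return {"use_case": use_case.name, "approach": approach, "controls": controls}


# Example: an HR evaluation assistant using employee data is routed to the
# expanded track even though its nominal tier is only "high".
print(select_risk_approach(
    AIUseCase("HR evaluation assistant", RiskTier.HIGH, uses_sensitive_data=True)))
```

The point of the sketch is the design choice, not the code: the decision rules, tiers and required controls should be defined by the company’s own governance function and revisited as use cases and regulation evolve.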
As Swiss companies advance in their AI innovation pipeline, they must elevate their risk management practices to manage new risks, prevent market disadvantages and achieve scalability. Scaling the risk management journey alongside AI use case development is essential. More specifically, companies will need to upgrade not only their risk management practices but also their data management approach, including data classification, privacy and consent management.
Companies globally are proactively managing the risks associated with GenAI applications, as evidenced by our survey results. Actions such as establishing new guardrails, conducting assessments, and building oversight capabilities are being implemented. We include several of these actions in our key considerations outlined below.
The EU AI Act’s definition of AI is rather broad, so companies are advised to agree a clear and workable interpretation of what counts as AI at the outset to avoid an overwhelming assessment effort. Consistency, rather than completeness, is what is required at this early stage.
Moving forward, the following key considerations will be critical for companies to succeed:
AI innovation and risk management are evolving journeys that require cultural change within companies. Successful AI implementation can significantly influence a company’s strategic direction. Existing risk management principles, such as those for GDPR and AML, can be applied and adapted to manage AI risks effectively. This provides a strong foundation for handling both low-risk and high-risk AI use cases.
Global AI regulation is increasing rapidly in both volume and complexity, with diverse requirements emerging across Europe, Asia and the US. Companies must understand, monitor and prepare for these international regulations. The EU AI Act serves as a good baseline, and compliance efforts now can set a strong benchmark for future global legislation. Switzerland is also considering the EU AI Act and aims to define its regulatory approach to AI by 2025.
Companies should act now to implement a sliding-scale risk management approach tailored to their AI context. They should define clear organisational guidelines, including a workable AI definition, the right stakeholders, and ongoing training and awareness. Combining these efforts with AI governance by design will enable innovation while ensuring robust risk management.