Deloitte recently conducted a survey involving over 30,000 consumers and employees across 11 European countries to evaluate their trust in generative AI and their readiness to adopt these tools. The results show both optimism and significant concerns about potential risks, highlighting a crucial trust gap that businesses need to address.
A new Deloitte study indicates that the European generative AI market is expanding rapidly and presents vast opportunities. This is further supported by Deloitte’s State of Generative AI in the Enterprise Q3 report, in which 65% of European business leaders reported increasing their investments in generative AI because of the substantial value realised so far.
However, the success of generative AI will not depend solely on which company invests the most or develops the best algorithms. Instead, it will hinge on how effectively employees use these tools and how confident consumers are in their benefits. To secure that success over the long term, organisations must overcome significant challenges in making people comfortable with the technology, prioritising responsible implementation and building trust among employees and consumers.
Importance of trust
The survey highlights that trust is fundamental to achieving widespread acceptance. As innovation progresses rapidly, the future success of generative AI hinges on bridging the trust gap between organisations and the consumers and employees who use these tools. Trust involves demonstrating a high level of competence and having the right intentions; both are essential for the successful adoption of any new technology, particularly generative AI. The key questions are whether the technology is reliable and whether it serves the interests of its stakeholders.
Bridging the trust gap: What should companies focus on?
The survey provides a clear roadmap for businesses to foster trust and promote the responsible use of generative AI by implementing a trustworthy AI strategy, with an emphasis on governance, regulatory adherence, and education.
- Build guardrails and provide the right tools: Educating employees about the risks of unsanctioned tools, and offering approved alternatives, is a crucial first step to minimising those risks, especially as many workers take it upon themselves to stay current with generative AI advancements.
- Ensure adequate training: A robust learning and development programme is crucial to maximise generative AI’s potential and minimise risks. This should cover the integration of generative AI into workflows and its ethical, responsible use.
- Embrace organisational transparency: Only 51% of European workers in our study believe their employers are transparent about generative AI’s impact on their roles. Investing in transparency can address employee concerns and foster a more open, receptive attitude toward generative AI.
- Prioritise data privacy: Data privacy and security are critical for building trust, with 66% of generative AI users in our sample citing them as crucial. Respecting user privacy by limiting data use and storage to its intended purpose and duration, with clear opt-in and opt-out mechanisms, is a must.
- Keep humans in the loop: Maintaining human oversight in generative AI-driven processes is a crucial element to building trust. Organisations should aim to combine human judgement with AI capabilities, especially in areas involving ethical or sensitive implications, to build further confidence in decision-making.
Other key takeaways
- Despite widespread media coverage, 34% of survey respondents were unaware of, or unsure about, any generative AI tools. Among those familiar with generative AI, fewer than half (47%) have used it for personal tasks, and under a quarter (23%) said they have used it for work.
- Most European generative AI users believe that it can help businesses improve products and services (71%), automate routine tasks to improve employees’ work experiences (66%), and benefit society overall (59%).
- 56% of European generative AI users believe their colleagues use generative AI tools without the explicit approval of their bosses. Alarmingly for organisations, 38% say these employees see no risk in using unapproved tools, and 34% believe organisations cannot monitor such use.
- European consumers trust the results produced by generative AI more when they use it themselves compared to when organisations use it to provide services. For example, using generative AI to find public service information is trusted more than a bureaucrat using generative AI to assess an individual’s eligibility for a social welfare programme.
- Businesses should build trust and encourage the responsible adoption of generative AI by establishing guardrails, providing the right tools, ensuring adequate training, embracing organisational transparency, prioritising data privacy, and maintaining human oversight.