Can Generative AI be trustworthy?

The use of generative AI is growing. While the rise of generative AI tools like ChatGPT and DALL-E opens a world of potential, can these tools be trusted, given their risks and ethical challenges?

In this article, we’ll build on the concepts discussed in our previous blog post on Risks and guardrails for Generative AI.

Explainable and reliable: Where is the information from, and can we trust it?

AI models such as ChatGPT can generate convincing yet incorrect or nonsensical responses, raising red flags for reliability and accuracy – and increasing the risk of spreading misinformation.

For many real-world applications, a wrong answer can cause more harm than no answer at all. This is illustrated by a well-publicised example: an attorney used AI to draft a court brief and inadvertently submitted fake court citations produced by ChatGPT, prompting actions such as courts requiring attestations from attorneys that no portion of a filing drafted by generative AI was submitted without being checked for accuracy by a human. To reduce this risk, models can be engineered to provide no answer when they are not certain, and humans or cross-examination models can be included in the loop to fact-check a model’s output.
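
To make the abstention idea concrete, here is a minimal sketch in which a model is sampled several times and only answers when its responses agree. The model interface, sample count, and agreement threshold are illustrative assumptions, not any particular product’s API.

```python
from collections import Counter

AGREEMENT_THRESHOLD = 0.8  # assumed cut-off; tune per application

def ask_with_abstention(model, question: str, samples: int = 5) -> str:
    """Sample the model several times and answer only when the
    responses largely agree; otherwise abstain and escalate."""
    answers = [model.generate(question) for _ in range(samples)]  # assumed interface
    best_answer, count = Counter(answers).most_common(1)[0]
    if count / samples >= AGREEMENT_THRESHOLD:
        return best_answer
    # Below the threshold, a non-answer beats a confident guess:
    # hand the question to a human reviewer instead.
    return "Confidence too low to answer; escalating to a human reviewer."
```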

Fair and impartial: Is this model learning from “wrong” or biased sources?

Bias in, bias out. Generative AI models can replicate biases from their training data, leading to potential harm. Much work remains to identify and mitigate biases in training data sets – not only in generative AI, but in AI overall.

This calls for careful curation of training data rather than ingesting massive amounts of convenient, easily scraped internet sources. For indigenous communities in particular, involving those communities in the development of AI systems and establishing guidelines that protect their intellectual property rights play a vital role in mitigating bias. A real-world example is Whisper, an AI model trained on audio data from the web, including 1,381 hours of te reo Māori – a practice that has raised concerns among indigenous groups in New Zealand about cultural distortion.
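
Careful curation starts with knowing what a corpus actually contains. As a hedged sketch of this kind of audit, the snippet below measures how each language is represented in a text dataset before training; detect_language is a stand-in for a real language-identification library and is an assumption, not a specific tool.

```python
from collections import Counter
from typing import Callable

def audit_language_balance(corpus: list[str],
                           detect_language: Callable[[str], str]) -> dict[str, float]:
    """Return each detected language's share of the corpus, making
    under- or over-represented groups visible before training begins."""
    counts = Counter(detect_language(text) for text in corpus)
    total = sum(counts.values())
    return {language: n / total for language, n in counts.items()}
```

The same pattern applies to any attribute worth auditing – dialect, region, or topic – provided a reliable classifier exists for it.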

Responsible: What can we do to promote responsible generative AI use?

In addition to addressing the technological aspects of generative AI, it is equally important to raise awareness among users and the public about the potential risks, as well as ethical and environmental implications.

This includes promoting digital literacy and providing resources that help individuals understand the nuances of AI-generated content and make informed decisions when using such tools. It also means helping people understand the wider implications of choosing between existing models and training new ones, given the significant energy requirements and environmental effects associated with training. Since many individuals already use generative AI at work, with or without their employer’s knowledge, it is essential to equip them with the knowledge they need to do so safely. The lines between sanctioned and unsanctioned use are becoming too blurred to put off decisive action or policy in this space.

Private and accountable: Are we unintentionally using personal, confidential or IP information?

It can be tricky to ascertain the source data of generative AI models, which can lead to the inadvertent use of private, confidential, or intellectual property information.

Generative AI can use personal information in unintended ways, including spreading damaging falsehoods about individuals, and artists are calling for developers to disclose the sources they use for training and to seek permission from the artists behind the source material before using it. It would therefore be prudent for users to treat generative AI outputs as brainstorming drafts rather than final products. Wherever possible, organisations can choose models with information and attribution controls, which can pinpoint the origin of ideas and source information. Engineers are also working on techniques to make a model “forget” data or update specific information when issues are detected, although this currently presents inherent challenges.
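
One practical shape attribution controls can take is retrieval-augmented generation, where the model answers from retrieved passages and the provenance of those passages is returned alongside the output. The sketch below illustrates the pattern under assumed interfaces (search_index and llm); it is not any specific vendor’s API.

```python
def answer_with_sources(question: str, search_index, llm) -> dict:
    """Ground the answer in retrieved passages and keep their provenance."""
    passages = search_index.retrieve(question, top_k=3)  # assumed interface
    context = "\n".join(f"[{i}] {p.text}" for i, p in enumerate(passages, start=1))
    prompt = (
        "Answer using ONLY the numbered passages below, citing them by number.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return {
        "answer": llm.generate(prompt),               # assumed interface
        "sources": [p.source_url for p in passages],  # provenance for later review
    }
```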

Safe and secure: Are we inadvertently leaking data?

Many AI providers retain user prompts and use them to keep improving their models. This is problematic, as users can inadvertently disclose sensitive information, which then becomes part of the model itself and can be accessed, in some form, by others.

Using a self-hosted custom generative AI model limits where data travels, reducing the risk of data leakage. Put another way: bring the model to the data, not the data to the model. Where third-party providers are used, their privacy and security protocols need to be carefully considered. In such cases, inputs should be anonymised before they reach the model, so that sensitive information never enters it in the first place. Guidance from New Zealand’s Privacy Commissioner is that personal or confidential information should not be input into generative AI tools unless it has been explicitly confirmed that the provider does not retain or disclose it.
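
As a minimal sketch of what anonymising inputs could look like, the snippet below redacts obvious identifiers before a prompt leaves the organisation. A production deployment would use a proper PII-detection service; these regular expressions only illustrate the idea.

```python
import re

# Illustrative patterns only; real PII detection needs far more coverage.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "[PHONE]"),
]

def anonymise(prompt: str) -> str:
    """Replace emails and phone-like numbers so they never reach the model."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

# Usage: send anonymise(user_prompt) to the third-party API, never the raw prompt.
```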

The verdict on the trustworthiness of generative AI hinges on the proactive steps we take when harnessing it. While generative AI harbours potential risks, many of these can be mitigated by adhering to the principles of Deloitte’s Trustworthy AI Framework: explainability, fairness, responsibility, privacy, and safety. Please get in touch with us to discuss further or to arrange a workshop.

Acknowledgements: Michelle Lee, Lukas Kruger and Amy Dove.