Generative AI in New Zealand organisations

Risks and guardrails

As apprehension surrounding AI language models like ChatGPT grows, some New Zealand organisations are opting to prohibit their use in an effort to mitigate potential risks. The concerns fall into two groups: user input risks and AI output risks.

User input risks:

  • Confidentiality breaches: Employees may inadvertently input sensitive data into AI tools, potentially leaking confidential or competitive information. This could lead to legal complications and erode a company's competitive advantage. For instance, recent news reports claim that employees at several organisations have leaked confidential data through ChatGPT.

AI output risks:

  • Misinformation: AI language models can "hallucinate", generating seemingly coherent output containing inaccuracies and even citing non-existent sources.
  • Bias perpetuation: AI algorithms can unintentionally perpetuate biases and discriminatory behaviours, raising ethical concerns. Language models learn from diverse online sources, including toxic content that has yet to be fully filtered out of these models and may not reflect an organisation's values.
  • Intellectual property infringement: AI models are trained on vast amounts of data, much of it published by third parties on the internet, and reproducing that material in AI output could expose employers to legal risk.

Some AI risks can be mitigated with guardrails, so it is worth investigating safeguards specifically for generative language models. This is particularly relevant as these models are becoming hard to avoid: they are increasingly integrated into enterprise applications, already used by numerous New Zealand workers, and perceived to improve performance. To establish guardrails, organisations might explore the following measures:

  • Hosting a custom AI language model: By hosting your own instance, either on-premises or on a private cloud server, you reduce the likelihood that sensitive information is exposed to external applications you do not control (a minimal sketch follows this list).
  • Using secure API services: Paid API services often (though not always) offer stronger data privacy protections than consumer generative AI tools, such as deleting user input data after a set period (see the second sketch below).
  • Educating employees: As the most significant risks stem from how employees input and use generative AI, the importance of education cannot be overstated. Training on responsible AI usage and the importance of fact-checking can be paired with clear guidelines and support for generative AI use within the organisation.
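
To make the first option concrete, here is a minimal sketch of hosting an open-weights model on your own hardware using the Hugging Face transformers library. The model name and prompt are illustrative assumptions, not recommendations; any open-weights model your hardware and licence terms allow would work the same way.

```python
# Minimal sketch: running an open-weights model locally so prompts and
# responses never leave your own infrastructure.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # assumption: any local open-weights model
    device_map="auto",  # place model weights on available GPUs/CPU automatically
)

prompt = "Summarise the key risks of sharing customer data with third parties."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```

Because inference happens on infrastructure you control, no prompt text is sent to an external provider, though you take on the operational cost of hosting the model yourself.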

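Similarly, the second option might look like the sketch below, assuming the OpenAI Python SDK with an API key held in the OPENAI_API_KEY environment variable; the model name is illustrative. Data retention terms differ by provider and plan, so verify them before sending anything sensitive.

```python
# Minimal sketch: routing requests through a paid API service rather than
# a consumer chat interface. Assumes the OpenAI Python SDK and an API key
# in the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute whichever model your plan covers
    messages=[
        {"role": "system", "content": "You are an internal drafting assistant."},
        {"role": "user", "content": "Draft a short reminder about not pasting customer data into public tools."},
    ],
)
print(response.choices[0].message.content)
```

Neither sketch is a privacy guarantee on its own; pair it with contractual terms, access controls, and the employee education described above.
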
While genuine concerns surround generative AI, excluding it from the enterprise may prove counterproductive in the long run. A more constructive approach would involve fostering discussions on developing effective safeguards.

If you are interested in knowing more, you can read about proactive risk management in generative AI, or contact us to arrange a chat or an educational workshop.