
New Deloitte Report: Less than one in ten organisations across Asia Pacific, including New Zealand, have governance structures necessary to ensure trustworthy AI

Auckland, 11 December 2024 – A new Asia Pacific report co-developed by Deloitte Access Economics and Deloitte’s AI Institute, AI at a crossroads: building trust as the path to scale, provides insights to C-suite and technology leaders on how they can develop effective AI governance as their adoption of AI accelerates and, with it, the challenges of managing its risks.

The report is based on a survey of nearly 900 senior leaders across 13 countries in the Asia Pacific region, including New Zealand, whose responses were assessed against Deloitte’s AI Governance Maturity Index1 to identify what good AI governance looks like in practice. Good AI governance enables teams to adopt AI more effectively, builds customer trust and creates a path for businesses to realise value and scale.

Key findings from the report:

  • Asia Pacific organisations with more mature AI governance frameworks report a 28% increase in staff using AI solutions and experience nearly 5% higher revenue growth
  • 91% of organisations surveyed across Asia Pacific are categorised as having only ‘basic’ or ‘in progress’ governance structures, indicating a significant need for improvement in AI governance practices
  • In New Zealand, the top three concerns associated with AI usage were reliability and errors (87%), security vulnerabilities (85%), and privacy issues (85%)
  • While 53% of New Zealand organisations have a system for employees to raise AI concerns, there’s still work to be done. Just under half of employees have the skills and capabilities to use AI in an ethically and legally compliant way, while almost two-thirds of organisations are partnering with a third party to close the skills gap. Nearly a fifth of New Zealand organisations reported an increase in incidents (e.g. data breaches) in the last financial year
  • The report findings show that the benefits of AI governance are clear for New Zealand organisations: 51% saw that greater trust would result in increased use of AI solutions, 42% recognised the productivity benefits of AI solutions being realised, and 38% saw improved reputation among customers.

Amy Dove, Trustworthy AI Lead Partner, Deloitte New Zealand, said the report’s findings aligned with what she has observed across Aotearoa. “We’re seeing real concerns around the risks associated with AI, but we also know that these risks can be managed with people, process and technology so that New Zealand businesses and organisations can reap the benefits of AI safely and responsibly.”

Navigating the risks from rapid AI adoption

The report highlights that investment in AI is projected to increase fivefold by 20302, reaching US$117 billion in the Asia Pacific region alone, further emphasising the need for robust governance frameworks. Behind the rapid pace of adoption are employees, who often outpace their leaders. A previous Deloitte study on Generative AI3 found that more than two in five employees were already using generative AI at work, with young employees leading the way.

While AI tools offer powerful productivity gains, they can lead to data breaches, loss of reputation and business, and regulatory fines if their risks are not managed properly. The global average cost of a data breach reached nearly US$5 million in 2024, a 10% increase from the previous year4. For large organisations, this cost can be significantly higher.

Commenting on the report, Dr. Elea Wurth, Lead Partner, Trustworthy AI Strategy, Risk & Transactions, Deloitte Asia Pacific and Australia, said, “Effective AI governance is not just a compliance issue; it is essential for unlocking the full potential of AI technologies. Our findings reveal that organisations with robust governance frameworks are not only better equipped to manage risks but also experience greater trust in their AI outputs, increased operational efficiency and, ultimately, greater value and scale.”

Actions to build Trustworthy AI

Developing trustworthy AI solutions is essential for senior leaders to navigate the risks of rapid AI adoption and to fully embrace and integrate this transformative technology. Trustworthy AI provides a level of certainty that the technology is ethical, lawful and technically robust, and gives senior leaders the confidence to use AI solutions throughout their organisation.

Deloitte’s Trustworthy AI Framework outlines seven key dimensions necessary to build trust in AI solutions: transparent and explainable, fair and impartial, robust and reliable, respectful of privacy, safe and secure, responsible, and accountable. These criteria should be applied to AI solutions from ideation through design, development, procurement and deployment.

“The framework shows that a combination of efforts are required to manage risk. We can bring tech into human processes, but need to ensure that people are at the centre of everything,” says Ms Dove. “In New Zealand, the framework is overlaid by Te Tiriti Principles and underpinned by Te Ao Māori, which also considers data sovereignty and honouring the Treaty of Waitangi.”

Dividends from good AI Governance

The survey reveals that organisations across Asia Pacific with mature AI governance frameworks report a 28% increase in staff using AI solutions and deploy AI in more than three areas of the business. These businesses also achieve nearly 5% higher revenue growth compared to those with less established governance.

Organisations in technology, media and telecommunications, financial and insurance services, and professional services are highlighted in the report as generally more 'Ready' for trustworthy AI, while Government and Public Sector and Life Science and Health Care organisations face challenges in responding flexibly and quickly to new concerns emerging around the use of AI technologies.

The report also contains recommendations leaders can act on to improve their AI governance5. You can view the full report here.


ENDS

Endnotes

  1. This Index, based on 12 key indicators across five pillars (organisational structure; policy and principles; procedures and controls; people and skills; and monitoring, reporting and evaluation), assesses an organisation’s AI governance maturity.
  2. Ibid
  3. Generation AI in Asia Pacific | Deloitte Insights
  4. IBM (2024), “Cost of a data breach Report”, https://www.ibm.com/reports/data-breach
  5. The report recommendations are located at the end of the release.


Building Foundations for Trustworthy AI

Effective AI governance is critical for organisations when integrating AI solutions into their operations and business models. The report highlights four high-impact actions organisation leaders can take to improve their AI governance.

Key recommendations from the report include:

  • Prioritise AI Governance to realise returns from AI: Continuous evaluation of AI governance is required across the organisation’s policies, principles, procedures and controls. This should include monitoring of changing regulations for specific locations and industries to remain at the forefront of AI governance standards.
  • Understand and leverage the broader AI supply chain: Organisations need to understand their own use of AI as well as interactions with the broader ‘AI supply chain’—including developers, deployers, regulators, platform providers, end users, and customers—to gain a comprehensive understanding of their governance needs. Regular audits need to occur throughout the AI solution lifecycle.
  • Build risk managers, not risk avoiders: Developing employees’ skills and capabilities can help organisations better identify, assess and manage potential risks, preventing or mitigating issues rather than avoiding AI altogether. The “people and skills” pillar of the AI Governance Maturity Index often receives the lowest scores among organisations, highlighting a critical area for improvement.
  • Communicate and ensure AI transformation readiness across the business: Organisations should be transparent about their long-term AI strategy, the associated benefits and risks, and provide training for teams on using AI models while reskilling those whose roles may be affected by AI. Practical steps include scenario planning for high-risk events, developing narratives to convey the technology's impact, and conducting crisis exercises to test readiness for potential challenges.