The importance of regulating AI in Australia

By Dr. Maryam Fanaeepour

With the increasing uptake of AI systems, responsibly managing this technology and building citizens' trust in its use have become urgent requirements for organisations globally. Erroneous facial recognition and diagnosis systems, embedded bias, privacy and security risks, and the recent advances in generative AI all demonstrate the necessity of AI regulation and its significant role in keeping this technology aligned with ethics and human rights. Although Australia has undertaken initiatives towards regulating AI, it lags behind other countries. Deloitte's Trustworthy AI framework provides guidelines for the responsible practice and use of AI until legislation evolves to meet the need.

According to key survey findings from Seeing beyond the surface: The future of privacy in Australia, Deloitte's Australian Privacy Index published in 2021, 77% of consumers expressed concern about not being informed when AI is used to process their personal information, and said that brands being transparent about their use of AI would lessen these concerns. The survey also found that 80% of consumers are worried about decisions being made by AI using inaccurate information, and 79% are concerned by their inability to challenge an inaccurate decision made by AI. These findings highlight consumer demand for AI regulation.

There has been resistance to regulating AI technologies, as it is thought that legislation might stifle innovation in AI systems and limit their use. Additionally, ethical requirements such as explainability and transparency could pose a reputational risk for some organisations, as examining existing models may reveal bias or privacy risks.

The approach governments take to issuing and mandating new laws is key to addressing these concerns. For regulation to be practical and effective, regulators should consult with the technical community.

AI regulation is increasingly topical as governments worldwide seek to balance the economic and societal benefits of AI against its unique risks and potential consequences. International strategies have been initiated with the aim of developing a unifying framework for ethical AI across governments. These include, but are not limited to:

  1. The Organisation for Economic Co-operation and Development (OECD) - an international organisation aiming to build better policies for better lives - proposed its Principles on AI in 2019, which have been endorsed by more than 42 countries, including Australia, and adopted by the G20;
  2. Global Partnership on AI (GPAI) - an international and multi-stakeholder initiative bridging the gap between theory and practice on responsible AI by bringing together leading experts; 
  3. IEEE Standards Association - proposing Ethically Aligned Design (EAD) as a vision for prioritising human well-being with Autonomous and Intelligent Systems (A/IS); 
  4. The EU’s Ethics Guidelines for Trustworthy AI (2019) - outlining a framework for trustworthy AI based on EU values and fundamental human rights, aiming to encourage confidence in the use of AI-based solutions. These guidelines require an AI system to be lawful, ethical and robust.

Other initiatives include the World Economic Forum, the G7 Common Vision for the Future of AI, UNESCO, the UN and the Nordic-Baltic Region Declaration on AI.

All these strategies provide a strong foundation of international expertise to build on.

The shift toward regulating AI has begun internationally in Brazil, China, Canada, the US and the UK, leaving Australia lagging far behind. The most significant step to date is the European Commission's 2021 proposal for the AI Act, an EU legal framework on AI, which is predicted to come into force later this year. The Act is still under negotiation; however, there is little doubt that the EU will be the first major player to regulate AI. Prior to the AI Act, the EU laid the foundation for AI regulation in 2018 with the implementation of the General Data Protection Regulation (GDPR), which directly applies to decisions taken by machines and the use of AI technologies and automated systems.

Comparatively, the UK government released its AI regulation policy paper in July 2022, which supports a pro-innovation, light-touch framework. In October 2022, the US White House released the Blueprint for an AI Bill of Rights, which includes five principles: “Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Alternative Options”.

Although some initiatives have been undertaken in Australia, such as the AI Action Plan in June 2021 and the recent review of the Privacy Act 1988, Australia continues to fall further behind the rest of the globe, with no decisive direction towards AI regulation in sight. Australia's AI Ethics Framework provides eight principles for organisations to follow to realise the benefits of AI, minimise risks and implement good governance. Unfortunately, these principles are not enforceable; they serve only as guidance and draw loosely on existing legislative frameworks.

While Australia does not have specific AI regulation, AI ecosystems are subject to the current spectrum of laws and regulations that indirectly apply to the use of AI systems and their lifecycle. This includes data privacy and security laws, human rights and anti-discrimination laws, in addition to other legislation.

On the back of global trends in AI regulation, the accelerating pace of AI adoption and increasing community concern about the personal and societal risks that AI poses, a thoughtfully designed Australian AI regulatory framework is critical. Australia has an opportunity to position itself as a global leader in this space, given its commitment to innovation and regulatory best practice. Australia must weigh a range of regulatory options against international policy and domestic expectations to arrive at an AI regulatory framework that is well-designed, well-targeted and fit for purpose.

Until Australia has an AI regulatory framework in place, organisations need to self-regulate. Deloitte's Trustworthy AI framework supports self-regulation of AI by:

  • Providing a bridge to responsible practice and use of AI until legislation evolves to meet the need
  • Engendering trust between AI technology and its end users
  • Helping organisations adopt new technologies that are aligned with their business values
  • Protecting business and society from the unique and inherent risks of AI technologies.

If you would like to know more about Deloitte’s Trustworthy AI framework, please contact our team.

Author:

Maryam Fanaeepour - Specialist Manager, Risk Advisory