By Dr. Maryam Fanaeepour
With the increasing uptake of AI systems, responsibly managing this technology and building citizens' trust in its use have become urgent requirements for organisations worldwide. Erroneous facial-recognition and diagnostic systems, embedded bias, privacy and security risks, and the rapid advances in generative AI all demonstrate the need for AI regulation and its significant role in keeping the technology aligned with ethics and human rights. Although Australia has undertaken initiatives towards regulating AI, it lags behind other countries. Deloitte's Trustworthy AI framework provides guidelines for the responsible development and use of AI until legislation evolves to meet the need.
According to key findings from Seeing beyond the surface: The future of privacy in Australia, Deloitte's Australian Privacy Index published in 2021, 77% of consumers were concerned about not being informed when AI is used to process their personal information, and said that brands being transparent about their use of AI would lessen these concerns. The survey also found that 80% of consumers worry about AI making decisions based on inaccurate information, and 79% are concerned that they cannot challenge an inaccurate decision made by AI. These findings highlight consumer demand for AI regulation.