Artificial Intelligence (AI) has the potential to bring about positive changes in society, but it also carries risks. Because AI can have such a significant impact on society, many stakeholders believe that effective policies and regulations are necessary to mitigate these risks.
The Deloitte network has expertise in developing and implementing new technologies for government and commercial clients. In 2018, Deloitte proposed the Trustworthy AI Framework to enhance the positive aspects of AI while safeguarding individuals and societies from its potential negative implications.
Deloitte has published extensively on AI, with a focus on the policy implications of the technology. Three recurring themes are highlighted in recent papers. The first theme is the role of AI in advancing fairness across society. It is crucial for those involved in designing, monitoring, utilising and regulating AI to proactively address bias to avoid perpetuating existing inequities or creating new ones. Policymakers are actively debating how to regulate algorithms, assess bias and promote fairness, with Europe at the forefront of proposing regulations in this area.
The second theme revolves around trust in AI and its integration into our daily lives. Trustworthy AI, which is ethical, lawful and technically robust, is necessary to gain broad trust as AI becomes embedded in everyday technology, whether users are aware of it or not. Policy implications include enhancing algorithm transparency and accountability to help individuals understand how AI reaches its conclusions. While organisational policies play a significant role in building trust, national governments also need to consider trust-building measures when deploying AI to deliver services to their citizens.
The third theme highlights the role of AI in driving economic prosperity. When implemented effectively, AI can automate tasks and create opportunities for higher-value skilled work. To realise this potential, continued investment in innovation and AI-related technologies is crucial, both at the national level and within organisations. Policymakers in the United States (US) and European Union (EU) recognise the importance of AI for economic growth and competitiveness. The US emphasises investment in AI research and development, while the EU aims to create favourable conditions for AI technology to succeed and is working on the AI Act, which will be the first international regulation on AI. In Australia, discussions are underway to develop a national AI ethics framework that prioritises ethical and inclusive values to manage the impact of AI on people’s lives.
The papers summarised in this document emphasise the need for thoughtful policy and regulation to harness the benefits of AI while mitigating its risks. Robust policies and regulations can help to safeguard society against risks such as bias and unfairness, build trust in AI, and promote investment in AI-related technologies for economic prosperity. Policymakers worldwide are actively engaged in these discussions and are proposing various regulations and frameworks to govern AI’s use.