
Trustworthy AI

Empowering AI through trust

Artificial Intelligence is having a transformative impact on industries from technology to healthcare to finance. However, the untapped potential of AI is accompanied by risks around bias, security, and trust. Learn more about how the principles of Trustworthy AI can be applied to mitigate these risks and adopt AI responsibly.

Trustworthy AI means artificial intelligence systems that users can confidently rely on and trust. They incorporate robust mechanisms to mitigate risks and maximise benefits for individuals and society, ultimately fostering a safer, more inclusive, and ethical AI ecosystem.

We use a multidimensional framework to support organisations in adopting ethical safeguards across seven key dimensions to manage emerging risks. The focus on ensuring AI is fair, accountable, and secure is shaping the global regulatory landscape, most notably through the EU AI Act, the first comprehensive regulation of AI.

Trustworthy AI FAQs

Why does Trustworthy AI matter?

AI-based technologies are becoming prominent due to the benefits and efficiencies they can deliver for organisations. However, increased reliance on AI systems can lead to ethical concerns, biased decisions, privacy breaches, and reputational damage. To counter this, prioritising the development of ethical and human-centric AI is key. In addition to ensuring compliance with the regulatory landscape, organisations can enhance the quality and reliability of their AI systems, in turn building trust with their customers and stakeholders.

What is the EU AI Act?

The EU AI Act is the world's first comprehensive legislation aimed at regulating Artificial Intelligence. The Act applies regardless of sector or organisation size. It has extra-territorial applicability, meaning it covers any organisation providing AI systems, or outputs from AI systems, in the EU, regardless of where that organisation is established. The Act adopts a risk-based approach grounded in the principles of Trustworthy AI. AI systems that may pose a threat to fundamental rights and safety are prohibited within the EU.

Who needs to comply with the EU AI Act?

You should be aware of the EU AI Act and its requirements if you are a:

Provider – Organisations that develop or commission the development of an AI system and sell or put into service the AI system under their own name or trademark.
Deployer – Organisations using an AI system under their own authority.
Importer – Organisations that are engaged in the sale of an AI system in the European Union, where the AI system bears the name or trademark of an organisation based outside the European Union.
Distributor – Organisations in the supply chain, other than the provider or the importer, that make an AI system available within the European Union.

The AI value chain is complex and encompasses multiple roles, from developers to users. Irrespective of how your organisation chooses to adopt AI solutions, you form a part of the AI value chain and need to ensure compliance with the EU AI Act.

What are the penalties for non-compliance?

The penalties for non-compliance with the EU AI Act depend on the severity of the breach, ranging from €7.5 million or 1.5% of global annual turnover up to €35 million or 7% of global annual turnover.
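As a rough illustration of how a tiered cap works (assuming the common "whichever is higher" reading of the Act's penalty tiers; the function name and figures are illustrative, not legal guidance):

```python
def penalty_ceiling(fixed_cap_eur: float, turnover_pct: float,
                    annual_turnover_eur: float) -> float:
    """Illustrative ceiling for one penalty tier: the greater of a
    fixed cap or a percentage of global annual turnover (assumption)."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Hypothetical firm with €1bn global turnover, most severe tier (€35M or 7%):
# 7% of €1bn (€70M) exceeds the €35M fixed cap.
print(f"€{penalty_ceiling(35_000_000, 0.07, 1_000_000_000):,.0f}")  # €70,000,000
```

For smaller firms the fixed cap can dominate instead: at €100 million turnover, 1.5% is only €1.5 million, so the €7.5 million figure would set the ceiling under this reading.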
