Third-party AI assurance has a vital role to play in building trust in AI systems, and the UK government has recognised it as a critical enabler for the UK to realise the full potential of AI. Done well, AI assurance can remove some of the factors inhibiting effective AI deployment, addressing challenges around governance, risk and compliance. In this article we introduce Deloitte’s Trustworthy AI framework and help you frame the questions about AI assurance to put to your management teams.
In November 2024, the government set out its vision for the future of AI assurance in Assuring a Responsible Future for AI, which positions wide-scale adoption of AI around the enabling role of AI assurance in delivering “safe and responsible AI”. The UK government actively supports the development of a robust AI assurance ecosystem: its Trusted Third-Party AI Assurance Roadmap, published in September, outlines a multi-stakeholder approach to professionalising the industry. The roadmap aims to address challenges around the quality of AI assurance, skills shortages and access to information by proposing initiatives such as a UK consortium to establish an AI assurance profession, a skills and competencies framework and an AI Assurance Innovation Fund.
In his Ministerial foreword to Assuring a Responsible Future for AI, Peter Kyle, Secretary of State for Science, Innovation and Technology, said: “AI assurance provides the tools and techniques required to measure, evaluate, and communicate the trustworthiness of AI systems, and is essential for creating clear expectations for AI companies – unlocking widespread adoption in both the private and public sectors. A flourishing AI assurance ecosystem is critical to give consumers, industry, and regulators the confidence that AI systems work and are used as intended.”
Fundamentally, AI assurance helps to demonstrate the safety and trustworthiness of AI systems and their compliance with existing and expected future standards and regulations. It is a key driver of safe and responsible AI innovation.
AI assurance is not merely a technical exercise; it is a strategic imperative that underpins responsible innovation and sustained value creation.
Deloitte’s approach to AI assurance is built upon its Trustworthy AI framework, which provides a comprehensive lens through which to assess and manage AI risks and opportunities. This framework is structured around three key pillars, designed to give boards confidence to scale safely:
| Pillar | What it covers |
|---|---|
| Organisational readiness | This pillar focuses on assessing the enterprise-level governance framework, risk management processes and control mechanisms. It involves establishing clear AI governance policies, identifying and mitigating AI-related risks across operational, ethical and technical domains, and implementing robust controls to ensure AI systems operate within defined parameters while meeting business objectives and regulatory requirements. For directors, this means ensuring that the organisation has the right structures, policies and accountabilities in place to manage AI effectively. |
| Legal and regulations | This pillar assesses compliance with relevant regulations. This includes AI-specific regulations (such as the EU AI Act), data protection laws, sector-specific regulatory requirements (e.g. PRA/FCA for UK financial services), intellectual property requirements and liability frameworks, all while addressing ethical considerations and regulatory obligations. This helps boards to ensure their organisation is not only compliant with current laws but also prepared for emerging regulatory landscapes. |
| Technical foundations | Model: This involves testing data quality/bias, model performance, accuracy, bias detection and robustness to ensure reliable outputs and optimised decision-making capabilities. |
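To make the “Model” element of the technical foundations pillar more concrete, the sketch below shows one narrow slice of that kind of testing: checking a classifier’s overall accuracy and a simple group-level disparity in its predictions. It is a minimal illustration only; the data, group labels and 10% disparity tolerance are hypothetical assumptions, not part of Deloitte’s framework or any regulatory standard.

```python
# Illustrative only: a minimal check of model accuracy and group-level bias.
# The test data, group labels and 10% disparity tolerance are hypothetical
# assumptions for this sketch, not values prescribed by any framework.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def positive_rate(y_pred, groups, group):
    """Share of positive predictions made for members of one group."""
    preds = [p for p, g in zip(y_pred, groups) if g == group]
    return sum(preds) / len(preds)

# Hypothetical test-set outcomes: ground truth, model predictions, and a
# protected attribute recorded for each record (e.g. an age band).
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"]

print(f"Accuracy: {accuracy(y_true, y_pred):.2f}")

rate_a = positive_rate(y_pred, groups, "A")
rate_b = positive_rate(y_pred, groups, "B")
disparity = abs(rate_a - rate_b)
print(f"Positive-prediction rates: A={rate_a:.2f}, B={rate_b:.2f}")

# Flag for review if the demographic-parity gap exceeds the assumed tolerance.
if disparity > 0.10:
    print(f"Review needed: disparity of {disparity:.2f} exceeds tolerance")
```

In practice this sits alongside broader checks on data quality, robustness and performance, with the metrics and tolerances chosen to fit the use case and its risk profile.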
Building on these pillars, the questions below can help boards frame the conversation with their management teams:

- **Enterprise controls:** Is our organisation set up to manage our AI transformation with the right governance arrangements, including roles and responsibilities, policies and risk-based systems for managing AI risks? Do we have the right touchpoints throughout the AI lifecycle and robust response plans for incidents? What needs to be added or amended?
- **Compliance:** Do we understand the range of regulations and laws that apply to each of our AI use cases, and are we confident that we are compliant? Have we considered expected future changes to regulatory requirements? Are we linked into industry bodies that will enable us to keep track of emerging trends?
- **AI supply chain:** Do we understand our position in the AI supply chain, what we need from our suppliers and our duty of care to our customers, to ensure our use of AI is trustworthy and safe?
- **Risk-based oversight:** Do we understand the full range of AI tooling in our enterprise and which AI use cases pose a greater risk or have a lower level of control?
- **AI use case performance:** Has the right AI use case testing been undertaken prior to launch, and is an appropriate programme of in-life monitoring in place to ensure benefits stay optimised, risks are identified and deviations from intended outcomes are detected (illustrated in the sketch after this list)?
- **Infrastructure:** Is our platform and infrastructure (and that which we procure from any third parties) appropriately secure, robust and resilient to support our AI ambitions?
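As an illustration of what in-life monitoring can involve at a technical level, the sketch below compares the distribution of recent model scores against a baseline captured at launch using the population stability index (PSI), a common drift measure. The bin edges, sample scores and 0.2 alert threshold are assumptions made for this sketch, not prescribed values.

```python
# Illustrative only: a minimal in-life monitoring check that compares live
# model scores against a launch-time baseline using the population stability
# index (PSI). Bin edges, sample data and the 0.2 alert threshold are
# hypothetical assumptions for this sketch.
import math

def psi(baseline, live, edges):
    """Population stability index between two score samples over fixed bins."""
    def shares(scores):
        counts = [0] * (len(edges) - 1)
        for s in scores:
            for i in range(len(edges) - 1):
                if edges[i] <= s < edges[i + 1]:
                    counts[i] += 1
                    break
        # A small floor avoids division by zero for empty bins.
        return [max(c / len(scores), 1e-4) for c in counts]

    base, curr = shares(baseline), shares(live)
    return sum((c - b) * math.log(c / b) for b, c in zip(base, curr))

# Hypothetical score samples: the baseline captured at launch and a recent batch.
baseline_scores = [0.12, 0.35, 0.41, 0.55, 0.62, 0.68, 0.74, 0.81, 0.33, 0.47]
live_scores = [0.71, 0.78, 0.82, 0.88, 0.64, 0.91, 0.59, 0.85, 0.76, 0.69]
bin_edges = [0.0, 0.25, 0.5, 0.75, 1.01]

score = psi(baseline_scores, live_scores, bin_edges)
print(f"PSI: {score:.2f}")
if score > 0.2:  # a commonly cited rule of thumb, assumed here
    print("Drift alert: escalate for review against the intended outcomes")
```

Checks of this kind only have value if they feed defined escalation routes, so the monitoring programme should specify who acts on an alert and how deviations are traced back to the intended outcomes agreed at launch.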