
Our joint capabilities. Your trustworthy AI.

GenAI has the power to transform businesses, but it needs to be built on trust.

Let's progress with confidence.

It's time for Trustworthy AI

As adoption of Generative AI (GenAI) increases, organisations will face more complexity in ensuring its output can be trusted. They must comply with regulatory frameworks such as the EU AI Act, but trustworthy AI goes well beyond compliance with the Act. As use cases are developed and new business models created, organisations will need to consider governance, ethics, resilience, privacy, security, legal and contractual obligations, as well as alignment with company values. Giving employees and customers confidence that the AI can be trusted will be paramount to its adoption, so these considerations need to be baked into the design phase.

As your GenAI journey evolves from the experimental stages to scaling the technology and reshaping your business model, you will need to address a number of key considerations to enable the successful adoption and responsible use of the new technology. From legal and regulatory obligations to ethics, safety and security, the Deloitte Trustworthy AI framework helps you to minimise risks and realise GenAI’s potential in a safe and secure way. Using controls, guardrails and training, you can equip the organisation to adopt the new technology safely, securely and compliantly.

What makes AI trustworthy?

Embedding Deloitte’s Trustworthy AI framework will give you confidence that your AI is aligned with legal best practice and with your organisation’s values and ethical principles. Your customers will trust that your AI does not discriminate or use their data in ways they are not comfortable with, and you will comply with the EU AI Act, UK regulatory expectations and other relevant standards.

Privacy

User privacy is respected, and data is not used or stored beyond its intended and stated use and duration; users are able to opt in or out of sharing their data.

Key questions and considerations:

  1. Data: what data does the AI access, store and process, and does it have the rights to do so? Does it include personal data, and does its handling adhere to data privacy regulations?
  2. Model training: what data was the AI trained on?
  3. Feedback loop: are prompts / inputs used to further train the AI? Are the providers of these inputs comfortable with this? (A minimal consent check is sketched after this list.)
  4. Data storage: where is data physically hosted and stored? Who has access to this data?
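
To make the opt-in principle concrete, the sketch below checks a consent record before a user’s inputs are reused, for example in model training. It is a minimal illustration assuming a hypothetical consent registry; the record fields, purposes and retention rule are all assumptions rather than a prescribed design.

```python
# Minimal consent-check sketch; the registry, field names and retention
# rule are hypothetical illustrations, not a prescribed design.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str            # e.g. "model_training" or "analytics"
    granted: bool           # True only if the user explicitly opted in
    recorded_at: datetime
    valid_for_days: int     # data must not be used beyond the stated duration

def may_use_data(record: ConsentRecord, purpose: str, now: datetime) -> bool:
    """Allow use only for the consented purpose and within the consented window."""
    if record.purpose != purpose or not record.granted:
        return False
    return now < record.recorded_at + timedelta(days=record.valid_for_days)

# A prompt joins the fine-tuning corpus only while consent is current.
record = ConsentRecord("user-42", "model_training", True, datetime(2024, 1, 1), 365)
assert may_use_data(record, "model_training", datetime(2024, 6, 1))
assert not may_use_data(record, "analytics", datetime(2024, 6, 1))
```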

Transparent and explainable

Users understand how the technology is being used, particularly in making decisions; these decisions are easy to understand, auditable, and open to inspection.

Key questions and considerations:

  1. Technical and functional design: have you documented and consulted stakeholders on the AI logic, designs and model inputs?
  2. AI outputs: are the solution outputs sufficiently clear and comprehensible to end users for appropriate action, and do they meet established clarity requirements?
  3. User transparency: is it clear to the end user that they are interacting with AI? Are appropriate legal disclaimers in place?
  4. Black box AI: do you understand why the AI takes or recommends one course of action rather than another? Can you explain this clearly and recalculate outputs independently? (A minimal audit-logging sketch follows this list.)
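
One way to keep AI-assisted decisions auditable and open to inspection is to record, for every output, the exact model version, a fingerprint of the input and the explanation given. The sketch below is a minimal illustration; the field names and the credit-decision example are assumptions, not a reference design.

```python
# Minimal decision-audit sketch; field names and the example are illustrative.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionLogEntry:
    timestamp: str
    model_version: str   # pin the exact model so outputs can be recalculated later
    input_hash: str      # hash rather than raw input, limiting stored personal data
    output: str
    explanation: str     # e.g. top contributing factors or retrieved sources

def log_decision(model_version: str, raw_input: str, output: str, explanation: str) -> dict:
    entry = DecisionLogEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_hash=hashlib.sha256(raw_input.encode()).hexdigest(),
        output=output,
        explanation=explanation,
    )
    record = asdict(entry)
    print(json.dumps(record))  # in practice, write to an append-only audit store
    return record

log_decision("credit-model-1.3.0", "applicant payload ...",
             "declined", "low affordability score; high existing debt")
```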

Fair and impartial

The technology is designed and operated inclusively, with the aim of equitable application, access, and outcomes.

Key questions and considerations:

  1. AI ethics: how are decisions in relation to AI ethics made?
  2. Model training and data sensitivity: does training (and other input) data come from a fair, unbiased and representative source, and are mitigation techniques used to correct data bias? (A simple group-fairness check is sketched after this list.)
  3. Traceability: can you retrace how your AI solution arrived at a given decision? Do you understand the decision-making processes and factors?
  4. Auditability: would a third-party auditor be able to assess the appropriateness of the decision-making processes and factors with the documentation that you have?
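
As one concrete bias check, selection rates can be compared across groups and flagged when the lowest-to-highest ratio falls below the widely used four-fifths threshold. The sketch below uses made-up data for illustration; real fairness assessment combines several metrics with domain judgment.

```python
# Minimal group-fairness sketch; the data and 0.8 threshold are illustrative.
import numpy as np

def selection_rates(outcomes: np.ndarray, groups: np.ndarray) -> dict:
    """Positive-outcome rate per group (e.g. loan approval rate)."""
    return {g: outcomes[groups == g].mean() for g in np.unique(groups)}

def disparate_impact_ratio(outcomes: np.ndarray, groups: np.ndarray) -> float:
    """Lowest group rate divided by highest; the four-fifths rule flags < 0.8."""
    rates = selection_rates(outcomes, groups)
    return min(rates.values()) / max(rates.values())

outcomes = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = favourable decision
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])
ratio = disparate_impact_ratio(outcomes, groups)
if ratio < 0.8:
    print(f"Potential bias: disparate impact ratio {ratio:.2f} is below 0.8")
```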

Responsible

The technology is created and operated in a socially responsible manner.

Key questions and considerations:

  1. Impact on humans: how will AI use impact your employees?
  2. Alignment to ESG: is your AI strategy aligned with your wider ESG strategy? What is the impact on stakeholder value?
  3. Access to AI / outputs: will your full range of customers be able to access services and benefit from your AI?
  4. Environmental impact of AI: how does your GenAI adoption impact your net zero commitments?

Accountable

Policies are in place to determine who is responsible for the decisions made or derived with the use of the technology.

Key questions and considerations:

  1. Human in the loop / Human over the loop: who is accountable for the decisions AI makes or advises on? How do they get comfortable with this accountability?
  2. Individual privacy rights: if your AI solution uses data about individuals, are those individuals aware of how their data is being used? Are they able to opt in or out of sharing their data?
  3. Legal requirements and considerations: is the AI solution in compliance with applicable privacy laws and regulations? Have you published a System of Records Notice where required?
  4. AI governance: is there appropriate governance for your AI solution? Are there clear roles and responsibilities for continuously monitoring solution outputs? How will you consistently monitor, track and report compliance with the Trustworthy AI policies, standards and procedures?

Robust and reliable

The technology produces consistent and accurate outputs, withstands errors, and recovers quickly from unforeseen disruptions and misuse.

Key questions and considerations:

  1. Data quality: is the training data accurate, representative of real-world settings, and free of noise and outliers? Do you monitor the model for data drift (a minimal drift check is sketched after this list)? Is the model protected from data contamination?
  2. End user guidance and training methodology: how are you ensuring AI users are clear on the technology's capabilities and limitations?
  3. Model performance and monitoring: do model performance metrics meet desired thresholds? Are the model’s outputs sensitive to small variations in the inputs? Is the model continuously monitored to identify changes in performance and/or opportunities for improvement?
  4. Consistency of performance: how do you manage new versions? If a new version produces different results, how do you resolve performance issues and communicate this to stakeholders?
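
As an illustration of drift monitoring, the sketch below compares the live distribution of a single numeric feature against its training-time distribution using a two-sample Kolmogorov-Smirnov test from SciPy. The data, the single-feature focus and the 0.05 threshold are illustrative assumptions; production monitoring typically tracks many features and metrics.

```python
# Minimal data-drift sketch for one numeric feature; the data and the 0.05
# significance threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature at training time
live = rng.normal(loc=0.4, scale=1.0, size=5_000)       # same feature in production

statistic, p_value = ks_2samp(reference, live)
if p_value < 0.05:
    print(f"Drift suspected (KS statistic {statistic:.3f}, p = {p_value:.1e}); "
          "review the model before trusting its outputs")
```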

Safe and secure

The technology is protected from risks, including cyber risks, that may cause physical, emotional, environmental or digital harm.

Key questions and considerations:

  1. AI security: what types of security risks (e.g., adversarial attacks) may impact the AI solution? What is the potential impact of these risks?
  2. Policies, standards and procedures: are there clearly defined and documented policies, standards and procedures to guide consistent and ethically sound development of AI systems?
  3. Penetration testing: have you performed tailored penetration testing to simulate how adversarial groups could engineer attacks on the model? (A simple robustness probe is sketched after this list.)
  4. Access controls and change management: what are the access controls and training requirements for those operating at different stages of the AI lifecycle (e.g., developers, program managers)? Who has access, and what type of access do they have? How are changes to the AI managed?
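
A crude first probe of model fragility, short of full adversarial testing, is to perturb inputs slightly and measure how often predictions flip. The model and data below are placeholders for illustration; dedicated adversarial testing goes further by crafting worst-case inputs.

```python
# Minimal robustness probe; the model, data and noise scale are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

rng = np.random.default_rng(0)
noise = rng.normal(scale=0.05, size=X.shape)  # small input perturbation
flip_rate = np.mean(model.predict(X) != model.predict(X + noise))
print(f"{flip_rate:.1%} of predictions flip under small perturbations")
# A high flip rate suggests a fragile decision boundary that is easier to attack.
```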

How we are supporting clients

  • AI governance: the establishment of AI governance and AI risk management processes which foster, rather than stifle, innovation and reduce risk.
  • Regulatory compliance: the creation of AI risk management processes which stand up to regulatory scrutiny. Typically this includes building AI Inventories and adapting existing processes and controls to be fit for AI.
  • AI assurance: the provision of a range of assurance services to provide management, leadership and other parties with assurance that your AI is safe, robust, ethical and compliant. This can be done in a range of ways and at enterprise or system level, but often involves independent model testing and/or an assessment of how an AI risk management framework has been implemented and is operating.
  • Guardrail design and implementation: the design and deployment of technical guardrails into your AI systems which manage risk (a simple example is sketched after this list).
  • AI control design and implementation: the design and deployment of a range of controls to manage the risk associated with your AI systems. These controls range from manual ‘human in the loop’ review, through IT general controls (ITGCs) and security controls, to legal disclaimers on chatbots and training for your staff.
  • AI risk assessment: an independent assessment of the risks associated with your AI, and advice detailing the expected controls which can be implemented to manage these risks.
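
To give a flavour of what a technical guardrail can look like, the sketch below redacts likely personal data from a chatbot reply and appends a legal disclaimer before the reply is returned. The regular expressions and disclaimer wording are illustrative assumptions, not production-grade patterns.

```python
# Minimal output-guardrail sketch; patterns and disclaimer text are illustrative.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b(?:\+?\d[\d\s-]{7,}\d)\b")
DISCLAIMER = "\n\n[AI-generated response; verify before acting on it.]"

def apply_guardrails(reply: str) -> str:
    """Redact likely personal data, then append the disclaimer."""
    reply = EMAIL.sub("[redacted email]", reply)
    reply = PHONE.sub("[redacted phone]", reply)
    return reply + DISCLAIMER

print(apply_guardrails("Contact Jane at jane.doe@example.com or +44 20 7946 0958."))
```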

How do we enable Trustworthy AI?

By leveraging our unique capabilities, including strategy, technology, ethics, legal, cyber, risk and change management, we offer both comprehensive solutions and specialised services. We support your entire journey towards Trustworthy AI, from preparation through development and implementation to ongoing operation.

We help clients prepare their organisation and processes for Generative AI and its seamless integration, so it can be trusted, from understanding AI ethics and regulatory implications to developing a clear AI vision and strategy. Our teams can advise on setting up robust governance structures and managing change within the context of industry-specific, legal and ethical considerations.

AI quality is at the heart of sound AI development and use. Strong technical foundations are essential to build AI confidently and to apply the technology effectively. We help our clients to master the full spectrum of quality considerations – data management, model development and testing, cyber security, MLOps, monitoring, and issue logging and resolution – supporting both quality risk management and compliance.

We guide our clients through the complexities of AI regulation with gap analyses, support for EU AI Act conformity declarations, and robust documentation and reporting processes. Additionally, we offer specialised legal assistance on issues such as intellectual property, data privacy, and policy formulation to provide full confidence and help to prevent penalties.

Trustworthy AI benefits your business

Ensure your AI systems possess the quality, security, and compliance needed for large-scale deployment.

Boost your teams' trust in AI, increasing adoption rates, enhancing user experience, and fostering long-term motivation.

Meet and exceed customer expectations with consistent, high-quality outcomes from your AI systems and their ethical use.

Demonstrate a commitment to ethical AI and clear policies to enhance your brand’s reputation and mitigate reputational risks.

Allow your teams to innovate with direction by striking the right balance between governance and creating an innovation playground.

Ensure compliance with current regulatory frameworks and stay ahead of future ones to reduce the risk of legal issues and penalties.