In business and in society as a whole, expectations for AI are sky-high. But even the most advanced, high-performance AI model will not achieve its full potential unless it is carefully integrated into the system and process landscape. It is vital for organizations to make sure their AI models are not only understood in the lab but can also be trusted in operation. The Deloitte Trustworthy AI Framework provides a useful guide for companies to navigate this stage of the lifecycle, building on the experience gained at earlier stages:
Integrate: Fair & Impartial
Challenge
A police department uses AI to predict criminal activity across the neighborhoods it serves. The officers plan to intensify patrols in areas where the AI expects higher crime rates rather than in those it classifies as safer. These predictions are designed to help the department make more efficient use of its resources. Shortly after launch, however, a vicious cycle takes shape, caused by a self-perpetuating bias unwittingly built into the AI. When more officers are sent to patrol one particular neighborhood than another – despite both having similar base crime rates – the distribution of touchpoints becomes imbalanced. Intensified patrolling results in more arrests and thus more data points than in other, similar neighborhoods. Unaware of this selection bias, the AI system associates more data points with greater danger.
Solution
To improve the AI through ongoing data collection, Deloitte helps the police department implement control mechanisms that detect and mitigate bias in operation, notably by ensuring comparable sampling distributions across neighborhoods and avoiding selection bias. Should model accuracy drift, police staff can reevaluate the data and interrupt the biased feedback loop.
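Such a control mechanism can be lightweight. The sketch below (neighborhood names, counts and threshold are all hypothetical) flags neighborhoods whose share of collected data points deviates from an equal-sampling target, which is one simple way to surface the feedback loop described above:

```python
from collections import Counter

def sampling_imbalance(records, tolerance=0.10):
    """Compare the share of collected data points per neighborhood
    against an equal-sampling target and flag neighborhoods whose
    share deviates by more than `tolerance`."""
    counts = Counter(r["neighborhood"] for r in records)
    total = sum(counts.values())
    target = 1.0 / len(counts)  # equal sampling across neighborhoods
    return {
        hood: n / total - target
        for hood, n in counts.items()
        if abs(n / total - target) > tolerance
    }

# Skewed collection: 70 of 100 incident records come from "north",
# although all three areas should contribute roughly equally.
records = ([{"neighborhood": "north"}] * 70
           + [{"neighborhood": "south"}] * 15
           + [{"neighborhood": "east"}] * 15)
flagged = sampling_imbalance(records)
# "north" is over-sampled (positive deviation); the others are under-sampled.
```

A positive deviation signals over-patrolled areas generating disproportionately many data points; staff can then rebalance collection before retraining.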
Outcome
Through a better understanding of how the AI works, the police department can use it as intended and achieve its two primary goals: maximizing safety across all its neighborhoods and allocating its resources efficiently. The AI system has a built-in mechanism that detects potential disparities between neighborhoods and automatically recommends how to resolve them. This gives officers flexibility in how they assign patrols and ample opportunity to improve the AI with fresh data.
Integrate: Robust & Reliable
Challenge
A home improvement retail chain has launched a new AI system to predict sales figures more accurately at the product level. However, sales figures vary across stores due to demography, proximity to urban centers and other regional differences. The retailer must be careful not to train its new AI model on data from only a subset of stores. Otherwise, the model would fail to generalize, and the nationwide roll-out would work only in regions with similar characteristics. The situation is further complicated by the planned integration of the AI into a complex IT landscape with a variety of legacy systems.
Solution
Deloitte deploys the AI model and the corresponding data flows into the existing architecture, ensuring that the data selected for training is representative of the data the application will face in production. This improves the accuracy of the model's decisions, both from the outset and over time, by building resilience to potential shifts in the data. At the same time, Deloitte tests the CI/CD pipeline for the AI model and the interfaces to peripheral systems, documenting all controls performed in order to guide future testing and updates. In the run-up to the roll-out, we conduct wide-ranging tests of the model itself, now integrated into the overall IT infrastructure.
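Resilience to shifts in the data is typically monitored with a statistical drift check. As one illustration (not the specific tooling used in this engagement), the sketch below implements the two-sample Kolmogorov-Smirnov statistic from scratch; the sales figures are invented:

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical CDFs of two samples. Values near 0 mean the
    distributions match; values near 1 mean they have drifted apart."""
    a, b = sorted(sample_a), sorted(sample_b)
    values = sorted(set(a) | set(b))
    def cdf(s, v):
        return sum(x <= v for x in s) / len(s)
    return max(abs(cdf(a, v) - cdf(b, v)) for v in values)

# Daily unit sales used for training vs. values observed after roll-out.
train = [12, 15, 14, 16, 13, 15, 14, 17, 16, 15]
live_same = [14, 15, 13, 16, 15, 14, 16, 15, 13, 17]
live_drifted = [25, 28, 27, 30, 26, 29, 31, 27, 28, 30]

assert ks_statistic(train, live_same) < 0.5     # distributions agree
assert ks_statistic(train, live_drifted) > 0.9  # drift: retraining warranted
```

When the statistic crosses a chosen threshold, the pipeline can alert staff to reevaluate the training data before predictions degrade further.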
Outcome
Thanks to our extensive testing and representative data sets, the home improvement retail chain has a robust AI model in place and can count on more accurate forecasts of future sales.
Integrate: Preserving Privacy
Challenge
With the introduction of the General Data Protection Regulation (GDPR), awareness of data privacy has grown substantially; many consider privacy an inalienable human right. This is highly relevant for the healthcare industry, where sensitive patient data requires secure handling and storage. One example is the personal information provided to a diagnostic chatbot on a hospital website. Cloud technologies add another layer of complexity and often attract the most media coverage, but hospitals must also ensure this highly personal information is protected from unauthorized access, whether by hospital staff or by outside attackers.
Solution
Using encryption (client-side or server-side), anonymization, and identity and access management (IAM), Deloitte protects the hospital's sensitive and personally identifiable information (PII) in compliance with GDPR. All data inflows and outflows are under constant supervision, and the patients' privacy is safeguarded.
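One common building block behind such protection is pseudonymization: replacing patient identifiers with keyed hashes so records remain linkable across systems without exposing PII. A minimal sketch with Python's standard library (the key and identifiers are placeholders):

```python
import hmac
import hashlib

# Secret key held by the hospital; in practice this would live in a
# key-management service, never in source code.
SECRET_KEY = b"replace-with-a-managed-key"

def pseudonymize(patient_id: str) -> str:
    """Replace a patient identifier with a keyed hash (HMAC-SHA256).
    Records stay linkable, but the original ID cannot be recovered
    without the key."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": pseudonymize("P-104-77"), "diagnosis_code": "J45"}
# The same input always maps to the same token, enabling joins across systems:
assert pseudonymize("P-104-77") == record["patient_id"]
```

Unlike a plain hash, the keyed variant resists dictionary attacks on short identifier spaces, since an attacker without the key cannot precompute the mapping.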
Outcome
The hospital can offer its patients services that leverage the value of their data for their own benefit, while making sure it is shielded from unauthorized access.
Integrate: Safe & Secure
Challenge
A health insurance company is about to go live with an AI system designed to provide better risk coverage for their patients. The insurer intends to deploy into the cloud infrastructure of an external provider, taking advantage of its advanced firewall and other in-built security mechanisms. The insurance company seeks assurance that the system and its IT infrastructure are not vulnerable to cyber-threats that could allow adversaries to mount a denial of service (DoS) attack and shut down the system or use malware to steal sensitive data or manipulate the AI system.
Solution
Deloitte implements a system designed to provide secure identity and access management for the cloud service. At the same time, we check the configurations used in the company's services and operations for errors. A specially trained Red Team uncovers potential vulnerabilities through penetration tests and offensive techniques such as data poisoning, model evasion and DoS attacks. Deloitte also protects the cloud environment with a 24/7 security operations center combined with threat intelligence services.
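A standard first line of defense against the request floods used in DoS attacks is rate limiting at the service edge. The token-bucket sketch below illustrates the principle only; it is not the specific mechanism deployed in this engagement, and the capacity and rate are arbitrary:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: each client gets `capacity` tokens
    that refill at `rate` per second; requests beyond that budget are
    rejected, blunting request-flood traffic before it reaches the AI."""
    def __init__(self, capacity: int, rate: float):
        self.capacity, self.rate = capacity, rate
        self.tokens, self.last = float(capacity), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=5, rate=1.0)     # 5-request burst, 1/s sustained
results = [bucket.allow() for _ in range(10)]  # simulated burst of 10 requests
# The first 5 requests pass; the flood beyond the burst budget is rejected.
```

In a cloud deployment this logic usually lives in an API gateway or load balancer rather than the application itself.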
Outcome
The health insurance company can credibly claim to their policyholders that their cloud solution is well protected against cyber-attacks.
Integrate: Responsible & Accountable
Challenge
A bank is designing an AI-enabled risk model to assess the creditworthiness of loan applicants. Because the bank sees itself as a responsible lender and a customer-focused financial institution, it wants to explain the decision-making process in a way that is understandable for customers.
Solution
Deloitte provides extensive support for the bank’s plan to provide a customer experience fulfilling both the functional needs and high standards that customers expect of the institution. We implement a feedback loop toward customers that makes the decision-making process more transparent and provides instant access to the relevant decision drivers. If customers report a negative experience, the bank can use the integrated feedback loop to establish direct contact with the responsible member of staff.
Outcome
The bank has an enterprise-wide control system in place to ensure error-free operation of all processes and an AI monitoring system to ensure all of the decisions are transparent and documented. For ethical reasons, the bank will also inform loan applicants that their applications have been processed using an AI-enabled system, which will allow the bank to continue to use its innovative risk classification system in good conscience.
Integrate: Transparent & Explainable
Challenge
An insurance company notices an increase in complaints from certain customer groups. In particular, claims for minivans are consistently – yet inexplicably – rejected automatically. The claimants are exasperated and demand an explanation. Customer support invests considerable time and effort investigating these automated decisions in order to provide a response.
Solution
The explainable AI toolset Lucid [ML], developed at Deloitte's aiStudio, enables companies to display an AI system's decision-making process in an intuitive way, suitable for both technical and non-technical staff. By integrating the toolset into the insurer's AI application, a dedicated field in the online user interface proactively offers customers insight into the rationale for a claim denial. The feature is first tested internally with customer service staff to verify that the underlying AI claims adjudicator operates correctly, then incorporated into the customer-facing web portal. Claimants can now see for themselves why their claims were rejected.
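Lucid [ML]'s internals are not shown here, but the underlying idea, surfacing the per-feature drivers of a decision, can be illustrated with a toy linear claim-scoring model. The feature names, weights and threshold below are all hypothetical:

```python
# Hypothetical weights of a linear claim-scoring model; positive
# contributions push the score toward rejection.
WEIGHTS = {"vehicle_type_minivan": 1.8, "claim_amount_k": 0.05, "prior_claims": 0.6}
BIAS = -2.0
THRESHOLD = 0.0  # score above this -> claim rejected

def explain(claim: dict):
    """Score a claim and return per-feature contributions sorted by
    impact, so a customer-facing portal can show the main drivers."""
    contributions = {f: WEIGHTS[f] * claim.get(f, 0.0) for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    drivers = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return ("rejected" if score > THRESHOLD else "accepted"), drivers

decision, drivers = explain(
    {"vehicle_type_minivan": 1, "claim_amount_k": 3.0, "prior_claims": 1}
)
# decision == "rejected"; the minivan indicator is the largest driver,
# which is exactly the kind of pattern the complaints pointed to.
```

For non-linear models the same interface is typically filled by attribution methods such as SHAP values, but the presentation to the customer stays identical: a ranked list of decision drivers.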
Outcome
Adding an explainability element to the insurance company’s portal increases policyholder satisfaction. Greater transparency in the claims process and instant availability of the reasons for claim denials improve the insurer’s credibility. Policyholders now have faster, higher quality information at the click of a button and the company saves time and effort in customer support.
Take Action Now!
When it comes to integrating an AI model into your existing corporate infrastructure, it is not enough to claim your AI application is trustworthy; you must prove it. At Deloitte, we support the safe roll-out of your AI system and its secure connection with the outside world. Our primary focus is protecting your system from outside attacks and delivering scalability. Get in touch and let us help you build an AI system that everyone can rely on.
Trustworthy AI Framework | Deloitte
Artificial intelligence (AI) will impact our everyday lives as well as all sectors of the economy. But to achieve the promise of AI, we must be ready to trust in its outputs. What we need are trustworthy AI models that satisfy a set of general criteria.
How can it help you?
Find more relevant cases and information about trustworthy AI in your industry or sector.