
Trustworthy AI - Offering "Build"

Developing the right AI solution for any organization requires careful planning, meticulous implementation and strict controls to avoid unintended consequences. Left to chance, novel technologies like these can lead to surprises – not always in a positive sense. This is true for any system made by human hands – and AI is no exception. There are a lot of idiosyncrasies specific to AI that need special attention to get it right. These principles are best illustrated with the following use cases based on the six dimensions of Deloitte's Trustworthy AI Framework.
Are you interested in our Trustworthy AI Framework?
Deep dives and relevant cases

 

Deloitte Trustworthy AI Framework

Build: Fair & Impartial

A bank is looking for an algorithm to fine-tune its credit risk decisions. Regulatory authorities expect this algorithm to use only standard criteria (e.g., credit ratings, employment status, disposable income) and to prevent discrimination against specific groups (by gender, ethnic identity or socioeconomic background).

Deloitte has developed Model Guardian and other tools to identify and measure biases in raw data, training data and model design. Using Model Guardian, we can detect, analyze and assess biases early in the data preparation process and for a wide range of AI models. Doing so ensures that the underlying data for a particular use case is representative and results are not distorted by unintended biases. Model Guardian’s tracking feature monitors predictive power against the degree of perceived bias of successive models.
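One widely used bias check of the kind described above is the disparate impact ratio: the rate of favorable outcomes for a protected group divided by the rate for a reference group. The sketch below is purely illustrative – the column names and the toy data are invented, and this is not Model Guardian's actual API.

```python
# Disparate impact ratio: favorable-outcome rate for the protected group
# divided by the rate for the reference group. Values far below 1.0
# indicate a potential bias worth investigating.

def disparate_impact(records, group_key, outcome_key, protected, reference):
    def rate(group):
        rows = [r for r in records if r[group_key] == group]
        return sum(r[outcome_key] for r in rows) / len(rows)
    return rate(protected) / rate(reference)

# Hypothetical loan applications: "group" and "approved" are illustrative names.
applications = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

ratio = disparate_impact(applications, "group", "approved", "B", "A")
print(round(ratio, 2))  # 0.33 -- well below the common 0.8 rule of thumb
```

A tracking feature like Model Guardian's would compute such metrics for each successive model version and plot them against predictive power.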

The bank’s credit risk model makes decisions purely on the basis of hard criteria without (potentially illegally) discriminating on protected classes (ethnicity, gender, ...) that could result in prejudiced and unfair decisions. As a result, the bank is not worried about reputational damage or penalties from regulators due to discrimination in the AI.

Build: Robust & Reliable

An automotive manufacturer is launching a new fleet of cars with an automatic lane detection feature, corresponding to Level 4 on the SAE automated driving scale. The system has difficulty detecting roadworks, because no definitive, established standards for marking them have yet emerged.

During the development phase, Deloitte already considers multiple design features to reduce risks. The focus is squarely set on developing resilient modeling structures and critical models to protect against adversarial attacks or unforeseen events. Using state-of-the-art techniques, such as randomized smoothing or generative adversarial networks, we train the deep neural networks to be robust against a wide variety of roadwork configurations. The development process also includes stress tests and adversarial tests to deploy the algorithm in different contexts and retrain the AI for situations where the model may not be sufficiently robust.
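The idea behind randomized smoothing can be shown in a few lines: instead of trusting a single prediction, the smoothed classifier takes the majority vote of the base model over many noise-perturbed copies of the input. The toy one-dimensional "classifier" below is an invented stand-in, not a real lane-detection network.

```python
import random

def base_classifier(x):
    # Brittle decision rule with a narrow "adversarial pocket" around x == 1.0,
    # standing in for a deep network that fails on one unusual input.
    if 0.95 < x < 1.05:
        return 0
    return 1 if x > 0.0 else 0

def smoothed_classifier(x, sigma=0.5, n=2000, seed=42):
    # Majority vote of the base classifier over n Gaussian-perturbed inputs.
    rng = random.Random(seed)
    votes = [base_classifier(x + rng.gauss(0.0, sigma)) for _ in range(n)]
    return max(set(votes), key=votes.count)

print(base_classifier(1.0))      # 0 -- the pocket fools the base model
print(smoothed_classifier(1.0))  # 1 -- noise averages the pocket away
```

Real randomized smoothing additionally yields a certified robustness radius from the vote statistics; the sketch only shows the voting mechanism.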

The lane detection feature delivers reliable results in all the situations specified by the client’s quality assurance department. The automotive manufacturer meets the regulatory standards, and the new fleet operates safely and as intended.

Build: Preserving Privacy

A telecommunications company is working on an AI system to predict which product to offer its customers at which price. The company would also like to determine the price point that would make individual sales calls economically viable. During the development phase, the company realizes that the data set it has is too sparse to deliver adequate results, as only a subset of the customers gave consent to use the data under the General Data Protection Regulation (GDPR). The company has additional data in its possession but lacks the authority to use it on the grounds of data protection.

Deloitte leverages its Anonymization Framework to develop the AI system in a GDPR-compliant manner. There are two main approaches: constructing synthetic training data and anonymizing existing data sets with state-of-the-art methods such as differential privacy or k-anonymity. These methods ensure hackers cannot trace the AI output back to an individual row in the data set, allowing us to mask the influence of any particular individual on the outcome and protect the privacy of each individual with a high degree of certainty.
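The core building block of differential privacy is easy to sketch: a released statistic gets noise scaled to the query's sensitivity, so the presence or absence of any single customer cannot be inferred from the output. The example below uses the Laplace mechanism for a simple counting query; the parameter values and the customer count are invented for illustration and are not drawn from Deloitte's Anonymization Framework.

```python
import math
import random

def dp_count(true_count, epsilon, rng):
    # A counting query has sensitivity 1, so the Laplace scale is 1/epsilon.
    # Sample Laplace noise via inverse-CDF transform of a uniform draw.
    u = rng.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(math.log(1 - 2 * abs(u)), u)
    return true_count + noise

rng = random.Random(0)
consented_customers = 12873  # hypothetical true count
released = dp_count(consented_customers, epsilon=0.5, rng=rng)
print(round(released))  # close to, but deliberately not exactly, the true count
```

Smaller values of epsilon mean stronger privacy (more noise) at the cost of accuracy; choosing that trade-off per use case is the substantive design decision.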

The final model complies with current data privacy regulations and can resist de-identification attacks. The telecommunications company is able to utilize highly sensitive data as training data while adhering to strict legal restrictions. Knowing the system is robust, the company can now focus its efforts on achieving the strongest performance from the model.

Build: Safe & Secure

A regulatory agency needs to implement a more effective system to detect financial crimes such as insider trading. The agency has successfully applied AI to discover increasingly nuanced patterns. However, criminal actors are testing means to manipulate the agency's detection algorithm to hide their illegal activity beneath the detection threshold. As soon as the model is available online, money launderers observe how it behaves and attempt to reverse engineer its methodology. Their goal is to trick the model into perceiving their fraudulent transactions as legitimate.

Deloitte experts specify an environment and the criteria that will allow the model to operate safely, defining potential threat scenarios early in the design process to address possible attack vectors during the modeling phase. 
The system design considers both traditional cyber threats and AI-specific risks, for example the data or the internal models leaking to the public. This dual vulnerability underscores the importance of developing the system within a secure infrastructure where access is restricted and the model can remain confidential. In addition, we only use secure software to develop the model and state-of-the-art techniques to train the AI, such as adding noise to the training data.
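One of the hardening techniques named above – adding noise to the training data – can be sketched in a few lines: each training sample is duplicated with small random perturbations, so the model does not overfit to exact input values that an attacker could probe and exploit. Data, noise level and function names are illustrative assumptions.

```python
import random

def augment_with_noise(samples, sigma=0.1, copies=3, seed=7):
    # samples: list of (feature_vector, label) pairs.
    # Returns the originals plus `copies` Gaussian-perturbed versions of each,
    # with labels preserved.
    rng = random.Random(seed)
    augmented = list(samples)
    for features, label in samples:
        for _ in range(copies):
            noisy = [x + rng.gauss(0.0, sigma) for x in features]
            augmented.append((noisy, label))
    return augmented

train = [([0.2, 1.4], 0), ([0.9, 0.3], 1)]
print(len(augment_with_noise(train)))  # 2 originals + 2 * 3 noisy copies = 8
```

In a production pipeline the same idea is usually applied on the fly during each training epoch rather than by materializing the augmented set up front.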

With “Security by Design” ensured, the regulatory agency can deploy its model confident that potential vulnerabilities and their implications have been addressed. There is also no need for the agency to spend valuable resources on additional security experts after development to close security gaps within the AI system.

Build: Responsible & Accountable

A bank is building an AI-enabled robo-advisor designed not only to help customers shape their ideal portfolio, but also to automatically buy and sell assets. The bank sees a potential problem in giving the robo-advisor full responsibility for the proper management of each portfolio. After all, a robot may not fully register all that is happening in its environment, e.g., changes to purchasing behavior of individual stock traders, and may exceed predefined risk thresholds.

For an area as complex as the stock market, Deloitte adds control limits and an “emergency brake” to the robo-advisor software, by which the client can switch off the autonomy if the robot behaves erratically due to significant changes in environmental variables or fails to comply with predefined risk thresholds.
The system tracks key indicators set in advance, e.g., daily transaction volume or the current risk of a customer's portfolio, to detect changes in the financial markets and breaches of the predefined risk thresholds. The system also includes an early warning system that alerts the product owners to potential changes and reminds them to keep the robo-advisor up to date through periodic re-training of the underlying algorithms.
Audit trails round out the solution: they capture important regulatory data and changes in the model lifecycle, e.g., volatility changes in active trading, and record them seamlessly and chronologically.
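The control-limit logic behind such an "emergency brake" is conceptually simple: trading is allowed only while every monitored indicator stays within its predefined limit. The indicator names and limit values below are invented for illustration, not taken from an actual robo-advisor.

```python
# Hypothetical risk limits set in advance by the product owners.
RISK_LIMITS = {"portfolio_var": 0.05, "daily_turnover": 1_000_000}

def check_limits(indicators, limits=RISK_LIMITS):
    # Return the list of indicators that breach their limit.
    return [k for k, v in indicators.items() if v > limits.get(k, float("inf"))]

def trading_enabled(indicators):
    # The "emergency brake": any breach disables autonomous trading.
    return not check_limits(indicators)

print(trading_enabled({"portfolio_var": 0.03, "daily_turnover": 400_000}))  # True
print(trading_enabled({"portfolio_var": 0.09, "daily_turnover": 400_000}))  # False
```

In practice each breach would also be written to the audit trail with a timestamp, so poor decisions can be traced back and corrected.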

Despite the complexity of the stock market environment, the bank can safely operate the robo-advisor. The “emergency brake” and the early warning system limit the downside risk for customers, while the entries in the audit trail log allow the bank to trace back poor decisions made by the system, correct them and prevent them from happening again in the future.

Build: Transparent & Explainable

A hospital plans to launch an AI-enabled virtual assistant to help doctors make faster, better decisions and detect disease with greater accuracy. AI-supported MRI imaging, for example, can automatically detect potentially malignant tumors. Regardless of the system’s benefits, doctors as well as patients expect the algorithm to reveal how it arrives at its diagnosis. This is necessary to prevent doctors from recommending ineffective treatments more likely to put patients at risk than to make them better. When it is a matter of life and death, it is essential for the AI system to fully explain its decision-making process.

Deloitte implements a state-of-the-art AI model that improves diagnostic accuracy and delivers the desired transparency. Thanks to the range of tools in Deloitte's Lucid [ML], the system can explain the decision drivers at the global or the local level. Key features/drivers of the expected results are visualized in such a way as to convey which regions within the image determine the diagnosis. Lucid [ML] articulates the drivers in the model in a straightforward way, helping doctors understand and validate the process.
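One common technique behind region-level image explanations is occlusion sensitivity: mask each region in turn, re-score the model, and rank regions by how much the score drops. The toy "model" and the 2x2 "scan" below are invented stand-ins to show the mechanism; this is not Lucid [ML]'s actual implementation.

```python
def model_score(image):
    # Toy stand-in "tumor score": the sum of pixel intensities.
    return sum(sum(row) for row in image)

def occlusion_importance(image):
    # Zero out each pixel in turn and measure the drop in the model score;
    # the biggest drops mark the regions that drive the prediction.
    base = model_score(image)
    importance = []
    for i, row in enumerate(image):
        for j, _ in enumerate(row):
            occluded = [r[:] for r in image]
            occluded[i][j] = 0.0
            importance.append(((i, j), base - model_score(occluded)))
    return sorted(importance, key=lambda t: -t[1])

scan = [[0.1, 0.9], [0.2, 0.1]]
print(occlusion_importance(scan)[0][0])  # (0, 1): the brightest region drives the score
```

On real MRI data the same ranking, drawn as a heatmap over the image, is what gives doctors a visual answer to "which regions determined this diagnosis?".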

The hospital now has a cancer detection system using a high-tech neural network that offers accuracy, transparency and traceability. With the ability to detect even more subtle signs of malignant tumors, doctors have a better chance at saving lives; with the user-friendly transparency dashboard, doctors and patients have more confidence in the quality of the diagnosis.

Take Action Now!

 

At Deloitte, we can help you meet the high expectations of companies and regulatory agencies, particularly with regard to explainability and fairness in AI. 

Our experienced data scientists develop AI-enabled solutions or adapt your existing AI systems, always staying true to Deloitte’s Trustworthy AI principles and always making sure your AI solution adds reliable and secure value to your business every day. Developing end-to-end solutions for your AI initiatives is just the start – we like to think of ourselves as a key part of your team. 

Trustworthy AI Framework | Deloitte

 

Artificial intelligence (AI) will impact our everyday lives as well as all sectors of the economy. But to achieve the promise of AI, we must be ready to trust in its outputs. What we need are trustworthy AI models that satisfy a set of general criteria.

How can it help you?

 

Find more relevant cases and information about trustworthy AI in your industry or sector.
