The first step toward sustainable and ethical AI practices is to develop a comprehensive AI strategy. Before even considering the design or implementation of an AI model, organizations must identify the contextual challenges that could arise and how to resolve them. The issues to clarify at this early stage are: the specific context in which AI will be used, the potential impact AI may have on the enterprise and the general public, and viable means of mitigating AI-related risks. After all, just because artificial intelligence can be used doesn't mean that it should be. Establishing a sound AI strategy has never been more urgent, and Deloitte's Trustworthy AI Framework helps organizations understand why – and what can be done about it.
Strategize: Fair & Impartial
Challenge
A media company plans to automate parts of its recruitment process and take greater advantage of efficient AI-based decision-making. Opinions among the staff differ considerably on whether AI is fair and whether it can make objective decisions. Some argue that there should be a gender quota for new hires that reflects the company's current diversity rate, while others believe the quota should apply to the pool of applicants. With such different perceptions of what is fair and what is not, the first step is to agree on a shared definition of fairness. People also tend to underestimate their own cognitive biases and implicit prejudices – they may even be completely unaware of them – which is how those biases keep finding their way into data and algorithms.
Solution
Working together with the company, we develop an approach based on multi-stakeholder participation to ensure diversity and inclusion at an early stage and to agree on a suitable definition of fairness. This gives our experts a good basis for workshops to raise awareness about implicit and cognitive biases. We also draft a code of conduct that will limit any potential misuse of the AI. The best way to combat bias is to select appropriate training data. Not only do we ensure the underlying data is representative, but that the team developing the AI is diverse enough to be sensitive to the perils of bias.
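Once stakeholders agree on a shared definition of fairness, that definition can be made measurable. As a minimal, hypothetical sketch (the applicant data and the choice of "demographic parity" as the agreed definition are illustrative assumptions, not part of the engagement described above), one could monitor whether shortlisting rates differ across groups:

```python
# Minimal sketch: making a shared fairness definition measurable.
# "Demographic parity" here means selection rates should be similar
# across groups. The screening data below is entirely hypothetical.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, sel in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(sel)
    return {g: selected[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening outcomes: (gender, was_shortlisted)
outcomes = [("f", True), ("f", False), ("f", True), ("f", False),
            ("m", True), ("m", True), ("m", True), ("m", False)]

print(selection_rates(outcomes))  # {'f': 0.5, 'm': 0.75}
print(parity_gap(outcomes))       # 0.25
```

A check like this does not decide *which* definition of fairness is right – that remains the multi-stakeholder question above – but it turns the agreed definition into something the team can test against its training data and its model's outputs.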
Outcome
By including stakeholders who are both directly and indirectly affected by the AI, the company is able to incorporate different views into its definition of fairness. The workshops also reveal that both unintended and deliberate biases exist within the company, giving management an opportunity to resolve these issues while also creating the best conditions for fair AI practices.
Strategize: Robust & Reliable
Challenge
A utility company wants to use AI to predict cyberattacks and protect its system-critical infrastructure, but it has little awareness of the technical challenges, prevailing regulations and potential pitfalls. With so little experience dealing with AI, management is worried that the company and its infrastructure could face serious consequences if the predictions are wrong.
Solution
The project team starts by identifying potential risks, providing suitable preventative measures and advising the utility company about the safeguards needed to avoid AI-specific difficulties. Deloitte provides support in building a suitable technology infrastructure that will guarantee stable operation of the AI system. A team of experts inspects the company’s systems to detect any technology or operational gaps that could prevent robust application of AI, testing the company’s processes and infrastructure in a series of gap analyses. It is particularly important to determine whether the existing system architecture is compatible with the technical and regulatory requirements of the planned AI system. Based on the findings of the analysis, the team designs a reliable (enterprise/IT) architecture.
Outcome
Deloitte’s recommendations help the utility company adopt a risk-conscious approach to the AI project, identifying issues early on and taking appropriate precautions. This provides the company with a solid foundation for future-proof implementation and resilient use of AI.
Strategize: Preserving Privacy
Challenge
A FinTech company is looking to develop AI-enabled products, both as part of the open banking initiative and as a greenfield project. With such wide-ranging regulatory requirements and calls for stricter rules on the protection of personally identifiable information (PII), the company is unsure which regulations currently apply and how to comply with them in this new venture.
Solution
The project begins with a newly drafted data protection concept for the FinTech company. The first step is to identify the many regulatory requirements for legally compliant use of data and AI in the financial industry. In Deloitte's RegTech Lab, the FinTech team and two expert teams from the Center for Data Privacy and the Center for Regulatory Strategy come up with a possible solution. This includes compliance checks against key legislation and directives, in which the EU's General Data Protection Regulation (GDPR) and the new Payment Services Directive (PSD2) both play an important role. Next, Deloitte develops a comprehensive data governance model that ensures compliance with the regulations identified. The project group designs practical/actionable instructions regarding data access (privileges), data storage (data sovereignty) and data processing (data standards, interoperability, anonymization).
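Instructions like the ones on data processing can be enforced directly in code. The following is a minimal sketch of one common technique, pseudonymization with a keyed hash, so that records remain joinable without exposing the raw identifier; the field names, sample record and key handling are illustrative assumptions, not part of the project described above:

```python
# Minimal sketch: pseudonymizing a PII field with a keyed hash so records
# can still be joined across systems without exposing the raw identifier.
# Field names and the key are hypothetical; a real deployment would keep
# the key in a dedicated secrets store, never in source code.
import hmac
import hashlib

SECRET_KEY = b"placeholder-key"  # hypothetical; load from a secrets store

def pseudonymize(value: str) -> str:
    """Deterministic keyed hash: same input -> same token; not reversible without the key."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"customer_id": "DE-4711", "iban": "DE89 3704 0044 0532 0130 00", "balance": 1250.0}
safe = {
    "customer_id": pseudonymize(record["customer_id"]),  # joinable token replaces the raw ID
    "balance": record["balance"],                        # non-PII field kept as-is
}
# The IBAN is dropped entirely rather than masked: data minimization.
print(safe)
```

Note that under the GDPR, pseudonymized data is still personal data; full anonymization requires stronger guarantees, which is why the governance model also covers access privileges and data sovereignty.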
Outcome
The resulting data privacy concept allows the FinTech company to develop successful, regulatory-compliant AI tools.
Strategize: Safe & Secure
Challenge
A company is launching a line of smart home products, such as home assistants, to provide customers with a state-of-the-art centralized monitoring system. To deliver the functionality customers need and make optimal use of the system, the home assistants have to collect a wide range of data – for example, whether the lights in the house are on or off – and store it temporarily. But because the system is always connected to the outside world, it is highly vulnerable: criminal actors could hack the system and use the data to find out whether the homeowner is away and plan a burglary.
Solution
The Deloitte Cyber Strategy team assists the company in developing preventative measures before taking the home assistant live, using a threat landscape approach to identify all possible cyber risks, including data theft, at an early stage and to develop effective countermeasures. Finally, Deloitte hosts a series of workshops and lab sessions to train the company staff in responsible and secure AI practices, helping them develop awareness of system-critical challenges and potential cyberattacks.
Outcome
These well-designed smart home products and the focus on data security give the company a distinct competitive advantage: the system supports early detection of cyber risks, and the staff is well prepared to respond quickly to cyberattacks.
Strategize: Responsible & Accountable
Challenge
A private clinic is looking for an AI-enabled decision aid that uses patient data to prescribe the most appropriate medication. The aim is to provide more efficient treatment while also reducing personnel costs. However, during the project's initial development and design phase, several key questions remain unanswered: whether the system is practical or even advisable, who will be responsible and/or accountable for it, and what other implications it might have.
Solution
The Deloitte team conducts its initial impact/value assessments and acceptance tests in the clinic to determine whether the planned AI system is ethically responsible and whether it is something patients want. At the same time, our subject matter experts work closely with clinic staff to develop a "chain of accountability" that will determine what measures to take and who is responsible if the AI system prescribes the wrong medication, puts certain patients at a disadvantage or prescribes an incorrect dosage. The team takes part in a workshop to assign and define the roles and responsibilities of different stakeholders across all phases of the AI lifecycle. At the same time, project managers assess the digital maturity of the workforce and raise awareness of ethical issues in Deloitte's Corporate Digital Responsibility Lab. The expert team also advises the clinic as it establishes the technical and operational guidelines for later project phases, supports the clinic during the roll-out of the AI governance framework/monitoring mechanisms, and tests them in a hands-on lab. This enables clinic management to systematically identify, monitor and audit all risks/objectives.
Outcome
The clinic staff is now more aware of what constitutes responsible AI use and of what potential risks and objectives to include in their strategy and review process. In addition, the clinic has a clear idea of which stakeholders are accountable for which phases of the AI lifecycle.
Strategize: Transparent & Explainable
Challenge
An insurance company wants to use AI to make faster, more precise decisions when calculating insurance premiums. Ideally, the AI-enabled calculator will not only deliver more exact outcomes but also make the process easier to understand for a variety of stakeholders. When insurance premiums go up, most customers want to know why – and under Article 22 of the EU's General Data Protection Regulation (GDPR), they have every right to. Developers need to fully understand how their AI works in order to improve the transparency of these automated decisions. Ultimately, the AI solution must calculate insurance premiums as accurately as possible – and do so in a way that is straightforward and easy to understand.
Solution
Our experts use Deloitte's Stakeholder Assessment tool to identify who or what is involved in every step of the process from development to AI-enabled decision-making. We collect input from all stakeholders (i.e., developers, insurers, prospective insurance customers) and use their interests and preferences to determine what kind of explanations they need – e.g., a global explanation to help people understand the model itself, or a local explanation to clarify one specific automated decision – and how to present them, whether as a visualization or in text. The objective is to ensure that the decision-making process is completely transparent, without jeopardizing the accuracy of the AI-enabled system or violating regulations. At the same time, we establish a systematic framework that makes it easier to document the data we collect and use, which is important if we need to retrace the decision-making process.
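To make the global/local distinction concrete: a local explanation answers "why did *my* premium come out at this number?". A minimal sketch of one simple approach – comparing each input against a baseline customer – is shown below; the toy premium formula, feature names and baseline values are hypothetical stand-ins, not the insurer's actual model:

```python
# Minimal sketch of a "local" explanation for one premium decision:
# swap each feature to a baseline value and report how much the premium
# changes. The formula and features below are hypothetical stand-ins.

BASELINE = {"age": 40, "claims": 0, "horsepower": 100}  # assumed "reference" customer

def premium(f):
    """Toy annual premium in EUR; a stand-in for the real calculator."""
    young_driver = 150 if f["age"] < 25 else 0
    return 300 + young_driver + 120 * f["claims"] + 0.8 * f["horsepower"]

def local_explanation(f):
    """Per-feature contribution: premium change vs. the baseline customer."""
    contributions = {}
    for name in f:
        swapped = dict(f, **{name: BASELINE[name]})
        contributions[name] = premium(f) - premium(swapped)
    return contributions

customer = {"age": 22, "claims": 2, "horsepower": 150}
print(premium(customer))            # 810.0
print(local_explanation(customer))  # {'age': 150.0, 'claims': 240.0, 'horsepower': 40.0}
```

Because this toy model is additive, the per-feature contributions sum exactly to the difference between the customer's premium and the baseline premium; for real, non-additive models, established local-explanation techniques generalize this idea. A text rendering for the customer could then read: "Your premium is EUR 430 above our reference, mainly due to your two prior claims (+240) and driver age (+150)."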
Outcome
Identifying stakeholders early on in the process enables the insurance company to draft a few different approaches to the AI solution and compare them, ultimately leading to a solution that provides straightforward explanations for all relevant stakeholders as well as premium calculations that are as accurate as possible.
Take Action Now!
Trustworthy AI starts with a solid strategy. Most likely, you will discover major challenges even before the AI lifecycle begins – and those challenges will dictate the direction the project takes. That's where Deloitte's Trustworthy AI Framework comes in: a practical tool to help you identify and minimize your risks. Deloitte will work with you to develop a custom, people-focused AI strategy that addresses all technological and ethical challenges head on, and you can count on our experts to support you from the very first step of your AI journey.
Artificial intelligence (AI) will impact our everyday lives as well as all sectors of the economy. But to achieve the promise of AI, we must be ready to trust in its outputs. What we need are trustworthy AI models that satisfy a set of general criteria.
How can it help you?
Find more relevant cases and information about trustworthy AI in your industry or sector.