
A call for transparency and responsibility in Artificial Intelligence

Artificial intelligence (AI) is increasingly used for decisions that affect our daily lives – even potentially life or death ones. Deloitte therefore calls for transparency and responsibility in AI: AI that is explainable to employees and customers and aligned with the company’s core principles.
Read more about algorithms in the Public Sector


Media coverage of artificial intelligence tends to be either euphoric or alarming. In the first variant, AI is presented as a divine technology that will solve all our problems, from curing cancer to ending global warming. In the second, Frankenstein- or Terminator-inspired narratives depict AI as a technology we cannot keep under control, one that will outsmart humans in ways we cannot foresee – destroying our jobs, if not threatening the survival of humanity.

In the past few years, the number of negative stories about AI has increased markedly. Tech entrepreneur Elon Musk has even stated that AI is more dangerous than nuclear weapons. There have been numerous cases in which advanced or AI-powered algorithms were abused, went awry or caused damage. It was revealed that the British political consulting firm Cambridge Analytica had harvested the data of millions of Facebook users without their consent to influence the US elections, raising questions about how algorithms can be abused to influence and manipulate the public sphere on a large scale. It has become clear that AI models can replicate and institutionalise bias: COMPAS, a model used across the US to predict recidivism, turned out to be biased against black people, while an AI-powered recruiting tool showed bias against women.

Some types of AI can be gamed by malicious actors and end up behaving totally differently than intended. That was the case with Microsoft's Twitter bot Tay, which turned from a friendly chatbot into a troll posting inflammatory messages and conspiracy theories in less than a day. Other cases brought to the surface unresolved questions around ethics in the application of AI. For instance, Google decided not to renew a contract with the Pentagon to develop AI that would identify potential drone targets in satellite images, after large-scale protests by employees who were concerned that their technology would be used for lethal purposes.

Stefan van Duin, partner Analytics and Cognitive at Deloitte and an expert in developing AI solutions, understands this public anxiety about AI. “The more we are going to apply AI in business and society, the more it will impact people in their daily lives – potentially even in life or death decisions like diagnosing illnesses, or the choices a self-driving car makes in complex traffic situations,” says Van Duin. “This calls for high levels of transparency and responsibility.”

Deloitte is a strong advocate of transparency and responsibility in AI. Transparent AI is AI that is explainable to employees and customers. That can be challenging, because AI is not transparent by nature, says Van Duin. “So the question is: how can we make AI as transparent as possible? How can we explain how an AI-based decision was made, what it was based on, and why it was taken the way it was taken?” Alongside transparency, there is the question of responsibility in AI.

“Transparent AI makes our underlying values explicit, and encourages companies to take responsibility for AI-based decisions,” says Van Duin. “Responsible AI is AI that has all the ethical considerations in place and is aligned with the core principles of the company.”

This publication explores Deloitte’s point of view on transparency and responsibility in AI. It is informed by Deloitte’s own experience in developing and applying AI-enabled technology, both within the company and for its clients, as well as by its long-standing experience with validating and testing models, providing assurance, and advising on strategy and innovation projects.

Four propositions for responsible AI


The publication discusses four propositions that Deloitte has developed around the topic of transparent and responsible AI. ‘AlgoInsight’ is a technical toolkit to validate AI models and to help explain the decision-making process of AI models to employees and customers. The ‘FRR Quality Mark for Robotics and AI’ is a quality mark for AI-powered products, which assures customers that AI is used responsibly. ‘Digital Ethics’ provides a framework to help organisations develop guiding principles for their use of technology, and to create a governance structure that embeds these principles in their organisation. Finally, ‘AI-Driven Business Models’ covers the complete journey, from defining the vision and building the capabilities through to actual implementation and capturing value.

Transparent AI is explainable AI


One reason people might fear AI is that AI technologies can be hard to explain, says Evert Haasdijk. He is a senior manager in Deloitte's Forensic practice and a renowned AI expert, who worked as an assistant professor at VU Amsterdam and has more than 25 years of experience in developing AI-enabled solutions. “Some AI technologies are pretty straightforward to explain, like semantic reasoning, planning algorithms and some optimisation methods,” says Haasdijk. “But with other AI technologies, in particular data-driven technologies like machine learning, the relation between input and output is harder to explain. That can cause our imaginations to run wild.”

But AI doesn’t have to be as opaque as it may seem. The proverbial ‘black box’ of AI can be opened – or at least, it is possible to explain how AI models arrive at a decision. The point of transparent AI is that the outcome of an AI model can be properly explained and communicated, says Haasdijk. “Transparent AI is explainable AI. It allows humans to see that the models have been thoroughly tested and make sense, and to understand why particular decisions are made.”
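One concrete way to open the black box, given here purely as an illustration, is to train a simple, human-readable surrogate model that mimics a complex model's decisions. The sketch below uses scikit-learn and synthetic data – both assumptions, as the article does not name specific tools:

```python
# A minimal sketch of "opening the black box": fit a shallow, readable
# decision tree that imitates a complex model's predictions.
# scikit-learn and the synthetic data are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for a hard-to-explain "black box" model.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train a shallow tree on the black box's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the readable surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.2%}")

# The surrogate's decision rules can be read and challenged by humans.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```

A high-fidelity surrogate does not make the original model transparent by itself, but it gives reviewers concrete, discussable rules – the kind of explanation Haasdijk argues for.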

Transparent AI isn’t about publishing algorithms online, says Haasdijk – something that is currently being discussed as mandatory for algorithms deployed by public authorities in the Netherlands. “Obviously there are intellectual property issues; most companies like to keep the details of their algorithms confidential,” says Haasdijk. “But more importantly, most people do not know how to make sense of AI models. Just publishing lines of code isn’t very helpful, particularly if you do not have access to the data that is used – and publishing the data will often not be an option because of privacy regulations.” According to Haasdijk, publishing AI algorithms will not, in most cases, bring a lot of transparency. “The point is that you have to be able to explain how a decision was made by an AI model.”

Transparent AI enables humans to understand what is happening in AI models, emphasises Haasdijk. “AI is smart, but only in one way. When an AI model makes a mistake, you need human judgment. We need humans to gauge the context in which an algorithm operates and understand the implications of the outcomes.”

The level of transparency depends on the impact of the technology, adds Haasdijk. The more impact an advanced or AI-powered algorithm has, the more important it is that it is explainable and that all ethical considerations are in place. “An algorithm that sends personalised commercial offerings doesn’t need the same level of scrutiny as an algorithm that grants credit or recommends a medical treatment,” says Haasdijk. “Naturally, all AI models should be developed with care, and organisations should think ahead about the possible ramifications. But AI models that make high-impact decisions should only be allowed under the highest standards of transparency and responsibility.”

Detecting hidden bias


So how do you create transparent AI? First, says Haasdijk, there are technical steps. “The technical correctness of the model should be checked, all the appropriate tests should be carried out and the documentation should be done correctly,” he explains. “The developer of the model has to be able to explain how they approached the problem, why a certain technology was used, and which data sets were used. Others have to be able to audit or replicate the process if needed.”
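In practice, much of this first step amounts to recording decisions and artefacts as the work happens, so an auditor can retrace them later. The sketch below shows one possible shape for such a record; every field, file name and figure in it is hypothetical rather than a prescribed standard:

```python
# A minimal sketch of an audit trail for an AI model: capture the approach,
# the technology choice, the exact data and the test results, so others can
# audit or replicate the work. All names and figures here are hypothetical.
import hashlib
import json
import platform
from datetime import datetime, timezone

# Create a tiny stand-in data file so the sketch runs end to end.
with open("loans_2018.csv", "w") as f:
    f.write("income,debt_ratio,defaulted\n30000,0.65,1\n")

def file_sha256(path: str) -> str:
    """Fingerprint the training data so auditors can verify the exact input."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

model_record = {
    "problem": "predict probability of loan default",      # how the problem was approached
    "technique": "gradient boosted trees",                  # why this technology was used
    "training_data": {"source": "loans_2018.csv",           # which data sets were used
                      "sha256": file_sha256("loans_2018.csv")},
    "random_seed": 42,                                      # fixed so runs can be replicated
    "test_results": {"auc": 0.87, "holdout_size": 10000},   # placeholder figures
    "python_version": platform.python_version(),
    "created_at": datetime.now(timezone.utc).isoformat(),
}

with open("model_record.json", "w") as f:
    json.dump(model_record, f, indent=2)
```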

The next thing to assess is whether the outcomes of the model are statistically sound. “You should check whether certain groups are under-represented in the outcomes and, if so, tweak the model to remedy that.” This step can help to detect hidden biases in data, a well-known problem in the world of AI, says Haasdijk. “Suppose you use AI to screen job applicants for potential new managers in your company. If the model is fed data from previous managers who were mostly white males, the model will replicate that and might conclude that women or people of colour are not fit for management roles.”
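The statistical check described here can start very simply: compare how often the model selects members of each group, and flag large gaps. The sketch below uses the common ‘four-fifths’ heuristic as a warning threshold; the column names, data and threshold are illustrative assumptions, not part of any Deloitte method:

```python
# A minimal sketch of checking model outcomes for under-represented groups:
# compare selection rates per group. Data and threshold are illustrative.
import pandas as pd

# Hypothetical screening results: one row per applicant.
results = pd.DataFrame({
    "gender":   ["m", "m", "f", "f", "m", "f", "m", "f"],
    "selected": [1,   1,   0,   1,   1,   0,   1,   0],
})

# Selection rate per group.
rates = results.groupby("gender")["selected"].mean()
print(rates)

# Disparate-impact ratio: lowest group rate relative to the highest.
ratio = rates.min() / rates.max()
print(f"Disparate-impact ratio: {ratio:.2f}")

# The "four-fifths rule" heuristic flags ratios below 0.8 as a possible
# sign of bias - a cue to inspect the data and tweak the model.
if ratio < 0.8:
    print("Warning: one group is selected markedly less often - investigate.")
```

A warning like this does not prove bias – only humans who understand the context can judge that, as Haasdijk notes below – but it makes skewed outcomes visible early.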

A challenge here is that most data sets have not been built specifically to train AI models. They have been collected for other purposes, which might result in skewed outcomes. “AI models are unable to detect bias in data sets,” Haasdijk explains. “Only humans, who understand the context in which the data has been collected, can spot possible biases in the outcome of the model. Checking training data for possible bias therefore requires utmost care and continuous scrutiny.”

Finally, AI models should be validated to enable organisations to understand what is happening in the model and to make the results explainable. Deloitte’s AlgoInsight proposition offers a variety of tools – both open source and developed in-house – to help companies validate advanced or AI-powered algorithms. The toolkit allows organisations to look inside the ‘black box’ of AI, to expose possible bias in training data, and to help explain the decision-making process of AI models to employees and customers. You can read more about AlgoInsight in the article Unboxing the Box with AlgoInsight: A Toolkit to Create Transparency in Artificial Intelligence.

‘Computer says no’


There are a couple of reasons to pursue transparent AI. An important one is that companies need to understand the technologies they use for decision making. As obvious as this sounds, it is not always a given. “The board room and higher management of a company are often not really aware of what developers in the technical and data analytics departments are working on,” says Haasdijk. “They have an idea, but they don’t know exactly. This creates risks for the company.”

Paradoxically, as open source AI models become more user-friendly, more AI applications might be built by people who do not completely understand the technology. “There are open source models that are very easy to use. Someone might feed one data and get a result without really understanding what is happening inside the model, or comprehending the possibilities and limitations of the outcomes,” says Haasdijk. “This can cause a lot of problems in the near future. A risk of AI becoming more user-friendly and more widely available is that someone in a company might be using AI irresponsibly without being aware of it – let alone their bosses knowing about it.”

It can be hard for companies to keep track of all the AI models used within their organisation, says Haasdijk. “A bank recently made an inventory of all its models that use advanced or AI-powered algorithms, and found a staggering total of 20,000.” Some of these algorithms, like capital regulation models, are under strict scrutiny from regulators. But in most cases, advanced or AI-powered algorithms are not subject to any kind of external or internal regulation – think of algorithms used in marketing, pricing, client acceptance, front office or automated reporting. “Transparent AI can help companies regain control over the variety of AI models deployed in their organisation,” Haasdijk says.

Transparent AI can give organisations more insight into when and why AI algorithms make mistakes, and into how to improve their models accordingly. “AI models do make mistakes – in many instances they make fewer mistakes than humans, but still, you want to know when and why they happen,” says Haasdijk. “Take the example of the self-driving car that hit a woman who was walking her bike, because the algorithm misjudged the situation. It is essential that companies understand when and why mistakes like these happen, in order to avoid similar accidents in the future.”

Finally, transparent AI can help organisations explain individual decisions of their AI models to employees and customers. And that’s not all that customers expect from organisations: with the GDPR, the European data protection regulation that recently came into force, there is also regulatory pressure to give customers insight into how their data is used. “Suppose a bank uses an AI model to assess whether a customer can or cannot get a loan,” says Haasdijk. “If you deny a loan, the customer probably wants to know why that decision was made, and what they need to change in order to get the loan. That means the bank must have a thorough understanding of how its AI model reaches a decision, and be able to explain this in clear language. ‘Computer says no’ is not an acceptable answer.”
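For a simple, inherently interpretable model, that plain-language answer can be read straight off the model itself. The sketch below decomposes one loan decision from a logistic regression into per-feature contributions; the features, weights and applicant values are all invented for illustration:

```python
# A minimal sketch of explaining a single loan decision. For a logistic
# regression, each feature's contribution to the score is coefficient * value,
# so the bank can state which factors drove a denial in clear language.
# All features, weights and values below are hypothetical.
import numpy as np

features = ["income_eur", "debt_ratio", "years_employed", "missed_payments"]
coef = np.array([0.00004, -3.2, 0.15, -0.9])   # illustrative learned weights
intercept = -1.0

applicant = np.array([30000.0, 0.65, 2.0, 3.0])

score = intercept + coef @ applicant
prob = 1.0 / (1.0 + np.exp(-score))
print(f"Approval probability: {prob:.2f}")      # ~0.01: the loan is denied

# Per-feature contributions, sorted from most negative to most positive.
for name, c in sorted(zip(features, coef * applicant), key=lambda kv: kv[1]):
    side = "against" if c < 0 else "in favour of"
    print(f"{name}: {c:+.2f} ({side} approval)")

# The output shows, e.g., that missed_payments weighed most heavily against
# approval - a concrete answer where "computer says no" would not do.
```

Complex models do not decompose this neatly, which is why dedicated explanation tools – such as the AlgoInsight toolkit mentioned earlier – exist.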

Re-establishing trust


The potential ramifications of AI have become a concern at the political level. By now, more than 20 countries are working on a national AI strategy, and various national and regional AI-related policy measures have been announced. The European Commission has set up a High-Level Expert Group on Artificial Intelligence to work on Ethical Guidelines for Trustworthy AI. At the international level, the United Nations has created an AI and Global Governance Platform to explore the global policy challenges raised by AI, as part of the Secretary-General’s Strategy on New Technologies.

Many of these initiatives address the public anxiety about losing control over AI. With every new incident of AI going awry – a few examples of which were mentioned at the beginning of this article – the demand for transparent, and particularly for responsible, AI grows. “Not only do people want to understand how AI-based decisions are made,” says Van Duin. “They want to be reassured that AI is used to benefit mankind and is not causing harm.”

So far, no commonly accepted framework for assessing AI has been established, notes Van Duin. “For privacy issues, organisations must adhere to the GDPR, the European legal framework for handling personal data. But there is nothing similar for ethics in AI, and so far it looks like there will only be high-level guidelines that leave a lot of room for interpretation. That places a lot of responsibility on companies, organisations and society to make sure we use AI ethically.”

Embedding ethics in the organisation


It’s not just consumers who demand a clear vision on AI; employees demand it from their companies too, which makes the stakeholder landscape around AI more complex. As previously mentioned, last year Google decided not to renew a contract with the Pentagon involving automatic recognition software for drones, after large-scale protests by employees. Over 4,000 employees, including some of the company’s top AI researchers, signed a petition demanding “a clear policy stating that neither Google nor its contractors will ever build warfare technology”. The protests resulted in Google publishing a value statement on AI, in which the company, among other things, pledged not to use its AI technology for weapons. A few months later, Google dropped out of the bidding for a $10 billion Pentagon cloud-computing contract, citing a clash with its values on AI.

“A company pulling out of a bid for a $10 billion contract for ethical reasons is a clear sign of the times,” says Tjeerd Wassenaar, partner at Deloitte Risk Advisory with a focus on ethics and corporate values. With fast-developing technologies like AI, organisations are forced to think about the ethical consequences in a structured way, argues Wassenaar. “Technically, a lot is possible,” he says. “Now companies have to decide how far they want to go. Currently, most companies have no clear policy on this. The case of Google employees protesting against their own company indicates that many people in the organisation do not feel that these boundaries have been set.”

The demand for transparent and responsible AI is part of a broader debate about company ethics, says Wassenaar. “What are your core values, how do they relate to your technological and data capabilities, and what governance frameworks and processes do you have in place to uphold them? Those are questions companies must address today. If they don’t, they risk reputational damage, legal issues and fines, and, worst of all, the trust and loyalty of their customers and employees.”

Deloitte’s Digital Ethics proposition helps companies address digital ethics in a structured way. It helps them define their core principles, establish governance frameworks to put these into practice, and set up benchmarks to monitor whether the principles have been effectively implemented. You can read more about Digital Ethics in the article Digital Ethics: Structurally Embedding Ethics in your Organisation.

Realising the positive potential of AI


Transparency and responsibility in AI will help ensure that advanced or AI-powered algorithms are thoroughly tested, explainable, and aligned with the core principles of the company. In short, it will help organisations regain control over the AI models they deploy. “We need to feel that we are in control,” says Van Duin. “A lot of people feel the opposite: that we are losing control. By creating transparency and establishing clear guidelines and procedures for creating AI applications, we can make sure that advanced or AI-powered algorithms function as intended, and that we capture the value they promise.”

Despite the Frankenstein- or Terminator-inspired narratives in the media, most advanced or AI-powered algorithms being developed right now are relatively innocent and do not make high-impact decisions, says Van Duin. “But we should start thinking about transparency and responsibility in AI right now.” Self-driving cars are the perfect example, he says: they will not be widely available any time soon, but companies are working on the technology today. And the self-driving car will, by definition, have life-and-death decisions programmed into it. “This is the moment to ensure that these algorithms are built the right way.”

With all the talk about the potential risks of using AI, one might almost forget that AI offers genuine business opportunities. Early adopters have achieved impressive results: cost reductions, better service, better quality and even completely new business models. Deloitte’s AI-Driven Business Models proposition helps companies assess how they can leverage AI right now and which capabilities they need to build. You can read more in the coming article about AI-Driven Business Models.

In the long run, transparency and responsibility in AI will not only help to avoid disasters, but also help to realise the positive potential of AI to make major contributions to the good life, says Van Duin. He sums up: “There is huge potential in healthcare. AI can support medical diagnoses, it could support treatment and save lives. It could reduce our energy consumption by optimising processes. It could reduce the need for us to own a car, which would have a positive environmental impact and save us money. It could reduce the number of traffic accidents. Access to information will get better, helping us work more effectively. That could mean more spare time – maybe at some point a 30- or 20-hour work week will become the norm rather than 40 hours. In short, the quality of life could increase massively.”

The impact of AI can be enormous, says Van Duin. But ultimately, he says, AI will need the trust of the general public to have the most impact. “If all the considerations around transparency and responsibility are in place, AI can make the world a better place.”
