The art of trusting AI through explainable and interpretable approaches

As the technological landscape evolves, one of the central challenges ahead will be fostering trust in artificial intelligence. In New Zealand, explainable and interpretable AI may hold the key.

Artificial intelligence (AI) has the potential to streamline processes and decision-making, but New Zealand organisations have been slow to adopt it. The root of this hesitancy? A lack of understanding and transparency, often encapsulated by the term 'black box AI'. A whopping 41% of technologists and 47% of business leaders are apprehensive about trusting AI, and people are unlikely to trust AI if they do not understand it.

From black box to glass box: the future of AI

The term 'black box AI' refers to the inherent opacity of many AI systems: it is often difficult for people to understand how a system arrives at its decisions. This opacity fuels mistrust and inhibits the widespread adoption of AI. After all, how can we put our faith in a tool when we can't comprehend how it works or predict what it will do?

Introducing explainable and interpretable AI

To build trust, we need to tackle this 'black box' problem head-on and make AI's processes understandable to non-experts. So, how do we transition from a 'black box' to a 'glass box'? The answer lies in explainable and interpretable AI. Both focus on creating systems that are not only intelligent but also capable of explaining their actions and decisions in a way humans can understand.

Consider a sophisticated AI system deployed in insurance to underwrite policies. Traditionally, such a system might assign a premium rate to a given policy without giving users any insight into its reasoning. Enter explainable AI: the decoding of an AI system's decision-making process. In this scenario, we could identify the patterns within a customer's profile that drove the system towards a particular premium rate. Alternatively, we could employ well-established explainability techniques to derive a set of easy-to-understand premium pricing models that approximate what the AI system would decide, given various inputs. This level of transparency not only allows underwriters to grasp the logic of the system but also helps expose possible oversights, greatly enhancing trust in the AI system.
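
To make that second idea a little more concrete, the sketch below shows one widely used explainability technique, a 'global surrogate': a small, readable decision tree is fitted to the predictions of a more complex model so that its pricing behaviour can be summarised as a handful of rules. Everything in it, from the customer features to the premium formula and the scikit-learn models, is an illustrative assumption rather than a description of any real underwriting system.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

# Illustrative customer features: age, number of previous claims, vehicle value
rng = np.random.default_rng(seed=0)
age = rng.integers(18, 70, size=500)
claims = rng.integers(0, 5, size=500)
vehicle_value = rng.uniform(5_000, 60_000, size=500)
X = np.column_stack([age, claims, vehicle_value])

# Stand-in for the complex "black box": a random forest predicting an annual premium
premium = 400 + 150 * claims + 0.01 * vehicle_value + np.where(age < 26, 300, 0)
black_box = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, premium)

# Global surrogate: a shallow decision tree fitted to the black box's own predictions,
# yielding a small set of human-readable pricing rules that approximate its behaviour
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=["age", "previous_claims", "vehicle_value"]))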

Interpretable AI takes this concept a step further. Instead of deciphering complex AI systems after the fact, fully interpretable models are used from the outset. One way of doing this is to design AI systems using 'white box' techniques such as decision trees and logistic regression; the trade-off in accuracy, if it exists at all, is not always pronounced. Another approach is an AI system that processes large amounts of past data to generate a highly effective rule list, which may result in logic such as the following:

IF age is between 18 and 25 AND the applicant is male THEN predict higher risk category.
ELSE IF age is between 26 and 35 AND there are 1-2 previous insurance claims THEN predict moderate risk category.
ELSE IF there are more than three previous insurance claims THEN predict higher risk category.
ELSE predict lower risk category.

These seemingly simple statements can be readily implemented across different technologies and easily understood and examined by humans.
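
As a quick illustration of that portability, here is the same rule list written as a plain Python function; the argument names, thresholds and category labels are simply lifted from the illustrative rules above, not from any real underwriting policy.

def risk_category(age, is_male, previous_claims):
    # Direct translation of the illustrative rule list above
    if 18 <= age <= 25 and is_male:
        return "higher"
    if 26 <= age <= 35 and 1 <= previous_claims <= 2:
        return "moderate"
    if previous_claims > 3:
        return "higher"
    return "lower"

print(risk_category(age=22, is_male=True, previous_claims=0))    # higher
print(risk_category(age=30, is_male=False, previous_claims=2))   # moderate
print(risk_category(age=50, is_male=False, previous_claims=4))   # higher
print(risk_category(age=40, is_male=False, previous_claims=0))   # lower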

Interpretable AI ensures every step of the decision-making process is comprehensible to humans, removing the need for additional explanation.

Explainability and interpretability may be enforced in the near future

Baking explainability and interpretability into AI systems not only helps build trust among workers, but could also help leaders future-proof their AI efforts. New Zealand, like many countries, is still shaping comprehensive AI regulation, so considerable shifts are likely. In Europe, policymakers have proposed significant regulations based on the level of risk each AI system poses, aiming to embed transparency and accountability in Europe's use of AI. Researchers have also called for interpretable models to be mandated for some high-stakes decisions. If New Zealand follows suit, the transparency of AI systems may eventually shift from being a trust-building tool to a mandatory operational standard.

Building trust in AI is a challenge. However, the potential rewards are massive. By prioritising explainability, we can make AI not just a powerful tool but also a trusted ally, both now and as regulations change. AI has the potential to reshape our world in countless ways, but only if we trust it enough to let it. And earning that trust starts with opening up the 'black box' and revealing the logic behind AI decisions, making it as clear and understandable as a 'glass box'. As we continue to innovate and develop AI, let's ensure that explainability is at the forefront of our efforts.

For more information on explainable AI, please see the full Deloitte framework on Trustworthy AI and get in touch with us.