
Analysis

Explaining explainable AI

…Artificial Intelligence (AI) entails several potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion in our private lives or being used for criminal purposes.

European Commission White Paper, 2020

As private and public sector organizations increase their investment in AI, it is becoming apparent that deploying an AI solution carries multiple risks. Conversations have been sparked across organizations about how to mitigate these risks without hindering innovation. Among the risks commonly referenced, the lack of explainability of AI is often cited.

In this article, we will walk you through key explainability concepts and the need to explain how models work before they are considered for deployment.

So, what is explainable AI (XAI)?

Explainable AI (XAI) refers to a set of techniques that help developers add a layer of transparency, demonstrating how an algorithm arrives at a given prediction or output. Users and customers want to understand how their data is used and how AI systems make decisions: the algorithms, attributes, and correlations should be open to inspection. XAI tools and applications help developers, the product management community, and eventual users or customers gain insight into model decision making, which may otherwise be a 'black box'.

A practical example of the desire for explainable outcomes can be found in patient diagnosis. An AI system may recommend a particular medication based on a condition it has identified, but if the system does not provide sufficient transparency on how that conclusion was reached, the result is difficult to trust.

‘Black-box’ vs ‘glass-box’ AI: what do they mean and what is the difference?

A ‘glass-box’ AI model is one in which inputs can be traced to outputs with relative ease. Less complicated models (such as linear regression, logistic regression, and decision trees) often lend themselves to a direct interpretation of results based on the input data, without additional investment in modelling individual outcomes. With a ‘glass-box’ model, one can more readily explain how the AI system behaves and, more importantly, why it made a specific prediction.
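To make this concrete, here is a minimal sketch of ‘glass-box’ interpretability: the coefficients of a logistic regression map directly onto the input attributes, so each prediction can be traced back to its inputs. The data and feature names (exercise hours and age vs. a health outcome) are hypothetical and for illustration only.

```python
# Minimal sketch: a logistic regression is a 'glass-box' model whose learned
# weights are directly open to inspection. Data and feature names are
# illustrative assumptions, not a real dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: hours of exercise per week and age vs. a health outcome.
X = np.array([[1, 65], [5, 40], [0, 70], [7, 30], [3, 55], [6, 45]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# The sign and magnitude of each coefficient show how that attribute pushes
# an individual prediction up or down, so the input-output link is traceable.
for name, coef in zip(["exercise_hours", "age"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")
```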

A ‘black-box’ model is more like a human brain making a decision: we may see the output or resulting decision, but we cannot explain or reproduce it because we have limited insight into the exact factors that influenced it. Black-box models (such as neural networks) may be preferred for their superior performance on complex problems, but this often comes at the cost of explainability. The input-output relationship is more opaque, and individual outcomes are not readily explainable without additional investment in modelling the input attributes behind a particular result, or in inferring the behavior of the model as a whole.

Surfacing the explainability of models is likely to become an integral part of the software development lifecycle for AI systems, with data scientists and developers expected to provide insight into the key drivers of the models they build. A number of open-source tools and libraries are available to help the creators of AI systems highlight how data attributes contribute to the decision-making model. Popular open-source libraries include LIME and SHAP, and there is a growing base of commercial offerings that enable teams to make more informed decisions about acceptable AI system behavior. In most organizations, however, this assessment is exploratory and not yet formalised for new developments. As a result, there is an element of unknown risk in production AI systems whose decisions have not been assessed for explainability.
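As a rough illustration of how such a library is used, the sketch below attributes the predictions of an otherwise opaque tree ensemble to its input features with the open-source SHAP library. The dataset, feature names, and risk-score target are hypothetical assumptions made for the example.

```python
# Minimal sketch of explaining a 'black-box' ensemble with SHAP.
# The credit-scoring data and feature names below are hypothetical.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical applicant attributes and a risk score to predict.
X = pd.DataFrame({
    "income": rng.lognormal(10, 0.5, 500),
    "debt_ratio": rng.uniform(0, 1, 500),
})
y = 0.7 * X["debt_ratio"] - 0.3 * np.log(X["income"]) + rng.normal(0, 0.05, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each individual prediction to the input features
# (SHAP values), giving per-decision insight into an otherwise opaque model.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which attributes drive the model's decisions overall.
shap.summary_plot(shap_values, X)
```

In practice the per-prediction SHAP values, not just the global summary, are what allow a reviewer to challenge or accept a specific model decision.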

Embedding XAI in sensitive use cases would go a long way in building trust in the use of AI in an organization.

The explainability of AI is one of the pillars of the Deloitte Trustworthy AI framework

To navigate the risks of implementing an AI system, organizations must:

  • Understand the current AI risk exposure by gaining insight into the AI inventory across the organization;
  • Obtain an independent assessment of, and insight into, model behavior so that accountable individuals can have confidence in deployed solutions;
  • Enable the adoption of appropriate governance for continued AI innovation based on use-case risk; and
  • Ensure readiness in crisis response where there is significant reliance on AI.

The AI risk management framework will help define, validate and monitor AI risks. The framework provides tools for everything from a maturity assessment to crisis preparation.
