A ‘glass-box’ AI model is one in which the path from input to output is easy to trace. Simpler models (such as linear regression, logistic regression, and decision trees) often lend themselves to direct interpretation of results from the input data, without the additional investment of modelling individual outcomes. With a ‘glass-box’ model, one can more readily explain how the AI system behaves and, more importantly, why it made a specific prediction.
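As a minimal sketch of this directness, consider a logistic regression built with scikit-learn (the dataset and model here are illustrative choices, not part of the original text): the contribution of each attribute to a single prediction can be read straight off the learned coefficients.

```python
# A minimal sketch: a 'glass-box' logistic regression whose prediction
# can be explained directly from its learned coefficients.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

model = LogisticRegression(max_iter=5000)  # feature scaling omitted for brevity
model.fit(X, y)

# Each coefficient is the model's weight for one input feature; the
# per-feature contribution to the log-odds of a single prediction is
# simply coefficient * feature value, so the 'why' is read off directly.
sample = X.iloc[0]
contributions = model.coef_[0] * sample
print(contributions.sort_values(key=abs, ascending=False).head(5))
```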
A ‘black-box’ model is more like a human brain making a decision: we can observe the output, but we cannot fully explain or reproduce it because we have limited insight into the exact factors that influenced the decision. Black-box models (such as neural networks) may be preferred for their superior performance on complex problems, but this often comes at the cost of explainability. The input-output relationship is opaque: individual outcomes cannot be readily explained without additional investment to model the input attributes behind a particular result, or to infer the behaviour of the model as a whole.
Surfacing the explainability of models will likely soon be an integral part of the software development lifecycle for AI systems, with data scientists and developers expected to provide insight into the key drivers of the models they build. A number of open source tools and libraries are available to help the creators of AI systems highlight how data attributes contribute to a model's decisions. Popular open source libraries include LIME and SHAP, and there is a growing base of commercial offerings that enable teams to make more informed decisions about acceptable AI system behaviour. In most organisations, however, this assessment remains exploratory and is not yet formalised for new developments. As a result, production AI systems that have not been assessed for the explainability of their decisions carry an element of unknown risk.
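To make this concrete, the sketch below applies SHAP's TreeExplainer to a random forest; the model and dataset are illustrative assumptions rather than a prescribed setup. SHAP attributes each prediction to the input features via approximate Shapley values, giving both per-prediction and model-wide views of which attributes drive decisions.

```python
# A minimal sketch of post-hoc explanation with SHAP (pip install shap).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes (approximate) Shapley values: per-prediction
# attributions of the model output to each input feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which attributes drive the model's predictions overall.
shap.summary_plot(shap_values, X)
```

Exploratory outputs like the summary plot above are exactly the kind of artefact that formalising explainability reviews would turn into a standard gate in the development lifecycle.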
Embedding XAI in sensitive use cases would go a long way towards building trust in the use of AI within an organisation.