How glass box models can provide X-AI without compromising results

Introduction

Artificial Intelligence plays a big role in our daily lives. It is used everywhere, from our search queries on Google to self-driving vehicles such as Tesla’s. With the rise of deep learning, the models behind these applications have become ever more complex. In fact, they are often so complex that we have no idea how they reach their decisions.

These models are often referred to as ‘black box’ models. We feed a set of input features to the model, it performs a complex calculation, and it arrives at a decision. However, we do not know exactly how the model comes to that decision, which features it finds important, or even what it looks at. In some cases it is not terribly important for humans to fully understand how the model works or how it reaches its decisions. But where this need does exist, several techniques can help humans better understand how the model comes to its predictions.

Figure 1: Black box representation [Slimmer AI]

One thing that can help humans understand AI is explainable AI (X-AI). Explainable AI allows humans to understand and better trust the predictions made by an AI model. One downside of explainable AI is that there is usually a tradeoff between explainability and accuracy: in general, simpler models tend to be more explainable than more complex ones. In contrast to black box models, glass box models offer increased interpretability. In a glass box model, all parameters are known to us and we know exactly how the model comes to its conclusion, giving us full transparency.

Figure 2: (Simple) Glass box model representation [Slimmer AI]

Black box vs Glass box

This puts us in a dilemma: do we favor accuracy or explainability? Selecting the appropriate technique depends heavily on the kind of problem you want to solve. Some problems require more explainability than others. For example, when a model takes in a lot of personal information, it is good to know which kinds of personal information the model is using and how it uses those features to come to its conclusions. Here we should prioritize explainability over accuracy.

Figure 3: Accuracy vs Explainability [5]

Even though black box models can be very complex, several researchers have attempted to explain their outputs. The most notable approaches are:

  • LIME (Local Interpretable Model-agnostic Explanations)[1] and
  • SHAP (SHapley Additive exPlanations)[2]

LIME trains an interpretable surrogate model around the black box model’s predictions. SHAP, on the other hand, takes a game-theoretic approach, assigning each feature an importance value for a given prediction.
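To make this concrete, below is a minimal sketch of explaining a black box model with SHAP. The random forest and the breast cancer dataset are illustrative stand-ins, not the setup described in this post:

```python
# Minimal sketch: explaining a black box model's predictions with SHAP.
# The random forest and dataset are illustrative, not the setup described in this post.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(black_box)
shap_values = explainer.shap_values(X_test)

# Each value attributes part of a single prediction to a single feature.
shap.summary_plot(shap_values, X_test)
```

The key point is that the explanation lives outside the model: SHAP probes the trained black box after the fact rather than changing how it makes predictions.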

While interpreting black box models requires external tools, glass box models are inherently interpretable. Even though glass box models have interpretability built in, they used to be less powerful. Ideally, we would want a glass box model with the accuracy of a black box model.

Luckily, such a model exists: the Explainable Boosting Machine (or EBM for short), included in the interpretml[4] package developed by Microsoft. It builds on the older technique of Generalized Additive Models[3], adding modern machine learning techniques such as bagging and gradient boosting. During training, the EBM looks at one feature at a time in a round-robin fashion while using a very low learning rate, so the order of the features does not matter. This process is repeated for many iterations, and at the end of training all per-feature trees are added together. Because the EBM is an additive model, each feature contributes separately to the final prediction, which allows us to see which features are important and which are less so [4].
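In practice, training an EBM looks much like training any scikit-learn-style model, and the explanations fall out of the model itself. Here is a minimal sketch using an illustrative dataset:

```python
# Minimal sketch of training an Explainable Boosting Machine with the interpret package
# (pip install interpret). The dataset is illustrative.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The defaults already use a low learning rate and many boosting rounds,
# cycling over one feature at a time as described above.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: the learned contribution curve of each feature.
show(ebm.explain_global())

# Local explanation: how each feature contributed to these individual predictions.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```

Because the model is additive, the global and local explanations are not approximations of the model; they are the model.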

Black box explainers vs glass box models

At Slimmer AI, explainability and trust in our models are among our top priorities. Curious how we use explainable AI at Slimmer AI? Read this excellent blog post by our colleague Ayla Kangur on explainable AI in practice. One use case of explainable AI is within our Science department, where we analyse incoming manuscripts from one of the biggest publishers in the world. Here we use explainable AI to predict whether a new research paper is original.

We had the option of either using a black box model together with an explanation technique, or using a glass box model. We trained both: a LightGBM[6] model, which is a gradient boosting model, and an EBM. It turned out they performed similarly. Our question then became: which model gives the better explanations? By better explanations we mean: which model would use the input features to explain a prediction the way a human would?
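The sketch below illustrates the kind of comparison we mean, on an illustrative public dataset rather than our manuscript data: train both models, check that their performance is comparable, then inspect which features each model uses to explain the same prediction. The exact accessor details may vary slightly between library versions.

```python
# Rough sketch of the comparison described above, on an illustrative public dataset
# (not our manuscript data, and not our production pipeline).
import lightgbm as lgb
import shap
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

lgbm = lgb.LGBMClassifier(random_state=0).fit(X_train, y_train)
ebm = ExplainableBoostingClassifier(random_state=0).fit(X_train, y_train)

# Step 1: check that predictive performance is comparable.
for name, model in [("LightGBM", lgbm), ("EBM", ebm)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name} test AUC: {auc:.3f}")

# Step 2: compare how each model explains the same prediction.
sample = X_test.iloc[[0]]
lgbm_attributions = shap.TreeExplainer(lgbm).shap_values(sample)
ebm_local = ebm.explain_local(sample).data(0)  # returned keys may differ across interpret versions

print("SHAP attributions (LightGBM):", lgbm_attributions)
print("EBM attributions:", dict(zip(ebm_local["names"], ebm_local["scores"])))
```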

The results were quite striking. The EBM consistently produced better explanations. It looked at features that made more sense to use in the prediction scenarios, while the LightGBM model used a combination of (seemingly) random features in its explanations. A good example of this was when we looked at the length of sentences that were copied from existing research. The LightGBM model focused on a combination of short sentences, while the EBM looked at longer sentences. Of course, when a human checks whether a research paper is original, they also look for long overlaps.

I believe this marks a huge step forward in the field of explainable AI: a model that both produces excellent results and is explainable by design. It will be exciting to see what else can be achieved in the future.

References

  1. Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin, “Why Should I Trust You?”: Explaining the Predictions of Any Classifier (2016), ACM
  2. Scott Lundberg, Su-In Lee, A Unified Approach to Interpreting Model Predictions (2017), NeurIPS Proceedings
  3. Trevor Hastie and Robert Tibshirani, Generalized additive models: some applications (1987), Journal of the American Statistical Association
  4. Harsha Nori, Samuel Jenkins, Paul Koch, Rich Caruana, InterpretML: A Unified Framework for Machine Learning Interpretability (2019), arXiv.org
  5. Alexandre Duval, Explainable Artificial Intelligence (XAI) (2019)
  6. Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, Tie-Yan Liu, LightGBM: A Highly Efficient Gradient Boosting Decision Tree (2017), NeurIPS Proceedings
