
XAI: A View to Explain Single Predictions

November 18, 2021
ML 201 & AI

Imagine this: Your ML model is trained and deployed via REST API. It is scoring predictions in record time, and everyone involved in its application is grateful for your work. But one day, the model returns an alarming prediction: for example, your firm's biggest customer is predicted to churn, a major failure is about to happen in a manufacturing plant, or an important money transfer is predicted to be fraudulent.

Given what is at stake, a decision must be taken immediately based on this prediction, but first you need to know if you can trust the prediction. How was it computed? What feature values is the model using to compute such an alarming prediction?

Unfortunately, the model is too complex to simply inspect, as you could with a decision tree or a logistic regression. You might then look at the instance's feature values yourself and try to guess what happened. However, this is imprecise and often not possible when you have hundreds of features, which might even have been transformed via data preparation and feature engineering techniques.

How can you explain how this important prediction has been computed by your black box model?

With the Local Explanation View component: by adopting a method for eXplainable AI (XAI) and creating an interactive view via Guided Analytics, we have built a reusable and reliable Local Explanation View. The view displays a user interface that explains a single model prediction using two techniques: a local surrogate model and counterfactual instances.

KNIME Component to Understand Why Credit Scoring Model Rejects Loan Application

In this article, we want to show you how the Local Explanation View works. We are going to train a model for credit scoring (fig. 1). Our model will be designed to predict whether a loan application and its applicant are risky for a bank or not. The model's prediction is then used by a bank employee to inform their decision.

In our example we take a negative prediction - where the loan application has been rejected by the model - and use the Local Explanation View to explain why it was rejected.

Fig 1: A diagram explaining the use case: we are going to explain a black box ML model used to accept and reject loan applications at a generic bank. The model was never actually used in the real world and was trained on a Kaggle dataset only to showcase the XAI views. This use case is, however, relevant to different stakeholders: the loan applicant, the bank, and society in general, given the ethical implications.

As part of the credit scoring example we have chosen the following questions to answer:

  1. Which features are the most important for one particular rejected applicant?
  2. If a loan is rejected, what should the person change to get it approved by the bank?
  3. When the machine learning model predicts an applicant as risky, is this discriminating or fair and objective?

To answer these questions we can adopt a Local Surrogate Model and Counterfactual Explanations: two XAI techniques that generate local explanations. Local explanations explain a single prediction; that is, they are not valid for the general behavior of the model, but only for one prediction at a time.

A Local Surrogate Model provides us with the local feature importance relevant for the instance of interest. In our case, a surrogate GLM is trained on a neighborhood of similar data points to mimic the original model's local behavior. The generated local explanation is similar to that of another XAI technique called LIME, but a different type of neighborhood sampling is adopted.
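
To give an idea of what a local surrogate does under the hood, here is a minimal Python sketch (not the component's actual implementation): it samples a neighborhood around the instance of interest, queries the black box model, and fits a proximity-weighted linear surrogate whose normalized coefficients serve as local feature importance. The names black_box, x0, and X_background, as well as the Gaussian neighborhood sampling and the ridge regression standing in for the GLM, are illustrative assumptions.

```python
# Minimal, illustrative sketch of a local surrogate model (LIME-style).
# Assumptions: `black_box` is any fitted classifier exposing predict_proba,
# `x0` is the 1-D NumPy array of the instance to explain, and `X_background`
# is a NumPy array with representative data (e.g., the test set).
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate_importance(black_box, x0, X_background,
                               n_samples=1000, kernel_width=1.0, seed=0):
    rng = np.random.default_rng(seed)
    scale = X_background.std(axis=0) + 1e-12

    # 1. Sample a neighborhood of similar data points around x0
    neighborhood = x0 + rng.normal(0.0, scale, size=(n_samples, x0.shape[0]))

    # 2. Ask the black box for the probability of the positive class
    y_local = black_box.predict_proba(neighborhood)[:, 1]

    # 3. Weight samples by proximity to x0 (closer points matter more)
    dist = np.linalg.norm((neighborhood - x0) / scale, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)

    # 4. Fit an interpretable linear surrogate on the weighted neighborhood
    #    (a ridge regression standing in for the surrogate GLM)
    surrogate = Ridge(alpha=1.0).fit(neighborhood, y_local, sample_weight=weights)

    # 5. Scale-adjusted, normalized coefficients act as local feature importance
    coefs = surrogate.coef_ * scale
    return coefs / np.abs(coefs).sum()
```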

A Counterfactual Explanation defines minimal changes to the features of the instance of interest so that its prediction is changed to the desired one. The intuition behind this technique is that the difference between the original feature values and the counterfactual's feature values will tell the applicant what should change in their application in order to get it approved.
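
As a rough illustration of the idea (not the algorithm used by the component), the sketch below searches for a counterfactual with a simple greedy strategy: starting from the instance of interest, it repeatedly takes the small, single-feature step that most increases the probability of the desired class until the prediction flips. It reuses the hypothetical black_box, x0, and X_background names from the sketch above and assumes target_class is the index of the desired class.

```python
# Illustrative greedy search for a counterfactual instance.
# Dedicated counterfactual algorithms are more sophisticated; this sketch only
# conveys the intuition of "minimally changing features until the prediction flips".
import numpy as np

def greedy_counterfactual(black_box, x0, X_background, target_class=1, max_steps=100):
    x_cf = x0.astype(float).copy()
    scale = X_background.std(axis=0)

    for _ in range(max_steps):
        # Stop as soon as the prediction flips to the desired class
        if black_box.predict_proba(x_cf.reshape(1, -1))[0, target_class] >= 0.5:
            return x_cf

        best_candidate, best_prob = None, -np.inf
        # Try a small step on each feature in both directions and keep the one
        # that most increases the probability of the target class
        for j in range(x_cf.shape[0]):
            for step in (-0.5 * scale[j], 0.5 * scale[j]):
                candidate = x_cf.copy()
                candidate[j] += step
                prob = black_box.predict_proba(candidate.reshape(1, -1))[0, target_class]
                if prob > best_prob:
                    best_candidate, best_prob = candidate, prob
        x_cf = best_candidate

    return None  # no counterfactual found within the step budget
```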

In the animation below (fig. 2), we can see the interactive view that contains the information about the data point of interest (in our case the rejected applicant).

Fig. 2. Animation of the interactive view of the Local Explanation View component. The explanations displayed are valid only for a single instance. The original values of the instance are shown at the top, a bar chart with local feature importance on the left, and cards with counterfactual instances on the right.

At the top of the view, a table (see fig. 3 below) displays the input and output of the black box model: The original instance with its feature values and the predicted class. The goal of this view is to explain the connection between the two.

On the left, a bar chart (see fig. 4 below) of the GLM's normalized coefficients indicates the local feature importance. The bigger the bar, the greater the local feature importance for this particular prediction. In the bar chart, green bars are related to features that push this prediction towards the positive class (creditworthy applicants), red bars towards the opposite class (risky applicants).
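
For readers who want to reproduce a similar chart outside the component, a minimal matplotlib sketch could look as follows, assuming feature_names and the normalized coefficients (importances) from the surrogate sketch above; positive coefficients are drawn in green, negative ones in red.

```python
# Illustrative bar chart of local feature importance, assuming `feature_names`
# (list of strings) and `importances` (signed, normalized surrogate coefficients).
import matplotlib.pyplot as plt

def plot_local_importance(feature_names, importances):
    # Green bars push the prediction towards the positive class,
    # red bars towards the opposite class
    colors = ["green" if w > 0 else "red" for w in importances]
    plt.barh(feature_names, importances, color=colors)
    plt.xlabel("Local feature importance (normalized surrogate coefficient)")
    plt.tight_layout()
    plt.show()
```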

On the right, the tile view (see fig. 5 below) displays one card for each counterfactual found for this prediction. Each counterfactual has feature values similar to those of the input instance we are trying to explain, but with small variations that lead to the opposite prediction. Comparing those values with the original feature values can provide an explanation. For example, you can understand which features should be changed in order to get the opposite outcome.

Fig. 3. The feature values for the applicant of interest displayed in detail (Monthly Income, Family Size, past debts, ...). On the far right, we display the model prediction: “risky”.

In the view, we are currently explaining an applicant who was rejected (see fig. 3 above).

By comparing a few of the counterfactuals in the tiles with the original feature values displayed at the top of the view, and by looking at the bar chart, we can answer our three questions:

1. Which features are the most important for this particular rejected applicant?

According to the displayed bar chart (fig. 4), “Monthly Income”, “Times in Debt for 1-2 Months”, “Times in Debt for 2-3 Months”, and “Times in Debt for 3 Months or More” are locally the most important features.

Fig. 4. Bar chart visualizing the normalized coefficients of the surrogate GLM trained on data points sampled around the instance to be explained. We see 4 features that can explain why the applicant was predicted to be risky.

2. What should the applicant change to get approved by the bank?

Compared to the applicant of interest, the generated counterfactual applicants (fig. 5), all predicted as creditworthy, earn more and never had debts in the past. While the debt history cannot be changed, the applicant can try to increase their monthly income in order to get approved by the model.

Fig. 5. The tile view displays a card for each counterfactual. The counterfactual cards show instances with the opposite outcome (applicants predicted as “creditworthy”) but with feature values similar to those of the original prediction we are explaining. Four features show drastically higher or lower values than the original ones displayed in fig. 3.

3. When the machine learning model predicts an applicant as risky, is this discriminating or fair and objective?

Two sensitive features in our data set are “Age” and “Family Size”. According to the bar chart (fig. 4), “Age” is not among the most important features, and “Family Size” has lower importance than the financial characteristics related to income and debts. The approved counterfactual applicants have the same family size as the rejected applicant. Therefore, at least according to these two techniques, we see no evidence that our credit scoring model used sensitive personal characteristics to discriminate against the applicant, for example by rejecting people with many dependents.

The KNIME Workflow behind the Local Explanation View

The Local Explanation View shown above was created in the free and open-source KNIME Analytics Platform with the workflow in fig. 6. By combining Widget and View nodes with machine learning and data manipulation nodes, we have built a workflow able to explain any ML classification model.

The demonstrated techniques and the interactive view are packaged as the reusable and reliable Local Explanation View component, as part of the Verified Component project. This component can work with any ML classifier thanks to the KNIME Integrated Deployment Extension, which is used to capture the model and all the data preparation steps. Read more about Integrated Deployment in the blog post “Five Ways to Apply Integrated Deployment”.

More Verified Components on eXplainable AI are available in the Model Interpretability folder on KNIME Hub.

Fig. 6. The KNIME workflows "Finding Counterfactual Explanations and Local Feature Importance for a Selected Prediction of a Credit Scoring Model" and "Local Interpretation Workflow" are available for download on the KNIME Hub, as is the component, "Local Explanation View".

The KNIME workflows Finding Counterfactual Explanations and Local Feature Importance for a Selected Prediction of a Credit Scoring Model and Local Interpretation Workflow, as well as the Local Explanation View component, are available for download on the KNIME Hub.

To use the component you need to:

  1. Connect the Workflow Object containing your model as well as the data preparation steps.

  2. Connect a data partition, usually the test set, which offers a representative sample of the data distribution.

  3. Provide the instance you would like to explain.

Once executed, the component outputs the local feature importance weights and the counterfactual feature values at its output ports in table format. The component also generates the interactive view shown earlier. If you would like to add more features to the component, you can easily do so by opening it, inspecting the KNIME workflow inside, and editing both the view and the XAI backend as needed.

In this blog post, we illustrated the need for local explanations using a credit scoring example, walked through the interactive view and the use case, and summarized how all of this is implemented via reusable KNIME workflows and components. Download the component from the KNIME Hub and start explaining your classifier predictions!

-----------

Explainable AI Resources At a Glance

Download the workflows and components described in this article from the KNIME Hub and try them out for yourself.