
Let KNIME Explain with XAI Solutions on KNIME Hub

Explainable AI space on KNIME Community Hub

September 1, 2022
ML 201 & AI

When adopting machine learning, after the model is trained and validated, businesses often need answers to several questions. How is the model making its predictions? Can we trust it? Is this model making the same decision a human would if given the same information? These questions are not just important for the data scientists who trained the model, but also for the decision makers who ultimately have responsibility for integrating the model’s predictions into their processes.

Additionally, new regulations like the European Union’s Artificial Intelligence Act are being drafted. These standards aim to make organizations accountable for AI applications that negatively affect society and individuals. There is no denying that AI and machine learning already impact many lives. Bank loan approvals and decisions driven by targeted social media campaigns are just two examples of where AI could improve the user experience, assuming we properly test the models with XAI and domain expertise.

When training simple models (a logistic regression model, for example), answering such questions can be trivial. But when a more performant model is necessary, such as a neural network, XAI techniques can provide approximate answers both for the model as a whole and for single predictions.
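To see why a simple model answers such questions almost directly, here is a minimal Python sketch of reading off a logistic regression model's coefficients. The model, its features, and its coefficient values are hypothetical, invented purely for illustration:

```python
import math

# Hypothetical logistic regression for loan default risk:
# each coefficient states how a feature shifts the log-odds.
coefficients = {"income": -0.8, "open_loans": 0.5}  # income in 10k units
intercept = -1.0

def predict_proba(features):
    """Probability of the positive class (default)."""
    z = intercept + sum(coefficients[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Direct interpretation: each extra open loan multiplies the
# odds of default by e^0.5, regardless of the other features.
odds_ratio_per_loan = math.exp(coefficients["open_loans"])
print(f"odds ratio per extra open loan: {odds_ratio_per_loan:.2f}")
```

A neural network offers no such one-number-per-feature summary, which is exactly where model-agnostic XAI techniques come in.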

KNIME can provide you with no-code XAI techniques to explain your machine learning model. We have released an XAI space on the KNIME Hub dedicated to example workflows with all the available XAI techniques for both ML regression and classification tasks.

The public space with XAI example workflows

Explainable AI nodes and components

These XAI techniques can be easily dragged and dropped in KNIME Analytics Platform as components and nodes designed for model interpretability. The new space showcases how to use them, with examples divided into two primary groups based on the ML task:

  • Classification - supervised ML algorithms with a categorical (string) target value
  • Regression - supervised ML algorithms with a numerical or continuous target value

Then, based on the type of training, there are two further subcategories: 

  • Custom Models - An ML model is trained and applied with a Learner and a Predictor node. In some examples, the Predictor nodes are captured with Integrated Deployment and transferred to the XAI component.
  • AutoML - AutoML Classification and Regression components are used, which select the best model that fits the data.

You can easily see how your model generates predictions, as well as which features are responsible for them, by adopting these components and nodes in combination with interactive visualizations. Furthermore, you can examine how the prediction changes when you modify any of your input features. On top of that, we offer fairness metrics to audit the model against responsible AI principles.
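As a sketch of how "which features are responsible" can be quantified, here is a plain-Python version of permutation feature importance. The toy model and data are invented for illustration (this is not how the KNIME components are implemented), and a deterministic column reversal stands in for the usual random shuffle so the result is reproducible:

```python
# Permutation feature importance: break one feature's relationship to
# the target by permuting its column, then measure how much the model's
# error grows. Here the "black box" is a toy function of one feature.

def model(x):
    # Toy black-box: depends only on "relevant", ignores "ignored".
    return 3.0 * x["relevant"] + 0.0 * x["ignored"]

rows = [{"relevant": float(i), "ignored": float(i % 3)} for i in range(20)]
targets = [model(r) for r in rows]

def mse(data):
    return sum((model(r) - y) ** 2 for r, y in zip(data, targets)) / len(data)

def permutation_importance(feature):
    col = [r[feature] for r in rows][::-1]           # permute (reverse) column
    permuted = [{**r, feature: v} for r, v in zip(rows, col)]
    return mse(permuted) - mse(rows)                 # error increase

print(permutation_importance("relevant"))  # large: the model relies on it
print(permutation_importance("ignored"))   # zero: the model ignores it
```

Permuting a feature the model relies on inflates the error sharply, while permuting an ignored feature leaves it unchanged.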

To give you a taste of what topics are covered in this XAI space, let’s see three examples.

Global and local ML explanation techniques

We collected a few examples for both local and global XAI techniques. While local methods describe how an individual prediction is made, global methods reflect the average behavior of the whole model. For example, you can compute partial dependence (PDP) and individual conditional expectation (ICE) curves for a custom-trained ML regression model. PDP is a global technique, while ICE is local. More examples are available.
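In plain Python, the two techniques reduce to a few lines. The sketch below assumes a black-box `model(x)` callable and a tiny invented dataset; it only illustrates the math that the KNIME components perform codelessly:

```python
def model(x):
    # Hypothetical black-box: nonlinear in "age", linear in "income".
    return 0.1 * x["age"] ** 2 + 2.0 * x["income"]

dataset = [
    {"age": 2.0, "income": 1.0},
    {"age": 4.0, "income": 3.0},
    {"age": 6.0, "income": 2.0},
]
grid = [0.0, 2.0, 4.0]  # values to sweep for the feature of interest

def ice_curves(feature, grid, data):
    # One curve per row: vary `feature`, keep the other features fixed.
    return [[model({**row, feature: g}) for g in grid] for row in data]

def pdp_curve(feature, grid, data):
    # PDP = pointwise average of all ICE curves.
    curves = ice_curves(feature, grid, data)
    return [sum(c[i] for c in curves) / len(curves) for i in range(len(grid))]

print(pdp_curve("age", grid, dataset))
```

Each ICE curve answers the local question "how would this row's prediction change if age changed?", and the PDP is simply their average, giving the global view.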

Partial dependence plot with a custom regression model

Responsible AI: Measuring fairness metrics

Training and deploying an ML model can, in theory, take very few clicks. To make sure the final model is fair to everyone affected by its predictions, you can adopt Responsible AI techniques. For example, we offer a component that audits any classification model and measures fairness via popular metrics from this field.
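As one example of such a metric, here is a minimal sketch of demographic parity, which compares positive-prediction rates across groups defined by a sensitive attribute. The predictions, the groups, and the 0.8 rule of thumb in the comment are illustrative, not KNIME's implementation:

```python
def positive_rate(predictions, groups, group):
    # Share of positive predictions (e.g. approved loans) within one group.
    preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(preds) / len(preds)

def demographic_parity_ratio(predictions, groups):
    # Ratio of the lowest to the highest per-group positive rate;
    # a common rule of thumb flags ratios below 0.8 (the "80% rule").
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return min(rates.values()) / max(rates.values())

preds  = [1, 0, 1, 1, 0, 1, 0, 0]   # 1 = loan approved
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_ratio(preds, groups))
```

Here group "a" is approved 75% of the time and group "b" only 25%, so the ratio of one third would flag the model for a closer look.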

Fairness scorer component with a custom model

Data Apps for business users ready for deployment

Not all experts need to adopt KNIME Analytics Platform to access these XAI techniques. Certain components in the XAI space can be deployed on the WebPortal of KNIME Server to offer data apps for business users. These XAI data apps can be shared via a link and accessed via login on any web browser. Business users can then interact via charts and buttons to understand the ML model in order to build trust. In the figure below, the animation shows how the Model Simulation View works both locally on KNIME Analytics Platform and online on KNIME WebPortal. Read more at “Deliver Data Apps with KNIME: Build a UX for Your Data Science.”

Model simulation view component with custom regression and classification model

To learn more about the individual techniques, read the related blog posts listed at the end of this article.

Future posts will cover more techniques in detail!

If you trained a model in Python and want to explain it in KNIME, we recommend “Codeless Counterfactual Explanations for Codeless Deep Learning” on the KNIME Blog.

If you're new to XAI, consider the LinkedIn Learning course “Machine Learning and AI Foundations: Producing Explainable AI (XAI) and Interpretable Machine Learning Solutions” by Keith McCormick. More details at “Learn XAI based on Latest KNIME Verified XAI Components” on the KNIME Blog.

  • “Debug and Inspect your Black Box Model with XAI View” (December 20, 2021, by Paolo Tamagnini, Lada Rudnitckaia, Mahantesh Pattadkal)
  • “How Banks Can Use Explainable AI in Credit Scoring” (September 16, 2021, by Lada Rudnitckaia, Paolo Tamagnini, Sasha Rezvina)
  • “Learn XAI based on KNIME Verified XAI Components” (February 25, 2022, by Paolo Tamagnini, Heather Fyson)