Learn XAI based on Latest KNIME Verified XAI Components

February 25, 2022 — by Paolo Tamagnini & Heather Fyson

Our digital data footprint will surpass 1 yottabyte by 2030. This sheer wealth of data is driving an increasing need for machine learning solutions, which can extract insights from data quickly and intelligently. The crux, however, is that many complex machine learning solutions are black box models: their decision process is difficult, if not impossible, to understand and explain.

When a bank uses complex ML techniques, for example, to develop a credit scorer that determines whether a loan application should be accepted or rejected, it's important that the bank can still understand how its model makes decisions.

Explainable AI (XAI) solutions enable us to look inside the black box, inspect its different parts, and understand how the parts of the model influence the output. Responsible AI solutions go one step further, making sure the consequences of the model output are fair to its stakeholders. 

Data scientists and machine learning professionals need to keep pace with the latest techniques and approaches in this field. In KNIME Analytics Platform you will find both nodes and components that enable explainable and responsible AI.

If you want to build your own custom XAI solutions, take a look at the various Machine Learning Interpretability example workflows on the KNIME Hub, which show you how to use nodes from the KNIME Machine Learning Interpretability extension. You can also adopt pre-built solutions: download the Model Interpretability components to automatically compute and visualize explanations of your custom model.

Keith McCormick, data science consultant, trainer, author, and speaker, has developed a new LinkedIn Learning course: Machine Learning and AI Foundations: Producing Explainable AI (XAI) and Interpretable Machine Learning Solutions.

McCormick’s course draws on a set of publicly available XAI example workflows and KNIME Verified Components developed by the KNIME Evangelism team.

The first half of the course explains the differences between Interpretable Machine Learning, Global XAI, and Local XAI, always placing the techniques in context alongside the demonstrations. These techniques for making ML models more transparent are taught through real use cases, with KNIME as the tool.
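The distinction between global and local explanations can be made concrete with a small sketch. Below is a minimal, hypothetical Python example (not taken from the course, which uses KNIME's no-code components): a global explanation ranks features by permutation importance over the whole dataset, while a local explanation perturbs one feature of a single "applicant" and observes how the predicted score shifts. The toy credit model and feature setup are illustrative assumptions.

```python
# Hypothetical sketch: global vs. local explanations on a toy credit model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy stand-in for a credit-scoring dataset (4 anonymous features).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global XAI: permutation importance ranks features across the whole dataset.
global_imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)

# Local XAI: nudge each feature of ONE applicant by one standard deviation
# and record how the predicted acceptance probability moves.
row = X[0:1].copy()
base_score = model.predict_proba(row)[0, 1]
local_effect = {}
for j in range(X.shape[1]):
    perturbed = row.copy()
    perturbed[0, j] += X[:, j].std()
    local_effect[j] = model.predict_proba(perturbed)[0, 1] - base_score
```

The same idea underlies more sophisticated methods (SHAP, LIME, partial dependence): global techniques summarize the model's behavior over all rows, while local techniques explain a single prediction.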

The KNIME community can look forward to more resources on LinkedIn Learning in the near future. Keith has already published two courses, including an introduction to machine learning with KNIME, and two more are in the pipeline.

Keith McCormick chose the open source KNIME Analytics Platform as his teaching tool because of its visual programming environment. Each step that is implemented to train or explain the model is visually documented. 

“In my courses, I like to focus on the concepts and to keep presentations tool-neutral. I’ve always been impressed with how KNIME just fades into the background so that you can focus on your analysis,” says Keith. “The extensive support and documentation are why I felt that KNIME was the best option for easy no-code open-source demonstrations of XAI concepts in the course.”

Try an example workflow demonstrating the use of components to interpret ML models.
