KNIME news, usage, and development

19 Mar 2018, admin

Authors: Rosaria Silipo, Vincenzo Tursi, Kathrin Melcher, and Phil Winters

Hi! My name is Emil and I am a Teacher Bot.

I was built to answer your early questions on how to use KNIME. Pardon!

I was built to point you to the right training material to help you answer your early questions on how to use KNIME.

By the way, I was myself entirely built using KNIME. So, I should know where the right answers lie in the midst of all the tutorials, videos, blog posts, whitepapers, example workflows, and more, which are available out there.

It was not so hard to build me. You just needed:

  • a user interface - possibly web or speech based - for you to ask your question
  • a text parser for me to understand your question
  • a brain to find the right training material to answer your question
  • a user interface to provide the answer back
  • a nice-to-have - but not necessary - feedback option on whether my answer was of any help.

Read more

12 Mar 2018, Kathrin

Regularization can be used to avoid overfitting. But what actually is regularization, what are the common techniques, and how do they differ?

Well, according to Ian Goodfellow [1]

“Regularization is any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error.”

In other words: regularization can be used to train models that generalize better on unseen data, by preventing the algorithm from overfitting the training dataset.

So how can we modify the logistic regression algorithm to reduce the generalization error?

Common approaches are Gauss, Laplace, L1, and L2 regularization. KNIME Analytics Platform supports Gauss and Laplace directly - and thereby L2 and L1 indirectly, since a Gaussian prior is equivalent to an L2 penalty and a Laplace prior to an L1 penalty.
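To see the shrinking effect of a regularization term in action, here is a rough sketch in plain NumPy - not KNIME's actual implementation - of logistic regression trained with and without an L2 (Gaussian) penalty. The toy data, learning rate, and penalty strength are made up purely for illustration:

```python
import numpy as np

def train_logreg(X, y, lam, lr=0.1, steps=2000):
    """Gradient descent on logistic loss plus an L2 penalty lam * ||w||^2 / 2."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted probabilities
        grad = X.T @ (p - y) / len(y) + lam * w    # loss gradient + L2 term
        w -= lr * grad
    return w

# Tiny linearly separable toy dataset (illustrative only)
X = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, 0.5], [3.0, 1.5]])
y = np.array([0, 0, 1, 1])

w_unreg = train_logreg(X, y, lam=0.0)  # no regularization: weights grow freely
w_reg   = train_logreg(X, y, lam=1.0)  # L2 penalty: weights are shrunk toward zero

print(np.linalg.norm(w_unreg), np.linalg.norm(w_reg))
```

On separable data like this, the unregularized weights keep growing to push training error down, while the penalized weights stay small - smaller coefficients mean a smoother decision boundary and, typically, better generalization.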

Read more

05 Mar 2018, berthold

Systems that automate the data science cycle have been gaining a lot of attention recently. Similar to smart home assistant systems, however, automating data science for business users only works for well-defined tasks. We do not expect home assistants to have truly deep conversations about changing topics. In fact, the most successful systems restrict the types of possible interactions heavily and cannot deal with vaguely defined topics. Real data science problems are similarly vaguely defined: only an interactive exchange between the business analysts and the data analysts can guide the analysis in a new, useful direction, potentially sparking interesting new insights and further sharpening the analysis.

Therefore, as soon as we leave the realm of completely automatable data science sandboxes, the challenge lies in allowing data scientists to build interactive systems, interactively assisting the business analyst in her quest to find new insights in data and predict future outcomes. At KNIME we call this “Guided Analytics”. We explicitly do not aim to replace the driver (or totally automate the process) but instead offer assistance and carefully gather feedback whenever needed throughout the analysis process. To make this successful, the data scientist needs to be able to easily create powerful analytical applications that allow interaction with the business user whenever their expertise and feedback is needed.

Read more

26 Feb 2018, rs

This blog post is an extract from chapter 6 of the book “From Words to Wisdom. An Introduction to Text Mining with KNIME” by V. Tursi and R. Silipo, to be published in March 2018 by the KNIME Press. The book will be premiered at the KNIME Summit in Berlin in March.


Word embedding, like document embedding, belongs to the text preprocessing phase - specifically, to the part that transforms a text into a row of numbers.

In the KNIME Text Processing extension, the Document Vector node transforms a sequence of words into a sequence of 0/1 values - or frequencies - based on the presence or absence of each word in the original text. This is also called "one-hot encoding". One-hot encoding, though, has two big problems:

  • it produces a very large data table with the possibility of a large number of columns;
  • it produces a very sparse data table with a very high number of 0s, which might be a problem for training certain machine learning algorithms.

The Word2Vec technique was therefore conceived with two goals in mind:

  • reduce the size of the word encoding space (embedding space);
  • compress in the word representation the most informative description for each word.

Interpretability of the embedding space becomes secondary.

Read more

19 Feb 2018, admin

In this blog series we’ll be experimenting with the most interesting blends of data and tools. Whether it’s mixing traditional sources with modern data lakes, open-source devops on the cloud with protected internal legacy tools, SQL with noSQL, web-wisdom-of-the-crowd with in-house handwritten notes, or IoT sensor data with idle chatting, we’re curious to find out: will they blend? Want to find out what happens when IBM Watson meets Google News, Hadoop Hive meets Excel, R meets Python, or MS Word meets MongoDB?

Follow us here and send us your ideas for the next data blending challenge you’d like to see at

Today: Chinese meets English meets Thai meets German meets Italian meets Arabic meets Farsi meets Russian. Around the world in eight languages

Authors: Anna Martin, Hayley Smith, and Mallika Bose

The Challenge

No doubt you are familiar with the adventure novel "Around the World in 80 Days", in which British gentleman Phileas Fogg makes a bet that he can circumnavigate the world in 80 days. Today we will be attempting a similar journey. However, ours is unlikely to be quite as adventurous as the one Phileas made. We won't be riding elephants across the Indian mainland, nor rescuing our travel companion from the circus. And we certainly won't be getting attacked by Sioux warriors!

Our adventure will begin from our offices on Lake Constance in Germany. From there we will travel down to Italy, stopping briefly to see the Colosseum. Then across the Mediterranean to see the Pyramids of Egypt and on through the Middle East to the ancient city of Persepolis. After a detour via Russia to see the Red Square in Moscow, our next stop will be the serene beaches of Thailand for a short break before we head off to walk the Great Wall of China (or at least part of it). On the way home, we will stop in and say hello to our colleagues in the Texas office.

Like all good travelers, we want to stay up-to-date with the news the entire time. Our goal is to read the local newspapers … in the local language of course! This means reading news in German, Italian, Arabic, Farsi, Chinese, Russian, Thai, and lastly, English. Impossible you say? Well, we’ll see.

The real question is: will all those languages blend?

Topic. Blending news in different languages

Challenge. Will the Text Processing nodes support all the different encodings?

Access Mode. Text Processing nodes and RSS Feed Reader node

Read more

12 Feb 2018, Jeany

Exploration, analysis, visualization: this article highlights these capabilities in KNIME Analytics Platform using sunburst charts, tag clouds, and networks. We'll use life-science data for this blog post, but all of this can be applied to diverse kinds of datasets. So if you have a different background, we warmly invite you to keep reading. Fair warning though: if you are snacking in front of your computer, you might want to swallow first.

Before we dive into the workflow itself, here’s a bit of background information on the problem. Many human diseases are caused by genetic factors. Learning more about these factors is important, because the insight we gain can improve the chances of finding cures and help guide treatment decisions. Here we want to show an example of how to investigate disease-related genes.

First, we’ll give a quick general overview of the workflow (see Fig.1) and then explain each step using a particular disease as an example. The interactive views we describe here are accessible in two ways: via the KNIME WebPortal, and by showing the interactive view of the wrapped metanodes in KNIME Analytics Platform. The interactive views are practical because even someone who is not a KNIME user can use the workflow to interactively explore data and generate knowledge.

Figure 1. Workflow to Interactively Investigate Disease Genes. Interactive views are generated by the wrapped metanodes and are accessible via the KNIME WebPortal or via the node view. The workflow is available on the KNIME EXAMPLES server under 03_Visualization/02_JavaScript/10_DiseaseGenes* .

Read more

31 Jan 2018, jonfuller


In my previous blog post “Learning Deep Learning”, I showed how to use the KNIME Deep Learning - DL4J Integration to predict the handwritten digits from images in the MNIST dataset. That’s a neat trick, but it’s a problem that has been pretty well solved for a while. What about trying something a bit more difficult? In this blog post I’ll take a dataset of images from three different subtypes of lymphoma and classify the image into the (hopefully) correct subtype.

KNIME Deep Learning - Keras Integration brings new deep learning capabilities to KNIME Analytics Platform. You can now use the Keras Python library to take advantage of a variety of different deep learning backends. The new KNIME nodes provide a convenient GUI for training and deploying deep learning models while still allowing model creation/editing directly in Python for maximum flexibility.

The workflows mentioned in this blog post require a fairly heavy amount of computation (and waiting), so if you're just looking to check out the new integration, see the simple workflow here that recapitulates the results of the previous blog post using the new Keras Integration. There are quite a few more example workflows for both DL4J and Keras, which can be found in the relevant section of the Node Guide.

Right, back to the challenge. Malignant lymphoma affects many people, and among malignant lymphomas, CLL (chronic lymphocytic leukemia), FL (follicular lymphoma), and MCL (mantle cell lymphoma) are difficult for even experienced pathologists to accurately classify. A typical task for a pathologist in a hospital would be to look at those images and make a decision about what type of lymphoma is present. In many cases, follow-up tests to confirm the diagnosis are required. An assistive technology that can guide the pathologist and speed up their job would be of great value. Freeing up the pathologist to spend their time on those tasks that computers can't do so well has obvious benefits for the hospital, the pathologist, and the patients.

Figure 1. The modeling process adopted to classify lymphoma images. At each stage the required components are listed.

Read more

22 Jan 2018, Kathrin

Watched all the trilogies on Netflix already? Then it’s time to change channels to KNIME TV on YouTube!

The brand new trilogy, bringing logistic regression with KNIME to your screen, is finally available in its entirety!

Call your friends, grab your popcorn and be the first to watch all three parts!

The first movie introduces the trilogy's greatest character: the algorithm behind the Logistic Regression Learner node. The second film draws you into the intricacies of the algorithm's configuration in the KNIME Learner node, while the final movie sees a happy ending, featuring the various options for memory handling and the output order of coefficients.

Read more

15 Jan 2018, admin

Authors: Daria Goldmann and Greg Landrum

In a recent blog post, we discussed creating web services using KNIME Analytics Platform and KNIME Server - now we want to look at calling web services with KNIME.

Since this post is from the Life Sciences team at KNIME and we’ve been investigating ChEMBL web services recently, we’d like to use them as an example here. Please note that there is a set of community KNIME nodes for accessing ChEMBL and ChEBI and we are intentionally duplicating some of that functionality here.

ChEMBL itself is a great Open Data resource. It provides a large collection of linked information on compounds and their structures, biological targets and their sequences, and biological assays and their experimental details. The data are largely collected from scientific publications, with each entry in the database represented by a unique identifier - a ChEMBL ID. It's all freely available for download in relational form or can be accessed using a REST API. That's what we look at here.

Don’t stop reading... if you’re from another field and not really interested in ChEMBL or the data it contains! The patterns we use here for interacting with the web services and looking at the results will work for many other RESTful web APIs.
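Outside KNIME, the same REST pattern can be sketched in a few lines of Python: build the resource URL, issue a GET, parse the JSON payload. The endpoint path below follows ChEMBL's documented `/chembl/api/data` scheme; the response body is a trimmed, hard-coded stand-in so the sketch runs without network access (in practice you would fetch `url` with urllib or requests):

```python
import json

chembl_id = "CHEMBL25"  # aspirin
url = f"https://www.ebi.ac.uk/chembl/api/data/molecule/{chembl_id}.json"

# Stand-in for the JSON body an HTTP GET against `url` would return,
# trimmed down to a few fields for illustration:
response_body = json.dumps({
    "molecule_chembl_id": "CHEMBL25",
    "pref_name": "ASPIRIN",
    "molecule_properties": {"full_mwt": "180.16"},
})

# The parsing step is identical for any JSON-returning web service:
record = json.loads(response_body)
print(record["pref_name"], record["molecule_properties"]["full_mwt"])
```

This build-URL / GET / parse-JSON loop is exactly what the GET Request and JSON Path nodes do inside a KNIME workflow, which is why the pattern transfers to other RESTful APIs so readily.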

Read more

08 Jan 2018, admin

In this blog series we’ll be experimenting with the most interesting blends of data and tools. Whether it’s mixing traditional sources with modern data lakes, open-source devops on the cloud with protected internal legacy tools, SQL with noSQL, web-wisdom-of-the-crowd with in-house handwritten notes, or IoT sensor data with idle chatting, we’re curious to find out: will they blend? Want to find out what happens when IBM Watson meets Google News, Hadoop Hive meets Excel, R meets Python, or MS Word meets MongoDB?

Follow us here and send us your ideas for the next data blending challenge you’d like to see at

Today: A Recipe for Delicious Data – Part 2: The new Google Sheets Nodes

Authors: Rene Damyon and Oleg Yasnev

Post Update!

This is the updated version of the original blog post “A Recipe for Delicious Data: Mashing Google and Excel Sheets”, using the new Google Sheets nodes available in KNIME Analytics Platform 3.5.

The Challenge

Remember this blog post from July 2017?

A local restaurant had been keeping track of its business on Excel in 2016 and moved to Google Sheets in 2017. The challenge was then to blend data from both sources to compare business trends in 2016 and 2017, both as monthly totals and as Year To Date (YTD) revenues.

The technical challenge of this experiment was then of the "Will they blend?" type: mashing the data from the Excel and Google spreadsheets into something delicious… and digestible. The data blending was indeed possible and easy for public Google Sheets. However, it became more cumbersome for private Google Sheets, requiring a few external steps for user authentication.

Based on the experience from that blog post, a few dedicated Google Sheets nodes were built and released with KNIME Analytics Platform 3.5. These new nodes connect to, read, write, update, and append cells, rows, and columns in a private or public Google Sheet.

The technical challenge has now become easier: accessing Google Sheets with these new dedicated nodes and mashing the data with data from an Excel sheet. Will they blend?

Topic. Monthly and YTD revenue figures for a small local business.

Challenge. Retrieve data from Google Sheets using the new Google Sheets nodes available in KNIME Analytics Platform 3.5.

Access Mode. Excel Reader node and Google Sheets Reader node for private and public documents.

Read more

Subscribe to KNIME news, usage, and development