12 Nov 2018 | paolotamag

Author: Paolo Tamagnini

The first step in data science is always data exploration, where we try to understand individual attributes and the relationships between them. Such exploratory analysis can be of two kinds: univariate and multivariate. Here, we will limit the multivariate exploration to the bivariate case.

The univariate case considers data columns individually, while the bivariate case takes into account one pair of columns at a time.
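To make the distinction a bit more concrete, here is a hypothetical pandas sketch of the two kinds of exploration; the post itself works with KNIME nodes, and the file path and column names below are invented for illustration.

    # Hypothetical illustration of univariate vs. bivariate exploration with
    # pandas; the blog post performs these steps with KNIME nodes instead.
    import pandas as pd

    df = pd.read_csv("data.csv")  # placeholder path, not from the post

    # Univariate: each column on its own (distribution, center, spread)
    print(df["age"].describe())   # count, mean, std, quartiles
    print(df["age"].skew())       # asymmetry of the distribution

    # Bivariate: one pair of columns at a time (co-variation)
    print(df[["age", "income"]].corr())            # correlation for two numeric columns
    print(pd.crosstab(df["gender"], df["churn"]))  # contingency table for two categoricals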

Read more


05 Nov 2018 | admin

Authors: Jeany Prinz & Oleg Yasnev

Today we will have a blast by travelling ~270,000 years back in time. And we won’t even need a DeLorean, just our trusty KNIME Analytics Platform. We will return to the Ice Age, examine genetic material retrieved from a cave, and solve the mystery of which species the DNA came from.

To investigate this ancient DNA, we will turn to one of the most widely used bioinformatics applications: BLAST (Basic Local Alignment Search Tool). We will call BLAST’s RESTful interface to submit searches from inside KNIME Analytics Platform. This post speaks directly to people interested in bioinformatics. At the same time, because it covers the technical aspects of handling asynchronous REST operations, it will also be useful for readers in other fields. So, if you have an interesting web application you want to use within KNIME - especially one whose resource methods take a long time to execute - keep reading!
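To give a flavor of what "asynchronous REST operations" means in practice, here is a minimal Python sketch of the submit-then-poll pattern against NCBI's public BLAST URL API. The post implements this pattern with KNIME REST nodes; the parameter names below follow NCBI's documentation at the time of writing, but treat the snippet as an illustration and check the current API reference before relying on it.

    # Sketch of the submit-then-poll pattern for an asynchronous REST service,
    # using NCBI's public BLAST URL API as the example. Verify parameter names
    # against the current NCBI documentation; the query sequence is a placeholder.
    import re
    import time
    import requests

    BLAST_URL = "https://blast.ncbi.nlm.nih.gov/Blast.cgi"
    query = "ATGCGTACGTTAGC"  # placeholder sequence, not the one from the post

    # 1. Submit the search; the service replies immediately with a request ID (RID).
    resp = requests.post(BLAST_URL, data={
        "CMD": "Put", "PROGRAM": "blastn", "DATABASE": "nt", "QUERY": query,
    })
    rid = re.search(r"RID = (\S+)", resp.text).group(1)

    # 2. Poll until the asynchronous job has finished.
    while True:
        time.sleep(30)  # be polite; BLAST jobs can take minutes
        status = requests.get(BLAST_URL, params={
            "CMD": "Get", "RID": rid, "FORMAT_OBJECT": "SearchInfo",
        })
        if "Status=READY" in status.text:
            break

    # 3. Fetch the finished results.
    results = requests.get(BLAST_URL, params={"CMD": "Get", "RID": rid, "FORMAT_TYPE": "XML"})
    print(results.text[:500])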

Read more


29 Oct 2018 | admin

Author: Srinivas Attili, Director – Marketing Analytics & Data Science at Juniper Networks

Organizations often adopt tools and systems in response to emerging technology trends and their own organic needs. As a result, a company’s technology landscape frequently resembles a forest of tools crowded into one place, with each tool different from the next.

The information created by these diverse tools is often known only to a small group of people within the company and remains isolated from the pools of information created by other systems. These unconnected islands of information need to be brought together and synthesized to derive insights that could never be realized from siloed data.

Read more


22 Oct 2018 | admin

Authors: Marten Pfannenschmidt and Paolo Tamagnini

There are two main analytics streams when it comes to social media: the topic and tone of the conversations and the network of connections. You can learn a lot about a user from their connection network!

Let’s take Twitter, for example. The number of followers is often assumed to be an index of popularity. Furthermore, the number of retweets quantifies the popularity of a topic. The number of crossed retweets between two connections indicates the liveliness and strength of that connection. And there are many more such metrics.
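As a rough illustration of how a "crossed retweets" metric could be computed, here is a small Python sketch; the column names and sample data are made up, and the post itself builds this analysis as a KNIME workflow.

    # Illustrative sketch (not the post's KNIME workflow) of counting "crossed"
    # retweets between two users, i.e. how often each has retweeted the other.
    # Column names and sample rows are assumptions for demonstration only.
    import pandas as pd
    from itertools import combinations

    # One row per retweet: who retweeted whom
    retweets = pd.DataFrame({
        "retweeter":       ["alice", "bob", "alice", "carol", "bob"],
        "original_author": ["bob", "alice", "bob", "alice", "carol"],
    })

    # Count retweets per directed pair, then sum both directions per user pair
    directed = retweets.groupby(["retweeter", "original_author"]).size()
    users = sorted(set(retweets["retweeter"]) | set(retweets["original_author"]))
    crossed = {}
    for a, b in combinations(users, 2):
        crossed[(a, b)] = directed.get((a, b), 0) + directed.get((b, a), 0)

    print(crossed)  # strength of each connection; the basis of a chord diagram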

The @KNIME account on Twitter has more than 4,500 followers (as of October 2018): the social niche of the real-life KNIME community. How many of them are expert KNIME users, how many are data scientists, and how many are attentive followers of the posted content?

Let’s look at the 20 most active followers of @KNIME on Twitter and arrange them on a chord diagram (Fig. 1).

Are you one of them?

Read more


08 Oct 2018 | admin

In this blog series we’ll be experimenting with the most interesting blends of data and tools. Whether it’s mixing traditional sources with modern data lakes, open-source devops on the cloud with protected internal legacy tools, SQL with noSQL, web-wisdom-of-the-crowd with in-house handwritten notes, or IoT sensor data with idle chatting, we’re curious to find out: will they blend? Want to find out what happens when IBM Watson meets Google News, Hadoop Hive meets Excel, R meets Python, or MS Word meets MongoDB?

Follow us here and send us your ideas for the next data blending challenge you’d like to see at willtheyblend@knime.com.

Today: KNIME Meets KNIME - Will They Blend?

Author: Phil Winters

The Challenge

Imagine you have happily been receiving and using a new version of KNIME Analytics Platform, with all of its additional features and functionality, twice a year for many, many years.

But one day, you are required by your organization to pull out something from your distant KNIME past – something that at the time was very, very important and that needs to work EXACTLY the same way today.

But you’ve heard all the horror stories about other data science tools and platforms that have changed so fundamentally between versions (sometimes yearly) that a time-consuming migration (or even a rewrite) is required to get the old code working again. And of course, these vendors guarantee neither backward compatibility nor that the results will be the same, even when you do get it to work. But what about KNIME? Will the old easily blend with the new? That is our challenge today!

Topic. Backward compatibility of KNIME Workflows

Challenge. Reuse the oldest KNIME workflow available in today’s current KNIME version

Read more


08 Oct 2018 | Peter

Author: Peter Ohl

We are often asked two things about data privacy in KNIME:

  • How are data handled in the open source KNIME Analytics Platform?
  • Who has access to the data that are processed?

Before diving into the details, let’s first put this into context:

KNIME Analytics Platform is open source and has a huge community contributing to its functionality by developing many Community Extensions.

Read more


01 Oct 2018 | admin

Authors: Maarit Widmann and Moritz Heine

Ever been skewed by the presence of outliers in your dataset? Anomalies, or outliers, can be a serious issue when training machine learning algorithms or applying statistical techniques. They are often the result of measurement errors or exceptional system conditions and therefore do not describe the common functioning of the underlying system. Hence, it is usually best practice to implement an outlier removal phase before proceeding with further analysis.

But hold on there! In some cases, outliers can point to localized anomalies in the whole system, so detecting them is a valuable process in itself because of the additional information they provide about your dataset.

There are many techniques to detect and, optionally, remove outliers from a dataset. In this blog post, we show how to implement four of the most frequently used techniques for outlier detection - both traditional and novel - in KNIME Analytics Platform.
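To give a concrete feel for what such a technique looks like, here is a small Python sketch of one widely used approach, the interquartile-range (IQR) rule. The post builds its techniques as KNIME workflows, so this snippet is purely illustrative and uses made-up values.

    # Sketch of one common outlier-detection technique: the interquartile-range
    # (IQR) rule with Tukey's fences. Values are invented for demonstration.
    import numpy as np

    values = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 25.0, 9.7, 10.3, -3.0])

    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # Tukey's fences

    outliers = values[(values < lower) | (values > upper)]
    cleaned = values[(values >= lower) & (values <= upper)]

    print("Outliers:", outliers)  # 25.0 and -3.0 fall outside the fences
    print("Kept:", cleaned)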

Read more


24 Sep 2018 | daria.goldmann

Author: Daria Goldmann

About a year ago, we told a beautiful story about how KNIME Analytics Platform can be used to automate an established modeling process with the KNIME Model Factory. Recently, our Life Science team faced the exhausting and daunting exercise of building, validating, and scoring models for more than 1,500 data sets.

Read more


17 Sep 2018 | admin

Author: Jim Falgout

You’ve built a predictive model using KNIME Analytics Platform. It’s a very good model. Maybe even an excellent model. You want others to take advantage of your hard work by applying their data to your model. Let’s build an API for that!

An API is an Application Programming Interface: a way to interact with a computer program programmatically, i.e., by writing some code. A REST API is a specific kind of API used in the world of web service development. REST APIs typically pass data around in a format known as JSON.

Here are a few reasons for building a REST API for the application of your model:

  • Integrate the application of your model with your company’s web site
  • Integrate the application of your model with business processes in your company
  • Share the application of your model with the outside world (with some controls on top)
  • Sell the application of your model as a service

As you can see from these example use cases, APIs are all about sharing and integration.
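As a rough sketch of what consuming such an API might look like from a client’s point of view, here is a hypothetical Python example; the endpoint URL, credentials, and JSON field names are placeholders, not the actual interface described in the post.

    # Hypothetical client-side call to a model exposed as a REST API. The URL,
    # authentication, and JSON fields are placeholders for illustration only.
    import requests

    endpoint = "https://api.example.com/models/churn-predictor"  # placeholder URL

    # Input data travels to the model as JSON in the request body...
    payload = {"input-data": [{"age": 42, "income": 55000, "num_purchases": 7}]}

    response = requests.post(endpoint, json=payload, auth=("user", "password"))
    response.raise_for_status()

    # ...and the predictions come back as JSON as well.
    predictions = response.json()
    print(predictions)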

Read more


10 Sep 2018 | Jeany

Author: Jeanette Prinz

In a previous blog post, I discussed visualizations in KNIME Analytics Platform. Having recently moved to Berlin, I have been paying more attention to street graffiti. So today, we will be learning how to tag.

...just kidding. Sort of.

Our focus will be on tagging, but the text-mining (rather than street art) variety: We will learn how to automatically tag disease names in biomedical literature.

Introduction

The rapid growth in the amount of biomedical literature becoming available makes it impossible for humans alone to extract and exhaust all of the useful information it contains. There is simply too much there. Despite our best efforts, many things would fall through the cracks, including valuable disease-related information. Hence, automated access to disease information is an important goal of text-mining efforts [1]. This enables, for example, integration with other data types and the generation of new hypotheses by combining facts that have been extracted from several sources [2].

In this blog post, we will use KNIME Analytics Platform to create a model that learns disease names in a set of documents from the biomedical literature. The model has two inputs: an initial list of disease names and the documents. Our goal is to create a model that can tag disease names that are part of our input as well as novel disease names. Hence, one important aspect of this project is that our model should be able to autonomously detect disease names that were not part of the training.

To do this, we will automatically extract abstracts from PubMed and use these documents (the corpus) to train our model starting with an initial list of disease names (the dictionary). We then evaluate the resulting model using documents that were not part of the training. Additionally, we test whether the model can extract new information by comparing the detected disease names to our initial dictionary.
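To make the dictionary-tagging step a bit more tangible, here is a minimal Python sketch of that step alone; the post goes further and trains a model that generalizes to unseen disease names, and the dictionary entries and abstract text below are invented for illustration.

    # Minimal sketch of dictionary-based tagging only (the post additionally
    # trains a model that detects disease names missing from the dictionary).
    # Dictionary entries and the abstract text are illustrative, not from the post.
    import re

    dictionary = ["diabetes mellitus", "asthma", "melanoma"]  # initial disease names

    abstract = ("Patients with diabetes mellitus and asthma were compared to a "
                "control group without chronic disease.")

    # Build one case-insensitive pattern per dictionary entry and record matches
    tags = []
    for disease in dictionary:
        for match in re.finditer(r"\b" + re.escape(disease) + r"\b", abstract, re.IGNORECASE):
            tags.append((match.start(), match.end(), disease))

    for start, end, disease in sorted(tags):
        print(f"{start:4d}-{end:4d}  DISEASE  {abstract[start:end]}")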

Read more

