Deeplearning4J Integration (KNIME 3.5)




The KNIME Deeplearning4J Integration allows you to use deep neural networks in KNIME. The extension consists of a set of new nodes which make it possible to modularly assemble a deep neural network architecture, train the network on data, and use the trained network for predictions. Furthermore, a trained or untrained network can be written to and read from disk, which allows the created networks to be shared and reused.

Currently (KNIME Analytics Platform 3.5), we are using DL4J version 0.9.1.


Installation details can be found here.


The integration uses the open source library Deeplearning4J. The library supports all major deep network architectures and methods as well as training on GPU (CUDA 7.5 and CUDA 8.0) and computations on Apache Spark.

System Requirements

  • 64bit Operating System (and a 64bit version of the KNIME Analytics Platform 3.5)


The usual KNIME Deeplearning4J workflow consists of three main parts:

  • Assemble Network Architecture:

The first step in a deep learning workflow is to create a network architecture. Neural networks are usually represented in a layer-wise fashion, meaning the network architecture is defined by the number and types of layers used. In the KNIME Deeplearning4J Integration each of these layer types is represented as its own node. These layer nodes can be arranged in any number and order in order to create a specific network architecture.

Generally, an architecture starts with the DL4J Model Initializer node. This node creates an empty network to which layers can be added by connecting its output port to the layer node that should be appended. Every layer node has the same type of input and output port; each one extends the architecture arriving at its input port with the layer the node itself represents.
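Under the hood, the layer nodes assemble a Deeplearning4J network configuration. As a rough illustration only (this is not KNIME code, but a sketch of the DL4J 0.9.1 builder API the integration wraps; the concrete layer sizes and loss function are made-up example values):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class ArchitectureSketch {
    public static void main(String[] args) {
        // Start with an empty configuration (the role of the DL4J Model Initializer node)
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(123)
                .list()
                // Each .layer(...) call corresponds to one layer node in the workflow
                .layer(0, new DenseLayer.Builder().nIn(4).nOut(8)
                        .activation(Activation.RELU).build())
                // The output layer is what the Learner node appends automatically
                .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(8).nOut(3).activation(Activation.SOFTMAX).build())
                .build();
        System.out.println(conf.toJson());
    }
}
```

Each layer node in a workflow plays the role of one `.layer(...)` call in such a chain.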

  • Train Network:

After a specific network architecture has been created, the network can be trained on data. For this task we can use a DL4J Learner node and connect it to the created architecture model and the data we want to train on. The dialog of the Learner node offers many parameters to configure the learning process and to apply deep learning methods during training. The result of the Learner node is a trained model of the created architecture.

  • Create Predictions:

Usually, the last step in a deep learning workflow is to use a DL4J Predictor node. This node takes the previously trained model and data as input and calculates the output of the network. Depending on the network parameters, the output can be converted into predictions, which are supplied in the output table of the Predictor node.


An example of the previously described process can be seen in the Example Screenshots section later on this page.


The following nodes are currently contained in the KNIME Deeplearning4J Integration:

DL4J Model Initializer

This node has no inputs and one output port providing a deep learning model. It simply creates an empty model and is used to start a network architecture.

Layer Nodes

These nodes are used to create a network architecture. Every layer node has one input and one output port of a deep learning model. The model at the input port is extended with the specific layer. Each layer type may have layer-specific parameters which can be adjusted in the Layer Node Dialog.
The available Layer Nodes do not contain an output layer, as this layer is automatically added to the configuration by the Learner Node.

Currently, the following layer types are available:

[KNIME 3.2 - 3.5] Convolution, Pooling, Local Response Normalization, Restricted Boltzmann Machine, Autoencoder, Fully Connected


Learner Nodes

Learner Nodes are used to train a network architecture. A Learner Node therefore has two input ports: a deep learning model containing the created architecture, and the data the learner should use for training. The Learner Node additionally extends the created architecture with an output layer, which can be configured in the Learner Node Dialog. Furthermore, all learning-specific parameters, such as the type of optimization algorithm to use or the number of epochs to train, can also be configured in the dialog.
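The effect of parameters such as learning rate and epoch count can be illustrated with a minimal, library-free sketch of gradient-descent training (this is generic example code, not the integration's actual implementation):

```java
public class TrainingSketch {
    public static void main(String[] args) {
        // Fit y = w * x to data generated with w = 2.0, using plain SGD
        double[] xs = {1, 2, 3, 4};
        double[] ys = {2, 4, 6, 8};
        double w = 0.0;             // initial weight
        double learningRate = 0.05; // too large -> divergence, too small -> slow training
        int epochs = 200;           // number of passes over the training data
        for (int e = 0; e < epochs; e++) {
            for (int i = 0; i < xs.length; i++) {
                double err = w * xs[i] - ys[i]; // prediction error for one sample
                w -= learningRate * err * xs[i]; // gradient step on the squared error
            }
        }
        System.out.println(w); // converges close to 2.0
    }
}
```

The same trade-offs apply when choosing these values in the Learner Node Dialog: an unsuitable learning rate or too few epochs leads to a poorly fitted model.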

Currently, the following Learner Nodes are available:

[KNIME 3.2] DL4J Feedforward Learner

[KNIME 3.3 - 3.5] DL4J Feedforward Learner (Classification), DL4J Feedforward Learner (Regression), DL4J Feedforward Learner (Pretraining)


Predictor Nodes

Predictor Nodes are used to calculate the output of a network. A Predictor Node has the same inputs as a Learner Node, with the difference that it expects a previously trained model instead of an untrained one. The node calculates the output activations of the trained network for the data supplied at the table input port and returns them as a collection column. The size of the collection corresponds to the number of outputs of the network.
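For a classification network, turning the raw output activations into a prediction typically amounts to picking the class with the highest activation. A minimal, library-free sketch (the activation values are made up for illustration):

```java
public class PredictionSketch {
    public static void main(String[] args) {
        // One output activation per class, as in the Predictor node's collection column
        double[] activations = {0.1, 0.7, 0.2};
        // Argmax: the predicted class is the index of the largest activation
        int predicted = 0;
        for (int i = 1; i < activations.length; i++) {
            if (activations[i] > activations[predicted]) {
                predicted = i;
            }
        }
        System.out.println("Predicted class index: " + predicted); // prints 1
    }
}
```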

Currently, the following Predictor Nodes are available:

[KNIME 3.2] DL4J Feedforward Predictor

[KNIME 3.3 - 3.5] DL4J Feedforward Predictor (Classification), DL4J Feedforward Predictor (Regression), DL4J Feedforward Predictor (Layer)

Network Training

In deep learning there are many ways to configure the network architecture and the training procedure. It is essential that all parameters are chosen carefully, as a single unsuitable parameter value or method may lead to very poor results. Users unfamiliar with deep learning are therefore encouraged to read up on the specific architecture they want to use. A good place to start is the DL4J documentation, which can be found here.


Generally in deep learning, the creation of a network architecture as well as the tuning of its parameters, in order for it to suit a specific problem, can be very challenging. Fortunately, some general issues on network debugging are already covered by the DL4J documentation, which is available here.

Off Heap Memory Limit

In addition to the Java heap space, DL4J uses off heap memory to store all data required for model learning and inference. Similar to the normal Java heap space (which can be configured using the -Xmx option in the knime.ini), an upper limit can be set on the amount of off heap memory DL4J is allowed to use. In KNIME Analytics Platform 3.5, this value is configurable via the Deeplearning4J Integration preference page. It is important to choose a sensible value, depending on the amount of available main memory (RAM), in order to prevent out-of-memory errors.

It is recommended to use at least 2GB of off heap memory, which is the default value. Generally, the value should be equal to or larger than the Java heap memory limit. Furthermore, it is important not to drastically decrease the Java heap memory, as it is used by all other KNIME nodes which are not part of the Deeplearning4J Integration.
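In KNIME 3.5 the preference page should be used, but for reference, DL4J's off heap limit corresponds to JavaCPP system properties. Setting the limits manually in a knime.ini would look roughly like the following (the concrete sizes are example values only; adjust them to your machine's RAM):

```
-Xmx2g
-Dorg.bytedeco.javacpp.maxbytes=4G
-Dorg.bytedeco.javacpp.maxphysicalbytes=6G
```

Note how the off heap limit (maxbytes) is chosen larger than the Java heap (-Xmx), in line with the recommendation above.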

If you are using GPU for calculations, the off heap memory limit should be configured to match the amount of available GPU memory.

For more detailed information on DL4J off heap memory, please refer to the official DL4J documentation.

GPU Support

The KNIME Deeplearning4J Integration supports using GPU for network training. By default CPU is used for calculations. This option can be changed in the KNIME Deeplearning4J Integration preference page.

In order to use the GPU with the Deeplearning4J Integration you need a CUDA-compatible graphics card and CUDA installed on your system. Currently, CUDA versions 7.5 and 8.0 are supported. The choice of CUDA version depends on the GPU used. To find out which CUDA version is supported by your graphics card and how to install CUDA, see CUDA Zone.

In order to check the installed CUDA version or whether CUDA is installed at all you can run the following command:

    nvcc -V  

If the command can't be found by the system, the CUDA Toolkit is most likely not installed. Otherwise, the output should look similar to the following:

    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2016 NVIDIA Corporation
    Built on Wed_May__4_21:03:03_CDT_2016
    Cuda compilation tools, release 8.0, V8.0.26

The last line shows the installed CUDA Toolkit version.

If GPU is selected in the Deeplearning4J Integration preference page but no compatible CUDA version is installed, the integration automatically falls back to using the CPU.


If you encounter CUDA errors, there may be something wrong with the CUDA setup. Such an error may look like the following:

    CUDA error code=13(cudaErrorInvalidSymbol) "result"

This means that your CUDA version does not match the architecture of your GPU.


The KNIME Deeplearning4J Integration additionally contains the following extensions:

Example Screenshots



Example workflows can be found on the public Example Server.

Known Issues

Input Data for Deep Learning

In order to train deep networks, the data needs to be converted into a format the neural network understands, which is usually a flat numeric vector. This is done automatically by all nodes of the Deeplearning4J Integration. By default, the Deeplearning4J library converts the inputs to single precision. This means that double-precision input will lose precision during conversion. In order to prevent problems, it is recommended to use only single-precision input for the nodes.
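The precision loss from the double-to-float conversion can be demonstrated directly in plain Java, independent of DL4J (the example value is arbitrary):

```java
public class PrecisionSketch {
    public static void main(String[] args) {
        double original = 0.123456789012345; // a double carries ~15-16 significant decimal digits
        float converted = (float) original;  // a float carries only ~7 significant decimal digits
        System.out.println(original);
        // Widening back to double does not restore the lost digits:
        System.out.println((double) converted); // differs from 'original' after about 7 digits
    }
}
```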

KNIME crash when executing Learner or Predictor Node on Windows [KNIME 3.2 only]

This error most likely results from conflicting DLLs on Windows. For calculations, DL4J can use one of the following external libraries: Intel MKL, OpenBLAS, or ATLAS. By default the KNIME Deeplearning4J Integration ships with OpenBLAS. If another of the mentioned libraries is on the system path, this results in a DLL conflict with OpenBLAS which crashes KNIME when a Learner or Predictor Node is executed. This happens, for example, if Anaconda Python is installed, as this adds MKL DLLs to the system path.

A workaround is to remove the conflicting library from the path.

This problem is fixed in KNIME 3.3.


The KNIME Deeplearning4J Integration is available under the same license as KNIME.

