Workshops

We are happy to invite you to the following workshops on Feb 26, 9:00–10:30am and 11:00am–12:30pm:

    KNIME Server Workshop

Instructor: Jon Fuller (KNIME)
When: Feb 26, 11:00am–12:30pm
Room: Charlottenburg

The KNIME Server workshop is designed to help existing and prospective KNIME Server customers get the most out of KNIME Server. Learn how to effectively:

  • Share workflows, data and metanodes with colleagues.
  • Offload computationally intensive tasks to dedicated hardware.
  • Schedule tasks to run automatically.
  • Benefit from server administration managed by KNIME.
  • Deploy analytics generated with KNIME Analytics Platform to end-users with KNIME WebPortal.

    KNIME Big Data Workshop

Instructor: Tobias Kötter and Björn Lohrmann (KNIME)
When: Feb 26, 9:00–10:30am and 11:00am–12:30pm
Room: Köpenick

The Big Data workshop is designed to show you solutions for the three Vs of Big Data: Variety, Volume and Velocity.
The workshop will discuss:

  • The three Vs of Big Data.
  • KNIME Big Data Connectors and the database nodes: preconfigured connectors and in-database processing.
  • KNIME Spark Executor: advanced analytics with Apache Spark and MLlib.
  • KNIME Cluster Execution: distributed execution of KNIME workflows or workflow branches.
  • High performance scoring service based on REST and compiled PMML.

    KNIME Image Processing Workshop

Instructors: Christian Dietz and Martin Horn (University of Konstanz, Germany)
When: Feb 26, 9:00–10:30am
Room: Friedrichshain

This workshop will provide a brief introduction to the analysis of images with KNIME Image Processing. We will walk through several use cases from various fields including:

  • BioImage Analysis and Classification
  • 3D Deconvolution and Visualization
  • Car Counting
  • Face Detection
  • ...

We will also provide a short overview of the different use cases of KNIME Image Processing (from bio-imaging to face recognition), along with an introduction to ongoing collaborations and future directions. The workshop is recommended for users who are familiar with KNIME Analytics Platform and would like to learn more about KNIME Image Processing.

    SeqAn and OpenMS (CIBI/de.nbi) Integration Workshop

Instructors: Alexander Fillbrunn (University of Konstanz) and Timo Sachsenberg (University of Tübingen)
When: Feb 26, 9:00–10:30am
Room: Kreuzberg

In cooperation with OpenMS from the University of Tübingen and SeqAn from the Freie Universität Berlin, this workshop will provide a brief introduction to the analysis of mass spectrometry and sequence data with KNIME. We will demonstrate how to use the two tools in KNIME to build workflows that retrieve and analyze common life science data, and then use standard KNIME nodes to statistically analyze the results.

The workshop is recommended for users who are familiar with KNIME and who would like to learn more about using OpenMS and SeqAn to process data from life sciences.

    Custom Model Assessment: Moving Beyond PCC and MSE

Instructor: Dean Abbott (Abbott Analytics)
When: Feb 26, 11:00am–12:30pm
Room: Kreuzberg

    Creating workflows for drug-discovery with Open PHACTS and KNIME

Instructor: Daniela Digles (University of Vienna)
When: Feb 26, 9:00–10:30am
Room: Charlottenburg

In this workshop, we will introduce participants to ways of accessing the Open PHACTS Discovery Platform. We will first give an overview of the available API calls, followed by an example workflow.

Requirements for the workshop: you will need a laptop with KNIME Analytics Platform pre-installed (including the REST and JSON extensions).

The Open PHACTS KNIME nodes are available from https://github.com/openphacts/OPS-Knime.
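Outside KNIME, the REST-plus-JSON handling that these nodes and extensions wrap can be sketched in plain Python. Note that the response payload and field names below are hypothetical, for illustration only; the real Open PHACTS Discovery Platform returns its own, richer structure.

```python
import json

# Hypothetical JSON body standing in for an API response -- the actual
# Open PHACTS Discovery Platform output differs; this only illustrates
# the parse-and-extract step a workflow would perform.
sample_response = json.dumps({
    "result": {
        "items": [
            {"compound": "aspirin", "target": "COX-1"},
            {"compound": "aspirin", "target": "COX-2"},
        ]
    }
})

def extract_targets(body: str) -> list[str]:
    """Pull the target names out of the (hypothetical) response above."""
    data = json.loads(body)
    return [item["target"] for item in data["result"]["items"]]

print(extract_targets(sample_response))
```

In a KNIME workflow, the equivalent steps would be a GET Request node followed by JSON Path / JSON to Table nodes rather than hand-written code.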

    High Content Screening Analytics with KNIME (HCS-Tools and Community R Scripting Integration)

Instructor: Antje Janosch (MPI-CBG Dresden)
When: Feb 26, 11:00am–12:30pm
Room: Friedrichshain

The aim of the workshop is to give insights into our KNIME extensions "HCS Tools" and "Scripting Integration". These community nodes provide a wide range of specialised tools for analysing screening data.

We will demonstrate and explain how to

  • load screening data from automated readers
  • add metadata or extract information from it
  • visualize and explore data with plate heatmaps
  • apply different normalization methods
  • calculate quality control statistics (e.g. Z' or SSMD)
  • use the power of R hidden in node templates with a graphical user interface
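As an illustration of one of the quality-control statistics mentioned above, the Z'-factor compares the separation of positive and negative control wells to their spread; a minimal sketch in Python (the readout values here are made up for illustration):

```python
from statistics import mean, stdev

def z_prime_factor(pos_controls, neg_controls):
    """Z' = 1 - 3 * (sd_pos + sd_neg) / |mean_pos - mean_neg|.

    Values close to 1 indicate that the positive and negative
    controls are well separated relative to their variability.
    """
    separation = abs(mean(pos_controls) - mean(neg_controls))
    return 1 - 3 * (stdev(pos_controls) + stdev(neg_controls)) / separation

# Made-up readouts for one plate's control wells.
positives = [95, 100, 105]
negatives = [5, 10, 15]
print(round(z_prime_factor(positives, negatives), 3))  # 0.667
```

In the workshop's setting, these statistics are computed per plate by the HCS Tools nodes rather than by hand.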