Workshop Schedule
All times are in CST (UTC-6, Chicago).
Registering for a workshop is simple. Select the free "Community Day - Friday" ticket when registering, and all details on how to join your chosen workshop will be sent to you via email. This Community Day is part of the virtual KNIME Fall Summit. You can find all the details on the full program here.
Workshop Descriptions
Sharing and Deploying Data Science with KNIME Server
Roland Burger (KNIME) at 10 AM - 11 AM UTC -6 (Chicago)
Marten Pfannenschmidt (KNIME) at 11 AM - 12 PM UTC -6 (Chicago)
Are you currently using the open source KNIME Analytics Platform but want to work across teams and business units? KNIME Server is the enterprise software for team-based collaboration, automation, management, and deployment of data science workflows. Non-experts can access data science via KNIME Server and WebPortal, or use REST APIs to integrate workflows as analytics services into applications. At this workshop, we’ll introduce you to KNIME Server’s capabilities and cover everything you need to manage your analytics at scale: deploying your workflows for sharing and collaboration, scheduling and automating tasks, templating, and version control. We’ll demonstrate the power of the KNIME Server REST API and WebPortal - the ideal way to bring data analytics to your business users.
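Calling a workflow as an analytics service boils down to an authenticated HTTP request against the server's repository API. Here is a minimal sketch that only builds (and does not send) such a request; the server URL, workflow path, and credentials are hypothetical, and the `:execution` endpoint path is an assumption about the REST API version - check your server's REST documentation for the exact routes.

```python
import base64
import urllib.request

def build_execution_request(server_url, workflow_path, user, password):
    """Build (but do not send) a POST request that would trigger a
    workflow run via the server's repository REST API."""
    url = f"{server_url}/rest/v4/repository{workflow_path}:execution"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url, method="POST")
    req.add_header("Authorization", f"Basic {token}")
    return req

# Hypothetical server and workflow, for illustration only.
req = build_execution_request(
    "https://knime.example.com/knime", "/Examples/My_Workflow",
    "analyst", "secret")
print(req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen`) would then return the workflow's results as the service response.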
KNIME Pros Learnathon: Building Reliable and Reusable Components
Maarit Widmann, Temesgen H. Dadi, Paolo Tamagnini & Mahantesh “Monty” Vishvanath Pattadkal (KNIME)
10 AM - 12 PM UTC -6 (Chicago)
Workshop materials: slides main session, slides group 1, slides group 2, slides group 3
Come to our new Learnathon for Advanced Users of KNIME! Today's topic is... components! Are you ready to learn how to build a component and give it its own configuration window and/or its own composite view? In this session you will learn how to build and share reliable, user-friendly components that act just like standard KNIME nodes.
The learnathon will begin with a detailed introduction to components and related KNIME features, then we will split into three groups. Each group will work on a different category of Verified Components, focusing on use cases. Pick your group beforehand to learn how to build a component that is related to the field of most interest to you:
Financial Analysis: Solve exercises with Maarit Widmann, our expert in finance analytics, to learn how to build and share components for the analysis and preparation of financial and accounting data. We’ll transform a workflow for calculating financial KPIs into a tidy component, which you can use in your own workflows and share with colleagues. You can then use the same strategy to create your own components in KNIME!
Life Sciences: Temesgen Dadi, Technical Data Scientist in the KNIME Life Sciences team, will teach how to build and share components for the analysis of biological data. In this group we are going to focus on reading biological sequences (in FASTA format) as a KNIME table. While doing that, we will also add a simple but useful view that enables us to visualize the length distribution of the sequences.
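The FASTA-to-table idea from the Life Sciences group can be sketched in plain Python - a minimal illustration of the parsing step, not the workshop's actual KNIME workflow, with a made-up two-record input:

```python
from collections import Counter

def read_fasta(text):
    """Parse FASTA-formatted text into (header, sequence) rows,
    mimicking the tabular form a KNIME table would hold."""
    records, header, chunks = [], None, []
    for line in text.strip().splitlines():
        if line.startswith(">"):
            if header is not None:
                records.append((header, "".join(chunks)))
            header, chunks = line[1:].strip(), []
        else:
            chunks.append(line.strip())
    if header is not None:
        records.append((header, "".join(chunks)))
    return records

fasta = """>seq1
ACGTACGT
>seq2
ACGT
ACG"""
rows = read_fasta(fasta)
# The length distribution is what the component's simple view would plot.
lengths = Counter(len(seq) for _, seq in rows)
print(rows)     # [('seq1', 'ACGTACGT'), ('seq2', 'ACGTACG')]
print(lengths)  # Counter({8: 1, 7: 1})
```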
Automation: Paolo Tamagnini and Mahantesh Pattadkal, the authors behind the AutoML and XAI View KNIME Verified Components, will guide you through building a simple automation strategy for a machine learning regression task.
Download workshop content here.
Workshop goals:
- Build a component configuration dialog
- Make the component fail with a meaningful error message when needed
- Develop logic to conditionally execute nodes based on the user's dialog settings
- Build a component interactive composite view
- Edit the metadata of the component and its appearance
- Share the component publicly on your KNIME Hub space
KNIME Big Data Workshop
Tobias Kötter (KNIME)
10 AM - 11 AM UTC -6 (Chicago)
At this workshop you’ll learn how to perform data wrangling and advanced analytics on big data with the KNIME Big Data Extensions. We’ll show you how to do in-database processing on Apache Hive/Impala, perform advanced analytics with Apache Spark (either locally or on Cloudera, Databricks, ...), and how to run KNIME workflows directly on Spark.
KNIME for Educators: How to Teach the Newest Features?
Kathrin Melcher & Stefan Helfrich (KNIME)
10 AM - 11 AM UTC -6 (Chicago)
Workshop slides
Are you using KNIME Analytics Platform in your courses? Want to refresh yourself on the most recent changes and how to leverage them? Do you know wrapped metanodes, but are wondering what components are all about? Then this workshop is for you! We’ll start with a live session covering advances in interactive, composite visualizations and how they can be utilized for interactive workflows (aka Guided Analytics), including integrations. Then, Kathrin and Stefan will talk about their personal highlights from the last releases and set you up with available training material for you to reuse in your courses.
Interactive Web Applications
Janina Mothes & Jeany Prinz (KNIME)
10 AM - 11 AM UTC -6 (Chicago)
Are you interested in learning how to build interactive Web Applications on the KNIME WebPortal? Then join Janina and Jeany for this webinar! Using a simple example from the life sciences, they will not only cover basic principles for creating such applications, but also provide expert tips & tricks that will help you create robust workflows for the KNIME WebPortal. Independent of your background and skill level, this workshop will enable you to efficiently build engaging interactive web applications.
KNIME Text Mining with NER Modeling
Julian Bunzel (KNIME)
10 AM - 11 AM UTC -6 (Chicago)
Workshop slides
In the digital era, where the majority of information is text-based, text mining plays an important role in extracting useful information and uncovering patterns and insight in otherwise unstructured data. Join Julian Bunzel, data scientist at KNIME, to learn how to train your own customized named entity recognition model, apply it to extract entities from text, and create entity relation networks. After his talk, Julian will be available for a Q&A session. Bring your questions to the webinar!
For Developers: The Columnar Table Backend
Marc Bux (KNIME)
10 AM - 11 AM UTC -6 (Chicago)
The new Columnar Table Backend is a complete rewrite of KNIME’s internal table storage, designed to maximize performance. It will be introduced with KNIME Analytics Platform 4.3. In this workshop, you’ll get instructions for installing and testing the new architecture today, together with a new table API. We look forward to an open discussion and are happy to answer your questions.
Codeless Reinforcement Learning: Building a Gaming AI
Corey Weisinger (KNIME)
11 AM - 12 PM UTC -6 (Chicago)
This session will begin by introducing the concept of Reinforcement Learning, as well as some common use cases. After the high-level introduction, a more formal mathematical framework will be introduced. Finally, we'll review and demo the prior ideas by creating a Tic-Tac-Toe playing AI, code-free, in KNIME Analytics Platform.
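The tabular Q-learning update at the heart of such a game-playing agent can be sketched in a few lines. The workshop itself is code-free in KNIME; this is only a conceptual illustration, with a hypothetical board-state label and a reward of +1 for a winning move:

```python
def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in range(9))  # 9 cells
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

Q = {}
# Playing cell 4 (the center) from hypothetical state "s0" wins the game.
q_update(Q, "s0", 4, 1.0, "terminal")
print(Q[("s0", 4)])  # 0.5
```

Repeating the update moves the estimate closer to the true value of the winning move; in KNIME the same loop is expressed with loop nodes instead of code.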
Working with the RDKit in KNIME Analytics Platform
Greg Landrum & Alice Krebs (KNIME)
11 AM - 12 PM UTC -6 (Chicago)
Working with chemical data in KNIME? Want to take a step beyond the basics? In this workshop Alice and Greg will go through a couple of examples of common cheminformatics use cases selected to give a broad overview of both what you can do with KNIME and what's possible with the RDKit. In addition to using the RDKit KNIME nodes, we'll also provide examples of how you can use the broader functionality available using KNIME’s excellent Python integration. In this workshop you won't just learn new stuff, you'll also walk out with a couple of useful workflows that you can continue to adapt and use in your own work. If you’ve attended one of these workshops before: don’t worry, we will be doing new examples for this one, so it won’t be a repeat!
Text Mining with Deep Learning
Andisa Dewi (KNIME)
11 AM - 12 PM UTC -6 (Chicago)
Workshop slides
In recent years, deep learning techniques have been shown to produce better results in a wide variety of fields, such as pattern recognition and NLP. In this workshop, we’ll take a look at sentiment analysis, one of the most common tasks in text mining, and see how a deep learning technique (LSTM) can solve it: textual data is preprocessed and transformed into numbers, which are then fed into deep neural networks for prediction. Andisa Dewi, data scientist at KNIME, will present this talk and will be available afterwards to answer your questions about text mining and LSTMs.
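The "text into numbers" step can be sketched with plain Python - a minimal, stdlib-only illustration of tokenizing, building a vocabulary, and padding to a fixed length, which is the input shape an LSTM sentiment model expects (in KNIME this is handled by the Text Processing and deep learning nodes):

```python
def texts_to_padded_ids(texts, max_len=6):
    """Tokenize, build a vocabulary, map tokens to integer ids,
    and pad each sequence to max_len (0 is the padding id)."""
    tokenized = [t.lower().split() for t in texts]
    vocab = {}
    for tokens in tokenized:
        for tok in tokens:
            vocab.setdefault(tok, len(vocab) + 1)  # ids start at 1
    ids = [[vocab[tok] for tok in tokens] for tokens in tokenized]
    padded = [seq[:max_len] + [0] * (max_len - len(seq)) for seq in ids]
    return padded, vocab

padded, vocab = texts_to_padded_ids(
    ["the movie was great", "the movie was terribly boring"])
print(padded)  # [[1, 2, 3, 4, 0, 0], [1, 2, 3, 5, 6, 0]]
```

Each integer row can then be fed to an embedding layer followed by an LSTM for the actual sentiment prediction.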
BERT Text Classification for Everyone
Artem Ryasik (Redfield)
Workshop slides and video
At this webinar, we present our BERT extension for KNIME Software, which lets users - no matter how advanced they are in machine learning - start using BERT out of the box!
In late 2018, Google open-sourced the Bidirectional Encoder Representations from Transformers (BERT) technique. The machine learning community has certainly taken advantage of it: BERT has become a hot topic and is evolving rapidly. Using BERT typically requires advanced proficiency in Python and an understanding of deep learning models. We decided to change that paradigm and reduce the time needed for model development and deployment. KNIME Software is a perfect tool for both development and deployment, so let's mix these two awesome technologies together.