
Continuous Delivery of Data Science

May 31, 2021
ModelOps & deployment

The article below is based on Jim Falgout’s contribution to the AWS whitepaper MLOps: Continuous Delivery for Machine Learning on AWS.

The concept of continuous delivery in the software world—writing software in such a way that changes can be merged and deployed continuously and automatically—changed how software is developed. Now, the data science world is applying this concept to machine learning, which comes with new challenges.

Here we look at how an end-to-end software environment facilitates collaboration across a multifaceted team during model development and training, supports continuous monitoring of the model in production, triggers retraining when needed, and automates deployment to the production environment.
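
To make this loop concrete, here is a minimal sketch in Python. The metric source, the threshold, and the retraining and deployment hooks are illustrative placeholders, not a specific KNIME or AWS API.

```python
# Minimal sketch of the monitor -> retrain -> redeploy loop described above.
# All three hooks are placeholders for whatever your monitoring, training,
# and deployment infrastructure actually provides.

def production_accuracy() -> float:
    """Placeholder: fetch the live model's current accuracy from monitoring."""
    return 0.87  # in practice, read this from a metrics store

def retrain_model() -> str:
    """Placeholder: rerun the training workflow and return a model version."""
    return "model-v2"

def deploy(model_version: str) -> None:
    """Placeholder: push the retrained model to the production environment."""
    print(f"Deployed {model_version}")

ACCURACY_THRESHOLD = 0.90  # retrain when live performance drops below this

if production_accuracy() < ACCURACY_THRESHOLD:
    new_version = retrain_model()
    deploy(new_version)  # no manual handover between teams
```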

Handle Multiple Technologies and Enable Seamless Integration

One of the challenges of continuous delivery for machine learning is that each person in the process has a different task and needs to be able to work seamlessly across all phases of the project.

Teams therefore need to be able to work in a single environment that can handle a wide and complex variety of technologies, all while empowering a wide range of skill levels. Stakeholders can mix and match the technologies that make the most sense for the problem at hand or use the technologies they have experience working with.

KNIME software provides a single, integrated environment that covers all data science operations. As new features and functionality are developed, these will always work seamlessly within the existing paradigm.

  • The open source KNIME Analytics Platform enables data engineers and data scientists to get started with their data analysis, data integration, transformation, model training, and optimization. It is a visual, low-to-no-code development environment that integrates multiple technologies and accommodates a wide range of skills and resources.

  • KNIME Business Hub complements KNIME Analytics Platform to provide a collaborative environment, automation via job scheduling, and a REST API for executing workflows. Production workflows can be automatically deployed and exposed to other applications as services. Deployed workflows are versioned and changes can be tracked over time.
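
For example, a deployed workflow can be called from any application via a plain HTTP request. The snippet below is only a sketch: the base URL, deployment path, and token are hypothetical placeholders, and the exact endpoint layout depends on your installation (see the KNIME Business Hub documentation).

```python
import requests

# Hypothetical values: replace with your Hub URL, deployment ID, and token.
HUB_URL = "https://api.example-hub.knime.com"
DEPLOYMENT_ID = "my-scoring-service"
TOKEN = "application-password-or-token"

# Send input rows to the deployed workflow and read back its predictions.
response = requests.post(
    f"{HUB_URL}/deployments/{DEPLOYMENT_ID}/execute",  # placeholder path
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"rows": [{"age": 42, "income": 58000}]},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```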

Integrate Deployment and Automate Production

The AWS whitepaper discussed continuous delivery for machine learning (MLOps); here we would like to point to the new standard for operationalizing the entire data science process and how this also bears on automating the deployment of ML models to the production environment.

The KNIME Data Science Life Cycle represents both data science creation (e.g., CRISP-DM) and data science production. At the center of this life cycle is the production process.

Let’s now translate this into our topic of putting an ML model into production. Traditionally, one team of data scientists is responsible for ‘data science creation’, i.e., building the ML model. Another team, the production ops team, is responsible for ‘data science production’, i.e., subsequently writing the code that gets this model into production.

The handover from one team to another, from data science creation to data science production, is not without its challenges. Whenever the model needs to be updated and retrained, the production ops team will need to rewrite their code to get it into production again.

In our Data Science Life Cycle, Integrated Deployment closes the gap this handover creates. Now the parts of the process that will be needed for production can be captured from the same workflow the team of data scientists used for data integration, transformation, model training, and optimization. This means that deployment to the production environment takes place automatically.
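
KNIME captures these production parts visually, with its Integrated Deployment nodes, rather than in code. As a rough analogy only, the scikit-learn sketch below illustrates the underlying idea: the exact preprocessing-plus-model object fitted during creation is saved and reused in production, so no second team rewrites that logic.

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# The same preprocessing and model used during data science creation...
pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=200))
pipeline.fit(X, y)

# ...is captured as-is for production: nothing is reimplemented by hand.
joblib.dump(pipeline, "production_pipeline.joblib")

# In the production environment, the captured artifact is simply loaded.
deployed = joblib.load("production_pipeline.joblib")
print(deployed.predict(X[:3]))
```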

In terms of our ML model, changes that need to be made to the model over time can be performed and the optimized model automatically deployed to production: no manual handover, no errors, and the continued viability of our ML model is ensured.