The article below is based on Jim Falgout’s contribution to the AWS whitepaper MLOps: Continuous Delivery for Machine Learning on AWS.
The concept of continuous delivery in the software world—writing software in such a way that changes can be merged and deployed continuously and automatically—changed how software is developed. Now, the data science world is applying this concept to machine learning, which comes with new challenges.
Here we look at how an end-to-end software environment facilitates collaboration among a multifaceted team during model development and training, continuous monitoring of the model in production, triggering of retraining, and automatic deployment to the production environment.
Handle multiple technologies and enable seamless integration
One of the challenges of continuous delivery for machine learning is that each person in the process has a different task, yet everyone needs to be able to work together seamlessly during all phases of the project.
Teams therefore need to be able to work in a single environment that can handle a wide and complex variety of technologies, all while empowering a wide range of skill levels. Stakeholders can mix and match the technologies that make the most sense for the problem at hand, or use the technologies they have experience working with.
KNIME software provides a single, integrated environment that covers all data science operations. As new features and functionality are developed, these will always work seamlessly within the existing paradigm.
The open source KNIME Analytics Platform enables data engineers and data scientists to get started with their data analysis, data integration, transformation, model training, and optimization. It is a visual development environment (low-to-no-code) that integrates multiple technologies and accommodates a wide range of skill levels.
KNIME Server complements KNIME Analytics Platform to provide a collaborative environment, automation via job scheduling, and a REST API for executing workflows. Production workflows can be automatically deployed to KNIME Server and, again automatically, exposed to other applications as services. Deployed workflows are versioned, and changes can be tracked over time.
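To give a feel for what triggering a deployed workflow via a REST API looks like, here is a minimal sketch in Python using only the standard library. The server URL, repository path, credentials, and the exact REST route are illustrative assumptions for this sketch, not documentation of the KNIME Server API; consult the server's own API reference for the real routes.

```python
import base64
import json
import urllib.request

def execution_request(base_url, workflow_path, user, password):
    """Build a POST request that would submit a deployed workflow for execution.

    The route below is an illustrative assumption, not a documented endpoint.
    """
    url = f"{base_url}/rest/v4/repository{workflow_path}:execution"
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=json.dumps({}).encode(),  # optional workflow input parameters
        headers={
            "Authorization": f"Basic {credentials}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Hypothetical server and workflow path, for illustration only.
req = execution_request(
    "https://knime.example.com/knime", "/Prod/churn-prediction", "analyst", "secret"
)
# urllib.request.urlopen(req) would submit the job; omitted here.
```

Because the workflow is exposed as a plain HTTP endpoint, any application that can issue a POST request can drive it, which is what makes scheduled and event-driven automation straightforward.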
KNIME Hub enables greater collaboration via spaces where work is organized, saved, and shared via a single easy-to-reach location.
Integrate deployment and automate production
The AWS whitepaper discusses continuous delivery for MLOps; here we would like to point to the new standard for operationalizing the entire data science process and how this also bears on automating the deployment of ML models to the production environment.
The KNIME Data Science Life Cycle represents both data science creation (e.g., CRISP-DM) and data science production. At the center of this life cycle is the production process.
Let’s now translate this into our topic of putting an ML model into production. Traditionally, one team of data scientists is responsible for ‘data science creation’, i.e., building the ML model. Another team, the production ops team, is responsible for ‘data science production’, i.e., subsequently writing the code that gets this model into production.
The handover from one team to another, from data science creation to data science production, is not without its challenges. Whenever the model needs to be updated and retrained, the production ops team will need to rewrite their code to get it into production again.
In our Data Science Life Cycle, Integrated Deployment closes the gap this handover creates. Now the parts of the process that will be needed for production can be captured from the same workflow the team of data scientists used for data integration, transformation, model training and optimization. This means that deployment to the production environment takes place automatically.
In terms of our ML model, changes that need to be made to the model over time can be performed and the optimized model automatically deployed to production: No manual handover, no errors, and the viability of our ML model is ensured.
Deploy models to production as a callable service with KNIME Edge
KNIME Edge, an upcoming extension to KNIME Server, will support the continuous delivery of machine learning models even further. The purpose of this new extension is to deploy models to production as a callable service, at scale and with minimal latency. Production workflows can be deployed by KNIME Server to so-called KNIME Edge Instances, which expose each inference workflow as a service with an API endpoint that can be invoked programmatically. These endpoints can be integrated into a customer’s web application or offered as a service.
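From a consuming application's point of view, such an inference endpoint is just an HTTP service accepting JSON. The sketch below shows what a caller might look like; the endpoint URL, the `rows` payload schema, and the feature names are all hypothetical assumptions for illustration, not the actual KNIME Edge API.

```python
import json
import urllib.request

# Hypothetical KNIME Edge inference endpoint; URL and payload schema
# are illustrative assumptions, not a documented API.
ENDPOINT = "https://edge.example.com/inference/churn-model"

def inference_request(rows):
    """Package feature rows as a JSON POST request for the inference service."""
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps({"rows": rows}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = inference_request([{"tenure": 12, "monthly_charges": 70.5}])
# urllib.request.urlopen(req) would return the model's predictions; omitted here.
```

Keeping the service behind a stable endpoint like this is what lets the model itself be retrained and redeployed without any change to the calling applications.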
Your Journey Ahead with Continuous Delivery for Machine Learning
The Model Process Factory concept supports the continuous delivery of models, monitoring model performance, and automatically building new challenger models as required. An orchestration workflow directs the entire process and provides the flexibility for you to define the process to fit your specific needs. Read more in the article The KNIME Model Process Factory.
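The decision logic such an orchestration workflow encodes can be sketched in a few lines. This is a simplified champion/challenger illustration under assumed names and thresholds, not KNIME code: monitor the champion's accuracy, and when it degrades, either promote a better-performing challenger or keep retraining one.

```python
def orchestrate(champion_acc, challenger_acc, degradation_threshold=0.75):
    """Decide the next action for a monitored production model.

    Illustrative sketch: threshold and action names are assumptions.
    """
    if champion_acc >= degradation_threshold:
        return "keep champion"       # model still performs acceptably
    if challenger_acc > champion_acc:
        return "promote challenger"  # redeploy with the better model
    return "retrain challenger"      # build a new challenger and re-evaluate

# Example decisions for different monitoring readings:
decisions = [orchestrate(0.82, 0.79), orchestrate(0.61, 0.74), orchestrate(0.61, 0.55)]
```

In the Model Process Factory, each of these branches corresponds to a workflow (scoring, retraining, redeployment) that the orchestration workflow invokes automatically.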