
Extend the Reach of Data Science with KNIME

How You Can Equip Your Team for Flexible Deployment

April 19, 2021
ModelOps & deployment

To equip your data science team for flexible deployment, you need a tool that can handle multiple technologies, specialty hardware, and fluctuating demand while maintaining continuous, high-quality delivery.

In this article, we address deployment use cases that teams and organizations have asked us about.

Multiple Data Sources: Run on premise or in the cloud

In the field of sales analytics, the challenge is to bring all the different data sources together, governed in a reliable manner, with easy-to-use reporting across the organization. When a company offers data as a service, security requirements mean that some data sources must be accessed from within the company's own domain.

Hybrid deployment with KNIME Server mixes enterprise data center and cloud deployments. Specific KNIME Executors can run on premise, for example, to comply with the security regulations those data sources require, while other data is handled by Executors in the cloud.
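As a conceptual illustration only (this is not KNIME's actual API, and all names below are hypothetical), hybrid routing can be thought of as dispatching each job to an executor location based on the compliance requirements of its data source:

```python
# Hypothetical sketch of hybrid job routing -- not KNIME Server's real API.
# Jobs that touch regulated data sources run on on-premise Executors;
# everything else is dispatched to cloud Executors.

REGULATED_SOURCES = {"hr_database", "payment_records"}  # assumption: example names


def route_job(data_source: str) -> str:
    """Return the executor location for a job, based on its data source."""
    if data_source in REGULATED_SOURCES:
        return "on-premise"  # stays inside the company's own domain
    return "cloud"           # elastic capacity for unregulated data


# Example: place a batch of jobs across the hybrid environment.
jobs = ["web_clickstream", "payment_records", "crm_export"]
placements = {job: route_job(job) for job in jobs}
```

The point of the sketch is the separation of concerns: the routing rule encodes the security policy once, and individual workflows do not need to know where they will execute.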

If required, you can assign multiple Executors to multiple purposes under a pay-as-you-go (PAYG) license, meaning no big upfront investment. If one business unit needs to integrate a new data source, for example to try out a new algorithm and perform some heavy data crunching, it can switch to a different Executor without affecting the existing environment. Scaling Executors to your compute needs decreases time to market.

Efficient Distribution: Expand or partition resources

Enterprises need to distribute resources efficiently and provide different execution environments to different departments within a company. The agility to partition resources increases effective capacity, and KNIME Executor Groups handle this flexibly.

Your IT unit can create specific Executor Groups to serve specific users and groups, partitioning compute resources and segregating execution resources logically, e.g. by department.
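A minimal sketch of this partitioning idea (hypothetical names, not KNIME Server's configuration syntax): each group reserves a set of executors for the teams assigned to it, and a job is routed only to a group that serves its team.

```python
# Hypothetical model of Executor Group partitioning -- names are
# illustrative, not KNIME Server's real configuration syntax.
from dataclasses import dataclass, field


@dataclass
class ExecutorGroup:
    name: str
    executors: list                              # compute reserved for this group
    allowed_teams: set = field(default_factory=set)

    def can_execute(self, team: str) -> bool:
        """Only teams assigned to this group may use its executors."""
        return team in self.allowed_teams


# Example partitioning, segregated logically by department.
groups = [
    ExecutorGroup("marketing-group", ["exec-1", "exec-2"], {"marketing"}),
    ExecutorGroup("finance-group", ["exec-3"], {"finance", "controlling"}),
]


def find_group(team: str):
    """Route a team's job to the Executor Group that serves it, if any."""
    return next((g for g in groups if g.can_execute(team)), None)
```

Because each group owns its own executor list, heavy load in one department's group cannot starve another department's compute, which is the segregation the article describes.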