Read time: 11 min

Build an AI Agent in 4 Steps

May 15, 2025
ML 201 & AI

Since the introduction of AI in data science, we find ourselves dealing with more and more AI within our workflows. From prompting LLMs, possibly with the help of a RAG procedure, to developing AI agents, we can build a wide range of AI-powered applications. Based on their AI content and their level of autonomy, we can even establish an application hierarchy.

In this article we want to show how you can build agentic systems in KNIME by following a four-step process.

The agentic landscape

The article “Agentic AI and KNIME” explores the agentic landscape by listing four types of building blocks: Tools, Intelligent Tools, AI workflows, and Agents. Here’s a recap:

  • Tools perform specific tasks. If you are a KNIME user, any workflow you have developed so far could become a tool. Notice that a tool must be callable from other applications, as a web service or as a web application.
  • Intelligent tools are like tools, but include some AI-based tasks, like summarization, translation, or text generation. They are AI-powered but not yet capable of much autonomy.
  • AI workflows combine multiple tools in sequence to accomplish more complex tasks. Suppose our solution includes tasks A, B, C, and D. These tasks could be implemented as tools called by a manually assembled top-level application, such as a KNIME workflow. That top-level application is an AI workflow. An AI workflow can, in turn, become a tool itself.
  • AI agents go one step further and dynamically decide themselves which tools to use. While in AI workflows, the sequence of tools is manually assembled by an operator, in AI agents the tool orchestration is assembled by an AI-powered application, based on the tool descriptions. This places agents at the top of the hierarchy in the agentic landscape. Again, agents can become tools themselves to serve other agentic applications or services.

Turning a KNIME workflow into a tool is easy. We have done it multiple times when we were deploying web services. But how hard is it to create a truly agentic system?

You would be surprised at how simple this is. KNIME lets you construct AI agents in a modular way thanks to its visual workflows. KNIME's visual workflows replace traditional code with a drag and drop interface. Each step in the process is represented by a so-called node, making it easier to track data flow, identify issues, and explain logic clearly. 

Visual workflows also make it easier to assemble agentic applications and services. The process can be summarized in four steps:

  • Step 1. Build the KNIME workflows to solve your task
  • Step 2. Isolate workflow segments and transform them into tools
  • Step 3. Call the tools with an AI workflow
  • Step 4. Build the AI Agent by introducing an AI engine to orchestrate the tools

Shall we try it?

Step 1. Build the KNIME workflow 

Let’s take a classic customer retention task: isolate dissatisfied customers for follow-up marketing actions. Specifically, we want to isolate all customers with negative reviews and send them a free gift to appease them. The solution could be modularized as follows.

Task A – Collect Data. Collect the customers’ data from the CRM system and the customers’ reviews from various social media channels. This is a classic ETL task: read the data from different data sources, clean them, and load them for further processing. These steps correspond to ETL workflow segments that could become simple tools, since no AI involvement is required.

Task B – Extract Sentiment. Perform a sentiment analysis on the customers’ review texts. With the help of AI, this has become a simple task (see the prompt sketch after the task list). This could become an intelligent tool.

Task C – Isolate Negative Customers. Extract the customers with negative sentiment. This could easily be performed with a Row Filter node. It could also be merged into task B with a more effective prompt.

Task D – Select a custom gift. From the results of a classic recommendation engine, select the most appropriate gift to send to each negative customer.

Task E – Detect Language. Before preparing the apology text, we must detect the language used by the customer in the feedback. This is another task that has become easier to implement with the help of AI.

Task F – Send gift package. Send the gift package together with a nice apology message. This could be split into two subtasks:

  • Task F1: Generate the apology email text in the customer’s language (detected in Task E), possibly using AI
  • Task F2: Preview and send the email together with the gift package
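
Tasks B and E are the AI-based steps that turn plain tools into intelligent tools. To give an idea of the kind of prompts such tools rely on, here is a minimal sketch in Python. It is only an illustration under assumed choices: the OpenAI client and the model name are not part of the article's workflows, where the prompting is done with KNIME's AI nodes.

```python
# Illustrative sketch only: in the article, the prompting is done with KNIME AI nodes.
# The OpenAI client and the model name are assumptions made for this example.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def extract_sentiment(review: str) -> str:
    """Task B: classify a review as 'positive', 'neutral', or 'negative'."""
    prompt = (
        "Classify the sentiment of the following customer review. "
        "Answer with exactly one word: positive, neutral, or negative.\n\n"
        f"Review: {review}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().lower()


def detect_language(review: str) -> str:
    """Task E: detect the language of the review (ISO 639-1 code)."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Return only the ISO 639-1 code of the language of this text:\n{review}",
        }],
    )
    return resp.choices[0].message.content.strip()
```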

Note. All workflows and tools described in this article are available for free download and usage in the space “4 Steps to AI Agents” on the KNIME Community Hub. Please remember that AI keys are not provided with the workflows: you will need to provide your own keys.

After defining the logic and designing the tasks for the solution, let’s proceed with building the corresponding KNIME workflow, including segments for all the tasks mentioned above.

Tip. Take the time here to define logical blocks in your application. It will prove useful when building the tools.

"Customer Retention" workflow
Figure 1. Step 1. Build the Workflow. Here, the "Customer Retention" workflow, with its subtasks A, B, C, D, E, F1, and F2.

Step 2. From KNIME Workflow to Tool

Now that we have the KNIME workflow, the next step is to isolate the workflow segments and transform them into tools. To make a tool available to other services, we need to deploy the workflow segment as a service or a data app.

First, we isolate the workflow segment into its own workflow. We can do that manually, or we can use Integrated Deployment to avoid copy-and-paste mistakes. At this point, we could either store it as a workflow in a folder in the local space or, more professionally, deploy it as a web service in a centralized repository on a KNIME Hub.

Deploying a workflow on a Hub takes two stages: uploading it to the selected Hub and then deploying it as a service.

First, upload the workflow to the KNIME Hub:

  • Right-click the workflow in the “Space Explorer” tab on the left
  • Select “Upload”
  • Select the Hub and connect with your credentials
  • Select the space and folder on the Hub and press the “Choose” button

Then, you need to deploy it as a service.

  • First, version the workflow
  • Then, click the “Deploy” button and select “Service”.

The workflow is now saved as a web service, accessible via the REST protocol, in the selected location on the selected Hub.

To create a complete service, i.e., a service that can also accept input data and produce output data, it is sufficient to substitute the input and output nodes of the workflow with a Container Input and a Container Output node, to import and export data respectively. There are many flavours of Container nodes, depending mainly on the format of the data to pass or export.

Note. The Container Input node has a field “Description” in its configuration window. The text in this field provides the description for the service. Thus, before deploying the service, make sure to add a meaningful description here.
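
Once deployed, the tool is reachable like any other REST service. As a purely illustrative sketch, an external client could call it along these lines; the endpoint URL, the authentication method, and the JSON schema are placeholders that depend on your Hub deployment and on how the Container nodes are configured.

```python
# Hypothetical example of calling a deployed KNIME workflow as a REST service.
# URL, authentication, and JSON schema depend on your Hub deployment and on the
# Container Input/Output nodes configured in the workflow.
import requests

SERVICE_URL = "https://<your-hub>/<path-to-deployment>"  # placeholder endpoint
payload = {"reviews": [{"customer_id": 42, "text": "The product broke after two days."}]}

response = requests.post(
    SERVICE_URL,
    json=payload,                               # consumed by the Container Input node
    auth=("<user>", "<application-password>"),  # or a bearer token, depending on the Hub setup
    timeout=120,
)
response.raise_for_status()
result = response.json()                        # produced by the Container Output node
print(result)
```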

Following this procedure, we built the services for all tasks in our application. 

If you are entering a pre-existing agentic system, it is possible that some tools are already available in the tool repository. To save time and work, we should adapt our application to reuse the available tools as much as possible.

Tip. If there is an agentic strategy in place at your company, sooner or later you will have a large repository of tools. In this case, it is highly likely that a similar tool has already been deployed. Always check first! 

Figure 2. Step 2. Creating a tool. Notice the Container nodes to import and export data

Step 3. Call the tools with an AI workflow 

We have all the tools we need. The next step is to call them within an AI workflow. Tools can be called via a Call Workflow node.  There are as many flavours of Call Workflow nodes as there are flavours of Container nodes. 

All Call Workflow nodes can call either a local workflow (in the LOCAL space) or a remote one (on a KNIME Hub). Each Call Workflow node works with a specific pair of Container Input and Output nodes in the called workflow. The Call Workflow (Row Based) node is the most flexible of the calling nodes: it also works with external standard REST services, expecting and producing data in JSON format.

For more details, see the article How to create workflows and REST services with KNIME.

Figure 3. The three types of "Call Workflow" nodes.

We can now assemble the AI workflow by calling these tools as needed: either we build a workflow with just tool calls (Fig. 4), or we mix and match tool calls with other KNIME nodes or components (Fig. 5). Both are AI workflows and both are created by manual tool orchestration.

Figure 4. Step 3. Build the AI workflow. This is a fully tool-based AI workflow, with one tool for each task.
Figure 5. Step 3. Build the AI workflow. This is a mix and match AI workflow, with tools for some tasks and workflow segments for other tasks.

One of the nice features of KNIME is that it allows you to mix and match. You can build the first part of your workflow by using nodes in a classic way, then call a service residing on a KNIME Hub, then a workflow from your local space, then end with a data app component for nicer summary visualization. Or you can only call tools. It is up to you.
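
To make manual tool orchestration concrete, here is roughly what the fully tool-based AI workflow of Figure 4 boils down to, sketched in Python. The call_tool helper and the tool names are hypothetical stand-ins for the Call Workflow nodes (or REST calls, as sketched earlier) in the actual workflow.

```python
# Conceptual sketch of a manually orchestrated AI workflow (Fig. 4):
# the developer fixes the sequence of tool calls; no LLM decides the order.
# call_tool is a hypothetical helper, e.g. a Call Workflow node or a REST call
# to a deployed tool; the tool names are illustrative.

def run_customer_retention(call_tool):
    reviews = call_tool("CollectData")                                # Task A: ETL
    reviews = call_tool("SentimentAnalysis", reviews)                 # Task B: extract sentiment
    negative = [r for r in reviews if r["sentiment"] == "negative"]   # Task C: isolate negative customers
    for customer in negative:
        gift = call_tool("SelectGift", customer)                      # Task D: select a custom gift
        language = call_tool("DetectLanguage", customer)              # Task E: detect language
        email = call_tool("GenerateApologyEmail",                     # Task F1: generate apology text
                          {**customer, "language": language})
        call_tool("PreviewAndSend",                                   # Task F2: preview and send
                  {"customer": customer, "gift": gift, "email": email})
```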

Tip: To help you decide how much of the original workflow should become tools:

The decision for a single workflow segment is often influenced by its degree of reusability. If a workflow segment cannot be used generally – e.g., it contains information that should not be shared, or it is too niche, say it accesses an obscure data source that nobody else uses – then maybe it should not be transformed into a reusable tool and should rather remain a segment in your workflow.

For example: usually a tool consists of an isolated logic block within a workflow. At the beginning of this project we identified six different logic blocks (A, B, C, D, E, and F). However, after further inspection, we decided to split task F (Send gift package) into two subtasks, F1 (Generate apology email text) and F2 (Preview and send email).

While the two subtasks are deeply connected, we thought that the generation of the email text could be useful to other applications, independently of the means used to transmit it.

Step 4. Build an AI agent

In the previous step we orchestrated the tools manually. What if, instead of building the sequence ourselves, we let AI decide? That is, what about building a true AI agent?

We have a set of tools in the “Tools” repository, and all tools have been assigned an exhaustive description of the task they implement. Thus, we could now extract all tool descriptions, feed them into an LLM, and ask the LLM to decide which tool is the most appropriate to perform a given task.

For example, “You are a support agent responsible for managing user feedback. Your task is to analyze each message and decide which internal tool(s) to call next based on the available options. If the feedback is negative, respond with a custom gift and a follow-up email in the user's language. Otherwise, end the workflow. Use the tool descriptions provided to choose the most suitable ones. If no tool is appropriate or available, respond with a polite apology. Avoid including any unnecessary details not directly relevant to solving the user's issue.”

Based on all tool descriptions, the LLM can identify the most appropriate tool for the task. This works surprisingly well, of course, assuming that all descriptions are correct, exhaustive and meaningful.  

Figure 6. Step 4. Build the AI Agent. Let AI call your next tool.

To do that, we extract the descriptions from all tools in the repository using the Workflow Summary Extractor node hidden inside the “Get tools definitions” component (Fig. 6). Then, we prompt an LLM in the Chat Model Prompter node to select the best tool for sentiment analysis, and the LLM correctly returns “SentimentAnalysis” in the “tool name” column (Fig. 7).

Figure 7. The Chat Model Prompter node correctly identifies the tool to perform the sentiment analysis
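
The same idea can be sketched outside KNIME with any tool-calling LLM API. The snippet below is only an illustration of the mechanism that the Chat Model Prompter node wraps for you; the client, the model name, and the shortened tool descriptions are assumptions made for this example.

```python
# Illustrative sketch of LLM-based tool selection (not KNIME-internal code).
# Each deployed KNIME tool is exposed to the model as a "function" whose
# description comes from the tool's Container Input description field.
from openai import OpenAI

client = OpenAI()

tools = [
    {"type": "function", "function": {
        "name": "SentimentAnalysis",
        "description": "Classify the sentiment of a customer review as positive, neutral, or negative.",
        "parameters": {"type": "object", "properties": {"text": {"type": "string"}}, "required": ["text"]},
    }},
    {"type": "function", "function": {
        "name": "DetectLanguage",
        "description": "Detect the language of a customer review.",
        "parameters": {"type": "object", "properties": {"text": {"type": "string"}}, "required": ["text"]},
    }},
]

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # example model
    messages=[{"role": "user",
               "content": "Which tool should analyze the sentiment of this review: 'Terrible service!'"}],
    tools=tools,
    tool_choice="auto",
)

tool_call = resp.choices[0].message.tool_calls[0]
print(tool_call.function.name)       # e.g. "SentimentAnalysis"
print(tool_call.function.arguments)  # JSON arguments proposed by the model
```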

Note. In the workflow we use a Chat Model Prompter node rather than an LLM Prompter node, because the Chat Model Prompter node includes settings for tool calling, which simplifies the whole task of creating an agent.

However, the Chat Model Prompter node can only trigger one tool at a time. This works for identifying the next tool in the pipeline but does not really automate the sequence of tool execution. 

To create a truly agentic application, we insert our chat model in a recursive loop. In this loop, we feed the latest selected tool and its latest results back into the Chat Model Prompter node and let the model pick the next tool in the pipeline (Fig. 8). This generates a sequence of human vs. AI exchanges, where the human asks for the next step and the AI proposes the next tool (Fig. 9).

Since not everybody trusts AI to always generate meaningful results, a human-in-the-loop step has been added in tool F2 to approve or reject the final text to be sent to the negative customers.

Figure 8. Step 4. Build the AI Agent. Introducing a recursive loop to generate the full sequence of operations via AI orchestration 
Figure 9. The conversational exchange between human and AI. You can see the tool proposed by AI at each step of the conversation.
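
Conceptually, the recursive loop of Figure 8 implements the classic agent loop sketched below. The select_next_tool, call_tool, and ask_human_approval functions are hypothetical stand-ins for the Chat Model Prompter node, the tool calls, and the human-in-the-loop approval step in tool F2.

```python
# Conceptual sketch of the agent loop built in Step 4 (Fig. 8).
# select_next_tool, call_tool, and ask_human_approval are hypothetical
# stand-ins for the Chat Model Prompter node, the Call Workflow nodes,
# and the human-in-the-loop approval step in tool F2.

def run_agent(user_request, tools, select_next_tool, call_tool, ask_human_approval, max_steps=10):
    conversation = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):
        # The LLM reads the conversation plus the tool descriptions and proposes
        # the next tool to call, or None if it considers the task finished.
        tool_name, tool_args = select_next_tool(conversation, tools)
        if tool_name is None:
            break

        # Human in the loop: the final email (tool F2) must be approved before sending.
        if tool_name == "PreviewAndSend" and not ask_human_approval(tool_args):
            conversation.append({"role": "user", "content": "The email was rejected; please revise it."})
            continue

        result = call_tool(tool_name, tool_args)
        # Feed the selected tool and its result back, so the model can pick the next step.
        conversation.append({"role": "assistant", "content": f"Called {tool_name}."})
        conversation.append({"role": "user",
                             "content": f"Result of {tool_name}: {result}. What is the next step?"})
    return conversation
```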

Do we always need AI agents?

Building an AI agent is now easy – but whether it’s more convenient than using an AI workflow depends on the particular case.

Tip: To help you decide whether it makes sense to build a full AI agent:

  • If the application is required to run just one time on demand, there’s no need to go through all four steps of building a full AI Agent. In such a case, it’s more efficient to simply recycle the tools that are available within a mix & match AI workflow.
  • If the application is part of a more structured environment, within a strategy to build an agentic landscape – with shared workflows and recyclable applications – then it makes sense to build a full AI agent.

Whatever you decide to build – a one-time usage workflow, a set of more or less intelligent tools, a mix and match AI workflow, or an AI agent – you can easily do that with KNIME Analytics Platform by following the four steps and stopping wherever needed.
