
Agentify REST APIs with KNIME's New Agentic Framework: Introducing Agent Hubert

August 18, 2025
ML 201 & AI

Working with complex REST APIs is a routine part of many data workflows, but it’s rarely straightforward. You need to know which of the hundreds of endpoints to call, understand the right parameters and payloads to use, handle authentication, and process the often messy JSON responses into something useful.

With KNIME’s new agentic framework introduced in KNIME Analytics Platform 5.5 and expanded in 5.6*, we’ve begun exploring how such agents can help manage our own growing collection of REST endpoints on KNIME Hub. The result: Agent Hubert, the Hub Unified Business Entity & Response Tool.

Hubert is an experimental AI agent for KNIME Business Hub that lets you talk to the Hub’s REST APIs in plain English. You can type “List my teams” or “Show my jobs in a pie chart,” and Hubert will figure out the correct API calls, prepare them, run them, and display the results while keeping your data safe and asking for confirmation before making any changes. 

While Hubert is designed for KNIME Hub, the idea and implementation can work for any REST API, opening the door to a new, more natural way of working with your endpoints.

*KNIME Analytics Platform 5.6 brings improvements to the agentic framework, including support for displaying views directly in the Agent Chat View node. It's also the first version released as part of our shorter development cycle. You can find the download link and more details in this KNIME Forum post.

Core components

The toolchain

At its core, like other agents, Agent Hubert consists of two main components:

  • The agent itself that understands your request and decides what to do by orchestrating tools
  • A collection of reusable, modular tools that carry out specific actions like preparing an API call, processing a table, or creating a chart

Find a general overview of that concept here.

KNIME Hub is controlled via REST API endpoints. These are the connection points you use to perform actions such as listing workflows, creating spaces, or managing teams. These endpoints are called in the background when you use KNIME Hub in the browser. They are documented using the OpenAPI format, a machine-readable description of a REST API (you can still view the specs in a human-readable format at https://api.<your hub url>/api-doc). Think of it as the blueprint of the API, documenting everything from GET accounts/teams to DELETE repository/{id} in a consistent, structured format.
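To make the "blueprint" idea concrete, here is a minimal sketch of reading endpoints out of an OpenAPI document. The spec excerpt and the `list_endpoints` helper are illustrative assumptions, not the real KNIME Hub specification, which is far larger:

```python
import json

# A tiny, simplified excerpt of what an OpenAPI document looks like
# (illustrative only -- the real KNIME Hub spec describes 250+ endpoints).
openapi_doc = json.loads("""
{
  "paths": {
    "/accounts/teams": {
      "get": {"summary": "List teams"}
    },
    "/repository/{id}": {
      "delete": {"summary": "Delete a repository item"}
    }
  }
}
""")

def list_endpoints(spec):
    """Flatten an OpenAPI 'paths' object into 'METHOD /path' strings."""
    endpoints = []
    for path, methods in spec["paths"].items():
        for method in methods:
            endpoints.append(f"{method.upper()} {path}")
    return sorted(endpoints)

print(list_endpoints(openapi_doc))
# ['DELETE /repository/{id}', 'GET /accounts/teams']
```

Because the format is machine-readable, an agent can enumerate endpoints like this instead of relying on hard-coded knowledge of the API.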

KNIME Hub endpoints are grouped into services, each of which provides a different set of endpoints centered around a common topic. The agent gets information about the Hub’s services via system messages and the endpoints via a data table. With a tool, the agent can request details of a service, retrieving that service’s endpoint details.

To respond to a prompt related to KNIME Business Hub, the agent follows a core loop:

  1. Pick the right service(s) to use
  2. Pick the right endpoint(s) of the services
  3. Prepare the endpoint (since, in general, the endpoints are parametrized via URL parameters and/or a request body payload)
  4. Process the resulting data or response and either use it in the next step, or display it to the user.
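The four steps above can be sketched as a toy loop. All helper functions here are hypothetical stand-ins for the real KNIME tools, which the agent orchestrates inside a workflow rather than in Python:

```python
# Toy sketch of the four-step loop. Every function is a hypothetical
# stand-in: in the real agent, an LLM makes these choices via tools.

SERVICES = {"accounts": ["GET /accounts/accounts/identity"]}

def pick_service(prompt):
    # Step 1: in reality the LLM chooses based on system messages.
    return "accounts"

def pick_endpoint(service, prompt):
    # Step 2: choose among the service's endpoints (from a data table).
    return SERVICES[service][0]

def prepare(endpoint, params=None):
    # Step 3: fill in URL parameters / request body payload.
    return {"endpoint": endpoint, "params": params or {}}

def execute_and_process(request):
    # Step 4: call the API and turn the JSON response into a table.
    return [{"user": "hubert"}]  # stand-in for the real response

service = pick_service("Who am I?")
endpoint = pick_endpoint(service, "Who am I?")
table = execute_and_process(prepare(endpoint))
print(table)
```

The point is that the same small set of generic steps covers any endpoint, which is what keeps the tool count low.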

So instead of having 250+ tools, Agent Hubert has access to around ten tools that help it coordinate the endpoints and call them dynamically.

Example Prompt: “Who am I on this Hub?”

Here’s a typical chain of actions of Agent Hubert when you ask “Who am I on this Hub?”:

  1. Creates an empty table with one row to prepare the request with
  2. Tries to prepare the endpoint GET /accounts/identity, which does not exist and is thus rejected.
  3. Receives a list of endpoints for the accounts service
  4. Identifies and prepares the correct endpoint GET /accounts/accounts/identity
  5. Performs the GET Request on that endpoint
  6. Processes the resulting JSON into a table
  7. Displays the result in a Table View

As you can see, this is not the most straightforward path to the answer: at first, the agent tries an endpoint it made up, which doesn’t exist. Our way to deal with these kinds of hallucinations is to allow only the preparation of endpoints that definitely exist and to return an error message like “This endpoint does not exist. Please get a list of endpoints from the service first”. That works reliably and nudges the agent back onto the right path until it succeeds.
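The hallucination guard can be sketched as a simple membership check. The endpoint set and message wording are taken from the example above; the function itself is an illustrative assumption:

```python
# Sketch of the hallucination guard: only endpoints that actually exist
# may be prepared; anything else returns a corrective message to the LLM.

KNOWN_ENDPOINTS = {
    "GET /accounts/accounts/identity",
    "GET /accounts/teams",
}

def prepare_endpoint(endpoint):
    if endpoint not in KNOWN_ENDPOINTS:
        # The error text is fed back to the agent, steering it to
        # first request the service's endpoint list.
        return ("This endpoint does not exist. "
                "Please get a list of endpoints from the service first.")
    return f"prepared: {endpoint}"

print(prepare_endpoint("GET /accounts/identity"))           # made up -> rejected
print(prepare_endpoint("GET /accounts/accounts/identity"))  # accepted
```

A rejected preparation costs one extra round trip, but it guarantees the agent can never execute a request against a path that does not exist.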

Viewing data

With KNIME Analytics Platform 5.6, tools can produce Views (using blue nodes), which are then displayed directly in the conversation of the Agent Chat View node. There, they are visible to the user, but not the LLM, keeping the data separate.

In practice, this is as simple as putting a view node in your tool. Figure 1 shows this with a Table View in the simplest way, and Figure 2 shows how this can look for the prompt “Who am I on this Hub?”.

Figure 1: A simple tool that shows the data to the user in a Table View, but not the LLM. The agent can use the “Column Filter Configuration” to filter the data and only show relevant columns.
Figure 2: Response to the prompt “Who am I on this Hub?”. The Agent does not have direct access to the data, but can show it in a Table View and knows what to expect in that table.

This is also a great tool for debugging, to double-check what the agent is working with. The Agent Chat View uses a data repository to store the input and output data of the tools. It’s a dynamic set of tables, and you can make it visible, for example:

  • Ask: “Show me your data repository”
  • The agent lists tables with IDs 
  • Ask: “Show me the table with ID ‘0’ in a table view”. 
  • The agent shows you the data and lets you verify that it is what you expect.
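The data repository can be modeled as a set of tables keyed by ID. The dictionary and helper names below are purely illustrative; in KNIME, the repository lives inside the Agent Chat View:

```python
# Toy model of the data repository: a dynamic set of tables keyed by ID.
# The tables are shown to the user on request, never passed to the LLM.

repository = {
    "0": [{"name": "hubert", "team": "dev"}],
    "1": [{"job": "report", "state": "EXECUTED"}],
}

def list_table_ids(repo):
    """What the agent answers for 'Show me your data repository'."""
    return sorted(repo)

def show_table(repo, table_id):
    """What a Table View would display for a given table ID."""
    return repo[table_id]

print(list_table_ids(repository))   # ['0', '1']
print(show_table(repository, "0"))
```

The key property is that the LLM only ever handles the table IDs and metadata, while the table contents stay in the data layer.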

Again, the data is only visible to you, but not to the LLM, a concept we make use of in the next section.

Processing data

How do we make sure that data is visualized or processed, without passing the actual data to the LLM?

KNIME’s scripting nodes include an “Ask K-AI” button, where you can describe a task in natural language, and an LLM will produce code the node can execute. Here, we take this idea one step further: the Agent-LLM asks K-AI to produce code for the scripting node. 

Let’s look at an example of the Visualization Tool in Figure 3.

Figure 3: The top level view of the Visualization tool

This clearly shows the separation of the Communication and Data Layers: only the table specification of the input, i.e., the names and data types of the input columns, is exposed to K-AI (metadata transfer), and from this the code for the Generic ECharts View is generated.

The magic happens in the first half of the “Visualization Configuration” metanode, as shown in Figure 4.

Figure 4: The crucial part of the “Visualization Tool” where K-AI is prompted to generate the configuration of the ECharts View.

The parameters the tool is called with are consolidated with the input table specs and formatted into a request that K-AI understands—the same data that is sent when you hit the “Send” button in the “Ask K-AI” coding assistant mode of the ECharts Node. 

The Hub Authenticator set to “Current User” handles the authorization, and a POST Request Node sends the request to the code-generation K-AI endpoint on KNIME Hub. There, the heavy lifting happens and the desired code is produced — with all the nitty-gritty details of handling Table Specs and ECharts details abstracted away from Agent Hubert.

Error handling is built in: any errors during the K-AI request are sent back to the agent, which can then retry with a revised prompt or different input table. This is less relevant for visualization, but the processing tools benefit more from it, since there the potential for K-AI to produce code that fails to execute is greater.
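The retry idea can be sketched with a stand-in for the code-generation endpoint. Both functions below are hypothetical; the real loop runs through KNIME nodes and the K-AI service:

```python
# Sketch of the built-in error handling: if generated code fails to run,
# the error message is appended to the prompt and the request is retried.

def generate_code(prompt, attempt):
    # Hypothetical stand-in for the K-AI code-generation endpoint:
    # the first attempt deliberately returns broken code.
    return "1/0" if attempt == 0 else "result = 6 * 7"

def run_with_retries(prompt, max_attempts=3):
    for attempt in range(max_attempts):
        code = generate_code(prompt, attempt)
        try:
            scope = {}
            exec(code, scope)        # execute the generated snippet
            return scope.get("result")
        except Exception as err:
            # Feed the failure back so the next generation can correct it.
            prompt += f"\nPrevious attempt failed with: {err}"
    return None

print(run_with_retries("compute the answer"))  # 42, after one retry
```

Surfacing the execution error to the generator is what turns an occasional bad generation into a recoverable detour instead of a dead end.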

The visualization tool can then be used to display, for example, pie charts of the jobs you currently have access to, as shown in Figure 5.

Figure 5: Result of the prompt “List my jobs and show their state in a pie chart.”

The tools “Visualization”, “Process Table”, and “Prepare Endpoints” make use of this idea, in order of increasing complexity and adapted for the individual cases.

Confirmation of critical tool calls

Any HTTP request other than HEAD or GET usually changes the Hub: for example, adding or removing someone from a team, changing the name or message of a label, or renaming a workflow. This can potentially wreak havoc on the Hub, and we want humans to confirm such an action every time.

This is implemented using a flag file “confirmation.table”, which by default has one entry: “false”. This leads the “Modifying request (PUT, POST, …)” tool to not execute the request, but instead display a message: “Confirmation needed first! Display the desired request in a table view to the user and ask them to check the ‘confirm’ checkbox if everything looks as desired.”

This lets the LLM ask you, the user, for confirmation. Once you check “confirm request”, the flag is set to “true”, and on another confirmation prompt to the agent, it will be able to execute the request. Because the prepared requests are separated from the communication layer, we ensure that they are not tampered with before they are actually executed.
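The confirmation gate reduces to a flag check in front of the request execution. A boolean stands in here for the “confirmation.table” flag file; the function is an illustrative assumption:

```python
# Sketch of the confirmation flag guarding modifying (non-GET/HEAD) requests.
# In the workflow this is the "confirmation.table" flag file; a boolean
# parameter stands in for it here.

CONFIRM_MESSAGE = ("Confirmation needed first! Display the desired request "
                   "in a table view to the user and ask them to check the "
                   "'confirm' checkbox if everything looks as desired.")

def modifying_request(request, confirmed):
    if not confirmed:
        return CONFIRM_MESSAGE  # sent back to the LLM, nothing is executed
    return f"executed: {request['method']} {request['path']}"

request = {"method": "DELETE", "path": "/repository/abc123"}
print(modifying_request(request, confirmed=False))  # asks for confirmation
print(modifying_request(request, confirmed=True))
```

Keeping the prepared request in the data layer means the human confirms exactly what will be sent, not a paraphrase produced by the LLM.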

Challenges

We are experimenting with how to make an LLM find the right endpoints out of the over 250 on KNIME Hub, process their responses, and use the results to parametrize subsequent requests to endpoints.

The quality of the answers still needs improvement. Prompts that lead to confusing answers are to be expected and might need a nudge in the right direction, like a team ID copied from the Table View and pasted into the chat here, or a “Try again” there. Some endpoints also have distinctive features, like pagination, which must be respected to retrieve the full set of data.
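Pagination in particular means a single request is not enough. Here is a sketch of following page tokens until the data is exhausted; the `nextPage` field and the page layout are hypothetical, standing in for whatever scheme a given endpoint uses:

```python
# Sketch of respecting pagination: keep requesting pages until the
# (hypothetical) "nextPage" token disappears, then concatenate results.

PAGES = {  # stand-in for a paginated endpoint's responses, keyed by token
    None: {"items": [1, 2], "nextPage": "p2"},
    "p2": {"items": [3, 4], "nextPage": "p3"},
    "p3": {"items": [5]},  # last page: no nextPage token
}

def fetch_page(token):
    return PAGES[token]

def fetch_all(fetch):
    items, token = [], None
    while True:
        page = fetch(token)
        items.extend(page["items"])
        token = page.get("nextPage")
        if token is None:
            return items

print(fetch_all(fetch_page))  # [1, 2, 3, 4, 5]
```

An agent that stops after the first page silently reports partial data, which is why this is called out as a feature the tools must handle, not the LLM.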

Additionally, as we’ve long known, it turns out again that KNIME Hub is more than the sum of its endpoints. What is fundamentally missing in the current implementation is a way of making the concepts of the Hub known to the LLM. Concepts like:

  • Every item has a name (user-facing) and an ID (used in the backend), which allows, for example, convenient renaming and moving operations
  • Workflows are identified by both an ID and a version number
  • Team membership is determined by the account ID being in the member group of the team, which is itself part of the team (this enables adding externally managed groups to teams, via SCIM or otherwise), and many more

All these concepts are documented, and the next task is to make this documentation known to Agent Hubert so it can infer the right chain of endpoints from it.

Outlook

We built a powerful chat interface for REST APIs in general, and the KNIME Business Hub in particular, that understands natural language and presents data in a meaningful way.

So far, this is a blueprint for agents built on REST API endpoints: the tools are independent of the Hub-specific concepts and context. The principle works not only with the OpenAPI specification of a KNIME Business Hub, but with any other such specification.

Next up is to make this a truly useful agent for KNIME Hub. We have the right pieces of functionality in place, and will now trade generality for deeper knowledge and better performance on the KNIME Business Hub.

Stay tuned!

Get Started

In case you want to experiment yourself, here are the steps to get you started:

  1. Enable the AI features on your KNIME Business Hub to be able to use K-AI through your Hub.
  2. Have a KNIME AP 5.6 (or Hub Execution Context) with the KNIME AI Extension installed.
  3. Have an API Key to an LLM handy, ideally in the form of a Secret on your KNIME Business Hub.
  4. Download the “Agent Hubert” workflows from KNIME Community.
  5. Open the workflow “Agent Hubert” in an AP
    1. Open the component, and configure the Secrets Retriever to retrieve your API key from step 3.
    2. Possibly exchange the OpenAI Authenticator & OpenAI LLM Selector with the Authenticator & LLM Selector of your choice. Find an overview of the accessible nodes in the KNIME AI Extension.
  6. Save the workflow and use it from the AP, or upload it together with the tools to your Hub and use it as a data app there.
