# How to build a reactive AI agent in Jitterbit Harmony

## Introduction
This guide shows how to build a basic AI agent in Jitterbit Harmony using Integration Studio. This agent responds to user queries using a large language model (LLM) without memory or advanced tools.
> **Tip:** For learning purposes, refer to the Reactive Agent provided through Jitterbit Marketplace for an implementation of this guide.
## Build a reactive AI agent
1. Create a new Integration Studio project:

    - Log in to the Harmony portal and select Integration Studio > Projects.
    - Click New Project. A Create New Project dialog opens.
    - In the dialog, enter a Project name such as `AI Agent - Basic`, select an existing environment, then click Start Designing. The project designer opens.
2. Create the main entry workflow and custom Jitterbit API. This workflow is designed to receive user queries from Slack or another interface via the API. To create the main entry workflow and API:

    - In the project designer, double-click the default workflow name and rename it to `Main Entry - API Request Handler`.
    - In the Project endpoints and connectors tab of the design component palette, under the Available endpoints category, click API to show the activity types that can be created. Then click and drag the Request activity type to the drop zone on the design canvas.
    - Double-click the default operation name and rename it as appropriate, for example to `Slack Bot Request`.
    - Hover to the right of the API Request activity, click the drop zone, and select New Transformation. A configuration screen opens.
    - Create source and target schemas as appropriate and map fields within the transformation according to your data and use case. Refer to the Reactive Agent provided through Jitterbit Marketplace for examples of transforming the data.
    - Click the operation's actions menu and select Deploy > Deploy.
    - Create a Jitterbit custom API to accept and parse user query payloads:

        - Click the operation's actions menu and select Publish as an API, or select Publish as an API using AI.
        - Keep the default settings except for the following:
            - Method: `POST`
            - Response Type: `System Variable`
        - Retain the service URL of the published API.
    - Create additional operations for request handler responsibilities (UI-specific pre-processing).

      For UI-based entry points such as Slack, the request handler is responsible for performing all interface-specific validations before invoking the main AI logic. For example, the Slack handler must detect and process these Slack-specific event types:

        - URL verification: When Slack sends a `challenge` request for Events API URL verification, the handler must validate the token and return the challenge response. This request does not represent a user query and therefore must not invoke the AI workflow.
        - Bot message filtering: If the incoming event originates from the Slack bot itself, the handler must ignore the request and terminate processing to prevent recursive loops.
        - User query routing: Only when the request represents a valid user message should the handler transform the Slack payload into the internal standardized request format and forward it to the main AI logic.

      Ensuring that all platform-specific checks occur within the UI handler keeps the main AI logic fully decoupled from Slack and other interfaces, allowing alternate UIs (Microsoft Teams, REST API, web app, and so on) to follow the same standardized contract without modifying core AI workflows.
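The routing rules above can be sketched in plain code. In the actual project these checks are implemented as Jitterbit operations and scripts; the Python function below and its return values are hypothetical, but the payload fields (`type`, `challenge`, `event`, `bot_id`) follow Slack's Events API:

```python
# Illustrative sketch of the request handler's routing rules. The function
# name and return shapes are hypothetical; the payload field names follow
# Slack's Events API.

def route_slack_event(payload: dict) -> dict:
    # URL verification: echo the challenge back; do not invoke the AI workflow.
    if payload.get("type") == "url_verification":
        return {"action": "respond", "body": {"challenge": payload["challenge"]}}

    event = payload.get("event", {})

    # Bot message filtering: ignore events produced by the bot itself
    # to prevent recursive loops.
    if event.get("bot_id") or event.get("subtype") == "bot_message":
        return {"action": "ignore"}

    # User query routing: transform into the internal standardized request
    # format and forward to the main AI logic.
    if event.get("type") == "message" and event.get("text"):
        return {
            "action": "forward",
            "body": {"user": event.get("user"), "query": event["text"]},
        }

    # Anything else is not a user query.
    return {"action": "ignore"}
```

Keeping this dispatch logic in the handler is what allows a Teams or REST handler to emit the same `forward` body without the AI workflow knowing which UI produced it.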
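Once the operation is deployed and published, the agent can be exercised end to end by POSTing a query payload to the retained service URL. A hypothetical client sketch, assuming a simple JSON body (the URL placeholder and the payload schema are assumptions; match them to your published API and source schema):

```python
import json
import urllib.request

# Placeholder: substitute the service URL you retained when publishing the API.
SERVICE_URL = "https://<your-harmony-region>/<your-api-path>"

def build_query_request(user: str, text: str) -> urllib.request.Request:
    """Build a POST request carrying a user query (hypothetical payload shape)."""
    body = json.dumps({"user": user, "query": text}).encode("utf-8")
    return urllib.request.Request(
        SERVICE_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",  # the API was published with Method: POST
    )

# To send for real:
# urllib.request.urlopen(build_query_request("U123", "What is Harmony?"))
```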
3. Create the main AI logic workflow. The purpose of this workflow is to handle LLM calls and return responses. To create the main AI logic workflow:

    - Click Add New Workflow along the top of the design canvas to create a new workflow.
    - Double-click the default workflow name and rename it to `Main - AI Agent Tools Logic`.
    - Use a connector to create an endpoint that connects to the LLM service you are using (for example, Amazon Bedrock, Azure OpenAI, Google Gemini, or OpenAI). In the Project endpoints and connectors tab of the design component palette, under the Available endpoints category, click the endpoint to show its activity types, then drag an activity to the drop zone on the design canvas and double-click it to configure it.
    - Create request and response transformations by hovering on either side of the newly created activity, clicking the drop zone, and selecting New Transformation. In the configuration screen that opens, map the request inputs for the activity in the left-side transformation, and map the LLM response into a structured output in the right-side transformation. Refer to the Reactive Agent for an example.
    - Send the LLM response back to the Slack API or any other interface that initiated the call to the `Main Entry - API Request Handler` workflow.
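The response transformation's job is to flatten the provider's reply into a small structured output. As an illustration only, a hypothetical OpenAI-style chat-completion payload could be reduced to the fields a Slack reply needs (the input and output field names here are assumptions; map them to your LLM connector's actual schema):

```python
# Illustrative sketch of the response transformation: reduce an OpenAI-style
# chat-completion payload to a small structured output. Field names are
# assumptions, not the Jitterbit connector's real schema.

def structure_llm_response(raw: dict) -> dict:
    choice = raw.get("choices", [{}])[0]
    return {
        "answer": choice.get("message", {}).get("content", ""),
        "model": raw.get("model", ""),
        "finish_reason": choice.get("finish_reason", ""),
    }
```

A narrow output like this keeps the entry workflow simple: it only ever sees `answer`, regardless of which LLM service produced it.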
4. Connect the two workflows so that the main AI logic workflow receives the user query from the main entry workflow:

    - Return to the `Main Entry - API Request Handler` workflow.
    - Add a script to call the AI logic workflow:
        - Click an operation drop zone and select New Script.
        - Configure the script to run the operation in the `Main - AI Agent Tools Logic` workflow using the `RunOperation` function. For an example, see the `Call AI Workflow` script in the Reactive Agent.
5. Click the project's actions menu and select Deploy Project.
## Next steps
To build upon the basic AI agent, see Build a contextual AI agent.
