OpenAI Fine-Tuned Agent in Jitterbit Harmony
Overview
Jitterbit provides the OpenAI Fine-Tuned Agent to all customers through Jitterbit Marketplace. This agent is designed for learning purposes to help organizations easily adopt AI using the fine-tuning technique, which adapts an LLM's responses by training it on your organization's own data.
This AI agent lets you customize an OpenAI model with your organization's data and then use that fine-tuned model to answer questions through a conversational interface. While this agent uses Slack as the interface, you can adapt it to work with other platforms such as Microsoft Teams, microservices, SaaS apps like Salesforce, or applications built using Jitterbit App Builder.
This document explains how to set up and operate this AI agent. It first covers the architecture and prerequisites, then provides example prompts to show what the fine-tuned model can do, and finally walks through installing, configuring, and operating the AI agent.
Project architecture
This project provides two distinct components:
- Fine-tuning utility: A one-time workflow that trains an OpenAI model with your organization's specific data (such as employee surveys, internal documentation, or company policies).
- Conversational agent: An interactive Slack bot that uses your fine-tuned model to answer questions. This is the AI agent that your users interact with.
A typical workflow involves the following steps:
- You run the fine-tuning utility once to create your custom model.
- Users interact with the Slack agent, which uses that custom model.
- If you need to update the model with new data, you re-run the fine-tuning utility.
Workflow diagrams
The following diagrams depict the two main workflows in this integration.
Utility - Fine Tune OpenAI Model workflow
This utility workflow manages the fine-tuning process for the OpenAI model:
```mermaid
flowchart LR
    FTD@{ shape: hex, label: "Jitterbit Studio training data" }
    FTR[OpenAI API HTTP request]
    FTJ@{ shape: hex, label: "OpenAI fine-tuning job" }
    FTD --> FTR --> FTJ
    classDef plain fill:white, stroke:black, stroke-width:3px, rx:15px, ry:15px
```
Main Entry - Slack API Request Handler workflow
This workflow handles the primary interaction between Slack, the Jitterbit Custom API, and OpenAI's fine-tuned model:
```mermaid
flowchart LR
    JSP@{ shape: hex, label: "Jitterbit Studio AI agent project" }
    SCI[fab:fa-slack Slack bot chat interface]
    JCA@{ shape: hex, label: "Jitterbit API Manager custom API" }
    ORC[OpenAI REST Call]
    OFL@{ shape: hex, label: "Fine-tuned OpenAI model" }
    JSP -->|Answer| SCI
    SCI -->|Sends Question| JCA
    JCA -->|Triggers Slack API request handler| JSP
    JSP -->|Question| ORC -->|Answer| JSP
    ORC --> OFL --> ORC
    classDef plain fill:white, stroke:black, stroke-width:3px, rx:15px, ry:15px
```
Prerequisites
You need the following components to use this AI agent.
Harmony components
You must have a Jitterbit Harmony license with access to the following components, all of which this AI agent uses:

- Marketplace
- Studio
- API Manager
OpenAI
You must have an OpenAI subscription with permissions to create and manage API keys.
- OpenAI API key. For more information, see OpenAI API keys.
Tip
For OpenAI pricing information, see the OpenAI pricing page.
Supported endpoints
The AI agent's design incorporates the following endpoints. You can accommodate other systems by modifying the project's endpoint configurations and workflows.
LLM
The AI agent uses OpenAI as the Large Language Model (LLM) provider for fine-tuning and inference.
Chat interface
The AI agent uses Slack as the default chat interface for interacting with the fine-tuned model.
If you want to use a different application as the chat interface, you can modify the project's workflows to integrate with your preferred platform.
Example prompts
The following example prompts demonstrate the types of questions that the fine-tuned model can handle after training with your organization's data:
- "In the Q3 Workplace Environment Survey, what did employees rate as the lowest performing area?"
- "What was the main complaint in the 2025 Onboarding Experience Survey?"
Note
The specific data you use for fine-tuning the LLM determines the questions and answers that the model can handle. Customize your training data to match your organization's needs.
Installation, configuration, and operation
Follow these steps to install, configure, and operate this AI agent:
- Download customizations and install the Studio project.
- Review project workflows.
- Generate your OpenAI API key.
- Configure project variables.
- Test connections.
- Deploy the project.
- Create the Jitterbit custom API.
- Create the Slack app, test the Slack connection, and redeploy the project.
- Trigger the project workflows.
- Troubleshooting.
Download customizations and install the project
Follow these steps to download customization files and install the Studio project:
- Log in to the Harmony portal at https://login.jitterbit.com and open Marketplace.
- Locate the AI agent named OpenAI Fine-Tuned Agent. You can use the search bar or, in the Filters pane under Type, select AI Agent to filter the display.
- Click the AI agent's Documentation link to open its documentation in a separate tab. Keep the tab open so that you can refer to the documentation after you start the project.
- Click Start Project to open a two-step configuration dialog. The dialog lets you download customizations and import the AI agent as a Studio project.
- In configuration step 1, Download Customizations, select the slack_app_manifest.json file and click Download Files.

  Tip
  The configuration dialog includes a warning not to import the AI agent before applying endpoint customizations. That warning does not apply to this AI agent and can be ignored. Follow the recommended order of steps in this documentation.

- Click Next.
- In configuration step 2, Create a New Project, select an environment where the Studio project will be created, and then click Create Project.
- A progress dialog displays. After the dialog indicates the project is created, use the dialog link Go to Studio or open the project directly from the Studio Projects page.
Review project workflows
In the open Studio project, review the workflows along with the descriptions below to understand what each workflow does.
| Workflow name | Trigger type | Component type | Description |
|---|---|---|---|
| Utility-Fine Tune OpenAI Model | Manual | Fine-tuning utility | Initiates the fine-tuning process with training data. |
| Main Entry - Slack API Request Handler | API | Conversational agent | Handles incoming Slack bot requests. |
| Main - AI Agent Tools Logic | Called by other workflows | Conversational agent | Manages LLM requests and responses. |
Utility-Fine Tune OpenAI Model
This workflow performs the fine-tuning process to create a custom OpenAI model. Start the process manually by running the Main-Init Fine Tune operation when you want to create or update a fine-tuned model with new training data.
The workflow handles the following tasks:
- Uploads training data to OpenAI.
- Initiates the fine-tuning job.
- Monitors the job status.
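Outside Jitterbit, those three tasks correspond to calls against OpenAI's public fine-tuning endpoints. The sketch below only assembles the request parts for each call (the helper names are hypothetical, not part of the project); every actual call must also carry an `Authorization: Bearer <OpenAI_API_KEY>` header:

```python
# Sketch of the REST calls the fine-tuning utility performs.
# Helper names are illustrative; the endpoints are OpenAI's public API.
BASE_URL = "https://api.openai.com"

def upload_request(jsonl_path: str) -> dict:
    """Request parts for uploading a training file (POST /v1/files)."""
    return {
        "url": f"{BASE_URL}/v1/files",
        "data": {"purpose": "fine-tune"},
        "files": {"file": jsonl_path},
    }

def create_job_request(training_file_id: str, base_model: str) -> dict:
    """Request parts for starting a fine-tuning job (POST /v1/fine_tuning/jobs)."""
    return {
        "url": f"{BASE_URL}/v1/fine_tuning/jobs",
        "json": {"training_file": training_file_id, "model": base_model},
    }

def job_status_url(job_id: str) -> str:
    """URL polled to monitor the job (GET /v1/fine_tuning/jobs/{id})."""
    return f"{BASE_URL}/v1/fine_tuning/jobs/{job_id}"
```

When the monitored job reports `status: succeeded`, its `fine_tuned_model` field contains the model ID you later assign to the `Fine_Tuned_Model_ID` project variable.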
Before running this workflow, prepare your training data in JSONL (JSON Lines) format. Each line must represent a single training example as a complete JSON object. OpenAI requires a minimum of 10 question-and-answer examples, but at least 50 examples are recommended for better results.
The example below shows the required format. Each line is a single JSON object with a messages array that includes a user question and an assistant response:

```jsonl
{"messages": [{"role": "user", "content": "What was the top employee-requested feature in the 2025 internal IT satisfaction survey?"}, {"role": "assistant", "content": "The most requested feature in the 2025 IT satisfaction survey was single sign-on integration for all internal tools."}]}
{"messages": [{"role": "user", "content": "In the Q3 Workplace Environment Survey, what did employees rate as the lowest performing area?"}, {"role": "assistant", "content": "The lowest performing area in the Q3 Workplace Environment Survey was the availability of quiet workspaces for focused tasks."}]}
```
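Malformed lines are a common cause of failed fine-tuning jobs, so it can help to validate the file before uploading it. A minimal sketch using only the standard library (the role checks reflect OpenAI's chat fine-tuning format; it does not enforce the 10-example minimum):

```python
import json

def validate_jsonl(lines):
    """Return a list of problems found in JSONL training data (empty = clean)."""
    errors = []
    for n, line in enumerate(lines, start=1):
        line = line.strip()
        if not line:
            continue  # blank lines are simply skipped
        try:
            obj = json.loads(line)
        except json.JSONDecodeError as exc:
            errors.append(f"line {n}: not valid JSON ({exc.msg})")
            continue
        messages = obj.get("messages")
        if not isinstance(messages, list) or not messages:
            errors.append(f"line {n}: missing 'messages' array")
            continue
        roles = [m.get("role") for m in messages]
        if "user" not in roles or "assistant" not in roles:
            errors.append(f"line {n}: needs at least one user and one assistant message")
    return errors
```

Pass it the lines of your training file (for example, `validate_jsonl(open("training.jsonl").readlines())`); an empty result means every line parsed as a complete example.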
Note
After the fine-tuning process completes, retrieve the fine-tuned model ID from the OpenAI fine-tune dashboard. You need this ID to set the Fine_Tuned_Model_ID project variable.
Main Entry - Slack API Request Handler
This workflow manages incoming Slack bot requests. The workflow is triggered through a Jitterbit custom API each time a user interacts with the Slack bot chat interface. To learn how to configure the Jitterbit custom API, see Create the Jitterbit custom API.
Main - AI Agent Tools Logic
This workflow handles user queries received from the Main Entry - Slack API Request Handler workflow. The workflow manages requests to the LLM and captures its responses.
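At the LLM step, the request sent to OpenAI is an ordinary chat completion that names the fine-tuned model. A sketch of the payload such a step would assemble (illustrative only, not the project's actual script; the parameter names mirror the project variables):

```python
def build_chat_request(question: str, system_prompt: str, fine_tuned_model_id: str) -> dict:
    """Payload for POST {OpenAI_Base_Url}/v1/chat/completions using the fine-tuned model."""
    return {
        # Fine-tuned model IDs look like "ft:<base-model>:<org>::<suffix>" (placeholder format)
        "model": fine_tuned_model_id,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    }
```

The answer returned in `choices[0].message.content` is what the workflow relays back to the Slack interface.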
Generate your OpenAI API key
Follow these steps to generate an OpenAI API key:
- Go to the OpenAI API keys page.
- Click Create new secret key to generate a new API key.
- Copy the generated key and save it securely. You need this key to configure the OpenAI_API_KEY project variable.
Warning
Store your API key securely. The key is displayed only once when you create it. If you lose the key, you must generate a new key.
Configure project variables
In the Studio project, you must set values for the following project variables.
To configure project variables, use the project's actions menu to select Project Variables. This opens a drawer along the bottom of the page where you can review and set the values.
OpenAI
| Variable name | Description |
|---|---|
| OpenAI_Model | Identifier of the base foundation model provided by OpenAI that will be fine-tuned. This must be a fine-tunable model released by OpenAI (for example, gpt-4.1-nano-2025-04-14). |
| OpenAI_Base_Url | The base HTTP endpoint for OpenAI API calls. All model, file, fine-tuning, and embeddings requests are made relative to this URL. Typically set to https://api.openai.com. |
| OpenAI_API_KEY | The secret authentication token issued by OpenAI. This key authorizes API requests and must be included in all calls to OpenAI services. Keep this key confidential. |
| Generic_System_Prompt | The default instruction that sets how the agent should behave for all conversations (for example, "You are an AI assistant that helps users find accurate and relevant information"). |
| Fine_Tuned_Model_ID | The unique model identifier assigned by OpenAI after fine-tuning. Retrieve this value from the OpenAI fine-tune dashboard after fine-tuning completes. |
Slack
| Variable name | Description |
|---|---|
| bot_oauth_user_token | The Slack bot token obtained after creating the Slack app. This token is used for the Bot user OAuth access token in the Slack connection. |
Note
The Slack app is created in a later step. You can leave this variable blank for now.
Test connections
Test the endpoint configurations to verify connectivity using the defined project variable values.
To test connections, go to the design component palette's Project endpoints and connectors tab, hover on each endpoint, and click Test.
Deploy the project
Deploy the Studio project.
To deploy the project, use the project's actions menu to select Deploy.
Create the Jitterbit custom API
Create a custom API for the Slack Bot Request operation in the Main Entry - Slack API Request Handler workflow.
To create the API, use the operation's actions menu to select Publish as an API or Publish as an API using AI.
Configure the following settings:
| Setting | Value |
|---|---|
| Method | POST |
| Response Type | System Variable |
Save the API service URL of the published API for use when creating the Slack app. To find the service URL, go to the API details drawer on the Services tab, hover on the service's Actions column, and click Copy API service URL.
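For context on what this API receives: Slack's Events API sends two request shapes to a registered request URL, a one-time `url_verification` challenge when the URL is registered, and `event_callback` payloads when users send messages. A minimal sketch of that dispatch (a hypothetical function; the Jitterbit workflow implements the equivalent logic):

```python
def handle_slack_request(payload: dict) -> dict:
    """Dispatch a Slack Events API payload (url_verification vs. message events)."""
    if payload.get("type") == "url_verification":
        # Slack expects the challenge echoed back when the request URL is registered.
        return {"challenge": payload["challenge"]}
    if payload.get("type") == "event_callback":
        event = payload.get("event", {})
        if event.get("type") == "message" and not event.get("bot_id"):
            # A user message: this is where the question is forwarded to the LLM.
            return {"question": event.get("text", ""), "channel": event.get("channel")}
    return {}  # ignore bot echoes and other event types
```

Skipping events that carry a `bot_id` prevents the bot from answering its own replies in a loop.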
Create the Slack app, test the connection, and redeploy the project
To create the chat interface in Slack, create a Slack app using the Slack app manifest file provided with this AI agent's customization files. Alternatively, create the app from scratch.
If you use the provided Slack app manifest file (slack_app_manifest.json), replace the following placeholders with your own configuration values.
| Placeholder | Description |
|---|---|
| {{Replace with Slack bot name}} | The name you want your Slack bot to have, as displayed to users. Replace this value in two places in the manifest. |
| {{Replace with Jitterbit API URL}} | The service URL of the Jitterbit custom API you created in Create the Jitterbit custom API. |
After installing the Slack app, obtain its bot token.
Open the project variables configuration again and enter the bot token for the bot_oauth_user_token project variable value.
After you set the bot token, test the Slack connection and redeploy the project.
Trigger the project workflows
To trigger the fine-tuning process:
- Manually run the Utility-Fine Tune OpenAI Model workflow by running the Main-Init Fine Tune operation.
- Provide your training data in the Fine Tune Data script by assigning it to the $InAndOut variable. For details, see Utility-Fine Tune OpenAI Model.

Note
This setup is required only when you need to create a new fine-tuned model.
To use the fine-tuned model:
- The Main Entry - Slack API Request Handler workflow is triggered by the Jitterbit custom API. Send a direct message to the Slack app to initiate the custom API trigger.
- All other workflows are triggered by other operations and run downstream from the main workflow. They are not intended to be run independently.
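You can also smoke-test the custom API without going through Slack by posting a Slack-style payload directly to its service URL. A sketch using only the standard library (the URL shown is a placeholder for your published API service URL, and the channel ID is fabricated):

```python
import json
import urllib.request

def post_test_payload(api_url: str, text: str) -> urllib.request.Request:
    """Build a POST that mimics a Slack event callback for the custom API."""
    body = json.dumps({
        "type": "event_callback",
        "event": {"type": "message", "text": text, "channel": "C0000000000"},
    }).encode()
    return urllib.request.Request(
        api_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Send the request with `urllib.request.urlopen(post_test_payload("https://example.jitterbit.net/api", "test question"))` and check the operation logs to confirm the handler ran.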
Troubleshooting
If you encounter issues, review the project's operation logs and the custom API's logs for detailed troubleshooting information.
For additional assistance, contact Jitterbit support.