GitHub Agent with MCP in Jitterbit Harmony

Overview

Jitterbit provides the GitHub Agent with MCP to all customers through Jitterbit Marketplace. This agent is designed for learning purposes to help organizations easily adopt AI. This agent uses the Retrieval-Augmented Generation (RAG) technique, which combines LLM reasoning with access to external tools and data sources.

This agent leverages the Model Context Protocol (MCP) through the MCP Client connector. It consumes the tools provided by the GitHub MCP Server and acts as an assistant, where you can interact with it as a chatbot.

This document explains the setup and operation of this agent. It first covers the prerequisites, next gives example prompts and limitations to show what the agent can and can't do, and then provides steps to install, configure, and operate the agent.

Tip

For steps on building the GitHub Agent with MCP from scratch, refer to the How-to guide. Additional agents for learning purposes, the Reactive, Contextual, and Salesforce Q&A agents, are also available and are covered separately.

Prerequisites

The following components are required to use this agent or are assumed in its design.

Harmony components

You must have a Jitterbit Harmony license with access to the following components:

Private agent

For simplicity of design, this AI agent is designed for use with private agents.

Supported endpoints

The following endpoints are incorporated into the agent's design.

LLM

OpenAI is used as the large language model (LLM). You must have generated an OpenAI API token to be used with the OpenAI connector.
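Before configuring the connector, you can sanity-check the token outside of Jitterbit by calling OpenAI's chat completions endpoint directly. This is a minimal sketch only: the model name and prompt are illustrative, and the key would normally come from a secrets store rather than a literal.

```python
import json

# Placeholder values for illustration only.
OPENAI_URL = "https://api.openai.com/v1/chat/completions"
api_key = "sk-..."  # the key entered in the connector's API/Secret Key field

headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello"}],
}

# With a real key: requests.post(OPENAI_URL, headers=headers, json=payload)
print(json.dumps(payload))
```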

MCP Client

GitHub's MCP server is connected through the MCP Client connector. You must have generated a GitHub personal access token to use with the MCP Client connector.
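Under the hood, MCP is JSON-RPC 2.0 over HTTP. The sketch below shows the general shape of the two requests the MCP Client connector makes against the GitHub MCP server; the tool name and arguments are illustrative assumptions, not the connector's actual internals.

```python
import json

# The GitHub MCP server endpoint from the connection configuration.
MCP_URL = "https://api.githubcopilot.com/mcp/"

# MCP is JSON-RPC 2.0; listing the server's available tools:
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Calling one tool; "list_branches" and its arguments are illustrative.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "list_branches",
        "arguments": {"owner": "octocat", "repo": "hello-world"},
    },
}

# Both requests are POSTed with "Authorization: Bearer <personal access token>".
print(json.dumps(call_tool))
```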

Example prompts and limitations

The following sections describe what this AI agent can and can't do.

Example prompts

Here are example prompts this AI agent can handle with access to the appropriate data:

  • "How many pull requests has the user {{user-name}} created for the {{repository-name}} repository in GitHub? The owner of the repo is {{owner-name}}."

  • "How many branches does the {{repository-name}} repository have in GitHub?"
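Each prompt like the above is sent to the agent as the prompt field of a JSON request body. A minimal sketch of building that body; the placeholder values filled into the prompt are illustrative.

```python
import json

# Fill the {{...}} placeholders from the example prompts; values are illustrative.
prompt = (
    "How many pull requests has the user octocat created for the "
    "hello-world repository in GitHub? The owner of the repo is octocat."
)

# The agent's API expects a JSON body of this shape:
body = json.dumps({"prompt": prompt})
print(body)
```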

Limitations

  • Private agent only: For simplicity of design, this AI agent can be used with private agents only, and is not compatible with cloud agents.

  • Stateless: The agent is stateless and non-conversational. It processes each request as a single-turn prompt, meaning it does not store or use the information from previous prompts to inform its current response. Its capability is limited to answering the current prompt alone.

  • Limited knowledge: As this agent is a basic sample only, it can't answer all GitHub-related questions unless the corresponding tools are implemented in the Tool Workflows workflow.

  • Headless: As this agent is a basic sample only, it doesn't have a chat interface for interacting with the AI agent. However, it can connect to any interface, including tools such as Slack, Microsoft Teams, WhatsApp, or any custom interface with conversational chat capabilities. To add a chat interface, you can incorporate the steps demonstrated in the Reactive, Contextual, and Salesforce Q&A agents.

Installation, configuration, and operation

Follow these steps to install, configure, and operate this AI agent:

  1. Install the Integration Studio project.
  2. Review project workflows.
  3. Create an OpenAI API key, and configure and test the OpenAI connection.
  4. Create a GitHub personal access token, and configure and test the MCP Client connection.
  5. Deploy the project.
  6. Create the Jitterbit custom API.
  7. Trigger the project workflows.

Install the project

Follow these steps to install the Integration Studio project for the AI agent:

  1. Log in to the Harmony portal at https://login.jitterbit.com and open Marketplace.

  2. Locate the AI agent named GitHub Agent with MCP. To locate the agent, you can use the search bar or, in the Filters pane under Type, select AI Agent to limit the display to the available AI agents.

  3. Click the AI agent's Documentation link to open its documentation in a separate tab. Keep the tab open to refer back to after starting the project.

  4. Click Start Project to open a configuration dialog to import the AI agent as an Integration Studio project.

  5. Enter a project name and select an environment where the Integration Studio project will be created, then click Create Project.

  6. A progress dialog will be displayed. Once it indicates the project is created, use the dialog link Go to Integration Studio or open the project directly from the Integration Studio Projects page.

Review project workflows

In the open Integration Studio project, review the workflows along with the descriptions below to understand what they do.

  1. Main Entry Workflow

    This workflow manages incoming API call requests. It has one operation, Main Entry - API Request Handler, that is triggered via API each time an HTTP request is sent.

  2. AI Logic

    This workflow manages the AI logic. It has the following operations:

    1. AI Logic - Orchestrator

      This operation manages the workflow by controlling the execution of other operations.

    2. AI Logic - Contextualizing LLM

      This operation sets the initial context for the LLM through a prompt configured in an OpenAI Prompt activity:

      Quote

      I am a Software Engineer, and you're going to be my personal assistant for any GitHub-related topic. Please help me answer questions and automate/execute tasks as I ask for them.

    3. AI Logic - Listing and Registering the Tools

      This operation fetches all the MCP tools provided by the GitHub MCP Server and registers the same tools to the LLM (in this example, the model is GPT-4).

    4. AI Logic - Sending Prompt to LLM

      This operation sends to the LLM the user prompt that is received through the API and managed by the Main Entry - API Request Handler operation.

    5. AI Logic - Replying Back to LLM

      This operation sends back the tool response (after the tool execution) to the LLM and relays the final answer that will be used as the final response of the agent.

  3. Tool Workflows

    This workflow manages all the desired GitHub Server tools implemented as part of the agent and handles the conditions under which the tools should be executed based on the user prompt.

    1. Tool - Search Pull Requests

      Invoke the Tool - Search Pull Requests operation from the GitHub MCP Server based on the given tool name and tool arguments, and invoke the operation AI Logic - Replying Back to LLM to build the final agent response to the user.

    2. Tool - List Branches

      Invoke the Tool - List Branches operation from the GitHub MCP Server based on the given tool name and tool arguments, and invoke the operation AI Logic - Replying Back to LLM to build the final agent response to the user.

    3. Tool - Not Implemented Tool

      This operation manages scenarios where the LLM suggests a tool execution that is not implemented among the Tool Workflows operations. It returns a friendly message to the user, such as the following:

      {
          "chatId": "<string>",
          "response": "Oops! Sorry... I can't answer anything related to this question yet. Please try asking something different..."
      }
      

      This message can be customized by replacing the message in the response field of the transformation Prepare Request #5.

  4. Utility Workflows - Handling Unexpected Errors

    This operation handles unexpected scenarios when something goes wrong with the execution of the agent and returns a friendly message to the user, such as the following:

    {
        "chatId": "<string>",
        "response": "An unexpected error has happened."
    }
    

    This message can be customized by replacing the message in the response field of the transformation Prepare Request #6.
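The workflows above can be summarized as a tool-calling loop: register the MCP tools with the LLM, let the LLM pick a tool for the user's prompt, execute it through the MCP server, and send the result back to the LLM for a final answer. The sketch below is a hedged, self-contained illustration; the function names are stand-ins for the Jitterbit operations, and each one is stubbed rather than calling the real connectors.

```python
# Illustrative stand-ins for the operations described above; all stubbed.

SYSTEM_PROMPT = ("I am a Software Engineer, and you're going to be my personal "
                 "assistant for any GitHub-related topic.")

IMPLEMENTED_TOOLS = {"search_pull_requests", "list_branches"}  # Tool Workflows

def mcp_list_tools():
    # AI Logic - Listing and Registering the Tools: tools/list on the MCP server.
    return [{"name": "list_branches", "description": "List branches of a repo"}]

def llm_choose_tool(prompt, tools):
    # AI Logic - Sending Prompt to LLM: the LLM either answers directly or
    # names one of the registered tools. Stubbed for illustration.
    return {"tool": "list_branches", "arguments": {"owner": "o", "repo": "r"}}

def mcp_call_tool(name, arguments):
    # Tool Workflows: tools/call on the MCP server. Stubbed.
    return {"branches": ["main", "dev"]}

def handle_request(prompt):
    tools = mcp_list_tools()
    choice = llm_choose_tool(prompt, tools)
    if choice["tool"] not in IMPLEMENTED_TOOLS:
        # Tool - Not Implemented Tool: friendly fallback message.
        return {"chatId": "1", "response": "Oops! Sorry... I can't answer this yet."}
    result = mcp_call_tool(choice["tool"], choice["arguments"])
    # AI Logic - Replying Back to LLM: the tool result goes back to the LLM,
    # whose final answer becomes the agent response. Stubbed here.
    return {"chatId": "1",
            "response": f"The repo has {len(result['branches'])} branches."}

print(handle_request("How many branches does the repo have?"))
```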

Create an OpenAI API key, and configure and test the OpenAI connection

  1. Follow OpenAI's instructions to Create and export an API key.

  2. In the Integration Studio project, open the configuration of the OpenAI connection and enter the API key in the API/Secret Key field.

  3. Click the Test button to verify connectivity using the configured values.

Create a GitHub personal access token, and configure and test the MCP Client connection

  1. Follow GitHub's instructions for Creating a personal access token (classic).

    When creating the token, add the permissions that the AI agent needs to execute the tools you will be implementing. For example, this basic AI agent requires the public_repo permission to access public repositories.

  2. In the Integration Studio project, open the configuration of the MCP Client connection. Enter the following:

    • MCP server URL: The server URL must be https://api.githubcopilot.com/mcp/. Do not change this value.
    • Bearer token: Enter your personal access token.

    For more information, see GitHub's documentation Setting up the GitHub MCP Server.

  3. Click the Test button to verify connectivity using the configured values.

Deploy the project

Deploy the Integration Studio project. This can be done using the project's actions menu to select Deploy.

Create the Jitterbit custom API

Create a custom API for the Main Entry - API Request Handler operation in the Main Entry Workflow workflow. This can be done directly in Integration Studio using the operation's actions menu to select Publish as an API or Publish as an API using AI.

Keep the default settings except for the following:

  • Method: POST
  • Response Type: System Variable

The API Request activity, as part of the operation Main Entry - API Request Handler, expects to receive a JSON in the following structure:

{
    "prompt": "<string>"
}

Retain the service URL of the published API. The service URL can be found in the API details drawer on the Services tab by hovering on the service's Actions column and clicking Copy API service URL.

Trigger the project workflows

The main operation, Main Entry - API Request Handler, is triggered by the Jitterbit custom API.

You can send an HTTP request to this API with the example prompts listed earlier.

As soon as you send the HTTP request, the agent responds with a JSON in the following structure:

{
    "chatId": "<string>",
    "response": "<string>"
}
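Putting the request and response shapes together, the round trip can be sketched as below. The service URL is a placeholder for the URL you copied after publishing the API, and the sample response is illustrative; the actual call is left commented out.

```python
import json
from urllib import request

# Placeholder: use the service URL copied from the API details drawer.
SERVICE_URL = "https://<your-service-url>"

body = json.dumps(
    {"prompt": "How many branches does the hello-world repository have in GitHub?"}
).encode()
req = request.Request(
    SERVICE_URL,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# resp = request.urlopen(req)          # uncomment with a real service URL
# answer = json.loads(resp.read())

# A response of the documented shape can then be unpacked:
sample = {"chatId": "abc123", "response": "The repository has 2 branches."}
print(sample["response"])
```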

Troubleshooting

Review API logs and operation logs for detailed troubleshooting information.

For additional assistance, contact Jitterbit support.