Observability (Beta) in Jitterbit private agents 11.37 or later

Caution

This feature is currently released as a beta version, supported on private agents version 11.37.0.7 or later. To provide feedback, contact the Jitterbit Product Team.

Introduction

You can remotely monitor a private agent's performance and behavior in real time with either of the following supported observability platforms:

  • Datadog

  • Elasticsearch (using Metricbeat and Filebeat, with Kibana)

Before you can start monitoring private agents running on Docker, Linux, or Windows, you must install your chosen observability platform's agent on every private agent host you want to monitor, and then configure its metrics.

Note

On Linux and Windows, the observability agents start automatically when the host starts. On Docker, you must manually start them using the following commands:

Start the Datadog agent:

sudo datadog-agent start

Start the Elastic agents (Metricbeat and Filebeat):

sudo metricbeat start
sudo filebeat start
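
If you need to confirm that the Datadog agent came up after a manual start, its status command provides a quick check (Beats connectivity checks are covered in the troubleshooting sections later on this page):

sudo datadog-agent status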

Install Datadog on a Jitterbit private agent host

Datadog prerequisites

To install Datadog on a private agent host, you must have the following:

  • Private agent 11.37.0.7 or later installed and running.

  • A Datadog account.

  • A super-user (root) account on Linux, or an Administrator account on Windows.

    Important

    Run all commands as this user.

Install the Datadog agent

To install the Datadog agent on a private agent host, follow these steps:

  1. Log into your Datadog account.
  2. From the main menu, select Integrations > Agent.
  3. On the Agent Installation Instructions page, select your private agent host type.
  4. Click the API Key button. The Select an API Key dialog opens.
  5. If you don't have any API keys, click the Create New button to create one. Otherwise, select your API key entry, then click the Use API Key button.
  6. In the Agent Installation Command, click the Click to copy icon to copy the command to your clipboard.
  7. Open a terminal or PowerShell on the private agent host, paste the copied command (an illustrative example of the Linux command follows these steps), then press Enter to run it.
  8. Close the terminal or PowerShell.
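
For reference, the copied command for a Linux host typically resembles the following one-line installer. The API key and site values shown here are placeholders, so always run the exact command that Datadog generates for your account:

DD_API_KEY="<your-api-key>" DD_SITE="datadoghq.com" bash -c "$(curl -L https://install.datadoghq.com/scripts/install_script_agent7.sh)"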

Configure the Datadog agent

The Jitterbit private agent software includes a script to configure Datadog. To use it when installing for the first time, or when upgrading from a private agent 11.34-based installation, follow these steps:

  1. For Docker private agents, the hostname property in the /etc/datadog-agent/datadog.yaml file should be set to the host's hostname. If it is not, set it manually (a quick check is sketched after these steps).
  2. Run the following in a new terminal or PowerShell on the private agent host:

    Docker and Linux:

      cd /opt/jitterbit/scripts/
      ./configure_datadog.sh

    Windows:

      cd "C:\Program Files\Jitterbit Agent\scripts\"
      .\configure_datadog.ps1
    
  3. Read and respond to the prompts.

  4. To confirm your Datadog agent is working, log into your Datadog account, then select Integrations > Fleet Automation from the main menu.
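
For step 1, a quick way to compare the Docker host's hostname with the value the Datadog agent is using is shown below. This is a convenience check only; if the values differ, edit datadog.yaml and restart the agent:

# Print the host's hostname
hostname

# Show the hostname value (if any) configured for the Datadog agent
sudo grep -nE '^[# ]*hostname:' /etc/datadog-agent/datadog.yaml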

Create facets

To define Datadog facets, follow these steps:

  1. Select Logs > Explorer.
  2. Filter logs by your agent's Host, and the JitterbitAgentMetric service.
  3. (Optional) Check that the logs for your agent are in JSON format. To do this, select a recent log entry, then check that the CONTENT column contains a JSON-formatted log message.
  4. Under the Search facets bar, click Add (Add a facet).
  5. In the Add facet dialog's Path field, enter the text shown below, then click the Add button. Repeat for each item in the following list:
    • @environment_id
    • @environment_name
    • @is_operation_over_schedule
    • @name
    • @operation_id
    • @operation_instance_guid
    • @operation_name
    • @organization_id
    • @project_guid
    • @project_name
    • @status

Create a calculated field operation_duration_seconds

To create a calculated field, follow these steps:

  1. Select Logs > Explorer.
  2. Click the Add button, then select Calculated Field.
  3. In the Create a calculated field dialog, set values for the following fields:
    • Name your field: operation_duration_seconds
    • Define your formula: @fields.duration_seconds
  4. Click the Save button.

Create a measure @operation_duration_seconds

To create a measure, follow these steps:

  1. Select Logs > Explorer.
  2. Under the Search facets bar, click Add (Add a facet).
  3. In the Add facet dialog, select the Measure tab.
  4. In the Path field, enter @operation_duration_seconds.
  5. Click the Add button.

Create operation metrics

To define operation metrics, select Logs > Generate Metrics, then follow the steps below for each operation metric.

Tip

You can also use logs to create Datadog metrics.

Create a metric metric.operation.count.by.status

  1. Click the New Metric button.
  2. In the Generate Metric dialog, set values as follows:
    • Set Metric Name: metric.operation.count.by.status
    • Define Query: service:JitterbitAgentMetric @name:operation_log
  3. Click the group by menu, then click each of the following to add them to the list:
    • @fields.operation_id
    • @fields.operation_name
    • @fields.status
    • @agentgroup
    • @host
  4. Click the Create Metric button.
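
Once logs begin flowing, the generated metric can be graphed in a Datadog dashboard or notebook. The query below is an illustrative sketch; the exact tag keys available depend on how Datadog maps the group by fields above:

sum:metric.operation.count.by.status{*} by {fields.status}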

Create a metric metric.operation.running.over.scheduled.interval

  1. Click the New Metric button.
  2. In the Generate Metric dialog, set values as follows:
    • Set Metric Name: metric.operation.running.over.scheduled.interval
    • Define Query: service:JitterbitAgentMetric @name:operation_running_over_scheduled_interval
  3. Click the group by menu, and click each of the following to add them to the list:
    • @fields.operation_id
    • @fields.operation_name
    • @agentgroup
    • @host
  4. Click the Create Metric button.

Create a metric metric.operation.duration.seconds

  1. Click the New Metric button.
  2. In the Generate Metric dialog, set values as follows:
    • Set Metric Name: metric.operation.duration.seconds
    • Define Query: service:JitterbitAgentMetric @name:operation_running_over_scheduled_interval
  3. Click the group by menu, and click each of the following to add them to the list:
    • @fields.operation_id
    • @fields.operation_name
    • @agentgroup
    • @host
  4. Click the Create Metric button.

Create process metrics

To define process metrics, select Infrastructure > Processes, select the Manage Metrics tab, then follow the steps below for each process metric.

Create a metric proc.openginebyname.cpu.num_threads

  1. Click the New Metric button.
  2. Under section 1, Filter Processes, in the Filter by field, enter command:JitterbitOperationEngineProc.
  3. Under section 2, Select Measure and Dimensions, set values for the following fields:
    • Open the Measure menu, then select # of Threads.
    • Open the Dimensions menu, then select name.
  4. Under section 4, Name, in the metric.name field, enter openginebyname to make the name proc.openginebyname.cpu.num_threads.
  5. Click the Create button.

Create a metric proc.operationsengine.cpu.num_threads

  1. Click the New Metric button.
  2. Under section 1, Filter Processes, in the Filter by field, enter command:JitterbitOperationEngineProc.
  3. Under section 2, Select Measure and Dimensions, set values for the following fields:
    • Open the Measure menu, then select # of Threads.
    • Open the Dimensions menu, then select agentgroup.
  4. Under section 4, Name, in the metric.name field, enter operationsengine to make the name proc.operationsengine.cpu.num_threads.
  5. Click the Create button.

Import a Datadog dashboard

To import a pre-built Datadog dashboard, follow these steps:

  1. Download the Jitterbit Private Agent Datadog dashboard JSON file from the Harmony portal Downloads page.

  2. Select Dashboards > New Dashboard.

  3. In the Create a Dashboard dialog, enter a name in the Dashboard Name field, then click the New Dashboard button.

  4. Click the Configure button, then select Import dashboard JSON.

  5. Find the downloaded dashboard JSON file, then select it.

  6. To use the dashboard, select Dashboards > Dashboard list, enter Jitterbit Harmony Private Agent in the Search dashboards field, then select the imported dashboard.

Troubleshoot Datadog issues

To help solve any issues with Datadog, you can check the Datadog documentation, inspect log files, or run commands to manage users and permissions.

Datadog documentation

Datadog file locations

Log files

On Linux, Datadog log files can be found in the following location:

/var/log/datadog/

On Windows, open the Datadog Agent Manager application, then select the Log tab.

Configuration files

On Linux, the Datadog configuration file can be found in the following location:

/etc/datadog-agent/datadog.yaml

On Windows, open the Datadog Agent Manager application, then select the Settings tab.

In this file, you should check that you have correct values for the following keys:

  • api_key

  • site

  • hostname

  • tags

Datadog users and permissions commands

To add the dd-agent user to the root and jitterbit groups, run these commands:

usermod -a -G root dd-agent
usermod -a -G jitterbit dd-agent

To set Datadog configuration file ownership, run these commands:

chown dd-agent:dd-agent /etc/datadog-agent/conf.d/logs.d/conf.yaml
chown dd-agent:dd-agent /etc/datadog-agent/conf.d/logs.d
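
To confirm the permissions took effect, you can check the dd-agent user's groups and the ownership of the log configuration, then have the agent re-read its configuration. These are standard commands, shown here only as a convenience:

id dd-agent
ls -l /etc/datadog-agent/conf.d/logs.d/
sudo datadog-agent configcheck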

Install Metricbeat on a private agent host

Metricbeat prerequisites

Before you can install Metricbeat on a private agent host, you must install the following:

  • Elasticsearch

  • Kibana

Install Metricbeat and Filebeat

Metricbeat

To install Metricbeat on a private agent host, follow the Metricbeat instructions on the Elastic.co website.

To start Metricbeat when the host boots, follow the Metricbeat startup instructions on the Elastic.co website.

Filebeat

To install Filebeat on a private agent host, follow the Filebeat instructions on the Elastic.co website.

To start Filebeat when the host boots, follow the Filebeat startup instructions on the Elastic.co website.
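
On a systemd-based Linux host, the Elastic startup instructions typically amount to enabling and starting both services, for example:

sudo systemctl enable metricbeat filebeat
sudo systemctl start metricbeat filebeat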

Set the Kibana index lifecycle policy

  1. Open the Kibana web console at http://HOSTNAME:5601, where HOSTNAME is the hostname or IP address of the private agent host.
  2. Enter index lifecycle policies in the Elasticsearch search bar, then select the resulting page.
  3. Click the Create policy button.
  4. In the Create policy dialog, set the following values:
    • Policy name: private-agent-metrics-policy
  5. Turn on the Warm phase toggle, then set the following values:
    • Move data into phase when: 30 days
  6. Turn on the Cold phase toggle, then set the following values:
    • Move data into phase when: 90 days
  7. Click the Save policy button.
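
If you prefer the Elasticsearch API to the Kibana UI, an equivalent policy can be created with a request like the following sketch. It assumes Elasticsearch is reachable at HOSTNAME:9200 without authentication, and the hot-phase rollover values shown mirror typical Kibana defaults; adjust both for your deployment:

curl -X PUT "http://HOSTNAME:9200/_ilm/policy/private-agent-metrics-policy" \
  -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot":  { "actions": { "rollover": { "max_age": "30d", "max_primary_shard_size": "50gb" } } },
      "warm": { "min_age": "30d", "actions": {} },
      "cold": { "min_age": "90d", "actions": {} }
    }
  }
}'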

Create Kibana templates

Create a Kibana template (private-agent-metric-template)

  1. Enter index management in the Elasticsearch search bar, then select the resulting page.

  2. Select the Index Templates tab, then click the Create template button.

  3. On the Logistics page, enter values for the following fields:

    • Name: private-agent-metric-template

    • Index patterns: private-agent-metric-8.15-*

  4. Click the Next button.

  5. On the Component templates page, click the Next button.

  6. On the Index settings page, replace the contents of the Index settings field with the following:

    {
      "index": {
        "lifecycle": {
          "name": "private-agent-metrics-policy",
          "rollover_alias": "private-agent-metric-alias"
        },
        "number_of_shards": "1",
        "number_of_replicas": "1"
      }
    }
    
  7. Click the Next button.

  8. On the Mappings page, in the Mapped fields tab, add fields according to the following table (click the Add field button to add additional fields):

    Field type    Field name
    Keyword       fields.env
    Keyword       private-agent.group
    Keyword       private-agent.name
  9. Select the Advanced options tab, then set the following toggles to On:

    • Map numeric strings as numbers

    • Map date strings as dates

  10. Click the Next button.

  11. On the Aliases page, click the Next button.

  12. On the Review details page, click the Create template button.
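
The same template can also be created through the index templates API. The sketch below combines the index settings above with the keyword mappings from the table; the numeric_detection and date_detection options correspond to the two Advanced options toggles, and the same unauthenticated HOSTNAME:9200 endpoint is assumed:

curl -X PUT "http://HOSTNAME:9200/_index_template/private-agent-metric-template" \
  -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["private-agent-metric-8.15-*"],
  "template": {
    "settings": {
      "index": {
        "lifecycle": {
          "name": "private-agent-metrics-policy",
          "rollover_alias": "private-agent-metric-alias"
        },
        "number_of_shards": "1",
        "number_of_replicas": "1"
      }
    },
    "mappings": {
      "numeric_detection": true,
      "date_detection": true,
      "properties": {
        "fields": { "properties": { "env": { "type": "keyword" } } },
        "private-agent": {
          "properties": {
            "group": { "type": "keyword" },
            "name": { "type": "keyword" }
          }
        }
      }
    }
  }
}'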

Create a Kibana template (private-agent-filebeat-template)

  1. On the Index Templates tab, click the Create template button.
  2. On the Logistics page, enter values for the following fields:
    • Name: private-agent-filebeat-template
    • Index patterns: private-agent-filebeat-8.15-*
  3. Click the Next button.
  4. On the Component templates page, click the Next button.
  5. On the Index settings page, replace the contents of the Index settings field with the following:

    {
      "index": {
        "lifecycle": {
          "name": "private-agent-metrics-policy",
          "rollover_alias": "private-agent-metric-alias"
        },
        "number_of_shards": "1",
        "number_of_replicas": "1"
      }
    }
    
  6. Click the Next button.

  7. On the Mappings page, in the Mapped fields tab, add fields according to the following table (click the Add field button to add additional fields):

    Field type    Field name
    Keyword       message.fields.environment_name
    Keyword       message.fields.operation_name
    Keyword       message.fields.project_name
    Keyword       message.fields.status
    Keyword       private-agent.group
    Keyword       private-agent.name
  8. Select the Advanced options tab, then set the following toggles to On:

    • Map numeric strings as numbers
    • Map date strings as dates
  9. Click the Next button.
  10. On the Aliases page, click the Next button.
  11. On the Review details page, click the Create template button.

Create Kibana parsers

Create a Kibana Grok parser pipeline (Over Schedule)

  1. Enter ingest pipelines in the Elasticsearch search bar, then select the resulting page.
  2. Click the Create Pipeline button, and select New pipeline.
  3. Enter values for the following fields:
    • Name: private-agent-metric-pipeline
    • Description: Enter an optional description.
  4. In the Processors panel, click the Add a processor button.
  5. In the Add processor dialog, set the following fields:

    • Processor: Open the menu, then select Grok.
    • Field: message
    • Patterns: In the first pattern field, enter the following:

      %{GREEDYDATA:operation_timestamp} \[AgentMetric, Informative\] The requested scheduled Operation is already being processed. OperationId = %{NUMBER:over_running_operation_id:int} OperationName = "%{GREEDYDATA:over_running_operation_name}" \[SCHEDULE_IN_PROCESS\]
      
  6. Turn on Ignore failures for this processor.

  7. Click the Add processor button.
  8. In the list of processors, click the icon, then change the name to Over Schedule.

Create a Kibana Grok parser pipeline (Added Status)

  1. In the Processors panel, click the Add a processor button.
  2. In the Add processor dialog, set the following fields:

    • Processor: Open the menu, then select Grok.
    • Field: message
    • Patterns: In the first pattern field, enter the following:

      %{GREEDYDATA:timestamp} \[AgentMetric, Informative\] %{GREEDYDATA:status} to queue: OperationID = %{NUMBER:scheduled_operation_id:int} and OperationName = %{GREEDYDATA:scheduled_operation_name} and OperationInstanceGuid = %{GREEDYDATA:scheduled_operation_instance_guid}
      
  3. Turn on Ignore failures for this processor.

  4. Click the Add processor button.
  5. In the list of processors, click the icon, then change the name to Added Status.

Create a Kibana Grok parser pipeline (Finished Status)

  1. In the Processors panel, click the Add a processor button.
  2. In the Add processor dialog, set the following fields:

    • Processor: Open the menu, then select Grok.
    • Field: message
    • Patterns: In the first pattern field, enter the following:

      %{GREEDYDATA:status_ts} \[AgentMetric, Informative\] Operation changed to %{GREEDYDATA:status}: OperationID = %{NUMBER:scheduled_operation_id:int} and Operation Name = \"%{GREEDYDATA:scheduled_operation_name}\" and OperationInstanceGuid = %{GREEDYDATA:scheduled_operation_instance_guid}
      
  3. Turn on Ignore failures for this processor.

  4. Click the Add processor button.
  5. In the list of processors, click the icon, then change the name to Finished Status.

Create a Kibana Grok parser pipeline (Running Status)

  1. In the Processors panel, click the Add a processor button.
  2. In the Add processor dialog, set the following fields:

    • Processor: Open the menu, then select Grok.
    • Field: message
    • Patterns: In the first pattern field, enter the following:

      %{GREEDYDATA:status_ts} \[AgentMetric, Informative\] Operation %{GREEDYDATA:status}: OperationID = %{NUMBER:scheduled_operation_id:int} and Operation Name = \"%{GREEDYDATA:scheduled_operation_name}\" and OperationInstanceGuid = %{GREEDYDATA:scheduled_operation_instance_guid} and Status = %{NUMBER:scheduled_operation_status} and Duration = %{NUMBER:scheduled_operation_duration:float}
      
  3. Turn on Ignore failures for this processor.

  4. Click the Add processor button.
  5. In the list of processors, click the icon, then change the name to Running Status.

Set the order of processors

If the processors are not already in this order, rearrange the list so that they appear in the following order:

  1. Over Schedule
  2. Added Status
  3. Finished Status
  4. Running Status
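
After you save the pipeline, you can check that the Grok patterns extract the expected fields by simulating it against a sample document. The log line below is constructed to match the Over Schedule pattern and is illustrative only; the same unauthenticated HOSTNAME:9200 endpoint is assumed:

curl -X POST "http://HOSTNAME:9200/_ingest/pipeline/private-agent-metric-pipeline/_simulate" \
  -H 'Content-Type: application/json' -d'
{
  "docs": [
    {
      "_source": {
        "message": "2024-01-01 12:00:00 [AgentMetric, Informative] The requested scheduled Operation is already being processed. OperationId = 12345 OperationName = \"Example Operation\" [SCHEDULE_IN_PROCESS]"
      }
    }
  ]
}'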

Configure the Metricbeat agent

The Jitterbit private agent includes a script to configure Metricbeat. To use it when installing for the first time, or when upgrading from a private agent 11.34-based installation, follow these steps:

  1. Run the following in a terminal or PowerShell:

    Docker and Linux:

      cd /opt/jitterbit/scripts/
      ./configure_elasticsearch.sh

    Windows:

      cd "C:\Program Files\Jitterbit Agent\scripts\"
      .\configure_elasticsearch.ps1
    
  2. Read and respond to the prompts.
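
After the script completes, you can confirm the Beats configuration and their connectivity to Elasticsearch with the standard test commands (Linux shown):

sudo metricbeat test config
sudo metricbeat test output
sudo filebeat test config
sudo filebeat test output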

Import an Elasticsearch dashboard

To import a pre-built Elasticsearch dashboard, follow these steps:

  1. Download the Jitterbit Private Agent Elasticsearch dashboard JSON file from the Harmony portal Downloads page.
  2. Enter kibana saved objects in the Elasticsearch search bar, then select the resulting page.
  3. Click the Import button.
  4. In the Import saved objects dialog, click Import, find the downloaded dashboard JSON file, then select it.
  5. Under Import options, select Check for existing objects, then select Automatically overwrite conflicts.
  6. Click the Import button.
  7. Click the Done button.
  8. To use the dashboard, enter dashboards in the Elasticsearch search bar, select the resulting page, then select Jitterbit Harmony Private Agent Dashboard.

Troubleshoot Elasticsearch issues

To help solve any issues with Elasticsearch components, you can check the Elasticsearch documentation, inspect log files, or run diagnostic commands.

Elasticsearch documentation

Elasticsearch log file locations

  • /var/log/metricbeat
  • /var/log/filebeat

Elasticsearch diagnostic commands

To check the Filebeat connection to Elasticsearch, run the following command:

filebeat test output
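
The same check works for Metricbeat:

metricbeat test output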