Observability (Beta) in Jitterbit private agents 11.34 to 11.36
Caution
This feature is currently released as a beta version, supported on private agents version 11.34.0.75 to 11.36. To provide feedback, contact the Jitterbit Product Team.
Introduction
You can remotely monitor a private agent's performance and behavior in real time with either of the following supported observability platforms:
- Datadog
- Elastic (Metricbeat, Filebeat, and Kibana)
Before you can start monitoring private agents running on Docker, Linux, or Windows, you must install your chosen observability platform's agent on every private agent you want to monitor, and configure its metrics.
Note
On Linux and Windows, the observability agents start automatically when the host starts. On Docker, you must manually start them using the following commands:
sudo datadog-agent start
sudo metricbeat start
sudo filebeat start
Install Datadog on a Jitterbit private agent host
Datadog prerequisites
To install Datadog on a private agent host, you must have the following:
- Private agent 11.34 to 11.36 installed and running.
- A Datadog account.
- A super-user (root) account on Linux, or an Administrator account on Windows.
Important
Run all commands as this user.
Install the Datadog agent
To install the Datadog agent on a private agent host, follow these steps:
1. Log into your Datadog account.
2. From the main menu, select Integrations > Agent.
3. On the Agent Installation Instructions page, select your private agent host type.
4. Click the API Key button. The Select an API Key dialog opens.
5. If you don't have any API keys, click the Create New button to create one. Otherwise, select your API key entry, then click the Use API Key button.
6. In the Agent Installation Command field, click the Click to copy icon to copy the command to your clipboard.
7. Open a terminal or PowerShell on the private agent host, paste the copied command, then press Enter to run it.
8. Close the terminal or PowerShell.
Configure the Datadog agent
The Jitterbit private agent software includes a script to configure Datadog. To use it, follow these steps:
1. For Docker private agents, the value for the hostname property in the /etc/datadog-agent/datadog.yaml file should be set to the host's hostname. If it is not, set it manually.
2. Run the following in a new terminal or PowerShell on the private agent host.

   On Docker or Linux:
   cd /opt/jitterbit/scripts/
   ./configure_datadog.sh

   On Windows:
   cd "C:\Program Files\Jitterbit Agent\scripts\"
   .\configure_datadog.ps1

3. Read and respond to the prompts.
4. To confirm your Datadog agent is working, log into your Datadog account, then select Integrations > Fleet Automation from the main menu.
Add a Grok parser pipeline
1. Log into your Datadog account.
2. Select Logs > Pipelines.
3. Click the New Pipeline button.
4. Enter values for the following fields:
   - Filter: service:JitterbitAgentMetric
   - Name: agent.operation.schedule.pipeline
   - Description: (Optional) Enter a description.
5. Click the Create button.
Add a parser (operation.scheduler.processor)
1. Expand the new (agent.operation.schedule.pipeline) pipeline, then click Add Processor.
2. In the Create Grok Parser dialog, enter values for the following fields:
   - Select the processor type: Grok Parser
   - Name the processor: operation.scheduler.processor
   - Log samples: Copy and paste the following log samples into the field. After each one, click Add to create new fields:

     Log sample 1: 2024-10-15 22:19:01 [AgentMetric, Informative] Added to queue: OperationID = 12345 and OperationName = "test-operation" and OperationInstanceGuid = a1b2c3-2ef3-4667-833f-2e6c50ae613b and Status = 1
     Log sample 2: 2024-10-15 22:19:03 [AgentMetric, Informative] Operation changed to Running: OperationID = 12345 and Operation Name = "test-operation" and OperationInstanceGuid = a1b2c3-2ef3-4667-833f-2e6c50ae613b and Status = 3
     Log sample 3: 2024-10-15 22:16:07 [AgentMetric, Informative] Operation changed to Running: OperationID = 12345 and Operation Name = "test-operation" and OperationInstanceGuid = a1b2c3-2ef3-4667-833f-2e6c50ae613b and Status = 3
     Log sample 4: 2024-10-15 22:20:04 [AgentMetric, Informative] Operation finished: OperationID = 12345 and Operation Name = "test operation" and OperationInstanceGuid = a1b2c3-2ef3-4667-833f-2e6c50ae613b and Status = 5 and Duration = 1.1111
     Log sample 5: 2024-10-15 22:19:03 [AgentMetric, Informative] Operation finished: OperationID = 12345 and Operation Name = "test-operation" and OperationInstanceGuid = 90c34115-2ef3-4667-833f-2e6c50ae613b and Status = 5 and Duration = 2.2222

   - Define parsing rules: Copy and paste the following parsing rules into the field:

     submittedOperationRule %{date("yyyy-MM-dd HH:mm:ss"):status_date}\s+\[AgentMetric,\s+Informative\]\s+%{notSpace:status}\s+to\s+queue\:\s+OperationID\s+\=\s+%{integer:scheduled_operation_id}\s+and\s+OperationName\s+\=\s+"%{data:scheduled_operation_name}"\s+and\s+OperationInstanceGuid\s+\=\s+%{notSpace:submitted_operation_instance_guid}\s+and\s+Status\s+\=\s+%{number:operation_status}
     runningOperationRule %{date("yyyy-MM-dd HH:mm:ss"):status_date}\s+\[AgentMetric,\s+Informative\]\s+Operation\s+changed\s+to\s+%{notSpace:status}\:\s+OperationID\s+\=\s+%{notSpace:scheduled_operation_id}\s+and\s+Operation Name\s+\=\s+"%{data:scheduled_operation_name}"\s+and\s+OperationInstanceGuid\s+\=\s+%{notSpace:running_operation_instance_guid}\s+and\s+Status\s+\=\s+%{number:operation_status}
     completedOperationRule %{date("yyyy-MM-dd HH:mm:ss"):status_date}\s+\[AgentMetric,\s+Informative\]\s+Operation\s+%{notSpace:status}\:\s+OperationID\s+\=\s+%{notSpace:scheduled_operation_id}\s+and\s+Operation Name\s+\=\s+"%{data:scheduled_operation_name}"\s+and\s+OperationInstanceGuid\s+\=\s+%{notSpace:running_operation_instance_guid}\s+and\s+Status\s+\=\s+%{number:operation_status}\s+and\s+Duration\s+\=\s+%{number:duration}

3. Click the Create button.
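The capture logic of these rules can be checked offline before saving them in Datadog. The following is a minimal sketch, assuming a plain-Python regex equivalent of the submittedOperationRule above (the regex and group names mirror the Grok rule but are not the Grok syntax itself), run against log sample 1:

```python
import re

# Hypothetical Python equivalent of submittedOperationRule, for offline testing only.
SUBMITTED = re.compile(
    r'(?P<status_date>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+'
    r'\[AgentMetric,\s+Informative\]\s+'
    r'(?P<status>\S+)\s+to\s+queue:\s+'
    r'OperationID\s+=\s+(?P<scheduled_operation_id>\d+)\s+and\s+'
    r'OperationName\s+=\s+"(?P<scheduled_operation_name>.*?)"\s+and\s+'
    r'OperationInstanceGuid\s+=\s+(?P<submitted_operation_instance_guid>\S+)\s+and\s+'
    r'Status\s+=\s+(?P<operation_status>\d+)'
)

# Log sample 1 from the step above
line = ('2024-10-15 22:19:01 [AgentMetric, Informative] Added to queue: '
        'OperationID = 12345 and OperationName = "test-operation" and '
        'OperationInstanceGuid = a1b2c3-2ef3-4667-833f-2e6c50ae613b and Status = 1')

m = SUBMITTED.match(line)
print(m.group('scheduled_operation_id'), m.group('status'))  # → 12345 Added
```

If the match fails or a group comes back empty for one of your own agent log lines, the corresponding Grok rule will likely fail on it too.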
Add a parser (operation.over.schedule.processor)
1. In the expanded pipeline, under the existing parser, click Add Processor.
2. In the Create Grok Parser dialog, enter values for the following fields:
   - Select the processor type: Grok Parser
   - Name the processor: operation.over.scheduler.processor
   - Log samples: Copy and paste the following log samples into the field. After each one, click Add to create new fields:

     Log sample 1: 2024-10-15 22:19:01 [AgentMetric, Informative] The requested scheduled Operation is already being processed. OperationId = 12345 OperationName = "test operation" [SCHEDULE_IN_PROCESS]
     Log sample 2: 2024-10-03 22:21:02 [AgentMetric, Informative] The requested scheduled Operation is already being processed. OperationId = 12345 OperationName = "test-operation" [SCHEDULE_IN_PROCESS]

   - Define parsing rules: Copy and paste the following parsing rule into the field:

     operationOverScheduleRule %{date("yyyy-MM-dd HH:mm:ss"):schedule_ts}\s+\[AgentMetric,\s+Informative\]\s+The\s+requested\s+scheduled\s+Operation\s+is\s+already\s+being\s+processed\.\s+OperationId\s+\=\s+%{integer:over_running_operation_id}\s+OperationName\s+\=\s+\"%{data:over_running_operation_name}\"\s+\[SCHEDULE_IN_PROCESS\]

3. Click the Create button.
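As with the first parser, this rule can be sanity-checked offline. Below is a sketch, assuming a plain-Python regex equivalent of the over-schedule rule (the group names mirror the Grok rule but the regex itself is an illustration, not the Grok syntax), run against log sample 1:

```python
import re

# Hypothetical Python equivalent of the over-schedule Grok rule, for offline testing only.
OVER_SCHEDULE = re.compile(
    r'(?P<schedule_ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+'
    r'\[AgentMetric,\s+Informative\]\s+'
    r'The\s+requested\s+scheduled\s+Operation\s+is\s+already\s+being\s+processed\.\s+'
    r'OperationId\s+=\s+(?P<over_running_operation_id>\d+)\s+'
    r'OperationName\s+=\s+"(?P<over_running_operation_name>.*?)"\s+'
    r'\[SCHEDULE_IN_PROCESS\]'
)

# Log sample 1 from the step above
line = ('2024-10-15 22:19:01 [AgentMetric, Informative] The requested scheduled '
        'Operation is already being processed. OperationId = 12345 '
        'OperationName = "test operation" [SCHEDULE_IN_PROCESS]')

m = OVER_SCHEDULE.match(line)
print(m.group('over_running_operation_id'), m.group('over_running_operation_name'))
```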
Create facets
To define Datadog facets, follow these steps:
1. Select Logs > Explorer.
2. Under the Search facets bar, click Add (Add a facet).
3. In the Add facet dialog's Path field, enter the text shown below, then click the Add button. Repeat for each item in the following list:
   - @over_running_operation_id
   - @scheduled_operation_id
   - @operation_status
   - @scheduled_operation_name
   - @running_operation_instance_guid
Create measures
1. On the Logs > Explorer page, under the Search facets bar, click Add (Add a facet).
2. In the Add facet dialog, select the Measure tab.
3. In the Path field, enter the text shown below, then click the Add button:
   @duration
Create metrics
To define metrics, follow the steps below for each metric.
Tip
You can also use logs to create Datadog metrics.
1. Create the agent.operation.count.by.status metric:
   1. Select Logs > Generate Metrics.
   2. Click the New Metric button.
   3. In the Generate Metric dialog, set values as follows:
      - Set Metric Name: agent.operation.count.by.status
      - Define Query: service:JitterbitAgentMetric
   4. Click the group by menu, and click each of the following to add them to the list:
      - @operation_status
      - @scheduled_operation_id
      - @scheduled_operation_name
      - @agentgroup
   5. Click the Create Metric button.
2. Create the agent.operation.duration metric:
   1. Click the New Metric button.
   2. In the Generate Metric dialog, set values as follows:
      - Set Metric Name: agent.operation.duration
      - Define Query: service:JitterbitAgentMetric
   3. Click the Count menu, then select duration (@duration).
   4. Click the group by menu, and click each of the following to add them to the list:
      - @scheduled_operation_id
      - @scheduled_operation_name
      - @agentgroup
   5. Click the Create Metric button.
3. Create the agent.operation.schedule metric:
   1. Click the New Metric button.
   2. In the Generate Metric dialog, set values as follows:
      - Set Metric Name: agent.operation.schedule
      - Define Query: service:JitterbitAgentMetric
   3. Click the group by menu, and click each of the following to add them to the list:
      - @scheduled_operation_id
      - @scheduled_operation_name
      - @status
      - @agentgroup
   4. Click the Create Metric button.
Import a Datadog dashboard
To import a pre-built Datadog dashboard, follow these steps:
1. Download the Jitterbit Private Agent Datadog dashboard JSON file from the Harmony portal Downloads page.
2. Select Dashboards > New Dashboard.
3. In the Create a Dashboard dialog, enter a name in the Dashboard Name field, then click the New Dashboard button.
4. Click the Configure button, then select Import dashboard JSON.
5. Find the downloaded dashboard JSON file, then select it.
6. To use the dashboard, select Dashboards > Dashboard list, enter Jitterbit Harmony Private Agent in the Search dashboards field, then select the imported dashboard.
Troubleshoot Datadog issues
To help solve any issues with Datadog, you can check the Datadog documentation, inspect log files, or run commands to manage users and permissions.
Datadog documentation
Datadog file locations
Log files
On Linux, Datadog log files can be found in the following location:
/var/log/datadog/
On Windows, open the Datadog Agent Manager application, then select the Log tab.
Configuration files
On Linux, the Datadog configuration file can be found in the following location:
/etc/datadog-agent/datadog.yaml
On Windows, open the Datadog Agent Manager application, then select the Settings tab.
In this file, check that you have correct values for the following keys:
- api_key
- site
- hostname
- tags
Datadog users and permissions commands
To add the dd-agent user to the required groups, run these commands:
usermod -a -G root dd-agent
usermod -a -G jitterbit dd-agent
To set Datadog configuration file ownership, run these commands:
chown dd-agent:dd-agent /etc/datadog-agent/conf.d/logs.d/conf.yaml
chown dd-agent:dd-agent /etc/datadog-agent/conf.d/logs.d
Install Metricbeat on a private agent host
Metricbeat prerequisites
Before you can install Metricbeat on a private agent host, you must install the following:
Install Metricbeat and Filebeat
Metricbeat
To install Metricbeat on a private agent host, follow the Metricbeat instructions on the Elastic.co website.
To start Metricbeat when the host boots, follow the Metricbeat startup instructions on the Elastic.co website.
Filebeat
To install Filebeat on a private agent host, follow the Filebeat instructions on the Elastic.co website.
To start Filebeat when the host boots, follow the Filebeat startup instructions on the Elastic.co website.
Set the Kibana index lifecycle policy
1. Open the Kibana web console at http://HOSTNAME:5601, where HOSTNAME is the hostname or IP address of the private agent host.
2. Enter index lifecycle policies in the Elasticsearch search bar, then select the resulting page.
3. Click the Create policy button.
4. In the Create policy dialog, set the following values:
   - Policy name: private-agent-metrics-policy
5. Turn on the Warm phase toggle, then set the following values:
   - Move data into phase when: 30 days
6. Turn on the Cold phase toggle, then set the following values:
   - Move data into phase when: 90 days
7. Click the Save policy button.
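The same policy can also be created without the Kibana UI via a PUT request to the Elasticsearch _ilm/policy API. The sketch below shows a minimal request body under that assumption; the empty "actions" objects are placeholders, and any rollover or shrink actions you need would go inside them:

```python
import json

# Sketch of a request body for:
#   PUT _ilm/policy/private-agent-metrics-policy
# min_age values mirror the Warm (30 days) and Cold (90 days) phases set in Kibana.
policy = {
    "policy": {
        "phases": {
            "hot": {"actions": {}},                     # active indexing
            "warm": {"min_age": "30d", "actions": {}},  # move to warm after 30 days
            "cold": {"min_age": "90d", "actions": {}},  # move to cold after 90 days
        }
    }
}

# Serialize for use as the HTTP request body
print(json.dumps(policy, indent=2))
```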
Create Kibana templates
Create a Kibana template (private-agent-metric-template)
1. Enter index management in the Elasticsearch search bar, then select the resulting page.
2. Select the Index Templates tab, then click the Create template button.
3. On the Logistics page, enter values for the following fields:
   - Name: private-agent-metric-template
   - Index patterns: private-agent-metric-8.15-*
4. Click the Next button.
5. On the Component templates page, click the Next button.
6. On the Index settings page, replace the contents of the Index settings field with the following:

   {
     "index": {
       "lifecycle": {
         "name": "private-agent-metrics-policy",
         "rollover_alias": "private-agent-metric-alias"
       },
       "number_of_shards": "1",
       "number_of_replicas": "1"
     }
   }

7. Click the Next button.
8. On the Mappings page, in the Mapped fields tab, add fields according to the following table (click the Add field button to add additional fields):

   Field type | Field name
   Keyword    | fields.env
   Keyword    | private-agent.group
   Keyword    | private-agent.name

9. Select the Advanced options tab, then set the following toggles to On:
   - Map numeric strings as numbers
   - Map date strings as dates
10. Click the Next button.
11. On the Aliases page, click the Next button.
12. On the Review details page, click the Create template button.
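A malformed Index settings snippet is rejected by Kibana with a generic error, so it can help to validate the JSON locally first. This is a quick sanity check that parses the exact snippet from the steps above:

```python
import json

# Parse the Index settings JSON from the template steps to confirm it is valid
# before pasting it into Kibana's Index settings field.
settings = json.loads('''
{
  "index": {
    "lifecycle": {
      "name": "private-agent-metrics-policy",
      "rollover_alias": "private-agent-metric-alias"
    },
    "number_of_shards": "1",
    "number_of_replicas": "1"
  }
}
''')

# Spot-check the values the lifecycle policy depends on
print(settings["index"]["lifecycle"]["name"])  # → private-agent-metrics-policy
```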
Create a Kibana template (private-agent-filebeat-template)
1. On the Index Templates tab, click the Create template button.
2. On the Logistics page, enter values for the following fields:
   - Name: private-agent-filebeat-template
   - Index patterns: private-agent-filebeat-8.15-*
3. Click the Next button.
4. On the Component templates page, click the Next button.
5. On the Index settings page, replace the contents of the Index settings field with the following:

   {
     "index": {
       "lifecycle": {
         "name": "private-agent-metrics-policy",
         "rollover_alias": "private-agent-metric-alias"
       },
       "number_of_shards": "1",
       "number_of_replicas": "1"
     }
   }

6. Click the Next button.
7. On the Mappings page, in the Mapped fields tab, add fields according to the following table (click the Add field button to add additional fields):

   Field type | Field name
   Keyword    | fields.env
   Keyword    | over_running_operation_name
   Keyword    | private-agent.group
   Keyword    | private-agent.name
   Keyword    | scheduled_operation_name
   Keyword    | status

8. Select the Advanced options tab, then set the following toggles to On:
   - Map numeric strings as numbers
   - Map date strings as dates
9. Click the Next button.
10. On the Aliases page, click the Next button.
11. On the Review details page, click the Create template button.
Create Kibana parsers
Create a Kibana Grok parser pipeline (Over Schedule)
1. Enter ingest pipelines in the Elasticsearch search bar, then select the resulting page.
2. Click the Create Pipeline button, and select New pipeline.
3. Enter values for the following fields:
   - Name: private-agent-metric-pipeline
   - Description: Enter an optional description.
4. In the Processors panel, click the Add a processor button.
5. In the Add processor dialog, set the following fields:
   - Processor: Open the menu, then select Grok.
   - Field: message
   - Patterns: In the first field, enter the following:

     %{GREEDYDATA:operation_timestamp} \[AgentMetric, Informative\] The requested scheduled Operation is already being processed. OperationId = %{NUMBER:over_running_operation_id:int} OperationName = "%{GREEDYDATA:over_running_operation_name}" \[SCHEDULE_IN_PROCESS\]

6. Turn on Ignore failures for this processor.
7. Click the Add processor button.
8. In the list of processors, click the edit icon, then change the name to Over Schedule.
Create a Kibana Grok parser pipeline (Added Status)
1. In the Processors panel, click the Add a processor button.
2. In the Add processor dialog, set the following fields:
   - Processor: Open the menu, then select Grok.
   - Field: message
   - Patterns: In the first field, enter the following:

     %{GREEDYDATA:timestamp} \[AgentMetric, Informative\] %{GREEDYDATA:status} to queue: OperationID = %{NUMBER:scheduled_operation_id:int} and OperationName = %{GREEDYDATA:scheduled_operation_name} and OperationInstanceGuid = %{GREEDYDATA:scheduled_operation_instance_guid}

3. Turn on Ignore failures for this processor.
4. Click the Add processor button.
5. In the list of processors, click the edit icon, then change the name to Added Status.
Create a Kibana Grok parser pipeline (Finished Status)
1. In the Processors panel, click the Add a processor button.
2. In the Add processor dialog, set the following fields:
   - Processor: Open the menu, then select Grok.
   - Field: message
   - Patterns: In the first field, enter the following:

     %{GREEDYDATA:status_ts} \[AgentMetric, Informative\] Operation changed to %{GREEDYDATA:status}: OperationID = %{NUMBER:scheduled_operation_id:int} and Operation Name = \"%{GREEDYDATA:scheduled_operation_name}\" and OperationInstanceGuid = %{GREEDYDATA:scheduled_operation_instance_guid}

3. Turn on Ignore failures for this processor.
4. Click the Add processor button.
5. In the list of processors, click the edit icon, then change the name to Finished Status.
Create a Kibana Grok parser pipeline (Running Status)
1. In the Processors panel, click the Add a processor button.
2. In the Add processor dialog, set the following fields:
   - Processor: Open the menu, then select Grok.
   - Field: message
   - Patterns: In the first field, enter the following:

     %{GREEDYDATA:status_ts} \[AgentMetric, Informative\] Operation %{GREEDYDATA:status}: OperationID = %{NUMBER:scheduled_operation_id:int} and Operation Name = \"%{GREEDYDATA:scheduled_operation_name}\" and OperationInstanceGuid = %{GREEDYDATA:scheduled_operation_instance_guid} and Status = %{NUMBER:scheduled_operation_status} and Duration = %{NUMBER:scheduled_operation_duration:float}

3. Turn on Ignore failures for this processor.
4. Click the Add processor button.
5. In the list of processors, click the edit icon, then change the name to Running Status.
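Before saving the pipeline, you can check a Duration-capturing pattern like the one above against a real log line. Below is a sketch, assuming a plain-Python regex that mirrors the Grok pattern's capture groups (it is an illustration, not Grok syntax), run against the "Operation finished" log sample from the Datadog section:

```python
import re

# Hypothetical Python equivalent of the Duration-capturing Grok pattern, for offline testing.
DURATION_PATTERN = re.compile(
    r'(?P<status_ts>.*?) \[AgentMetric, Informative\] Operation (?P<status>.*?): '
    r'OperationID = (?P<scheduled_operation_id>\d+) and '
    r'Operation Name = "(?P<scheduled_operation_name>.*?)" and '
    r'OperationInstanceGuid = (?P<guid>\S+) and '
    r'Status = (?P<scheduled_operation_status>\d+) and '
    r'Duration = (?P<scheduled_operation_duration>[\d.]+)'
)

# "Operation finished" log sample from the Datadog parser section
line = ('2024-10-15 22:20:04 [AgentMetric, Informative] Operation finished: '
        'OperationID = 12345 and Operation Name = "test operation" and '
        'OperationInstanceGuid = a1b2c3-2ef3-4667-833f-2e6c50ae613b and '
        'Status = 5 and Duration = 1.1111')

m = DURATION_PATTERN.match(line)
print(m.group('status'), float(m.group('scheduled_operation_duration')))  # → finished 1.1111
```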
Set the order of processors
If necessary, rearrange the list of processors into the following order:
1. Over Schedule
2. Added Status
3. Finished Status
4. Running Status
Configure the Metricbeat agent
The Jitterbit private agent includes a script to configure Metricbeat. To use it, follow these steps:
1. Run the following in a terminal or PowerShell.

   On Docker or Linux:
   cd /opt/jitterbit/scripts/
   ./configure_elasticsearch.sh

   On Windows:
   cd "C:\Program Files\Jitterbit Agent\scripts\"
   .\configure_elasticsearch.ps1

2. Read and respond to the prompts.
Import an Elasticsearch dashboard
To import a pre-built Elasticsearch dashboard, follow these steps:
1. Download the Jitterbit Private Agent Elasticsearch dashboard JSON file from the Harmony portal Downloads page.
2. Enter kibana saved objects in the Elasticsearch search bar, then select the resulting page.
3. Click the Import button.
4. In the Import saved objects dialog, click Import, find the downloaded dashboard JSON file, then select it.
5. Under Import options, select Check for existing objects with Automatically overwrite conflicts.
6. Click the Import button.
7. Click the Done button.
8. To use the dashboard, enter dashboards in the Elasticsearch search bar, select the resulting page, then select Jitterbit Harmony Private Agent Dashboard.
Troubleshoot Elasticsearch issues
To help solve any issues with Elasticsearch components, you can check the Elasticsearch documentation, inspect log files, or run diagnostic commands.
Elasticsearch documentation
- Metricbeat documentation on the Elastic.co website
- Filebeat documentation on the Elastic.co website
Elasticsearch log file locations
- /var/log/metricbeat
- /var/log/filebeat
Elasticsearch diagnostic commands
To check the Filebeat connection to Elasticsearch, run the following command:
filebeat test output