
Operation options in Jitterbit Integration Studio

Introduction

Configure operation options to control timeouts, logging, and data processing. Most operations work well with default settings, but you can customize them for specific needs.

Access operation options

You can access the Settings option for operations from these locations:

After the operation settings screen is open, select the Options tab.


Configure operation options

The following sections describe each operation option:


Operation time out

Set how long the operation runs before it gets canceled. The default is 2 hours, which works for most operations.

You might want to adjust this setting for these reasons:

  • Increase the timeout for large datasets that take longer to process.

  • Decrease the timeout for time-sensitive operations that must complete quickly.

Enter a number and select Seconds, Minutes, or Hours from the dropdown.

Note

Operations triggered by API Manager APIs ignore this setting on cloud agents. For private agents, enable EnableAPITimeout in the private agent configuration file to have the Operation Time Out setting apply to operations triggered by APIs.
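As a sketch, the private agent setting might look like this in the agent configuration file (jitterbit.conf). The [OperationEngine] section name is an assumption; check the configuration file documentation for your agent version:

```ini
; jitterbit.conf (private agent configuration file)
[OperationEngine]
EnableAPITimeout=true
```

Restart the private agent after changing the configuration file so the setting takes effect.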

What to log

Choose what information appears in operation logs:

  • Everything: Logs all operation activity (recommended).
  • Errors Only: Logs only operations with an error-type status (such as Error, SOAP Fault, or Success with Child Error). Use this setting if you have performance issues and don't need detailed logs. Successful child operations are not logged. Parent (root-level) operations are always logged, because logging is required for them to function properly.

Enable Debug Mode Until

Turn on detailed logging for troubleshooting. Select a date up to two weeks from today. Debug mode automatically turns off on this date.

Warning

On cloud agent groups, the duration of this setting is unreliable. Logs may stop being generated before the end of the selected time period.

When you enable debug mode for operations with child operations, you can apply the same setting to all child operations using the Also Apply to Child Operations checkbox.

Debug logging generates different types of logs based on your agent type:

Debug log files (private agents only)

Debug log files for detailed troubleshooting. These files are accessible directly on private agents and are downloadable through the Management Console Agents and Runtime Operations pages. Debug logging can also be enabled for the entire project from the private agent itself (see Operation debug logging).

Warning

Debug mode creates large log files. Use only during testing, not in production.

Component input and output (cloud and private agents)

Request and response data, kept for 30 days and accessed through the Management Console Runtime Operations page.

Caution

Component input and output data is always logged to the Harmony cloud, even if cloud logging is disabled. To stop this on private agents, set verbose.logging.enable=false in the configuration file under [VerboseLogging].

Debug logs contain all request and response data, including sensitive information such as passwords and personally identifiable information (PII). This data appears in clear text in Harmony cloud logs for 30 days.

API operation logs (cloud and private agents)

Logs for successful API operations (configured for custom APIs or OData services). By default, only API operations with errors are logged in the operation logs.

Run success operation even if there are no matching source files

This option forces an operation to report success even when it finds no matching source files to process, so that any operations triggered to run On Success of this operation still run regardless of the initial operation's outcome. It applies only when the initial operation contains a source activity for one of these connectors:

By default, any On Success operations run only if they have a matching source file to process. This option can be useful for setting up later parts of a project without requiring success of a dependent operation.

Note

The AlwaysRunSuccessOperation setting in the private agent configuration file overrides this option.
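For reference, the override might look like this in the private agent configuration file (jitterbit.conf); the [OperationEngine] section name is an assumption, so verify it against your agent version's documentation:

```ini
; jitterbit.conf (private agent configuration file)
[OperationEngine]
AlwaysRunSuccessOperation=1
```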

Enable Chunking

Chunking breaks large datasets into smaller pieces. This makes processing faster and helps meet API record limits. To enable chunking, your operation must contain a transformation or an activity from one of these connectors:

Use chunking in these situations:

  • You process large datasets with thousands of records.
  • You use web services with record limits. For example, Salesforce allows only 200 records per call.
  • You want to use multiple CPU cores for parallel processing.

When a Salesforce, Salesforce Service Cloud, or ServiceMax activity is in the operation, chunking is enabled automatically.

When this setting is enabled, configure these fields:

  • Chunk Size: The number of records in each chunk. Default is 1 for most operations and 200 for Salesforce operations.

    Note

    When you use a (Salesforce, Salesforce Service Cloud, or ServiceMax) bulk activity, change this default to a much larger number, such as 10,000.

  • Number of Records per File: The number of records in each target file. Default is 0, which means no limit.

  • Max Number of Threads: The number of processing threads that run at the same time. Default is 1 for most operations and 2 for Salesforce operations.

Warning

Chunking affects how global and project variables work. Only changes from the first thread are preserved. See detailed chunking information below.

Salesforce bulk operation options

The following options appear only for Salesforce, Salesforce Service Cloud, and ServiceMax bulk operations (except Bulk Query operations):


  • Write success records to: Choose where to send successful records after the bulk operation completes. Select from configured file-based activities: HTTP, API, FTP, File Share, Local Storage, Temporary Storage, or Variable. Default: None.

  • Write failure records to: Choose where to send failed records after the bulk operation completes. Select from configured file-based activities: HTTP, API, FTP, File Share, Local Storage, Temporary Storage, or Variable. Default: None.

    Important

    When you use Variable activities, only operations in the same operation chain can access the variable value during runtime.

  • Send success records to: Choose an email notification to receive successful records. Select from configured email notifications. Default: None.

  • Send failure records to: Choose an email notification to receive failed records. Select from configured email notifications. Default: None.

Note

File-based activities and email notifications selected in these options don't need to be part of an existing deployed operation. Integration Studio will automatically deploy and manage these components when selected.

Detailed chunking information

Chunking is used to split the source data into multiple chunks based on the configured chunk size. The chunk size is the number of source records (nodes) for each chunk. The transformation is then performed on each chunk separately, with each source chunk producing one target chunk. The resulting target chunks combine to produce the final target.

Chunking can be used only if records are independent and from a non-LDAP source. We recommend using as large a chunk size as possible, making sure that the data for one chunk fits into available memory. For additional methods to limit the amount of memory a transformation uses, see Transformation processing.

Warning

Using chunking affects the behavior of global and project variables. See Use variables with chunking below.

API limitations

Many web service APIs (SOAP/REST) have size limitations. For example, a Salesforce-based upsert accepts only 200 records for each call. With sufficient memory, you could configure an operation to use a chunk size of 200. The source would be split into chunks of 200 records each, and each transformation would call the web service once with a 200-record chunk. This would be repeated until all records have been processed. The resulting target files would then be combined. (Note that you could also use Salesforce-based bulk activities to avoid the use of chunking.)
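As a conceptual sketch (plain Python, not Jitterbit Script), splitting a source into API-sized chunks works like this:

```python
def batches(records, size=200):
    # Yield successive chunks of at most `size` records,
    # matching an API limit such as Salesforce's 200-record upsert.
    for i in range(0, len(records), size):
        yield records[i:i + size]

# 1,000 source records become 5 web service calls of 200 records each.
calls = list(batches(list(range(1000))))
print(len(calls), len(calls[0]))  # 5 200
```

Each yielded batch corresponds to one chunk, and each chunk produces one web service call whose results are later combined into the final target.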

Parallel processing

If you have a large source and a multi-CPU computer, chunking can be used to split the source for parallel processing. Since each chunk is processed in isolation, several chunks can be processed in parallel. This applies only if the source records are independent of each other at the chunk node level. Web services can be called in parallel using chunking, improving performance.
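The parallel case can be sketched the same way (a conceptual Python illustration; the Max Number of Threads field plays the role of max_workers):

```python
from concurrent.futures import ThreadPoolExecutor

def transform(chunk):
    # Stand-in for the per-chunk transformation; chunks must be
    # independent of each other for parallel processing to be safe.
    return [x * 2 for x in chunk]

records = list(range(10))
chunks = [records[i:i + 5] for i in range(0, len(records), 5)]

# Process chunks in parallel, then combine the per-chunk targets
# into the final target, preserving chunk order.
with ThreadPoolExecutor(max_workers=2) as pool:
    target = [row for chunk in pool.map(transform, chunks) for row in chunk]
print(target == [x * 2 for x in records])  # True
```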

When using chunking on an operation where the target is a database, note that the target data is first written to numerous temporary files (one for each chunk). These files are then combined to one target file, which is sent to the database for insert/update. If you set the Jitterbit variable jitterbit.target.db.commit_chunks to 1 or true when chunking is enabled, each chunk is instead committed to the database as it becomes available. This can improve performance significantly as the database insert/updates are performed in parallel.
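For example, the variable can be set in a script that runs before the chunked operation (a Jitterbit Script sketch; the values 1 and true are equivalent here):

```
<trans>
$jitterbit.target.db.commit_chunks = true;
</trans>
```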

Use variables with chunking

As chunking can invoke multi-threading, its use can affect the behavior of variables that are not shared between the threads.

Global and project variables are segregated between the chunk-processing threads; although the output data is combined, changes to these variables are not. Only changes made in the first thread are preserved at the end of the transformation.

For example, if an operation — with chunking and multiple threads — has a transformation that changes a global variable, the global variable's value after the operation ends is that from the first thread. Any changes to the variable in other threads are independent and are discarded when the operation completes.

These global variables are passed to the other threads by value rather than by reference, ensuring that any changes to the variables are not reflected in other threads or operations. This is similar to the RunOperation function when in asynchronous mode.
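The by-value behavior can be illustrated in Python (a conceptual sketch, not Jitterbit's implementation):

```python
import copy
import threading

def run_chunk(thread_globals, value):
    # Each thread writes only to its own copy of the variables,
    # so changes never leak into other threads.
    thread_globals["counter"] = value

shared = {"counter": 0}
# Each chunk-processing thread receives the variables by value (a copy).
copies = [copy.deepcopy(shared) for _ in range(3)]
threads = [
    threading.Thread(target=run_chunk, args=(c, 100 + i))
    for i, c in enumerate(copies)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Mirroring the behavior described above, only the first
# thread's changes survive the operation:
shared = copies[0]
print(shared["counter"])  # 100
```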