
Best practices for Jitterbit Design Studio

Introduction

This document is intended to serve as a guide to using Harmony with Design Studio, Jitterbit's desktop-based project design application. For best practices using Integration Studio, the web-based version of Jitterbit's project design application, see Harmony best practices.

This document is not intended to be comprehensive or to cover all integration scenarios. Rather, it provides guidance for common integration scenarios and recommends the best choices among the many tools available to a Jitterbit user.

This page is best read after you have familiarity with Jitterbit: you've gone through the Get started page, completed the Jitterbit University courses, and perhaps have completed an integration project on your own. At that point, you will know the basic concepts and terms used in Jitterbit, and understand what Jitterbit means by projects, operations, sources, targets, scripting, and deployment.

This document is a summary of features available through Harmony 9.9. As of Spring 2019, the web-based Integration Studio is available to use in place of the desktop Design Studio application for project design.

See the Additional resources section below for links to videos and other documents that expand on best practices with Jitterbit.

Support, Customer Success Managers, and documentation

Access to Jitterbit support is included as part of a Harmony customer license. When questions or technical issues arise, you can get expert assistance from Jitterbit support. The Jitterbit support page describes special instructions for production-down situations in order to escalate time-sensitive issues.

You can also contact your Customer Success Manager (CSM) with questions related to licensing or other topics.

This documentation site (Jitterbit Documentation) and our developer documentation site (Jitterbit Developer Portal) contain more than 3,600 unique URLs of technical material.

Jitterbit product updates

Harmony updates are released frequently (see Release schedule). Even a minor release contains new features and enhancements along with bug fixes.

Web applications accessed through the Harmony portal are updated automatically and always run the latest released version.

Cloud API gateway and cloud agent group updates are applied automatically. For the cloud agent groups, there are two sets: Production and Sandbox. The latter set is used to test compatibility with pre-releases of agent software and is not a development environment.

Locally installed applications, such as Design Studio and private agents, are upgraded by downloading and running an installer.

It is advisable to stay current with releases, particularly releases that include feature upgrades.

Project organization and design

Reusability

Project reusability

A typical scenario for reusing a project involves the development of a "standard" project with the extensive use of global variables and—especially—project variables. Configurable items—such as endpoint credentials, optional field mappings, email addresses, filenames—can be exposed as project variables. The "standard" project is deployed to multiple environments and, using the Management Console, the project variables for each environment are populated.

Source/target reuse

Though file sources and targets are frequently used in operations, a new source/target pair does not necessarily need to be built for each operation. Since file sources and targets accept global variables for paths and filenames, sources and targets for similar operations can be built once and driven through global variables. For example, assume "Source" and "Target" objects are created, each with [filename] in its filename field. The $filename global variable can be set anywhere before the target is written and will be used.

This applies to database, File Share, FTP Site, Local File, and HTTP sources and targets.
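
For instance, a minimal script sketch that sets the filename before the shared target is written (the variable value and filename pattern are illustrative):

    // Run before the operation that writes to the shared "Target" object
    $filename = "invoice_" + FormatDate(Now(), "yyyymmdd") + ".csv";
    // When the target is written, the [filename] placeholder in its filename
    // field resolves to this value (for example, invoice_20250131.csv).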

Script reusability

Stand-alone scripts that perform a specific function, such as performing a database lookup or providing a result from a series of arguments, can be candidates for reuse, particularly if used in multiple operations.

For example, if a script uses the DBLookup() function against a database table, and this is a function that is used throughout the integration, then a stand-alone script (separate from an operation) can be built. Using the ArgumentList() function or simple global variables, it can accept arguments and return a result. Since every operation chain is a different scope, the same script can safely be called from multiple simultaneous operations.
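
A minimal sketch of such a stand-alone script follows; the database source, SQL, and script names are illustrative assumptions:

    // Stand-alone script (for example, "Lookup Account ID"):
    // accepts one argument and returns the result of a database lookup.
    ArgumentList(customerNumber);
    DBLookup("<TAG>Sources/ERP Database</TAG>",
        "SELECT account_id FROM accounts WHERE customer_number = '"
            + customerNumber + "'");
    // The last expression is the script's result; a caller can retrieve it
    // with, for example: RunScript("<TAG>Scripts/Lookup Account ID</TAG>", $custNo)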

Note

If you store your projects on a file share, changes made to a project that affect only the user interface (for example, rearranging objects in a view) are not preserved when you re-open the project.

As a result, Jitterbit does not recommend storing projects on a file share because:

  • Changes to the user interface (arrangement of objects) are not preserved when saving a file
  • Performance suffers: loading and saving of a project can be sluggish due to a lack of caching
  • Other users on the network can overwrite changes that are being made to a project due to a lack of file locks

Organize the operations in a project

Design Studio provides operation folders and sorts operations alphabetically when a project is re-opened. Using a numbering scheme when naming operations and folders makes the integration flow clearer and easier to troubleshoot.

Example: assume there are two integration flows, one for Customer Master and a second for Item Master, each with two operations. Two folders, "Customer Master" and "Item Master," can be created in Design Studio. In the first folder, the operations could be called "CM001 Get Customer" and "CM002 Update Customer". In the second folder, the operations could be called "IM001 Get Items" and "IM002 Update Items".


The operation log then clearly shows the steps in the integration chain and whether any steps were missed. Right-clicking a folder in Design Studio filters the operations list to display only that folder's operations. A consistent organization structure and naming convention make it easy for someone new to a project to quickly grasp the basic operation flow.

Manage race conditions when using asynchronous operations

When using the RunOperation() function in its asynchronous mode, control returns to the caller immediately, without waiting for the operation to complete. Use of asynchronous operations can therefore lead to race conditions.

For example, if operation A updates a database table and is chained to operation B that reads that same table (both are synchronous), no race conditions are encountered. But if operation A is called asynchronously followed immediately by operation B, B may execute before A is finished.
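
As a sketch, assuming RunOperation's default synchronous mode and illustrative operation names, chaining the calls in a script avoids the race:

    // Synchronous calls: each blocks until the operation completes,
    // so B always sees the rows written by A.
    RunOperation("<TAG>Operations/A - Update Table</TAG>");
    RunOperation("<TAG>Operations/B - Read Table</TAG>");
    // If A were started asynchronously instead, B could begin reading
    // the table before A has finished writing to it.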

Endpoint credentials

Use a system ID with administration permissions as the endpoint credentials, rather than a user-level ID. User IDs typically expire or have to be disabled when the user leaves the company.

By using project variables (whose values can be hidden) for credentials management, the Jitterbit Admin does not have to enter production credentials in Design Studio; instead, a user can enter them via the Management Console. The same approach can be used for email credentials, if necessary.

API handling

API Manager should be used in place of HTTP endpoints. HTTP endpoints were a way to handle low-traffic inbound calls and require that specific network ports be open, which many companies today consider to be a serious security risk.

API Manager and its API gateway are designed for high-volume performance, perform detailed logging, implement common security measures, and have a simple-to-use design interface that is part of the Harmony platform. No network ports to handle inbound traffic need to be configured.

An API Manager API should be used for real-time/event-driven integration patterns. For example, an endpoint such as Salesforce can call an API using outbound messages; an API Manager API can be quickly implemented and tied to a chain of operations to receive those calls.

The preferred approach to responding to a call is to do so as quickly as possible. If the integration design is such that the follow-on operations take significant time to respond, there is a risk of timing out, or the risk of other inbound calls overwhelming the target endpoint's ability to respond.

If the source endpoint is making many calls a minute, and the target endpoint's gateway can handle only a certain number of connections, it is possible that the target endpoint will not be able to scale up to handle the requests. In this case, responding asynchronously may be the preferred approach. That is, the response is made immediately, and the dataset from the target endpoint is sent via a call to the source's API.

Persisting integration data

There are many scenarios where being able to store data "in the cloud" can be helpful. Jitterbit provides multiple methods: project variables, cloud caching functions, and temporary storage.

Project variables

Project variables are pre-initialized static variables (similar to project "constants") that can be edited from Design Studio or the Management Console.

One example use of project variables is for endpoint credentials. By using project variables, different endpoint environments (which usually have different credentials) can be applied to different Jitterbit environments and edited through the Management Console. This enables a business process where a user with Management Console rights can change credentials without requiring access to Design Studio.

A second example is to use project variables to hold flags that integration logic can check to customize the behavior of the integration. If a single project is developed but used for different endpoints, then a boolean project variable (such as "Send_PO_number") could be checked by the transformation's logic for the PO number field. If the project variable's value is false, then the UnMap() command could be used to "turn off" that field.
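
A sketch of such a mapping script for the PO number target field follows; the project variable name and source field reference are illustrative, and depending on how the flag is stored the comparison may need to be against the string "false" instead:

    // Formula Builder script on the PO number target field:
    // when the flag is false, leave the field unmapped for this record.
    If($Send_PO_number == false,
        UnMap(),
        PO_Number   // placeholder for the actual source field reference
    );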

Cloud caching functions

Cloud cache functions (ReadCache() and WriteCache()) are assignable data spaces that are available across projects and across environments. A cached value is visible to all operations running in the same scope until it expires, regardless of how that operation was started or which agent it runs on. By caching data in Harmony, rather than relying on local or agent-specific data stores, data can be shared between separate operations and across projects.
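
For instance, a minimal sketch of sharing a value between two separate operations through the cloud cache (the cache key and variable names are assumptions):

    // In operation A (or its post-operation script), cache the value:
    WriteCache("example.customer.count", $customerCount);

    // In a separate operation, possibly in another project, read it back:
    $customerCount = ReadCache("example.customer.count");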

Additional uses of cloud caching include:

  • Data can be shared between asynchronous operations within a project.
  • Errors that are generated across different operations can be stored to a common cache. By using this to accumulate operation results, more comprehensive alerts can be built.
  • Login tokens can be shared across operations.

Manage temporary storage

Temporary storage is frequently used in integration operations. This is different from Local Files (sources or targets), which can only be used on private agents. Keep these guidelines in mind, particularly when working towards using a clustered environment:

  • Make your project "upgrade-proof" and use temporary storage in such a way that moving from a single server to a clustered environment does not require refactoring.

  • Temporary storage is written to the operating system's default temp directory on the agent that is performing the work. With a single private agent, this is the default temp directory on that agent's host server. With more than one private agent, it is the default temp directory on the host server of whichever agent is doing the work. With cloud agents (which are clustered), it is the default temp directory on the particular cloud agent's host server.

  • By default, temporary storage is deleted after 24 hours by a Jitterbit cleanup service. On cloud agents, this can happen immediately.

  • A simplistic approach is to build a target, give it a unique name, and then use "Copy to New Source" to create a source with the same filename. The target and source are actually independent and rely on using the same filename to synchronize writes and reads.

  • In a clustered agent environment (private or cloud agents), as long as the operations using the temporary storage are linked (chained) together, all the temporary storage reads and writes will happen on the same host server. But if operation chain A writes to temporary storage "myfile" and a separate operation chain B reads temporary storage "myfile," the read may be inconsistent because it may not happen on the same host server where chain A wrote.

  • For targets, the default is to overwrite the file. This can be changed with the "Append to File" option. Usually this requires that the file be deleted or archived after the source is read. A simple way to do this is to choose "Delete File" or "Rename File" in the source.

  • Filename keywords are available that can be used when creating a filename.

  • A file in temporary storage can be quickly read by building a script with the ReadFile() function, such as ReadFile("<TAG>Sources/test</TAG>"). Bear in mind that this works reliably only if there is a single private agent.

See Global variable versus Temporary Storage for a comparison of these two types and recommendations on when each is appropriate.
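
As an illustration, a minimal script sketch that writes a temporary storage file and reads it back within the same operation chain; the "Temp Out" target, "Temp In" source, and variable names are assumptions, and both objects are presumed to use the same [filename] pattern:

    // Write the payload to temporary storage, then read it back.
    $filename = "orders_" + GUID() + ".csv";
    WriteFile("<TAG>Targets/Temp Out</TAG>", $csvData);
    FlushFile("<TAG>Targets/Temp Out</TAG>");   // force the buffered write to disk
    $contents = ReadFile("<TAG>Sources/Temp In</TAG>");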

Scripting

When to use scripting

Though Jitterbit provides a robust scripting capability, scripting should be used only when necessary. If there is a choice between using scripting or a standard method, opt to use the standard method. A "standard method" is a method provided through the user interface of Jitterbit Design Studio to accomplish a task.

One example is the organization of operation runs. Jitterbit's Design Studio user interface allows you to create "operation chains" linked by success and failure paths. Alternatively, it is possible to build stand-alone operations and then link them using scripting and the RunOperation() function. Unless there are technical requirements that drive this approach (such as the use of asynchronous or optional paths), it is preferred to rely on Jitterbit's user interface to link operations together.

Scripting is generally best in two places: within the Formula Builder in transformations and in stand-alone scripts. If the same code is being used within more than one transformation, consider moving that code to a stand-alone script and calling it from within each transformation using RunScript().

Naming convention for variables

Jitterbit has four types of variables:

  • local variables
  • global variables
  • project variables
  • Jitterbit variables

Local and global variables are created in Jitterbit scripts; project variables are defined in Design Studio; Jitterbit variables are predefined by the Jitterbit system.

As the scope of a local variable is limited to a single script, a naming convention for local variables can be very simple, such as an all-lowercase word ("return") or a short camelCase name ("myVariable").

Global variables, as their scope is larger (a global variable is available to be referenced in the same or downstream operations and scripts within an operation chain), should use a consistent naming convention to avoid inadvertent reuse. For example, using multiple components for a variable name, separated by periods, you could follow a pattern such as:

first.second.third[.fourth]

where:

  • First component: org specifier
  • Second component: a type, such as id, error, date, file, token, timestamp, timezone, flag, email, guid, user, externalid, val (for a miscellaneous value), arr (for array), or an endpoint type such as sfdc
  • Third component: variable name
  • Fourth component: optional sub-variable name

Combining these components and assuming your org name is example, variable names could be:

  • $example.arr.names
  • $example.sfdc.success.message

Because project variables are visible through the Management Console, and are generally used to configure integration behavior, friendlier names can be used. As a project variable name cannot contain spaces, underscores can be used instead. For example: "Start_Date" and "Include_Assemblies".

Whatever convention you choose to use, we recommend codifying and documenting it so that all team members can consistently use it in all projects.

Warning

If you plan to use your Jitterbit global variables in a JavaScript script, it is important to use underscores instead of periods:

  • $example_arr_names
  • $example_sfdc_success_message

Environments and deployments

Jitterbit enables software development lifecycle methodologies through the use of environments and deployment options.

Environments

In Harmony, the environment feature can be used to set up production and non-production environments.

For example, assume that a "Dev" and a "Prod" environment are set up in the Management Console and both are assigned to agent group A. Project 1 is developed under the "Dev" environment. Jitterbit provides a "Migration" feature that will copy that project to the "Prod" environment, after which the endpoint credentials are changed to the "Prod" endpoint credentials. Other sources and targets are also changed. Afterward, any migrations from "Dev" to "Prod" exclude endpoints, sources, and targets unless they are new.

Deployment management options

There are several options for deploying projects.

  • Full Deploy: Selecting Actions > Deploy > Everything deploys the entire project, overwriting the project in the cloud.

  • Project Backup: When choosing Actions > Deploy > Everything, there is an option to "Also store a backup in the Cloud." Before making major changes to a project, select this option so as to have a restore point saved in Harmony.

  • All New & Modified Items: This option is available under Actions > Deploy > Advanced Options. Because Design Studio keeps track of "dirty" items, this will deploy changes as well as all dependencies.

  • Background deploys: A newer feature is the ability to disable the progress dialog during a deploy (see Preferences > Deploy), which allows individual item deploys to happen in the background.

Version warnings

When deploying changes to a project, if a newer version of a project has been deployed, a warning will display indicating that a newer version exists and identifying who deployed it. The Jitterbit Admin has the option to overwrite the project. In general, in a multi-developer environment, downloading the current project from the cloud before making changes is preferred.

Testing, troubleshooting, and logging

Use test features for rapid integration development

Jitterbit can enable rapid integration development and unit testing by making visible the actual integration data during design time. The obvious advantage is to enable an iterative development process by showing the data before and after field transformations, rather than building the entire operation, running it, and inspecting the output. This is mainly done through the Transformations tool, particularly using the "Test Transformation" and the "Run Operation" features.

If the objects to be integrated are known, an experienced developer can develop an integration quickly using the Transformation Wizard and its toolkit. For example, make a connection to a database and, using the Transformation Wizard, build the operation and the transformation. Then perform a "Run Operation" with the transformation open. The data will be displayed in the transformation screen and records can be cycled through. This will instantly show the exact data that Jitterbit will receive when the operation is run.

If a field requires a transformation, double-click the field to open the Formula Builder and build the required script. By using the "Test" feature in the Formula Builder, the script runs against the transformation data and shows the exact output that will be generated at run time.

If the source is not available, but the source data is available in a file (CSV, XML, JSON), the file can be imported into the Transformations tool using the "Load Sample Source Data" and the "Test the Transformation" features.

Enable integration troubleshooting

A key concept for a healthy integration architecture is to recognize that there will be questions raised by the business concerning the accuracy of the integration work, particularly when discrepancies appear in the endpoint data. The integration may or may not be at fault. It is incumbent on the integration to provide a high degree of transparency in order to help resolve questions about data accuracy.

For example, if data in a target endpoint appears to be incorrect, then typically integration support is called upon to provide details concerning any integration actions, such as times, sources, transformation logic, success or failure messages, etc. The troubleshooting process will benefit by making that information available as a standard part of the integration architecture.

In Jitterbit, this is supported through logging and alerting features.

Logging

The Jitterbit operation log will capture key data by default, such as operation run times and success, failure, or cancellation messages. If there are failures, and the endpoint returns failure information, then the log will capture it.

When developing an integration, use the WriteToOperationLog() function in the mappings and scripts to capture key data and steps in the process. This typically is as simple as: WriteToOperationLog("The id is: "+sourcefieldid).

If capturing the entire output of a transformation is desired, this can be done by building an operation that reads the source, performs the transformation, and writes to a temporary file. A post-operation script can read the file and write the file to the operation log: WriteToOperationLog(ReadFile(<tempfile>)). Then the "real" operation can be performed.
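
A sketch of such a post-operation script, assuming a source named "Transformation Output" points at the temporary file:

    // Write the full transformation output captured in the temporary file
    // to the operation log for later troubleshooting.
    WriteToOperationLog(ReadFile("<TAG>Sources/Transformation Output</TAG>"));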

Logs can be viewed in either Design Studio or the Management Console. The advantage of the Management Console is that support personnel can access it through the browser without needing a Design Studio client on their desktop.

Data in the logs is searchable, which is helpful when the exact string involved in troubleshooting is a known value and has been logged.

Frequently, APIs have a success or non-success response message that is informative. Take the extra step of capturing this information in the log.

Operation logs, including detailed log messages from both cloud agents and private agents, are retained for 30 days by Harmony.

Alerting

Frequently, integration results need not only to be logged but also to be escalated. Jitterbit provides email integration, which can easily be attached to operations and success/failure paths or called from scripts.

For additional information, refer to Set up alerting, logging, and error handling.

Additional resources

These sections and pages in the documentation relate to best practices and will be of interest.

Tech talks

Jitterbit Tech Talks are video presentations that cover areas of interest to users of all levels:

  • Tips & tricks: Harmony best practices tech talk
    A Tech Talk intended for customers, trial users, and partners looking for best practices for the Harmony platform.
  • APIs tech talk
    Best practices around creating and implementing APIs using Jitterbit's API Manager and documenting them using the OpenAPI standards (formerly known as Swagger).
  • Environments tech talk
    Best practices when working with environments in Jitterbit: every process from environment creation to project migration.
  • Error handling best practices tech talk
    Developing robust error handling is a critical piece of your integration project and may require up to 25% of project design time. This Tech Talk covers the top five error handling best practices.
  • Private agents best practices tech talk
    A Tech Talk covering private versus cloud agents, private agent hardware recommendations, agent grouping, operating system options, and agent best practices that can help you get the most out of your Jitterbit integrations.
  • Project organization best practices tech talk
    Best Practices around organizing your projects.

Documentation

Jitterbit documentation has best practices included with our pages on using Jitterbit:

Security

Design patterns and examples

  • Best practices for SAP
    Issues and considerations that can arise when integrating to and from SAP instances, particularly when creating a bidirectional integration.
  • Design Studio how-tos
    Common integration problems encountered by our customers that can be solved by using our software.
  • Jitterpak library
    Example projects to help you get started.

Create projects

Logging

Manage projects

  • Creating New Recipes
    Best practices to follow when creating Jitterpaks for use with Citizen Integrator recipes for Design Studio.
  • Integration project methodology
    Key items that a Project Manager for a Design Studio project should know, including how to organize your team, gather and validate requirements clearly and concisely, and leverage the strengths of Harmony to deliver a successful integration project.
  • Restore from cloud backup
    Best practices around backing up and restoring projects.
  • Set up a team collaboration project
    Best practices for supporting multiple developers working on the same project.

Private agents

Use APIs