
Amazon DynamoDB Connection Details

Introduction

Connector Version

This documentation is based on version 23.0.8803 of the connector.

Get Started

Amazon DynamoDB Version Support

The connector uses the current version of the Amazon DynamoDB REST API, version 2012-08-10, to enable read/write access to DynamoDB instances.

Establish a Connection

Connect to DynamoDB

Specify the following to connect to data:

  • Domain: Set this if you want to use a domain name you have associated with AWS.
  • AWSRegion: Set this to the region where your Amazon DynamoDB data is hosted.

Authenticate to DynamoDB

Obtain AWS Keys

To obtain the credentials for an IAM user:

  1. Sign into the IAM console.
  2. In the navigation pane, select Users.
  3. To create or manage the access keys for a user, select the user and then go to the Security Credentials tab.

To obtain the credentials for your AWS root account:

  1. Sign into the AWS Management console with the credentials for your root account.
  2. Select your account name or number.
  3. In the menu that displays, select My Security Credentials.
  4. To manage or create root account access keys, click Continue to Security Credentials and expand the "Access Keys" section.

Root Credentials

To authenticate using account root credentials, set these configuration parameters:

  • AuthScheme: AwsRootKeys.
  • AWSAccessKey: The access key associated with the AWS root account.
  • AWSSecretKey: The secret key associated with the AWS root account.

Note

Use of this authentication scheme is discouraged by Amazon for anything but simple tests. The account root credentials have the full permissions of the user, making this the least secure authentication method.
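
An illustrative connection string for this scheme (the key values below are AWS's documented example placeholders, not real credentials):

AuthScheme=AwsRootKeys; AWSAccessKey=AKIAIOSFODNN7EXAMPLE; AWSSecretKey=wJalrXUtnFfEMI/K7MDENG/bPxRfiCYEXAMPLEKEY; AWSRegion=Ireland;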

Temporary Credentials

To authenticate using temporary credentials, specify the following:

  • AuthScheme: Set this to TemporaryCredentials.
  • AWSAccessKey: The access key of the IAM user to assume the role for.
  • AWSSecretKey: The secret key of the IAM user to assume the role for.
  • AWSSessionToken: Your AWS session token. This will have been provided alongside your temporary credentials. See AWS Identity and Access Management User Guide for more info.

The connector can now request resources using the same permissions provided by long-term credentials (such as IAM user credentials) for the lifespan of the temporary credentials.

If you are also using an IAM role to authenticate, you must additionally specify the following:

  • AWSRoleARN: Specify the Role ARN for the role you'd like to authenticate with. This will cause the connector to attempt to retrieve credentials for the specified role.
  • AWSExternalId (optional): Only required if you are assuming a role in another AWS account.
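
An illustrative connection string combining these properties (all values are placeholders):

AuthScheme=TemporaryCredentials; AWSAccessKey=AKIAIOSFODNN7EXAMPLE; AWSSecretKey=wJalrXUtnFfEMI/K7MDENG/bPxRfiCYEXAMPLEKEY; AWSSessionToken=<your session token>; AWSRegion=Ireland;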

EC2 Instances

Set AuthScheme to AwsEC2Roles.

If you are using the connector from an EC2 Instance and have an IAM Role assigned to the instance, you can use the IAM Role to authenticate. Since the connector automatically obtains your IAM Role credentials and authenticates with them, it is not necessary to specify AWSAccessKey and AWSSecretKey.

If you are also using an IAM role to authenticate, you must additionally specify the following:

  • AWSRoleARN: Specify the Role ARN for the role you'd like to authenticate with. This will cause the connector to attempt to retrieve credentials for the specified role.

  • AWSExternalId (optional): Only required if you are assuming a role in another AWS account.
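
An illustrative connection string for this scheme (the Role ARN is a placeholder and is only needed if you are also assuming a role):

AuthScheme=AwsEC2Roles; AWSRegion=Ireland; AWSRoleARN=arn:aws:iam::123456789012:role/MyDynamoDBRole;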

IMDSv2 Support

The Amazon DynamoDB connector now supports IMDSv2. Unlike IMDSv1, the new version requires an authentication token. Endpoints and responses are the same in both versions.

In IMDSv2, the Amazon DynamoDB connector first attempts to retrieve the IMDSv2 metadata token and then uses it to call AWS metadata endpoints. If it is unable to retrieve the token, the connector reverts to IMDSv1.

AWS IAM Roles

Set AuthScheme to AwsIAMRoles.

In many situations, it may be preferable to use an IAM role for authentication instead of the direct security credentials of an AWS root user. If you are specifying the AWSAccessKey and AWSSecretKey of an AWS root user, you may not use roles.

To authenticate as an AWS role, set these properties:

  • AWSAccessKey: The access key of the IAM user to assume the role for.

  • AWSSecretKey: The secret key of the IAM user to assume the role for.

  • AWSRoleARN: Specify the Role ARN for the role you'd like to authenticate with. This will cause the connector to attempt to retrieve credentials for the specified role.

  • AWSExternalId (optional): Only required if you are assuming a role in another AWS account.
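
An illustrative connection string for this scheme (all values are placeholders):

AuthScheme=AwsIAMRoles; AWSAccessKey=AKIAIOSFODNN7EXAMPLE; AWSSecretKey=wJalrXUtnFfEMI/K7MDENG/bPxRfiCYEXAMPLEKEY; AWSRoleARN=arn:aws:iam::123456789012:role/MyDynamoDBRole; AWSRegion=Ireland;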

ADFS

To connect to ADFS, set the AuthScheme to ADFS, and set these properties:

  • User: The ADFS user.
  • Password: The ADFS user's password.
  • SSOLoginURL: The SSO provider's login URL.

Example connection string:

AuthScheme=ADFS; AWSRegion=Ireland; User=user@cdata.com; Password=CH8WerW121235647iCa6; SSOLoginURL='https://adfs.domain.com'; AWSRoleArn=arn:aws:iam:1234:role/ADFS_SSO; AWSPrincipalArn=arn:aws:iam:1234:saml-provider/ADFSProvider; S3StagingDirectory=s3://athena/staging;

Okta

To connect to Okta, set the AuthScheme to Okta, and set these properties:

  • User: The Okta user.
  • Password: The Okta user's password.
  • SSOLoginURL: The SSO provider's login URL.

If you are using a trusted application or proxy that overrides the Okta client request, or if you are configuring MFA, you must use a combination of SSOProperties to authenticate using Okta. Set any of the following, as applicable:

  • APIToken: When authenticating a user via a trusted application or proxy that overrides the Okta client request context, set this to the API Token the customer created from the Okta organization.

  • MFAType: If you have configured the MFA flow, set this to one of the following supported types: OktaVerify, Email, or SMS.

  • MFAPassCode: If you have configured the MFA flow, set this to a valid passcode.

    If you set this to empty or an invalid value, the connector issues a one-time password challenge to your device or email. After you receive the passcode, reopen the connection with the retrieved one-time password set as the value of the MFAPassCode connection property.

  • MFARememberDevice: True by default. Okta supports remembering devices when MFA is required. If remembering devices is allowed according to the configured authentication policies, the connector sends a device token to extend MFA authentication lifetime. If you do not want MFA to be remembered, set this variable to False.

Example connection string:

AuthScheme=Okta; AWSRegion=Ireland; User=user@cdata.com; Password=CH8WerW121235647iCa6; SSOLoginURL='https://cdata-us.okta.com/home/amazon_aws/0oa35m8arsAL5f5NrE6NdA356/272'; SSOProperties='ApiToken=01230GGG2ceAnm_tPAf4MhiMELXZ0L0N1pAYrO1VR-hGQSf;'; AWSRoleArn=arn:aws:iam:1234:role/Okta_SSO; AWSPrincipalARN=arn:aws:iam:1234:saml-provider/OktaProvider; S3StagingDirectory=s3://athena/staging;

PingFederate

To connect to PingFederate, set AuthScheme to PingFederate, and set these properties:

  • User: The PingFederate user.
  • Password: The PingFederate user's password.
  • SSOLoginURL: The SSO provider's login URL.
  • AWSRoleARN (optional): If you have multiple role ARNs, specify the one you want to use for authorization.
  • AWSPrincipalARN (optional): If you have multiple principal ARNs, specify the one you want to use for authorization.
  • SSOExchangeUrl: The Partner Service Identifier URI configured in your PingFederate server instance under: SP Connections > SP Connection > WS-Trust > Protocol Settings. This should uniquely identify a PingFederate SP Connection, so it is a good idea to set it to your AWS SSO ACS URL. You can find it under AWS SSO > Settings > View Details next to the Authentication field.
  • SSOProperties (optional): Set this to AuthScheme=Basic if you want to include your username and password as an authorization header in requests to Amazon S3.

To enable mutual SSL authentication for SSOLoginURL, the WS-Trust STS endpoint, configure these SSOProperties:

  • SSLClientCert
  • SSLClientCertType
  • SSLClientCertSubject
  • SSLClientCertPassword

Example connection string:

authScheme=pingfederate;SSOLoginURL=https://mycustomserver.com:9033/idp/sts.wst;SSOExchangeUrl=https://us-east-1.signin.aws.amazon.com/platform/saml/acs/764ef411-xxxxxx;user=admin;password=PassValue;AWSPrincipalARN=arn:aws:iam:215338515180:saml-provider/pingFederate;AWSRoleArn=arn:aws:iam:215338515180:role/SSOTest2;

MFA

For users and roles that require Multi-factor Authentication, specify the following to authenticate:

  • AuthScheme: Set this to AwsMFA.
  • CredentialsLocation: The location of the settings file where MFA credentials are saved. See the Credentials File Location page under Connection String Options for more information.
  • MFASerialNumber: The serial number of the MFA device if one is being used.
  • MFAToken: The temporary token available from your MFA device.

If you are connecting to AWS (instead of already being connected such as on an EC2 instance), you must additionally specify the following:

  • AWSAccessKey: The access key of the IAM user for whom MFA will be issued.
  • AWSSecretKey: The secret key of the IAM user for whom MFA will be issued.

If you are also using an IAM role to authenticate, you must additionally specify the following:

  • AWSRoleARN: Specify the Role ARN for the role you'd like to authenticate with. This will cause the connector to attempt to retrieve credentials for the specified role using MFA.
  • AWSExternalId (optional): Only required if you are assuming a role in another AWS account.

This causes the connector to submit the MFA credentials in a request to retrieve temporary authentication credentials.

Note that you can control the duration of the temporary credentials by setting the TemporaryTokenDuration property (default 3600 seconds).
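
An illustrative connection string for MFA (all values are placeholders; the serial number shown is in the format commonly used for a virtual MFA device):

AuthScheme=AwsMFA; AWSAccessKey=AKIAIOSFODNN7EXAMPLE; AWSSecretKey=wJalrXUtnFfEMI/K7MDENG/bPxRfiCYEXAMPLEKEY; MFASerialNumber=arn:aws:iam::123456789012:mfa/myuser; MFAToken=123456; AWSRegion=Ireland;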

Credentials Files

You can use a credentials file to authenticate. Any configurations related to AccessKey/SecretKey authentication, temporary credentials, role authentication, or MFA can be used. To do so, set the following properties to authenticate:

  • AuthScheme: Set this to AwsCredentialsFile.
  • AWSCredentialsFile: Set this to the location of your credentials file.
  • AWSCredentialsFileProfile (optional): The name of the profile you would like to use from the specified credentials file. If not specified, the profile named default is used.

See AWS Command Line Interface User Guide for more information.
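
For reference, a credentials file in the standard AWS format might look like the following (the keys shown are AWS's documented example placeholders):

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFfEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

A matching connection string (the file path is illustrative):

AuthScheme=AwsCredentialsFile; AWSCredentialsFile=C:\Users\yourusername\.aws\credentials; AWSRegion=Ireland;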

AWS Cognito Credentials

If you want to use the connector with a user registered in a User Pool in AWS Cognito, set the following properties to authenticate:

  • AuthScheme: Set this to AwsCognitoSrp (recommended). You can also use AwsCognitoBasic.
  • AWSCognitoRegion: Set this to the region of the User Pool.
  • AWSUserPoolId: Set this to the User Pool Id.
  • AWSUserPoolClientAppId: Set this to the User Pool Client App Id.
  • AWSUserPoolClientAppSecret: Set this to the User Pool Client Secret.
  • AWSIdentityPoolId: Set this to the Identity Pool ID of the Identity Pool that is linked with the User Pool.
  • User: Set this to the username of the user registered in the User Pool.
  • Password: Set this to the password of the user registered in the User Pool.
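
An illustrative connection string for Cognito authentication (all identifiers are placeholders):

AuthScheme=AwsCognitoSrp; AWSCognitoRegion=Ireland; AWSUserPoolId=<user pool id>; AWSUserPoolClientAppId=<client app id>; AWSUserPoolClientAppSecret=<client app secret>; AWSIdentityPoolId=<identity pool id>; User=myuser; Password=mypassword; AWSRegion=Ireland;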

Fine-Tuning Data Access

Infer the Data Type

You can use the following properties to configure automatic data type detection, which is enabled by default.

  • TypeDetectionScheme: You can use this property to enable or disable automatic type detection based on the value specified in RowScanDepth.
  • RowScanDepth: This property determines the number of rows that will be scanned to determine column data types.
  • IgnoreTypes: The data types that should be ignored and resolved to varchar. By default, Date, Time, and Datetime types are ignored, because Amazon DynamoDB does not support them as types. Any filtering of these columns may be done only against their original varchar data type.

Fine Tuning Data Access

You can use the following properties to gain greater control over Amazon DynamoDB API features and the strategies the connector uses to surface them:

  • GenerateSchemaFiles: This property enables you to persist table metadata in static schema files that are easy to customize, to change column data types, for example. You can set this property to "OnStart" to generate schema files for all tables in your database at connection. Or, you can generate schemas as you execute SELECT queries to tables. The resulting schemas are based on the connection properties you use to configure Automatic Schema Discovery.

  • UseSimpleNames: Amazon DynamoDB supports attribute names with special characters that many database-oriented tools do not support.

    In addition, Amazon DynamoDB table names can include dots and dashes -- the connector interprets dots within table names as hierarchy separators that enable you to drill down to nested fields, similar to XPath.

    You can use this property to replace any nonalphanumeric character with an underscore.

  • SeparatorCharacter: You can use this property to more easily access nested fields when Querying Documents and Lists; specify the hierarchy separator with this property. By default, this character is the '.' (dot) character.

Performance

Set a Retry Interval

You can set the following properties to retry queries instead of returning a temporary error such as "maximum throughput exceeded":

  • RetryWaitTime: The minimum number of milliseconds the connector will wait to retry a request.
  • MaximumRequestRetries: The maximum number of times to retry a request.
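
For example, the following illustrative connection string waits at least five seconds between retries and allows up to ten retries (values are placeholders to adjust for your workload):

AuthScheme=AwsRootKeys; AWSAccessKey=<access key>; AWSSecretKey=<secret key>; AWSRegion=Ireland; RetryWaitTime=5000; MaximumRequestRetries=10;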

The Jitterbit Connector for Amazon DynamoDB also has two separate APIs that may be used depending on the query that is executed: PartiQL and Scan.

PartiQL

PartiQL is used for any INSERT, UPDATE, or DELETE query, as well as any SELECT that contains a filter. This is because the PartiQL API has more advanced filtering capabilities than the older Scan endpoint. In general, queries where a significant portion of the result is filtered out can be expected to execute faster than queries where very little is filtered.
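
For example, of the two illustrative queries below, the first contains a filter and is routed to PartiQL, while the second contains no filter and is executed as a Scan (described below):

SELECT * FROM [Customers] WHERE id = '1234'
SELECT * FROM [Customers]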

Use Paging Effectively

You can use the Pagesize property to optimize use of your provisioned throughput, based on the size of your items and Amazon DynamoDB's 1MB page size. Set this property to the number of items to return per page.

Generally, a smaller page size reduces spikes in throughput that cause throttling. A smaller page size also inserts pauses between requests. This interval evens out the distribution of requests and allows more requests to be successful by avoiding throttling.

Scans

A Scan will occur during a SELECT query that contains no filter. In this case, all results must be retrieved, so there is no advantage in using the PartiQL API. Executing a Scan will retrieve all results, but the API has a key feature that gives it better performance than an unfiltered PartiQL query: multiple threads. The ThreadCount connection property may be set to influence how many threads are used when executing a Scan request. Using more threads consumes more memory but returns results faster. The default is 4. This works best on tables where a high or variable throughput is provisioned.

In cases where the maximum throughput for a table would be exceeded on a single thread, there is no benefit to using a Scan over the single-threaded PartiQL API. Amazon DynamoDB simply throttles all threads until the maximum throughput is no longer exceeded.

Minimum IAM Requirements

We recommend using predefined roles for services rather than creating custom IAM policies. The predefined roles for Amazon DynamoDB are:

  • AmazonDynamoDBReadOnlyAccess: Grants read-only access to DynamoDB resources through the AWS Management Console.
  • AmazonDynamoDBFullAccess: Grants full access to DynamoDB resources through the AWS Management Console.

If you want to create custom policies, use the roles described in the table below. Note that the specific policies required by the Amazon DynamoDB driver are subject to change in future releases. Amazon DynamoDB requires at a minimum the following permissions:

IAM Role Description
dynamodb:ListTables Required for getting a list of your DynamoDB tables. Used during metadata retrieval to dynamically determine the list of your tables. Note that this action does not support resource-level permissions and requires you to choose All resources (hence the * for "Resource").
In other words, the action dynamodb:ListTables needs a * Resource, and the other actions can be given permission to all the tables arn:aws:dynamodb:us-east-1:987654321098:table/* or to a list of specific tables:
"Resource": [
"arn:aws:dynamodb:us-east-1:987654321098:table/Customers",
"arn:aws:dynamodb:us-east-1:987654321098:table/Orders"
]
dynamodb:DescribeTable Required for getting metadata about the selected table. Used during table metadata retrieval to dynamically determine the list of the columns. This action supports resource-level permissions, so you can specify the tables you want to get the metadata from. For example, for the table Customers and Orders in the region Northern Virginia us-east-1, for account 987654321098:
{
  "Effect": "Allow",
  "Action": [
    "dynamodb:DescribeTable"
  ],
  "Resource": [
    "arn:aws:dynamodb:us-east-1:987654321098:table/Customers",
    "arn:aws:dynamodb:us-east-1:987654321098:table/Orders"
  ]
}

To give permissions to all the tables in the region you specified in the connection property AWSRegion, use an * instead of the table name:
"Resource": "arn:aws:dynamodb:us-east-1:987654321098:table/*"
dynamodb:Scan Required for getting one or more items by accessing every item in the table. Used for most of the SELECT queries, for example, SELECT * FROM [Customers]. This action supports resource-level permissions, so you can specify the tables you want to get data from, similar to dynamodb:DescribeTable.
dynamodb:PartiQLSelect Required for getting specific items from a table when using SELECT queries and filtering by the primary key column, for example, SELECT * FROM [Customers] WHERE id=1234. This action supports resource-level permissions, so you can specify the tables you want to get data from, similar to dynamodb:DescribeTable.
dynamodb:PartiQLInsert Required for inserting data to a table. This action supports resource-level permissions, so you can specify the tables you want to insert data to, similar to dynamodb:DescribeTable.
dynamodb:PartiQLUpdate Required for modifying data in a table. This action supports resource-level permissions, so you can specify the tables you want to modify data on, similar to dynamodb:DescribeTable.
dynamodb:PartiQLDelete Required for deleting data from a table. This action supports resource-level permissions, so you can specify the tables you want to delete data from, similar to dynamodb:DescribeTable.
dynamodb:CreateTable Required for creating a table. This action supports resource-level permissions, so you can specify the table names you can create.
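
Putting these permissions together, a minimal custom policy might look like the following sketch (the account number, region, and table wildcard are taken from the examples above; add dynamodb:CreateTable or restrict the resources further as needed):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:ListTables"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:Scan",
        "dynamodb:PartiQLSelect",
        "dynamodb:PartiQLInsert",
        "dynamodb:PartiQLUpdate",
        "dynamodb:PartiQLDelete"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:987654321098:table/*"
    }
  ]
}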

Important Notes

Configuration Files and Their Paths

  • All references to adding configuration files and their paths refer to files and locations on the Jitterbit agent where the connector is installed. These paths are to be adjusted as appropriate depending on the agent and the operating system. If multiple agents are used in an agent group, identical files will be required on each agent.

NoSQL Database

Amazon DynamoDB is a schemaless database that provides high performance, availability, and scalability. These features are not necessarily incompatible with a standards-compliant query language like SQL-92. In this section we show the various schemes the connector offers to bridge the gap between relational SQL and a document database.

The connector models the schemaless Amazon DynamoDB tables into relational tables and translates SQL queries into Amazon DynamoDB queries to get the requested data. The connector offers two ways, Automatic Schema Discovery and Custom Schema Definitions, to model Amazon DynamoDB tables as relational tables.

The Automatic Schema Discovery scheme automatically finds the data types in an Amazon DynamoDB table by scanning a configured number of rows of the table. You can use RowScanDepth, FlattenArrays, and FlattenObjects to control the relational representation of the tables in Amazon DynamoDB.

Optionally, you can use Custom Schema Definitions to project your chosen relational structure on top of an Amazon DynamoDB table. This allows you to define your chosen column names, their data types, and the location of their values in the Amazon DynamoDB table.

Automatic Schema Discovery

The connector automatically infers a relational schema by inspecting a series of Amazon DynamoDB documents in a collection. You can use the RowScanDepth property to define the number of documents the connector will scan to do so. The columns identified during the discovery process depend on the FlattenArrays and FlattenObjects properties.

Flatten Objects

If FlattenObjects is set, all nested objects will be flattened into a series of columns. For example, consider the following document:

{
  id: 12,
  name: "Lohia Manufacturers Inc.",
  address: {street: "Main Street", city: "Chapel Hill", state: "NC"},
  offices: ["Chapel Hill", "London", "New York"],
  annual_revenue: 35600000
}

This document will be represented by the following columns:

Column Name Data Type Example Value
id Integer 12
name String Lohia Manufacturers Inc.
address.street String Main Street
address.city String Chapel Hill
address.state String NC
offices String ["Chapel Hill", "London", "New York"]
annual_revenue Double 35600000

If FlattenObjects is not set, then the address.street, address.city, and address.state columns will not be broken apart. The address column of type string will instead represent the entire object. Its value would be {street: "Main Street", city: "Chapel Hill", state: "NC"}. See JSON Functions for more details on working with JSON aggregates. You can change the separator character in the column name from a dot by setting SeparatorCharacter.

Flatten Arrays

The FlattenArrays property can be used to flatten array values into columns of their own. This is only recommended for arrays that are expected to be short, for example the coordinates below:

"coord": [ -73.856077, 40.848447 ]

The FlattenArrays property can be set to 2 to represent the array above as follows:

Column Name Data Type Example Value
coord.0 Float -73.856077
coord.1 Float 40.848447

It is best to leave other unbounded arrays as they are and piece out the data for them as needed using JSON Functions.

Vertical Flattening

It is possible to retrieve an array of objects as if it were a separate table. Take the following JSON structure from the restaurants table for example:

{
  "restaurantid" : "30075445",
  "address" : {
    "building" : "1007",
    "coord" : [-73.856077, 40.848447],
    "street" : "Morris Park Ave",
    "zipcode" : "10462"
  },
  "borough" : "Bronx",
  "cuisine" : "Bakery",
  "grades" : [{
      "date" : 1393804800000,
      "grade" : "B",
      "score" : 2
    }, {
      "date" : 1378857600000,
      "grade" : "A",
      "score" : 6
    }, {
      "date" : 1358985600000,
      "grade" : "A",
      "score" : 10
    }],
  "name" : "Morris Park Bake Shop"
}

Vertical flattening will allow you to retrieve the grades array as a separate table by using the syntax below:

SELECT * FROM [restaurants.grades]

This query returns the following data set:

date grade score _index
1393804800000 B 2 1
1378857600000 A 6 2
1358985600000 A 10 3

The grades array could also be nested some levels deeper. In that case, the same syntax should be used:

SELECT * FROM [restaurants.cuisine.bakery.grades]

There are also cases where the nested structure includes another array in a higher level. Take the following JSON as an example:

{
  "restaurantid" : "30075445",
  "reviews": [
   {
    "grades": [
     {
      "date": 1393804800000,
      "score": 2,
      "grade": "B"
     },
     {
      "date": 1378857600000,
      "score": 6,
      "grade": "A"
     },
     {
      "date": 1358985600000,
      "score": 10,
      "grade": "A"
     }]
    }],
  "name" : "Morris Park Bake Shop"
}

For this structure, the index of the reviews array needs to be wrapped in square brackets. Because square brackets are already used as escape characters in the SQL query, the brackets around the index must themselves be escaped, as shown in the query below:

SELECT * FROM [restaurants.reviews.\[0\].grades]

This query will return the same data set as the JSON structure at the top. Note that this syntax is case sensitive, so make sure to write the field names the same way that they're saved in DynamoDB.

JSON Functions

The connector can return JSON structures as column values. The connector enables you to use standard SQL functions to work with these JSON structures. The examples in this section use the following array:

[
     { "grade": "A", "score": 2 },
     { "grade": "A", "score": 6 },
     { "grade": "A", "score": 10 },
     { "grade": "A", "score": 9 },
     { "grade": "B", "score": 14 }
]

JSON_EXTRACT

The JSON_EXTRACT function can extract individual values from a JSON object. The following query returns the values shown below based on the JSON path passed as the second argument to the function:

SELECT Name, JSON_EXTRACT(grades,'[0].grade') AS Grade, JSON_EXTRACT(grades,'[0].score') AS Score FROM Students;
Column Name Example Value
Grade A
Score 2

JSON_COUNT

The JSON_COUNT function returns the number of elements in a JSON array within a JSON object. The following query returns the number of elements specified by the JSON path passed as the second argument to the function:

SELECT Name, JSON_COUNT(grades,'[x]') AS NumberOfGrades FROM Students;
Column Name Example Value
NumberOfGrades 5

JSON_SUM

The JSON_SUM function returns the sum of the numeric values of a JSON array within a JSON object. The following query returns the total of the values specified by the JSON path passed as the second argument to the function:

SELECT Name, JSON_SUM(score,'[x].score') AS TotalScore FROM Students;
Column Name Example Value
TotalScore 41

JSON_MIN

The JSON_MIN function returns the lowest numeric value of a JSON array within a JSON object. The following query returns the minimum value specified by the JSON path passed as the second argument to the function:

SELECT Name, JSON_MIN(score,'[x].score') AS LowestScore FROM Students;
Column Name Example Value
LowestScore 2

JSON_MAX

The JSON_MAX function returns the highest numeric value of a JSON array within a JSON object. The following query returns the maximum value specified by the JSON path passed as the second argument to the function:

SELECT Name, JSON_MAX(score,'[x].score') AS HighestScore FROM Students;
Column Name Example Value
HighestScore 14

DOCUMENT

The DOCUMENT function can be used to retrieve the entire document as a JSON string. See the following query and its result as an example:

SELECT DOCUMENT(*) FROM Customers;

The query above will return the entire document as shown.

{ "id": 12, "name": "Lohia Manufacturers Inc.", "address": { "street": "Main Street", "city": "Chapel Hill", "state": "NC"}, "offices": [ "Chapel Hill", "London", "New York" ], "annual_revenue": 35,600,000 }

DynamoDB Queries

Because Amazon DynamoDB is a NoSQL data source, queries need to be handled a bit differently than standard relational databases.

Value-Sensitive Queries

The lack of a required data type for a given column means that you could store different types of data in a single column. For instance, one row could have a String called EmailAddresses and another could have a StringSet also called EmailAddresses. For these and other kinds of cases, the connector largely determines what data type to use based on the values in the query.

For instance, say you have an Items table where the PartNumber could store either a String or a Number. To get back a part with the PartNumber of the number value 12345, you would issue the following query:

SELECT Name, Location, Quantity, PartNumber FROM Items WHERE PartNumber = 12345

Alternatively, the PartNumber could have been stored as the string "12345". To get back a part with the PartNumber of the literal string 12345, issue the following query:

SELECT Name, Location, Quantity, PartNumber FROM Items WHERE PartNumber = '12345'

If the data type of the specified value is not ambiguous, it is always used before the autodetected data type. In both of these cases, if a parameter were used instead of a hardcoded value, the data type of the parameter would be used to determine what type to submit to Amazon DynamoDB.

Detected Column Data Type

If a value is not obvious based purely on the detected data type, the connector compares it to the autodetected column. For instance, if you want to insert a column called Coordinates into the Locations table, your INSERT would look like:

INSERT INTO Locations (Address, Coordinates) VALUES ('123 Fake Street', '[40.7127, 74.0059]')

Based on the input value alone, the detected data type is a string. However, because a Coordinates column was previously autodetected, the connector inserts a NumberSet and not a simple String.

If a Coordinates column was not autodetected when scanning the Locations table, the data type of the inserted value is used.

In this case, the connector can still resolve that the INSERT is a NumberSet, but doing so costs a bit more overhead.

Count

Amazon DynamoDB supports two different methods of using the COUNT aggregate function. To simply return the number of items in your table, issue the following query:

SELECT COUNT(*) FROM MyTable

The Jitterbit Connector for Amazon DynamoDB will read the ItemCount from the DescribeTable Action. This avoids using too many read units to scan the full table. However, DynamoDB updates this value approximately every six hours and recent changes might not be reflected in this value.

Issuing the below example queries will instead scan the full table for count:

SELECT COUNT(*) FROM MyTable WHERE MyInt > 10
SELECT COUNT(MyInt) FROM MyTable

Query Documents and Lists

Amazon DynamoDB documents and lists are supported with the Jitterbit Connector for Amazon DynamoDB. You can access documents and lists directly at the root level or use the '.' character as a hierarchy divider to drill down to documents and lists.

Report Values in Documents and Lists

When data types are autodetected, they are reported down to the lowest level that can be reliably detected. For instance, a document called Customer with a child called Address and a child on Address called Street would be represented by the column Customer.Address.Street.

However, this process does not apply to Lists since a list could have any number of entries. Once a List or a Set is detected, additional values are not reported as being available in the table schema.

Get Back Unreported Values

If there are attributes that frequently do not have a value and thus are not autodetected, these can still be retrieved by specifying the correct path to them. For instance, to get the Special attribute from the Customer document:

SELECT [Customer.Address.Street], [Customer.Special] FROM MyTable

Once a List has been detected, additional values are not reported. But individual values on the list can be referenced by specifying '.' and a number. For instance:

SELECT [MyList.0], [MyList.1.Email], [MyList.1.Age] FROM MyTable

This will retrieve the first value on the list and the second value's Email and Age attributes.

Insert Documents and Lists

INSERTs in Amazon DynamoDB require that the full object be specified. To insert a document or list at the root, pass in the full JSON aggregate. For instance:

INSERT INTO MyTable (PrimaryKey, EmailAddresses, Address, MyList) VALUES ('uniquekey', '["user@email.com", "user2@email2.com"]', '{"Street":"123 Fake Street", "City":"Chapel Hill", "Zip":"27713"}', '[{"S":"somestr"},{"NS":[1,2]},{"N":4}]')

In this case, EmailAddresses is inserted as a StringSet, Address is inserted as a document, and MyList is inserted as a list.

Update Documents and Lists

Updates are supported using the same syntax that is available during selects. Documents and Lists can be specified using the '.' character to specify hierarchy. For instance:

UPDATE MyTable SET [EmailAddress.0]='user@email.com', [EmailAddress.1]='user2@email2.com', [Address.Street]='123 Fake Street', [Address.City]='Chapel Hill', [Address.Zip]='27713', [MyList.0]='somestr', [MyList.1]='[1,2]', [MyList.2]=4 WHERE PrimaryKey='uniquekey'

Note that EmailAddress and MyList must be autodetected to resolve how to handle EmailAddress differently from MyList. If you are in doubt about whether or not something will be automatically detected, specifying the full JSON to update will always work.

Data Type Mapping

Data Type Mappings

The connector maps types from the data source to the corresponding data type available in the schema. Additionally, the connector attempts to scan the data coming back based on the IgnoreTypes connection property. The table below documents these mappings.

Amazon DynamoDB Schema
String string, date, datetime, time
Binary string
Number bigint, int, float (depending on data that is detected)
StringSet string
NumberSet string
BinarySet string
Map string
List string
Boolean bool
Null string

Note that depending on the settings of IgnoreTypes, some of these types may not be detected by default. Date, datetime, and time for example are ignored by default as they cannot be filtered server side, and may be inserted / updated in a different format than your existing entries if enabled. Please use caution when enabling them.

FlattenArrays and FlattenObjects may also be used to flatten the StringSets, NumberSets, BinarySets, Maps, and Lists into individual columns.

Custom Schema Definitions

In addition to Automatic Schema Discovery, the connector also allows you to statically define the schema for your Amazon DynamoDB table. Let's consider a schema for the restaurants data set.

Below is an example item from the table:

{
   "address":{
      "building":"461",
      "coord":[
         -74.138492,
         40.631136
      ],
      "street":"Port Richmond Ave",
      "zipcode":"10302"
   },
   "borough":"Staten Island",
   "cuisine":"Other",
   "grades":[

   ],
   "name":"Indian Oven",
   "restaurant_id":"50018994"
}

Define a Custom Schema

You can define a custom schema to extract out nested properties as their own columns. Set the Location property to the file directory that will contain the schema file.

The following schema uses the other:path property to define where the data for a particular column should be retrieved from. Using this model you can flatten arbitrary levels of hierarchy.

The 'other:tableapiname' attribute specifies the table to parse. This attribute gives you the flexibility to use multiple schemas for the same table.

In Custom Schema Example, you will find the complete schema that contains the example above.

<api:info title="StaticRestaurants" other:catalog="" other:schema="AmazonDynamoDB" description="StaticRestaurants" other:tableapiname="StaticRestaurants"  other:version="20">
  <attr   name="id"      xs:type="decimal"   key="true"   columnsize="17"     precision="38"   scale="6"   readonly="false"   description="Dynamic Column."   other:dynamodatatype="N"   other:relativepath="restaurant_id"   other:filterable="true"   other:fullpath="restaurant_id"      other:apiname="&amp;quot;restaurant_id&amp;quot;"                          />
  <attr   name="borough"            xs:type="string"                 columnsize="2000"                                readonly="false"   description="Dynamic Column."   other:dynamodatatype="S"   other:relativepath="borough"         other:filterable="true"   other:fullpath="borough"            other:apiname="&amp;quot;borough&amp;quot;"                                />
  <attr   name="address_zipcode"    xs:type="int"                    columnsize="4"      precision="10"               readonly="false"   description="Dynamic Column."   other:dynamodatatype="S"   other:relativepath="zipcode"         other:filterable="true"   other:fullpath="address.zipcode"    other:apiname="&amp;quot;address&amp;quot;.&amp;quot;zipcode&amp;quot;"    />
  <attr   name="address_coord_0"    xs:type="double"                 columnsize="8"      precision="15"               readonly="false"   description="Dynamic Column."   other:dynamodatatype="N"   other:relativepath="coord"           other:filterable="true"   other:fullpath="address.coord[0]"   other:apiname="&amp;quot;address&amp;quot;.&amp;quot;coord&amp;quot;[0]"   />
  <attr   name="address_coord_1"    xs:type="double"                 columnsize="8"      precision="15"               readonly="false"   description="Dynamic Column."   other:dynamodatatype="N"   other:relativepath="coord[1]"        other:filterable="true"   other:fullpath="address.coord[1]"   other:apiname="&amp;quot;address&amp;quot;.&amp;quot;coord&amp;quot;[1]"   />
  <attr   name="address_building"   xs:type="int"                    columnsize="4"      precision="10"               readonly="false"   description="Dynamic Column."   other:dynamodatatype="S"   other:relativepath="building"        other:filterable="true"   other:fullpath="address.building"   other:apiname="&amp;quot;address&amp;quot;.&amp;quot;building&amp;quot;"   />
  <attr   name="address_street"     xs:type="string"                 columnsize="2000"                                readonly="false"   description="Dynamic Column."   other:dynamodatatype="S"   other:relativepath="street"          other:filterable="true"   other:fullpath="address.street"     other:apiname="&amp;quot;address&amp;quot;.&amp;quot;street&amp;quot;"     />
  <attr   name="name"               xs:type="string"                 columnsize="2000"                                readonly="false"   description="Dynamic Column."   other:dynamodatatype="S"   other:relativepath="name"            other:filterable="true"   other:fullpath="name"               other:apiname="&amp;quot;name&amp;quot;"                                   />
  <attr   name="cuisine"            xs:type="string"                 columnsize="2000"                                readonly="false"   description="Dynamic Column."   other:dynamodatatype="S"   other:relativepath="cuisine"         other:filterable="true"   other:fullpath="cuisine"            other:apiname="&amp;quot;cuisine&amp;quot;"                                />
</api:info>

Custom Schema Example

This section contains a complete schema. The info section enables a relational view of an Amazon DynamoDB table. For more details, see Custom Schema Definitions. The table below allows the SELECT, INSERT, UPDATE, and DELETE commands as implemented in the GET, POST, MERGE, and DELETE sections of the schema below. Set the Location property to the file directory that will contain the schema file.

Use the 'other:tableapiname' attribute to specify the name of the Amazon DynamoDB table you want to parse. You can use the 'other:tableapiname' attribute to define multiple schemas for the same table. Note: Amazon DynamoDB is case sensitive. Your table name and specified paths must match the case of how your fields appear in Amazon DynamoDB.

The operations, such as dynamodbadoProviderOperationCaller, are internal implementations and can also be copied as is.

<api:script xmlns:api="http://apiscript.com/ns?v1" xmlns:xs="http://www.cdata.com/ns/rsbscript/2" xmlns:other="http://apiscript.com/ns?v1">
  <api:info title="StaticRestaurants" other:catalog="" other:schema="AmazonDynamoDB" description="StaticRestaurants" other:tableapiname="StaticRestaurants"  other:version="20">
    <attr   name="id"      xs:type="decimal"   key="true"   columnsize="17"     precision="38"   scale="6"   readonly="false"   description="Dynamic Column."   other:dynamodatatype="N"   other:relativepath="restaurant_id"   other:filterable="true"   other:fullpath="restaurant_id"      other:apiname="&amp;quot;restaurant_id&amp;quot;"                          />
    <attr   name="borough"            xs:type="string"                 columnsize="2000"                                readonly="false"   description="Dynamic Column."   other:dynamodatatype="S"   other:relativepath="borough"         other:filterable="true"   other:fullpath="borough"            other:apiname="&amp;quot;borough&amp;quot;"                                />
    <attr   name="address_zipcode"    xs:type="int"                    columnsize="4"      precision="10"               readonly="false"   description="Dynamic Column."   other:dynamodatatype="S"   other:relativepath="zipcode"         other:filterable="true"   other:fullpath="address.zipcode"    other:apiname="&amp;quot;address&amp;quot;.&amp;quot;zipcode&amp;quot;"    />
    <attr   name="address_coord_0"    xs:type="double"                 columnsize="8"      precision="15"               readonly="false"   description="Dynamic Column."   other:dynamodatatype="N"   other:relativepath="coord"           other:filterable="true"   other:fullpath="address.coord[0]"   other:apiname="&amp;quot;address&amp;quot;.&amp;quot;coord&amp;quot;[0]"   />
    <attr   name="address_coord_1"    xs:type="double"                 columnsize="8"      precision="15"               readonly="false"   description="Dynamic Column."   other:dynamodatatype="N"   other:relativepath="coord[1]"        other:filterable="true"   other:fullpath="address.coord[1]"   other:apiname="&amp;quot;address&amp;quot;.&amp;quot;coord&amp;quot;[1]"   />
    <attr   name="address_building"   xs:type="int"                    columnsize="4"      precision="10"               readonly="false"   description="Dynamic Column."   other:dynamodatatype="S"   other:relativepath="building"        other:filterable="true"   other:fullpath="address.building"   other:apiname="&amp;quot;address&amp;quot;.&amp;quot;building&amp;quot;"   />
    <attr   name="address_street"     xs:type="string"                 columnsize="2000"                                readonly="false"   description="Dynamic Column."   other:dynamodatatype="S"   other:relativepath="street"          other:filterable="true"   other:fullpath="address.street"     other:apiname="&amp;quot;address&amp;quot;.&amp;quot;street&amp;quot;"     />
    <attr   name="name"               xs:type="string"                 columnsize="2000"                                readonly="false"   description="Dynamic Column."   other:dynamodatatype="S"   other:relativepath="name"            other:filterable="true"   other:fullpath="name"               other:apiname="&amp;quot;name&amp;quot;"                                   />
    <attr   name="cuisine"            xs:type="string"                 columnsize="2000"                                readonly="false"   description="Dynamic Column."   other:dynamodatatype="S"   other:relativepath="cuisine"         other:filterable="true"   other:fullpath="cuisine"            other:apiname="&amp;quot;cuisine&amp;quot;"                                />
  </api:info>


  <api:script method="GET">
    <api:call op="dynamodbadoProviderOperationCaller">
      <api:push/>
    </api:call>
  </api:script>

  <api:script method="POST">
    <api:call op="dynamodbadoProviderOperationCaller">
      <api:push/>
    </api:call>
  </api:script>

  <api:script method="MERGE">
    <api:call op="dynamodbadoProviderOperationCaller">
      <api:push/>
    </api:call>
  </api:script>

  <api:script method="DELETE">
    <api:call op="dynamodbadoProviderOperationCaller">
      <api:push/>
    </api:call>
  </api:script>
</api:script>

Advanced Features

This section details a selection of advanced features of the Amazon DynamoDB connector.

Automatic Index Detection

The AutoDetectIndex property provides fast access to items in a table by detecting an alternate index which can be queried in place of the table itself. This secondary index is a data structure that contains a subset of attributes from a table and an alternate key. The benefit of querying an index instead of the main table is skipping a full scan of the main table. This makes the operation much faster.

User Defined Views

The connector allows you to define virtual tables, called user defined views, whose contents are decided by a pre-configured query. These views are useful when you cannot directly control the queries being issued to the driver. See User Defined Views for an overview of creating and configuring custom views.

SSL Configuration

Use SSL Configuration to adjust how the connector handles TLS/SSL certificate negotiations. You can choose from various certificate formats; see the SSLServerCert property under "Connection String Options" for more information.

Proxy

To configure the connector using private agent proxy settings, select the Use Proxy Settings checkbox on the connection configuration screen.

Query Processing

The connector offloads as much of the SELECT statement processing as possible to Amazon DynamoDB and then processes the rest of the query in memory (client-side).

User Defined Views

The Jitterbit Connector for Amazon DynamoDB allows you to define a virtual table whose contents are decided by a pre-configured query. These are called User Defined Views, which are useful in situations where you cannot directly control the query being issued to the driver, e.g. when using the driver from Jitterbit. The User Defined Views can be used to define predicates that are always applied. If you specify additional predicates in the query to the view, they are combined with the query already defined as part of the view.

There are two ways to create user defined views:

  • Create a JSON-formatted configuration file defining the views you want.
  • Execute DDL statements.

Define Views Using a Configuration File

User Defined Views are defined in a JSON-formatted configuration file called UserDefinedViews.json. The connector automatically detects the views specified in this file.

You can also have multiple view definitions and control them using the UserDefinedViews connection property. When you use this property, only the specified views are seen by the connector.

This User Defined View configuration file is formatted as follows:

  • Each root element defines the name of a view.
  • Each root element contains a child element, called query, which contains the custom SQL query for the view.

For example:

{
    "MyView": {
        "query": "SELECT * FROM Account WHERE MyColumn = 'value'"
    },
    "MyView2": {
        "query": "SELECT * FROM MyTable WHERE Id IN (1,2,3)"
    }
}

Use the UserDefinedViews connection property to specify the location of your JSON configuration file. For example:

"UserDefinedViews", "C:\Users\yourusername\Desktop\tmp\UserDefinedViews.json"

Define Views Using DDL Statements

The connector is also capable of creating and altering the schema via DDL Statements such as CREATE LOCAL VIEW, ALTER LOCAL VIEW, and DROP LOCAL VIEW.

Create a View

To create a new view using DDL statements, provide the view name and query as follows:

CREATE LOCAL VIEW [MyViewName] AS SELECT * FROM Customers LIMIT 20;

If no JSON file exists, the above code creates one. The view is then created in the JSON configuration file and is now discoverable. The JSON file location is specified by the UserDefinedViews connection property.

Alter a View

To alter an existing view, provide the name of an existing view alongside the new query you would like to use instead:

ALTER LOCAL VIEW [MyViewName] AS SELECT * FROM Customers WHERE TimeModified > '3/1/2020';

The view is then updated in the JSON configuration file.

Drop a View

To drop an existing view, provide the name of the existing view:

DROP LOCAL VIEW [MyViewName]

This removes the view from the JSON configuration file. It can no longer be queried.

Schema for User Defined Views

User Defined Views are exposed in the UserViews schema by default. This is done to avoid the view's name clashing with an actual entity in the data model. You can change the name of the schema used for UserViews by setting the UserViewsSchemaName property.

Work with User Defined Views

For example, suppose a User Defined View called UserViews.RCustomers is defined by the following query, which lists only customers in Raleigh:

SELECT * FROM Customers WHERE City = 'Raleigh';

An example of a query to the driver:

SELECT * FROM UserViews.RCustomers WHERE Status = 'Active';

Resulting in the effective query to the source:

SELECT * FROM Customers WHERE City = 'Raleigh' AND Status = 'Active';

That is a very simple example of a query to a User Defined View that is effectively a combination of the query and the view definition. It is possible to compose these queries in much more complex patterns. All SQL operations are allowed in both queries and are combined when appropriate.

SSL Configuration

Customize the SSL Configuration

By default, the connector attempts to negotiate SSL/TLS by checking the server's certificate against the system's trusted certificate store.

To specify another certificate, see the SSLServerCert property for the available formats to do so.

Data Model

The connector allows you to access data in Amazon DynamoDB using a standard database-like interface. Amazon DynamoDB is a highly scalable NoSQL cloud database that is very different from a regular database. In this section we describe how we model schemaless Amazon DynamoDB tables as regular Tables and Stored Procedures.

The connector can dynamically detect schemas at connection time. See Automatic Schema Discovery for more information on defining schemas implicitly at connection time. This method is useful if the structure of your data is volatile.

You can also persist schemas in static schema definitions. The connector's schema files have a simple format. See Custom Schema Definitions for more information on defining and extending static schemas.

Tables

The list of tables is dynamically retrieved from your Amazon DynamoDB account. You can use the CreateTable stored procedure to create a new table, or you can create a table using the Amazon Web Services Admin Console.

Because DynamoDB tables are partitioned based on their key, you should take care in selecting a proper key based on the query requirements of your table. Refer to the documentation for DynamoDB for more information about using best practices to model data in DynamoDB tables. DynamoDB supports two types of primary keys:

  • Hash Primary Key: This is a single-column key.
  • Hash and Range Primary Key: This is a two-column key that includes a hash column and a range column.

The connector will model all key attributes in DynamoDB as key columns.

Table Columns

Since Amazon DynamoDB tables are schemaless, the connector offers the following two mechanisms to uncover the schema.

Dynamic Schemas

The columns of a table are dynamically determined by scanning data in the first few rows. You can adjust the number of rows that are used by modifying the RowScanDepth property. In addition to the name of the column, the row scan also determines the data type. The following table shows how the different data types supported by Amazon DynamoDB are modeled in the connector.

Amazon DynamoDB Type Modeled Type Encoding Sample Value
Boolean Boolean Not Required True
String String Not Required USA
Blob String Not Required
Number Double Not Required 24.0
String Array String JSON Array ["USA", "Canada", "UK"]
Number Array String JSON Array [20, 200.5, 500]
Blob Array JSON Array JSON Array ["ABCD", "EFGH"]
Document JSON Object JSON Object {"Address":"123 Fake Street", "City":"Chapel Hill", "Zip":"27516"}
List JSON Array JSON Array [{"S":"mystring"}, {"NS":[1, 2]}, {"N":4}]

Static Schemas

Instead of using dynamically discovered schemas, you can define your own schemas. This gives you more control over the projected columns and also enables you to use other data types such as boolean, datetime, etc. Refer to the CreateSchema stored procedure to create your own schema. Simply specify the FileName (full path) and TableName of the new schema file, which should match the name of the Amazon DynamoDB table, and edit the column listing to use it for your own table.

Schemaless Operations

While the schema of the table is necessary to report metadata, data may be selected, inserted, updated, or deleted from columns that do not exist in the schema. Columns that do not already exist in the table schema will have their data types dynamically determined based on the data that is specified. See DynamoDB Queries for more information.

Stored Procedures

Stored procedures are function-like interfaces that extend the functionality of the connector beyond simple SELECT/INSERT/UPDATE/DELETE operations with Amazon DynamoDB.

Stored procedures accept a list of parameters, perform their intended function, and then return any relevant response data from Amazon DynamoDB, along with an indication of whether the procedure succeeded or failed.

Jitterbit Connector for Amazon DynamoDB Stored Procedures

Name Description
CreateSchema Creates a schema file for the specified table or view.
CreateTable Creates a table in DynamoDB.

CreateSchema

Creates a schema file for the specified table or view.

CreateSchema

Creates a local schema file (.rsd) from an existing table or view in the data model.

The schema file is created in the directory set in the Location connection property when this procedure is executed. You can edit the file to include or exclude columns, rename columns, or adjust column data types.

The connector checks the Location to determine if the names of any .rsd files match a table or view in the data model. If there is a duplicate, the schema file will take precedence over the default instance of this table in the data model. If a schema file is present in Location that does not match an existing table or view, a new table or view entry is added to the data model of the connector.

Input
Name Type Required Accepts Output Streams Description
TableName String True False The name of the table or view.
FileName String False False The full file path and name of the schema to generate. If not set, the FileData output is used instead. Ex : 'C:\Users\User\Desktop\table.rsd'
FileStream String False True An instance of an output stream where file data is written to. Only used if FileName is not set.
Result Set Columns
Name Type Description
Result String Returns Success or Failure.
FileData String The generated schema encoded in Base64. Only returned if FileName is not set.
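
As an illustrative call (the exact EXECUTE syntax accepted by your query tool may differ, and the file path is a placeholder):

EXECUTE CreateSchema TableName = 'Customers', FileName = 'C:\Users\User\Desktop\Customers.rsd'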

CreateTable

Creates a table in DynamoDB.

Input
Name Type Required Description
TableName String True The name of the table to create. A minimum of 3 characters and maximum of 255 characters are allowed.
PartitionKeyName String True The name of the partition key for the table.
PartitionKeyType String True The type of the partition key for the table. The allowed values are S, N, B.
SortKeyName String False The name of the sort key for the table.
SortKeyType String False The type of the sort key for the table. The allowed values are S, N, B.
BillingMode String False Controls how you are charged for read and write throughput and how you manage capacity. The allowed values are PROVISIONED, PAY_PER_REQUEST. The default value is PROVISIONED.
ReadCapacityUnits String False The maximum number of strongly consistent reads consumed per second before DynamoDB returns a ThrottlingException. The default value is 5.
WriteCapacityUnits String False The maximum number of writes consumed per second before DynamoDB returns a ThrottlingException. The default value is 5.
Result Set Columns
Name Type Description
Success String This value shows whether the operation was successful or not.
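
An illustrative call using the parameters above (table and key names are placeholders; the exact EXECUTE syntax accepted by your query tool may differ):

EXECUTE CreateTable TableName = 'Customers', PartitionKeyName = 'id', PartitionKeyType = 'S', BillingMode = 'PAY_PER_REQUEST'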

System Tables

You can query the system tables described in this section to access schema information, information on data source functionality, and batch operation statistics.

Schema Tables

The following tables return database metadata for Amazon DynamoDB:

Data Source Tables

The following tables return information about how to connect to and query the data source:

  • sys_connection_props: Returns information on the available connection properties.
  • sys_sqlinfo: Describes the SELECT queries that the connector can offload to the data source.

Query Information Tables

The following table returns query statistics for data modification queries, including batch operations:

  • sys_identity: Returns information about batch operations or single updates.

sys_catalogs

Lists the available databases.

The following query retrieves all databases determined by the connection string:

SELECT * FROM sys_catalogs
Columns
Name Type Description
CatalogName String The database name.

sys_schemas

Lists the available schemas.

The following query retrieves all available schemas:

SELECT * FROM sys_schemas
Columns
Name Type Description
CatalogName String The database name.
SchemaName String The schema name.

sys_tables

Lists the available tables.

The following query retrieves the available tables and views:

SELECT * FROM sys_tables
Columns
Name Type Description
CatalogName String The database containing the table or view.
SchemaName String The schema containing the table or view.
TableName String The name of the table or view.
TableType String The table type (table or view).
Description String A description of the table or view.
IsUpdateable Boolean Whether the table can be updated.

sys_tablecolumns

Describes the columns of the available tables and views.

The following query returns the columns and data types for the Account table:

SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='Account'
Columns
Name Type Description
CatalogName String The name of the database containing the table or view.
SchemaName String The schema containing the table or view.
TableName String The name of the table or view containing the column.
ColumnName String The column name.
DataTypeName String The data type name.
DataType Int32 An integer indicating the data type. This value is determined at run time based on the environment.
Length Int32 The storage size of the column.
DisplaySize Int32 The designated column's normal maximum width in characters.
NumericPrecision Int32 The maximum number of digits in numeric data. The column length in characters for character and date-time data.
NumericScale Int32 The column scale or number of digits to the right of the decimal point.
IsNullable Boolean Whether the column can contain null.
Description String A brief description of the column.
Ordinal Int32 The sequence number of the column.
IsAutoIncrement String Whether the column value is assigned in fixed increments.
IsGeneratedColumn String Whether the column is generated.
IsHidden Boolean Whether the column is hidden.
IsArray Boolean Whether the column is an array.
IsReadOnly Boolean Whether the column is read-only.
IsKey Boolean Indicates whether a field returned from sys_tablecolumns is the primary key of the table.

sys_procedures

Lists the available stored procedures.

The following query retrieves the available stored procedures:

SELECT * FROM sys_procedures
Columns
Name Type Description
CatalogName String The database containing the stored procedure.
SchemaName String The schema containing the stored procedure.
ProcedureName String The name of the stored procedure.
Description String A description of the stored procedure.
ProcedureType String The type of the procedure, such as PROCEDURE or FUNCTION.

sys_procedureparameters

Describes stored procedure parameters.

The following query returns information about all of the input parameters for the CreateSchema stored procedure:

SELECT * FROM sys_procedureparameters WHERE ProcedureName='CreateSchema' AND (Direction=1 OR Direction=2)
Columns
Name Type Description
CatalogName String The name of the database containing the stored procedure.
SchemaName String The name of the schema containing the stored procedure.
ProcedureName String The name of the stored procedure containing the parameter.
ColumnName String The name of the stored procedure parameter.
Direction Int32 An integer corresponding to the type of the parameter: input (1), input/output (2), or output (4). Input/output parameters can act as both input and output parameters.
DataTypeName String The name of the data type.
DataType Int32 An integer indicating the data type. This value is determined at run time based on the environment.
Length Int32 The number of characters allowed for character data. The number of digits allowed for numeric data.
NumericPrecision Int32 The maximum precision for numeric data. The column length in characters for character and date-time data.
NumericScale Int32 The number of digits to the right of the decimal point in numeric data.
IsNullable Boolean Whether the parameter can contain null.
IsRequired Boolean Whether the parameter is required for execution of the procedure.
IsArray Boolean Whether the parameter is an array.
Description String The description of the parameter.
Ordinal Int32 The index of the parameter.

sys_keycolumns

Describes the primary and foreign keys.

The following query retrieves the primary key for the Account table:

SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='Account'
Columns
Name Type Description
CatalogName String The name of the database containing the key.
SchemaName String The name of the schema containing the key.
TableName String The name of the table containing the key.
ColumnName String The name of the key column.
IsKey Boolean Whether the column is a primary key in the table referenced in the TableName field.
IsForeignKey Boolean Whether the column is a foreign key referenced in the TableName field.
PrimaryKeyName String The name of the primary key.
ForeignKeyName String The name of the foreign key.
ReferencedCatalogName String The database containing the primary key.
ReferencedSchemaName String The schema containing the primary key.
ReferencedTableName String The table containing the primary key.
ReferencedColumnName String The column name of the primary key.

sys_foreignkeys

Describes the foreign keys.

The following query retrieves all foreign keys which refer to other tables:

SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
Columns
Name Type Description
CatalogName String The name of the database containing the key.
SchemaName String The name of the schema containing the key.
TableName String The name of the table containing the key.
ColumnName String The name of the key column.
PrimaryKeyName String The name of the primary key.
ForeignKeyName String The name of the foreign key.
ReferencedCatalogName String The database containing the primary key.
ReferencedSchemaName String The schema containing the primary key.
ReferencedTableName String The table containing the primary key.
ReferencedColumnName String The column name of the primary key.
ForeignKeyType String Designates whether the foreign key is an import (points to other tables) or export (referenced from other tables) key.

sys_primarykeys

Describes the primary keys.

The following query retrieves the primary keys from all tables and views:

SELECT * FROM sys_primarykeys
Columns
Name Type Description
CatalogName String The name of the database containing the key.
SchemaName String The name of the schema containing the key.
TableName String The name of the table containing the key.
ColumnName String The name of the key column.
KeySeq String The sequence number of the primary key.
KeyName String The name of the primary key.

sys_indexes

Describes the available indexes. By filtering on indexes, you can write more selective queries with faster query response times.

The following query retrieves all indexes that are not primary keys:

SELECT * FROM sys_indexes WHERE IsPrimary='false'
Columns
Name Type Description
CatalogName String The name of the database containing the index.
SchemaName String The name of the schema containing the index.
TableName String The name of the table containing the index.
IndexName String The index name.
ColumnName String The name of the column associated with the index.
IsUnique Boolean True if the index is unique. False otherwise.
IsPrimary Boolean True if the index is a primary key. False otherwise.
Type Int16 An integer value corresponding to the index type: statistic (0), clustered (1), hashed (2), or other (3).
SortOrder String The sort order: A for ascending or D for descending.
OrdinalPosition Int16 The sequence number of the column in the index.

sys_connection_props

Returns information on the available connection properties and those set in the connection string.

When querying this table, the config connection string should be used:

jdbc:cdata:amazondynamodb:config:

This connection string enables you to query this table without a valid connection.

The following query retrieves all connection properties that have been set in the connection string or set through a default value:

SELECT * FROM sys_connection_props WHERE Value <> ''
Columns
Name Type Description
Name String The name of the connection property.
ShortDescription String A brief description.
Type String The data type of the connection property.
Default String The default value if one is not explicitly set.
Values String A comma-separated list of possible values. A validation error is thrown if another value is specified.
Value String The value you set or a preconfigured default.
Required Boolean Whether the property is required to connect.
Category String The category of the connection property.
IsSessionProperty String Whether the property is a session property, used to save information about the current connection.
Sensitivity String The sensitivity level of the property. This informs whether the property is obfuscated in logging and authentication forms.
PropertyName String A camel-cased truncated form of the connection property name.
Ordinal Int32 The index of the parameter.
CatOrdinal Int32 The index of the parameter category.
Hierarchy String Shows the dependent properties that need to be set alongside this one.
Visible Boolean Informs whether the property is visible in the connection UI.
ETC String Various miscellaneous information about the property.

sys_sqlinfo

Describes the SELECT query processing that the connector can offload to the data source.

Discovering the Data Source's SELECT Capabilities

Below is an example data set of SQL capabilities. Some aspects of SELECT functionality are returned in a comma-separated list if supported; otherwise, the column contains NO.

Name Description Possible Values
AGGREGATE_FUNCTIONS Supported aggregation functions. AVG, COUNT, MAX, MIN, SUM, DISTINCT
COUNT Whether COUNT function is supported. YES, NO
IDENTIFIER_QUOTE_OPEN_CHAR The opening character used to escape an identifier. [
IDENTIFIER_QUOTE_CLOSE_CHAR The closing character used to escape an identifier. ]
SUPPORTED_OPERATORS A list of supported SQL operators. =, >, <, >=, <=, <>, !=, LIKE, NOT LIKE, IN, NOT IN, IS NULL, IS NOT NULL, AND, OR
GROUP_BY Whether GROUP BY is supported, and, if so, the degree of support. NO, NO_RELATION, EQUALS_SELECT, SQL_GB_COLLATE
STRING_FUNCTIONS Supported string functions. LENGTH, CHAR, LOCATE, REPLACE, SUBSTRING, RTRIM, LTRIM, RIGHT, LEFT, UCASE, SPACE, SOUNDEX, LCASE, CONCAT, ASCII, REPEAT, OCTET, BIT, POSITION, INSERT, TRIM, UPPER, REGEXP, LOWER, DIFFERENCE, CHARACTER, SUBSTR, STR, REVERSE, PLAN, UUIDTOSTR, TRANSLATE, TRAILING, TO, STUFF, STRTOUUID, STRING, SPLIT, SORTKEY, SIMILAR, REPLICATE, PATINDEX, LPAD, LEN, LEADING, KEY, INSTR, INSERTSTR, HTML, GRAPHICAL, CONVERT, COLLATION, CHARINDEX, BYTE
NUMERIC_FUNCTIONS Supported numeric functions. ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, MOD, SIGN, SIN, SQRT, TAN, PI, RAND, DEGREES, LOG10, POWER, RADIANS, ROUND, TRUNCATE
TIMEDATE_FUNCTIONS Supported date/time functions. NOW, CURDATE, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, MONTH, QUARTER, WEEK, YEAR, CURTIME, HOUR, MINUTE, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, DAYNAME, MONTHNAME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, EXTRACT
REPLICATION_SKIP_TABLES Indicates tables skipped during replication.
REPLICATION_TIMECHECK_COLUMNS A string array containing a list of columns which are checked (in the given order) for use as the modified column during replication.
IDENTIFIER_PATTERN String value indicating what string is valid for an identifier.
SUPPORT_TRANSACTION Indicates if the provider supports transactions such as commit and rollback. YES, NO
DIALECT Indicates the SQL dialect to use.
KEY_PROPERTIES Indicates the properties which identify the uniform database.
SUPPORTS_MULTIPLE_SCHEMAS Indicates if multiple schemas may exist for the provider. YES, NO
SUPPORTS_MULTIPLE_CATALOGS Indicates if multiple catalogs may exist for the provider. YES, NO
DATASYNCVERSION The Data Sync version needed to access this driver. Standard, Starter, Professional, Enterprise
DATASYNCCATEGORY The Data Sync category of this driver. Source, Destination, Cloud Destination
SUPPORTSENHANCEDSQL Whether enhanced SQL functionality beyond what is offered by the API is supported. TRUE, FALSE
SUPPORTS_BATCH_OPERATIONS Whether batch operations are supported. YES, NO
SQL_CAP All supported SQL capabilities for this driver. SELECT, INSERT, DELETE, UPDATE, TRANSACTIONS, ORDERBY, OAUTH, ASSIGNEDID, LIMIT, LIKE, BULKINSERT, COUNT, BULKDELETE, BULKUPDATE, GROUPBY, HAVING, AGGS, OFFSET, REPLICATE, COUNTDISTINCT, JOINS, DROP, CREATE, DISTINCT, INNERJOINS, SUBQUERIES, ALTER, MULTIPLESCHEMAS, GROUPBYNORELATION, OUTERJOINS, UNIONALL, UNION, UPSERT, GETDELETED, CROSSJOINS, GROUPBYCOLLATE, MULTIPLECATS, FULLOUTERJOIN, MERGE, JSONEXTRACT, BULKUPSERT, SUM, SUBQUERIESFULL, MIN, MAX, JOINSFULL, XMLEXTRACT, AVG, MULTISTATEMENTS, FOREIGNKEYS, CASE, LEFTJOINS, COMMAJOINS, WITH, LITERALS, RENAME, NESTEDTABLES, EXECUTE, BATCH, BASIC, INDEX
PREFERRED_CACHE_OPTIONS A string value that specifies the preferred cacheOptions.
ENABLE_EF_ADVANCED_QUERY Indicates if the driver directly supports advanced queries coming from Entity Framework. If not, queries will be handled client side. YES, NO
PSEUDO_COLUMNS A string array indicating the available pseudo columns.
MERGE_ALWAYS If the value is true, merge mode is forcibly executed in Data Sync. TRUE, FALSE
REPLICATION_MIN_DATE_QUERY A select query to return the replicate start datetime.
REPLICATION_MIN_FUNCTION Allows a provider to specify the formula name to use for executing a server side min.
REPLICATION_START_DATE Allows a provider to specify a replicate start date.
REPLICATION_MAX_DATE_QUERY A select query to return the replicate end datetime.
REPLICATION_MAX_FUNCTION Allows a provider to specify the formula name to use for executing a server side max.
IGNORE_INTERVALS_ON_INITIAL_REPLICATE A list of tables which will skip dividing the replicate into chunks on the initial replicate.
CHECKCACHE_USE_PARENTID Indicates whether the CheckCache statement should be done against the parent key column. TRUE, FALSE
CREATE_SCHEMA_PROCEDURES Indicates stored procedures that can be used for generating schema files.

The following query retrieves the operators that can be used in the WHERE clause:

SELECT * FROM sys_sqlinfo WHERE Name = 'SUPPORTED_OPERATORS'

Note that individual tables may have different limitations or requirements on the WHERE clause; refer to the NoSQL Database section for more information.

Columns
Name Type Description
NAME String A component of SQL syntax, or a capability that can be processed on the server.
VALUE String Details about the supported SQL or SQL syntax.

sys_identity

Returns information about attempted modifications.

The following query retrieves the Ids of the modified rows in a batch operation:

SELECT * FROM sys_identity
Columns
Name Type Description
Id String The database-generated ID returned from a data modification operation.
Batch String An identifier for the batch. 1 for a single operation.
Operation String The result of the operation in the batch: INSERTED, UPDATED, or DELETED.
Message String SUCCESS or an error message if the update in the batch failed.

Advanced Configuration Properties

The advanced configuration properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure. Click the links for further details.

Connection

Property Description
UseLakeFormation When this property is set to true, the AWS Lake Formation service is used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through Okta, ADFS, Microsoft Entra ID, or PingFederate while providing a SAML assertion.

AWS Authentication

Property Description
AuthScheme The scheme used for authentication. Accepted entries are: Auto, TemporaryCredentials, AwsRootKeys, AwsIAMRoles, AwsEC2Roles, AwsMFA, ADFS, Okta, PingFederate, AwsCredentialsFile, AwsCognitoBasic, AwsCognitoSrp.
Domain Your AWS domain name. You can optionally choose to associate your domain name with AWS.
AWSAccessKey Your AWS account access key. This value is accessible from your AWS security credentials page.
AWSSecretKey Your AWS account secret key. This value is accessible from your AWS security credentials page.
AWSRoleARN The Amazon Resource Name of the role to use when authenticating.
AWSRegion The hosting region for your Amazon Web Services.
AWSCredentialsFile The path to the AWS Credentials File to be used for authentication.
AWSCredentialsFileProfile The name of the profile to be used from the supplied AWSCredentialsFile.
AWSSessionToken Your AWS session token.
AWSExternalId A unique identifier that might be required when you assume a role in another account.
MFASerialNumber The serial number of the MFA device if one is being used.
MFAToken The temporary token available from your MFA device.
TemporaryTokenDuration The amount of time (in seconds) a temporary token will last.
AWSCognitoRegion The hosting region for AWS Cognito.
AWSUserPoolId The User Pool Id.
AWSUserPoolClientAppId The User Pool Client App Id.
AWSUserPoolClientAppSecret Optional. The User Pool Client App Secret.
AWSIdentityPoolId The Identity Pool Id.

SSO

Property Description
User The IDP user used to authenticate the IDP via SSO.
Password The password used to authenticate the IDP user via SSO.
SSOLoginURL The identity provider's login URL.
SSOProperties Additional properties required to connect to the identity provider in a semicolon-separated list.
SSOExchangeUrl The URL used for consuming the SAML response and exchanging it for service specific credentials.

SSL

Property Description
SSLServerCert The certificate to be accepted from the server when connecting using TLS/SSL.

Schema

Property Description
Location A path to the directory that contains the schema files defining tables, views, and stored procedures.
BrowsableSchemas This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA, SchemaB, SchemaC.
Tables This property restricts the tables reported to a subset of the available tables. For example, Tables=TableA, TableB, TableC.
Views Restricts the views reported to a subset of the available tables. For example, Views=ViewA, ViewB, ViewC.

Miscellaneous

Property Description
AutoDetectIndex A boolean indicating if secondary indexes should be automatically detected based on the query used.
FlattenArrays By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. Set FlattenArrays to the number of elements you want to return from nested arrays.
FlattenObjects Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON.
FlexibleSchema Set FlexibleSchema to true to scan for additional metadata on the query result set. Otherwise, the metadata will remain the same.
GenerateSchemaFiles Indicates the user preference as to when schemas should be generated and saved.
IgnoreTypes Removes support for the specified types. For example, Time. These types will then be reported as strings instead.
MaximumRequestRetries The maximum number of times to retry a request.
MaxRows Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This takes precedence over LIMIT clauses.
Other These hidden properties are used only in specific use cases.
Pagesize Configures the maximum number of items that Amazon DynamoDB evaluates per API request.
PseudoColumns This property indicates whether or not to include pseudo columns as columns to the table.
RetryWaitTime The minimum number of milliseconds the provider will wait to retry a request.
RowScanDepth The maximum number of rows to scan to look for the columns available in a table.
SeparatorCharacter The character or characters used to denote hierarchy.
ThreadCount The number of threads to use when selecting data via a parallel scan. Setting ThreadCount to 1 will disable parallel scans.
Timeout The value in seconds until the timeout error is thrown, canceling the operation.
TypeDetectionScheme Determines how the data types of columns are detected.
UseBatchWriteItemOperation When enabled, the provider uses the BatchWriteItem operation for handling updates and INSERTs. By default, the provider uses the ExecuteStatement/BatchExecuteStatement operations. You need to enable BatchWriteItem only when inserting or updating binary/binary-set data, because ExecuteStatement/BatchExecuteStatement does not support manipulating binary fields.
UseConsistentReads Whether to always use Consistent Reads or not when querying DynamoDB.
UserDefinedViews A filepath pointing to the JSON configuration file containing your custom views.
UseSimpleNames Boolean determining if simple names should be used for tables and columns.

Connection

This section provides a complete list of connection properties you can configure.

Property Description
UseLakeFormation When this property is set to true, the AWS Lake Formation service is used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through Okta, ADFS, Microsoft Entra ID, or PingFederate while providing a SAML assertion.

UseLakeFormation

When this property is set to true, the AWS Lake Formation service is used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through Okta, ADFS, Microsoft Entra ID, or PingFederate while providing a SAML assertion.

Data Type

bool

Default Value

false

Remarks

When this property is set to true, the AWS Lake Formation service is used to retrieve temporary credentials, which enforce access policies against the user based on the configured IAM role. The service can be used when authenticating through Okta, ADFS, Microsoft Entra ID, or PingFederate while providing a SAML assertion.

AWS Authentication

This section provides a complete list of AWS authentication properties you can configure.

Property Description
AuthScheme The scheme used for authentication. Accepted entries are: Auto, TemporaryCredentials, AwsRootKeys, AwsIAMRoles, AwsEC2Roles, AwsMFA, ADFS, Okta, PingFederate, AwsCredentialsFile, AwsCognitoBasic, AwsCognitoSrp.
Domain Your AWS domain name. You can optionally choose to associate your domain name with AWS.
AWSAccessKey Your AWS account access key. This value is accessible from your AWS security credentials page.
AWSSecretKey Your AWS account secret key. This value is accessible from your AWS security credentials page.
AWSRoleARN The Amazon Resource Name of the role to use when authenticating.
AWSRegion The hosting region for your Amazon Web Services.
AWSCredentialsFile The path to the AWS Credentials File to be used for authentication.
AWSCredentialsFileProfile The name of the profile to be used from the supplied AWSCredentialsFile.
AWSSessionToken Your AWS session token.
AWSExternalId A unique identifier that might be required when you assume a role in another account.
MFASerialNumber The serial number of the MFA device if one is being used.
MFAToken The temporary token available from your MFA device.
TemporaryTokenDuration The amount of time (in seconds) a temporary token will last.
AWSCognitoRegion The hosting region for AWS Cognito.
AWSUserPoolId The User Pool Id.
AWSUserPoolClientAppId The User Pool Client App Id.
AWSUserPoolClientAppSecret Optional. The User Pool Client App Secret.
AWSIdentityPoolId The Identity Pool Id.

AuthScheme

The scheme used for authentication. Accepted entries are: Auto, TemporaryCredentials, AwsRootKeys, AwsIAMRoles, AwsEC2Roles, AwsMFA, ADFS, Okta, PingFederate, AwsCredentialsFile, AwsCognitoBasic, AwsCognitoSrp.

Data Type

string

Default Value

AwsRootKeys

Remarks

Use the following options to select your authentication scheme:

  • Auto: Set this to have the connector attempt to automatically resolve the proper authentication scheme to use based on the other connection properties specified.
  • TemporaryCredentials: Set this to leverage temporary security credentials alongside a session token to connect.
  • AwsRootKeys: Set this to use the root user access key and secret. Useful for quick testing, but production use cases are encouraged to use an option with narrower permissions.
  • AwsIAMRoles: Set to use IAM Roles for the connection.
  • AwsEC2Roles: Set this to automatically use IAM Roles assigned to the EC2 machine the Jitterbit Connector for Amazon DynamoDB is currently running on.
  • AwsMFA: Set to use multi-factor authentication.
  • Okta: Set to use a single sign on connection with OKTA as the identity provider.
  • ADFS: Set to use a single sign on connection with ADFS as the identity provider.
  • PingFederate: Set to use a single sign on connection with PingFederate as the identity provider.
  • AwsCredentialsFile: Set to use a credential file for authentication.
  • AwsCognitoSrp: Set to use Cognito-based authentication. This is recommended over AwsCognitoBasic because this option does NOT send the password to the server for authentication; instead, it uses the SRP protocol.
  • AwsCognitoBasic: Set to use Cognito-based authentication.

Domain

Your AWS domain name. You can optionally choose to associate your domain name with AWS.

Data Type

string

Default Value

amazonaws.com

Remarks

If you do not have a unique AWS domain name, leave this value specified as amazonaws.com.

AWSAccessKey

Your AWS account access key. This value is accessible from your AWS security credentials page.

Data Type

string

Default Value

""

Remarks

Your AWS account access key. This value is accessible from your AWS security credentials page:

  1. Sign into the AWS Management console with the credentials for your root account.
  2. Select your account name or number and select My Security Credentials in the menu that is displayed.
  3. Click Continue to Security Credentials and expand the Access Keys section to manage or create root account access keys.

AWSSecretKey

Your AWS account secret key. This value is accessible from your AWS security credentials page.

Data Type

string

Default Value

""

Remarks

Your AWS account secret key. This value is accessible from your AWS security credentials page:

  1. Sign into the AWS Management console with the credentials for your root account.
  2. Select your account name or number and select My Security Credentials in the menu that is displayed.
  3. Click Continue to Security Credentials and expand the Access Keys section to manage or create root account access keys.

AWSRoleARN

The Amazon Resource Name of the role to use when authenticating.

Data Type

string

Default Value

""

Remarks

When authenticating outside of AWS, it is common to use a Role for authentication instead of your direct AWS account credentials. Entering the AWSRoleARN will cause the Jitterbit Connector for Amazon DynamoDB to perform role-based authentication instead of using the AWSAccessKey and AWSSecretKey directly. The AWSAccessKey and AWSSecretKey must still be specified to perform this authentication. You cannot use the credentials of an AWS root user when setting AWSRoleARN; the AWSAccessKey and AWSSecretKey must be those of an IAM user.
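
For illustration only, a role-based connection string might combine these properties as follows; the access key, secret key, and Role ARN values are placeholders:

AuthScheme=AwsIAMRoles; AWSAccessKey=<IAM user access key>; AWSSecretKey=<IAM user secret key>; AWSRoleARN=arn:aws:iam::123456789012:role/MyDynamoDBRole; AWSRegion=NORTHERNVIRGINIA;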

AWSRegion

The hosting region for your Amazon Web Services.

Possible Values

OHIO, NORTHERNVIRGINIA, NORTHERNCALIFORNIA, OREGON, CAPETOWN, HONGKONG, JAKARTA, MUMBAI, OSAKA, SEOUL, SINGAPORE, SYDNEY, TOKYO, CENTRAL, BEIJING, NINGXIA, FRANKFURT, IRELAND, LONDON, MILAN, PARIS, STOCKHOLM, ZURICH, BAHRAIN, UAE, SAOPAULO, GOVCLOUDEAST, GOVCLOUDWEST

Data Type

string

Default Value

NORTHERNVIRGINIA

Remarks

The hosting region for your Amazon Web Services. Available values are OHIO, NORTHERNVIRGINIA, NORTHERNCALIFORNIA, OREGON, CAPETOWN, HONGKONG, JAKARTA, MUMBAI, OSAKA, SEOUL, SINGAPORE, SYDNEY, TOKYO, CENTRAL, BEIJING, NINGXIA, FRANKFURT, IRELAND, LONDON, MILAN, PARIS, STOCKHOLM, ZURICH, BAHRAIN, UAE, SAOPAULO, GOVCLOUDEAST, and GOVCLOUDWEST.

AWSCredentialsFile

The path to the AWS Credentials File to be used for authentication.

Data Type

string

Default Value

""

Remarks

The path to the AWS Credentials File to be used for authentication. See https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html for more information.

AWSCredentialsFileProfile

The name of the profile to be used from the supplied AWSCredentialsFile.

Data Type

string

Default Value

default

Remarks

The name of the profile to be used from the supplied AWSCredentialsFile. See https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html for more information.
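
For illustration, a credentials-file connection might be configured as follows; the file path and profile name are placeholders:

AuthScheme=AwsCredentialsFile; AWSCredentialsFile=C:\Users\User\.aws\credentials; AWSCredentialsFileProfile=analytics; AWSRegion=OREGON;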

AWSSessionToken

Your AWS session token.

Data Type

string

Default Value

""

Remarks

Your AWS session token. This value can be retrieved in different ways. See the AWS Identity and Access Management User Guide for more information.

AWSExternalId

A unique identifier that might be required when you assume a role in another account.

Data Type

string

Default Value

""

Remarks

A unique identifier that might be required when you assume a role in another account.

MFASerialNumber

The serial number of the MFA device if one is being used.

Data Type

string

Default Value

""

Remarks

You can find the device for an IAM user by going to the AWS Management Console and viewing the user's security credentials. For virtual devices, this is actually an Amazon Resource Name (such as arn:aws:iam::123456789012:mfa/user).

MFAToken

The temporary token available from your MFA device.

Data Type

string

Default Value

""

Remarks

If MFA is required, this value will be used along with the MFASerialNumber to retrieve temporary credentials to log in. The temporary credentials available from AWS will only last up to 1 hour by default (see TemporaryTokenDuration). Once the time is up, the connection must be updated to specify a new MFA token so that new credentials may be obtained.
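
For illustration, an MFA connection might be configured as follows; the keys, serial number, and token values are placeholders:

AuthScheme=AwsMFA; AWSAccessKey=<IAM user access key>; AWSSecretKey=<IAM user secret key>; MFASerialNumber=arn:aws:iam::123456789012:mfa/user; MFAToken=123456; AWSRegion=NORTHERNVIRGINIA;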

TemporaryTokenDuration

The amount of time (in seconds) a temporary token will last.

Data Type

string

Default Value

3600

Remarks

Temporary tokens are used with both MFA and role-based authentication. Temporary tokens eventually time out, at which point a new temporary token must be obtained. When MFA is not used, this is handled transparently: the Jitterbit Connector for Amazon DynamoDB internally requests a new temporary token once the current temporary token has expired.

However, for connections that require MFA, a new MFAToken must be specified in the connection to retrieve a new temporary token. This is more intrusive, since it requires the user to update the connection. The minimum and maximum durations that can be specified depend largely on the type of authentication being used.

For role-based authentication, the minimum duration is 900 seconds (15 minutes) and the maximum is 3600 (1 hour). Even if MFA is used with role-based authentication, 3600 is still the maximum.

For MFA authentication by itself (using an IAM user or root user), the minimum is 900 seconds (15 minutes) and the maximum is 129600 (36 hours).

AWSCognitoRegion

The hosting region for AWS Cognito.

Possible Values

OHIO, NORTHERNVIRGINIA, NORTHERNCALIFORNIA, OREGON, CAPETOWN, HONGKONG, MUMBAI, OSAKA, SEOUL, SINGAPORE, SYDNEY, TOKYO, CENTRAL, BEIJING, NINGXIA, FRANKFURT, IRELAND, LONDON, MILAN, PARIS, STOCKHOLM, BAHRAIN, SAOPAULO, GOVCLOUDEAST, GOVCLOUDWEST

Data Type

string

Default Value

NORTHERNVIRGINIA

Remarks

The hosting region for AWS Cognito. Available values are OHIO, NORTHERNVIRGINIA, NORTHERNCALIFORNIA, OREGON, CAPETOWN, HONGKONG, MUMBAI, OSAKA, SEOUL, SINGAPORE, SYDNEY, TOKYO, CENTRAL, BEIJING, NINGXIA, FRANKFURT, IRELAND, LONDON, MILAN, PARIS, STOCKHOLM, BAHRAIN, SAOPAULO, GOVCLOUDEAST, and GOVCLOUDWEST.

AWSUserPoolId

The User Pool Id.

Data Type

string

Default Value

""

Remarks

You can find this in AWS Cognito -> Manage User Pools -> select your user pool -> General settings -> Pool Id.

AWSUserPoolClientAppId

The User Pool Client App Id.

Data Type

string

Default Value

""

Remarks

You can find this in AWS Cognito -> Manage User Pools -> select your user pool -> General settings -> App clients -> App client Id.

AWSUserPoolClientAppSecret

Optional. The User Pool Client App Secret.

Data Type

string

Default Value

""

Remarks

You can find this in AWS Cognito -> Manage User Pools -> select your user pool -> General settings -> App clients -> App client secret.

AWSIdentityPoolId

The Identity Pool Id.

Data Type

string

Default Value

""

Remarks

You can find this in AWS Cognito -> Manage Identity Pools -> select your identity pool -> Edit identity pool -> Identity Pool Id

SSO

This section provides a complete list of SSO properties you can configure.

Property Description
User The IDP user used to authenticate the IDP via SSO.
Password The password used to authenticate the IDP user via SSO.
SSOLoginURL The identity provider's login URL.
SSOProperties Additional properties required to connect to the identity provider in a semicolon-separated list.
SSOExchangeUrl The URL used for consuming the SAML response and exchanging it for service specific credentials.

User

The IDP user used to authenticate the IDP via SSO.

Data Type

string

Default Value

""

Remarks

Together with Password, this field is used to authenticate in SSO connections against the Amazon DynamoDB server.

Password

The password used to authenticate the IDP user via SSO.

Data Type

string

Default Value

""

Remarks

The User and Password are together used in SSO connections to authenticate with the server.

SSOLoginURL

The identity provider's login URL.

Data Type

string

Default Value

""

Remarks

The identity provider's login URL.

SSOProperties

Additional properties required to connect to the identity provider in a semicolon-separated list.

Data Type

string

Default Value

""

Remarks

Additional properties required to connect to the identity provider in a semicolon-separated list. SSOProperties is used in conjunction with the AWSRoleARN and AWSPrincipalARN. The following sections provide examples using the ADFS and Okta identity providers.

ADFS

To connect to ADFS, set the AuthScheme to ADFS and supply the User, Password, and SSOLoginURL properties, as shown in the example connection string below:

Example connection string:

AuthScheme=ADFS; AWSRegion=Ireland; User=user@cdata.com; Password=CH8WerW121235647iCa6; SSOLoginURL='https://adfs.domain.com'; AWSRoleArn=arn:aws:iam:1234:role/ADFS_SSO; AWSPrincipalArn=arn:aws:iam:1234:saml-provider/ADFSProvider; S3StagingDirectory=s3://athena/staging;
Okta

To connect to Okta, set the AuthScheme to Okta and supply the User, Password, and SSOLoginURL properties, as shown in the example connection string at the end of this section.

If you are using a trusted application or proxy that overrides the Okta client request, or if you are configuring MFA, you must use combinations of SSOProperties to authenticate using Okta. Set any of the following, as applicable:

  • APIToken: When authenticating a user via a trusted application or proxy that overrides the Okta client request context, set this to the API Token the customer created from the Okta organization.

  • MFAType: If you have configured the MFA flow, set this to one of the following supported types: OktaVerify, Email, or SMS.

  • MFAPassCode: If you have configured the MFA flow, set this to a valid passcode.

    If you set this to empty or an invalid value, the connector issues a one-time password challenge to your device or email. After the passcode is received, reopen the connection with the retrieved one-time password set in the MFAPassCode connection property.

  • MFARememberDevice: True by default. Okta supports remembering devices when MFA is required. If remembering devices is allowed according to the configured authentication policies, the connector sends a device token to extend MFA authentication lifetime. If you do not want MFA to be remembered, set this variable to False.

Example connection string:

AuthScheme=Okta; AWSRegion=Ireland; User=user@cdata.com; Password=CH8WerW121235647iCa6; SSOLoginURL='https://cdata-us.okta.com/home/amazon_aws/0oa35m8arsAL5f5NrE6NdA356/272'; SSOProperties='ApiToken=01230GGG2ceAnm_tPAf4MhiMELXZ0L0N1pAYrO1VR-hGQSf;'; AWSRoleArn=arn:aws:iam:1234:role/Okta_SSO; AWSPrincipalARN=arn:aws:iam:1234:saml-provider/OktaProvider; S3StagingDirectory=s3://athena/staging;

SSOExchangeUrl

The URL used for consuming the SAML response and exchanging it for service specific credentials.

Data Type

string

Default Value

""

Remarks

The Jitterbit Connector for Amazon DynamoDB will use the URL specified here to consume a SAML response and exchange it for service-specific credentials. The retrieved credentials are the final piece of the SSO connection and are used to communicate with Amazon DynamoDB.

SSL

This section provides a complete list of SSL properties you can configure.

Property Description
SSLServerCert The certificate to be accepted from the server when connecting using TLS/SSL.

SSLServerCert

The certificate to be accepted from the server when connecting using TLS/SSL.

Data Type

string

Default Value

""

Remarks

If using a TLS/SSL connection, this property can be used to specify the TLS/SSL certificate to be accepted from the server. Any other certificate that is not trusted by the machine is rejected.

This property can take the following forms:

Description Example
A full PEM Certificate (example shortened for brevity) -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE-----
A path to a local file containing the certificate C:\\cert.cer
The public key (example shortened for brevity) -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY-----
The MD5 Thumbprint (hex values can also be either space or colon separated) ecadbdda5a1529c58a1e9e09828d70e4
The SHA1 Thumbprint (hex values can also be either space or colon separated) 34a929226ae0819f2ec14b4a3d904f801cbb150d

If not specified, any certificate trusted by the machine is accepted.

Certificates are validated as trusted by the machine based on the System's trust store. The trust store used is the 'javax.net.ssl.trustStore' value specified for the system. If no value is specified for this property, Java's default trust store is used (for example, JAVA_HOME\lib\security\cacerts).

Use '*' to accept all certificates. Note that this is not recommended due to security concerns.
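
For example, to pin the server certificate using the SHA1 thumbprint form (the value below is the sample thumbprint shown in the table above):

SSLServerCert=34a929226ae0819f2ec14b4a3d904f801cbb150d;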

Schema

This section provides a complete list of schema properties you can configure.

Property Description
Location A path to the directory that contains the schema files defining tables, views, and stored procedures.
BrowsableSchemas This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA, SchemaB, SchemaC.
Tables This property restricts the tables reported to a subset of the available tables. For example, Tables=TableA, TableB, TableC.
Views Restricts the views reported to a subset of the available tables. For example, Views=ViewA, ViewB, ViewC.

Location

A path to the directory that contains the schema files defining tables, views, and stored procedures.

Data Type

string

Default Value

%APPDATA%\AmazonDynamoDB Data Provider\Schema

Remarks

The path to a directory which contains the schema files for the connector (.rsd files for tables and views, .rsb files for stored procedures). The folder location can be a relative path from the location of the executable. The Location property is only needed if you want to customize definitions (for example, change a column name, ignore a column, and so on) or extend the data model with new tables, views, or stored procedures.

If left unspecified, the default location is "%APPDATA%\AmazonDynamoDB Data Provider\Schema" with %APPDATA% being set to the user's configuration directory:

Platform %APPDATA%
Windows The value of the APPDATA environment variable
Mac ~/Library/Application Support
Linux ~/.config

BrowsableSchemas

This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.

Data Type

string

Default Value

""

Remarks

Listing the schemas from databases can be expensive. Providing a list of schemas in the connection string improves the performance.

Tables

This property restricts the tables reported to a subset of the available tables. For example, Tables=TableA,TableB,TableC.

Data Type

string

Default Value

""

Remarks

Listing the tables from some databases can be expensive. Providing a list of tables in the connection string improves the performance of the connector.

This property can also be used as an alternative to automatically listing tables if you already know which ones you want to work with and there would otherwise be too many to work with.

Specify the tables you want in a comma-separated list. Each table should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Tables=TableA,[TableB/WithSlash],WithCatalog.WithSchema.`TableC With Space`.

Note that when connecting to a data source with multiple schemas or catalogs, you will need to provide the fully qualified name of the table in this property, as in the last example here, to avoid ambiguity between tables that exist in multiple catalogs or schemas.

Views

Restricts the views reported to a subset of the available tables. For example, Views=ViewA,ViewB,ViewC.

Data Type

string

Default Value

""

Remarks

Listing the views from some databases can be expensive. Providing a list of views in the connection string improves the performance of the connector.

This property can also be used as an alternative to automatically listing views if you already know which ones you want to work with and there would otherwise be too many to work with.

Specify the views you want in a comma-separated list. Each view should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Views=ViewA,[ViewB/WithSlash],WithCatalog.WithSchema.`ViewC With Space`.

Note that when connecting to a data source with multiple schemas or catalogs, you will need to provide the fully qualified name of the view in this property, as in the last example here, to avoid ambiguity between views that exist in multiple catalogs or schemas.

Miscellaneous

This section provides a complete list of miscellaneous properties you can configure.

Property Description
AutoDetectIndex A boolean indicating if secondary indexes should be automatically detected based on the query used.
FlattenArrays By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. Set FlattenArrays to the number of elements you want to return from nested arrays.
FlattenObjects Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON.
FlexibleSchema Set FlexibleSchema to true to scan for additional metadata on the query result set. Otherwise, the metadata will remain the same.
GenerateSchemaFiles Indicates the user preference as to when schemas should be generated and saved.
IgnoreTypes Removes support for the specified types. For example, Time. These types will then be reported as strings instead.
MaximumRequestRetries The maximum number of times to retry a request.
MaxRows Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This takes precedence over LIMIT clauses.
Other These hidden properties are used only in specific use cases.
Pagesize Configures the maximum number of items that Amazon DynamoDB evaluates per API request.
PseudoColumns This property indicates whether or not to include pseudo columns as columns to the table.
RetryWaitTime The minimum number of milliseconds the provider will wait to retry a request.
RowScanDepth The maximum number of rows to scan to look for the columns available in a table.
SeparatorCharacter The character or characters used to denote hierarchy.
ThreadCount The number of threads to use when selecting data via a parallel scan. Setting ThreadCount to 1 will disable parallel scans.
Timeout The value in seconds until the timeout error is thrown, canceling the operation.
TypeDetectionScheme Determines how the data types of columns are detected.
UseBatchWriteItemOperation When enabled, the provider uses the BatchWriteItem operation for handling updates and INSERTs. By default, the provider uses the ExecuteStatement/BatchExecuteStatement operations. You need to enable BatchWriteItem only when inserting or updating binary/binary-set data, because ExecuteStatement/BatchExecuteStatement does not support manipulating binary fields.
UseConsistentReads Whether to always use Consistent Reads or not when querying DynamoDB.
UserDefinedViews A filepath pointing to the JSON configuration file containing your custom views.
UseSimpleNames Boolean determining if simple names should be used for tables and columns.

AutoDetectIndex

A boolean indicating if secondary indexes should be automatically detected based on the query used.

Data Type

bool

Default Value

true

Remarks

In DynamoDB, you can use secondary indexes to more quickly select data from a given table. By default, the connector attempts to automatically detect an index to use based on the query criteria. However, this may not always be desirable. To turn off the index-detection logic, set this property to false. If you have control over the query and would prefer to specify the index yourself, use the SecondaryIndexName pseudo column to specify which index to use (if any).
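
As a sketch (the table, column, and index names here are hypothetical), a query can name the index explicitly through the pseudo column in the WHERE clause:

SELECT * FROM Movies WHERE Genre = 'Comedy' AND SecondaryIndexName = 'GenreIndex'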

FlattenArrays

By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. Set FlattenArrays to the number of elements you want to return from nested arrays.

Data Type

string

Default Value

""

Remarks

By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. This is only recommended for arrays that are expected to be short.

Set FlattenArrays to the number of elements you want to return from nested arrays. The specified elements are returned as columns. The zero-based index is concatenated to the column name. Other elements are ignored.

For example, you can return an arbitrary number of elements from an array of strings. Consider a column named languages that contains the following array:

["FLOW-MATIC","LISP","COBOL"]

When FlattenArrays is set to 1, the preceding array is flattened into the following table:

Column Name Column Value
languages_0 FLOW-MATIC
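
With that setting, the flattened element can then be selected like an ordinary column; the table name Customers below is a placeholder:

SELECT languages_0 FROM Customers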

FlattenObjects

Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON.

Data Type

bool

Default Value

true

Remarks

Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON. The property name is concatenated onto the object name with an underscore to generate the column name.

For example, you can flatten the nested objects below (stored in a column named grades) at connection time:

[
     { "grade": "A", "score": 2 },
     { "grade": "A", "score": 6 },
     { "grade": "A", "score": 10 },
     { "grade": "A", "score": 9 },
     { "grade": "B", "score": 14 }
]

When FlattenObjects is set to true and FlattenArrays is set to 1, the preceding array is flattened into the following table:

Column Name Column Value
grades_0_grade A
grades_0_score 2

FlexibleSchema

Set FlexibleSchema to true to scan for additional metadata on the query result set. Otherwise, the metadata will remain the same.

Data Type

bool

Default Value

true

Remarks

Set FlexibleSchema to true to scan for additional metadata on the query result set. Otherwise, the metadata will remain the same.

GenerateSchemaFiles

Indicates the user preference as to when schemas should be generated and saved.

Possible Values

Never, OnUse, OnStart, OnCreate

Data Type

string

Default Value

Never

Remarks

This property outputs schemas to .rsd files in the path specified by Location.

Available settings are the following:

  • Never: A schema file will never be generated.
  • OnUse: A schema file will be generated the first time a table is referenced, provided the schema file for the table does not already exist.
  • OnStart: A schema file will be generated at connection time for any tables that do not currently have a schema file.
  • OnCreate: A schema file will be generated when a CREATE TABLE SQL query is executed.

Note that if you want to regenerate a file, you will first need to delete it.

Generate Schemas with SQL

When you set GenerateSchemaFiles to OnUse, the connector generates schemas as you execute SELECT queries. Schemas are generated for each table referenced in the query.

When you set GenerateSchemaFiles to OnCreate, schemas are only generated when a CREATE TABLE query is executed.

Generate Schemas on Connection

Another way to use this property is to obtain schemas for every table in your database when you connect. To do so, set GenerateSchemaFiles to OnStart and connect.

IgnoreTypes

Removes support for the specified types. For example, Time. These types will then be reported as strings instead.

Data Type

string

Default Value

Datetime,Date,Time

Remarks

Removes support for the specified types. For example, Time. These types will then be reported as strings instead.

MaximumRequestRetries

The maximum number of times to retry a request.

Data Type

string

Default Value

4

Remarks

MaximumRequestRetries is the maximum number of times the connector will retry a request when the problem has been detected as temporary (errors like "unknown error", network issues, and exceeding the maximum threshold per table). In this case, on the first retry the connector backs off and waits for the amount of time designated by RetryWaitTime. If that request also fails, the connector doubles the wait time, and doubles it again on each subsequent failure until the available retries have been exhausted.

For example, if RetryWaitTime is set to 2 seconds and MaximumRequestRetries is set to 5, the wait times (in seconds) before each attempt are: 0 -> 2 -> 4 -> 8 -> 16 -> 32.

MaxRows

Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This takes precedence over LIMIT clauses.

Data Type

int

Default Value

-1

Remarks

Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This takes precedence over LIMIT clauses.

Other

These hidden properties are used only in specific use cases.

Data Type

string

Default Value

""

Remarks

The properties listed below are available for specific use cases. Normal driver use cases and functionality should not require these properties.

Specify multiple properties in a semicolon-separated list.

Integration and Formatting
Property Description
DefaultColumnSize Sets the default length of string fields when the data source does not provide column length in the metadata. The default value is 2000.
ConvertDateTimeToGMT Determines whether to convert date-time values to GMT, instead of the local time of the machine.
RecordToFile=filename Records the underlying socket data transfer to the specified file.

Pagesize

Configures the maximum number of items that Amazon DynamoDB evaluates per API request.

Data Type

int

Default Value

-1

Remarks

Configures the maximum number of items that Amazon DynamoDB evaluates (not necessarily the number of matching items) per API request. If Amazon DynamoDB processes the number of items up to the limit while processing the results, it stops the operation and returns the matching values up to that point, along with a pagination token used to retrieve the rest of the data. Also, if the processed data set size exceeds 1 MB before Amazon DynamoDB reaches this page size limit, it stops the operation and returns the matching values up to that limit. The default value is -1, which lets the server calculate the maximum page size.

PseudoColumns

This property indicates whether or not to include pseudo columns as columns to the table.

Data Type

string

Default Value

""

Remarks

This setting is particularly helpful in Entity Framework, which does not allow you to set a value for a pseudo column unless it is a table column. The value of this connection setting is of the format "Table1=Column1, Table1=Column2, Table2=Column3". You can use the "*" character to include all tables and all columns; for example, "*=*".

RetryWaitTime

The minimum number of milliseconds the provider will wait to retry a request.

Data Type

string

Default Value

2000

Remarks

The value of this property is doubled on every retry to determine how long to wait until the next retry. Specify the maximum number of retries with MaximumRequestRetries.

RowScanDepth

The maximum number of rows to scan to look for the columns available in a table.

Data Type

int

Default Value

50

Remarks

The columns in a table must be determined by scanning table rows. This value determines the maximum number of rows that will be scanned.

Setting a high value may decrease performance. Setting a low value may prevent the data type from being determined properly, especially when there is null data.

SeparatorCharacter

The character or characters used to denote hierarchy.

Data Type

string

Default Value

.

Remarks

In order to flatten out structures such as Map and List attributes in DynamoDB, the connector needs a separator character that marks the boundary between a parent attribute and its children in a column name. If this value is "." and a column comes back with the name address.city, this indicates that there is a map attribute with a child called city. If your data has columns that already use a single period within the attribute name, set SeparatorCharacter to a different character or characters.
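
As a sketch (the table name is a placeholder), a flattened map attribute can then be referenced with the separator in the column name, quoted with the identifier characters reported in sys_sqlinfo:

SELECT [address.city] FROM Customers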

ThreadCount

The number of threads to use when selecting data via a parallel scan. Setting ThreadCount to 1 will disable parallel scans.

Data Type

string

Default Value

5

Remarks

Parallel scans allow data to be retrieved faster by splitting up the retrieval process across multiple threads. This can greatly improve performance when scanning data in Amazon DynamoDB. However, this will also consume your read units for a table much faster than a single thread. Consider your available cores, bandwidth, and read units for your tables before increasing the ThreadCount.

Timeout

The value in seconds until the timeout error is thrown, canceling the operation.

Data Type

int

Default Value

60

Remarks

If Timeout = 0, operations do not time out. The operations run until they complete successfully or until they encounter an error condition.

If Timeout expires and the operation is not yet complete, the connector throws an exception.

TypeDetectionScheme

Determines how the data types of columns are detected.

Possible Values

None, RowScan

Data Type

string

Default Value

RowScan

Remarks
Property Description
None Setting TypeDetectionScheme to None will return all columns as string type. Note: Even when set to None, the column names will still be scanned when Header=True.
RowScan Setting TypeDetectionScheme to RowScan will scan rows to heuristically determine the data type. The RowScanDepth determines the number of rows to be scanned. If no value is specified, RowScan will be used by default.

UseBatchWriteItemOperation

When enabled, the provider uses the BatchWriteItem operation for handling updates and INSERTs. By default, the provider uses the ExecuteStatement/BatchExecuteStatement operations. You need to enable BatchWriteItem only when inserting or updating binary/binary-set data, because ExecuteStatement/BatchExecuteStatement does not support manipulating binary fields.

Data Type

bool

Default Value

false

Remarks

When enabled, the provider uses the BatchWriteItem operation for handling updates and INSERTs. By default, the provider uses the ExecuteStatement/BatchExecuteStatement operations. You need to enable BatchWriteItem only when inserting or updating binary/binary-set data, because ExecuteStatement/BatchExecuteStatement does not support manipulating binary fields.

UseConsistentReads

Whether to always use Consistent Reads or not when querying DynamoDB.

Data Type

bool

Default Value

false

Remarks

This parameter is not supported on global secondary indexes. If you scan or query using a secondary index, Consistent Reads will not be used even if the property is set to true.

UserDefinedViews

A filepath pointing to the JSON configuration file containing your custom views.

Data Type

string

Default Value

""

Remarks

User Defined Views are defined in a JSON-formatted configuration file called UserDefinedViews.json. The connector automatically detects the views specified in this file.

You can also have multiple view definitions and control them using the UserDefinedViews connection property. When you use this property, only the specified views are seen by the connector.

This User Defined View configuration file is formatted as follows:

  • Each root element defines the name of a view.
  • Each root element contains a child element, called query, which contains the custom SQL query for the view.

For example:

{
    "MyView": {
        "query": "SELECT * FROM Account WHERE MyColumn = 'value'"
    },
    "MyView2": {
        "query": "SELECT * FROM MyTable WHERE Id IN (1,2,3)"
    }
}

Use the UserDefinedViews connection property to specify the location of your JSON configuration file. For example:

"UserDefinedViews", C:\Users\yourusername\Desktop\tmp\UserDefinedViews.json

Note that the specified path is not embedded in quotation marks.

UseSimpleNames

Boolean determining if simple names should be used for tables and columns.

Data Type

bool

Default Value

false

Remarks

Amazon DynamoDB tables and columns can use special characters in names that are normally not allowed in standard databases. UseSimpleNames makes the connector easier to use with traditional database tools.

Setting UseSimpleNames to true will simplify the names of tables and columns returned. It will enforce a naming scheme such that only alphanumeric characters and the underscore are valid for the displayed table and column names. Any nonalphanumeric characters will be converted to an underscore.