
Couchbase Connection Details

Introduction

Connector Version

This documentation is based on version 23.0.8803 of the connector.

Get Started

Couchbase Version Support

The Couchbase connector models Couchbase documents in a bucket as tables in a relational database. It can connect to Couchbase Server versions 4.0 and up, Enterprise Edition or Community Edition.

Establish a Connection

Connect to Couchbase

To connect to data, set the Server property to the hostname or IP address of the Couchbase server(s) you are authenticating to.

If your Couchbase server is configured to use SSL, you can enable it either by using an https URL for Server (like https://couchbase.server), or by setting the UseSSL property to True.

Couchbase Analytics

By default, the connector connects to the N1QL Query service. To connect to the Couchbase Analytics service instead, also set the CouchbaseService property to Analytics.

Couchbase Cloud

Set the following to connect to Couchbase Cloud:

  • AuthScheme: Set this to Basic.
  • ConnectionMode: Set this to Cloud.
  • DNSServer: Set this to a DNS server. In most cases, this should be a public DNS service like 1.1.1.1 or 8.8.8.8.
  • SSLServerCert: Set this to the TLS/SSL certificate to be accepted from the server. Any other certificate that is not trusted by the machine is rejected. Alternatively, set "*" to accept all certificates.

Authenticate to Couchbase

The connector supports several forms of authentication. Couchbase Cloud only accepts Standard authentication, while Couchbase Server accepts Standard authentication, client certificates, and credentials files.

Standard Authentication

To authenticate with standard authentication, set the following:

  • AuthScheme: Set this to Basic.
  • User: The user authenticating to Couchbase.
  • Password: The password of the user authenticating to Couchbase.
Client Certificates

The connector supports authenticating with client certificates when SSL is enabled. To use client certificate authentication, set the following properties:

  • AuthScheme: Set this to SSLCertificate.
  • SSLClientCertType: The type of client certificate set within SSLClientCert.
  • SSLClientCert: The client certificate in the format given by SSLClientCertType.
  • SSLClientCertPassword (optional): The password of the client certificate, if it is encrypted.
  • SSLClientCertSubject (optional): The subject of the client certificate, which, by default, is the first certificate found in the store. This is required if more than one certificate is available in the certificate store.
Credentials File

You can also authenticate using a credentials file containing multiple logins. This is included for legacy use and is not recommended when connecting to a Couchbase Server that supports role-based authentication.

  • AuthScheme: Set this to CredentialsFile.
  • CredentialsFile: The path to the credentials file. Refer to Couchbase's documentation for more information on the format of this file.

Schema Discovery and Indexes

Schema Detection and Indexes

The connector provides different modes for determining schemas and indexes. Below are some example configurations.

TableSupport=None

Disables all queries that find tables and discover columns. The only tables reported will be the ones defined in schema files. TypeDetectionScheme is ignored. The driver will only use the schema files found in the Location directory. Using this option without schema files will result in no tables being available.

TableSupport=Basic

SELECT `bucket`, `scope`, name FROM system:keyspaces

The driver will discover the available buckets, but will not look inside them for child tables. This is recommended when you want to reduce the time that schema detection takes, or when your buckets do not have primary indexes.

TableSupport=Full

SELECT `travel-sample`.* FROM `travel-sample` LIMIT 100

The driver will discover the available buckets, and look inside of each of those buckets for child tables. This provides the most flexible way to access nested data, but requires that each bucket on your server have primary indexes.

TypeDetectionScheme=None

The driver does not do any flavor detection or column type detection. Columns are always reported as VARCHAR. Child tables are still scanned, depending on the TableSupport setting.

TypeDetectionScheme=RowScan

The driver reads a sample of documents from a bucket and determines the data types. It does not do any flavor detection.

TypeDetectionScheme=Infer

This uses the N1QL INFER statement to determine what tables and columns exist. This does more flexible flavor detection than DocType, but is only available for Couchbase Enterprise.
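
For comparison with the queries shown for the other modes, the INFER statement this mode relies on looks similar to the following (a sketch against the travel-sample bucket used above; the exact sampling options the connector passes are not documented here):

INFER `travel-sample` WITH {"sample_size": 100}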

TypeDetectionScheme=DocType

SELECT META(`travel-sample`).id AS `Document.Id`, `travel-sample`.* FROM `travel-sample`

This discovers tables by checking each bucket and looking for different values of the "docType" field in the documents. For example, if the bucket beer-sample contains documents with "docType" = 'brewery' and "docType" = 'beer', this will generate three tables: beer-sample, beer-sample.brewery, and beer-sample.beer. Like RowScan, this will scan a sample of the documents in each flavor and determine the data type for each field.

NoSQL Database

Couchbase is a schema-free document database that provides high performance, availability, and scalability. These features are not necessarily incompatible with a standards-compliant query language like SQL-92.

The connector models the schema-free Couchbase objects into relational tables and translates SQL queries into N1QL or SQL++ (Analytics) queries to get the requested data. This section describes the various schemes the connector offers to bridge the gap between relational SQL and a document database.

Automatic Schema Discovery

When the connector first connects to Couchbase, it opens each bucket and scans a configurable number of rows from that bucket. It uses those rows to determine the columns in that bucket and their data types, as well as how to build flavored and child tables for any arrays within those documents. For Couchbase Enterprise version 4.5.1 and later, the connector can also be configured to use the INFER command when TypeDetectionScheme is set to INFER. This allows the connector to get a more accurate column listing for the bucket, and to detect more complex flavors.

When using the Analytics service, the connector only does column and child table detection. Flavored tables are provided by Couchbase itself using shadow datasets. Also, Analytics mode does not currently have INFER support, so only row scan is supported.

For more details, refer to Automatic Schema Discovery to see how flavored tables and child tables are modelled from Couchbase data. Setting NumericStrings is also recommended as it can avoid type detection issues with certain kinds of text data.

Custom Schema Definitions

Optionally, you can use Custom Schema Definitions to project your chosen relational structure on top of a Couchbase object. This allows you to define your chosen column names, their data types, and the locations of their values in the Couchbase document.

Query Mapping

See Query Mapping for more details on how various N1QL and SQL++ operations are represented as SQL.

Vertical Flattening

See Vertical Flattening for more details on how arrays and objects are mapped into fields.

JSON Functions

See JSON Functions for more details on how to extract data from raw JSON strings.

Automatic Schema Discovery

Child Tables

If the documents within a bucket contain fields with arrays, then the connector will expose those fields as their own tables in addition to exposing them as JSON aggregates on the main table. The structure of these child tables depends upon whether the array contains objects or primitive values.

Array Child Tables

If the arrays contain primitive values like numbers or strings, the child table will have only two columns: one called "Document.Id" which is the primary key of the document containing the array, and one called "value" which contains the value within the array. For example, if the bucket "Games" contains these documents:

/* Primary key "1" */
{
  "scores": [1,2,3]
}

/* Primary key "2" */
{
  "scores": [4,5,6]
}

The connector will build a table called "Games_scores" containing these rows:

Document.Id value
1 1
1 2
1 3
2 4
2 5
2 6
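
Once exposed, a child table can be queried like any other table. For example, the following query (a sketch based on the table above) returns the rows shown:

SELECT [Document.Id], [value] FROM Games_scores
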
Object Child Tables

If the arrays contain objects, the child table will have a column for each field that occurs within the objects, as well as a "Document.Id" column which contains the primary key of the document containing the array. For example, if the bucket "Games" contains these documents:

/* Primary key "1" */
{
  "moves": [
    {"piece": "pawn", "square": "c3"},
    {"piece": "rook", "square": "d5"}
  ]
}

/* Primary key "2" */
{
  "moves": [
    {"piece": "knight", "square": "f1"},
    {"piece": "bishop", "square": "e4"}
  ]
}

The connector will build a table called "Games_moves" containing these rows:

Document.Id piece square
1 pawn c3
1 rook d5
2 knight f1
2 bishop e4
NewChildJoinsMode

Note that the above data model is not fully relational, which has important limitations for use-cases that involve complex JOINs or DML operations on child tables. The NewChildJoinsMode connection property exposes an alternative data model which avoids these limitations. Please refer to its page in the connection property section of the documentation for more details.

Flavored Tables

The connector can also detect when there are multiple types of documents within the same bucket, as long as TypeDetectionScheme is set to Infer or DocType and CouchbaseService is set to N1QL. These different types of documents are exposed as their own tables containing only the appropriate rows.

For example, the bucket "Games" contains documents which have a "type" value of either "chess" or "football":

/* Primary key "1" */
{
  "type": "chess",
  "result": "stalemate"
}

/* Primary key "2" */
{
  "type": "chess",
  "result": "black win"
}

/* Primary key "3" */
{
  "type": "football",
  "score": 23
}

/* Primary key "4" */
{
  "type": "football",
  "score": 18
}

The connector will create three tables for this bucket: one called "Games" which contains all the documents:

Document.Id result score type
1 stalemate NULL chess
2 black win NULL chess
3 NULL 23 football
4 NULL 18 football

One called "Games.chess" which contains only documents where the type is "chess":

Document.Id result type
1 stalemate chess
2 black win chess

And one called "Games.football" which contains only documents where the type is "football":

Document.Id score type
3 23 football
4 18 football

Note that the connector will not include columns in a flavored table that are not defined on the documents in that flavor. For example, even though both the "result" and "score" columns are included on the base table, "Games.chess" only includes "result" and "Games.football" only includes "score".
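
Because flavored table names contain a period, they must be quoted when queried, like other compound names. For example, a query against the chess flavor might look like this (a sketch based on the tables above):

SELECT [Document.Id], result FROM [Games.chess] WHERE result = 'stalemate'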

Flavored Child Tables

It is also possible for a flavored table to contain arrays, which will become their own child tables. For example, if the bucket "Games" contains these documents:

/* Primary key "1" */
{
  "type": "chess",
  "results": ["stalemate", "white win"]
}

/* Primary key "2" */
{
  "type": "chess",
  "results": ["black win", "stalemate"]
}

/* Primary key "3" */
{
  "type": "football",
  "scores": [23, 12]
}

/* Primary key "4" */
{
  "type": "football",
  "scores": [18, 36]
}

Then the connector will generate these tables:

Table Name Child Field Flavor Condition
Games
Games_results results
Games_scores scores
Games.chess "type" = "chess"
Games.chess_results results "type" = "chess"
Games.football "type" = "football"
Games.football_scores scores "type" = "football"
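
Flavored child tables are queried the same way. For example (a sketch based on the table above, using the value column that array child tables expose):

SELECT [Document.Id], [value] FROM [Games.chess_results]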

Query Mapping

The connector maps SQL-92-compliant queries into corresponding N1QL or SQL++ queries. Although the mapping below is not complete, it should help you get a sense for the common patterns the connector uses during this transformation.

SELECT Queries

The SELECT statements are translated to the appropriate N1QL SELECT query as shown below. Due to the similarities between SQL-92 and N1QL, many queries will simply be direct translations.

One major difference is that when the schema for a given Couchbase bucket exists in the connector, a SELECT * query will be translated to directly select the individual fields in the bucket. The connector will also automatically create a Document.Id column based on the primary key of each document in the bucket.

SQL Query N1QL Query
SELECT * FROM users SELECT META(`users`).id AS `id`, ... FROM `users`
SELECT [Document.Id], status FROM users SELECT META(`users`).id AS `Document.Id`, `users`.`status` FROM `users`
SELECT * FROM users WHERE status = 'A' OR age = 50 SELECT META(`users`).id AS `id`, ... FROM `users` WHERE TOSTRING(`users`.`status`) = "A" OR TONUMBER(`users`.`age`) = 50
SELECT * FROM users WHERE name LIKE 'A%' SELECT META(`users`).id AS `id`, ... FROM `users` WHERE TOSTRING(`users`.`name`) LIKE "A%"
SELECT * FROM users WHERE status = 'A' ORDER BY [Document.Id] DESC SELECT META(`users`).id AS `id`, ... FROM `users` WHERE TOSTRING(`users`.`status`) = "A" ORDER BY META(`users`).id DESC
SELECT * FROM users WHERE status IN ('A', 'B') SELECT META(`users`).id, ... FROM `users` WHERE TOSTRING(`users`.`status`) IN ["A", "B"]

Note that conditions can include extra type functions if the connector detects that a type conversion may be necessary. You can disable these type conversions using the StrictComparison property. For clarity, the rest of the N1QL samples are shown without these extra conversion functions.

USE KEYS Optimizations

When a query has an equality or IN clause that targets the Document.Id column, and there is no OR clause to override it, the connector converts the Document.Id filter into a USE KEYS clause. This avoids the overhead of scanning an index because the document keys are already known to the N1QL engine (this optimization does not apply to the Analytics CouchbaseService).

SQL Query N1QL Query
SELECT * FROM users WHERE [Document.Id] = '1' SELECT ... FROM `users` USE KEYS ["1"]
SELECT * FROM users WHERE [Document.Id] IN ('2', '3') SELECT ... FROM `users` USE KEYS ["2", "3"]
SELECT * FROM users WHERE [Document.Id] = '4' OR [Document.Id] = '5' SELECT ... FROM `users` USE KEYS ["4", "5"]
SELECT * FROM users WHERE [Document.Id] = '6' AND status = 'A' SELECT ... FROM `users` USE KEYS ["6"] WHERE `status` = "A"

In addition to being used for SELECT queries, the same optimization is performed for DML operations as shown below.

Child Tables

As long as all the child tables in a query share the same parent, and they are combined using INNER JOINs on their Document.Id columns, the connector will combine the JOINs into a single UNNEST expression. Unlike N1QL UNNEST queries, you must explicitly JOIN with the base table if you want to access its fields.

SQL Query N1QL Query
SELECT * FROM users_posts SELECT META(`users`).id, `users_posts`.`text`, ... FROM `users` UNNEST `users`.`posts` AS `users_posts`
SELECT * FROM users INNER JOIN users_posts ON users.[Document.Id] = users_posts.[Document.Id] SELECT META(`users`).id, `users`.`name`, ..., `users_posts`.`text`, ... FROM `users` UNNEST `users`.`posts` AS `users_posts`
SELECT * FROM users INNER JOIN users_posts ... INNER JOIN users_comments ON ... SELECT ... FROM `users` UNNEST `users`.`posts` AS `users_posts` UNNEST `users`.`comments` AS `users_comments`
Flavor Tables

Flavored tables always have the appropriate condition included when you query, so that only documents from the flavor will be returned:

SQL Query N1QL Query
SELECT * FROM [users.subscriber] SELECT ... FROM `users` WHERE `docType` = "subscriber"
SELECT * FROM [users.subscriber] WHERE age > 50 SELECT ... FROM `users` WHERE `docType` = "subscriber" AND `age` > 50
Aggregate Queries

N1QL has several built-in aggregate functions. The connector makes extensive use of this for various aggregate queries. See some examples below:

SQL Query N1QL Query
SELECT Count(*) As Count FROM Orders SELECT Count(*) AS `count` FROM `Orders`
SELECT Sum(price) As total FROM Orders SELECT Sum(`price`) As `total` FROM `Orders`
SELECT cust_id, Sum(price) As total FROM Orders GROUP BY cust_id ORDER BY total SELECT `cust_id`, Sum(`price`) As `total` FROM `Orders` GROUP BY `cust_id` ORDER BY `total`
SELECT cust_id, ord_date, Sum(price) As total FROM Orders GROUP BY cust_id, ord_date Having total > 250 SELECT `cust_id`, `ord_date`, Sum(`price`) As `total` FROM `Orders` GROUP BY `cust_id`, `ord_date` Having `total` > 250

INSERT Statements

The SQL INSERT statement is mapped to the N1QL INSERT statement as shown below. This works the same for both top-level fields as well as fields produced by Vertical Flattening:

SQL Query N1QL Query
INSERT INTO users ([Document.Id], age, status) VALUES ('bcd001', 45, 'A') INSERT INTO `users` (KEY, VALUE) VALUES ('bcd001', { "age" : 45, "status" : "A" })
INSERT INTO users ([Document.Id], [metrics.posts]) VALUES ('bcd002', 0) INSERT INTO `users` (KEY, VALUE) VALUES ('bcd002', {"metrics": {"posts": 0}})
Child Table Inserts

Inserts on child tables are converted internally into N1QL UPDATEs using array operations. Since this does not create the top-level document, the Document.Id provided must refer to a document that already exists.

Another limitation of child table INSERTs is that multi-valued INSERTs must all use the same Document.Id. The connector will verify this before modifying any data and raise an error if this constraint is violated.

SQL Query N1QL Query
INSERT INTO users_ratings ([Document.Id], value) VALUES ('bcd001', 4.8), ('bcd001', 3.2) UPDATE `users` USE KEYS "bcd001" SET `ratings` = ARRAY_PUT(`ratings`, 4.8, 3.2)
INSERT INTO users_reviews ([Document.Id], score) VALUES ('bcd002', 'Great'), ('bcd002', 'Lacking') UPDATE `users` USE KEYS "bcd002" SET `reviews` = ARRAY_PUT(`reviews`, {"score": "Great"}, {"score": "Lacking"})

Bulk INSERT Statements

Bulk INSERTs are also supported. The SQL Bulk INSERT is converted as shown below:

INSERT INTO users#TEMP ([Document.Id], age, status) VALUES ('bcd001', 45, 'A')
INSERT INTO users#TEMP ([Document.Id], age, status) VALUES ('bcd002', 24, 'B')
INSERT INTO users SELECT * FROM users#TEMP

is converted to:

INSERT INTO `users` (KEY, VALUE) VALUES
  ('bcd001', {"age": 45, "status": "A"}),
('bcd002', {"age": 24, "status": "B"})

Like multi-valued INSERTs on child tables, all the rows in a bulk INSERT must also have the same Document.Id.

Update Statements

The SQL UPDATE statement is mapped to the N1QL UPDATE statement as shown below:

SQL Query N1QL Query
UPDATE users SET status = 'C' WHERE [Document.Id] = 'bcd001' UPDATE `users` USE KEYS ["bcd001"] SET `status` = "C"
UPDATE users SET status = 'C' WHERE age > 45 UPDATE `users` SET `status` = "C" WHERE `age` > 45
Child Table Updates

When updating a child table, the SQL query is converted to an UPDATE query using either a "FOR" expression or an "ARRAY" expression:

SQL Query N1QL Query
UPDATE users_ratings SET value = 5.0 WHERE value > 5.0 UPDATE `users` SET `ratings` = ARRAY CASE WHEN `value` > 5.0 THEN 5 ELSE `value` END FOR `value` IN `ratings` END
UPDATE users_reviews SET score = 'Unknown' WHERE score = '' UPDATE `users` SET `$child`.`score` = "Unknown" FOR `$child` IN `reviews` WHEN `$child`.`score` = "" END
Flavor Table Updates

Like flavor table SELECTs, UPDATEs on flavor tables always include the appropriate condition, so only documents belonging to the flavor are affected:

SQL Query N1QL Query
UPDATE [users.subscriber] SET status = 'C' WHERE age > 45 UPDATE `users` SET `status` = "C" WHERE `docType` = "subscriber" AND `age` > 45

Delete Statements

The SQL DELETE statement is mapped to the N1QL DELETE statement as shown below:

SQL Query N1QL Query
DELETE FROM users WHERE [Document.Id] = 'bcd001' DELETE FROM `users` USE KEYS ["bcd001"]
DELETE FROM users WHERE status = 'inactive' DELETE FROM `users` WHERE `status` = "inactive"
Child Table Deletes

When deleting from a child table, the SQL query is converted to an UPDATE query using an "ARRAY" expression:

SQL Query N1QL Query
DELETE FROM users_ratings WHERE value < 0 UPDATE `users` SET `ratings` = ARRAY `value` FOR `value` IN `ratings` WHEN NOT (`value` < 0) END
DELETE FROM users_reviews WHERE score = '' UPDATE `users` SET `reviews` = ARRAY `$child` FOR `$child` IN `reviews` WHEN NOT (`$child`.`score` = "") END
Flavor Table Deletes

Like flavor table SELECTs, DELETEs on flavor tables always include the appropriate condition, so only documents belonging to the flavor are affected:

SQL Query N1QL Query
DELETE FROM [users.subscriber] WHERE status = 'inactive' DELETE FROM `users` WHERE `docType` = "subscriber" AND `status` = "inactive"

Vertical Flattening

Example Document

/* Primary key "1" */
{
  "address" : {
    "building" : "1007",
    "coord" : [-73.856077, 40.848447],
    "street" : "Morris Park Ave",
    "zipcode" : "10462"
  },
  "borough" : "Bronx",
  "cuisine" : "Bakery",
  "grades" : [{
      "date" : "2014-03-03T00:00:00Z",
      "grade" : "A",
      "score" : 2
    }, {
      "date" : "2013-09-11T00:00:00Z",
      "grade" : "A",
      "score" : 6
    }, {
      "date" : "2013-01-24T00:00:00Z",
      "grade" : "A",
      "score" : 10
    }, {
      "date" : "2011-11-23T00:00:00Z",
      "grade" : "A",
      "score" : 9
    }, {
      "date" : "2011-03-10T00:00:00Z",
      "grade" : "B",
      "score" : 14
    }],
  "name" : "Morris Park Bake Shop",
  "restaurant_id" : "30075445"

}

Select Values In Objects

If the FlattenObjects property is configured to allow object flattening, then the connector will traverse objects and map the fields inside them as columns. For example, this query:

SELECT [address.building], [address.street] FROM restaurants

Would return this resultset:

address.building address.street
1007 Morris Park Ave

Select Values In Arrays

If the FlattenArrays property is configured to allow array flattening, then the connector will traverse arrays and map their individual values as columns. For example, if FlattenArrays were set to "2", then this query:

SELECT [address.coord.0], [address.coord.1] FROM restaurants

Would return this resultset:

address.coord.0 address.coord.1
-73.856077 40.848447

Note that array flattening should only be used in cases where you know the number of array items in advance, such as with "address.coord" which will always contain two items. For arrays like "grades" which can contain arbitrary numbers of items, consider using the child tables described in Automatic Schema Discovery instead, since they will allow you to read all of the values within the array.
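
For example, instead of flattening "grades", a query against the corresponding child table (a sketch; the name restaurants_grades follows the child table naming described in Automatic Schema Discovery) returns every element of the array:

SELECT [Document.Id], grade, score FROM restaurants_grades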

User-Defined Functions

User-defined functions are a feature of Couchbase 7 and up. They can be used with the connector like normal functions, but scoped functions require a special naming convention. The connector requires that functions already exist before they are used; to define them, refer to the Couchbase documentation on CREATE FUNCTION queries. These may be run at the Couchbase console or with the connector in QueryPassthrough mode.

Couchbase supports both scalar functions and functions that return results from subqueries. The connector supports scalar functions within its SQL dialect, but subquery functions can only be used when QueryPassthrough is enabled. The rest of this section covers the connector's SQL dialect and assumes that QueryPassthrough is disabled.

Global Functions

In both N1QL and Analytics mode, global user-defined functions can be accessed using either their simple names or their qualified names. The simple name is just the name of the function:

SELECT ageInYears(birthdate) FROM users

Global functions may also be invoked by qualifying them with the default namespace. Qualified names are quoted names that contain internal separators; the separator is a period by default, though this can be changed using the DataverseSeparator property. In both N1QL and Analytics the global namespace is called Default:

SELECT [Default.ageInYears](birthdate) FROM users

Calling global functions using simple names is recommended. While the Default qualifier is supported, it is only intended for cases where a UDF clashes with a standard SQL function that the connector would otherwise translate.

Scoped Functions

Both N1QL and Analytics also allow functions to be defined outside of a global context. In Analytics, functions can be attached to both dataverses and scopes, which are called using two-part and three-part names respectively. In N1QL, functions may only be attached to scopes, so only three-part names may be used.

/* N1QL AND Analytics */
SELECT [socialNetwork.accounts.ageInYears](birthdate) FROM [socialNetwork.accounts.users]

/* Analytics only */
SELECT [socialNetwork.ageInYears](birthdate) FROM [socialNetwork.accounts.users]

JSON Functions

The connector can return JSON structures as column values. The connector enables you to use standard SQL functions to work with these JSON structures. The examples in this section use the following array:

[
     { "grade": "A", "score": 2 },
     { "grade": "A", "score": 6 },
     { "grade": "A", "score": 10 },
     { "grade": "A", "score": 9 },
     { "grade": "B", "score": 14 }
]

JSON_EXTRACT

The JSON_EXTRACT function can extract individual values from a JSON object. The following query returns the values shown below based on the JSON path passed as the second argument to the function:

SELECT Name, JSON_EXTRACT(grades,'[0].grade') AS Grade, JSON_EXTRACT(grades,'[0].score') AS Score FROM Students;
Column Name Example Value
Grade A
Score 2

JSON_COUNT

The JSON_COUNT function returns the number of elements in a JSON array within a JSON object. The following query returns the number of elements specified by the JSON path passed as the second argument to the function:

SELECT Name, JSON_COUNT(grades,'[x]') AS NumberOfGrades FROM Students;
Column Name Example Value
NumberOfGrades 5

JSON_SUM

The JSON_SUM function returns the sum of the numeric values of a JSON array within a JSON object. The following query returns the total of the values specified by the JSON path passed as the second argument to the function:

SELECT Name, JSON_SUM(grades,'[x].score') AS TotalScore FROM Students;
Column Name Example Value
TotalScore 41

JSON_MIN

The JSON_MIN function returns the lowest numeric value of a JSON array within a JSON object. The following query returns the minimum value specified by the JSON path passed as the second argument to the function:

SELECT Name, JSON_MIN(grades,'[x].score') AS LowestScore FROM Students;
Column Name Example Value
LowestScore 2

JSON_MAX

The JSON_MAX function returns the highest numeric value of a JSON array within a JSON object. The following query returns the maximum value specified by the JSON path passed as the second argument to the function:

SELECT Name, JSON_MAX(grades,'[x].score') AS HighestScore FROM Students;
Column Name Example Value
HighestScore 14

DOCUMENT

The DOCUMENT function can be used to return a document as a JSON string. DOCUMENT(*) can be used with any type of SELECT query, including queries that select other columns, queries that select only DOCUMENT(*), and more complex queries such as JOINs.

SELECT [Document.Id], grade, score, DOCUMENT(*) FROM grades

For example, that query would return:

Document.Id grade score DOCUMENT
1 A 6 {"document.id":1, "grade":"A", "score":6}
2 A 10 {"document.id":1, "grade":"A", "score":10}
3 A 9 {"document.id":1, "grade":"A", "score":9}
4 B 14 {"document.id":1, "grade":"B", "score":14}

When used alone, DOCUMENT(*) returns the structure directly from Couchbase as if a N1QL or SQL++ SELECT * query were used. This means that no Document.Id value will be present since Couchbase does not include it automatically.

SELECT DOCUMENT(*) FROM grades

This query would return:

DOCUMENT
{"grades":{"grade":"A", "score":6"}}
{"grades":{"grade":"A", "score":10"}}
{"grades":{"grade":"A", "score":9"}}
{"grades":{"grade":"B", "score":14"}}

Custom Schema Definitions

In addition to Automatic Schema Discovery the connector also allows you to statically define the schema for your Couchbase object. Schemas are defined in text-based configuration files, which makes them easy to extend. You can call the CreateSchema stored procedure to generate a schema file; see Automatic Schema Discovery for more information.

Set the Location property to the file directory that will contain the schema file. The following sections show how to extend the resulting schema or write your own.

Example Document

Let's consider the documents below and extract the nested properties as their own columns:

/* Primary key "1" */
{
  "id": 12,
  "name": "Lohia Manufacturers Inc.",
  "homeaddress": {"street": "Main "Street", "city": "Chapel Hill", "state": "NC"},
  "workaddress": {"street": "10th "Street", "city": "Chapel Hill", "state": "NC"}
  "offices": ["Chapel Hill", "London", "New York"]
  "annual_revenue": 35600000
}
/* Primary key "2" */
{
  "id": 15,
  "name": "Piago Industries",
  "homeaddress": {street": "Main Street", "city": "San Francisco", "state": "CA"},
  "workaddress": {street": "10th Street", "city": "San Francisco", "state": "CA"}
  "offices": ["Durham", "San Francisco"]
  "annual_revenue": 42600000
}

Custom Schema Definition

<rsb:info title="Customers" description="Customers" other:dataverse="" other:bucket=customers"" other:flavorexpr="" other:flavorvalue="" other:isarray="false" other:pathspec="" other:childpath="">
  <attr name="document.id"        xs:type="string"  key="true" other:iskey="true" other:pathspec=""  />
  <attr name="annual_revenue"     xs:type="integer" other:iskey="false"           other:pathspec=""  other:field="annual_revenue" />
  <attr name="homeaddress.city"   xs:type="string"  other:iskey="false"           other:pathspec="{" other:field="homeaddress.city" />
  <attr name="homeaddress.state"  xs:type="string"  other:iskey="false"           other:pathspec="{" other:field="homeaddress.state" />
  <attr name="homeaddress.street" xs:type="string"  other:iskey="false"           other:pathspec="{" other:field="homeaddress.street" />
  <attr name="name"               xs:type="string"  other:iskey="false"           other:pathspec=""  other:field="name" />
  <attr name="id"                 xs:type="integer" other:iskey="false"           other:pathspec=""  other:field="id" />
  <attr name="offices"            xs:type="string"  other:iskey="false"           other:pathspec=""  other:field="offices" />
  <attr name="offices.0"          xs:type="string"  other:iskey="false"           other:pathspec="[" other:field="offices.0" />
  <attr name="offices.1"          xs:type="string"  other:iskey="false"           other:pathspec="[" other:field="offices.1" />
  <attr name="workaddress.city"   xs:type="string"  other:iskey="false"           other:pathspec="{" other:field="workaddress.city" />
  <attr name="workaddress.state"  xs:type="string"  other:iskey="false"           other:pathspec="{" other:field="workaddress.state" />
  <attr name="workaddress.street" xs:type="string"  other:iskey="false"           other:pathspec="{" other:field="workaddress.street" />

</rsb:info>

In Custom Schema Example, you will find the complete schema that contains the example above.

Table Properties

The schema above uses the following properties to define specific qualities for the whole table. All of them are required:

Property Meaning
other:dataverse The name of the dataverse the dataset belongs to. Empty if not an Analytics view.
other:bucket The name of the bucket or dataset within Couchbase
other:flavorexpr The URL encoded condition in a flavored table. For example, "%60docType%60%20%3D%20%22chess%22".
other:flavorvalue The name of the flavor in a flavored table. For example, "chess".
other:isarray Whether the table is an array child table.
other:pathspec This is used to interpret the separators within other:childpath. See Column Properties for more details.
other:childpath The path to the attribute that is used to UNNEST the child table. Empty if not a child table.
Column Properties

The schema above uses the following properties to define specific qualities for each column:

Property Meaning
name Required. The name of the column, lower-cased.
key Used to mark the primary key. Required for Document.Id but optional for other columns.
xs:type Required. The type of the column within the connector.
other:iskey Required. Must be the same value as key, or "false" if key is not included.
other:pathspec Required. This is used to interpret the separators within other:field.
other:field Required. The path to the field in Couchbase.

Note that the fields which are produced by vertical flattening use the same syntax for separating array values and field values. This introduces a potential ambiguity in cases like the following, where the connector exposes the columns "numeric_object.0" and "array.0":

{
  "numeric_object": {
    "0": 0
  },
  "array": [
    0
  ]
}

To ensure that the connector can distinguish between field and array accesses, the pathspec is used to determine whether each "." in the field is an array or an object. Each "{" represents a field access, while each "[" represents an array access. For example, with a field of "a.0.b.1" and a "pathspec" of "[{[", the N1QL expression "a[0].b[1]" would be generated. If instead the "pathspec" were "{{{", then the N1QL expression "a.`0`.b.`1`" would be generated.
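
As a sketch of how this works in practice (assuming a hypothetical bucket named "items" containing the document above), both columns look identical in SQL, and the pathspec alone determines the N1QL that each one generates:

SELECT [numeric_object.0], [array.0] FROM items
/* pathspec "{" yields numeric_object.`0`; pathspec "[" yields array[0] */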

Custom Schema Example

This section contains a complete schema. Set the Location property to the file directory that will contain the schema file. The info section enables a relational view of a Couchbase object. For more details, see Custom Schema Definitions. The table below allows the SELECT, INSERT, UPDATE, and DELETE commands as implemented in the GET, POST, MERGE, and DELETE sections of the schema below. The operations, such as couchbaseadoSysData, are internal implementations.

<rsb:script xmlns:rsb="http://www.rssbus.com/ns/rsbscript/2">
  <rsb:info title="Customers" description="Customers" other:dataverse="" other:bucket=customers"" other:flavorexpr="" other:flavorvalue="" other:isarray="false" other:pathspec="" other:childpath="">
    <attr name="document.id"        xs:type="string"  key="true" other:iskey="true" other:pathspec=""  />
    <attr name="annual_revenue"     xs:type="integer" other:iskey="false"           other:pathspec=""  other:field="annual_revenue" />
    <attr name="homeaddress.city"   xs:type="string"  other:iskey="false"           other:pathspec="{" other:field="homeaddress.city" />
    <attr name="homeaddress.state"  xs:type="string"  other:iskey="false"           other:pathspec="{" other:field="homeaddress.state" />
    <attr name="homeaddress.street" xs:type="string"  other:iskey="false"           other:pathspec="{" other:field="homeaddress.street" />
    <attr name="name"               xs:type="string"  other:iskey="false"           other:pathspec=""  other:field="name" />
    <attr name="id"                 xs:type="integer" other:iskey="false"           other:pathspec=""  other:field="id" />
    <attr name="offices"            xs:type="string"  other:iskey="false"           other:pathspec=""  other:field="offices" />
    <attr name="offices.0"          xs:type="string"  other:iskey="false"           other:pathspec="[" other:field="offices.0" />
    <attr name="offices.1"          xs:type="string"  other:iskey="false"           other:pathspec="[" other:field="offices.1" />
    <attr name="workaddress.city"   xs:type="string"  other:iskey="false"           other:pathspec="{" other:field="workaddress.city" />
    <attr name="workaddress.state"  xs:type="string"  other:iskey="false"           other:pathspec="{" other:field="workaddress.state" />
    <attr name="workaddress.street" xs:type="string"  other:iskey="false"           other:pathspec="{" other:field="workaddress.street" />
  </rsb:info>
</rsb:script>

Important Notes

Configuration Files and Their Paths

  • All references to adding configuration files and their paths refer to files and locations on the Jitterbit agent where the connector is installed. These paths are to be adjusted as appropriate depending on the agent and the operating system. If multiple agents are used in an agent group, identical files will be required on each agent.

Advanced Features

This section details a selection of advanced features of the Couchbase connector.

User Defined Views

The connector allows you to define virtual tables, called user defined views, whose contents are decided by a pre-configured query. These views are useful when you cannot directly control queries being issued to the drivers. See User Defined Views for an overview of creating and configuring custom views.

SSL Configuration

Use SSL Configuration to adjust how the connector handles TLS/SSL certificate negotiations. You can choose from various certificate formats; see the SSLServerCert property under "Connection String Options" for more information.

Proxy

To configure the connector using private agent proxy settings, select the Use Proxy Settings checkbox on the connection configuration screen.

Query Processing

The connector offloads as much of the SELECT statement processing as possible to Couchbase and then processes the rest of the query in memory (client-side).

User Defined Views

The Couchbase connector allows you to define a virtual table whose contents are decided by a pre-configured query. These are called User Defined Views, which are useful in situations where you cannot directly control the query being issued to the driver, e.g. when using the driver from Jitterbit. The User Defined Views can be used to define predicates that are always applied. If you specify additional predicates in the query to the view, they are combined with the query already defined as part of the view.

There are two ways to create user defined views:

  • Create a JSON-formatted configuration file defining the views you want.
  • DDL statements.

Define Views Using a Configuration File

User Defined Views are defined in a JSON-formatted configuration file called UserDefinedViews.json. The connector automatically detects the views specified in this file.

You can also have multiple view definitions and control them using the UserDefinedViews connection property. When you use this property, only the specified views are seen by the connector.

This User Defined View configuration file is formatted as follows:

  • Each root element defines the name of a view.
  • Each root element contains a child element, called query, which contains the custom SQL query for the view.

For example:

{
    "MyView": {
        "query": "SELECT * FROM Customer WHERE MyColumn = 'value'"
    },
    "MyView2": {
        "query": "SELECT * FROM MyTable WHERE Id IN (1,2,3)"
    }
}

Use the UserDefinedViews connection property to specify the location of your JSON configuration file. For example:

"UserDefinedViews", "C:\Users\yourusername\Desktop\tmp\UserDefinedViews.json"

Define Views Using DDL Statements

The connector is also capable of creating and altering the schema via DDL Statements such as CREATE LOCAL VIEW, ALTER LOCAL VIEW, and DROP LOCAL VIEW.

Create a View

To create a new view using DDL statements, provide the view name and query as follows:

CREATE LOCAL VIEW [MyViewName] AS SELECT * FROM Customers LIMIT 20;

If no JSON file exists, the above code creates one. The view is then created in the JSON configuration file and is now discoverable. The JSON file location is specified by the UserDefinedViews connection property.

Alter a View

To alter an existing view, provide the name of an existing view alongside the new query you would like to use instead:

ALTER LOCAL VIEW [MyViewName] AS SELECT * FROM Customers WHERE TimeModified > '3/1/2020';

The view is then updated in the JSON configuration file.

Drop a View

To drop an existing view, provide the name of the view you want to remove:

DROP LOCAL VIEW [MyViewName]

This removes the view from the JSON configuration file. It can no longer be queried.

Schema for User Defined Views

User Defined Views are exposed in the UserViews schema by default. This is done to avoid the view's name clashing with an actual entity in the data model. You can change the name of the schema used for UserViews by setting the UserViewsSchemaName property.

Work with User Defined Views

For example, suppose a User Defined View called UserViews.RCustomers is defined by the following query, which only lists customers in Raleigh:

SELECT * FROM Customers WHERE City = 'Raleigh';

An example of a query to the driver:

SELECT * FROM UserViews.RCustomers WHERE Status = 'Active';

Resulting in the effective query to the source:

SELECT * FROM Customers WHERE City = 'Raleigh' AND Status = 'Active';

That is a very simple example of a query to a User Defined View that is effectively a combination of the view query and the view definition. It is possible to compose these queries in much more complex patterns. All SQL operations are allowed in both queries and are combined when appropriate.

SSL Configuration

Customize the SSL Configuration

By default, the connector attempts to negotiate SSL/TLS by checking the server's certificate against the system's trusted certificate store.

To specify another certificate, see the SSLServerCert property for the available formats to do so.

Client SSL Certificates

The Couchbase connector also supports setting client certificates. Set the following to connect using a client certificate.

  • SSLClientCert: The name of the certificate store for the client certificate.
  • SSLClientCertType: The type of key store containing the TLS/SSL client certificate.
  • SSLClientCertPassword: The password for the TLS/SSL client certificate.
  • SSLClientCertSubject: The subject of the TLS/SSL client certificate.

Data Model

Overview

Depending upon the connection settings being used, the connector can present several different mappings between Couchbase entities and relational tables and views. For more details on each of these capabilities, refer to the NoSQL portion of this documentation.

  • When connecting to the N1QL query service, the connector models Couchbase buckets as relational tables. In addition, if TypeDetectionScheme is set to DocType or Infer, the connector will present different document flavors in each bucket as their own tables.
  • When connecting to the Analytics service, the connector models Couchbase datasets as relational views.
  • When connecting with either service, the connector can expose arrays of data as child tables or views.

Please see the Automatic Schema Discovery section for more details on how flavor and child tables are exposed. In addition, the NewChildJoinsMode connection property is recommended for workflows that make heavy use of child tables. The documentation for that connection property details the improvements it makes to the connector data model.

Dataverses, Scopes and Collections

Couchbase has different ways of grouping buckets and datasets depending on the CouchbaseService and version of Couchbase you are connecting to:

  • Couchbase organizes Analytics datasets into groups called dataverses. By default the connector exposes datasets from all dataverses using compound names like Default.users as described in DataverseSeparator. It is important to remember that these compound names must be quoted when used in queries, for example SELECT * FROM [Default.users]
  • You may also set the Dataverse property to limit the connector to exposing a single dataverse. This disables compound names, so view names will not include the dataverse.
  • When connecting to Couchbase 7 and above, the connector will use the scope, collection and bucket/dataset name to build table and view names. For example, a table with a name like crm.accounts.customers exposes the customers collection under the accounts scope of the crm bucket. These must be quoted the same as other compound names when used in queries, for example SELECT * FROM [crm.accounts.customers]

Live Metadata

All of the schemas provided by the connector are dynamically retrieved from Couchbase, so any changes in the buckets or fields within Couchbase will be reflected in the connector the next time you connect. You may also issue a reset query to refresh schemas without having to close the connection:

RESET SCHEMA CACHE

Stored Procedures

Stored procedures are function-like interfaces that extend the functionality of the connector beyond simple SELECT/INSERT/UPDATE/DELETE operations with Couchbase.

Stored procedures accept a list of parameters, perform their intended function, and then return any relevant response data from Couchbase, along with an indication of whether the procedure succeeded or failed.

Couchbase Connector Stored Procedures

Name Description
AddDocument Upsert entire JSON documents to Couchbase as-is.
CreateBucket Creates a new bucket in Couchbase.
CreateCollection Creates a collection under an existing scope
CreateSchema Creates a schema definition of a table in Couchbase. Results may change depending on the values of FlattenObjects, FlattenArrays, and TypeDetectionScheme.
CreateScope Creates a scope under an existing bucket
CreateUserTable An internal operation used when GenerateSchemaFiles=OnCreate
DeleteBucket Deletes a bucket (and all its collections and scopes, where supported)
DeleteCollection Deletes a collection (Couchbase 7 and up)
DeleteScope Deletes a scope and all its collections (Couchbase 7 and up)
FlushBucket Removes all documents from a bucket in Couchbase.
ListIndices Lists all indices available in Couchbase
ManageIndices Creates/Drops an index in a target bucket in Couchbase.

AddDocument

Upsert entire JSON documents to Couchbase as-is.
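
For example, the following call (a sketch; the bucket name, key, and document value are hypothetical) upserts a single document by key:

EXECUTE AddDocument
  @BucketName = 'Games',
  @ID = 'game::1001',
  @Document = '{"type": "chess", "result": "stalemate"}'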

Input
Name Type Required Description
BucketName String True The bucket to insert the document into.
SourceTable String False The name of the temp table containing ID and Document columns. Required if no ID is specified.
ID String False The primary key to insert the document under. Required if no SourceTable is specified.
Document String False The JSON text of the document to insert. Required if no SourceTable is specified.
Result Set Columns
Name Type Description
RowsAffected String The number of rows successfully updated

CreateBucket

Creates a new bucket in CouchBase.

Creating Buckets

Buckets using @AuthType 'none' can be created by specifying only the @Name, @AuthType, @BucketType, and @RamQuotaMB. The @ProxyPort option may also be required, depending upon what version of Couchbase you are connecting to.

EXECUTE CreateBucket
  @Name = 'Players',
  @AuthType = 'NONE',
  @BucketType = 'COUCHBASE',
  @RamQuotaMB = 100,
  @ProxyPort = 1234

When creating a bucket with @AuthType 'sasl', the @ProxyPort must not be provided, and the @SaslPassword is optional:

EXECUTE CreateBucket
  @Name = 'Players',
  @AuthType = 'SASL',
  @BucketType = 'COUCHBASE',
  @RamQuotaMB = 100

All other parameters can be used regardless of what @AuthType you provide.

Input
Name Type Required Description
Name String True The name of the bucket to create.
AuthType String True The type of authentication to use can be sasl or none.
BucketType String True The type of the bucket, can be memcached or couchbase.
EvictionPolicy String False What to evict from the cache if the bucket is full, can be fullEviction or valueOnly
FlushEnabled String False Enables or disables flush all support, can be 0 or 1.
ParallelDBAndViewCompaction String False Enables simultaneous compactions of the database and the views, can be true or false.
ProxyPort String False The proxy port, must be unused, required if authorization is not SASL.
RamQuotaMB String True The amount of RAM to allocate to the bucket, in megabytes.
ReplicaIndex String False Enables or disables replicate indexes, can be 1 or 0.
ReplicaNumber String False A number between 0 and 3, specifies number of replicas.
SaslPassword String False SASL password, may be provided if the authentication type is SASL.
ThreadsNumber String False A number between 2 and 8, specifies number of concurrent readers/writers.
CompressionMode String False Either Off (no compression), Passive (documents inserted compressed stay compressed) or Active (server can compress any document). On Couchbase Enterprise, Passive is the default.
ConflictResolutionType String False How the server will resolve conflicts between cluster nodes. Either lww (timestamp-based resolution) or seqno (revision ID-based resolution). Defaults to seqno on Couchbase Enterprise.
Result Set Columns
Name Type Description
Success String Whether or not the bucket was successfully created.

CreateCollection

Creates a collection under an existing scope
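
For example, the following call (a sketch; the bucket, scope, and collection names reuse the crm.accounts.customers example from the Data Model section) creates a new collection:

EXECUTE CreateCollection
  @Bucket = 'crm',
  @Scope = 'accounts',
  @Name = 'customers'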

Input
Name Type Required Description
Bucket String True The name of the bucket containing the collection.
Scope String True The name of the scope containing the collection.
Name String True The name of the collection to create.
Result Set Columns
Name Type Description
Success Bool Whether or not the collection was successfully created.

CreateSchema

Creates a schema definition of a table in Couchbase. Results may change depending on the values of FlattenObjects, FlattenArrays, and TypeDetectionScheme.

CreateSchema

Creates a local schema file (.rsd) from an existing table or view in the data model.

The schema file is created in the directory set in the Location connection property when this procedure is executed. You can edit the file to include or exclude columns, rename columns, or adjust column datatypes.

The connector checks the Location to determine if the names of any .rsd files match a table or view in the data model. If there is a duplicate, the schema file will take precedence over the default instance of this table in the data model. If a schema file is present in Location that does not match an existing table or view, a new table or view entry is added to the data model of the connector.
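
For example, the following call (a sketch; the table name and output path are hypothetical) writes a schema file for the Customers table into the chosen directory:

EXECUTE CreateSchema
  @TableName = 'Customers',
  @FileName = 'C:\Couchbase\Schemas\Customers.rsd'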

Input
Name Type Required Accepts Output Streams Description
TableName String True False The name of the table.
FileName String False False The full file path and name of the schema to generate. Ex : 'C:\Users\User\Desktop\Couchbase\sheet.rsd'
Overwrite String False False Will delete any existing schema file for this table.
FileStream String False True Stream to write the schema to. Only used if FileName is not provided.
Result Set Columns
Name Type Description
Result String Whether or not the schema was successfully built.
FileData String The content of the schema encoded as base64. Only returned if the FileName and FileStream are not provided.

CreateScope

Creates a scope under an existing bucket

Input
Name Type Required Description
Bucket String True The name of the bucket containing the scope.
Name String True The name of the scope to create.
Result Set Columns
Name Type Description
Success Bool Whether or not the scope was successfully created.

CreateUserTable

An internal operation used when GenerateSchemaFiles=OnCreate

Note

This procedure makes use of indexed parameters.

Indexed parameters facilitate providing multiple instances of a single parameter as inputs for the procedure.

Suppose there is an input parameter named Param#. To input multiple instances of an indexed parameter like this, execute:

EXEC ProcedureName Param#1 = "value1", Param#2 = "value2", Param#3 = "value3"

In the table below, indexed parameters are denoted with a # character at the end of their names.

Input
Name Type Required Description
CreateNotExist String False Whether an existing table is an error or not
TableName String False The name of the table to create
ColumnNames# String False For each column, its name
ColumnDataTypes# String False For each column, its type
ColumnSizes# String False For each column, its size (ignored)
ColumnScales# String False For each column, its scale (ignored)
ColumnIsNulls# String False For each column, whether it allows NULLs (ignored)
ColumnDefaults# String False For each column, its default value (ignored)
Location String False Where the schema file is generated
Result Set Columns
Name Type Description
AffectedTables String The number of tables created, either 0 or 1

DeleteBucket

Deletes a bucket (and all its collections and scopes, where supported)

Input
Name Type Required Description
Name String True The name of the bucket to delete.
Result Set Columns
Name Type Description
Success Bool Whether or not the bucket was successfully deleted.

DeleteCollection

Deletes a collection (Couchbase 7 and up)

Input
Name Type Required Description
Bucket String True The name of the bucket containing the collection.
Scope String True The name of the scope containing the collection.
Name String True The name of the collection to delete.
Result Set Columns
Name Type Description
Success Bool Whether or not the collection was successfully deleted.

DeleteScope

Deletes a scope and all its collections (Couchbase 7 and up)

Input
Name Type Required Description
Bucket String True The name of the bucket containing the scope.
Name String True The name of the scope to delete.
Result Set Columns
Name Type Description
Success Bool Whether or not the scope was successfully deleted.

FlushBucket

Removes all documents from a bucket in Couchbase.

Input
Name Type Required Description
Name String True The name of the bucket to flush. Flush must be enabled on this bucket.
Result Set Columns
Name Type Description
Success Bool Whether or not the bucket was successfully flushed.

ListIndices

Lists all indices available in Couchbase

Result Set Columns
Name Type Description
Id String The unique index ID
Datastore_id String The server hosting the indexed bucket
Namespace_id String The pool hosting the indexed bucket
Bucket_id String The bucket the index applies to if the index applies to a collection (Couchbase 7 and up). NULL otherwise.
Scope_id String The scope the index applies to if the index applies to a collection (Couchbase 7 and up). NULL otherwise.
Keyspace_id String The collection the index applies to, if the index applies to a collection (Couchbase 7 and up). The bucket the index applies to otherwise.
Index_key String A list of keys participating in the index
Condition String The N1QL filter that the index applies to
Is_primary String Whether the index is on the primary key
Name String The name of the index
State String Whether the index is available
Using String Whether the index is backed by GSI or a view

ManageIndices

Creates/Drops an index in a target bucket in Couchbase.

Building Indices

An anonymous primary index can be created with these parameters:

EXECUTE ManageIndices
  @BucketName = 'Players',
  @Action = 'CREATE',
  @IsPrimary = 'true',
  @IndexType = 'VIEW'

This is the same as executing this N1QL:

CREATE PRIMARY INDEX ON `Players` USING VIEW

A named primary index can be created by specifying an @Name, in addition to the parameters listed above:

EXECUTE ManageIndices
  @BucketName = 'Players',
  @Action = 'CREATE',
  @IsPrimary = 'true',
  @Name = 'Players_primary',
  @IndexType = 'VIEW'

A secondary index can be created by setting @IsPrimary to false and providing at least one expression.

EXECUTE ManageIndices
  @BucketName = 'Players',
  @Action = 'CREATE',
  @IsPrimary = 'false',
  @Name = 'Players_playtime_score',
  @Expressions = '["score", "playtime"]'

This is the same as running the following N1QL:

CREATE INDEX `Players_playtime_score` ON `Players`(score, playtime) USING GSI;

Multiple nodes and filters can also be provided to generate more complex indices. They must be provided as JSON lists:

EXECUTE ManageIndices
  @BucketName = 'Players',
  @Name = 'TopPlayers',
  @Expressions = '["score", "playtime"]',
  @Filter = '["topscore > 1000", "playtime > 600"]',
  @Nodes = '["127.0.0.1:8091", "192.168.0.100:8091"]'

This is the same as running the following N1QL:

CREATE INDEX `TopPlayers` ON `Players`(score, playtime) WHERE topscore > 1000 AND playtime > 600 USING GSI WITH { "nodes": ["127.0.0.1:8091", "192.168.0.100:8091"]};
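
Dropping an index uses the same procedure with @Action set to 'DROP' (a sketch; the index name matches the secondary index created above):

EXECUTE ManageIndices
  @BucketName = 'Players',
  @Action = 'DROP',
  @IsPrimary = 'false',
  @Name = 'Players_playtime_score'
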
Input
Name Type Required Description
BucketName String True The target bucket to create or drop the index from.
ScopeName String False The target scope to create or drop the index from (Couchbase 7 and up)
CollectionName String False The target collection to create or drop the index from (Couchbase 7 and up)
Action String True Specifies which action to perform on the index, can be Create or Drop.
Expressions String False A list of expressions or functions, encoded as JSON, that the index will be based off of. At least one is required if IsPrimary is set to false and the action is Create.
Name String False The name of the index to create or drop, required if IsPrimary is set to false.
IsPrimary String False Specifies whether the index should be a primary index. The default value is true.
Filters String False A list of filters, encoded as JSON, to apply on the index.
IndexType String False The type of index to create, can be GSI or View, only used if the action is Create. The default value is GSI.
ViewName String False Deprecated, included for compatibility only. Does nothing.
Nodes String False A list of nodes, encoded as JSON, that will contain the index; each node must include its port. Only used if the action is Create.
NumReplica String False How many replicas to create among the index nodes in the cluster.
Result Set Columns
Name Type Description
Success String Whether or not the index was successfully created or dropped.

System Tables

You can query the system tables described in this section to access schema information, information on data source functionality, and batch operation statistics.

Schema Tables

The following tables return database metadata for Couchbase:

  • sys_catalogs: Lists the available databases.
  • sys_schemas: Lists the available schemas.
  • sys_tables: Lists the available tables.
  • sys_tablecolumns: Describes the columns of the available tables and views.
  • sys_procedures: Lists the available stored procedures.
  • sys_procedureparameters: Describes stored procedure parameters.
  • sys_keycolumns: Describes the primary and foreign keys.
  • sys_foreignkeys: Describes the foreign keys.
  • sys_primarykeys: Describes the primary keys.
  • sys_indexes: Describes the available indexes.

Data Source Tables

The following tables return information about how to connect to and query the data source:

  • sys_connection_props: Returns information on the available connection properties.
  • sys_sqlinfo: Describes the SELECT queries that the connector can offload to the data source.

Query Information Tables

The following table returns query statistics for data modification queries:

  • sys_identity: Returns information about batch operations or single updates.

sys_catalogs

Lists the available databases.

The following query retrieves all databases determined by the connection string:

SELECT * FROM sys_catalogs
Columns
Name Type Description
CatalogName String The database name.

sys_schemas

Lists the available schemas.

The following query retrieves all available schemas:

SELECT * FROM sys_schemas
Columns
Name Type Description
CatalogName String The database name.
SchemaName String The schema name.

sys_tables

Lists the available tables.

The following query retrieves the available tables and views:

SELECT * FROM sys_tables
Columns
Name Type Description
CatalogName String The database containing the table or view.
SchemaName String The schema containing the table or view.
TableName String The name of the table or view.
TableType String The table type (table or view).
Description String A description of the table or view.
IsUpdateable Boolean Whether the table can be updated.

sys_tablecolumns

Describes the columns of the available tables and views.

The following query returns the columns and data types for the Customer table:

SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='Customer'
Columns
Name Type Description
CatalogName String The name of the database containing the table or view.
SchemaName String The schema containing the table or view.
TableName String The name of the table or view containing the column.
ColumnName String The column name.
DataTypeName String The data type name.
DataType Int32 An integer indicating the data type. This value is determined at run time based on the environment.
Length Int32 The storage size of the column.
DisplaySize Int32 The designated column's normal maximum width in characters.
NumericPrecision Int32 The maximum number of digits in numeric data. The column length in characters for character and date-time data.
NumericScale Int32 The column scale or number of digits to the right of the decimal point.
IsNullable Boolean Whether the column can contain null.
Description String A brief description of the column.
Ordinal Int32 The sequence number of the column.
IsAutoIncrement String Whether the column value is assigned in fixed increments.
IsGeneratedColumn String Whether the column is generated.
IsHidden Boolean Whether the column is hidden.
IsArray Boolean Whether the column is an array.
IsReadOnly Boolean Whether the column is read-only.
IsKey Boolean Indicates whether a field returned from sys_tablecolumns is the primary key of the table.

sys_procedures

Lists the available stored procedures.

The following query retrieves the available stored procedures:

SELECT * FROM sys_procedures
Columns
Name Type Description
CatalogName String The database containing the stored procedure.
SchemaName String The schema containing the stored procedure.
ProcedureName String The name of the stored procedure.
Description String A description of the stored procedure.
ProcedureType String The type of the procedure, such as PROCEDURE or FUNCTION.

sys_procedureparameters

Describes stored procedure parameters.

The following query returns information about all of the input parameters for the SelectEntries stored procedure:

SELECT * FROM sys_procedureparameters WHERE ProcedureName='SelectEntries' AND (Direction=1 OR Direction=2)
Columns
Name Type Description
CatalogName String The name of the database containing the stored procedure.
SchemaName String The name of the schema containing the stored procedure.
ProcedureName String The name of the stored procedure containing the parameter.
ColumnName String The name of the stored procedure parameter.
Direction Int32 An integer corresponding to the type of the parameter: input (1), input/output (2), or output (4). Input/output parameters can act as both input and output parameters.
DataTypeName String The name of the data type.
DataType Int32 An integer indicating the data type. This value is determined at run time based on the environment.
Length Int32 The number of characters allowed for character data. The number of digits allowed for numeric data.
NumericPrecision Int32 The maximum precision for numeric data. The column length in characters for character and date-time data.
NumericScale Int32 The number of digits to the right of the decimal point in numeric data.
IsNullable Boolean Whether the parameter can contain null.
IsRequired Boolean Whether the parameter is required for execution of the procedure.
IsArray Boolean Whether the parameter is an array.
Description String The description of the parameter.
Ordinal Int32 The index of the parameter.

sys_keycolumns

Describes the primary and foreign keys.

The following query retrieves the primary key for the Customer table:

SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='Customer'
Columns
Name Type Description
CatalogName String The name of the database containing the key.
SchemaName String The name of the schema containing the key.
TableName String The name of the table containing the key.
ColumnName String The name of the key column.
IsKey Boolean Whether the column is a primary key in the table referenced in the TableName field.
IsForeignKey Boolean Whether the column is a foreign key referenced in the TableName field.
PrimaryKeyName String The name of the primary key.
ForeignKeyName String The name of the foreign key.
ReferencedCatalogName String The database containing the primary key.
ReferencedSchemaName String The schema containing the primary key.
ReferencedTableName String The table containing the primary key.
ReferencedColumnName String The column name of the primary key.

sys_foreignkeys

Describes the foreign keys.

The following query retrieves all foreign keys which refer to other tables:

SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
Columns
Name Type Description
CatalogName String The name of the database containing the key.
SchemaName String The name of the schema containing the key.
TableName String The name of the table containing the key.
ColumnName String The name of the key column.
PrimaryKeyName String The name of the primary key.
ForeignKeyName String The name of the foreign key.
ReferencedCatalogName String The database containing the primary key.
ReferencedSchemaName String The schema containing the primary key.
ReferencedTableName String The table containing the primary key.
ReferencedColumnName String The column name of the primary key.
ForeignKeyType String Designates whether the foreign key is an import (points to other tables) or export (referenced from other tables) key.

sys_primarykeys

Describes the primary keys.

The following query retrieves the primary keys from all tables and views:

SELECT * FROM sys_primarykeys
Columns
Name Type Description
CatalogName String The name of the database containing the key.
SchemaName String The name of the schema containing the key.
TableName String The name of the table containing the key.
ColumnName String The name of the key column.
KeySeq String The sequence number of the primary key.
KeyName String The name of the primary key.

sys_indexes

Describes the available indexes. By filtering on indexes, you can write more selective queries with faster query response times.

The following query retrieves all indexes that are not primary keys:

SELECT * FROM sys_indexes WHERE IsPrimary='false'
Columns
Name Type Description
CatalogName String The name of the database containing the index.
SchemaName String The name of the schema containing the index.
TableName String The name of the table containing the index.
IndexName String The index name.
ColumnName String The name of the column associated with the index.
IsUnique Boolean True if the index is unique. False otherwise.
IsPrimary Boolean True if the index is a primary key. False otherwise.
Type Int16 An integer value corresponding to the index type: statistic (0), clustered (1), hashed (2), or other (3).
SortOrder String The sort order: A for ascending or D for descending.
OrdinalPosition Int16 The sequence number of the column in the index.

sys_connection_props

Returns information on the available connection properties and those set in the connection string.

When querying this table, the config connection string should be used:

jdbc:cdata:couchbase:config:

This connection string enables you to query this table without a valid connection.

The following query retrieves all connection properties that have been set in the connection string or set through a default value:

SELECT * FROM sys_connection_props WHERE Value <> ''
Columns
Name Type Description
Name String The name of the connection property.
ShortDescription String A brief description.
Type String The data type of the connection property.
Default String The default value if one is not explicitly set.
Values String A comma-separated list of possible values. A validation error is thrown if another value is specified.
Value String The value you set or a preconfigured default.
Required Boolean Whether the property is required to connect.
Category String The category of the connection property.
IsSessionProperty String Whether the property is a session property, used to save information about the current connection.
Sensitivity String The sensitivity level of the property. This informs whether the property is obfuscated in logging and authentication forms.
PropertyName String A camel-cased truncated form of the connection property name.
Ordinal Int32 The index of the parameter.
CatOrdinal Int32 The index of the parameter category.
Hierarchy String Shows the associated dependent properties that need to be set alongside this one.
Visible Boolean Informs whether the property is visible in the connection UI.
ETC String Various miscellaneous information about the property.

sys_sqlinfo

Describes the SELECT query processing that the connector can offload to the data source.

Discovering the Data Source's SELECT Capabilities

Below is an example data set of SQL capabilities. Some aspects of SELECT functionality are returned in a comma-separated list if supported; otherwise, the column contains NO.

Name Description Possible Values
AGGREGATE_FUNCTIONS Supported aggregation functions. AVG, COUNT, MAX, MIN, SUM, DISTINCT
COUNT Whether COUNT function is supported. YES, NO
IDENTIFIER_QUOTE_OPEN_CHAR The opening character used to escape an identifier. [
IDENTIFIER_QUOTE_CLOSE_CHAR The closing character used to escape an identifier. ]
SUPPORTED_OPERATORS A list of supported SQL operators. =, >, <, >=, <=, <>, !=, LIKE, NOT LIKE, IN, NOT IN, IS NULL, IS NOT NULL, AND, OR
GROUP_BY Whether GROUP BY is supported, and, if so, the degree of support. NO, NO_RELATION, EQUALS_SELECT, SQL_GB_COLLATE
STRING_FUNCTIONS Supported string functions. LENGTH, CHAR, LOCATE, REPLACE, SUBSTRING, RTRIM, LTRIM, RIGHT, LEFT, UCASE, SPACE, SOUNDEX, LCASE, CONCAT, ASCII, REPEAT, OCTET, BIT, POSITION, INSERT, TRIM, UPPER, REGEXP, LOWER, DIFFERENCE, CHARACTER, SUBSTR, STR, REVERSE, PLAN, UUIDTOSTR, TRANSLATE, TRAILING, TO, STUFF, STRTOUUID, STRING, SPLIT, SORTKEY, SIMILAR, REPLICATE, PATINDEX, LPAD, LEN, LEADING, KEY, INSTR, INSERTSTR, HTML, GRAPHICAL, CONVERT, COLLATION, CHARINDEX, BYTE
NUMERIC_FUNCTIONS Supported numeric functions. ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, MOD, SIGN, SIN, SQRT, TAN, PI, RAND, DEGREES, LOG10, POWER, RADIANS, ROUND, TRUNCATE
TIMEDATE_FUNCTIONS Supported date/time functions. NOW, CURDATE, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, MONTH, QUARTER, WEEK, YEAR, CURTIME, HOUR, MINUTE, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, DAYNAME, MONTHNAME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, EXTRACT
REPLICATION_SKIP_TABLES Indicates tables skipped during replication.
REPLICATION_TIMECHECK_COLUMNS A string array containing a list of columns which will be checked (in the given order) for use as the modified column during replication.
IDENTIFIER_PATTERN String value indicating what string is valid for an identifier.
SUPPORT_TRANSACTION Indicates if the provider supports transactions such as commit and rollback. YES, NO
DIALECT Indicates the SQL dialect to use.
KEY_PROPERTIES Indicates the properties which identify the uniform database.
SUPPORTS_MULTIPLE_SCHEMAS Indicates if multiple schemas may exist for the provider. YES, NO
SUPPORTS_MULTIPLE_CATALOGS Indicates if multiple catalogs may exist for the provider. YES, NO
DATASYNCVERSION The Data Sync version needed to access this driver. Standard, Starter, Professional, Enterprise
DATASYNCCATEGORY The Data Sync category of this driver. Source, Destination, Cloud Destination
SUPPORTSENHANCEDSQL Whether enhanced SQL functionality beyond what is offered by the API is supported. TRUE, FALSE
SUPPORTS_BATCH_OPERATIONS Whether batch operations are supported. YES, NO
SQL_CAP All supported SQL capabilities for this driver. SELECT, INSERT, DELETE, UPDATE, TRANSACTIONS, ORDERBY, OAUTH, ASSIGNEDID, LIMIT, LIKE, BULKINSERT, COUNT, BULKDELETE, BULKUPDATE, GROUPBY, HAVING, AGGS, OFFSET, REPLICATE, COUNTDISTINCT, JOINS, DROP, CREATE, DISTINCT, INNERJOINS, SUBQUERIES, ALTER, MULTIPLESCHEMAS, GROUPBYNORELATION, OUTERJOINS, UNIONALL, UNION, UPSERT, GETDELETED, CROSSJOINS, GROUPBYCOLLATE, MULTIPLECATS, FULLOUTERJOIN, MERGE, JSONEXTRACT, BULKUPSERT, SUM, SUBQUERIESFULL, MIN, MAX, JOINSFULL, XMLEXTRACT, AVG, MULTISTATEMENTS, FOREIGNKEYS, CASE, LEFTJOINS, COMMAJOINS, WITH, LITERALS, RENAME, NESTEDTABLES, EXECUTE, BATCH, BASIC, INDEX
PREFERRED_CACHE_OPTIONS A string value that specifies the preferred cacheOptions.
ENABLE_EF_ADVANCED_QUERY Indicates if the driver directly supports advanced queries coming from Entity Framework. If not, queries will be handled client side. YES, NO
PSEUDO_COLUMNS A string array indicating the available pseudo columns.
MERGE_ALWAYS If the value is true, the Merge Mode is forcibly executed in Data Sync. TRUE, FALSE
REPLICATION_MIN_DATE_QUERY A select query to return the replicate start datetime.
REPLICATION_MIN_FUNCTION Allows a provider to specify the formula name to use for executing a server side min.
REPLICATION_START_DATE Allows a provider to specify a replicate startdate.
REPLICATION_MAX_DATE_QUERY A select query to return the replicate end datetime.
REPLICATION_MAX_FUNCTION Allows a provider to specify the formula name to use for executing a server side max.
IGNORE_INTERVALS_ON_INITIAL_REPLICATE A list of tables which will skip dividing the replicate into chunks on the initial replicate.
CHECKCACHE_USE_PARENTID Indicates whether the CheckCache statement should be done against the parent key column. TRUE, FALSE
CREATE_SCHEMA_PROCEDURES Indicates stored procedures that can be used for generating schema files.

The following query retrieves the operators that can be used in the WHERE clause:

SELECT * FROM sys_sqlinfo WHERE Name = 'SUPPORTED_OPERATORS'

Note that individual tables may have different limitations or requirements on the WHERE clause; refer to the Data Model section for more information.

Columns
Name Type Description
NAME String A component of SQL syntax, or a capability that can be processed on the server.
VALUE String Detail on the supported SQL or SQL syntax.

sys_identity

Returns information about attempted modifications.

The following query retrieves the Ids of the modified rows in a batch operation:

SELECT * FROM sys_identity
Columns
Name Type Description
Id String The database-generated ID returned from a data modification operation.
Batch String An identifier for the batch. 1 for a single operation.
Operation String The result of the operation in the batch: INSERTED, UPDATED, or DELETED.
Message String SUCCESS or an error message if the update in the batch failed.

Advanced Configuration Properties

The advanced configuration properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure. Click the links for further details.

Authentication

Property Description
AuthScheme The type of authentication to use when connecting to Couchbase.
User The Couchbase user account used to authenticate.
Password The password used to authenticate the user.
CredentialsFile Use this property if you need to provide credentials for multiple users or buckets. This file takes priority over other forms of authentication.
Server The address of the Couchbase server or servers to which you are connecting.
CouchbaseService Determines the Couchbase service to connect to. Default is N1QL. Available options are N1QL and Analytics.
ConnectionMode Determines how to connect to the Couchbase server. Must be either Direct or Cloud.
DNSServer Determines what DNS server to use when retrieving Couchbase Capella information.
N1QLPort The port or URL for connecting to the Couchbase N1QL Endpoint.
AnalyticsPort The port or URL for connecting to the Couchbase Analytics Endpoint.
WebConsolePort The port or URL for connecting to the Couchbase Web Console.

SSL

Property Description
SSLClientCert The TLS/SSL client certificate store for SSL Client Authentication (2-way SSL).
SSLClientCertType The type of key store containing the TLS/SSL client certificate.
SSLClientCertPassword The password for the TLS/SSL client certificate.
SSLClientCertSubject The subject of the TLS/SSL client certificate.
UseSSL Whether to negotiate TLS/SSL when connecting to the Couchbase server.
SSLServerCert The certificate to be accepted from the server when connecting using TLS/SSL.

Schema

Property Description
Location A path to the directory that contains the schema files defining tables, views, and stored procedures.
BrowsableSchemas This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA, SchemaB, SchemaC.
Tables This property restricts the tables reported to a subset of the available tables. For example, Tables=TableA, TableB, TableC.
Views Restricts the views reported to a subset of the available views. For example, Views=ViewA, ViewB, ViewC.
Dataverse Which Analytics dataverse to scan when discovering tables.
TypeDetectionScheme Determines how the provider builds tables and columns from the buckets found in Couchbase.
InferNumSampleValues The maximum number of values for every field to scan before determining its data type. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER.
InferSampleSize The maximum number of documents to scan for the columns available in the bucket. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER.
InferSimilarityMetric Specifies the similarity degree where different schemas will be considered to be the same flavor. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER.
FlexibleSchemas Whether the provider allows queries to use columns that it has not discovered.
ExposeTTL Specifies whether document TTL information should be exposed.
NumericStrings Whether to allow string values to be treated as numbers.
IgnoreChildAggregates Whether the provider exposes aggregate columns that are also available as child tables. Ignored if TableSupport is not set to Full.
TableSupport How much effort the provider will put into discovering tables on the Couchbase server.
NewChildJoinsMode Determines the kind of child table model the provider exposes.

Miscellaneous

Property Description
AllowJSONParameters Allows raw JSON to be used in parameters when QueryPassthrough is enabled.
ChildSeparator The character or characters used to denote child tables.
CreateTableRamQuota The default RAM quota, in megabytes, to use when inserting buckets via the CREATE TABLE syntax.
DataverseSeparator The character or characters used to denote Analytics dataverses and scopes/collections.
FlattenArrays The number of elements to expose as columns from nested arrays. Ignored if IgnoreChildAggregates is enabled.
FlattenObjects Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON.
FlavorSeparator The character or characters used to denote flavors.
GenerateSchemaFiles Indicates the user preference as to when schemas should be generated and saved.
InsertNullValues Determines whether an INSERT should include fields that have NULL values.
MaxRows Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This takes precedence over LIMIT clauses.
Other These hidden properties are used only in specific use cases.
Pagesize The maximum number of results to return per page from Couchbase.
PeriodsSeparator The character or characters used to denote hierarchy.
PseudoColumns This property indicates whether or not to include pseudo columns as columns to the table.
QueryExecutionTimeout This sets the server-side timeout for the query, which governs how long Couchbase will execute the query before returning a timeout error.
QueryPassthrough This option passes the query to the Couchbase server as is.
RowScanDepth The maximum number of rows to scan to look for the columns available in a table.
StrictComparison Adjusts how precisely to translate filters on SQL input queries into Couchbase queries. This can be set to a comma-separated list of values, where each value can be one of: date, number, boolean, or string.
Timeout The value in seconds until the timeout error is thrown, canceling the operation.
TransactionDurability Specifies how a document must be stored for a transaction to succeed.
TransactionTimeout This sets the amount of time a transaction may execute before it is timed out by Couchbase.
UpdateNullValues Determines whether an UPDATE writes NULL values as NULL, or removes them.
UseCollectionsForDDL Whether to assume that CREATE TABLE statements use collections instead of flavors. Only takes effect when connecting to Couchbase v7+ and GenerateSchemaFiles is set to OnCreate.
UserDefinedViews A filepath pointing to the JSON configuration file containing your custom views.
UseTransactions Specifies whether to use N1QL transactions when executing queries.
ValidateJSONParameters Allows the provider to validate that string parameters are valid JSON before sending the query to Couchbase.

Authentication

This section provides a complete list of authentication properties you can configure.

Property Description
AuthScheme The type of authentication to use when connecting to Couchbase.
User The Couchbase user account used to authenticate.
Password The password used to authenticate the user.
CredentialsFile Use this property if you need to provide credentials for multiple users or buckets. This file takes priority over other forms of authentication.
Server The address of the Couchbase server or servers to which you are connecting.
CouchbaseService Determines the Couchbase service to connect to. Default is N1QL. Available options are N1QL and Analytics.
ConnectionMode Determines how to connect to the Couchbase server. Must be either Direct or Cloud.
DNSServer Determines what DNS server to use when retrieving Couchbase Capella information.
N1QLPort The port or URL for connecting to the Couchbase N1QL Endpoint.
AnalyticsPort The port or URL for connecting to the Couchbase Analytics Endpoint.
WebConsolePort The port or URL for connecting to the Couchbase Web Console.

AuthScheme

The type of authentication to use when connecting to Couchbase.

Possible Values

Auto, Basic, CredentialsFile, SSLCertificate

Data Type

string

Default Value

Auto

Remarks
  • Auto: This option is deprecated and included only for compatibility.
  • Basic: Uses HTTP Basic authentication with User and Password.
  • CredentialsFile: Uses a credentials file. This will require that the CredentialsFile property be set.
  • SSLCertificate: Uses SSL client certificate authentication. Requires that UseSSL be enabled and that SSLClientCert and SSLClientCertType be set.

Note that only Basic authentication is supported when using the "Cloud" ConnectionMode.
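
As an illustration, these properties might be combined in a connection string such as the following (the jdbc:cdata:couchbase: prefix follows the form shown under sys_connection_props; the server, user, and password values are placeholders):

jdbc:cdata:couchbase:AuthScheme=Basic;Server=couchbase-server.com;User=myUser;Password=myPassword;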

User

The Couchbase user account used to authenticate.

Data Type

string

Default Value

""

Remarks

Together with Password, this field is used to authenticate against the Couchbase server.

Password

The password used to authenticate the user.

Data Type

string

Default Value

""

Remarks

The User and Password are together used to authenticate with the server.

CredentialsFile

Use this property if you need to provide credentials for multiple users or buckets. This file takes priority over other forms of authentication.

Data Type

string

Default Value

""

Remarks

Use this property if you need to provide credentials for multiple users or buckets. This takes priority over other forms of authentication.

Set CredentialsFile to the path to a file that has the same markup as below:

[{"user": "YourUserName1", "pass":"YourPassword1"},
{"user": "YourUserName2", "pass":"YourPassword2"}]

Server

The address of the Couchbase server or servers to which you are connecting.

Data Type

string

Default Value

""

Remarks

This value can be set to a hostname or an IP address, like "couchbase-server.com" or "1.2.3.4". It can also be set to an HTTP or HTTPS URL, such as "https://couchbase-server.com" or "http://1.2.3.4". If ConnectionMode is set to Cloud then this should be the hostname of the Couchbase Cloud instance as reported in the control panel.

If the URL form is used, then setting this option will also set the UseSSL option: if the URL scheme is "https://", then UseSSL will be set to true, and a URL with "http://" will set UseSSL to false.

A port value cannot be used as part of this option, so values like "http://couchbase-server.com:8093" are not allowed. Please use WebConsolePort, N1QLPort and AnalyticsPort.

This value can also accept multiple servers in the above format separated by commas, such as "1.2.3.4, couchbase-server.com". This will allow the connector to recover the connection in case some of the servers listed are inaccessible.

Note that while the connector will try to recover the connection as a whole, it may lose individual operations. For example, while a long-running query will fail if the server becomes inaccessible while that query is running, that query can be retried on the same connection and the connector will execute it on the next active server.

CouchbaseService

Determines the Couchbase service to connect to. Default is N1QL. Available options are N1QL and Analytics.

Possible Values

N1QL, Analytics

Data Type

string

Default Value

N1QL

Remarks

Determines the Couchbase service to connect to. Default is N1QL. Available options are N1QL and Analytics.

ConnectionMode

Determines how to connect to the Couchbase server. Must be either Direct or Cloud.

Possible Values

Direct, Cloud

Data Type

string

Default Value

Direct

Remarks

By default the connector connects to Couchbase directly using the address given in the Server option. The Server must be running the appropriate CouchbaseService to accept the connection. This will work in most on-premise or basic cloud deployments.

This should be set to Cloud when connecting to Couchbase Capella or a custom deployment that uses service records. These records will allow the connector to determine the exact Couchbase servers that provide the appropriate CouchbaseService. You must also set the DNSServer property so that the connector is able to fetch these service records.

Note that enabling Cloud mode will override these connection properties with the values discovered by contacting the cluster:

  • Server
  • N1QLPort
  • AnalyticsPort

DNSServer

Determines what DNS server to use when retrieving Couchbase Capella information.

Data Type

string

Default Value

""

Remarks

In most cases, any public DNS server can be provided here, such as the ones run by OpenDNS, Cloudflare, or Google.

If these are not accessible then you will need to use the DNS server configured by your network administrator.

N1QLPort

The port or URL for connecting to the Couchbase N1QL Endpoint.

Data Type

string

Default Value

""

Remarks

This port is used for submitting queries when CouchbaseService is set to N1QL. Any requests to manage indices will also go through this port. It defaults to 8093 when not using SSL, and 18093 when using SSL. See UseSSL.

This option can be set one of two ways:

  • As a port number like "1234". With this setting the connector will send N1QL queries to the endpoint http://Server:N1QLPort/query/service (or https:// if Server is https:// or UseSSL is enabled).
  • As a full URL like "http://couchbase.example:1234/proxy". With this setting the connector sends N1QL queries using the endpoint you specify. For example, if you use that URL then N1QL requests will go to http://couchbase.example:1234/proxy/query/service. Server and UseSSL are ignored for N1QL requests.

AnalyticsPort

The port or URL for connecting to the Couchbase Analytics Endpoint.

Data Type

string

Default Value

""

Remarks

This port is used for submitting queries when CouchbaseService is set to Analytics. It defaults to 8095 when not using SSL, and 18095 when using SSL. See UseSSL.

This option can be set one of two ways:

  • As a port number like "1234". With this setting the connector will send Analytics queries to the endpoint http://Server:AnalyticsPort/analytics/service (or https:// if Server is https:// or UseSSL is enabled).
  • As a full URL like "http://couchbase.example:1234/proxy". With this setting the connector sends Analytics queries using the endpoint you specify. For example, if you use that URL then Analytics requests will go to http://couchbase.example:1234/proxy/analytics/service. Server and UseSSL are ignored for Analytics requests.

WebConsolePort

The port or URL for connecting to the Couchbase Web Console.

Data Type

string

Default Value

""

Remarks

This port is used for API operations like managing buckets. It defaults to 8091 when not using SSL, and 18091 when using SSL. See UseSSL.

This option can be set one of two ways:

  • As a port number like "1234". With this setting the connector will send management requests to http://Server:WebConsolePort/. The exact endpoint depends upon the operation being used. For example, the cluster status request will go to the endpoint http://Server:WebConsolePort/pools.
  • As a full URL like "http://couchbase.example:1234/proxy". With this setting the connector will send Web Console queries using the endpoint you specify. For example, if you use that URL then the cluster status request (normally at /pools) will go to http://couchbase.example:1234/proxy/pools. Server and UseSSL are ignored for web console requests.

SSL

This section provides a complete list of SSL properties you can configure.

Property Description
SSLClientCert The TLS/SSL client certificate store for SSL Client Authentication (2-way SSL).
SSLClientCertType The type of key store containing the TLS/SSL client certificate.
SSLClientCertPassword The password for the TLS/SSL client certificate.
SSLClientCertSubject The subject of the TLS/SSL client certificate.
UseSSL Whether to negotiate TLS/SSL when connecting to the Couchbase server.
SSLServerCert The certificate to be accepted from the server when connecting using TLS/SSL.

SSLClientCert

The TLS/SSL client certificate store for SSL Client Authentication (2-way SSL).

Data Type

string

Default Value

""

Remarks

The name of the certificate store for the client certificate.

The SSLClientCertType field specifies the type of the certificate store specified by SSLClientCert. If the store is password protected, specify the password in SSLClientCertPassword.

SSLClientCert is used in conjunction with the SSLClientCertSubject field in order to specify client certificates. If SSLClientCert has a value, and SSLClientCertSubject is set, a search for a certificate is initiated. See SSLClientCertSubject for more information.

Designations of certificate stores are platform-dependent.

The following are designations of the most common User and Machine certificate stores in Windows:

Property Description
MY A certificate store holding personal certificates with their associated private keys.
CA Certifying authority certificates.
ROOT Root certificates.
SPC Software publisher certificates.

In Java, the certificate store normally is a file containing certificates and optional private keys.

When the certificate store type is PFXFile, this property must be set to the name of the file. When the type is PFXBlob, the property must be set to the binary contents of a PFX file (for example, PKCS12 certificate store).

SSLClientCertType

The type of key store containing the TLS/SSL client certificate.

Possible Values

USER, MACHINE, PFXFILE, PFXBLOB, JKSFILE, JKSBLOB, PEMKEY_FILE, PEMKEY_BLOB, PUBLIC_KEY_FILE, PUBLIC_KEY_BLOB, SSHPUBLIC_KEY_FILE, SSHPUBLIC_KEY_BLOB, P7BFILE, PPKFILE, XMLFILE, XMLBLOB

Data Type

string

Default Value

USER

Remarks

This property can take one of the following values:

Property Description
USER - default For Windows, this specifies that the certificate store is a certificate store owned by the current user. Note that this store type is not available in Java.
MACHINE For Windows, this specifies that the certificate store is a machine store. Note that this store type is not available in Java.
PFXFILE The certificate store is the name of a PFX (PKCS12) file containing certificates.
PFXBLOB The certificate store is a string (base-64-encoded) representing a certificate store in PFX (PKCS12) format.
JKSFILE The certificate store is the name of a Java key store (JKS) file containing certificates. Note that this store type is only available in Java.
JKSBLOB The certificate store is a string (base-64-encoded) representing a certificate store in JKS format. Note that this store type is only available in Java.
PEMKEY_FILE The certificate store is the name of a PEM-encoded file that contains a private key and an optional certificate.
PEMKEY_BLOB The certificate store is a string (base64-encoded) that contains a private key and an optional certificate.
PUBLIC_KEY_FILE The certificate store is the name of a file that contains a PEM- or DER-encoded public key certificate.
PUBLIC_KEY_BLOB The certificate store is a string (base-64-encoded) that contains a PEM- or DER-encoded public key certificate.
SSHPUBLIC_KEY_FILE The certificate store is the name of a file that contains an SSH-style public key.
SSHPUBLIC_KEY_BLOB The certificate store is a string (base-64-encoded) that contains an SSH-style public key.
P7BFILE The certificate store is the name of a PKCS7 file containing certificates.
PPKFILE The certificate store is the name of a file that contains a PuTTY Private Key (PPK).
XMLFILE The certificate store is the name of a file that contains a certificate in XML format.
XMLBLOB The certificate store is a string that contains a certificate in XML format.

SSLClientCertPassword

The password for the TLS/SSL client certificate.

Data Type

string

Default Value

""

Remarks

If the certificate store is of a type that requires a password, this property is used to specify that password to open the certificate store.

SSLClientCertSubject

The subject of the TLS/SSL client certificate.

Data Type

string

Default Value

*

Remarks

When loading a certificate the subject is used to locate the certificate in the store.

If an exact match is not found, the store is searched for subjects containing the value of the property. If a match is still not found, the property is set to an empty string, and no certificate is selected.

The special value "*" picks the first certificate in the certificate store.

The certificate subject is a comma separated list of distinguished name fields and values. For example, "CN=www.server.com, OU=test, C=US, E=support@company.com". The common fields and their meanings are shown below.

Field Meaning
CN Common Name. This is commonly a host name like www.server.com.
O Organization
OU Organizational Unit
L Locality
S State
C Country
E Email Address

If a field value contains a comma, it must be quoted.

UseSSL

Whether to negotiate TLS/SSL when connecting to the Couchbase server.

Data Type

bool

Default Value

false

Remarks

When this is set to true, the defaults for the following options change:

Property Plaintext Default SSL Default
AnalyticsPort 8095 18095
N1QLPort 8093 18093
WebConsolePort 8091 18091

This option should be enabled when connecting to Couchbase Capella because all Capella deployments use SSL by default.

SSLServerCert

The certificate to be accepted from the server when connecting using TLS/SSL.

Data Type

string

Default Value

""

Remarks

If using a TLS/SSL connection, this property can be used to specify the TLS/SSL certificate to be accepted from the server. Any other certificate that is not trusted by the machine is rejected.

This property can take the following forms:

Description Example
A full PEM Certificate (example shortened for brevity) -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE-----
A path to a local file containing the certificate C:\\cert.cer
The public key (example shortened for brevity) -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY-----
The MD5 Thumbprint (hex values can also be either space or colon separated) ecadbdda5a1529c58a1e9e09828d70e4
The SHA1 Thumbprint (hex values can also be either space or colon separated) 34a929226ae0819f2ec14b4a3d904f801cbb150d

If not specified, any certificate trusted by the machine is accepted.

Certificates are validated as trusted by the machine based on the System's trust store. The trust store used is the 'javax.net.ssl.trustStore' value specified for the system. If no value is specified for this property, Java's default trust store is used (for example, JAVA_HOME\lib\security\cacerts).

Use '*' to accept all certificates. Note that this is not recommended due to security concerns.

Schema

This section provides a complete list of schema properties you can configure.

Property Description
Location A path to the directory that contains the schema files defining tables, views, and stored procedures.
BrowsableSchemas This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA, SchemaB, SchemaC.
Tables This property restricts the tables reported to a subset of the available tables. For example, Tables=TableA, TableB, TableC.
Views Restricts the views reported to a subset of the available views. For example, Views=ViewA, ViewB, ViewC.
Dataverse Which Analytics dataverse to scan when discovering tables.
TypeDetectionScheme Determines how the provider builds tables and columns from the buckets found in Couchbase.
InferNumSampleValues The maximum number of values for every field to scan before determining its data type. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER.
InferSampleSize The maximum number of documents to scan for the columns available in the bucket. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER.
InferSimilarityMetric Specifies the similarity degree where different schemas will be considered to be the same flavor. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER.
FlexibleSchemas Whether the provider allows queries to use columns that it has not discovered.
ExposeTTL Specifies whether document TTL information should be exposed.
NumericStrings Whether to allow string values to be treated as numbers.
IgnoreChildAggregates Whether the provider exposes aggregate columns that are also available as child tables. Ignored if TableSupport is not set to Full.
TableSupport How much effort the provider will put into discovering tables on the Couchbase server.
NewChildJoinsMode Determines the kind of child table model the provider exposes.

Location

A path to the directory that contains the schema files defining tables, views, and stored procedures.

Data Type

string

Default Value

%APPDATA%\Couchbase Data Provider\Schema

Remarks

The path to a directory which contains the schema files for the connector (.rsd files for tables and views, .rsb files for stored procedures). The folder location can be a relative path from the location of the executable. The Location property is only needed if you want to customize definitions (for example, change a column name, ignore a column, and so on) or extend the data model with new tables, views, or stored procedures.

If left unspecified, the default location is "%APPDATA%\Couchbase Data Provider\Schema" with %APPDATA% being set to the user's configuration directory:

Platform %APPDATA%
Windows The value of the APPDATA environment variable
Mac ~/Library/Application Support
Linux ~/.config
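
For example, a custom schema directory could be specified as follows (the path is a placeholder; relative paths are resolved from the location of the executable, as noted above):

Location=C:\couchbase\schemas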

BrowsableSchemas

This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.

Data Type

string

Default Value

""

Remarks

Listing the schemas from databases can be expensive. Providing a list of schemas in the connection string improves the performance.

Tables

This property restricts the tables reported to a subset of the available tables. For example, Tables=TableA,TableB,TableC.

Data Type

string

Default Value

""

Remarks

Listing the tables from some databases can be expensive. Providing a list of tables in the connection string improves the performance of the connector.

This property can also be used as an alternative to automatically listing tables if you already know which ones you want to work with and there would otherwise be too many to work with.

Specify the tables you want in a comma-separated list. Each table should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Tables=TableA,[TableB/WithSlash],WithCatalog.WithSchema.`TableC With Space`.

Note that when connecting to a data source with multiple schemas or catalogs, you will need to provide the fully qualified name of the table in this property, as in the last example here, to avoid ambiguity between tables that exist in multiple catalogs or schemas.

Views

Restricts the views reported to a subset of the available views. For example, Views=ViewA,ViewB,ViewC.

Data Type

string

Default Value

""

Remarks

Listing the views from some databases can be expensive. Providing a list of views in the connection string improves the performance of the connector.

This property can also be used as an alternative to automatically listing views if you already know which ones you want to work with and there would otherwise be too many to work with.

Specify the views you want in a comma-separated list. Each view should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Views=ViewA,[ViewB/WithSlash],WithCatalog.WithSchema.`ViewC With Space`.

Note that when connecting to a data source with multiple schemas or catalogs, you will need to provide the fully qualified name of the view in this property, as in the last example here, to avoid ambiguity between views that exist in multiple catalogs or schemas.

Dataverse

Which Analytics dataverse to scan when discovering tables.

Data Type

string

Default Value

""

Remarks

This property is empty by default, which means that all dataverses will be scanned and table names will be generated as described in DataverseSeparator.

If you assign this property a non-blank value, then the connector will scan only the corresponding dataverse (for example, setting this to "Default" scans the Default dataverse). Since only one dataverse is being scanned, table names will not be prefixed with the dataverse name. It is recommended to set this property to "Default" if you are coming from a previous version of the connector and need backwards compatibility.

If you are connecting to Couchbase 7.0 or later, this option will be treated as a compound name containing both a dataset and a scope. For example, if you have previously created collections like these:

CREATE ANALYTICS SCOPE websites.exampledotcom
CREATE ANALYTICS COLLECTION websites.exampledotcom.traffic ON examplecom_traffic_bucket
CREATE ANALYTICS COLLECTION websites.exampledotcom.ads ON examplecom_ads_bucket

You would set this option to "websites.exampledotcom".

TypeDetectionScheme

Determines how the provider builds tables and columns from the buckets found in Couchbase.

Data Type

string

Default Value

DocType

Remarks

A comma-separated list of the following options:

Property Description
DocType This discovers tables by examining each bucket and looking for different values of the "docType" field in the documents. For example, if the bucket beer-sample contains documents with "docType" = 'brewery' and "docType" = 'beer', this will generate three tables: beer-sample (containing all documents), beer-sample.brewery (containing just breweries) and beer-sample.beer (containing just beers). Like RowScan, this will scan a sample of the documents in each flavor and determine the data type for each field. RowScanDepth determines how many documents are scanned from each flavor.
DocType=fieldName Like DocType, but this scans based off of a field called "fieldName" rather than "docType". "fieldName" must match the field name in Couchbase exactly, including case.
Infer This uses the N1QL INFER statement to determine what tables and columns exist. This does more flexible flavor detection than DocType, but is only available for Couchbase Enterprise.
RowScan This reads a sample of documents from a bucket, and heuristically determines the data type. RowScanDepth determines how many documents are scanned. It does not do any flavor detection.
None This is like RowScan, but will always return columns that have string types instead of the detected type.
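
For example, to detect flavors based on a hypothetical field named "category" instead of "docType" (the DocType=fieldName form above), the property could be set as follows:

TypeDetectionScheme=DocType=category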

InferNumSampleValues

The maximum number of values for every field to scan before determining its data type. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER.

Data Type

string

Default Value

10

Remarks

The maximum number of values to scan from every field of the sampled documents before determining the field's data type. This property enables additional configuration of Automatic Schema Discovery when you are using the Couchbase Infer command -- TypeDetectionScheme must also be set to Infer to use this property.

InferSampleSize

The maximum number of documents to scan for the columns available in the bucket. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER.

Data Type

string

Default Value

100

Remarks

The maximum number of documents to scan for the columns available in the bucket. The Infer command will return column metadata by scanning a random sample of documents of the size specified here.

Setting a high value may decrease performance. Setting a low value may prevent the column and data type from being determined properly, especially when there is null data.

This property enables additional configuration of Automatic Schema Discovery when you are using the Couchbase Infer command -- TypeDetectionScheme must also be set to Infer to use this property.

InferSimilarityMetric

Specifies the similarity degree where different schemas will be considered to be the same flavor. Applies to Automatic Schema Discovery when TypeDetectionScheme is set to INFER.

Data Type

string

Default Value

0.7

Remarks

This property specifies how similar two schemas must be to be considered to be the same flavor. As an example, consider the following rows:

Row 1: ColA, ColB, ColC, ColD
Row 2: ColA, ColB, ColE, ColF
Row 3: ColB, ColF, ColX, ColY

You can configure the columns returned for each flavor with different InferSimilarityMetric values, as in the following examples:

  • If you set InferSimilarityMetric to 1, the connector will return no flavors.
  • If you set InferSimilarityMetric to 0.5, the connector will return 2 flavors, Row1 and Row2 making up one, and Row3 making up another.
  • If you set InferSimilarityMetric to 0.25, the connector will return a single flavor containing all rows.

You can then query document flavors using dot notation, as in the following statement:

SELECT * FROM [Items.Technology]

This property enables additional configuration of Automatic Schema Discovery when you are using the Couchbase Infer command -- TypeDetectionScheme must also be set to Infer to use this property.

FlexibleSchemas

Whether the provider allows queries to use columns that it has not discovered.

Data Type

bool

Default Value

false

Remarks

By default, the connector will only allow queries to use columns that it has found during the metadata discovery process (see TypeDetectionScheme for details). This means that the connector has the full information for each column it presents, but it also means that fields set on only a few documents may not be exposed. Enabling this option means that the connector will allow you to write a query with any columns you want. If you use columns in a query that have not been discovered, the connector will assume that they are simple strings.

For example, the connector uses column type information to automatically convert dates for comparison since Couchbase cannot natively compare dates directly. If the connector detects that datecol is a date field, it can apply the STR_TO_MILLIS conversion automatically:

/* SQL */
WHERE datecol < '2020-06-12';

/* N1QL */
WHERE STR_TO_MILLIS(datecol) < STR_TO_MILLIS('2020-06-12');

When using undiscovered columns the connector cannot make this type of conversion for you. You must apply any needed conversions manually to ensure that operations behave the way you want them to.
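
For example, with this option enabled, a query can reference a field that was never discovered (here a hypothetical column named region); the connector accepts the query but treats the field as a plain string:

SELECT [Document.Id], region FROM Games WHERE region = 'EU'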

ExposeTTL

Specifies whether document TTL information should be exposed.

Data Type

bool

Default Value

false

Remarks

By default the connector does not expose TTL values or consider document TTLs when performing DML operations. Enabling this option exposes TTL values in two ways:

  • All tables get a new column called Document.Expiration which contains the TTL value for each document. This column is an integer and returns whatever TTL value is stored in Couchbase directly. This column is read-write on bucket tables and read-only on child tables.
  • INSERT and UPDATE will use this field to set TTL values, or to preserve them (for update) when none is provided. Setting the field to either 0 or NULL will remove the TTL from any affected documents.

Note that enabling this feature requires that your server be version 6.5.1 or later and that your CouchbaseService is set to N1QL. If either of these is not the case, the connector will not connect.
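
For example, with ExposeTTL enabled, the Document.Expiration column described above can be read and written directly (the bucket name and document key here are hypothetical):

/* Read the TTL stored for each document */
SELECT [Document.Id], [Document.Expiration] FROM Players

/* Remove the TTL from one document */
UPDATE Players SET [Document.Expiration] = NULL WHERE [Document.Id] = 'player::1001'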

NumericStrings

Whether to allow string values to be treated as numbers.

Data Type

bool

Default Value

true

Remarks

By default this property is enabled and the connector will treat string values as numeric if all the values it samples during schema detection are numeric. This can cause type errors later on if the field contains non-numeric values in other documents. If this property is disabled, then numeric strings are left as strings, although other string-based data types like timestamps will still be detected.

For example, the "code" field in the below bucket would be affected by this setting. By default it would be considered an integer but if this property were enabled it would be treated as a string.

{ "code": "123", "message": "Please restart your computer" }
{ "code": "456", "message": "Urgent update must be applied" }

IgnoreChildAggregates

Whether the provider exposes aggregate columns that are also available as child tables. Ignored if TableSupport is not set to Full.

Data Type

bool

Default Value

false

Remarks

The connector will expose array fields within a bucket as a separate child table, such as in the Games_scores example described in Automatic Schema Discovery. By default the connector will also expose these array fields as JSON aggregates on the base table. For example, either of these queries would return information on game scores:

/* Return each score as an individual row */
SELECT value FROM Games_scores;

/* Return all scores for each Game as a JSON string */
SELECT scores FROM Games;

Since these aggregates are exposed on the base table, they will be generated even when the information they contain is redundant. For example, when performing this join the scores aggregate on Games is populated as well as the value column on Games_scores. Internally this causes two copies of the scores data to be transferred from Couchbase.

/* Retrieves score data twice, once for Games.scores and once for Games_scores.value */
SELECT * FROM Games INNER JOIN Games_scores ON Games.[Document.Id] = Games_scores.[Document.Id]

This option can be used to prevent the aggregate field from being exposed when the same information is also available from a child table. In the games example, setting this option to true means that the Games table would only expose a primary key column. The only way to retrieve information about scores would be the child table, so score data would only be read once from Couchbase.

/* Only exposes Document.Id, not scores */
SELECT * FROM Games;

/* Only retrieves score data once for Games_scores.value */
SELECT * FROM Games INNER JOIN Games_scores ON Games.[Document.Id] = Games_scores.[Document.Id]

Note that this option overrides FlattenArrays, since all data from flattened arrays is also available as child tables. If this option is set then no array flattening is performed, even if FlattenArrays is set to a value over 0.

TableSupport

How much effort the provider will put into discovering tables on the Couchbase server.

Possible Values

Full, Basic, None

Data Type

string

Default Value

Full

Remarks

The available options are:

Property Description
Full The connector will discover the available buckets, and look inside of each of those buckets for child tables. This provides the most flexible way to access nested data, but requires that each bucket on your server have primary indexes.
Basic The connector will discover the available buckets, but will not look inside of them for child tables. This is recommended for cases where you either want to reduce the time that schema detection takes, or if your buckets do not have primary indexes.
None The connector will only use the schema files found in the Location directory, and will not discover buckets on the server. This option should only be used after you have already created schema files. Using this option without schema files will result in no tables being available.

NewChildJoinsMode

Determines the kind of child table model the provider exposes.

Data Type

bool

Default Value

false

Remarks

By default the connector exposes a backwards-compatible data model that is not fully relational. In this mode non-child tables have a primary key called Document.Id, but child tables do not have a primary key. Instead they have a column called Document.Id which has the same value as the Document.Id of the parent row that contains the child row.

For example, a parent table invoices containing invoice records may look like this:

Document.Id customer
1 Adam
2 Beatrice
3 Charlie

And its child invoices_lineitems containing line items may look like this:

Document.Id item
1 laptop
1 keyboard
2 stapler
3 whiteboard
3 markers

This model has several limitations:

  • Complex JOIN results may be incorrect. In most cases the connector can translate a JOIN like SELECT * FROM invoices INNER JOIN invoices_lineitems ON invoices.[Document.Id] = invoices_lineitems.[Document.Id] into an UNNEST. But if the JOIN is too complex, then both sides are executed separately, which can produce incorrect results.
  • DML operations on nested child tables are impossible because there is no way to specify what row of the middle child to use. For example, you cannot change rows in a table like invoices_lineitems_discounts because there is no way to specify the lineitem that contains the discount you are updating.
  • Some environments like SSIS may not be able to operate on child tables at all because they do not have primary keys.

The NewChildJoins data model is fully relational. In this mode non-child tables have the same Document.Id as before, but child tables are extended to have both a foreign key and a primary key. The foreign key is called Document.Parent and it refers to the Document.Id of the row in the parent table that contains the child row. The primary key is called Document.Id and it contains a path which uniquely refers to that child row.

For example, the same tables as above would look like this in the NewChildJoins model. invoices would be the same:

Document.Id customer
1 Adam
2 Beatrice
3 Charlie

However, invoices_lineitems would have both a primary and foreign key. The primary key contains the ID of the parent row as well as the child row's position in the parent.

Document.Id Document.Parent item
1$1 1 laptop
1$2 1 keyboard
2$1 2 stapler
3$1 3 whiteboard
3$2 3 markers

This fixes the limitations of the old data model:

  • Complex JOIN results are always consistent because they link foreign keys to primary keys: SELECT * FROM invoices INNER JOIN invoices_lineitems ON invoices.[Document.Id] = invoices_lineitems.[Document.Parent]
  • DML operations on nested child tables are allowed because the Document.Id contains all the required information to pick out specific rows, regardless of the table's depth (see the sketch after this list).
  • Environments which depend on primary keys can use these tables and generate JOIN queries since the relationships between Document.Id and Document.Parent columns are included in the connector metadata.
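
A minimal sketch of a DML statement in this mode, assuming the invoices_lineitems keys shown above and that the connector accepts standard UPDATE syntax on child tables:

/* Target a single child row by its Document.Id path ('1$2' is the keyboard line item above) */
UPDATE [invoices_lineitems] SET item = 'mouse' WHERE [Document.Id] = '1$2'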

Miscellaneous

This section provides a complete list of miscellaneous properties you can configure.

Property Description
AllowJSONParameters Allows raw JSON to be used in parameters when QueryPassthrough is enabled.
ChildSeparator The character or characters used to denote child tables.
CreateTableRamQuota The default RAM quota, in megabytes, to use when inserting buckets via the CREATE TABLE syntax.
DataverseSeparator The character or characters used to denote Analytics dataverses and scopes/collections.
FlattenArrays The number of elements to expose as columns from nested arrays. Ignored if IgnoreChildAggregates is enabled.
FlattenObjects Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON.
FlavorSeparator The character or characters used to denote flavors.
GenerateSchemaFiles Indicates the user preference as to when schemas should be generated and saved.
InsertNullValues Determines whether an INSERT should include fields that have NULL values.
MaxRows Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This takes precedence over LIMIT clauses.
Other These hidden properties are used only in specific use cases.
Pagesize The maximum number of results to return per page from Couchbase.
PeriodsSeparator The character or characters used to denote hierarchy.
PseudoColumns This property indicates whether or not to include pseudo columns as columns to the table.
QueryExecutionTimeout This sets the server-side timeout for the query, which governs how long Couchbase will execute the query before returning a timeout error.
QueryPassthrough This option passes the query to the Couchbase server as is.
RowScanDepth The maximum number of rows to scan to look for the columns available in a table.
StrictComparison Adjusts how precisely to translate filters on SQL input queries into Couchbase queries. This can be set to a comma-separated list of values, where each value can be one of: date, number, boolean, or string.
Timeout The value in seconds until the timeout error is thrown, canceling the operation.
TransactionDurability Specifies how a document must be stored for a transaction to succeed.
TransactionTimeout This sets the amount of time a transaction may execute before it is timed out by Couchbase.
UpdateNullValues Determines whether an UPDATE writes NULL values as NULL, or removes them.
UseCollectionsForDDL Whether to assume that CREATE TABLE statements use collections instead of flavors. Only takes effect when connecting to Couchbase v7+ and GenerateSchemaFiles is set to OnCreate.
UserDefinedViews A filepath pointing to the JSON configuration file containing your custom views.
UseTransactions Specifies whether to use N1QL transactions when executing queries.
ValidateJSONParameters Allows the provider to validate that string parameters are valid JSON before sending the query to Couchbase.

AllowJSONParameters

Allows raw JSON to be used in parameters when QueryPassthrough is enabled.

Data Type

bool

Default Value

false

Remarks

This option affects how string parameters are handled when using direct N1QL and SQL++ queries through QueryPassthrough. For example, consider this query:

INSERT INTO `bucket` (KEY, VALUE) VALUES ("1", @x)

By default, this option is disabled and string parameters are quoted and escaped into JSON strings. That means that any value can be safely used as a string parameter, but it also means that parameters cannot be used as raw JSON documents:

/*
 * If @x is set to: test value " contains quote
 *
 * Result is a valid query
*/
INSERT INTO `bucket` (KEY, VALUE) VALUES ("1", "test value \" contains quote")

/*
 * If @x is set to: {"a": ["valid", "JSON", "value"]}
 *
 * Result contains string instead of JSON document
*/
INSERT INTO `bucket` (KEY, VALUE) VALUES ("1", "{\"a\": [\"valid\", \"JSON\", \"value\"]}")

When this option is enabled, string parameters are assumed to be valid JSON. This means that raw JSON documents can be used as parameters, but it also means that all simple strings must be escaped:

/*
 * If @x is set to: test value " contains quote
 *
 * Result is an invalid query
*/
INSERT INTO `bucket` (KEY, VALUE) VALUES ("1", test value " contains quote)

/*
 * If @x is set to: {"a": ["valid", "JSON", "value"]}
 *
 * Result is a JSON document
*/
INSERT INTO `bucket` (KEY, VALUE) VALUES ("1", {"a": ["valid", "JSON", "value"]})

Please refer to ValidateJSONParameters for more details on how parameters are validated when this option is enabled.

ChildSeparator

The character or characters used to denote child tables.

Data Type

string

Default Value

_

Remarks

When creating a child table for an array underneath a bucket, the connector will generate the name of the child table by concatenating the name of the base table, along with this separator and each path element.

For example, if this document were in the bucket "customers", then the child table for the addresses field would be called "customers_addresses".

{
  "addresses": [
    {"street": "123 Main St"},
    {"street": "424 Pleasant Ct"},
    {"street": "719 Blue Way"}
  ]
}
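
With the default separator, the generated child table can be queried directly. A minimal sketch, assuming the "customers" bucket above and that the street property of each address is exposed as a column:

/* Returns one row per element of the addresses array */
SELECT street FROM [customers_addresses]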

CreateTableRamQuota

The default RAM quota, in megabytes, to use when inserting buckets via the CREATE TABLE syntax.

Data Type

string

Default Value

250

Remarks

The default RAM quota, in megabytes, to use when inserting buckets via the CREATE TABLE syntax.

DataverseSeparator

The character or characters used to denote Analytics dataverses and scopes/collections.

Data Type

string

Default Value

.

Remarks

When using the Analytics service, the connector will scan all datasets from all available dataverses. To avoid potential name conflicts, it will include the dataverse name and the dataset name in the generated table name.

By default this is set to ".", so that if there is a dataset called "users" on the "Default" dataverse, then the table generated will be "Default.users".

This property is also used when generating table names for collections (on both N1QL and Analytics) on Couchbase 7 and later. For example, a bucket called "users" that has two collections called "active" and "inactive" under the "status" scope would be detected as the tables "users.status.active" and "users.status.inactive".
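
Because the separator becomes part of the table name, such names should be quoted in queries. For example, using the names above:

/* Dataset "users" on the "Default" dataverse (Analytics) */
SELECT * FROM [Default.users]

/* Collection "active" in the "status" scope of the "users" bucket (Couchbase 7+) */
SELECT * FROM [users.status.active]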

FlattenArrays

The number of elements to expose as columns from nested arrays. Ignored if IgnoreChildAggregates is enabled.

Data Type

string

Default Value

0

Remarks

By default, nested arrays are returned as strings of JSON. The FlattenArrays property can be used to flatten the elements of nested arrays into columns of their own. This is only recommended for arrays that are expected to be short.

Set FlattenArrays to the number of elements you want to return from nested arrays. The specified elements are returned as columns. The zero-based index is concatenated to the column name. Other elements are ignored.

For example, you can return an arbitrary number of elements from an array of strings stored in a field called languages:

["FLOW-MATIC","LISP","COBOL"]

When FlattenArrays is set to 1, the preceding array is flattened into the following table:

Column Name Column Value
languages.0 FLOW-MATIC
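
A minimal sketch of querying the flattened element, assuming the array above is stored in the languages field of a hypothetical bucket named myBucket:

/* The zero-based index is part of the column name */
SELECT [languages.0] FROM [myBucket] WHERE [languages.0] = 'FLOW-MATIC'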

FlattenObjects

Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON.

Data Type

bool

Default Value

true

Remarks

Set FlattenObjects to true to flatten object properties into columns of their own. Otherwise, objects nested in arrays are returned as strings of JSON. The property name is concatenated onto the object name with the PeriodsSeparator character (a period by default) to generate the column name.

For example, you can flatten the nested objects below at connection time:

address : {
  "street" : "123 Main St.",
  "city"   : "Nowhere",
  "state"  : "NY",
  "zip"    : "12345"
}

When FlattenObjects is set to true, the preceding object is flattened into the following table:

Column Name Column Value
address.street 123 Main St.
address.city Nowhere
address.state NY
address.zip 12345
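
A minimal sketch of using the flattened columns in a query, assuming the address object above is stored in a hypothetical customers bucket:

/* Flattened object properties can be projected and filtered like ordinary columns */
SELECT [address.city], [address.zip] FROM [customers] WHERE [address.state] = 'NY'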

FlavorSeparator

The character or characters used to denote flavors.

Data Type

string

Default Value

.

Remarks

When the connector detects a flavored table, using either a DocType or Infer TypeDetectionScheme, it names flavored tables by concatenating the underlying bucket name, this separator, and the value of the bucket's primary flavor.

For example, if the connector detects the flavor "docType = 'beer'" on the "beer-sample" bucket, then it will generate the table "beer-sample.beer" which contains only documents in "beer-sample" which have the "beer" doctype.

GenerateSchemaFiles

Indicates the user preference as to when schemas should be generated and saved.

Possible Values

Never, OnUse, OnStart, OnCreate

Data Type

string

Default Value

Never

Remarks

GenerateSchemaFiles enables you to save the table definitions identified by Automatic Schema Discovery. This property outputs schemas to .rsd files in the path specified by Location.

Available settings are the following:

  • Never: A schema file will never be generated.
  • OnUse: A schema file will be generated the first time a table is referenced, provided the schema file for the table does not already exist.
  • OnStart: A schema file will be generated at connection time for any tables that do not currently have a schema file.
  • OnCreate: A schema file will be generated when running a CREATE TABLE SQL query.

Note that if you want to regenerate a file, you will first need to delete it.

Generate Schemas with SQL

When you set GenerateSchemaFiles to OnUse, the connector generates schemas as you execute SELECT queries. Schemas are generated for each table referenced in the query.

When you set GenerateSchemaFiles to OnCreate, schemas are only generated when a CREATE TABLE query is executed.
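
For example, with GenerateSchemaFiles set to OnCreate, a statement like the following sketch (myBucket is a hypothetical bucket name) creates the bucket and also saves its schema file to Location:

CREATE TABLE [myBucket](
  [Document.Id] VARCHAR PRIMARY KEY,
  name VARCHAR
)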

Generate Schemas on Connection

Another way to use this property is to obtain schemas for every table in your database when you connect. To do so, set GenerateSchemaFiles to OnStart and connect.

Alternatives to Static Schemas

If your data structures are volatile, consider setting GenerateSchemaFiles to Never and using dynamic schemas. See Automatic Schema Discovery for more information about dynamic schemas.

Editing Schemas

Schema files have a simple format that makes them easy to modify. See Custom Schema Definitions for more information.

InsertNullValues

Determines whether an INSERT should include fields that have NULL values.

Data Type

bool

Default Value

true

Remarks

By default the connector uses NULL values provided in an INSERT statement and inserts them as JSON null values.

If this option is disabled, SQL NULL values are ignored during an INSERT. In the case of array columns (FlattenArrays must be set to retrieve these), this means that array indices are shifted over to compensate for the values that have been removed.
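
A minimal sketch of the difference, assuming a hypothetical customers bucket with an optional phone field:

/*
 * With InsertNullValues=true (default), the stored document is {"name": "Adam", "phone": null}
 * With InsertNullValues=false, the stored document is {"name": "Adam"}
*/
INSERT INTO [customers] ([Document.Id], name, phone) VALUES ('1', 'Adam', NULL)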

MaxRows

Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This takes precedence over LIMIT clauses.

Data Type

int

Default Value

-1

Remarks

Limits the number of rows returned when no aggregation or GROUP BY is used in the query. This takes precedence over LIMIT clauses.

Other

These hidden properties are used only in specific use cases.

Data Type

string

Default Value

""

Remarks

The properties listed below are available for specific use cases. Normal driver use cases and functionality should not require these properties.

Specify multiple properties in a semicolon-separated list.

Integration and Formatting
Property Description
DefaultColumnSize Sets the default length of string fields when the data source does not provide column length in the metadata. The default value is 2000.
ConvertDateTimeToGMT Determines whether to convert date-time values to GMT, instead of the local time of the machine.
RecordToFile=filename Records the underlying socket data transfer to the specified file.

Pagesize

The maximum number of results to return per page from Couchbase.

Data Type

int

Default Value

1000

Remarks

The Pagesize property affects the maximum number of results to return per page from Couchbase. Setting a higher value may result in better performance at the cost of additional memory allocated per page consumed.

PeriodsSeparator

The character or characters used to denote hierarchy.

Data Type

string

Default Value

.

Remarks

When flattening objects and arrays, the connector will use this value to separate different levels of objects and arrays. For example, if your Couchbase server returns a document like this (and FlattenObjects is enabled), then the connector will return the columns "geo.latitude" and "geo.longitude" if the periods separator is set to ".".

{
  "geo": {
    "latitude": 35.9132,
    "longitude": -79.0558
  }
}

PseudoColumns

This property indicates whether or not to include pseudo columns as columns to the table.

Data Type

string

Default Value

""

Remarks

This setting is particularly helpful in Entity Framework, which does not allow you to set a value for a pseudo column unless it is a table column. The value of this connection setting is of the format "Table1=Column1, Table1=Column2, Table2=Column3". You can use the "*" character to include all tables and all columns; for example, "*=*".

QueryExecutionTimeout

This sets the server-side timeout for the query, which governs how long Couchbase will execute the query before returning a timeout error.

Data Type

string

Default Value

-1

Remarks

The default is -1, which disables the timeout. When enabling the timeout, the value must include both an amount and a unit, which can be one of: "ns" (nanoseconds), "us" (microseconds), "ms" (milliseconds), "s" (seconds), "m" (minutes) or "h" (hours). For example, "5m" and "300s" both set timeouts of 5 minutes.

There is a server-side timeout as well called the "index scan timeout", which will override this one if it is lower. By default the index scan timeout is 2 minutes, but it can be changed by setting the "indexer.settings.scan_timeout" property on your Couchbase server.

QueryPassthrough

This option passes the query to the Couchbase server as is.

Data Type

bool

Default Value

false

Remarks

When this is set, queries are passed through directly to Couchbase.
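
Passed-through statements are executed as N1QL or SQL++ rather than translated SQL, so server-side syntax such as backtick identifiers and META() can be used directly. A minimal sketch, assuming the beer-sample sample bucket is installed:

/* Sent to Couchbase as-is */
SELECT META(`beer-sample`).id, name FROM `beer-sample` LIMIT 10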

RowScanDepth

The maximum number of rows to scan to look for the columns available in a table.

Data Type

int

Default Value

100

Remarks

The columns in a table must be determined by scanning table rows. This value determines the maximum number of rows that will be scanned.

Setting a high value may decrease performance. Setting a low value may prevent the data type from being determined properly, especially when there is null data.

StrictComparison

Adjusts how precisely to translate filters on SQL input queries into Couchbase queries. This can be set to a comma-separated list of values, where each value can be one of: date, number, boolean, or string.

Data Type

string

Default Value

""

Remarks

This option is empty by default, which means that WHERE clauses sent to Couchbase will include extra functions that convert values so that more comparisons work.

For example, leaving the "string" setting out of the list causes arrays to be converted, so that they can be compared with strings:

SELECT * FROM Bucket WHERE MyArrayColumn = '[1,2,3]'

If set to a value, queries including the relevant types of comparisons will be translated literally. This makes better use of Couchbase's indexes, but means that the types of comparisons must be in a format Couchbase can compare directly.

For example, if "date" is provided as one of the options, then dates must match the format they are stored as in Couchbase since they will not be converted automatically:

SELECT * FROM Bucket WHERE MyDateColumn = '2018-10-31T10:00:00';

Timeout

The value in seconds until the timeout error is thrown, canceling the operation.

Data Type

int

Default Value

60

Remarks

If Timeout = 0, operations do not time out. The operations run until they complete successfully or until they encounter an error condition.

If Timeout expires and the operation is not yet complete, the connector throws an exception.

TransactionDurability

Specifies how a document must be stored for a transaction to succeed.

Possible Values

None, Majority, MajorityAndPersistActive, PersistToMajority

Data Type

string

Default Value

Majority

Remarks

If UseTransactions is enabled, then this option can be set to determine when Couchbase will allow writes in transactions to commit. The Couchbase documentation on Durability and Transactions contains the full details, below is a high-level summary.

This option controls requirements on both quorum and persistence. The quorum may either require no bucket replicas to receive the document (None), or a majority of replicas to have the document (all other options). The persistence level requires either that the document be stored in replica memory (Majority) or on replica disk (MajorityAndPersistActive, PersistToMajority).

None is only useful if the bucket you are using is not configured for replicas. The other options can be used depending on the required performance and durability tradeoffs. Persisting to more replicas is slower but provides greater resilience against a node crashing.

TransactionTimeout

This sets the amount of time a transaction may execute before it is timed out by Couchbase.

Data Type

string

Default Value

""

Remarks

If transactions are enabled, then the connector will default to the server's default transaction timeout setting.

When enabling the timeout, the value must include both an amount and a unit, which can be one of: "ns" (nanoseconds), "us" (microseconds), "ms" (milliseconds), "s" (seconds), "m" (minutes) or "h" (hours). For example, "5m" and "300s" both set timeouts of 5 minutes.

There are also cluster-level and node-level transaction timeouts which override this one if they are smaller. For example, if the node-level timeout is set to a minute then setting this option to "5m" will have no effect.

UpdateNullValues

Determines whether an UPDATE writes NULL values as NULL, or removes them.

Data Type

bool

Default Value

true

Remarks

By default the connector will use NULL values provided in an UPDATE statement and set the field in Couchbase to NULL.

If this option is disabled SQL NULL values in an UPDATE will cause the connector to mark the field as MISSING. This removes the field from the object containing it, or if the field is contained in an array (per FlattenArrays) then that element is set to NULL.

This option should be used with care as the connector may not detect that the field exists if it is removed from enough documents within a bucket.
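
A minimal sketch of the difference, assuming a hypothetical customers bucket with a phone field:

/*
 * With UpdateNullValues=true (default), phone is set to JSON null in the document
 * With UpdateNullValues=false, phone is removed from the document (marked MISSING)
*/
UPDATE [customers] SET phone = NULL WHERE [Document.Id] = '1'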

UseCollectionsForDDL

Whether to assume that CREATE TABLE statements use collections instead of flavors. Only takes effect when connecting to Couchbase v7+ and GenerateSchemaFiles is set to OnCreate.

Data Type

bool

Default Value

false

Remarks

Normally the connector will assume that compound table names referenced in a CREATE TABLE statement are flavors. For compatibility, this is still the default with Couchbase v7+ even though flavors are not recommended there.

CREATE TABLE [myBucket.myFlavor](
  [Document.Id] VARCHAR PRIMARY KEY,
  docType VARCHAR,
  sometext VARCHAR,
  somenum INT
)

Enable this option to assume that CREATE TABLE statements refer to collections instead. In that scenario this query will create the bucket and scope if necessary, before creating the collection and setting a primary index:

CREATE TABLE [myBucket.myScope.myCollection](
  [Document.Id] VARCHAR PRIMARY KEY,
  sometext VARCHAR,
  somenum INT
)

UserDefinedViews

A filepath pointing to the JSON configuration file containing your custom views.

Data Type

string

Default Value

""

Remarks

User Defined Views are defined in a JSON-formatted configuration file called UserDefinedViews.json. The connector automatically detects the views specified in this file.

You can also have multiple view definitions and control them using the UserDefinedViews connection property. When you use this property, only the specified views are seen by the connector.

This User Defined View configuration file is formatted as follows:

  • Each root element defines the name of a view.
  • Each root element contains a child element, called query, which contains the custom SQL query for the view.

For example:

{
    "MyView": {
        "query": "SELECT * FROM Customer WHERE MyColumn = 'value'"
    },
    "MyView2": {
        "query": "SELECT * FROM MyTable WHERE Id IN (1,2,3)"
    }
}

Use the UserDefinedViews connection property to specify the location of your JSON configuration file. For example:

"UserDefinedViews", C:\Users\yourusername\Desktop\tmp\UserDefinedViews.json

Note that the specified path is not embedded in quotation marks.

UseTransactions

Specifies whether to use N1QL transactions when executing queries.

Possible Values

Never, Always, Explicit

Data Type

string

Default Value

Never

Remarks

By default the connector does not use transactions for compatibility with older versions of Couchbase. All of the other options require a connection to Couchbase 7 or above. The N1QL service must also be enabled using CouchbaseService.

Setting this to Always means that all queries will use transactions. An explicit transaction may be created on the connection and queries will use that transaction while it is active. If there is no explicit transaction then queries will use implicit transactions instead.

Setting this to Explicit enables support for explicit transactions only. Explicit transactions may be created but if one is not currently active, then statements will not create an implicit transaction.

ValidateJSONParameters

Allows the provider to validate that string parameters are valid JSON before sending the query to Couchbase.

Data Type

bool

Default Value

true

Remarks

When AllowJSONParameters and QueryPassthrough are enabled, the query parameters given to the connector will be treated as raw JSON documents instead of arbitrary string values. This option controls what happens when invalid JSON is given to the connector in this mode.

When this option is enabled, the connector will check that all string parameters can be parsed as valid JSON. If any cannot be, an error will be raised and the query will not be run.

When this option is disabled, no check is performed and all string parameter values are substituted into the query directly. This makes executing prepared statements faster, but less safe, since invalid N1QL or SQL++ may be sent to Couchbase.