HDFS Connection Details
Introduction
Connector Version
This documentation is based on version 25.0.9368 of the connector.
Get Started
HDFS Version Support
The connector leverages the HDFS API to enable bidirectional access to HDFS.
Establish a Connection
Connect to HDFS
In order to connect, set the following connection properties:
- Host: Set this value to the host of your HDFS installation.
- Port: Set this value to the port of your HDFS installation. Default port: 9870
- UseSSL: (Optional) Set this value to 'True' to negotiate TLS/SSL connections to the HDFS server. Default: 'False'.
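As a minimal sketch, the properties above can be assembled into a semicolon-delimited connection string. The host name here is a placeholder, not a real server:

```python
# Sketch: assemble the connection properties above into a
# semicolon-delimited connection string. "namenode.example.com"
# is a placeholder host, not a real server.
props = {
    "Host": "namenode.example.com",
    "Port": "9870",
    "UseSSL": "False",
}
conn_str = ";".join(f"{key}={value}" for key, value in props.items())
print(conn_str)  # Host=namenode.example.com;Port=9870;UseSSL=False
```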
Authenticate to HDFS
There are two authentication methods available for connecting to the HDFS data source, Anonymous Authentication and Negotiate (Kerberos) Authentication.
Anonymous Authentication
In some situations, you can connect to HDFS without supplying any authentication properties. To do so, set the AuthScheme to None (default).
Kerberos
When authentication credentials are required, you can use Kerberos. See Using Kerberos for details on how to authenticate with Kerberos.
Fine-Tuning Data Access
You can use the following properties to gain more control over the data returned from HDFS:
- DirectoryRetrievalDepth: How many subfolders to scan recursively before stopping. -1 specifies that all subfolders are scanned; 0 specifies that only the current folder is scanned for items.
- Path: Limit the subfolders recursively scanned.
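The semantics of DirectoryRetrievalDepth can be sketched in Python against a toy in-memory tree; the tree and function below are illustrative, not part of the connector:

```python
# Sketch of DirectoryRetrievalDepth semantics against a toy tree:
# -1 scans all subfolders; 0 lists only the current folder's items.
def list_items(tree, depth):
    """Return item names, recursing into subfolders while depth allows."""
    items = []
    for name, child in tree.items():
        items.append(name)
        if isinstance(child, dict) and depth != 0:
            # -1 stays -1 (unlimited); positive depths count down.
            items.extend(list_items(child, depth - 1 if depth > 0 else -1))
    return items

tree = {"a.txt": None, "sub": {"b.txt": None, "deep": {"c.txt": None}}}
print(list_items(tree, 0))   # ['a.txt', 'sub']
print(list_items(tree, 1))   # ['a.txt', 'sub', 'b.txt', 'deep']
print(list_items(tree, -1))  # ['a.txt', 'sub', 'b.txt', 'deep', 'c.txt']
```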
Use Kerberos
To authenticate to HDFS with Kerberos, set AuthScheme to NEGOTIATE.
Authenticating to HDFS via Kerberos requires you to define authentication properties and to choose how Kerberos should retrieve authentication tickets.
Retrieve Kerberos Tickets
Kerberos tickets are used to authenticate the requester's identity. The use of tickets instead of formal logins/passwords eliminates the need to store passwords locally or send them over a network. Users are reauthenticated (tickets are refreshed) whenever they log in at their local computer or enter kinit USER at the command prompt.
The connector provides three ways to retrieve the required Kerberos ticket, depending on whether or not the KRB5CCNAME and/or KerberosKeytabFile variables exist in your environment.
MIT Kerberos Credential Cache File
This option enables you to use the MIT Kerberos Ticket Manager or kinit command to get tickets. With this option there is no need to set the User or Password connection properties.
This option requires that KRB5CCNAME has been created in your system.
To enable ticket retrieval via MIT Kerberos Credential Cache Files:
- Ensure that the KRB5CCNAME variable is present in your environment.
- Set KRB5CCNAME to a path that points to your credential cache file (for example, C:\krb_cache\krb5cc_0 or /tmp/krb5cc_0). The credential cache file is created when you use the MIT Kerberos Ticket Manager to generate your ticket.
To obtain a ticket:
- Open the MIT Kerberos Ticket Manager application.
- Click Get Ticket.
- Enter your principal name and password.
- Click OK.
If the ticket is successfully obtained, the ticket information appears in Kerberos Ticket Manager and is stored in the credential cache file.
The connector uses the cache file to obtain the Kerberos ticket to connect to HDFS.
Note
If you would prefer not to edit KRB5CCNAME, you can use the KerberosTicketCache property to set the file path manually. After this is set, the connector uses the specified cache file to obtain the Kerberos ticket to connect to HDFS.
Keytab File
If your environment lacks the KRB5CCNAME environment variable, you can retrieve a Kerberos ticket using a Keytab File.
To use this method, set the User property to the desired username, and set the KerberosKeytabFile property to a file path pointing to the keytab file associated with the user.
User and Password
If your environment lacks the KRB5CCNAME environment variable and the KerberosKeytabFile property has not been set, you can retrieve a ticket using a user and password combination.
To use this method, set the User and Password properties to the user/password combination that you use to authenticate with HDFS.
Enable Cross-Realm Authentication
More complex Kerberos environments can require cross-realm authentication where multiple realms and KDC servers are used. For example, they might use one realm/KDC for user authentication, and another realm/KDC for obtaining the service ticket.
To enable this kind of cross-realm authentication, set the KerberosRealm and KerberosKDC properties to the values required for user authentication. Also, set the KerberosServiceRealm and KerberosServiceKDC properties to the values required to obtain the service ticket.
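As an illustrative fragment (the realm and KDC host names are placeholders), a cross-realm configuration could set the following properties:

```
AuthScheme=NEGOTIATE
KerberosRealm=USERS.EXAMPLE.COM
KerberosKDC=kdc-users.example.com
KerberosServiceRealm=HDFS.EXAMPLE.COM
KerberosServiceKDC=kdc-hdfs.example.com
```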
Important Notes
Configuration Files and Their Paths
- All references to adding configuration files and their paths refer to files and locations on the Jitterbit agent where the connector is installed. These paths should be adjusted as appropriate for the agent and the operating system. If multiple agents are used in an agent group, identical files are required on each agent.
Advanced Features
This section details a selection of advanced features of the HDFS connector.
User Defined Views
The connector supports the use of user defined views, virtual tables whose contents are decided by a pre-configured user defined query. These views are useful when you cannot directly control the queries being issued to the driver. For an overview of creating and configuring custom views, see User Defined Views.
SSL Configuration
Use SSL Configuration to adjust how the connector handles TLS/SSL certificate negotiations. You can choose from various certificate formats. For further information, see the SSLServerCert property under "Connection String Options".
Proxy
To configure the connector using private agent proxy settings, select the Use Proxy Settings checkbox on the connection configuration screen.
Query Processing
The connector offloads as much of the SELECT statement processing as possible to HDFS and then processes the rest of the query in memory (client-side).
For further information, see Query Processing.
Log
For an overview of configuration settings that can be used to refine logging, see Logging. Only two connection properties are required for basic logging, but numerous features support more refined logging; for example, the LogModules connection property lets you specify subsets of information to be logged.
User Defined Views
The HDFS connector supports the use of user defined views: user-defined virtual tables whose contents are decided by a preconfigured query. User defined views are useful in situations where you cannot directly control the query being issued to the driver; for example, when using the driver from Jitterbit.
Use a user defined view to define predicates that are always applied. If you specify additional predicates in the query to the view, they are combined with the query already defined as part of the view.
There are two ways to create user defined views:
- Create a JSON-formatted configuration file defining the views you want.
- Execute DDL statements.
Define Views Using a Configuration File
User defined views are defined in a JSON-formatted configuration file called UserDefinedViews.json. The connector automatically detects the views specified in this file.
You can also have multiple view definitions and control them using the UserDefinedViews connection property. When you use this property, only the specified views are seen by the connector.
This user defined view configuration file is formatted so that each root element defines the name of a view, and includes a child element, called query, which contains the custom SQL query for the view.
For example:
{
"MyView": {
"query": "SELECT * FROM Files WHERE MyColumn = 'value'"
},
"MyView2": {
"query": "SELECT * FROM MyTable WHERE Id IN (1,2,3)"
}
}
Use the UserDefinedViews connection property to specify the location of your JSON configuration file. For example:
"UserDefinedViews", "C:\Users\yourusername\Desktop\tmp\UserDefinedViews.json"
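Since the configuration file is plain JSON, its shape is easy to check before pointing the connector at it. A minimal validation sketch (the helper function and sample text are illustrative):

```python
# Sketch: check that a UserDefinedViews.json document has the expected
# shape -- each root key is a view name whose value contains a "query"
# string. The sample mirrors the example above.
import json

def validate_views(text):
    views = json.loads(text)
    for name, body in views.items():
        if not isinstance(body.get("query"), str):
            raise ValueError(f"view {name!r} is missing a 'query' string")
    return list(views)

sample = '{"MyView": {"query": "SELECT * FROM Files WHERE MyColumn = \'value\'"}}'
print(validate_views(sample))  # ['MyView']
```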
Define Views Using DDL Statements
The connector is also capable of creating and altering the schema via DDL Statements such as CREATE LOCAL VIEW, ALTER LOCAL VIEW, and DROP LOCAL VIEW.
Create a View
To create a new view using DDL statements, provide the view name and query as follows:
CREATE LOCAL VIEW [MyViewName] AS SELECT * FROM Customers LIMIT 20;
If no JSON file exists, the above code creates one. The view is then created in the JSON configuration file and is now discoverable. The JSON file location is specified by the UserDefinedViews connection property.
Alter a View
To alter an existing view, provide the name of an existing view alongside the new query you would like to use instead:
ALTER LOCAL VIEW [MyViewName] AS SELECT * FROM Customers WHERE TimeModified > '3/1/2020';
The view is then updated in the JSON configuration file.
Drop a View
To drop an existing view, provide the name of the view to remove:
DROP LOCAL VIEW [MyViewName]
This removes the view from the JSON configuration file. It can no longer be queried.
Schema for User Defined Views
In order to avoid a view's name clashing with an actual entity in the data model, user defined views are exposed in the UserViews schema by default. To change the name of the schema used for user defined views, set the UserViewsSchemaName property.
Work with User Defined Views
For example, suppose a user defined view called UserViews.RCustomers is defined by the following query, which only lists customers in Raleigh:
SELECT * FROM Customers WHERE City = 'Raleigh';
An example of a query to the driver:
SELECT * FROM UserViews.RCustomers WHERE Status = 'Active';
Resulting in the effective query to the source:
SELECT * FROM Customers WHERE City = 'Raleigh' AND Status = 'Active';
That is a very simple example of a query to a user defined view that is effectively a combination of the view query and the view definition. It is possible to compose these queries in much more complex patterns. All SQL operations are allowed in both queries and are combined when appropriate.
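The combination described above can be sketched as simple predicate AND-ing; this is a simplification of real query rewriting, with illustrative names:

```python
# Sketch: how a view's stored predicate combines with the predicate
# from a query against the view. A simplification of real query rewriting.
def combine(view_where, user_where):
    if not user_where:
        return view_where
    return f"({view_where}) AND ({user_where})"

effective = combine("City = 'Raleigh'", "Status = 'Active'")
print(effective)  # (City = 'Raleigh') AND (Status = 'Active')
```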
SSL Configuration
Customize the SSL Configuration
To enable TLS, set UseSSL to True.
With this configuration, the connector attempts to negotiate TLS with the server. The server certificate is validated against the default system trusted certificate store. You can override how the certificate gets validated using the SSLServerCert connection property.
To specify another certificate, see the SSLServerCert connection property.
Data Model
The HDFS connector models HDFS objects as relational tables and views. HDFS objects have relationships to other objects; in the tables, these relationships are expressed through foreign keys. The following sections show the available API objects and provide more information on executing SQL to HDFS APIs.
Schemas for most database objects are defined in simple, text-based configuration files.
Key Features
- The connector models HDFS entities such as files and permissions as relational views, allowing you to write SQL to query HDFS data.
- Stored procedures allow you to execute operations on HDFS.
- Live connectivity to these objects means any changes to your HDFS account are immediately reflected when using the connector.
Views
Views are similar to tables in the way that data is represented; however, views are read-only.
Queries can be executed against a view as if it were a normal table.
HDFS Connector Views
| Name | Description |
|---|---|
| Files | Lists the contents of the supplied path. |
| Permissions | Lists the permissions of the file or files specified in the path. |
Files
Lists the contents of the supplied path.
Table Specific Information
Select
This returns a list of all the files and directories in your system. By default, all subfolders are recursively scanned to list their children. You can configure the depth of subfolders to be recursively scanned with the DirectoryRetrievalDepth property. All filters are executed client-side within the connector.
Columns
| Name | Type | Description |
|---|---|---|
| FileId [KEY] | Long | The unique ID associated with the file. |
| PathSuffix | String | The path suffix. |
| FullPath | String | The full path of the file. |
| Owner | String | The user who is the owner. |
| Group | String | The group owner. |
| Length | Long | The number of bytes in the file. |
| Permission | String | The permission represented as an octal string. |
| Replication | Integer | The replication factor of the file. |
| StoragePolicy | Integer | The storage policy of the file. |
| ChildrenNum | Integer | The number of children the file has. |
| BlockSize | Long | The block size of the file. |
| ModificationTime | Datetime | The modification time. |
| AccessTime | Datetime | The access time. |
| Type | String | The type of the path object. |
Permissions
Lists the permissions of the file or files specified in the path.
Table Specific Information
Select
This returns a list of the permissions of all the files and directories in your system. All filters are executed client-side within the connector.
Columns
| Name | Type | Description |
|---|---|---|
| FullPath [KEY] | String | The full path of the file. |
| OwnerRead | Boolean | Whether the file's owner has read access. |
| OwnerWrite | Boolean | Whether the file's owner has write access. |
| OwnerExecute | Boolean | Whether the file's owner has execute access. |
| GroupRead | Boolean | Whether the group this file belongs to has read access. |
| GroupWrite | Boolean | Whether the group this file belongs to has write access. |
| GroupExecute | Boolean | Whether the group this file belongs to has execute access. |
| OthersRead | Boolean | Whether everyone else has read access. |
| OthersWrite | Boolean | Whether everyone else has write access. |
| OthersExecute | Boolean | Whether everyone else has execute access. |
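The Boolean columns above are an expansion of the standard Unix octal permission string (as reported by the Files view's Permission column). A sketch of that expansion:

```python
# Sketch: expand a 3-digit octal permission string (e.g. "755") into
# the Boolean read/write/execute flags the Permissions view exposes.
def expand(octal):
    flags = {}
    for who, digit in zip(("Owner", "Group", "Others"), octal):
        bits = int(digit, 8)
        flags[who + "Read"] = bool(bits & 4)
        flags[who + "Write"] = bool(bits & 2)
        flags[who + "Execute"] = bool(bits & 1)
    return flags

perms = expand("755")
print(perms["OwnerWrite"], perms["GroupWrite"])  # True False
```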
Stored Procedures
Stored procedures are function-like interfaces that extend the functionality of the connector beyond simple SELECT operations with HDFS.
Stored procedures accept a list of parameters, perform their intended function, and then return any relevant response data from HDFS, along with an indication of whether the procedure succeeded or failed.
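How a stored procedure is invoked depends on the tool issuing the SQL. Assuming an EXECUTE-style invocation with named parameters (an assumption here; verify the exact syntax against the connector's SQL reference), a statement could be assembled like this:

```python
# Sketch: build an EXECUTE-style stored procedure call with named
# parameters. The EXECUTE syntax is an assumption -- verify it against
# the connector's SQL reference before relying on it.
def build_exec(procedure, **params):
    quoted = ", ".join(
        f"{name} = '{value}'" for name, value in params.items()
    )
    return f"EXECUTE {procedure} {quoted}"

stmt = build_exec("MakeDirectory", Path="/tmp/reports", Permission="755")
print(stmt)  # EXECUTE MakeDirectory Path = '/tmp/reports', Permission = '755'
```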
HDFS Connector Stored Procedures
| Name | Description |
|---|---|
| AppendToFile | Create and Write to a File. |
| Concat | Concatenate a group of files to another file. |
| CreateSnapshot | Create a Snapshot of a folder. |
| CreateSymLink | Create a Symbolic Link. |
| DeleteFile | Delete a file or a directory. |
| DeleteSnapshot | Delete a Snapshot of a folder. |
| DownloadFile | Open and read a file. |
| GetContentSummary | Get the content summary of a file/folder. |
| GetFileChecksum | Get the checksum of a file. |
| GetHomeDirectory | Get the home directory for the current user. |
| GetTrashRoot | Get the root directory of Trash for the current user when the path specified is deleted. |
| ListStatus | Lists the contents of the supplied path. |
| MakeDirectory | Create a directory in the specified path. |
| RenameFile | Rename a file or a directory. |
| RenameSnapshot | Rename a Snapshot of a folder. |
| SetOwner | Set owner and group of a path. |
| SetPermission | Set permission of a path. |
| TruncateFile | Truncate a file to a new length. |
| UploadFile | Create and Write to a File. |
AppendToFile
Create and Write to a File.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | True | The absolute path of the file for which content will be appended. |
| FilePath | String | False | The path of the file whose content will be appended to the specified file. Has higher priority than Content. |
| Content | String | False | The content as a string which will be appended to the specified file. Has lower priority than FilePath. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Success | Boolean | Whether the operation completed successfully or not. |
Concat
Concatenate a group of files to another file.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | True | The path which will be concatenated with other paths/sources. |
| Sources | String | True | A comma-separated list of paths/sources. These will be joined to the Path input. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Success | Boolean | Whether the operation completed successfully or not. |
CreateSnapshot
Create a Snapshot of a folder.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | False | The path for which the snapshot will be created. Must be a folder. |
| SnapshotName | String | False | The name of the snapshot which will be created. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Path | String | The path of the created snapshot. |
CreateSymLink
Create a Symbolic Link.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | True | The path for which a symbolic link will be created. |
| Destination | String | True | The destination of the symbolic link. |
| CreateParent | Boolean | False | Whether the parent of the symbolic link should be created. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Success | Boolean | Whether the operation completed successfully or not. |
DeleteFile
Delete a file or a directory.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | True | The path of the file or folder to be deleted. |
| Recursive | Boolean | False | If deleting a folder, set this to true to delete all its contents as well. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Success | Boolean | Indicates whether the operation was successful. |
DeleteSnapshot
Delete a Snapshot of a folder.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | True | The folder path from which the snapshot will be deleted. |
| SnapshotName | String | True | The name of the snapshot to delete. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Success | Boolean | Indicates if the snapshot deletion operation was successful. |
DownloadFile
Open and read a file.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | True | The path of the file which will be opened. |
| Offset | Integer | False | The offset from which the reading will start. |
| Length | Integer | False | The number of bytes to read from the file. |
| BufferSize | Integer | False | The internal size of the buffer used for reading the file. |
| WriteToFile | String | False | The local location of the file where the output will be written to. |
| Encoding | String | False | The encoding type for the file data. The allowed values are NONE, BASE64. The default value is NONE. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Success | Boolean | Whether the operation completed successfully or not. |
| Output | String | The file's content. Output is displayed only if WriteToFile and FileStream are not set. |
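When Encoding is BASE64, the Output column holds Base64 text rather than raw bytes; decoding it is a one-liner. The sample value below is illustrative:

```python
# Sketch: decode a DownloadFile Output value that was returned with
# Encoding = BASE64. The sample value is illustrative.
import base64

output_column = base64.b64encode(b"hello hdfs").decode("ascii")
raw_bytes = base64.b64decode(output_column)
print(raw_bytes)  # b'hello hdfs'
```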
GetContentSummary
Get the content summary of a file/folder.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | True | The absolute path of the file/folder whose content summary will be returned. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| DirectoryCount | String | The number of directories in this folder. |
| FileCount | String | The number of files in this folder. |
| Length | Integer | The length of the folder/file. |
| Quota | Integer | The quota of the folder/file. |
| SpaceConsumed | Integer | The amount of space consumed by this folder/file. |
| SpaceQuota | Integer | The space quota of the folder/file. |
| ECPolicy | String | The Erasure Coding Policy of the folder/file. |
| SnapshotFileCount | Integer | The number of files in this folder snapshot. |
| SnapshotSpaceConsumed | Integer | The amount of space consumed by this folder/file snapshot. |
| SnapshotDirectoryCount | Integer | The number of directories in this folder snapshot. |
| SnapshotLength | Integer | The length of the folder/file snapshot. |
GetFileChecksum
Get the checksum of a file.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | True | The path for which the checksum will be returned. Must be a file. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Algorithm | String | The algorithm used for the checksum. |
| Bytes | String | The checksum returned. |
| Length | Integer | The length of the file. |
GetHomeDirectory
Get the home directory for the current user.
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Path | String | The path of the current user's home directory. |
GetTrashRoot
Get the root directory of Trash for the current user when the path specified is deleted.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | False | The path whose trash root will be returned. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Path | String | The path of the current user's trash root (for the specified path). |
ListStatus
Lists the contents of the supplied path.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | False | The path whose contents will be listed. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| FileId | Long | The unique ID associated with the file. |
| PathSuffix | String | The path suffix. |
| Owner | String | The user who is the owner. |
| Group | String | The group owner. |
| Length | Long | The number of bytes in the file. |
| Permission | String | The permission represented as an octal string. |
| Replication | Integer | The replication factor of the file. |
| StoragePolicy | Integer | The storage policy of the file. |
| ChildrenNum | Integer | The number of children the file has. |
| BlockSize | Long | The block size of the file. |
| ModificationTime | Datetime | The modification time. |
| AccessTime | Datetime | The access time. |
| Type | String | The type of the path object. |
MakeDirectory
Create a directory in the specified path.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | False | The path of the new directory which will be created. |
| Permission | String | False | The permission of the new directory. If no permissions are specified, the newly created directory has 755 permissions by default. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Success | Boolean | Whether the operation completed successfully or not. |
RenameFile
Rename a file or a directory.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | True | The path which will be renamed. |
| Destination | String | True | The new path for the renamed file/folder. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Success | Boolean | Whether the operation completed successfully or not. |
RenameSnapshot
Rename a Snapshot of a folder.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | False | The path for which the snapshot will be renamed. Must be a folder. |
| SnapshotName | String | True | The new name of the snapshot. |
| OldSnapshotName | String | True | The old name of the snapshot. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Success | Boolean | Whether the operation completed successfully or not. |
SetOwner
Set owner and group of a path.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | False | The path whose owner/group will be changed. |
| Owner | String | False | The new owner. |
| Group | String | False | The new group. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Success | Boolean | Whether the operation completed successfully or not. |
SetPermission
Set permission of a path.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | False | The path whose permissions will be changed. |
| Permission | String | True | Unix permissions in an octal (base-8) notation. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Success | Boolean | Whether the operation completed successfully or not. |
TruncateFile
Truncate a file to a new length.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | True | The path of the file which will be truncated. |
| NewLength | Integer | True | The new length for this file. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Success | Boolean | Whether the operation completed successfully or not. |
UploadFile
Create and Write to a File.
Input
| Name | Type | Required | Description |
|---|---|---|---|
| Path | String | False | The absolute path of the file which will be created. |
| Overwrite | Boolean | False | If set to true, the file will be overwritten. |
| BlockSize | Integer | False | The block size for this file. |
| Permission | String | False | The permissions which will be set for the created file. |
| FilePath | String | False | The path of the file whose content will be written to the newly created file. Has higher priority than Content. |
| Content | String | False | The content as a string which will be written to the newly created file. Has lower priority than FilePath. |
Result Set Columns
| Name | Type | Description |
|---|---|---|
| Success | Boolean | Whether the operation completed successfully or not. |
System Tables
You can query the system tables described in this section to access schema information, information on data source functionality, and batch operation statistics.
Schema Tables
The following tables return database metadata for HDFS:
- sys_catalogs: Lists the available databases.
- sys_schemas: Lists the available schemas.
- sys_tables: Lists the available tables and views.
- sys_tablecolumns: Describes the columns of the available tables and views.
- sys_procedures: Describes the available stored procedures.
- sys_procedureparameters: Describes stored procedure parameters.
- sys_keycolumns: Describes the primary and foreign keys.
- sys_indexes: Describes the available indexes.
Data Source Tables
The following tables return information about how to connect to and query the data source:
- sys_connection_props: Returns information on the available connection properties.
- sys_sqlinfo: Describes the SELECT queries that the connector can offload to the data source.
Query Information Tables
The following table returns query statistics for data modification queries:
- sys_identity: Returns information about batch operations or single updates.
sys_catalogs
Lists the available databases.
The following query retrieves all databases determined by the connection string:
SELECT * FROM sys_catalogs
Columns
| Name | Type | Description |
|---|---|---|
| CatalogName | String | The database name. |
sys_schemas
Lists the available schemas.
The following query retrieves all available schemas:
SELECT * FROM sys_schemas
Columns
| Name | Type | Description |
|---|---|---|
| CatalogName | String | The database name. |
| SchemaName | String | The schema name. |
sys_tables
Lists the available tables and views.
The following query retrieves the available tables and views:
SELECT * FROM sys_tables
Columns
| Name | Type | Description |
|---|---|---|
| CatalogName | String | The database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view. |
| TableType | String | The table type (table or view). |
| Description | String | A description of the table or view. |
| IsUpdateable | Boolean | Whether the table can be updated. |
sys_tablecolumns
Describes the columns of the available tables and views.
The following query returns the columns and data types for the Files table:
SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='Files'
Columns
| Name | Type | Description |
|---|---|---|
| CatalogName | String | The name of the database containing the table or view. |
| SchemaName | String | The schema containing the table or view. |
| TableName | String | The name of the table or view containing the column. |
| ColumnName | String | The column name. |
| DataTypeName | String | The data type name. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| Length | Int32 | The storage size of the column. |
| DisplaySize | Int32 | The designated column's normal maximum width in characters. |
| NumericPrecision | Int32 | The maximum number of digits in numeric data. The column length in characters for character and date-time data. |
| NumericScale | Int32 | The column scale or number of digits to the right of the decimal point. |
| IsNullable | Boolean | Whether the column can contain null. |
| Description | String | A brief description of the column. |
| Ordinal | Int32 | The sequence number of the column. |
| IsAutoIncrement | String | Whether the column value is assigned in fixed increments. |
| IsGeneratedColumn | String | Whether the column is generated. |
| IsHidden | Boolean | Whether the column is hidden. |
| IsArray | Boolean | Whether the column is an array. |
| IsReadOnly | Boolean | Whether the column is read-only. |
| IsKey | Boolean | Indicates whether a field returned from sys_tablecolumns is the primary key of the table. |
| ColumnType | String | The role or classification of the column in the schema. Possible values include SYSTEM, LINKEDCOLUMN, NAVIGATIONKEY, REFERENCECOLUMN, and NAVIGATIONPARENTCOLUMN. |
sys_procedures
Lists the available stored procedures.
The following query retrieves the available stored procedures:
SELECT * FROM sys_procedures
Columns
| Name | Type | Description |
|---|---|---|
| CatalogName | String | The database containing the stored procedure. |
| SchemaName | String | The schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure. |
| Description | String | A description of the stored procedure. |
| ProcedureType | String | The type of the procedure, such as PROCEDURE or FUNCTION. |
sys_procedureparameters
Describes stored procedure parameters.
The following query returns information about all of the input parameters for the Open stored procedure:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'Open' AND (Direction = 1 OR Direction = 2)
To include result set columns in addition to the parameters, set the IncludeResultColumns pseudo column to True:
SELECT * FROM sys_procedureparameters WHERE ProcedureName = 'Open' AND IncludeResultColumns='True'
Columns
| Name | Type | Description |
|---|---|---|
| CatalogName | String | The name of the database containing the stored procedure. |
| SchemaName | String | The name of the schema containing the stored procedure. |
| ProcedureName | String | The name of the stored procedure containing the parameter. |
| ColumnName | String | The name of the stored procedure parameter. |
| Direction | Int32 | An integer corresponding to the type of the parameter: input (1), input/output (2), or output (4). Input/output parameters can be both input and output parameters. |
| DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
| DataTypeName | String | The name of the data type. |
| NumericPrecision | Int32 | The maximum precision for numeric data. The column length in characters for character and date-time data. |
| Length | Int32 | The number of characters allowed for character data. The number of digits allowed for numeric data. |
| NumericScale | Int32 | The number of digits to the right of the decimal point in numeric data. |
| IsNullable | Boolean | Whether the parameter can contain null. |
| IsRequired | Boolean | Whether the parameter is required for execution of the procedure. |
| IsArray | Boolean | Whether the parameter is an array. |
| Description | String | The description of the parameter. |
| Ordinal | Int32 | The index of the parameter. |
| Values | String | The values you can set in this parameter are limited to those shown in this column. Possible values are comma-separated. |
| SupportsStreams | Boolean | Whether the parameter represents a file that you can pass as either a file path or a stream. |
| IsPath | Boolean | Whether the parameter is a target path for a schema creation operation. |
| Default | String | The value used for this parameter when no value is specified. |
| SpecificName | String | A label that, when multiple stored procedures have the same name, uniquely identifies each identically-named stored procedure. If there's only one procedure with a given name, its name is simply reflected here. |
| IsProvided | Boolean | Whether the procedure is added/implemented by the connector, as opposed to being a native HDFS procedure. |
Pseudo-Columns
| Name | Type | Description |
|---|---|---|
| IncludeResultColumns | Boolean | Whether the output should include columns from the result set in addition to parameters. Defaults to False. |
sys_keycolumns
Describes the primary and foreign keys.
The following query retrieves the primary key for the Files table:
SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='Files'
Columns
| Name | Type | Description |
|---|---|---|
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| IsKey | Boolean | Whether the column is a primary key in the table referenced in the TableName field. |
| IsForeignKey | Boolean | Whether the column is a foreign key referenced in the TableName field. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
sys_foreignkeys
Describes the foreign keys.
The following query retrieves all foreign keys which refer to other tables:
SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
Columns
| Name | Type | Description |
|---|---|---|
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| PrimaryKeyName | String | The name of the primary key. |
| ForeignKeyName | String | The name of the foreign key. |
| ReferencedCatalogName | String | The database containing the primary key. |
| ReferencedSchemaName | String | The schema containing the primary key. |
| ReferencedTableName | String | The table containing the primary key. |
| ReferencedColumnName | String | The column name of the primary key. |
| ForeignKeyType | String | Designates whether the foreign key is an import (points to other tables) or export (referenced from other tables) key. |
sys_primarykeys
Describes the primary keys.
The following query retrieves the primary keys from all tables and views:
SELECT * FROM sys_primarykeys
Columns
| Name | Type | Description |
|---|---|---|
| CatalogName | String | The name of the database containing the key. |
| SchemaName | String | The name of the schema containing the key. |
| TableName | String | The name of the table containing the key. |
| ColumnName | String | The name of the key column. |
| KeySeq | String | The sequence number of the primary key. |
| KeyName | String | The name of the primary key. |
sys_indexes
Describes the available indexes. By filtering on indexes, you can write more selective queries with faster query response times.
The following query retrieves all indexes that are not primary keys:
SELECT * FROM sys_indexes WHERE IsPrimary='false'
Columns
| Name | Type | Description |
|---|---|---|
| CatalogName | String | The name of the database containing the index. |
| SchemaName | String | The name of the schema containing the index. |
| TableName | String | The name of the table containing the index. |
| IndexName | String | The index name. |
| ColumnName | String | The name of the column associated with the index. |
| IsUnique | Boolean | True if the index is unique. False otherwise. |
| IsPrimary | Boolean | True if the index is a primary key. False otherwise. |
| Type | Int16 | An integer value corresponding to the index type: statistic (0), clustered (1), hashed (2), or other (3). |
| SortOrder | String | The sort order: A for ascending or D for descending. |
| OrdinalPosition | Int16 | The sequence number of the column in the index. |
sys_connection_props
Returns information on the available connection properties and those set in the connection string.
The following query retrieves all connection properties that have been set in the connection string or set through a default value:
SELECT * FROM sys_connection_props WHERE Value <> ''
Columns
| Name | Type | Description |
|---|---|---|
| Name | String | The name of the connection property. |
| ShortDescription | String | A brief description. |
| Type | String | The data type of the connection property. |
| Default | String | The default value if one is not explicitly set. |
| Values | String | A comma-separated list of possible values. A validation error is thrown if another value is specified. |
| Value | String | The value you set or a preconfigured default. |
| Required | Boolean | Whether the property is required to connect. |
| Category | String | The category of the connection property. |
| IsSessionProperty | String | Whether the property is a session property, used to save information about the current connection. |
| Sensitivity | String | The sensitivity level of the property. This informs whether the property is obfuscated in logging and authentication forms. |
| PropertyName | String | A camel-cased truncated form of the connection property name. |
| Ordinal | Int32 | The index of the parameter. |
| CatOrdinal | Int32 | The index of the parameter category. |
| Hierarchy | String | Shows the dependent properties that need to be set alongside this one. |
| Visible | Boolean | Informs whether the property is visible in the connection UI. |
| ETC | String | Various miscellaneous information about the property. |
sys_sqlinfo
Describes the SELECT query processing that the connector can offload to the data source.
Discovering the Data Source's SELECT Capabilities
Below is an example data set of SQL capabilities. Some aspects of SELECT functionality are returned in a comma-separated list if supported; otherwise, the column contains NO.
| Name | Description | Possible Values |
|---|---|---|
| AGGREGATE_FUNCTIONS | Supported aggregation functions. | AVG, COUNT, MAX, MIN, SUM, DISTINCT |
| COUNT | Whether the COUNT function is supported. | YES, NO |
| IDENTIFIER_QUOTE_OPEN_CHAR | The opening character used to escape an identifier. | [ |
| IDENTIFIER_QUOTE_CLOSE_CHAR | The closing character used to escape an identifier. | ] |
| SUPPORTED_OPERATORS | A list of supported SQL operators. | =, >, <, >=, <=, <>, !=, LIKE, NOT LIKE, IN, NOT IN, IS NULL, IS NOT NULL, AND, OR |
| GROUP_BY | Whether GROUP BY is supported, and, if so, the degree of support. | NO, NO_RELATION, EQUALS_SELECT, SQL_GB_COLLATE |
| STRING_FUNCTIONS | Supported string functions. | LENGTH, CHAR, LOCATE, REPLACE, SUBSTRING, RTRIM, LTRIM, RIGHT, LEFT, UCASE, SPACE, SOUNDEX, LCASE, CONCAT, ASCII, REPEAT, OCTET, BIT, POSITION, INSERT, TRIM, UPPER, REGEXP, LOWER, DIFFERENCE, CHARACTER, SUBSTR, STR, REVERSE, PLAN, UUIDTOSTR, TRANSLATE, TRAILING, TO, STUFF, STRTOUUID, STRING, SPLIT, SORTKEY, SIMILAR, REPLICATE, PATINDEX, LPAD, LEN, LEADING, KEY, INSTR, INSERTSTR, HTML, GRAPHICAL, CONVERT, COLLATION, CHARINDEX, BYTE |
| NUMERIC_FUNCTIONS | Supported numeric functions. | ABS, ACOS, ASIN, ATAN, ATAN2, CEILING, COS, COT, EXP, FLOOR, LOG, MOD, SIGN, SIN, SQRT, TAN, PI, RAND, DEGREES, LOG10, POWER, RADIANS, ROUND, TRUNCATE |
| TIMEDATE_FUNCTIONS | Supported date/time functions. | NOW, CURDATE, DAYOFMONTH, DAYOFWEEK, DAYOFYEAR, MONTH, QUARTER, WEEK, YEAR, CURTIME, HOUR, MINUTE, SECOND, TIMESTAMPADD, TIMESTAMPDIFF, DAYNAME, MONTHNAME, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, EXTRACT |
| REPLICATION_SKIP_TABLES | Indicates tables skipped during replication. | |
| REPLICATION_TIMECHECK_COLUMNS | A string array of columns that are checked, in the given order, for use as the modified column during replication. | |
| IDENTIFIER_PATTERN | String value indicating what string is valid for an identifier. | |
| SUPPORT_TRANSACTION | Indicates if the provider supports transactions such as commit and rollback. | YES, NO |
| DIALECT | Indicates the SQL dialect to use. | |
| KEY_PROPERTIES | Indicates the properties which identify the uniform database. | |
| SUPPORTS_MULTIPLE_SCHEMAS | Indicates if multiple schemas may exist for the provider. | YES, NO |
| SUPPORTS_MULTIPLE_CATALOGS | Indicates if multiple catalogs may exist for the provider. | YES, NO |
| DATASYNCVERSION | The Data Sync version needed to access this driver. | Standard, Starter, Professional, Enterprise |
| DATASYNCCATEGORY | The Data Sync category of this driver. | Source, Destination, Cloud Destination |
| SUPPORTSENHANCEDSQL | Whether enhanced SQL functionality beyond what is offered by the API is supported. | TRUE, FALSE |
| SUPPORTS_BATCH_OPERATIONS | Whether batch operations are supported. | YES, NO |
| SQL_CAP | All supported SQL capabilities for this driver. | SELECT, INSERT, DELETE, UPDATE, TRANSACTIONS, ORDERBY, OAUTH, ASSIGNEDID, LIMIT, LIKE, BULKINSERT, COUNT, BULKDELETE, BULKUPDATE, GROUPBY, HAVING, AGGS, OFFSET, REPLICATE, COUNTDISTINCT, JOINS, DROP, CREATE, DISTINCT, INNERJOINS, SUBQUERIES, ALTER, MULTIPLESCHEMAS, GROUPBYNORELATION, OUTERJOINS, UNIONALL, UNION, UPSERT, GETDELETED, CROSSJOINS, GROUPBYCOLLATE, MULTIPLECATS, FULLOUTERJOIN, MERGE, JSONEXTRACT, BULKUPSERT, SUM, SUBQUERIESFULL, MIN, MAX, JOINSFULL, XMLEXTRACT, AVG, MULTISTATEMENTS, FOREIGNKEYS, CASE, LEFTJOINS, COMMAJOINS, WITH, LITERALS, RENAME, NESTEDTABLES, EXECUTE, BATCH, BASIC, INDEX |
| PREFERRED_CACHE_OPTIONS | A string value that specifies the preferred cacheOptions. | |
| ENABLE_EF_ADVANCED_QUERY | Indicates if the driver directly supports advanced queries coming from Entity Framework. If not, queries are handled client side. | YES, NO |
| PSEUDO_COLUMNS | A string array indicating the available pseudo columns. | |
| MERGE_ALWAYS | If true, merge mode is forcibly executed in Data Sync. | TRUE, FALSE |
| REPLICATION_MIN_DATE_QUERY | A SELECT query to return the replicate start datetime. | |
| REPLICATION_MIN_FUNCTION | Allows a provider to specify the formula name to use for executing a server-side min. | |
| REPLICATION_START_DATE | Allows a provider to specify a replicate start date. | |
| REPLICATION_MAX_DATE_QUERY | A SELECT query to return the replicate end datetime. | |
| REPLICATION_MAX_FUNCTION | Allows a provider to specify the formula name to use for executing a server-side max. | |
| IGNORE_INTERVALS_ON_INITIAL_REPLICATE | A list of tables which will skip dividing the replicate into chunks on the initial replicate. | |
| CHECKCACHE_USE_PARENTID | Indicates whether the CheckCache statement should be done against the parent key column. | TRUE, FALSE |
| CREATE_SCHEMA_PROCEDURES | Indicates stored procedures that can be used for generating schema files. | |
The following query retrieves the operators that can be used in the WHERE clause:
SELECT * FROM sys_sqlinfo WHERE Name = 'SUPPORTED_OPERATORS'
Note that individual tables may have different limitations or requirements on the WHERE clause; refer to the Data Model section for more information.
Columns
| Name | Type | Description |
|---|---|---|
| NAME | String | A component of SQL syntax, or a capability that can be processed on the server. |
| VALUE | String | Detail on the supported SQL or SQL syntax. |
sys_identity
Returns information about attempted modifications.
The following query retrieves the Ids of the modified rows in a batch operation:
SELECT * FROM sys_identity
Columns
| Name | Type | Description |
|---|---|---|
| Id | String | The database-generated ID returned from a data modification operation. |
| Batch | String | An identifier for the batch. 1 for a single operation. |
| Operation | String | The result of the operation in the batch: INSERTED, UPDATED, or DELETED. |
| Message | String | SUCCESS or an error message if the update in the batch failed. |
sys_information
Describes the available system information.
The following query retrieves all columns:
SELECT * FROM sys_information
Columns
| Name | Type | Description |
|---|---|---|
| Product | String | The name of the product. |
| Version | String | The version number of the product. |
| Datasource | String | The name of the datasource the product connects to. |
| NodeId | String | The unique identifier of the machine where the product is installed. |
| HelpURL | String | The URL to the product's help documentation. |
| License | String | The license information for the product. (If this information is not available, the field may be left blank or marked as 'N/A'.) |
| Location | String | The file path location where the product's library is stored. |
| Environment | String | The version of the environment or runtime the product is currently running under. |
| DataSyncVersion | String | The tier of Sync required to use this connector. |
| DataSyncCategory | String | The category of Sync functionality (e.g., Source, Destination). |
Advanced Configurations Properties
The advanced configurations properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure. Click the links for further details.
| Property | Description |
|---|---|
| AuthScheme | The scheme used for authentication. Accepted entries are None, Negotiate (Kerberos), and Token. None is the default. |
| Host | This property specifies the host of your HDFS installation. |
| Port | This property specifies the port of your HDFS installation. |
| User | The user name used to log in to the HDFS server. |
| Password | The password used to authenticate to the HDFS server. Only used when Kerberos authentication is selected. |
| AccessToken | The HDFS Access Token. |
| UseSSL | This field sets whether SSL is enabled. |
| Property | Description |
|---|---|
| Path | This property specifies the HDFS path which will be used as the working directory. |
| DirectoryRetrievalDepth | Specifies how many subfolders are recursively scanned before stopping. |
| Property | Description |
|---|---|
| KerberosKDC | Identifies the Kerberos Key Distribution Center (KDC) service used to authenticate the user. (SPNEGO or Windows authentication only). |
| KerberosRealm | Identifies the Kerberos Realm used to authenticate the user. |
| KerberosSPN | Identifies the service principal name (SPN) for the Kerberos Domain Controller. |
| KerberosUser | Confirms the principal name for the Kerberos Domain Controller, which uses the format host/user@realm. |
| KerberosKeytabFile | Identifies the Keytab file containing your pairs of Kerberos principals and encrypted keys. |
| KerberosServiceRealm | Identifies the service's Kerberos realm. (Cross-realm authentication only). |
| KerberosServiceKDC | Identifies the service's Kerberos Key Distribution Center (KDC). |
| KerberosTicketCache | Specifies the full file path to an MIT Kerberos credential cache file. |
| Property | Description |
|---|---|
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
| Property | Description |
|---|---|
| Location | Specifies the location of a directory containing schema files that define tables, views, and stored procedures. Depending on your service's requirements, this may be expressed as either an absolute path or a relative path. |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| Tables | Optional setting that restricts the tables reported to a subset of all available tables. For example, Tables=TableA,TableB,TableC. |
| Views | Optional setting that restricts the views reported to a subset of all available views. For example, Views=ViewA,ViewB,ViewC. |
| Property | Description |
|---|---|
| MaxRows | Specifies the maximum rows returned for queries without aggregation or GROUP BY. |
| Other | Specifies additional hidden properties for specific use cases. These are not required for typical provider functionality. Use a semicolon-separated list to define multiple properties. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns. Use the format 'TableName=ColumnName;TableName=ColumnName'. The default is an empty string, which disables this property. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. The default is 60 seconds. Set to 0 to disable the timeout. |
| UserDefinedViews | Specifies a filepath to a JSON configuration file defining custom views. The provider automatically detects and uses the views specified in this file. |
Authentication
This section provides a complete list of authentication properties you can configure.
| Property | Description |
|---|---|
| AuthScheme | The scheme used for authentication. Accepted entries are None, Negotiate (Kerberos), and Token. None is the default. |
| Host | This property specifies the host of your HDFS installation. |
| Port | This property specifies the port of your HDFS installation. |
| User | The user name used to log in to the HDFS server. |
| Password | The password used to authenticate to the HDFS server. Only used when Kerberos authentication is selected. |
| AccessToken | The HDFS Access Token. |
| UseSSL | This field sets whether SSL is enabled. |
AuthScheme
The scheme used for authentication. Accepted entries are None, Negotiate (Kerberos), and Token. None is the default.
Possible Values
None, Negotiate, Token
Data Type
string
Default Value
None
Remarks
This field is used to authenticate against the server. Use the following options to select your authentication scheme:
- None: Set this to use anonymous authentication and connect to the HDFS data source without specifying user credentials.
- Negotiate: If AuthScheme is set to Negotiate, the connector negotiates an authentication mechanism with the server. Set AuthScheme to Negotiate if you want to use Kerberos authentication.
- Token: Set this to authenticate using an AccessToken.
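The schemes above can be illustrated with sample connection strings. The host, credentials, and realm below are hypothetical placeholders, and the token value is deliberately elided:

```
# Anonymous authentication (hypothetical host):
AuthScheme=None;Host=namenode.example.com;Port=9870;

# Kerberos authentication (hypothetical principal and realm):
AuthScheme=Negotiate;Host=namenode.example.com;Port=9870;User=alice;Password=secret;KerberosRealm=EXAMPLE.COM;

# Token authentication:
AuthScheme=Token;Host=namenode.example.com;Port=9870;AccessToken=...;
```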
Host
This property specifies the host of your HDFS installation.
Data Type
string
Default Value
""
Remarks
This property specifies the host of your HDFS installation.
Port
This property specifies the port of your HDFS installation.
Data Type
string
Default Value
9870
Remarks
This property specifies the port of your HDFS installation.
User
The user name used to log in to the HDFS server.
Data Type
string
Default Value
""
Remarks
The user name used to log in to the HDFS server. If AuthScheme=None, it is used as the authenticated user. If AuthScheme=Negotiate, it is used in Kerberos authentication as the client principal.
Password
The password used to authenticate to the HDFS server. Only used when Kerberos authentication is selected.
Data Type
string
Default Value
""
Remarks
The password used to authenticate to the HDFS server. Only used when Kerberos authentication is selected.
AccessToken
The HDFS Access Token.
Data Type
string
Default Value
""
Remarks
The HDFS Access Token used to authenticate the requests.
UseSSL
This field sets whether SSL is enabled.
Data Type
bool
Default Value
false
Remarks
This field sets whether the connector will attempt to negotiate TLS/SSL connections to the server. By default, the connector checks the server's certificate against the system's trusted certificate store. To specify another certificate, set SSLServerCert.
Connection
This section provides a complete list of connection properties you can configure.
| Property | Description |
|---|---|
| Path | This property specifies the HDFS path which will be used as the working directory. |
| DirectoryRetrievalDepth | Specifies how many subfolders are recursively scanned before stopping. |
Path
This property specifies the HDFS path which will be used as the working directory.
Data Type
string
Default Value
""
Remarks
This property specifies the HDFS path which will be used as the working directory. Used in views Files and Permissions.
DirectoryRetrievalDepth
Specifies how many subfolders are recursively scanned before stopping.
Data Type
int
Default Value
-1
Remarks
DirectoryRetrievalDepth specifies how many subfolders will be recursively scanned before stopping. -1 specifies that all subfolders are scanned. 0 specifies that only the current folder will be scanned for items.
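As a sketch of these semantics (an illustration only, not the connector's implementation), the following hypothetical Python function shows how a depth of -1, 0, or a positive N bounds a recursive scan:

```python
import os

def scan(path, depth):
    """List entries under path, recursing into subfolders.

    depth=-1 scans all subfolders; depth=0 scans only the current
    folder, mirroring the documented DirectoryRetrievalDepth values.
    """
    items = []
    for name in sorted(os.listdir(path)):
        full = os.path.join(path, name)
        items.append(full)
        if os.path.isdir(full) and depth != 0:
            # A positive depth decreases by one per level; -1 stays -1.
            items.extend(scan(full, depth - 1 if depth > 0 else -1))
    return items
```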
Kerberos
This section provides a complete list of Kerberos properties you can configure.
| Property | Description |
|---|---|
| KerberosKDC | Identifies the Kerberos Key Distribution Center (KDC) service used to authenticate the user. (SPNEGO or Windows authentication only). |
| KerberosRealm | Identifies the Kerberos Realm used to authenticate the user. |
| KerberosSPN | Identifies the service principal name (SPN) for the Kerberos Domain Controller. |
| KerberosUser | Confirms the principal name for the Kerberos Domain Controller, which uses the format host/user@realm. |
| KerberosKeytabFile | Identifies the Keytab file containing your pairs of Kerberos principals and encrypted keys. |
| KerberosServiceRealm | Identifies the service's Kerberos realm. (Cross-realm authentication only). |
| KerberosServiceKDC | Identifies the service's Kerberos Key Distribution Center (KDC). |
| KerberosTicketCache | Specifies the full file path to an MIT Kerberos credential cache file. |
KerberosKDC
Identifies the Kerberos Key Distribution Center (KDC) service used to authenticate the user. (SPNEGO or Windows authentication only).
Data Type
string
Default Value
""
Remarks
The Kerberos properties are used when using SPNEGO or Windows Authentication. The connector requests session tickets and temporary session keys from the Kerberos KDC service, which is usually co-located with the domain controller.
Note
Windows authentication is supported in JRE 1.6 and above only.
If KerberosKDC is not specified, the connector tries to detect these properties automatically from the following locations:
- KRB5 Config File (krb5.ini/krb5.conf): If the KRB5_CONFIG environment variable is set and the file exists, the connector obtains the KDC from the specified file. If it is not found there, the connector tries to read from the default MIT location based on the OS: C:\ProgramData\MIT\Kerberos5\krb5.ini (Windows) or /etc/krb5.conf (Linux).
- Java System Properties: Using the system properties java.security.krb5.realm and java.security.krb5.kdc.
- Domain Name and Host: If the Kerberos Realm and Kerberos KDC cannot be inferred from another location, the connector infers them from the configured domain name and host.
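This lookup order can be sketched in Python. The sketch below is only an illustration of the documented fallbacks, not the connector's actual logic: the krb5.conf parsing is deliberately loose, the Java-system-property step is skipped (it is not visible outside a JVM), and the final domain-based guess is hypothetical:

```python
import os
import re

def find_kdc(realm, domain=None, host=None):
    """Illustrative KDC lookup: KRB5_CONFIG (or the default MIT
    location), then a fallback inferred from the domain name or host."""
    conf = os.environ.get("KRB5_CONFIG")
    if not conf or not os.path.exists(conf):
        conf = (r"C:\ProgramData\MIT\Kerberos5\krb5.ini"
                if os.name == "nt" else "/etc/krb5.conf")
    if os.path.exists(conf):
        text = open(conf).read()
        # Very loose parse: find "kdc = ..." inside the realm's block.
        block = re.search(re.escape(realm) + r"\s*=\s*{([^}]*)}", text)
        if block:
            kdc = re.search(r"kdc\s*=\s*(\S+)", block.group(1))
            if kdc:
                return kdc.group(1)
    # Fall back to inferring from the configured domain name or host.
    return "kerberos." + domain if domain else host
```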
KerberosRealm
Identifies the Kerberos Realm used to authenticate the user.
Data Type
string
Default Value
""
Remarks
A realm is a logical network, similar to a domain, that defines a group of systems under the same master KDC. Some realms are hierarchical, where one realm is a superset of the other realm, but usually realms are nonhierarchical (or “direct”) and the mapping between the two realms must be defined. Kerberos cross-realm authentication enables authentication across realms. Each realm only needs to have a principal entry for the other realm in its KDC.
The Kerberos properties are used when using SPNEGO or Windows Authentication. The connector requests session tickets and temporary session keys from the Kerberos KDC service, which is usually co-located with the domain controller. The Kerberos Realm can be configured by an administrator to be any string, but it is usually based on the domain name.
If Kerberos Realm is not specified, the connector will attempt to detect these properties automatically from the following locations:
- KRB5 Config File (krb5.ini/krb5.conf): If the KRB5_CONFIG environment variable is set and the file exists, the connector obtains the default realm from the specified file. Otherwise, it attempts to read from the default MIT location based on the OS: C:\ProgramData\MIT\Kerberos5\krb5.ini (Windows) or /etc/krb5.conf (Linux).
- Java System Properties: Using the system properties java.security.krb5.realm and java.security.krb5.kdc.
- Domain Name and Host: If the Kerberos Realm and Kerberos KDC could not be inferred from another location, the connector infers them from the user-configured domain name and host. This might work in some Windows environments.
Note
Kerberos-based authentication is supported in JRE 1.6 and above only.
KerberosSPN
Identifies the service principal name (SPN) for the Kerberos Domain Controller.
Data Type
string
Default Value
""
Remarks
If the SPN on the Kerberos Domain Controller is not the same as the URL that you are authenticating to, use this property to set the SPN to the KDC's URL.
KerberosUser
Confirms the principal name for the Kerberos Domain Controller, which uses the format host/user@realm.
Data Type
string
Default Value
""
Remarks
If there is a Kerberos principal, that Kerberos principal name should always be used to authenticate to the database.
KerberosKeytabFile
Identifies the Keytab file containing your pairs of Kerberos principals and encrypted keys.
Data Type
string
Default Value
""
Remarks
A keytab (short for “key table”) stores long-term keys for one or more principals. In most cases, end users authenticate to the KDC using their client secret (password). However, in situations where authentication or re-authentication happen using automated scripts and applications, it may be more efficient to use a keytab, which sends passwords to the KDC in encrypted form, automatically.
Keytabs are normally represented by files in a standard format, and named using the format type:value. Usually type is FILE and value is the absolute pathname of the file. The other possible value for type is MEMORY, which indicates a temporary keytab stored in the memory of the current process.
A keytab contains one or more entries, where each entry consists of a timestamp (indicating when the entry was written to the keytab), a principal name, a key version number, an encryption type, and the encryption key itself. Keytabs can be generated using ktutil.
For example:
[admin@myhost]# ktutil
ktutil: addent -password -p starlord/myhost.galaxy.com@GALAXY.COM -k 1 -e aes256-cts-hmac-sha1-96
Password for starlord/myhost.galaxy.com:
ktutil: addent -password -p starlord/myhost.galaxy.com@GALAXY.COM -k 1 -e aes128-cts-hmac-sha1-96
Password for starlord/myhost.galaxy.com:
ktutil: addent -password -p starlord/myhost.galaxy.com@GALAXY.COM -k 1 -e des3-cbc-sha1
Password for starlord/myhost.galaxy.com:
ktutil: wkt /path/to/starlord.keytab
Note
You must create principals for all authentication methods (encryption types) you want to support.
To display a keytab, use klist -k.
KerberosServiceRealm
Identifies the service's Kerberos realm. (Cross-realm authentication only).
Data Type
string
Default Value
""
Remarks
The KerberosServiceRealm is used to specify a service's KerberosRealm when using cross-realm Kerberos authentication.
In most cases, a single realm and KDC machine are used to perform the Kerberos authentication, which means that this property would not be required. However, the property is available for complex setups where a different realm and KDC machine are used to obtain an authentication ticket (AS request) and a service ticket (TGS request).
KerberosServiceKDC
Identifies the service's Kerberos Key Distribution Center (KDC).
Data Type
string
Default Value
""
Remarks
The KerberosServiceKDC is used to specify the service Kerberos KDC when using cross-realm Kerberos authentication.
In most cases, a single realm and KDC machine are used to perform the Kerberos authentication, which means that this property would not be required. However, the property is available for complex setups where a different realm and KDC machine are used to obtain an authentication ticket (AS request) and a service ticket (TGS request).
KerberosTicketCache
Specifies the full file path to an MIT Kerberos credential cache file.
Data Type
string
Default Value
""
Remarks
Set this property if you want to use a credential cache file that was created using the MIT Kerberos Ticket Manager or kinit command.
SSL
This section provides a complete list of SSL properties you can configure.
| Property | Description |
|---|---|
| SSLServerCert | Specifies the certificate to be accepted from the server when connecting using TLS/SSL. |
SSLServerCert
Specifies the certificate to be accepted from the server when connecting using TLS/SSL.
Data Type
string
Default Value
""
Remarks
If using a TLS/SSL connection, this property can be used to specify the TLS/SSL certificate to be accepted from the server. Any other certificate that is not trusted by the machine is rejected.
This property can take the following forms:
| Description | Example |
|---|---|
| A full PEM Certificate (example shortened for brevity) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
| A path to a local file containing the certificate | C:\\cert.cer |
| The public key (example shortened for brevity) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
| The MD5 Thumbprint (hex values can also be either space or colon separated) | ecadbdda5a1529c58a1e9e09828d70e4 |
| The SHA1 Thumbprint (hex values can also be either space or colon separated) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
If not specified, any certificate trusted by the machine is accepted.
Certificates are validated as trusted by the machine based on the System's trust store. The trust store used is the 'javax.net.ssl.trustStore' value specified for the system. If no value is specified for this property, Java's default trust store is used (for example, JAVA_HOME\lib\security\cacerts).
Use '*' to accept all certificates. Note that this is not recommended due to security concerns.
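When you have the certificate file but not its thumbprint, the MD5 and SHA1 forms above can be derived from the certificate's DER bytes. The following is a minimal standard-library sketch, not the connector's own matching logic (which, per the table above, may also accept space- or colon-separated hex):

```python
import hashlib
import ssl

def thumbprints(der_bytes):
    """Return the (MD5, SHA1) hex thumbprints of a certificate's DER bytes."""
    return (hashlib.md5(der_bytes).hexdigest(),
            hashlib.sha1(der_bytes).hexdigest())

def pem_thumbprints(pem_text):
    """Same, starting from a PEM-encoded certificate string."""
    return thumbprints(ssl.PEM_cert_to_DER_cert(pem_text))
```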
Schema
This section provides a complete list of schema properties you can configure.
| Property | Description |
|---|---|
| Location | Specifies the location of a directory containing schema files that define tables, views, and stored procedures. Depending on your service's requirements, this may be expressed as either an absolute path or a relative path. |
| BrowsableSchemas | Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC. |
| Tables | Optional setting that restricts the tables reported to a subset of all available tables. For example, Tables=TableA,TableB,TableC. |
| Views | Optional setting that restricts the views reported to a subset of all available views. For example, Views=ViewA,ViewB,ViewC. |
Location
Specifies the location of a directory containing schema files that define tables, views, and stored procedures. Depending on your service's requirements, this may be expressed as either an absolute path or a relative path.
Data Type
string
Default Value
%APPDATA%\HDFS Data Provider\Schema
Remarks
The Location property is only needed if you want to either customize definitions (for example, change a column name, ignore a column, etc.) or extend the data model with new tables, views, or stored procedures.
If left unspecified, the default location is %APPDATA%\HDFS Data Provider\Schema, where %APPDATA% is set to the user's configuration directory:
| Platform | %APPDATA% |
|---|---|
| Windows | The value of the APPDATA environment variable |
| Mac | ~/Library/Application Support |
| Linux | ~/.config |
BrowsableSchemas
Optional setting that restricts the schemas reported to a subset of all available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.
Data Type
string
Default Value
""
Remarks
Listing all available database schemas can take extra time, thus degrading performance. Providing a list of schemas in the connection string saves time and improves performance.
Tables
Optional setting that restricts the tables reported to a subset of all available tables. For example, Tables=TableA,TableB,TableC.
Data Type
string
Default Value
""
Remarks
Listing all available tables from some databases can take extra time, thus degrading performance. Providing a list of tables in the connection string saves time and improves performance.
If there are lots of tables available and you already know which ones you want to work with, you can use this property to restrict your viewing to only those tables. To do this, specify the tables you want in a comma-separated list. Each table should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Tables=TableA,[TableB/WithSlash],WithCatalog.WithSchema.`TableC With Space`.
Note
If you are connecting to a data source with multiple schemas or catalogs, you must specify each table you want to view by its fully qualified name. This avoids ambiguity between tables that may exist in multiple catalogs or schemas.
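The escaping rule above (square brackets around identifiers containing special characters) can be sketched for single, unqualified identifiers. This is a hypothetical helper, assuming bracket-quoting with `]]` as the escape for a literal `]`; the connector also accepts double-quotes or backticks.

```python
import re

def quote_identifier(name):
    """Bracket-quote an identifier when it contains characters outside
    the usual unquoted-identifier set. Sketch for single identifiers
    only; fully qualified names must be quoted part by part."""
    if re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        return name
    return "[" + name.replace("]", "]]") + "]"

def build_tables_value(tables):
    """Join table names into the comma-separated Tables value."""
    return ",".join(quote_identifier(t) for t in tables)
```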
Views
Optional setting that restricts the views reported to a subset of all available views. For example, Views=ViewA,ViewB,ViewC.
Data Type
string
Default Value
""
Remarks
Listing all available views from some databases can take extra time, thus degrading performance. Providing a list of views in the connection string saves time and improves performance.
If there are lots of views available and you already know which ones you want to work with, you can use this property to restrict your viewing to only those views. To do this, specify the views you want in a comma-separated list. Each view should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Views=ViewA,[ViewB/WithSlash],WithCatalog.WithSchema.`ViewC With Space`.
Note
If you are connecting to a data source with multiple schemas or catalogs, you must specify each view you want to examine by its fully qualified name. This avoids ambiguity between views that may exist in multiple catalogs or schemas.
Miscellaneous
This section provides a complete list of miscellaneous properties you can configure.
| Property | Description |
|---|---|
| MaxRows | Specifies the maximum rows returned for queries without aggregation or GROUP BY. |
| Other | Specifies additional hidden properties for specific use cases. These are not required for typical provider functionality. Use a semicolon-separated list to define multiple properties. |
| PseudoColumns | Specifies the pseudocolumns to expose as table columns. Use the format 'TableName=ColumnName;TableName=ColumnName'. The default is an empty string, which disables this property. |
| Timeout | Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. The default is 60 seconds. Set to 0 to disable the timeout. |
| UserDefinedViews | Specifies a filepath to a JSON configuration file defining custom views. The provider automatically detects and uses the views specified in this file. |
MaxRows
Specifies the maximum rows returned for queries without aggregation or GROUP BY.
Data Type
int
Default Value
-1
Remarks
This property sets an upper limit on the number of rows the connector returns for queries that do not include aggregation or GROUP BY clauses. This limit ensures that queries do not return excessively large result sets by default.
When a query includes a LIMIT clause, the value specified in the query takes precedence over the MaxRows setting. If MaxRows is set to "-1", no row limit is enforced unless a LIMIT clause is explicitly included in the query.
This property is useful for optimizing performance and preventing excessive resource consumption when executing queries that could otherwise return very large datasets.
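The precedence between an explicit LIMIT clause and MaxRows can be summarized in a small sketch (the function name is hypothetical; this is not the connector's implementation):

```python
def effective_row_cap(max_rows, query_limit=None):
    """Return the row cap a query is subject to, following the
    precedence described above: an explicit LIMIT always wins;
    otherwise MaxRows applies unless it is -1 (unlimited).
    Returns None when no cap is in effect."""
    if query_limit is not None:
        return query_limit
    return None if max_rows == -1 else max_rows
```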
Other
Specifies additional hidden properties for specific use cases. These are not required for typical provider functionality. Use a semicolon-separated list to define multiple properties.
Data Type
string
Default Value
""
Remarks
This property allows advanced users to configure hidden properties for specialized scenarios. These settings are not required for normal use cases but can address unique requirements or provide additional functionality. Multiple properties can be defined in a semicolon-separated list.
Note
It is strongly recommended to set these properties only when advised by the support team to address specific scenarios or issues.
Integration and Formatting
| Property | Description |
|---|---|
| DefaultColumnSize | Sets the default length of string fields when the data source does not provide column length in the metadata. The default value is 2000. |
| ConvertDateTimeToGMT=True | Converts date-time values to GMT, instead of the local time of the machine. The default value is False (use local time). |
| RecordToFile=filename | Records the underlying socket data transfer to the specified file. |
PseudoColumns
Specifies the pseudocolumns to expose as table columns. Use the format 'TableName=ColumnName;TableName=ColumnName'. The default is an empty string, which disables this property.
Data Type
string
Default Value
""
Remarks
This property allows you to define which pseudocolumns the connector exposes as table columns.
To specify individual pseudocolumns, use the following format: "Table1=Column1;Table1=Column2;Table2=Column3"
To include all pseudocolumns for all tables use: "*=*"
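The 'TableName=ColumnName' format above can be parsed into a per-table mapping as follows. This is a hypothetical sketch of how a client might interpret the value, not the connector's own parser:

```python
def parse_pseudo_columns(value):
    """Parse 'Table1=Column1;Table1=Column2;Table2=Column3' into
    {'Table1': ['Column1', 'Column2'], 'Table2': ['Column3']}.
    An entry of '*=*' exposes every pseudocolumn on every table."""
    mapping = {}
    for pair in filter(None, value.split(";")):
        table, _, column = pair.partition("=")
        mapping.setdefault(table.strip(), []).append(column.strip())
    return mapping
```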
Timeout
Specifies the maximum time, in seconds, that the provider waits for a server response before throwing a timeout error. The default is 60 seconds. Set to 0 to disable the timeout.
Data Type
int
Default Value
60
Remarks
This property controls the maximum time, in seconds, that the connector waits for an operation to complete before canceling it. If the timeout period expires before the operation finishes, the connector cancels the operation and throws an exception.
The timeout applies to each individual communication with the server rather than the entire query or operation. For example, a query could continue running beyond the timeout value if each paging call completes within the timeout limit.
Setting this property to 0 disables the timeout, allowing operations to run indefinitely until they succeed or fail due to other conditions such as server-side timeouts, network interruptions, or resource limits on the server. Use this property cautiously to avoid long-running operations that could degrade performance or result in unresponsive behavior.
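The per-communication behavior described above can be illustrated with a sketch: the whole query may run longer than the timeout as long as every individual server call completes within it, and a value of 0 disables the limit. The helper below is hypothetical and does not reflect the connector's internals.

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as CallTimeout

def fetch_all_pages(fetch_page, num_pages, timeout):
    """Apply `timeout` (seconds) to each page request individually,
    mirroring the per-call Timeout semantics described above.
    A timeout of 0 disables the per-call limit entirely."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        for page in range(num_pages):
            future = pool.submit(fetch_page, page)
            try:
                # `timeout or None`: 0 becomes None, i.e. wait forever
                results.append(future.result(timeout=timeout or None))
            except CallTimeout:
                raise TimeoutError(f"no response for page {page} within {timeout}s")
    return results
```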
UserDefinedViews
Specifies a filepath to a JSON configuration file defining custom views. The provider automatically detects and uses the views specified in this file.
Data Type
string
Default Value
""
Remarks
This property allows you to define and manage custom views through a JSON-formatted configuration file called UserDefinedViews.json. These views are automatically recognized by the connector and enable you to execute custom SQL queries as if they were standard database views. The JSON file defines each view as a root element with a child element called "query", which contains the SQL query for the view. For example:
```json
{
    "MyView": {
        "query": "SELECT * FROM Files WHERE MyColumn = 'value'"
    },
    "MyView2": {
        "query": "SELECT * FROM MyTable WHERE Id IN (1,2,3)"
    }
}
```
You can define multiple views in a single file and specify the filepath using this property. For example: UserDefinedViews=C:\Path\To\UserDefinedViews.json. When you use this property, the connector exposes only the views defined in this file.
Refer to User Defined Views for more information.
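A client-side loader for the JSON shape shown above might look like the following. This is an illustrative sketch (the function name is hypothetical); the connector itself reads and validates the file automatically.

```python
import json

def load_user_defined_views(path):
    """Load a UserDefinedViews.json file of the shape shown above and
    return {view_name: sql_query}. Raises ValueError when a view
    entry is missing its 'query' child element."""
    with open(path, encoding="utf-8") as f:
        views = json.load(f)
    result = {}
    for name, body in views.items():
        if "query" not in body:
            raise ValueError(f"view '{name}' has no 'query' element")
        result[name] = body["query"]
    return result
```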