Snowflake
Snowflake is a high-performing cloud data warehouse. It can either be used to store your data in it and/or to use it as a data source.
This page describes connecting and using Snowflake as a data source. If you want to write into Snowflake, please refer to Snowflake Data Warehouse.
Type Name
snowflake
Connection Properties
Template name: snowflake
Appropriate translator name: snowflake
Properties:
host(default:localhost)db(can be omitted; for multiple seecatalogmodel property)schemawarehouse(required; if absent:No warehouse name provided; for multiple warehouses see theroleMappingtranslator property)CLIENT_METADATA_REQUEST_USE_CONNECTION_CTX(default:TRUE)application(default:DataVirtuality_DV)user-namepasswordprivateKey(base64 or path)privateKeyPassphraseauthenticator(default: empty; ‘OAuth’ for OAuth authentication)accessToken(default: empty; for OAuth authentication)accessTokenEndpoint(default: empty; for OAuth authentication)refreshToken(default: empty; for OAuth authentication)clientId(default: empty; for OAuth authentication)clientSecret(default: empty; for OAuth authentication)scope(default: empty; for OAuth authentication)role(default: empty; for OAuth authentication)driver(default:snowflake)driver-class(default:net.snowflake.client.jdbc.SnowflakeDriver)jdbc-properties(arbitrary extra properties)new-connection-sqlcheck-valid-connection-sql(default:select 0)min-pool-size(default:2)max-pool-size(default:70)cloudAgent(default:FALSE)
privateKey, privateKeyPassphrase, authenticator, accessToken, accessTokenEndpoint, refreshToken, clientId, clientSecret, scope, role available since v25.2
Please note that Snowflake blocks single-factor authentication in November 2025.
Examples
CALL "SYSADMIN.createOrReplaceDatasource"(
"name" => 'snowflake_regular',
"translator" => 'snowflake',
"modelProperties" => 'importer.defaultSchema=<SCHEMA>,importer.schemaPattern=<SCHEMA>,importer.useFullSchemaName=FALSE,importer.tableTypes="TABLE,VIEW"',
"translatorProperties" => 'supportsNativeQueries=true',
"connectionTemplateName" => 'snowflake',
"connectionProperties" => 'host=<your_account_name.snowflakecomputing.com,db=<database>,user-name=<user_name>,password=<user_password>,warehouse=<warehouse>'
);;
An example with a private key and pass phrase:
CALL "SYSADMIN.createOrReplaceDatasource"(
"name" => 'snowflake_private_key',
"translator" => 'snowflake',
"modelProperties" => 'importer.defaultSchema=<SCHEMA>,importer.schemaPattern=<SCHEMA>,importer.useFullSchemaName=FALSE,importer.tableTypes="TABLE,VIEW"',
"translatorProperties" => 'supportsNativeQueries=true',
"connectionTemplateName" => 'snowflake',
"connectionProperties" => 'host=<your_account_name.snowflakecomputing.com,db=<database>,user-name=<user_name>,privateKey=c:/private_key.p8,privateKeyPassphrase=<privateKeyPassphrase>,warehouse=<warehouse>'
);;
Please note that privateKey can be provided as:
A Base64-encoded value
A path to a file with a plain-text value
You can encode a plain-text value to Base64 using the to_chars(to_bytes(snowflake_key, 'UTF-8'), 'base64') function.
When creating a data source via the Web UI or CData Virtuality Studio, the file does not need to be Base64-encoded; the encoding occurs automatically.
A data source with authentication via Microsoft Entra ID:
CALL "SYSADMIN.createOrReplaceDatasource"(
"name" => 'snowflake_entra_oauth',
"translator" => 'snowflake',
"modelProperties" => 'importer.defaultSchema=<SCHEMA>,importer.schemaPattern=<SCHEMA>,importer.useFullSchemaName=FALSE,importer.tableTypes="TABLE,VIEW"',
"translatorProperties" => 'supportsNativeQueries=true',
"connectionTemplateName" => 'snowflake',
"connectionProperties" => 'host=<host>,db=<db>,warehouse=<warehouse>,authenticator=OAuth,role=analyst,accessToken="",accessTokenEndpoint="https://login.microsoftonline.com/XXXX/OAuth2/v2.0/token",refreshToken="",clientId="XXXX",clientSecret="XXXX",scope="https://XXXX/session:scope:analyst offline_access",user-name="<user@somename.onmicrosoft.com>",password="<XXXX>"'
);;
A data source with multiple warehouses and role mapping:
CALL "SYSADMIN.createOrReplaceDatasource"(
"name" => 'snowflake_roleMapping',
"translator" => 'snowflake',
"modelProperties" => 'importer.defaultSchema=<SCHEMA>,importer.schemaPattern=<SCHEMA>,importer.useFullSchemaName=FALSE,importer.tableTypes="TABLE,VIEW"',
"translatorProperties" => 'supportsNativeQueries=true,roleMapping="DV_ROLE1=SF_ROLE1{WH1},DV_ROLE2=SF_ROLE2,DV_ROLE3=SF_ROLE3{WH3}',
"connectionTemplateName" => 'snowflake',
"connectionProperties" => 'host=<your_account_name.snowflakecomputing.com,db=<database>,user-name=<user_name>,privateKey=c:/private_key.p8,privateKeyPassphrase=<privateKeyPassphrase>,warehouse=<warehouse>'
);;
Translator Properties
Translator Properties Shared by All JDBC Connectors
(Properties listed in alphabetical order)
To view the full table, click the expand button in its top right corner
Name | Description | Default value |
|---|---|---|
| Sets a template to convert Examples
SQL
|
|
| Database time zone, used when fetching date, time, or timestamp values | System default time zone |
| Specific database version, used to fine-tune pushdown support | Automatically detected by the server through the data source JDBC driver, if possible |
| Only considered when |
|
| If |
|
| Maximum size of prepared insert batch |
|
| Sets a template to convert Examples
SQL
|
|
| If |
|
| Forces a translator to issue a Example
SQL
|
|
| If If |
|
| If If |
|
| If |
|
| if |
|
| Embeds a / comment / leading comment with session/request id in the source SQL query for informational purposes |
|
The names of the translator properties are case-sensitive.
Translator Properties Specific for Snowflake
To view the full table, click the expand button in its top right corner
Name | Description | Default value |
|---|---|---|
| Value: Uploads data as files on Amazon S3 storage. Translator properties | |
Value: Uploads data as files on Azure Blob storage. Translator properties | ||
Value: Uploads data as files to the Snowflake internal stage. The value is case-insensitive. | ||
| Value: a path to a folder (relative or absolute). Specifies the folder to be used for creating temporary files instead of the system configured one. If not specified, the default OS tmp folder is used | |
| Value: a bucket name. Only for the | |
| Value: a bucket prefix. Only for the | |
| Value: boolean. Specifies if the bucket set by the |
|
| Value: a region. Only for the | |
| Only for the | |
| As Redshift and Vertica measure varchar size in bytes, not chars, and store strings in UTF-8 encoding, a char may be 1 to 4 bytes long. You can specify the percentage by which the byte size will exceed the original char size. Also, there is a special value: 65535 which makes every varchar to be of 65535 bytes long | |
| Keep temporary files after uploading |
|
| The name of the internal stage to be created or used for uploading the files. If not set, a named stage with a unique name starting with TEMPORARY_NAMED_STAGE_ will be created for uploading the files and removed when done being used by the command. | |
| For | |
| For | |
| For | |
| For | |
| Specifies if the container set in the |
|
| SAS Token used for accessing the Blob Storage from Snowflake. For | |
| Defines the role that is used to retrieve Snowflake metadata. If the property is not set it is used to cache the default role gotten from Snowflake via the | |
| Enables/disables the Snowflake roles usage |
|
| Used to map the roles defined in the CData Virtuality Server to the roles on the Snowflake side. Mandatory if the The roles must be mapped as follows: Snowflake warehouses can also be assigned via the In the case of authentication via LDAP AD roles must be mapped to Snowflake roles in the same way. |
defaultRole, enableReAuth, roleMapping translator properties available since v4.1
Mapping AD roles to Snowflake roles supported since v4.7
internal_stage upload mode available since v4.8
Mapping warehouses to roles supported since v4.10
Data Source Properties
Data Source Properties Shared by All JDBC Connectors
(Properties listed in alphabetical order)
To view the full table, click the expand button in its top right corner
Name | Description | Default |
|---|---|---|
| Replaces |
|
| Database catalogs to use. Can be used if the Only for Microsoft SQL Server and Snowflake:
Only for Snowflake: the | Exasol:
SQL
All others: empty |
|
| |
|
Please note that writing into a data source is only possible if this parameter is set. | Empty |
| Turns on metadata cache for a single data source even when the global option is turned off. Together with |
|
| Case-insensitive regular expression that will exclude a matching fully qualified procedure name from import | Empty |
| Comma-separated list of schemas (no | Oracle:
SQL
All others: empty |
| Case-insensitive regular expression that will exclude a matching fully qualified table name from import. Does not speed up metadata loading. Here are some examples: 1. Excluding all tables in the (source) schemas
SQL
2. Excluding all tables except the ones starting with "public.br" and "public.mk" using a negative lookahead:
SQL
3. Excluding "tablename11" from the list ["tablename1", "tablename11", "company", "companies"]:
SQL
| Empty |
| Fetch size assigned to a resultset on loading metadata | No default value |
| If set to |
|
| If set to |
|
| If set to |
|
| If set to Please note that it is currently not possible to import procedures which use the same name for more than one parameter (e.g. same name for |
|
| If set to |
|
| If set to |
|
| If set to |
|
| Procedure(s) to import. If omitted, all procedures will be imported. | Empty |
| If set to |
|
| If set to |
|
| If set to |
|
| If set to |
|
| Schema(s) to import. If omitted or has "" value, all schemas will be imported. Schema names can be fully qualifed, for example | Empty |
| If set to |
|
| Table(s) to import. If omitted, all tables will be imported. Table names can be fully qualifed, for example | Empty |
| Comma-separated list (without spaces) of table types to import. Available types depend on the DBMS. Usual format: Other typical types are | Empty |
| If set to |
|
| If set to Please note that this may lead to objects with duplicate names when importing from multiple schemas, which results in an exception |
|
| If set to |
|
| If set to |
|
The names of the data source properties are case-sensitive.
importer.catalog property can be used in Snowflake together with the db connection property since v25.3
importer.schemaPattern supports fully qualified names since v25.3
importer.tableNamePattern supports fully qualified names since v25.3
When importer.loadMetadataWithJdbc is set to FALSE, all tables in the Snowflake data source will be loaded during metadata retrieval, which may cause delays if the number of tables exceeds 10,000. To improve performance, set this property to TRUE - however, only the 10,000 tables will be loaded in this case.
importer.loadMetadataWithJdbc available since v25.2