CData Virtuality Upgrade Utility (Exporter)
Please use the CData Virtuality Upgrade Utility (Exporter) supplied with your target server version to upgrade from an older version of the CData Virtuality Server.
This page lists the supported configuration options, provides usage examples, and covers facades (specially designed sets of queries that facilitate upgrading from one version to another when new columns have been added), JDK usage, and JDBC driver usage.
Properties
Mandatory Properties
Property | Description |
---|---|
--host | The host where the CData Virtuality Server is running |
Optional Properties
Property | Default value | Description |
---|---|---|
--username | (none) | User name used to connect to the server. Mandatory unless the auth-type property is set to OPENID or OAUTH2 |
--password | (none) | User password. Mandatory unless the auth-type property is set to OPENID or OAUTH2 |
--port | 31000 | Server port |
--ssl | FALSE | Indicates whether to use a secure connection or not |
--database | datavirtuality | The CData Virtuality VDB schema |
--file | dv-export-<current-date>.sql | Destination file name |
--dv-facade-file | (none) | SQL facade file in the file system |
--skip-aggr | FALSE | Indicates whether to skip aggregation optimizations or not |
--ds-validation | FALSE | Indicates whether to validate data sources on import or not. Please note that the import will fail if validation is enabled and a data source is unreachable |
--opt-validation | FALSE | Indicates whether to validate optimizations on import or not. Please note that the state of materialized tables cannot be exported if this validation is enabled |
--export-mattable-state | TRUE | Indicates whether to export the state of materialized tables or not. It is used when the new CData Virtuality installation uses the same internal DWH database with materialized tables. Please note that this option requires the optimization validation (see previous entry) to be disabled |
--view-proc-validation | FALSE | Indicates whether to validate virtual views and procedures on import or not. Controls whether their definitions are printed directly to the export script or wrapped for validation |
--use-maintenance-mode | TRUE | Runs export in maintenance server mode |
--purge-system-data | FALSE | Indicates whether to purge the system data or not. If enabled, adds the script to purge the existing data at the beginning of the export script. With this option, the following objects will be purged: Data sources, Connections, Procedures, VirtualSchemas, ViewDefinitions, ScheduleJobs (except 'cleanup' and 'performance'), Schedules (except 'performance'), RecommendedOptimizations, Users, and Roles |
--use-model-file | FALSE | Indicates whether to use a different file for "model" export or not. If enabled, separate files will be used for exporting data sources, virtual schemas with views and procedures, and the rest |
| FALSE | Indicates whether to export JBoss settings or not. If enabled, JBoss settings are exported to separate files |
--java-7 | FALSE | Needed to connect to server versions that use Java 7 (2.3 and below) when using an SSL connection |
--export-smtp-configuration | TRUE | If enabled, SMTP configuration is exported, otherwise not |
--export-global-job-email-notifications | TRUE | If enabled, global job email notifications are exported, otherwise not |
--export-job-specific-email-notifications | TRUE | If enabled, job-specific email notifications are exported, otherwise not |
--export-global-job-slack-notifications | TRUE | If enabled, global job Slack notifications are exported, otherwise not |
--export-job-specific-slack-notifications | TRUE | If enabled, job-specific Slack notifications are exported, otherwise not |
--export-modular-connectors | FALSE | If enabled, deployed modular connectors are exported, otherwise not |
Note: the export-modular-connectors property is available since v4.5.
OAuth 2.0 and OpenID Connect Authorization Properties
You can use authentication via username and password or via tokens. Here are some things to keep in mind:
- If `auth-user` and `auth-password` are set, tokens should not be set;
- If `access-token` and `refresh-token` are set, `auth-user` and `auth-password` should not be set;
- If both `auth-user`/`auth-password` and tokens are set, token authentication will be used.
Property | Default value | Description |
---|---|---|
--auth-type | (none) | Authentication type (OPENID or OAUTH2). If not set, the CData Virtuality security authentication is used |
--configuration-url | (none) | Configuration URL. Mandatory if OPENID or OAUTH2 authentication type is used |
--client-id | (none) | Client identifier. Mandatory if OPENID or OAUTH2 authentication type is used |
--client-secret | (none) | Client secret key. Mandatory if OPENID or OAUTH2 authentication type is used |
--auth-user | (none) | Username of the user. Mandatory if an access token is not provided |
--auth-password | (none) | Password of the user. Mandatory if an access token is not provided |
--access-token | (none) | Access token |
--refresh-token | (none) | Refresh token |
Usage Examples
1) The server has a standard port and database name:
export.bat --username admin --password admin --host localhost
2) The server has a custom configuration and custom destination script filename specified:
export.bat --username admin --password admin --host localhost --port 31002 --ssl true --database dvbase-1 --file upgrade-server1-to-server2.sql
3) Skipping export of aggregation optimizations:
export.bat --username admin --password admin --host localhost --skip-aggr true
4) Export using create* statements instead of import*:
export.bat --username admin --password admin --host localhost --ds-validation true --opt-validation true --export-mattable-state false --view-proc-validation true
5) Export using OAuth2 via user and password:
export.bat --host localhost --auth-type OAUTH2 --configuration-url http://localhost:8080/realms/oauth/.well-known/openid-configuration --client-id oauth1 --client-secret 1FrHGnl1cdeNxUYzpKtaALk11pTeF11a --auth-user user_name --auth-password user_pass
6) Export using OPENID via tokens:
export.bat --host localhost --auth-type OPENID --configuration-url http://localhost:8080/realms/oauth/.well-known/openid-configuration --client-id oauth1 --client-secret 1FrHGnl1cdeNxUYzpKtaALk11pTeF11a --access-token dDo4MTgwIl0sInJlc291cmNlX2FjY2VzcyI6eyJvZGF0YTQtb2F1dGgiOnsicm9sZXMiOlsiYWRtaW4tcm9sZSJdfX0sInNjb3BlIjoiZW1haWwgcHJvZmlsZSIsInNpZCI6 --refresh-token CwianRpIjoiMDgzZjYzZjItYTcwMy00ZjE4LTg2N2EtNjJiYWY0ZWU1ZDVkIiwiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo4MDgwL3JlYWxtcy9vZGF0YS1vYXV0aCIsImF1ZCI6Imh0dHA6Ly9
Facade
An SQL facade is a context-specific interface for resolving compatibility issues between the source server and the target exporter.
Facades come in handy if the new version of the CData Virtuality Server adds new columns to system tables. If the newly added columns are non-nullable, the CData Virtuality Upgrade Utility must fill them with valid values. This is precisely what the facades are for: they are a set of queries that create dummy data.
Here is an example of such a facade with a detailed explanation:
getRecommendedIndexes=SELECT columnName, id, parentRecOptID, recOptID, type, cast('SINGLE' as varchar(30)) AS kind, cast(1 as biginteger) AS freq, cast('NEW' as varchar(20)) AS status FROM SYSADMIN.RecommendedIndexes
getRecommendedOptimizations=SELECT Enabled, Freq, ID, MatchDescriptor, Type, dwhState, dwhStateComment, lastMaterialized, lastReplicationState, lastReplicationStateComment, lastUsed, matTablePrefix, nextReplicationState, nextSelectState, sourceState, sourceStateComment, cast('ALL' as varchar(64)) AS allowIndexCreationByType, cast('NEW_ACCEPTED' as varchar(64)) AS allowIndexCreationByStatus FROM SYSADMIN.RecommendedOptimizations
In this example, we are updating two procedures, `getRecommendedIndexes` and `getRecommendedOptimizations`. In the target version, the first procedure receives the new fields `kind`, `freq`, and `status`, which are assigned arbitrary default values. The second procedure has two newly added fields, `allowIndexCreationByType` and `allowIndexCreationByStatus`, which are likewise assigned default values.
If you are upgrading to the immediately following version, a single-step facade may be required. If the upgrade skips several versions, you may need either a single-step facade (if only one of the versions being stepped over requires one) or a cumulative facade composed of the single-step facades for each (or some) of the steps. Please contact our support team for more information and help with obtaining the facade required for upgrading your instance to the version you need.
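Once you have obtained a facade file, pass it to the exporter via the --dv-facade-file option described in the properties table above. A sketch of such an invocation (the facade file name here is a hypothetical example):

```shell
export.bat --username admin --password admin --host localhost --dv-facade-file facade.sql
```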
JDK Usage
If the JDK is included with the server at %dvserver%/JDK, and the exporter is located in %dvserver%/bin, the exporter will use the server's Java when running the export.
If the JDK is not included with the server, or the exporter is not running from %dvserver%/bin, the system Java will be used. This is reflected in the log:
f:\datavirtuality\exporter\1_10\cli-export-1.0>export.bat
JDK is not defined at f:\datavirtuality\exporter\JDK\bin
Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8 -Duser.language=en -Duser.country=US
INFO: Export tool v.1.0 (rev.5525 built on 2016-07-26 03:54) [Tue Jul 26 09:54:56 MSK 2016]
=================================================================================================
If the JDK is not included with the server, or the exporter is not running from %dvserver%/bin, and no system Java exists, you will see the following warning in the log:
halina@halina-VirtualBox:/media/datavirtuality/exporter/1_11/cli-export-1.0$ ./export.sh
JDK is not defined at /media/datavirtuality/exporter/JDK/bin
JAVA_HOME is not set. Unexpected results may occur.
Set JAVA_HOME to the directory of your local JDK to avoid the message.
./export.sh: line 37: java: command not found
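The lookup order described above (bundled server JDK first, then system Java) can be sketched as a small shell function. This is a simplified approximation, not the actual contents of export.sh, and `resolve_java` is a hypothetical helper name:

```shell
#!/bin/sh
# Resolve which Java binary the exporter would use for a given server root:
# prefer the server-bundled JDK, fall back to the system Java on the PATH.
resolve_java() {
  server_root="$1"
  if [ -x "$server_root/JDK/bin/java" ]; then
    echo "$server_root/JDK/bin/java"      # server-bundled JDK
  elif command -v java >/dev/null 2>&1; then
    echo "java"                           # system Java from the PATH
  else
    echo "JDK is not defined at $server_root/JDK/bin" >&2
    return 1
  fi
}
```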
JDBC Driver Usage
Sometimes, especially when upgrading between major server versions (e.g., from 2.1 to 2.3), it may make sense to use a specific JDBC driver to ensure that the data in the exported script is complete, correct, and usable with the target server (e.g., using the CLI-exporter v2.3 with a JDBC driver v2.1). You can specify the path to the JDBC driver in the `jdbcDriverPath` variable of the export.bat/export.sh file.
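For example, in export.sh the variable could be pointed at the older driver before running the export (the driver path and file name below are hypothetical, not shipped defaults):

```shell
# In export.sh: direct the exporter to a specific JDBC driver version,
# e.g. the v2.1 driver when exporting from a v2.1 server.
jdbcDriverPath=/opt/datavirtuality/drivers/datavirtuality-jdbc-2.1.jar
```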