
AWS S3 Connector


The AWS S3 connector, known by the type name s3, exposes stored procedures for working with objects stored in AWS S3.

Connector-specific Connection Properties

Name        Description
keyId       S3 key id
secretKey   S3 secret key
bucketName  S3 bucket name to work with
region      S3 region (optional)
prefix      pathAndPattern prefix to be used when handling files

Example

SQL
CALL SYSADMIN.createConnection('s3alias', 's3', 'region=eu-west-1, keyId=<id>, secretKey="<secret>", bucketName=dv-redshift-upload-test');;
CALL SYSADMIN.createDatasource('s3alias', 'ufile', 'importer.useFullSchemaName=false', null);;
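Once the data source exists, the connector's stored procedures can be called through the alias. The following is a minimal sketch, assuming the listFiles procedure (referenced in the Prefix section below) is invoked via the quoted "s3alias.listFiles" name; the exact invocation form and result columns may differ in your environment:

SQL
-- List the objects in the configured bucket; NULL means no additional path filter is applied
CALL "s3alias.listFiles"(pathAndPattern => NULL);;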

IAM Role Authorization

When IAM Role authorization is configured, the keyId and secretKey connector parameters can be omitted:

SQL
CALL SYSADMIN.createConnection('s3alias', 's3', 'region=eu-west-1, bucketName=dv-redshift-upload-test');;
CALL SYSADMIN.createDatasource('s3alias', 'ufile', 'importer.useFullSchemaName=false', null);;

Example

This example shows an IAM policy on the AWS side:

JSON
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccountLevelS3Actions",
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:HeadBucket"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowListAndReadS3ActionOnMyBucket",
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::mk-s3-test/*",
                "arn:aws:s3:::mk-s3-test"
            ]
        }
    ]
}

Multi-part Upload

The AWS S3 connector can be configured to perform multi-part uploads using the following properties:

Name              Description                                            Default value
multipartUpload   TRUE for performing multi-part upload (optional)       FALSE
numberOfThreads   Number of threads for multi-part upload (optional)     5
partSize          Part size for multi-part upload in bytes (optional)    5 MB

The partSize can be specified between 5 MB and 5 TB. If the specified value is outside this range, it is automatically adjusted to 5 MB or 5 TB, respectively.

Example

SQL
CALL SYSADMIN.createConnection('s3alias', 's3', 'region=eu-west-1,keyId=<id>,secretKey="<secret>",bucketName=dv-redshift-upload-test,multipartUpload=true,partSize=1024,numberOfThreads=5');;
CALL SYSADMIN.createDatasource('s3alias', 'ufile', 'importer.useFullSchemaName=false', null);;

Prefix

The prefix property enables limiting the result set (see the SDK documentation):

  • The prefix property value is passed in connectionOrResourceAdapterProperties;
  • All procedures of the connector automatically take the prefix into consideration (e.g. calling listFiles(pathAndPattern => NULL) still applies the prefix from the data source settings);
  • If the data source has a prefix configured and a pathAndPattern is passed, the values are concatenated. For example, if the data source is configured with prefix a/b and listFiles(pathAndPattern => 'c/d') is called, the result is a/b/c/d (see the sketch after this list).
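A minimal sketch of the concatenation behaviour, reusing the bucket from the examples above with a hypothetical prefix value of a/b; the quoted "s3alias.listFiles" invocation form is an assumption and may differ in your environment:

SQL
CALL SYSADMIN.createConnection('s3alias', 's3', 'region=eu-west-1, keyId=<id>, secretKey="<secret>", bucketName=dv-redshift-upload-test, prefix=a/b');;
CALL SYSADMIN.createDatasource('s3alias', 'ufile', 'importer.useFullSchemaName=false', null);;

-- The configured prefix a/b is concatenated with the pathAndPattern argument,
-- so this call effectively lists objects under a/b/c/d
CALL "s3alias.listFiles"(pathAndPattern => 'c/d');;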