S3 Unload
S3 Unload is an orchestration component that creates files in a specified S3 bucket and populates them with data from a table or view.
To access an S3 bucket from a different AWS account, read Background: Cross-account permissions and using IAM roles.
If the component requires access to a cloud provider, it will use credentials as follows:
- If using Matillion Full SaaS: The component will use the cloud credentials associated with your environment to access resources.
- If using Hybrid SaaS: By default, the component will inherit the agent's execution role (service account role). However, if there are cloud credentials associated with your environment, these will override the role.
Note
If you're using a Matillion Full SaaS solution, you may need to allow these IP address ranges from which Matillion Full SaaS agents will call out to their source systems or to cloud data platforms.
Properties
Snowflake Properties
Name
= string
A human-readable name for the component.
Stage
= drop-down
Select a staging area for the data. Staging areas can be created in Snowflake using the CREATE STAGE command. Internal stages can be set up this way to store staged data within Snowflake. Selecting [Custom] reveals additional properties for specifying a custom staging area on S3. You can also add a fully qualified stage by typing the stage name, which should follow the format databaseName.schemaName.stageName.
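As a rough illustration, the sketch below creates an internal stage in Snowflake and shows the fully qualified name you would type into the Stage property. The database, schema, and stage names are hypothetical.

```sql
-- Hypothetical names: analytics_db, staging, unload_stage.
-- Create an internal stage to hold unloaded files inside Snowflake.
CREATE STAGE analytics_db.staging.unload_stage
  FILE_FORMAT = (TYPE = CSV);

-- In the Stage property, reference it as: analytics_db.staging.unload_stage
```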
Authentication
= drop-down
Select the authentication method. Users can choose either:
- Credentials: Uses AWS security credentials.
- Storage Integration: Use a Snowflake storage integration. A storage integration is a Snowflake object that stores a generated identity and access management (IAM) entity for your external cloud storage, along with an optional set of permitted or blocked storage locations (Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage). More information can be found at CREATE STORAGE INTEGRATION.
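For context, a storage integration for S3 might be created in Snowflake along the lines of the sketch below. The integration name, IAM role ARN, and bucket are hypothetical; refer to CREATE STORAGE INTEGRATION for the authoritative syntax.

```sql
-- Hypothetical integration name, IAM role ARN, and bucket.
CREATE STORAGE INTEGRATION s3_unload_integration
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake-s3-unload'
  STORAGE_ALLOWED_LOCATIONS = ('s3://example-bucket/unload/');
```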
Storage Integration
= drop-down
Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
S3 Object Prefix
= file explorer
Use the file explorer to enter the S3 bucket path where the unloaded files will be written, or select from the list of S3 buckets.
This must have the format S3://<bucket>/<path>.
File Prefix
= string
The filename prefix given to unloaded data files in the S3 bucket. Each file is named as the prefix followed by a number denoting the node it was unloaded from. All unloads are parallel and use the maximum number of nodes available at the time.
Encryption
= drop-down
Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.
- None: No encryption.
- Client Side Encryption: Encrypt the data according to a client-side master key. Read Protecting data using client-side encryption to learn more.
- SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more.
- SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
KMS Key ID
= drop-down
The ID of the KMS encryption key you have chosen to use in the Encryption property.
Only available when Encryption is set to SSE KMS.
Master Key
= drop-down
The secret definition denoting your master key for client-side encryption. The master key must be saved as a secret definition before using this component.
Only available when Encryption is set to Client Side Encryption.
Warehouse
= drop-down
The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.
Database
= drop-down
The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
= drop-down
The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Target Table
= drop-down
Select an existing table. The tables available for selection depend on the chosen schema.
Format
= drop-down
Select a pre-made file format that will automatically set many of the S3 Unload component properties. These formats can be created through the Create File Format component.
File Type
= drop-down
Choose one of the following file types: CSV, JSON, or Parquet.
Some file types may require additional formatting; this is explained in the Snowflake documentation. Component properties will change to reflect the selected file type.
Compression
= drop-down
Select the compression method if you wish to compress your data. If you do not wish to compress at all, select NONE. The default setting is AUTO.
Nest Columns
= drop-down
JSON only. When True, the table columns are nested into a single JSON object so that the file can be configured correctly. A table with a single variant column does not require this setting to be True. Default is False.
Record Delimiter
= string (optional)
CSV only. Input a delimiter for records. This can be one or more single-byte or multibyte characters that separate records in an input file.
Accepted values include: leaving the field empty; a newline character (\n) or its hex equivalent (0x0a); a carriage return (\r) or its hex equivalent (0x0d). Also accepts a value of NONE.
If you set Skip Header to a value such as 1, you should use a record delimiter that includes a line feed or carriage return, such as \n or \r. Otherwise, your entire file will be interpreted as the header row, and no data will be loaded.
The specified delimiter must be a valid UTF-8 character and not a random sequence of bytes.
Do not specify characters used for other file type options such as Escape or Escape Unenclosed Field.
The default (if the field is left blank) is a newline character.
Field Delimiter
= string (optional)
CSV only. Input a delimiter for fields. This can be one or more single-byte or multibyte characters that separate fields in an input file.
Accepted characters include common escape sequences, octal values (prefixed by \), or hex values (prefixed by 0x). Also accepts a value of NONE.
This delimiter is limited to a maximum of 20 characters.
While multi-character delimiters are supported, the field delimiter cannot be a substring of the record delimiter, and vice versa. For example, if the field delimiter is "aa", the record delimiter cannot be "aabb".
The specified delimiter must be a valid UTF-8 character and not a random sequence of bytes.
Do not specify characters used for other file type options such as Escape or Escape Unenclosed Field.
The default setting is a comma (,).
Date Format
= string (optional)
CSV only. Define the format of date values in the data files to be loaded. If a value is not specified or is AUTO, the value for the DATE_INPUT_FORMAT session parameter is used. The default setting is AUTO.
Time Format
= string (optional)
CSV only. Define the format of time values in the data files to be loaded. If a value is not specified or is AUTO, the value for the TIME_INPUT_FORMAT session parameter is used. The default setting is AUTO.
Timestamp Format
= string (optional)
CSV only. Define the format of timestamp values in the data files to be loaded. If a value is not specified or is AUTO, the value for the TIMESTAMP_INPUT_FORMAT session parameter is used.
Escape
= string (optional)
CSV only. Specify a single character to be used as the escape character for field values that are enclosed. Default is NONE.
Escape Unenclosed Field
= string (optional)
CSV only. Specify a single character to be used as the escape character for unenclosed field values only. Default is a backslash (\). If you have set a value in the Field Optionally Enclosed property, all fields will become enclosed, rendering the Escape Unenclosed Field property redundant; in that case it is ignored.
Field Optionally Enclosed
= string (optional)
CSV only. Specify a character used to enclose strings. The value can be NONE, a single quote character ('), or a double quote character ("). To use the single quote character, use the octal or hex representation (0x27) or the double single-quoted escape (''). Default is NONE.
When a field contains one of these characters, escape the field using the same character. For example, to escape a string such as 1 "2" 3, use double quotation marks: 1 ""2"" 3.
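As a hedged illustration of how the CSV properties above map onto Snowflake file format options, a named file format (which could also be supplied through the Format property) might look like the following sketch. The format name and option values are hypothetical.

```sql
-- Hypothetical file format illustrating the Record Delimiter, Field Delimiter,
-- Escape, Escape Unenclosed Field, Field Optionally Enclosed, and Null If
-- properties described above.
CREATE FILE FORMAT my_unload_csv_format
  TYPE = CSV
  FIELD_DELIMITER = ','
  RECORD_DELIMITER = '\n'
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  ESCAPE = NONE
  ESCAPE_UNENCLOSED_FIELD = '\\'
  NULL_IF = ('NULL', '');
```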
Null If
= editor (optional)
Specify a string to convert to SQL NULL values.
Overwrite
= drop-down
If the target file already exists, overwrite data instead of generating an error.
Single File
= boolean
When True, the unload will work in serial rather than parallel. This results in a slower unload but a single, complete file. The default setting is False.
When True, no file extension is used in the output filename (regardless of the file type, and regardless of whether or not the file is compressed). When False, a filename prefix must be included in the path.
Max File Size
= integer (optional)
The maximum size (in bytes) of each file generated, per thread. Default is 16000000 bytes (16 MB) and Snowflake has a 6.2 GB file limit for copy-into-location operations. Files that exceed the stated maximum will be split into multiple size-abiding parts.
Include Headers
= boolean
When true, write column names as headers at the top of the unloaded files.
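To make the Snowflake properties above more concrete, the following is a rough sketch of a COPY INTO <location> statement that exercises the same options (file prefix, file type, compression, headers, overwrite, single file, and maximum file size). It is not necessarily the exact SQL the component generates; the stage, table, and values shown are hypothetical.

```sql
-- Hypothetical stage, table, and option values; a COPY INTO <location>
-- unload roughly corresponding to the component properties above.
COPY INTO @analytics_db.staging.unload_stage/daily_orders_
  FROM analytics_db.public.orders
  FILE_FORMAT = (TYPE = CSV COMPRESSION = AUTO FIELD_DELIMITER = ',' NULL_IF = (''))
  HEADER = TRUE
  OVERWRITE = TRUE
  SINGLE = FALSE
  MAX_FILE_SIZE = 16000000;
```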
Amazon Redshift Properties
Name
= string
A human-readable name for the component.
Schema
= drop-down
Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.
Table Name
= string
The table or view to unload to S3.
S3 URL Location
= string
The URL of the S3 bucket to unload the data into.
Note
This component can unload to any accessible bucket, regardless of region. When a user enters a forward slash character (/) after a folder name, validation of the file path is triggered.
S3 Object Prefix
= string
Create data files in S3 beginning with this prefix. The output filenames have the format:
<prefix><slice-number>_part_<file-number>
where prefix is the string you've entered here, slice-number is the number of the slice in your cluster, and file-number denotes the file number range. For example, if a slice has 50 MB of data and you've chosen a maximum file size of 10 MB, the file numbers will range from 001 to 005.
Encryption
= drop-down (optional)
Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.
- SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
- SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more.
KMS Key ID
= drop-down (optional)
The ID of the KMS encryption key you have chosen to use in the Encryption property.
Manifest
= drop-down (optional)
Whether or not to generate a manifest file detailing the files that were added.
Note
Selecting the option Yes (Verbose) will create a manifest file that explicitly lists details for the data files created by the Unload process. For more information, read the Redshift documentation.
Data File Type
= drop-down
Choose one of the following file types: CSV, Delimited, Fixed Width, or Parquet. Component properties will change to reflect the selected file type.
Delimited
= string
The delimiter that separates columns. The default is a comma. A [TAB] character can be specified as "\t".
This property is available when Data File Type is set to Delimited.
Fixed Width Spec
= string
Unloads the data to a file in which each column has a fixed width, rather than being separated by a delimiter. Each column is described by a name and a length, separated by a colon, and each described column is separated from the next by a comma.
For example, suppose we have four columns: name, id, age, and state, with respective lengths 12, 8, 2, and 2. The fixed-width specification to represent this data would then be:
name:12,id:8,age:2,state:2
Note that the columns can have any plaintext name. For more information on fixed-width specifications, read the AWS documentation. A sketch of an UNLOAD statement using this specification is shown below.
This property is available when Data File Type is set to Fixed Width.
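As a hedged sketch (hypothetical table, bucket, and IAM role), a Redshift UNLOAD using the fixed-width specification from the example above might look like this:

```sql
-- Hypothetical table, bucket, and IAM role ARN.
UNLOAD ('SELECT name, id, age, state FROM customers')
TO 's3://example-bucket/customers_'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-unload'
FIXEDWIDTH 'name:12,id:8,age:2,state:2';
```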
Compress Data
= drop-down
Whether or not the resultant files are to be compressed.
This property is available when Data File Type is set to CSV, Delimited, or Fixed Width.
Compression Type
= drop-down
If Compress Data is set to Yes, select either GZIP or BZIP2 as the compression method.
NULL As
= string (optional)
This option replaces the specified string with null in the output table. Use this if your data has a particular representation of missing data.
This property is available when Data File Type is set to CSV, Delimited, or Fixed Width.
Escape
= drop-down
Whether or not to insert backslashes to escape special characters. This is often a good idea if you intend to reload the data back into a table later, since the COPY command also supports this option.
This property is available when Data File Type is set to Delimited.
Allow Overwrites
= drop-down (optional)
If the target file already exists, overwrite data instead of generating an error.
Parallel
= drop-down (optional)
If set to Yes, the unload will work in parallel, creating multiple files (one for each slice of the cluster). Disabling parallel will result in a slower unload but a single, complete file.
Note
Files are only split if parallel is set to No, and they are split based on what the user specifies in the Max File Size parameter.
Add Quotes
= drop-down
If set to Yes, quotation marks are added around each unloaded data field.
This property is available when Data File Type is set to Delimited.
IAM Role ARN
= drop-down (optional)
Select an IAM role Amazon Resource Name (ARN) that is already attached to your Redshift cluster, and that has the necessary permissions to access S3.
This setting is optional, since without this style of setup, the credentials of the environment (instance credentials or manually entered access keys) will be used.
Read the Redshift documentation for more information about using a role ARN with Redshift.
Max File Size
= string (optional)
The maximum size (in MB) of each file generated, per thread. Default is 16 MB, and AWS has a 6.2 GB file limit for Unload operations. Files that exceed the stated maximum will be split into multiple size-abiding parts.
Include Header
= drop-down (optional)
If set to Yes, the Data Productivity Cloud will write column names as headers at the top of unloaded files.
This property is available when Data File Type is set to CSV or Delimited.
S3 Bucket Region
= drop-down (optional)
The Amazon S3 region hosting the S3 bucket. This is not normally required and can be left as "None" (default) if the bucket is in the same region as your Redshift cluster.
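Bringing the Amazon Redshift properties together, the following is a rough sketch of an UNLOAD statement that exercises many of the options above (delimiter, quoting, null handling, compression, headers, overwrite, parallelism, maximum file size, and manifest). It is not necessarily the exact SQL the component generates; the table, bucket, and IAM role are hypothetical.

```sql
-- Hypothetical table, bucket, and IAM role ARN; options map to the
-- component properties described above.
UNLOAD ('SELECT * FROM public.orders')
TO 's3://example-bucket/orders_'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-unload'
DELIMITER AS '|'
ADDQUOTES
NULL AS 'N/A'
GZIP
HEADER
ALLOWOVERWRITE
PARALLEL OFF
MAXFILESIZE 16 MB
MANIFEST VERBOSE;
```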
| Snowflake | Databricks | Amazon Redshift |
| --- | --- | --- |
| ✅ | ❌ | ✅ |