Azure Blob Storage Unload
This component creates files in a specified Azure Blob Storage account and populates them with data from a table or view.
By default, your data will be unloaded in parallel.
If the component requires access to a cloud provider, it will use credentials as follows:
- If using Matillion Full SaaS: The component will use the cloud credentials associated with your environment to access resources.
- If using Hybrid SaaS: By default, the component will inherit the agent's execution role (service account role). However, if there are cloud credentials associated with your environment, these will override the role.
Properties
Name
= string
A human-readable name for the component.
Stage
= drop-down
Select a staging area for the data. Staging areas can be created through Snowflake using the CREATE STAGE command. Internal stages can be set up this way to store staged data within Snowflake. Selecting [Custom] reveals additional properties for specifying a custom staging area on Azure Blob Storage. Users can add a fully qualified stage by typing the stage name, which should follow the format databaseName.schemaName.stageName.
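For reference, an internal stage can be created in Snowflake along the following lines. This is a minimal sketch; the database, schema, and stage names are placeholders rather than values taken from the component.

```sql
-- Create an internal (Snowflake-managed) stage for staged unload data.
CREATE STAGE my_db.my_schema.my_unload_stage;
```

The stage can then be referenced in this property by its fully qualified name, my_db.my_schema.my_unload_stage.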
Azure Storage Location
= file explorer
Use the file explorer to enter the container path on the Azure storage account to which files will be unloaded, or select from the list of storage accounts. This must have the format AZURE://<StorageAccount>/<path>.
File Prefix
= string
Specify a file prefix for data unloaded to the blob container. Each file is named as the prefix followed by a number denoting which node it was unloaded from. All unloads are parallel and will use the maximum number of nodes available at the time.
Authentication
= drop-down
Select the authentication method. Users can choose either:
- Credentials: Uses Azure security credentials.
- Storage Integration: Use a Snowflake storage integration. A storage integration is a Snowflake object that stores a generated identity and access management (IAM) entity for your external cloud storage, along with an optional set of allowed or blocked storage locations (Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage). More information can be found at CREATE STORAGE INTEGRATION; a sketch follows below.
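For reference, a storage integration for Azure can be created in Snowflake roughly as follows. This is a sketch only; the integration name, tenant ID, and storage location are hypothetical placeholders, and your Azure administrator must still grant Snowflake's service principal access to the container.

```sql
-- A storage integration holding the identity Snowflake uses for Azure
-- (all names and IDs below are placeholders).
CREATE STORAGE INTEGRATION my_azure_integration
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'AZURE'
  ENABLED = TRUE
  AZURE_TENANT_ID = '<your-tenant-id>'
  STORAGE_ALLOWED_LOCATIONS = ('azure://myaccount.blob.core.windows.net/mycontainer/');
```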
Warehouse
= drop-down
The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.
Database
= drop-down
The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
= drop-down
The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Target Table
= drop-down
Select an existing table. The tables available for selection depend on the chosen schema.
Format
= drop-down
Select a pre-made file format that will automatically set many of the Azure unload component properties. These formats can be created through the Create File Format component. Selecting the [Custom] file format will use the component properties to define the file format.
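For reference, a named file format can be created in Snowflake roughly as follows; the format name and option values here are illustrative examples, not component defaults.

```sql
-- A reusable file format the component can select instead of [Custom].
CREATE FILE FORMAT my_db.my_schema.my_csv_unload_format
  TYPE = CSV
  COMPRESSION = GZIP
  FIELD_DELIMITER = ','
  RECORD_DELIMITER = '\n'
  FIELD_OPTIONALLY_ENCLOSED_BY = '"';
```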
File Type
= drop-down
The unload file format. Choose from CSV, JSON, or PARQUET. Some file types may require additional formatting—this is explained in the Snowflake documentation. Component properties will change to reflect the selected file type.
Compression
= drop-down
Select the compression format. Available CSV and JSON formats include:
- AUTO
- BROTLI
- BZ2
- DEFLATE
- GZIP
- NONE (no compression)
- RAW_DEFLATE
- ZSTD
Available PARQUET formats include:
- AUTO
- LZO
- NONE (no compression)
- SNAPPY
Record Delimiter
= string
Specify a delimiter character to separate records (rows) in the file. Defaults to a newline. \n can also signify a newline, and \r can signify a carriage return.
Field Delimiter
= string
Specify a delimiter character to separate columns. The default character is a comma (,). A [TAB] character can be specified as \t.
Date Format
= string
Defaults to auto. Use this property to manually specify a date format. For supported formats, see the Snowflake documentation.
Time Format
= string
Defaults to auto. Use this property to manually specify a time format. For supported formats, see the Snowflake documentation.
Timestamp Format
= string
Defaults to auto. Use this property to manually specify a timestamp format. For supported formats, see the Snowflake documentation.
Escape
= string
Specify a single character to be used as the escape character for field values that are enclosed. Default is NONE.
Escape Unenclosed Field
= string
Specify a single character to be used as the escape character for unenclosed field values only. Accepts common escape sequences, octal values, or hex values. Also accepts a value of NONE. The default is \\ (backslash).
If a character is specified in the "Escape" field, it will override this field.
If you have set a value in the Field Optionally Enclosed property, all fields will become enclosed, making this property redundant; it will be ignored.
Field Optionally Enclosed
= string
A character that is used to enclose strings. Can be single quote (') or double quote (") or NONE (default). Note that the character chosen can be escaped by that same character.
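To illustrate how the Escape, Escape Unenclosed Field, and Field Optionally Enclosed properties map onto Snowflake file format options, here is a hedged sketch (the format name and option values are examples, not defaults):

```sql
-- Enclose strings in double quotes; with all fields enclosed, the
-- unenclosed-field escape is irrelevant and can be set to NONE.
CREATE FILE FORMAT my_db.my_schema.my_quoted_csv_format
  TYPE = CSV
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  ESCAPE = NONE
  ESCAPE_UNENCLOSED_FIELD = NONE;
```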
Nest Columns
= drop-down
When "True", the table columns will be nested into a single JSON object so that the file can be configured correctly. A table with a single variant column will not require this setting to be "True". The default setting is "False".
Null If
= string
Specify one or more strings (one string per row of the table). When unloading, Snowflake converts SQL NULL values to the first string in the list. Click + to add a string.
Trim Space
= drop-down
(JSON and PARQUET only) Removes trailing and leading whitespace from the input data.
Overwrite
= drop-down
When "True", overwrite existing data (if the target file already exists) instead of generating an error. Default setting is "False".
Single File
= drop-down
When True, the unload operation will work in serial rather than parallel. This results in a slower unload but a single, complete file.
The default setting is False.
When True, no file extension is used in the output filename (regardless of the file type, and regardless of whether the file is compressed).
When False, a filename prefix must be included in the path.
Max File Size
= integer
The maximum size (in bytes) of each file generated.
The default is 16000000 (16 MB). The maximum size is 5000000000 (5 GB).
For more information, see the Snowflake documentation.
Include Headers
= drop-down
When "True", write column names as headers at the top of the unloaded files. Default is "False".
Copying files to an Azure Premium Storage blob
When copying files to an Azure Premium Storage blob, Designer may produce the following error:
Self-suppression not permitted.
This is because, unlike standard Azure Storage, Azure Premium Storage does not support block blobs, append blobs, files, tables, or queues. Premium Storage supports only page blobs, which are sized in 512-byte increments.
A page blob is a collection of 512-byte pages optimised for random read and write operations. All writes must therefore be 512-byte aligned, so any file whose size is not a multiple of 512 bytes will fail to write.
For additional information about Azure Storage blobs, we recommend consulting the Microsoft Azure documentation.
| Snowflake | Databricks | Amazon Redshift |
|---|---|---|
| ✅ | ❌ | ❌ |