
Azure Blob Storage

The Azure Blob Storage component lets users load data into an existing table from objects stored in Azure Blob Storage.

The Azure Blob Storage component requires working Azure cloud credentials with Read access to the Azure Blob Storage resources. The app used to authenticate your Azure cloud credentials must also have the Storage Account Contributor role added to it.

If the component requires access to a cloud provider, it will use credentials as follows:

  • If using Matillion Full SaaS: The component will use the cloud credentials associated with your environment to access resources.
  • If using Hybrid SaaS: By default, the component will inherit the agent's execution role (service account role). However, if cloud credentials are associated with your environment, these will override the role.

Note

If you're using a Matillion Full SaaS solution, you may need to allow the IP address ranges from which Matillion Full SaaS agents will call out to their source systems or to cloud data platforms.


Properties

Snowflake

Name = string

A human-readable name for the component.


Stage = drop-down

Select a staging area for the data. Staging areas can be created in Snowflake using the CREATE STAGE command; internal stages set up this way store staged data within Snowflake. Selecting [Custom] reveals additional properties for specifying a custom staging area on Azure Blob Storage. You can also enter a fully qualified stage name directly. This should follow the format databaseName.schemaName.stageName.
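As a minimal sketch (the database, schema, and stage names are hypothetical), an internal stage could be created in Snowflake like so:

    -- Hypothetical internal stage; staged data is stored within Snowflake
    CREATE STAGE mydb.myschema.my_internal_stage
      COMMENT = 'Staging area for Azure Blob Storage loads';

The fully qualified name mydb.myschema.my_internal_stage could then be typed into this property.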


Authentication = drop-down

Select the authentication method. Users can choose either:

  • Credentials: Use the Azure cloud credentials that have been associated with the environment. For more information, read Acquiring Azure credentials. The app used to authenticate your Azure cloud credentials must also have the Storage Account Contributor role added to it. If no credentials have been configured, an error will occur.
  • Storage Integration: Use a Snowflake storage integration to authenticate data staging. A storage integration is a Snowflake object that stores a generated identity and access management (IAM) entity for your external cloud storage, along with an optional set of allowed or blocked storage locations. More information can be found at CREATE STORAGE INTEGRATION.

Storage Integration = drop-down

Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
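For reference, a storage integration for Azure Blob Storage can be created in Snowflake along these lines (a sketch; the integration name, tenant ID, and storage location are placeholders):

    -- Hypothetical storage integration for Azure Blob Storage
    CREATE STORAGE INTEGRATION my_azure_integration
      TYPE = EXTERNAL_STAGE
      STORAGE_PROVIDER = 'AZURE'
      ENABLED = TRUE
      AZURE_TENANT_ID = '<tenant-id>'
      STORAGE_ALLOWED_LOCATIONS = ('azure://mystorageaccount.blob.core.windows.net/mycontainer/');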


Azure Storage Location = file explorer

To retrieve the intended files, use the file explorer to enter the path to the container within your Azure storage account, or select from the list of storage accounts.

This must have the format AZURE://<StorageAccount>/<path>.


Pattern = string

A string that will partially match all file paths and names that are to be included in the load. Defaults to .*, which matches all files within the Azure Storage Location.
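For example, a hypothetical pattern restricting the load to .csv files under a sales/ prefix might be:

    .*sales/.*\.csv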


Warehouse = drop-down

The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.


Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Target Table = string

Select an existing table to load data into. The tables available for selection depend on the chosen schema.


Load Columns = dual listbox

Choose the columns to load. If you leave this parameter empty, all columns will be loaded.


Format = drop-down

Select a pre-made file format that will automatically set many of the Azure Blob Storage component properties. These formats can be created through the Create File Format component.

Users can add a fully qualified format by typing the format name. This should follow the format databaseName.schemaName.formatName.


File Type = drop-down

Select the type of data to load. Available data types are: AVRO, CSV, JSON, ORC, PARQUET, and XML. Some file types may require additional formatting—this is explained in the Snowflake documentation. Component properties will change to reflect the selected file type.


Compression = drop-down

Select the compression method if you wish to compress your data. If you do not wish to compress at all, select NONE. The default setting is AUTO.


Record Delimiter = string

CSV only. Input a delimiter for records. This can be one or more single-byte or multibyte characters that separate records in an input file.

Accepted values include: leaving the field empty; a newline character \n or its hex equivalent 0x0a; a carriage return \r or its hex equivalent 0x0d. Also accepts a value of NONE.

If you set Skip Header to a value such as 1, you should use a record delimiter that includes a line feed or carriage return, such as \n or \r. Otherwise, your entire file will be interpreted as the header row, and no data will be loaded.

The specified delimiter must be a valid UTF-8 character and not a random sequence of bytes.

Do not specify characters used for other file type options such as Escape or Escape Unenclosed Field.

The default (if the field is left blank) is a newline character.


Field Delimiter = string

CSV only. Input a delimiter for fields. This can be one or more single-byte or multibyte characters that separate fields in an input file.

Accepted characters include common escape sequences, octal values (prefixed by \\), or hex values (prefixed by 0x). Also accepts a value of NONE.

This delimiter is limited to a maximum of 20 characters.

While multi-character delimiters are supported, the field delimiter cannot be a substring of the record delimiter, and vice versa. For example, if the field delimiter is "aa", the record delimiter cannot be "aabb".

The specified delimiter must be a valid UTF-8 character and not a random sequence of bytes.

Do not specify characters used for other file type options such as Escape or Escape Unenclosed Field.

The default setting is a comma (,).


Skip Header = integer

CSV only. Specify the number of rows to skip. The default is 0.

If Skip Header is used, the value of the record delimiter will not be used to determine where the header line is. Instead, the specified number of CRLF (carriage return, line feed) sequences will be skipped. For example, if Skip Header = 1, the load skips to the first CRLF it finds. If you have set the Field Delimiter property to be a single character without a CRLF, the load skips to the end of the file (treating the entire file as a header).
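To show how the CSV properties above fit together, here is a rough sketch of the kind of COPY statement Snowflake would run (the table, stage, and option values are hypothetical, not the component's exact output):

    COPY INTO mydb.myschema.target_table
    FROM @mydb.myschema.my_stage
    PATTERN = '.*\.csv'
    FILE_FORMAT = (
      TYPE = CSV
      RECORD_DELIMITER = '\n'   -- Record Delimiter
      FIELD_DELIMITER = ','     -- Field Delimiter
      SKIP_HEADER = 1           -- Skip Header
    );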


Skip Blank Lines = boolean

CSV only. When True, ignores blank lines that only contain a line feed in a data file and does not try to load them. Default setting is False.


Date Format = string

CSV only. Define the format of date values in the data files to be loaded. If a value is not specified or is AUTO, the value for the DATE_INPUT_FORMAT session parameter is used. The default setting is AUTO.


Time Format = string

CSV only. Define the format of time values in the data files to be loaded. If a value is not specified or is AUTO, the value for the TIME_INPUT_FORMAT session parameter is used. The default setting is AUTO.


Timestamp Format = string

CSV only. Define the format of timestamp values in the data files to be loaded. If a value is not specified or is AUTO, the value for the TIMESTAMP_INPUT_FORMAT session parameter is used. The default setting is AUTO.


Escape = string

CSV only. Specify a single character to be used as the escape character for field values that are enclosed. Default is NONE.


Escape Unenclosed Field = string

CSV only. Specify a single character to be used as the escape character for unenclosed field values only. Default is \\. If you have set a value in the Field Optionally Enclosed property, all fields will become enclosed, rendering this property redundant; in that case, it is ignored.


Trim Space = boolean

CSV only. When True, removes whitespace from fields. Default setting is False.


Field Optionally Enclosed = string

CSV only. Specify a character used to enclose strings. The value can be NONE, single quote character ', or double quote character ". To use the single quote character, use the octal or hex representation 0x27 or the double single-quoted escape ''. Default is NONE.

When a field contains one of these characters, escape it using the same character. For example, to escape a string such as 1 "2" 3 where the enclosing character is the double quote, double the quotation marks: 1 ""2"" 3.


Null If = editor

Specify one or more strings (one string per row of the table) to convert to NULL values. When one of these strings is encountered in the file, it is replaced with an SQL NULL value for that field in the loaded table. Click + to add a string.
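These strings correspond to Snowflake's NULL_IF file format option. A hedged sketch, with hypothetical names and values:

    COPY INTO mydb.myschema.target_table
    FROM @mydb.myschema.my_stage
    -- Treat the strings 'NULL', 'null', and the empty string as SQL NULL
    FILE_FORMAT = (TYPE = CSV NULL_IF = ('NULL', 'null', ''));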


Error On Column Count Mismatch = boolean

CSV only. When True, generates an error if the number of delimited columns in an input file does not match the number of columns in the corresponding table. When False (default), an error is not generated and the load continues. If the file is successfully loaded in this case:

  • Where the input file contains records with more fields than columns in the table, the matching fields are loaded in order of occurrence in the file, and the remaining fields are not loaded.
  • Where the input file contains records with fewer fields than columns in the table, the non-matching columns in the table are loaded with NULL values.

Empty Field As Null = boolean

CSV only. When True, inserts NULL values for empty fields in an input file. This is the default setting.


Replace Invalid Characters = boolean

When True, Snowflake replaces invalid UTF-8 characters with the Unicode replacement character. When False (default), the load operation produces an error when invalid UTF-8 character encoding is detected.


Encoding Type = drop-down

CSV only. Select the string that specifies the character set of the source data when loading data into a table. Refer to the Snowflake documentation for more information.


Enable Octal = boolean

JSON only. When True, enables the parsing of octal values. Default setting is False.


Allow Duplicates = boolean

JSON only. When True, allows duplicate object field names. Default setting is False.


Strip Outer Array = boolean

JSON only. When True, instructs the JSON parser to remove outer brackets. Default setting is False.


Strip Null Values = boolean

JSON only. When True, instructs the JSON parser to remove any object fields or array elements containing NULL values. Default setting is False.


Ignore UTF8 Errors = boolean

JSON and XML only. When True, replaces any invalid UTF-8 sequences with the Unicode replacement character. When False (default), invalid UTF-8 sequences will produce an error in the pipeline run.


Preserve Space = boolean

XML only. When True, the XML parser preserves leading and trailing spaces in element content. Default setting is False.


Strip Outer Element = boolean

XML only. When True, the XML parser strips out any outer XML elements, exposing second-level elements as separate documents. Default setting is False.


Disable Snowflake Data = boolean

XML only. When True, the XML parser will not recognise Snowflake semi-structured data tags. Default setting is False.


Disable Auto Convert = boolean

XML only. When True, the XML parser will disable automatic conversion of numeric and Boolean values from text to native representations. Default setting is False.


On Error = drop-down

Decide how to proceed upon an error.

  • Abort Statement: Aborts the load if any error is encountered. This is the default setting.
  • Continue: Continue loading the file.
  • Skip File: Skip file if any errors are encountered in the file.
  • Skip File When n Errors: Skip file when the number of errors in the file is equal to or greater than the specified number in the next property, N.
  • Skip File When n% Errors: Skip file when the percentage of errors in the file exceeds the percentage specified in the next property, N (see the sketch below).
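These options correspond to Snowflake's ON_ERROR copy option. As a hedged sketch with hypothetical names, Skip File When n Errors with N = 10 maps to:

    COPY INTO mydb.myschema.target_table
    FROM @mydb.myschema.my_stage
    ON_ERROR = SKIP_FILE_10;  -- skip a file once it contains 10 or more errors

and Skip File When n% Errors with N = 10 maps to ON_ERROR = 'SKIP_FILE_10%' (the percentage form is quoted).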

N = integer

Specify the number of errors or the percentage of errors required to skip the file. This parameter only accepts integers; do not include a % symbol when specifying a percentage.


Size Limit B = integer

Specify the maximum size, in bytes, of data to be loaded for a given COPY statement. If the maximum is exceeded, the COPY operation discontinues loading files. For more information, refer to the Snowflake documentation.


Purge Files = boolean

When True, purges data files after the data is successfully loaded. Default setting is False.


Match By Column Name = drop-down

Specify whether to load semi-structured data into columns in the target table that match corresponding columns represented in the data.

  • Case Insensitive: Load semi-structured data into columns in the target table whose names match corresponding columns represented in the data, matching column names case-insensitively.
  • Case Sensitive: Load semi-structured data into columns in the target table whose names match corresponding columns represented in the data, matching column names case-sensitively.
  • None: The COPY operation loads the semi-structured data into a variant column or, if a query is included in the COPY statement, transforms the data.
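As a hedged sketch of the underlying Snowflake copy option (table and stage names are hypothetical):

    COPY INTO mydb.myschema.target_table
    FROM @mydb.myschema.my_stage
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;  -- Case Insensitive setting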

Truncate Columns = boolean

When True, strings are automatically truncated to the target column length. When False (default), the COPY statement produces an error if a loaded string exceeds the target column length.


Force Load = boolean

When True, loads all files, regardless of whether they have been loaded previously and haven't changed since they were loaded. Default setting is False.

When set to True, this option reloads files and can lead to duplicated data in a table.


Metadata Fields = dual listbox

Snowflake metadata columns available to include in the load. For more information, read Querying Metadata for Staged Files. This property is only available when an external stage is selected.
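The fields offered here are Snowflake metadata columns such as METADATA$FILENAME and METADATA$FILE_ROW_NUMBER. For illustration, they can also be queried directly against a stage (a sketch with a hypothetical stage name):

    -- Preview file metadata alongside the first column of staged data
    SELECT METADATA$FILENAME, METADATA$FILE_ROW_NUMBER, t.$1
    FROM @mydb.myschema.my_stage t;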

Databricks

Name = string

A human-readable name for the component.


Storage Account = drop-down

Select a storage account.

Note

To ensure this property populates correctly, make sure Azure cloud credentials are added to your Databricks environment. For more information about adding a new environment, and setting up cloud credentials, read Add an environment and Create cloud provider credentials, respectively.


Blob Container = drop-down

Select a blob container.


Pattern = string

A string that will partially match all file paths and names that are to be included in the load. Defaults to .* indicating all files within the Blob Container.


Catalog = drop-down

Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Data Productivity Cloud environment setup. Selecting a catalog will determine which databases are available in the next parameter.


Schema (Database) = drop-down

The Databricks schema. The special value, [Environment Default], will use the schema defined in the environment. Read Create and manage schemas to learn more.


Table = drop-down

Select the table in your chosen Schema (Database).


Load Columns = dual listbox

Choose the columns to load. If you leave this parameter empty, all columns will be loaded.


File Type = drop-down

Select the type of data to load. Available data types are: AVRO, CSV, JSON, and PARQUET.

Component properties will change to reflect the selected file type. The properties applicable to each file type are listed below, grouped by type.

CSV

Header = boolean

Select Yes to use the first line of the file as column names. If not specified, the default is No.


Field Delimiter = string

Enter the delimiter character used to separate fields in the CSV file. This can be one or more single-byte or multibyte characters that separate fields in an input file. If none is specified, the default is a comma.

Accepted characters include common escape sequences, octal values (prefixed by \\), or hex values (prefixed by 0x). This delimiter is limited to a maximum of 20 characters. The specified delimiter must be a valid UTF-8 character and not a random sequence of bytes.


Date Format = string

Describe the format of date values in the data files to be loaded. If none is specified, the default is yyyy-MM-dd.


Timestamp Format = string

Describe the format of timestamp values in the data files to be loaded. If none is specified, the default is yyyy-MM-dd'T'HH:mm:ss.[SSS][XXX].


Encoding Type = string

The encoding type to use when decoding the input files. If none is specified, the default is UTF-8.


Ignore Leading Whitespace = boolean

When Yes, removes whitespace from the beginning of fields. If not specified, the default is No.


Ignore Trailing Whitespace = boolean

When Yes, removes whitespace from the end of fields. If not specified, the default is No.


Infer Schema = boolean

If Yes, will attempt to determine the input schema automatically from the data contained in the CSV file. If not specified, the default is No.


Multi Line = boolean

If Yes, will parse records which may span multiple lines. If not specified, the default is No.


Null Value = string

Sets the string representation of a null value. If not specified, the default value is an empty string.


Empty Value = string

Sets the string representation of an empty value. If not specified, the default value is an empty string.


Recursive File Lookup = boolean

If Yes, partition inference is disabled. If not specified, the default is No. To control which files are loaded, use the Pattern property instead.


Force Load = boolean

If Yes, files are loaded regardless of whether they've been loaded before. If not specified, the default is No.
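For context, these CSV properties broadly correspond to the format options of a Databricks COPY INTO statement. A minimal hedged sketch (the catalog, container, and path are hypothetical, not the component's exact output):

    COPY INTO my_catalog.my_schema.target_table
    FROM 'wasbs://mycontainer@mystorageaccount.blob.core.windows.net/path/'
    FILEFORMAT = CSV
    PATTERN = '*.csv'  -- COPY INTO uses glob patterns
    FORMAT_OPTIONS ('header' = 'true', 'sep' = ',', 'inferSchema' = 'true')
    COPY_OPTIONS ('force' = 'true');  -- Force Load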

JSON

Date Format = string

Describe the format of date values in the data files to be loaded. If none is specified, the default is yyyy-MM-dd.


Timestamp Format = string

Describe the format of timestamp values in the data files to be loaded. If none is specified, the default is yyyy-MM-dd'T'HH:mm:ss.[SSS][XXX].


Encoding Type = string

The encoding type to use when decoding the input files. If none is specified, the default is UTF-8.


Multi Line = boolean

If Yes, will parse records which may span multiple lines. If not specified, the default is No.


Primitives As String = boolean

If Yes, primitive data types are interpreted as strings in JSON files. If not specified, the default is No.


Prefers Decimals = boolean

If Yes, all floating-point values will be treated as a decimal type in JSON files. If not specified, the default is No.


Allow Comments = boolean

If Yes, will allow Java/C++-style comments in JSON records. If not specified, the default is No.


Allow Unquoted Field Names = boolean

If Yes, will allow unquoted JSON field names. If not specified, the default is No.


Allow Single Quotes = boolean

If Yes, will allow single quotes in addition to double quotes in JSON records. If not specified, the default is No.


Allow Numeric Leading Zeros = boolean

If Yes, will allow leading zeros in numbers in JSON records. If not specified, the default is No.


Allow Backslash Escaping Any Character = boolean

If Yes, will allow the quoting of all characters using the backslash quoting mechanism in JSON records. If not specified, the default is No.


Allow Unquoted Control Chars = boolean

If Yes, will allow JSON strings to include unquoted control characters in JSON records. If not specified, the default is No.


Drop Field If All Null = boolean

If Yes, will ignore columns that contain only null values in JSON records. If not specified, the default is No.


Recursive File Lookup = boolean

If Yes, partition inference is disabled. If not specified, the default is No. To control which files are loaded, use the Pattern property instead.


Force Load = boolean

If Yes, files are loaded regardless of whether they've been loaded before. If not specified, the default is No.
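Similarly, a hedged sketch of a JSON load with some of these options set (names and paths hypothetical):

    COPY INTO my_catalog.my_schema.target_table
    FROM 'wasbs://mycontainer@mystorageaccount.blob.core.windows.net/path/'
    FILEFORMAT = JSON
    -- Multi Line, Allow Comments, and Primitives As String respectively
    FORMAT_OPTIONS ('multiLine' = 'true', 'allowComments' = 'true', 'primitivesAsString' = 'true');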

PARQUET

Merge Schema = boolean

If Yes, will merge schema from all PARQUET and AVRO part-files. If not specified, the default is No.


Recursive File Lookup = boolean

If Yes, partition inference is disabled. If not specified, the default is No. To control which files are loaded, use the Pattern property instead.


Force Load = boolean

If Yes, files are loaded regardless of whether they've been loaded before. If not specified, the default is No.
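A hedged sketch of the corresponding Databricks format option (names and paths hypothetical):

    COPY INTO my_catalog.my_schema.target_table
    FROM 'wasbs://mycontainer@mystorageaccount.blob.core.windows.net/path/'
    FILEFORMAT = PARQUET
    FORMAT_OPTIONS ('mergeSchema' = 'true');  -- Merge Schema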

AVRO

Merge Schema = boolean

If Yes, will merge schema from all PARQUET and AVRO part-files. If not specified, the default is No.


Recursive File Lookup = boolean

If Yes, partition inference is disabled. If not specified, the default is No. To control which files are loaded, use the Pattern property instead.


Force Load = boolean

If Yes, files are loaded regardless of whether they've been loaded before. If not specified, the default is No.

