S3 Load
The S3 Load component lets users load data into an existing table from objects stored in Amazon Simple Storage Service (Amazon S3).
The S3 Load component requires working AWS Credentials, with Read access to the bucket containing the source data files. This is achieved by attaching an IAM role to the instance when launching Matillion ETL; it can also be done manually by editing a Matillion ETL environment.
Furthermore, Matillion ETL requires use of a policy that contains the action s3:ListBucket, such as the policy provided in Manage Credentials.
See the sections at the bottom of this article for additional notes on usage.
Properties
Name
= string
A human-readable name for the component.
Stage
= drop-down
Select a staging area for the data. Staging areas can be created in Snowflake using the CREATE STAGE command.
Internal stages can be set up this way to store staged data within Snowflake.
Selecting [Custom] reveals additional properties for specifying a custom staging area on S3.
Users can add a fully qualified stage by typing the stage name. This should follow the format databaseName.schemaName.stageName.
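For reference, an internal stage like the one described above can be created in Snowflake with a statement along these lines (a minimal sketch; the database, schema, and stage names are placeholders):

```sql
-- Placeholder names; create the stage in the schema where your target tables live.
CREATE STAGE analytics_db.staging.s3_load_stage
  FILE_FORMAT = (TYPE = CSV);
```

Once created, the stage can be selected or typed here as analytics_db.staging.s3_load_stage.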
Authentication
= drop-down
Select the authentication method. Users can choose either:
- Credentials: Uses AWS security credentials. Read Manage Credentials.
- Storage Integration: Uses a Snowflake storage integration. A storage integration is a Snowflake object that stores a generated identity and access management (IAM) entity for your external cloud storage, along with an optional set of permitted or blocked storage locations (Amazon S3, Google Cloud Storage, or Microsoft Azure). More information can be found at CREATE STORAGE INTEGRATION.
Credentials
= drop-down
Select your AWS credentials. The special value, [Environment Default], uses the set of credentials specified in your Matillion ETL environment. This is the default value. Click Manage to edit or create new credentials in Manage Credentials.
Storage Integration
= drop-down
Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them in Matillion ETL. To learn more about setting up a storage integration, read Storage Integration Setup Guide.
Note
Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure cloud storage regardless of the cloud provider that hosts your Snowflake account.
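For illustration, a storage integration for S3 might be created in Snowflake along these lines (a hedged sketch; the integration name, role ARN, and bucket location are placeholders, and the corresponding IAM trust relationship must be configured as described in the Storage Integration Setup Guide):

```sql
-- Placeholder names and ARN; adjust for your own AWS account and bucket.
CREATE STORAGE INTEGRATION s3_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake-s3-access'
  STORAGE_ALLOWED_LOCATIONS = ('s3://mybucket/path/');
```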
S3 Object Prefix
= drop-down
Specify the URL of the S3 bucket to load files from. The URL follows the format s3://bucket/path/
Note
The "path" parameter in the URL is the subfolder and should be included.
When a user enters a forward slash character / after a folder name, a validation of the file path is triggered. This works in the same manner as the Go button.
Pattern
= string
A string that will partially match all file paths and names that are to be included in the load. Defaults to '.*' indicating all files within the S3 Object Prefix.
This property is a pattern on the complete path of the file, and is not just relative to the directory configured in the S3 Object Prefix property.
Note
The subfolder containing the object to load must be included here.
Encryption
= drop-down
Select how the files are encrypted inside the S3 Bucket. This property is available when using an existing Amazon S3 location for staging.
- Client Side Encryption: Data is encrypted with client-side encryption.
- None: No encryption.
- SSE Encryption: Encrypt the data using a key stored in AWS KMS. Read AWS Key Management Service (AWS KMS) to learn more.
- S3 Encryption: Encrypt the data using a key stored in an S3 bucket.
Warehouse
= drop-down
The Snowflake warehouse that will run the load. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.
Database
= drop-down
The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
= drop-down
The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Target Table
= drop-down
Select an existing table to load data into. The tables available for selection depend on the chosen schema.
Load Columns
= dual listbox
Select which columns to include in the load.
Format
= drop-down
Select a pre-made file format that will automatically set many of the S3 Load component properties. These formats can be created through the Create File Format component.
Users can add a fully qualified format by typing the format name. This should read as databaseName.schemaName.formatName
The list of file formats will come from the database and schema that the specified tables exist in.
Warning
In versions of Matillion ETL prior to 1.76, jobs could use file formats that existed in the environment default schema. Existing jobs that used these formats will no longer work after an upgrade to 1.76 or later. To fix this, recreate those file formats in the same schemas as the tables.
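For example, a named file format that could then be selected in this property might be created as follows (a minimal sketch; the fully qualified name and option values are placeholders, and the format is created in the same schema as the target table, per the warning above):

```sql
-- Placeholder names; the options mirror the CSV properties described below.
CREATE FILE FORMAT analytics_db.staging.csv_load_format
  TYPE = CSV
  FIELD_DELIMITER = ','
  SKIP_HEADER = 1
  NULL_IF = ('NULL', '');
```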
File Type
= drop-down
Select the type of data to load. Available data types are: AVRO, CSV, JSON, ORC, PARQUET, and XML.
Some file types may require additional formatting. This is explained in the Snowflake documentation. Component properties will change to reflect the selected file type.
Compression
= drop-down
Select the compression method if you wish to compress your data. If you do not wish to compress at all, select NONE. The default setting is AUTO.
Record Delimiter
= string
(CSV only) Input a delimiter for records. This can be one or more single-byte or multibyte characters that separate records in an input file.
Accepted values include: leaving the field empty; a newline character \n or its hex equivalent 0x0a; a carriage return \r or its hex equivalent 0x0d. Also accepts a value of NONE.
If you set Skip Header to a value such as 1, you should use a record delimiter that includes a line feed or carriage return, such as \n or \r. Otherwise, your entire file will be interpreted as the header row, and no data will be loaded.
The specified delimiter must be a valid UTF-8 character and not a random sequence of bytes.
Do not specify characters used for other file type options such as Escape or Escape Unenclosed Field.
The default (if the field is left blank) is a newline character.
Field Delimiter
= string
(CSV only) Input a delimiter for fields. This can be one or more single-byte or multibyte characters that separate fields in an input file.
Accepted characters include common escape sequences, octal values (prefixed by \\), or hex values (prefixed by 0x). Also accepts a value of NONE.
This delimiter is limited to a maximum of 20 characters.
While multi-character delimiters are supported, the field delimiter cannot be a substring of the record delimiter, and vice versa. For example, if the field delimiter is "aa", the record delimiter cannot be "aabb".
The specified delimiter must be a valid UTF-8 character and not a random sequence of bytes.
Do not specify characters used for other file type options such as Escape or Escape Unenclosed Field.
The default setting is a comma (,).
Skip Header
= integer
(CSV only) Specify the number of rows to skip. The default is 0.
If Skip Header is used, the value of the record delimiter will not be used to determine where the header line is. Instead, the specified number of CRLF (carriage return/line feed) line endings will be skipped.
For example, if the value of Skip Header = 1, then Matillion ETL will skip to the first CRLF that it finds. If you have set the Record Delimiter property to a single character and the file contains no CRLF, then Matillion ETL skips to the end of the file (treating the entire file as a header).
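These CSV properties correspond to file format options on the COPY INTO statement that performs the load. A rough sketch of the equivalent SQL (table, stage, and option values are placeholders; Matillion ETL builds the actual statement for you):

```sql
COPY INTO my_table
FROM @my_stage/testDirectory/
FILE_FORMAT = (
  TYPE = CSV
  RECORD_DELIMITER = '\n'   -- Record Delimiter
  FIELD_DELIMITER = ','     -- Field Delimiter
  SKIP_HEADER = 1           -- Skip Header
);
```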
Skip Blank Lines
= drop-down
(CSV only) When "True", ignores blank lines that only contain a line feed in a data file and does not try to load them. The default setting is "False".
Date Format
= string
(CSV only) Define the format of date values in the data files to be loaded. If a value is not specified or is AUTO, the value for the DATE_INPUT_FORMAT session parameter is used.
The default setting is AUTO.
Time Format
= string
(CSV only) Define the format of time values in the data files to be loaded. If a value is not specified or is AUTO, the value for the TIME_INPUT_FORMAT session parameter is used.
The default setting is AUTO.
Timestamp Format
= string
(CSV only) Define the format of timestamp values in the data files to be loaded. If a value is not specified or is AUTO, the value for the TIMESTAMP_INPUT_FORMAT session parameter is used.
Escape
= string
(CSV only) Specify a single character to be used as the escape character for field values that are enclosed. The default setting is NONE.
Escape Unenclosed Field
= string
(CSV only) Specify a single character to be used as the escape character for unenclosed field values only. The default is a backslash (\).
If you have set a value in the property Field Optionally Enclosed, all fields will become enclosed, rendering the Escape Unenclosed Field property redundant; it will therefore be ignored.
Trim Space
= drop-down
(CSV only) When "True", removes whitespace from fields.
The default setting is "False".
Field Optionally Enclosed
= string
(CSV only) Specify a character used to enclose strings. The value can be NONE, single quote character ('), or double quote character ("). To use the single quote character, use the octal or hex representation (0x27) or the double single-quoted escape (''). The default is NONE.
Note
When a field contains one of these characters, escape the field using the same character. For example, to escape a string like this: 1 "2" 3, use double quotation to escape, like this: 1 ""2"" 3.
Null If
= string
Specify one or more strings (one string per row of the table) to convert to NULL values. When one of these strings is encountered in the file, it is replaced with an SQL NULL value for that field in the loaded table. Click + to add a string.
Error On Column Count Mismatch
= drop-down
(CSV only) When "True", generates an error if the number of delimited columns in an input file does not match the number of columns in the corresponding table.
When "False", an error is not generated and the load continues. If the file is successfully loaded in this case:
- Where the input file contains records with more fields than columns in the table, the matching fields are loaded in order of occurrence in the file, and the remaining fields are not loaded.
- Where the input file contains records with fewer fields than columns in the table, the non-matching columns in the table are loaded with NULL values.
In Matillion ETL, the default setting is False.
Empty Field As Null
= drop-down
(CSV only) When "True", inserts NULL values for empty fields in an input file. This is the default setting.
Replace Invalid Characters
= drop-down
When "True", Snowflake replaces invalid UTF-8 characters with the Unicode replacement character.
When "True", the load operation produces an error when invalid UTF-8 character encoding is detected.
The default setting is "False".
Encoding Type
= drop-down
(CSV only) Select the character set of the source data when loading it into a table. Please refer to the Snowflake documentation for more information.
Enable Octal
= drop-down
(JSON only) When "True", enables the parsing of octal values.
The default setting is "False".
Allow Duplicates
= drop-down
(JSON only) When "True", allows duplicate object field names. The default setting is "False".
Strip Outer Array
= drop-down
(JSON only) When "True", instructs the JSON parser to remove outer brackets. The default setting is False.
Strip Null Values
= drop-down
(JSON only) When "True", instructs the JSON parser to remove any object fields or array elements containing NULL values.
The default setting is "False".
Ignore UTF8 Errors
= drop-down
(JSON, XML only) When "True", replaces any invalid UTF-8 sequences with the Unicode replacement character.
When "False", invalid UTF-8 sequences will produce an error in the job run. The default setting is "False".
Preserve Space
= drop-down
(XML only) When "True", the XML parser preserves leading and trailing spaces in element content.
The default setting is "False".
Strip Outer Element
= drop-down
(XML only) When "True", the XML parser strips out any outer XML elements, exposing second-level elements as separate documents.
The default setting is "False".
Disable Snowflake Data
= drop-down
(XML only) When "True", the XML parser will not recognise Snowflake semi-structured data tags.
The default setting is "False".
Disable Auto Convert
= drop-down
(XML only) When "True", the XML parser will disable automatic conversion of numeric and Boolean values from text to native representations.
The default setting is "False".
On Error
= drop-down
Decide how to proceed upon an error.
- Abort Statement: Aborts the load if any error is encountered.
- Continue: Continue loading the file.
- Skip File: Skip file if any errors are encountered in the file.
- Skip File When n Errors: Skip file when the number of errors in the file is equal to or greater than the specified number in the next property, n.
- Skip File When n% Errors: Skip file when the percentage of errors in the file exceeds the percentage specified in the next property, n.
The default setting is Abort Statement.
n
= integer
Specify the number of errors or the percentage of errors required for Matillion ETL to skip the file.
This property only accepts integers. The % symbol is not accepted; specify percentages as a number only.
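Together, On Error and n correspond to the ON_ERROR copy option. For example, skipping a file after 10 errors looks roughly like this (a sketch with placeholder table and stage names):

```sql
COPY INTO my_table
FROM @my_stage/
FILE_FORMAT = (TYPE = CSV)
ON_ERROR = 'SKIP_FILE_10';   -- Skip File When n Errors, with n = 10
-- For Skip File When n% Errors, the equivalent value is 'SKIP_FILE_10%'.
```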
Size Limit (B)
= integer
Specify the maximum size, in bytes, of data to be loaded for a given COPY statement. If the maximum is exceeded, the COPY operation discontinues loading files. For more information, please refer to the Snowflake documentation.
Purge Files
= drop-down
When "True", purges data files after the data is successfully loaded. The default setting is "False".
Match By Column Name
= drop-down
Specify whether to load semi-structured data into columns in the target table that match corresponding columns represented in the data.
- Case Insensitive: Load semi-structured data into columns in the target table that match corresponding columns represented in the data. Column names should be case-insensitive.
- Case Sensitive: Load semi-structured data into columns in the target table that match corresponding columns represented in the data. Column names should be case-sensitive.
- None: The COPY operation loads the semi-structured data into a variant column or, if a query is included in the COPY statement, transforms the data.
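For semi-structured files such as Parquet, this property corresponds to the MATCH_BY_COLUMN_NAME copy option; a hedged sketch with placeholder table and stage names:

```sql
COPY INTO my_table
FROM @my_stage/parquet/
FILE_FORMAT = (TYPE = PARQUET)
MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;   -- Case Insensitive setting
```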
Truncate Columns
= drop-down
When "True", strings are automatically truncated to the target column length.
When "False", the COPY statement produces an error if a loaded string exceeds the target column length.
The default setting is "False".
Force Load
= drop-down
When "True", loads all files, regardless of whether they have been loaded previously and haven't changed since they were loaded. The default setting is "False".
When set to "True", this option reloads files and can lead to duplicated data in a table.
Metadata Fields
= dual listbox
Snowflake metadata columns available to include in the load.
Snowflake automatically generates metadata for files in internal stages (i.e. Snowflake) and external stages (Amazon S3, Google Cloud Storage, or Microsoft Azure). This metadata is "stored" in virtual columns. These metadata columns are added to the staged data, but are only added to the table when included in a query of the table. For more information, read Querying Metadata for Staged Files.
This property is only available when an external stage is selected. To manage stages, click the Environments panel in the bottom-left, then right-click a Matillion ETL environment, and click Manage Stages. To learn more, read Manage Stages.
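These metadata columns can also be queried directly against the stage in Snowflake; for example (a minimal sketch with a placeholder stage name):

```sql
-- METADATA$FILENAME and METADATA$FILE_ROW_NUMBER are Snowflake virtual columns.
SELECT METADATA$FILENAME,
       METADATA$FILE_ROW_NUMBER,
       t.$1,
       t.$2
FROM @my_external_stage t;
```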
Name
= string
A human-readable name for the component.
Location
= string
An S3 bucket from which to load data. A template URL is provided: s3://<bucket>/<path>
Users can also click through the nested folder system.
When a user enters a forward slash character / after a folder name, a validation of the file path is triggered. This works in the same manner as the Go button.
Pattern
= string
A string that will partially match all file paths and names that are to be included in the load. Defaults to '.*' indicating all files within the S3 Object Prefix.
Catalog
= drop-down
Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Matillion ETL environment setup. Selecting a catalog will determine which databases are available in the next parameter.
Database
= drop-down
Select the Delta Lake database. The special value, [Environment Default], will use the database specified in the Matillion ETL environment setup.
Target Table
= string
Select an existing table to load data into. The chosen Delta Lake database determines the available tables.
Load Columns
= dual listbox
Select which columns to include in the load.
Recursive File Lookup
= drop-down
When enabled, disables partition inference. To control which files are loaded, use the "pattern" property instead.
File Type
= drop-down
Select the file type. Available file types are AVRO, CSV, JSON, and PARQUET.
Skip Header
= drop-down
(CSV only) When True, uses the first line as names of columns. The default is False.
Field Delimiter
= string
(CSV only) Specify a delimiter to separate columns. The default is a comma (,).
A TAB character can be specified as "\t".
Date Format
= string
(CSV and JSON only) Manually set a date format. If none is set, the default is yyyy-MM-dd.
Timestamp Format
= string
(CSV and JSON only) Manually set a timestamp format. If none is set, the default is yyyy-MM-dd'T'HH:mm:ss.[SSS][XXX].
Encoding Type
= string
(CSV and JSON only) Decodes the files using the given encoding type. If none is set, the default is UTF-8.
Ignore Leading White Space
= drop-down
(CSV only) When True, skips any leading whitespaces. The default is False.
Ignore Trailing White Space
= drop-down
(CSV only) When True, skips any trailing whitespaces. The default is False.
Infer Schema
= drop-down
(CSV only) When True, infers the input schema automatically from the data. The default is False.
Multi Line
= drop-down
When True, parses records that may span multiple lines. The default is False.
Null Value
= drop-down
(CSV only) Sets the string representation of a null value. The default value is an empty string.
Empty Value
= string
(CSV only) Sets the string representation of an empty value. The default value is an empty string.
Primitive as String
= drop-down
(JSON only) When True, primitive data types are inferred as strings. The default is False.
Prefers Decimal
= drop-down
(JSON only) When True, infers all floating-point values as a decimal type. If the values don't fit in decimal, then they're inferred as doubles. The default is False.
Allow Comments
= drop-down
(JSON only) When True, allows JAVA/C++ comments in JSON records. The default is False.
Allow Unquoted Field Names
= drop-down
(JSON only) When True, allows unquoted JSON field names. The default is False.
Allow Single Quotes
= drop-down
(JSON only) When True, allows single quotes in addition to double quotes. The default is True.
Allow Numeric Leading Zeros
= drop-down
(JSON only) When True, allows leading zeros in numbers, e.g. 00019. The default is False.
Allow Backslash Escaping Any Character
= drop-down
(JSON only) When True, allows the quoting of all characters using the backslash quoting mechanism (\). The default is False.
Allow Unquoted Control Chars
= drop-down
(JSON only) When True, allows JSON strings to include unquoted control characters (ASCII characters where their value is less than 32, including Tab and line feed characters). The default is False.
Drop Field If All Null
= drop-down
(JSON only) When True, ignores columns of all-null values or empty arrays/structs during schema inference. The default is False.
Merge Schema
= drop-down
(AVRO, PARQUET only) When True, merges schemata from all Parquet part-files. The default is False.
Path Glob Filter
= string
An optional glob pattern, used to only include files with paths matching the pattern.
Force Load
= drop-down
When True, idempotency is disabled and files are loaded regardless of whether they've been loaded before. The default is False.
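For Delta Lake on Databricks, these properties broadly correspond to options on a Databricks SQL COPY INTO statement. A hedged sketch (catalog, database, table, path, and option values are placeholders; Matillion ETL generates the equivalent load for you):

```sql
COPY INTO my_catalog.my_database.my_table
FROM 's3://mybucket/testDirectory/'
FILEFORMAT = CSV
PATTERN = '*.csv'                          -- Path Glob Filter / Pattern
FORMAT_OPTIONS ('header' = 'true',         -- Skip Header
                'inferSchema' = 'true',    -- Infer Schema
                'delimiter' = ',')         -- Field Delimiter
COPY_OPTIONS ('force' = 'true');           -- Force Load
```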
Name
= string
A human-readable name for the component.
Schema
= drop-down
Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.
Target Table Name
= drop-down
Select an existing table to load data into. The selected target table is where data from the S3 object will be loaded.
When selecting a target table with the Redshift SUPER data type, the S3 Load component will only offer the following data file types: JSON, ORC, and PARQUET. However, if you select to include columns of data file types other than SUPER in the Load Columns property, all data file types will be available for selection.
Load Columns
= dual listbox
Select which of the target table's columns should be loaded.
S3 URL Location
= drop-down
Specify the URL of the S3 bucket to load files from. The URL follows the format s3://bucket-name/location
The "location" parameter in the URL is optional.
When a user enters a forward slash character / after a folder name, a validation of the file path is triggered. This works in the same manner as the Go button.
S3 Object Prefix
= string
All files that begin with this prefix will be included in the load into the target table.
IAM Role ARN
= drop-down
Select an IAM role Amazon Resource Name (ARN) that is already attached to your Redshift cluster, and that has the necessary permissions to access S3.
This setting is optional, since without this style of setup, the credentials of the environment (instance credentials or manually entered access keys) will be used.
See the Redshift documentation for more information about using a role ARN with Redshift.
Data File Type
= drop-down
The type of expected data to load. Some may require additional formatting, explained in the Amazon documentation.
Available options are: Avro, CSV, Delimited, Fixed Width, JSON, ORC, PARQUET.
Component properties will change to reflect the choice made here and give options based on the specific data file type.
Fixed Width Spec
= string
Fixed width only. Loads the data from a file where each column width is a fixed length, rather than columns being separated by a delimiter. For more information, see the AWS documentation.
JSON Layout
= string
(JSON only) Defaults to "auto", which will work for the majority of JSON files if the fields match the table field names. More information about JSON file types can be found in the AWS documentation.
Delimiter
= string
(CSV only) Specify a delimiting character to separate columns. The default character is a comma (,).
A [TAB] character can be specified using "\t".
CSV Quoter
= string
(CSV only) Specify the character to be used as the quote character when using the CSV file type.
Explicit IDs
= drop-down
Select whether to load data from the S3 objects into an IDENTITY column. See the Redshift documentation for more information.
S3 Bucket Region
= drop-down
Select the Amazon S3 region that hosts the selected S3 bucket. This setting is optional and can be left as "None" if the bucket is in the same region as your Redshift cluster.
Compression Method
= drop-down
Specify whether the input is compressed in any of the following formats: BZip2, gzip, LZop, Zstd, or not at all (None).
Encoding
= drop-down
Select the encoding format the data is in. The default for this setting is UTF-8.
Replace Invalid Characters
= string
Specify a single character that will replace any invalid Unicode characters in the data. The default is a question mark (?).
Remove Quotes
= drop-down
(Delimited, Fixed Width only) When "Yes", removes surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained.
If a string has a beginning single or double quotation mark but no corresponding ending mark, the COPY command fails to load that row and returns an error.
The default setting is "No".
For more information, see the AWS documentation.
Maximum Errors
= integer
The maximum number of individual parsing errors permitted before the load will fail. Values up to this limit will be substituted as NULL values. In Matillion ETL, the default is 0. The Amazon default is 1000.
Date Format
= string
Defaults to 'auto'. Users can specify their preferred date format manually if they wish. For more information on date formats, see the AWS documentation.
Time Format
= string
Defaults to 'auto'. Users can specify their preferred time format manually if they wish. For more information on time formats, see the AWS documentation.
Ignore Header Rows
= integer
Specify the number of rows at the top of the file to ignore. The default is 0 (no rows ignored).
Accept Any Date
= drop-down
- Yes: invalid dates such as '45-65-2020' are not considered an error, but will be loaded as NULL values. This is the default setting.
- No: invalid dates will return an error.
Ignore Blank Lines
= drop-down
(CSV only) When "Yes", ignores blank lines that only contain a line feed in a data file and does not try to load them. This is the default setting.
Truncate Columns
= drop-down
- Yes: any instance of data in the input file that is too long to fit into the specified target column width will be truncated to fit, instead of causing an error.
- No: any data in the input file that is too long to fit into the specified target column width will cause an error. This is the default setting.
Fill Record
= drop-down
When "Yes", allows data files to be loaded when contiguous columns are missing at the end of some records. The missing columns are filled with either zero-length strings or NULLs, as appropriate for the data types of the columns in question. This is the default setting.
Trim Blanks
= drop-down
- Yes: Removes the trailing white space characters from a VARCHAR string. This property only applies to columns with a VARCHAR data type.
- No: does not remove trailing white space characters from the input data.
NULL As
= string
Loads fields that match the specified NULL string as NULL values. The default is \N, with an additional \ at the start to escape it (\\N). Case-sensitive.
For more information, please see the AWS documentation.
Empty As Null
= drop-down
When "Yes", empty columns in the input file will become NULL values. This is the default setting.
Blanks As Null
= drop-down
When "Yes", loads blank columns, which consist of only whitespace characters, as NULL. This is the default setting.
This option applies only to CHAR and VARCHAR columns. Blank fields for other data types, such as INT, are always loaded with NULL. For example, a string that contains three space characters in succession (and no other characters) is loaded as a NULL. The default behavior, without this option, is to load the space characters as is.
Comp Update
= drop-down
When "On", compression encodings are automatically applied during a COPY command. This is the default setting.
This is usually a good idea to optimise the compression used when storing the data.
Stat Update
= drop-down
When "On", governs automatic computation and refreshing of optimiser statistics at the end of a successful COPY command. This is the default setting.
Escape
= drop-down
(Delimited only) When "Yes", the backslash character (\) in input data is treated as an escape character. The character that immediately follows the backslash is loaded into the table as part of the current column value, even if it is a character that normally serves a special purpose. For example, you can use this parameter to escape the delimiter character, a quotation mark, an embedded newline character, or the escape character itself when any of these characters is a legitimate part of a column value.
The default setting is "No".
Round Decimals
= drop-down
When "Yes", any decimals are rounded to fit into the column in any instance where the number of decimal places in the input is larger than defined for the target column. This is the default setting.
Manifest
= drop-down
When "Yes", the given object prefix is that of a manifest file. The default setting is "No".
For more information, see the AWS documentation.
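For Amazon Redshift, the component issues a COPY command assembled from the properties above. A rough, hedged sketch (table, bucket, prefix, and role ARN are placeholders):

```sql
COPY my_schema.my_table
FROM 's3://mybucket/testDirectory/part'                        -- S3 URL Location + S3 Object Prefix
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-s3-access'   -- IAM Role ARN
FORMAT AS CSV                                                  -- Data File Type
IGNOREHEADER 1                                                 -- Ignore Header Rows
GZIP                                                           -- Compression Method
MAXERROR 0                                                     -- Maximum Errors
DATEFORMAT 'auto'                                              -- Date Format
EMPTYASNULL                                                    -- Empty As Null
BLANKSASNULL;                                                  -- Blanks As Null
```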
S3 access
To access an S3 bucket from a different AWS account, the following is required:
- Set up cross-account access via AWS roles.
- The user must type the name of the bucket they want to access, or use a variable, in order to load to or unload from those locations.
Load generators
To help with S3 Load, you can use one of the S3 Load Generator components. Load generators are wizards that let you load and view files on the fly, altering load component properties and observing their effects without the need for a separate Transformation job. Read Storage Load Generator for more information.
External staging
If using an external stage with Matillion ETL for Snowflake, consider the following:
- All columns to be included in the load must be included in the Load Columns property, in the correct order and associated with the correct data type. The Create Table component can be used to specify the metadata of the columns.
- The Metadata Fields property will insert metadata columns at the end of the existing table and overwrite the same number of columns unless additional columns are added via the Create Table component.
- The table's data and structure are only accessed at runtime, meaning that additional columns must be added (and data types set) before the job is run. To ensure these columns are set up beforehand, users can load their data with Matillion ETL's S3 Load Generator.
File patterns with Snowflake
In Snowflake, the Pattern parameter in the COPY INTO syntax is a pattern on the complete path of the file and is not just relative to the directory configured in the S3 Object Prefix parameter.
The table below provides an example of S3 Object Prefix and Pattern behaviours, including success and failure states.
S3 Object Prefix | Pattern | Outcome | Comments |
---|---|---|---|
s3://testbucket/ | testDirectory/alphabet_0_0_0.csv.gz | Success | This is the format that the S3 Load Generator will generate. |
s3://testbucket/testDirectory/ | testDirectory/alphabet_0_0_0.csv.gz | Success | Loads the file successfully because the pattern is matching the full path. |
s3://testbucket/testDirectory/ | .*.csv.gz | Success | Loads all files ending in .csv.gz in the testDirectory directory. |
s3://testbucket/testDirectory/ | alphabet_0_0_0.csv.gz | Failure | Does not load the file because the pattern does not match. |
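In terms of the generated COPY INTO statement, the first row of the table corresponds to something like the following sketch (the table name and storage integration are placeholders):

```sql
COPY INTO my_table
FROM 's3://testbucket/'                            -- S3 Object Prefix
STORAGE_INTEGRATION = s3_int
PATTERN = 'testDirectory/alphabet_0_0_0.csv.gz'    -- Pattern (matches the full path)
FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP);
```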
Snowflake | Delta Lake on Databricks | Amazon Redshift | Google BigQuery | Azure Synapse Analytics |
---|---|---|---|---|
✅ | ✅ | ✅ | ❌ | ❌ |