
Create External Table

Create External Table is an orchestration component that lets you define a table that references data stored outside of your data warehouse. These external tables don't store data themselves, but can be queried similarly to regular tables. External tables are read-only.

Cloud platform information

If you're a Snowflake or Amazon Redshift user, familiarize yourself with the corresponding information about this component.

Snowflake

When using Snowflake, external tables are useful for querying large datasets without physically loading them into Snowflake storage. The data is typically returned in a single column named VALUE, containing variant-type values. Each row stores the source data as a set of properties within this variant.

Note

  • Due to the structure of the returned data, transformation (e.g. using the Flatten Variant component) is often required to convert it into a usable format.
  • External tables must be refreshed after any changes to the underlying data. Use the Refresh External Table component to update the metadata before querying, as illustrated in the sketch below.
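
For illustration only, the SQL below sketches both points against a hypothetical external table named ext_orders whose VALUE column holds JSON documents; the table name and field names are placeholders, not something this component generates.

    -- Refresh the metadata so newly added or changed files become visible
    ALTER EXTERNAL TABLE ext_orders REFRESH;

    -- Each row's source record is held in the VALUE variant column;
    -- cast individual properties out of it to obtain usable columns
    SELECT value:order_id::NUMBER AS order_id,
           value:order_date::DATE AS order_date
    FROM ext_orders;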

Amazon Redshift

When using Amazon Redshift, the Create External Table component references data stored in an Amazon S3 bucket. As with Snowflake, the external table doesn't store the data internally.

If the component requires access to a cloud provider, it will use credentials as follows:

  • If using Matillion Full SaaS: The component will use the cloud credentials associated with your environment to access resources.
  • If using Hybrid SaaS: By default, the component will inherit the agent's execution role (service account role). However, if there are cloud credentials associated with your environment, these will override the role.

Referencing external data in this way provides an efficient method for querying large datasets without consuming storage on the Amazon Redshift cluster.
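
For orientation, querying such a table looks the same as querying any other table; the schema, table, and column names below are hypothetical placeholders.

    -- The data stays in S3 and is read through Redshift Spectrum at query time
    SELECT sales_date, SUM(amount) AS total_amount
    FROM spectrum_schema.ext_sales
    WHERE sales_date >= '2023-01-01'
    GROUP BY sales_date;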

Properties

Create/Replace = drop-down

Select one of:

  • Create: (default). This option creates a new external table. The component will fail if an external table with the same name already exists.
  • Create if not exists: This option will only create a new external table when an external table of the same name does not already exist. If the schema of the existing external table does not match the schema defined in this component, no attempt is made to fix or correct the schema, which could lead to errors later in the pipeline if:
    • Users did not expect an external table to already exist.
    • Users did not expect the existing table to have a different schema from the one defined in this component.
  • Replace: This option drops any existing external table of the same name and then creates a new one. Selecting this option guarantees that after the component succeeds, the external table matches the schema defined in this component.

This component uses the CREATE OR REPLACE clause. When using the REPLACE clause, it also applies the COPY GRANTS clause. When you clone or create a new object (such as a table, view, schema, or database) from an existing one, the new object doesn't automatically inherit the original's grants (privileges). However, with the COPY GRANTS clause, you can seamlessly transfer object-level privileges from the source object to the new one. This helps maintain consistent access control and simplifies permission management when cloning or recreating objects. For more information, read Snowflake COPY GRANTS.
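
As a minimal sketch of the generated DDL (the table, stage, and path names are placeholder assumptions, and the exact statement emitted by the component may differ), replacing an external table while preserving its grants looks like this:

    CREATE OR REPLACE EXTERNAL TABLE ext_orders
      WITH LOCATION = @my_stage/orders/
      FILE_FORMAT = (TYPE = PARQUET)
      COPY GRANTS;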


Database = drop-down

The Snowflake database. The special value [Environment Default] uses the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value [Environment Default] uses the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


New Table Name = string

The name of the external table to be created or used.


Partition Columns = column editor (optional)

A partition is defined by an expression applied to a set of data, resulting in a partition that only contains rows that satisfy the expression. For example, partitioning a dataset by the value in the 'year' column.

For more information about setting up partitions with Snowflake's external tables, read Managing Regular Data Loads.

  • Name: The name of the new partition column.
  • Type: The data type of the partition column.
  • Size: The data size of the new column values.
  • Precision: The precision of the new column values.
  • Partition Expression: The expression used to partition the data.
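
As a hedged illustration of the Partition Expression field, assuming source files are laid out under paths such as logs/2023/07/14/events.json, a partition column of type DATE might use an expression like the following, which derives the date from Snowflake's metadata$filename pseudocolumn:

    -- Hypothetical layout: logs/<year>/<month>/<day>/<file>
    TO_DATE(
      SPLIT_PART(metadata$filename, '/', 2) || '-' ||
      SPLIT_PART(metadata$filename, '/', 3) || '-' ||
      SPLIT_PART(metadata$filename, '/', 4),
      'YYYY-MM-DD'
    )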

Click the Text mode toggle at the bottom of the dialog to open a multi-line editor that lets you add items in a single block. For more information, read Text mode.

To use grid variables, select the Use Grid Variable checkbox at the bottom of the dialog. For more information, read grid variables.


Stage Database = drop-down

Select a Snowflake database for the external stage.


Stage Schema = drop-down

Select a Snowflake schema for the external stage.


Stage = drop-down

Select an external stage for the data. Staging areas can be created through Snowflake using the CREATE STAGE command.
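
For example, a minimal external stage pointing at an S3 prefix can be created as follows; the stage name, URL, and storage integration are placeholder assumptions:

    CREATE STAGE my_s3_stage
      URL = 's3://my-bucket/data/'
      STORAGE_INTEGRATION = my_s3_integration;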


Relative Path = string (optional)

The directory path to follow to the target data. File names cannot be specified. The path is relative to the storage location given in the external stage setup.


Pattern = string (optional)

Specify, using regular expression (RegEx) pattern syntax, files to be matched on the external stage.

For example: '.*flight.*[.]csv'


Format = drop-down

Select a pre-made file format that will automatically set many of the component properties accordingly. These formats can be created through the Create File Format component.

The list of file formats is populated from the database and schema in which the specified table exists.


File Type = drop-down

The type of expected data to load. Some data may require additional formatting, explained in Preparing to Load Data.

Available options are: AVRO, CSV, JSON, ORC, PARQUET, and XML.

Component properties will change to reflect the choice made here and give options based on the specific file type.


Compression = drop-down

(AVRO, CSV, JSON, PARQUET, XML) Select whether the input file is compressed in GZIP, BROTLI, BZ2, DEFLATE, RAW_DEFLATE, or ZSTD format, or is not compressed at all. The default is AUTO.

When AUTO is selected, the compression algorithm is detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically.


Record Delimiter = string (optional)

(CSV) The delimiter to be used that separates records (rows) in the file. Defaults to newline.

\n can also signify a newline. \r can signify a carriage return.


Field Delimiter = string (optional)

(CSV) The delimiter that separates columns. The default is a comma. A [TAB] character can be specified as \t.


Skip Header = integer (optional)

(CSV) The number of rows at the top of the file to ignore. The default is 0, in which case the property can be left blank.


Skip Blank Lines = boolean

(CSV) When true, skip any blank lines encountered in the CSV files. When false, blank lines produce an end-of-record error. Default is false.


Escape = string (optional)

(CSV) When this option is specified, the backslash character \ in input data is treated as an escape character.


Escape Unenclosed Field = string (optional)

(CSV) Single character string used as the escape character for unenclosed field values. Accepts common escape sequences, octal values, or hex values. Also accepts a value of NONE (default). If a character is specified in the Escape field, it will override this field.


Field Optionally Enclosed = string (optional)

(CSV) A character that is used to enclose strings. Can be a single quote ' or a double quote " or NONE (default). Escape the character with an instance of the same character.


Error On Column Count Mismatch = boolean

(CSV) Generate an error if the number of delimited columns in the input does not match that of the table. If false, extra columns are not loaded into the table and missing columns are recorded as NULL in the table. Default is false.


Empty Field As Null = boolean

(CSV) When set to true, empty columns in the input file will become NULL. Default is true.


Replace Invalid Characters = boolean

(CSV) When true, Snowflake replaces invalid UTF-8 characters with the Unicode replacement character. When false, the load operation produces an error when invalid UTF-8 character encoding is detected. Default is false.


Encoding Type = drop-down (optional)

(CSV) The type of encoding that has been applied to the data. The default setting is UTF-8. Consult the Snowflake documentation for more information.


Enable Octal = boolean

(JSON) Enables parsing of octal numbers. Default is false.


Allow Duplicates = boolean

(JSON) Specifies whether to allow duplicate object field names when parsing JSON input. Default is false.


Strip Outer Array = boolean

(JSON) When true, enables removal of outer square brackets from JSONs. Default is false.


Strip Null Values = boolean

(JSON) When true, enables removal of NULL values. Default is false.


Ignore UTF-8 Errors = boolean

(JSON, XML) When true, replaces invalid UTF-8 sequences with the Unicode replacement character (U+FFFD), instead of throwing an error. Default is false.


Preserve Space = boolean

(XML) When set to true, leading and trailing spaces in XML element values are preserved during parsing. Default is false.


Strip Outer Element = boolean

(XML) When set to true, the parser strips the outermost XML element when processing XML files, exposing the second-level elements as separate documents. Default is false.


Disable Snowflake Data = boolean

(XML) When set to true, the parser will ignore Snowflake semi-structured data tags when processing XML files. Default is false.


Disable Auto Convert = boolean

(XML) When set to true, automatic conversion of numeric and Boolean values in XML text elements to their native types is disabled during parsing. Default is false.


Null If = string (optional)

(AVRO, CSV, JSON, ORC, PARQUET) This option replaces the specified string with NULL in the output table. Use this if your data has a particular representation of missing data.


Trim Space = boolean

(AVRO, CSV, JSON, ORC, PARQUET) When true, removes trailing and leading whitespace from the input data. Default is false.


External Schema = drop-down

Select the table schema. The special value [Environment Default] uses the schema defined in the environment. For more information on using multiple schemas, read Schemas.


New Table Name = string

The name of the external table to be created or used.


Method = drop-down

  • Create: Create the new table with the given name. The pipeline will fail if a table of that name already exists.
  • Create if not exists: Create the new table with the given name unless one already exists. The pipeline will succeed and continue in either case. If the schema of the existing table does not match the schema defined in this component, no attempt is made to fix or correct it, which could lead to errors later in the pipeline if you did not expect a table to already exist, or did not expect it to have a different schema from the one defined in this component.
  • Replace: Create the new table, potentially overwriting any existing table of the same name.

Because other database objects may depend on this table, DROP ... CASCADE is used, which may also remove those dependent objects.


Table Metadata = column editor

  • Name: The name of the new column.
  • Type: Select the data type.
    • Text: Can hold any type of data, subject to a maximum size.
    • Integer: Suitable for whole-number types (no decimals).
    • Numeric: Suitable for numeric types, with or without decimals.
    • Real: Suitable for data of a single precision floating-point number.
    • Double Precision: Suitable for data of a double precision floating-point number.
    • Boolean: Suitable for data that is either true or false.
    • Date: Suitable for dates without times.
    • DateTime: Suitable for dates, times, or timestamps (both date and time).
    • Super: Suitable for storing semi-structured data or documents as values.
  • Size: For text types, this is the maximum length. This is a limit on the number of bytes, not characters. For Amazon Redshift, since all data is stored using UTF-8, any non-ASCII character will count as 2 or more bytes. For numeric types, this is the total number of digits allowed, whether before or after the decimal point (see the worked example after this list).
  • Scale: Scale is the number of digits to the right of the decimal point in a Float. For example, the number 123.45 has a scale of 2.
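
For instance, a hypothetical Numeric column named amount with Size 10 and Scale 2 corresponds to the following column declaration, which can hold values up to 99999999.99:

    amount DECIMAL(10, 2)   -- Size 10 (total digits), Scale 2 (digits after the decimal point)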

Click the Text mode toggle at the bottom of the dialog to open a multi-line editor that lets you add items in a single block. For more information, read Text mode.

To use grid variables, select the Use Grid Variable checkbox at the bottom of the dialog. For more information, read grid variables.


Partition = column editor

Assign one or more columns in this table as potential partitions.

Partition columns allow queries on large datasets to be optimized when the query filters on the columns chosen as partition columns. When a partition is created, each value of that column maps to a distinct Amazon S3 storage location, so rows of data are stored in a location determined by their partition column value.

For example, it is common for a date column to be chosen as a partition column, thus storing all other data according to the date it belongs to. When creating partitioned data using the Add Partition component, it is vital that those partitioned columns have already been marked using this property.
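
As a rough sketch under assumed names (the schema, table, columns, and S3 paths below are placeholders), a partitioned external table and a subsequent partition addition, which is approximately what the Add Partition component performs, look like this:

    CREATE EXTERNAL TABLE spectrum_schema.ext_sales (
      order_id INTEGER,
      amount   DECIMAL(10, 2)
    )
    PARTITIONED BY (sales_date DATE)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS TEXTFILE
    LOCATION 's3://my-bucket/sales/';

    -- Each partition value maps to its own S3 prefix
    ALTER TABLE spectrum_schema.ext_sales
      ADD PARTITION (sales_date = '2023-07-14')
      LOCATION 's3://my-bucket/sales/2023-07-14/';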

To use grid variables, select the Use Grid Variable checkbox at the bottom of the dialog. For more information, read grid variables.


Location = column editor

The Amazon S3 bucket location for the external table data.

If the component requires access to a cloud provider, it will use credentials as follows:

  • If using Matillion Full SaaS: The component will use the cloud credentials associated with your environment to access resources.
  • If using Hybrid SaaS: By default, the component will inherit the agent's execution role (service account role). However, if there are cloud credentials associated with your environment, these will override the role.

Format = drop-down

Choose a file format for the source file.


Field Terminator = string (optional)

(TEXTFILE) The delimiter to be used that separates fields (columns) in the file. Defaults to \A.


Line Terminator = string (optional)

(TEXTFILE) The delimiter to be used that separates records (rows) in the file. Defaults to newline.

  • \n can also signify a newline.
  • \r can signify a carriage return.

Skip Header Rows = string (optional)

The number of rows at the top of the file to skip. The default setting is an empty field.


Strip Outer Array = drop-down

(JSON) Strips the outer array from the JSON file, enabling JSON files that contain a single, anonymous array to be loaded without error. The default setting is No.

