Table Output
The Table Output component writes an input data flow out to an existing output table.
Successful validation of this component ensures the following:
- The target table exists.
- The target columns have been found.
Data is only written to the table when the Transformation Job containing the Table Output component is run.
Amazon Redshift: a successful validation avoids many potential problems; however, run-time errors can still occur during execution. For example, your Redshift cluster may run out of disk space.
Snowflake: a successful validation avoids many potential problems; however, run-time errors can still occur during execution. For example, your Snowflake warehouse may hit its resource monitor cap.
Properties
Snowflake
Name
= string
A human-readable name for the component.
Warehouse
= drop-down
The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.
Database
= drop-down
The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
= drop-down
The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Target Table
= drop-down
Select the output table. The available tables depend on the selected Matillion ETL environment.
Fix Data Type Mismatches
= drop-down
Select whether to cast types in the event of data type mismatches.
- No: do not cast types (do not fix data type mismatches).
- Yes: if the source column type does not match the target table type, attempt to CAST the value to the required target type.
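For illustration, a minimal sketch of what this casting implies, assuming a hypothetical VARCHAR source column amount and a NUMBER(10,2) target column (the actual SQL is generated by the component and may differ):

```sql
-- Hypothetical: source column "amount" arrives as VARCHAR, but the
-- target column is NUMBER(10,2); "Yes" wraps the value in a CAST.
INSERT INTO target_table (amount)
SELECT CAST(amount AS NUMBER(10,2))
FROM stg_orders;
```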
Column Mapping
= column editor
- Input Column: The source field from the input flow.
- Output Column: The target table's output column to insert data into.
Order By
= column editor
- Column: Select the output column(s) to sort by.
- Sort Order: Set the corresponding column to be ordered ascending or descending. The default sort order is ascending.
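Taken together, Column Mapping and Order By shape the bulk insert the component generates. A minimal sketch, with hypothetical table and column names:

```sql
-- Hypothetical mapping: input "cust_id" -> output "customer_id",
-- input "total" -> output "order_total", ordered ascending on cust_id.
INSERT INTO target_table (customer_id, order_total)
SELECT cust_id, total
FROM input_flow
ORDER BY cust_id ASC;
```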
Truncate
= drop-down
Select how new data is handled:
- Append: add the incoming records to the end of the table.
- Truncate: remove the existing rows in the target table before loading the incoming input rows. This operation uses either a TRUNCATE or a DELETE FROM query, depending on whether the current flow is inside a transaction. TRUNCATE is faster and therefore preferred, but it is not transaction-safe, so it is not used while a transaction is in progress.
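As a sketch of the two statements the Truncate operation may issue (hypothetical table name; the component chooses between them automatically):

```sql
-- Outside a transaction: TRUNCATE is used because it is faster.
TRUNCATE TABLE target_table;

-- Inside a transaction: DELETE FROM is used instead, because
-- TRUNCATE is not transaction-safe and cannot be rolled back.
DELETE FROM target_table;
```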
Delta Lake on Databricks
Name
= string
A human-readable name for the component.
Catalog
= drop-down
Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Matillion ETL environment setup. Selecting a catalog will determine which databases are available in the next parameter.
Database
= drop-down
Select the Delta Lake database. The special value, [Environment Default], will use the database specified in the Matillion ETL environment setup.
Target Table
= drop-down
Select the output table. The available tables depend on the selected Matillion ETL environment.
Column Mapping
= column editor
- Input Column: The source field from the input flow.
- Output Column: The target table's output column to insert data into.
Truncate
= drop-down
Select how new data is handled:
- Append: add the incoming records to the end of the table.
- Truncate: remove the existing rows in the target table before loading the incoming input rows. This operation uses either a TRUNCATE or a DELETE FROM query, depending on whether the current flow is inside a transaction. TRUNCATE is faster and therefore preferred, but it is not transaction-safe, so it is not used while a transaction is in progress.
Fix Data Type Mismatches
= drop-down
Select whether to cast types in the event of data type mismatches.
- No: do not cast types (do not fix data type mismatches).
- Yes: if the source column type does not match the target table type, attempt to CAST the value to the required target type.
Amazon Redshift
Name
= string
A human-readable name for the component.
Schema
= drop-down
Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.
Target Table
= drop-down
Select the output table. The available tables depend on the selected Matillion ETL environment.
Fix Data Type Mismatches
= drop-down
Select whether to cast types in the event of data type mismatches.
- No: do not cast types (do not fix data type mismatches).
- Yes: if the source column type does not match the target table type, attempt to CAST the value to the required target type.
Column Mapping
= column editor
- Input Column: The source field from the input flow.
- Output Column: The target table's output column to insert data into.
Truncate
= drop-down
Select how new data is handled:
- Append: add the incoming records to the end of the table.
- Truncate: remove the existing rows in the target table before loading the incoming input rows. This operation uses either a TRUNCATE or a DELETE FROM query, depending on whether the current flow is inside a transaction. TRUNCATE is faster and therefore preferred, but it is not transaction-safe, so it is not used while a transaction is in progress.
Google BigQuery
Name
= string
A human-readable name for the component.
Target Project
= drop-down
Select the Google Cloud project. The special value, [Environment Default], will use the project defined in the environment. For more information, read Creating and managing projects.
Target Dataset
= drop-down
Select the Google BigQuery dataset to load data into. The special value, [Environment Default], will use the dataset defined in the environment. For more information, read Introduction to datasets.
Target Table
= drop-down
Select the output table. The available tables depend on the selected Matillion ETL environment.
Column Mapping
= column editor
- Input Column: The source field from the input flow.
- Output Column: The target table's output column to insert data into.
Truncate
= drop-down
Select how new data is handled:
- Append: add the incoming records to the end of the table.
- Truncate: remove the existing rows in the target table before loading the incoming input rows. This operation uses either a TRUNCATE or a DELETE FROM query, depending on whether the current flow is inside a transaction. TRUNCATE is faster and therefore preferred, but it is not transaction-safe, so it is not used while a transaction is in progress.
Cast Input
= drop-down
Select whether to CAST input types in the event of a data type mismatch with the target column's data type.
- No: do not cast types (do not fix data type mismatches). Google BigQuery may still attempt to coerce the types in this case.
- Yes: if the source column type does not match the target table column type, Matillion ETL will attempt to CAST the value to the required target type.
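For illustration, a minimal sketch in BigQuery Standard SQL, assuming a hypothetical STRING source column quantity and an INT64 target column (the actual statement is generated by the component and may differ):

```sql
-- Hypothetical: source column "quantity" arrives as STRING, but the
-- target column is INT64; "Yes" wraps the value in an explicit CAST.
INSERT INTO `my_dataset.target_table` (quantity)
SELECT CAST(quantity AS INT64)
FROM `my_dataset.input_flow`;
```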
Azure Synapse Analytics
Name
= string
A human-readable name for the component.
Schema
= drop-down
Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on schemas, read the Azure Synapse documentation.
Target Table
= drop-down
Select the output table. The available tables depend on the selected Matillion ETL environment.
Column Mapping
= column editor
- Input Column: The source field from the input flow.
- Output Column: The target table's output column to insert data into.
Truncate
= drop-down
Select how new data is handled:
- Append: add the incoming records to the end of the table.
- Truncate: remove the existing rows in the target table before loading the incoming input rows. This operation uses either a TRUNCATE or a DELETE FROM query, depending on whether the current flow is inside a transaction. TRUNCATE is faster and therefore preferred, but it is not transaction-safe, so it is not used while a transaction is in progress.
Fix Data Type Mismatches
= drop-down
Select whether to cast types in the event of data type mismatches.
- No: do not cast types (do not fix data type mismatches).
- Yes: if the source column type does not match the target table type, attempt to CAST the value to the required target type.
Variable Exports
A full list of common component exports can be found here.
Source | Description |
---|---|
Automatic Compression | If the Automatic Compression property is set to "If not already", this variable is set to "Yes" when compression has been applied where it had not been before; otherwise, it is set to "No". |
Strategy
Perform a bulk-insert into the target table, possibly after a TRUNCATE.
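Putting the pieces together, a minimal sketch of this strategy in Truncate mode, outside a transaction (all table and column names are illustrative; the component generates the actual SQL):

```sql
-- Truncate mode: empty the target table, then bulk insert the input flow.
TRUNCATE TABLE target_table;

INSERT INTO target_table (customer_id, order_total)
SELECT cust_id, total
FROM input_flow;
```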
Snowflake | Delta Lake on Databricks | Amazon Redshift | Google BigQuery | Azure Synapse Analytics |
---|---|---|---|---|
✅ | ✅ | ✅ | ✅ | ✅ |