
Table Output

The Table Output component writes an input data flow out to an existing output table. Successful validation of this component ensures the following:

  • The target table exists.
  • The target columns have been found.

Data is only written to the table when the transformation pipeline containing the Table Output component is actually run.

A successful validation avoids many potential problems; however, run-time errors can still occur during execution. For example, your Snowflake warehouse may hit its resource monitor cap.


Properties

Snowflake

Name = string

A human-readable name for the component.


Warehouse = drop-down

The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.


Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Target Table = drop-down

Select the output table. The available tables depend on the selected environment.


Fix Data Type Mismatches = boolean

Select whether to cast types in the event of data type mismatches.

  • Yes: If the source column type does not match the target table type, attempt to CAST the value to the required target type.
  • No: Do not cast types (do not fix data type mismatches).
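The Yes/No behavior above can be sketched in Python. This is an illustrative sketch only, not the component's actual internals: the function name, type names, and quoting style are assumptions.

```python
# Sketch: wrap mismatched columns in CAST expressions when the
# "Fix Data Type Mismatches" option is enabled. All names are hypothetical.

def build_select_list(mappings, source_types, target_types, fix_mismatches):
    """Return SQL select expressions for each (input, output) column pair."""
    exprs = []
    for input_col, output_col in mappings:
        src = source_types[input_col]
        tgt = target_types[output_col]
        if fix_mismatches and src != tgt:
            # Yes: attempt to CAST the value to the required target type.
            exprs.append(f'CAST("{input_col}" AS {tgt}) AS "{output_col}"')
        else:
            # No: pass the value through unchanged.
            exprs.append(f'"{input_col}" AS "{output_col}"')
    return exprs

exprs = build_select_list(
    [("id", "id"), ("amount", "amount")],
    {"id": "NUMBER", "amount": "VARCHAR"},
    {"id": "NUMBER", "amount": "FLOAT"},
    fix_mismatches=True,
)
# amount is cast because VARCHAR != FLOAT; id is passed through as-is.
```

Note that a CAST can itself fail at run time (for example, casting a non-numeric string to FLOAT), which is one of the run-time errors validation cannot catch.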

Column Mapping = column editor

  • Input Column: The source field from the input flow.
  • Output Column: The target table's output column to insert data into.

Click the Add all button to automatically populate the Output Column with all the columns in the table selected in the Target Table property. Each Input Column matches its Output Column, unless there is no column in the source table that matches that column name in the output table, in which case it is left blank. You can then selectively edit the list if required, but if you click Add all again it will return the list to the default fully populated state, removing any changes you have made.
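The Add all behavior described above can be sketched as a simple name-matching pass. This is a hedged illustration of the documented behavior, not the component's real implementation; function and variable names are assumptions.

```python
# Sketch of "Add all": list every target column, matching the input
# column by name and leaving it blank when there is no match.

def add_all(target_columns, source_columns):
    """Populate the column mapping grid from the target table's columns."""
    source = set(source_columns)
    return [
        # (Input Column, Output Column); blank input when no name match.
        (col if col in source else "", col)
        for col in target_columns
    ]

mapping = add_all(
    target_columns=["id", "name", "loaded_at"],
    source_columns=["id", "name"],
)
# loaded_at has no matching source column, so its Input Column is blank.
```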


Order By = column editor

  • Column: Select the output column(s) to sort by.
  • Sort Order: Set the corresponding column to be ordered ascending (default) or descending.
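The two columns of this grid map directly onto an ORDER BY clause. A minimal sketch, assuming the grid is available as (column, sort order) pairs; the function name is illustrative.

```python
# Sketch: build an ORDER BY clause from the Order By grid above.

def order_by_clause(rows):
    """rows: list of (column, sort_order) pairs; sort_order 'ASC'/'DESC'."""
    if not rows:
        return ""
    return "ORDER BY " + ", ".join(f'"{col}" {order}' for col, order in rows)

print(order_by_clause([("region", "ASC"), ("sales", "DESC")]))
# ORDER BY "region" ASC, "sales" DESC
```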

Output Mode = drop-down

  • Append: Adds the records to the end of the table.
  • Truncate: Empties existing rows in the target table before loading the incoming input rows. This operation may use either a TRUNCATE or DELETE FROM query, depending on whether the current flow is in a transaction or not. TRUNCATE is faster, and therefore preferred, but is not transaction-safe and is not used if a transaction is in progress.
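The TRUNCATE versus DELETE FROM choice described above can be sketched as follows, under the assumption that the transaction state is known to the caller. Names are illustrative, not the component's real API.

```python
# Sketch: choose the statement used to empty the target table in
# Truncate mode, based on whether a transaction is in progress.

def empty_table_sql(table, in_transaction):
    """Choose the statement used to empty the target table."""
    if in_transaction:
        # TRUNCATE is not transaction-safe, so fall back to DELETE FROM.
        return f"DELETE FROM {table}"
    # Faster, preferred path when no transaction is in progress.
    return f"TRUNCATE TABLE {table}"

print(empty_table_sql("sales", in_transaction=False))  # TRUNCATE TABLE sales
print(empty_table_sql("sales", in_transaction=True))   # DELETE FROM sales
```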

Databricks

Name = string

A human-readable name for the component.


Catalog = drop-down

Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the environment setup. Selecting a catalog will determine which databases are available in the next parameter.


Schema (Database) = drop-down

The Databricks schema. The special value, [Environment Default], will use the schema defined in the environment. Read Create and manage schemas to learn more.


Target Table = drop-down

Select the output table. The available tables depend on the selected environment.


Fix Data Type Mismatches = boolean

Select whether to cast types in the event of data type mismatches.

  • Yes: If the source column type does not match the target table type, attempt to CAST the value to the required target type.
  • No: Do not cast types (do not fix data type mismatches).

Column Mapping = column editor

  • Input Column: The source field from the input flow.
  • Output Column: The target table's output column to insert data into.

Click the Add all button to automatically populate the Output Column with all the columns in the table selected in the Target Table property. The Input column matches what is in the Output Column, unless there is no column in the source table that matches that column name in the output table, in which case it is blank. You can then selectively edit the list if required, but if you click Add all again it will return the list to the default fully populated state, removing any changes you have made.


Output Mode = drop-down

  • Append: Adds the records to the end of the table.
  • Truncate: Empties existing rows in the target table before loading the incoming input rows. This operation may use either a TRUNCATE or DELETE FROM query, depending on whether the current flow is in a transaction or not. TRUNCATE is faster, and therefore preferred, but is not transaction-safe and is not used if a transaction is in progress.

Amazon Redshift

Name = string

A human-readable name for the component.


Schema = drop-down

Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.


Target Table = drop-down

Select the output table. The available tables depend on the selected environment.


Fix Data Type Mismatches = boolean

Select whether to cast types in the event of data type mismatches.

  • Yes: If the source column type does not match the target table type, attempt to CAST the value to the required target type.
  • No: Do not cast types (do not fix data type mismatches).

Column Mapping = column editor

  • Input Column: The source field from the input flow.
  • Output Column: The target table's output column to insert data into.

Click the Add all button to automatically populate the Output Column with all the columns in the table selected in the Target Table property. Each Input Column matches its Output Column, unless there is no column in the source table that matches that column name in the output table, in which case it is left blank. You can then selectively edit the list if required, but if you click Add all again it will return the list to the default fully populated state, removing any changes you have made.


Output Mode = drop-down

  • Append: Adds the records to the end of the table.
  • Truncate: Empties existing rows in the target table before loading the incoming input rows. This operation may use either a TRUNCATE or DELETE FROM query, depending on whether the current flow is in a transaction or not. TRUNCATE is faster, and therefore preferred, but is not transaction-safe and is not used if a transaction is in progress.

Analyze Compression = drop-down

Select whether the component will perform a Redshift compression analysis. Options are:

  • Yes
  • No
  • If not already
