Table Update

Update a target table with a set of input rows. The rows to update are identified by matching keys, so it is essential that the keys uniquely identify the rows and that no key is NULL.

Note

Successful validation of this component ensures the target table exists and the target columns have been found. However, data is only written to the table when the job containing the table update is actually run. A successful validation avoids most potential problems, but run-time errors can still occur during execution; for example, on Amazon Redshift your cluster may run out of disk space.

Properties

Snowflake

Name = string

A human-readable name for the component.


Warehouse = drop-down

The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.


Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Target Table Name = drop-down

The name of the output table. The tables found in the currently selected environment are provided to choose from.

You can change the currently selected environment in the Environments section.


Target Alias = string

The alias for the target table. An alias allows a table to be referred to by a name that is typically shorter and simpler than its actual name.


Source Alias = string

The alias for the source table. An alias allows a table to be referred to by a name that is typically shorter and simpler than its actual name.


Join Expressions = expression editor

A list of expressions specifying how each join is performed. This is the same expression editor used by the Calculator component.

Each expression must be valid SQL and can use all the built-in Snowflake functions.

There is exactly one expression for each Join, and the result of the expression is evaluated as True or False, indicating whether the two records being compared 'match'.

Often this will be a simple 'equality' condition, but it could be more complex, e.g. where a date falls within a start/end date range.
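
For example, two join expressions of this kind might look as follows. This is an illustrative sketch only; the 'target' and 'source' aliases and the column names are assumptions corresponding to the Target Alias and Source Alias properties above.

    -- A simple equality match on a key column
    target.id = source.id

    -- A more complex match: the source date falls within the target's date range
    target.id = source.id AND source.event_date BETWEEN target.start_date AND target.end_date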


When Matched = column editor

  • Case: Select a case as previously defined in the 'Join Expressions' property. Add as many rows to the editor as you need, one per case.
  • Operation: The action to take when the corresponding case matches according to the join expression.
    • Delete: Completely remove the row from the output table if the case is matched.
    • Update: Update the matched row in the output table using the 'Update Mapping' property, described below.

Include Not Matched = drop-down

If TRUE, allow non-matched data to continue through to the output. The column(s) this data is written to are defined in the 'Insert Mapping' property, described below.


Update Mapping = column editor

  • Input Column: (Visible only when a matched case results in an update). The column name from the matched input flow. Add as many rows to the editor as you need, one per input column.
  • Output Column: The name of the output column that the corresponding matched input is written to. Note, this can be the same name as the input column if desired.

Insert Mapping = column editor

  • Input Column: (Visible only when an unmatched case is included). The column name from the unmatched input flow. Add as many rows to the editor as you need, one per input column.
  • Output Column: The name of the output column that the corresponding unmatched input is written to. Note, this can be the same name as the input column if desired.
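
Taken together, these properties describe a Snowflake MERGE statement. The following is a minimal sketch of the kind of statement that results, not the component's exact generated SQL; the table, alias, and column names are illustrative.

    MERGE INTO target_table AS target
    USING input_flow AS source
        ON target.id = source.id              -- Join Expressions
    WHEN MATCHED THEN UPDATE                  -- When Matched: Update
        SET target.amount = source.amount     -- Update Mapping
    WHEN NOT MATCHED THEN                     -- Include Not Matched = TRUE
        INSERT (id, amount)                   -- Insert Mapping
        VALUES (source.id, source.amount);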

Delta Lake on Databricks

Name = string

A human-readable name for the component.


Database = drop-down

Select the Delta Lake database. The special value, [Environment Default], will use the database specified in the Matillion ETL environment setup.


Target Table = drop-down

The name of the output table. The tables found in the currently selected environment are provided to choose from.

You can change the currently selected environment in the Environments section.


Target Alias = string

The alias for the target table. An alias allows a table to be referred to by a name that is typically shorter and simpler than its actual name.


Source Alias = string

The alias for the source table. An alias allows a table to be referred to by a name that is typically shorter and simpler than its actual name.


Join Expressions = expression editor

A list of expressions specifying how each join is performed. This is the same expression editor used by the Calculator component.

Each expression must be valid SQL.

There is exactly one expression for each Join, and the result of the expression is evaluated as True or False, indicating whether the two records being compared 'match'. Often this will be a simple 'equality' condition, but it could be more complex, e.g. where a date falls within a start/end date range.


When Matched = column editor

  • Case: Select a case as previously defined in the 'Join Expressions' property. Add as many rows to the editor as you need, one per case.
  • Operation: The action to take when the corresponding case matches according to the join expression.
    • Delete: Completely remove the row from the output table if the case is matched.
    • Update: Update the matched row in the output table using the 'Update Mapping' property, described below.

Update Mapping = column editor

  • Input Column: (Visible only when a matched case results in an update). The column name from the matched input flow. Add as many rows to the editor as you need, one per input column.
  • Output Column: The name of the output column that the corresponding matched input is written to. Note, this can be the same name as the input column if desired.

Include Not Matched = drop-down

When "Yes", allow non-matched data to continue through to the output. The column(s) this data is written to is(are) defined in a new property, 'Insert Mapping', described below.

The default setting is "No".


Insert Mapping = column editor

  • Input Column: (Visible only when an unmatched case is included). The column name from the unmatched input flow. Add as many rows to the editor as you need, one per input column.
  • Output Column: The name of the output column that the corresponding unmatched input is written to. Note, this can be the same name as the input column if desired.
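
As with the Snowflake version, these properties describe a MERGE statement, here in Databricks SQL. The sketch below is illustrative rather than the component's exact generated SQL, and additionally shows a matched case whose Operation is Delete; the names and the delete condition are assumptions.

    MERGE INTO target_table AS target
    USING input_flow AS source
        ON target.id = source.id               -- Join Expressions
    WHEN MATCHED AND source.is_deleted = true  -- When Matched: Delete
        THEN DELETE
    WHEN MATCHED THEN UPDATE                   -- When Matched: Update
        SET target.amount = source.amount      -- Update Mapping
    WHEN NOT MATCHED THEN                      -- Include Not Matched = "Yes"
        INSERT (id, amount)                    -- Insert Mapping
        VALUES (source.id, source.amount);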

Amazon Redshift

Name = string

A human-readable name for the component.


Schema = drop-down

Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.


Target Table Name = drop-down

The name of the output table. The tables found in the currently selected environment are provided to choose from.

You can change the currently selected environment in the Environments section.


Fix Data Type Mismatches = drop-down

  • Yes: If the source column type does not match the target table type, attempt to CAST the value to the required target type.
  • No: Do not cast types. Amazon Redshift may still attempt to coerce the types in this case.
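
For instance, with 'Yes' selected, a text source column feeding an INTEGER target column would be wrapped in a cast along these lines (the column name is illustrative):

    -- Source value cast to the target column's type
    CAST(source.order_count AS INTEGER)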

Column Mapping = column editor

  • Input Column: The source column from the input flow.
  • Output Column: The target table output column to update.

Unique Keys = drop-down

Selected column(s) from the input table used as unique keys.

The associated target columns are found using the Column Mapping.


Update Strategy = drop-down

  • Delete/Insert: Removes overlapping rows (matching on Unique Keys) and then inserts all incoming rows. This is effectively an update, and is very fast. However, if you don't have incoming values for all target columns, replaced rows will be NULL for those missing columns. Deleting rows in this way ideally requires a vacuum afterwards to recover space. This component does not arrange that vacuum for you, but a Vacuum Tables component is available in an Orchestration job. For more information on vacuuming tables see here.
  • Update/Insert: A traditional update statement, plus an insert of incoming rows that don't match the target table (matching on Unique Keys). This is sometimes referred to as an upsert. The update counts reported will differ between the two strategies, even for the same datasets, because the database reports the number of affected rows: the first strategy counts all the deletes plus all the inserts, which may overlap, while the second counts the number of updates plus the number of rows added that weren't already updated. Both strategies are sketched below.
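
The following is a hedged sketch of the two strategies in Redshift-flavoured SQL. The table, key, and column names are illustrative, not the exact statements the component emits.

    -- Delete/Insert: remove overlapping rows, then insert all incoming rows
    DELETE FROM target_table
    USING input_flow
    WHERE target_table.id = input_flow.id;    -- match on Unique Keys
    INSERT INTO target_table SELECT * FROM input_flow;
    -- Space held by the deleted rows is ideally reclaimed afterwards:
    -- VACUUM target_table;

    -- Update/Insert ("upsert"): update matching rows, insert the rest
    UPDATE target_table
    SET amount = input_flow.amount            -- via the Column Mapping
    FROM input_flow
    WHERE target_table.id = input_flow.id;
    INSERT INTO target_table
    SELECT * FROM input_flow s
    WHERE NOT EXISTS (SELECT 1 FROM target_table t WHERE t.id = s.id);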

Google BigQuery

Name = string

A human-readable name for the component.


Project = drop-down

Select the Google Cloud project. The special value, [Environment Default], will use the project defined in the environment. For more information, read Creating and managing projects.


Dataset = drop-down

Select the Google BigQuery dataset to load data into. The special value, [Environment Default], will use the dataset defined in the environment. For more information, read Introduction to datasets.


Target Table Name = drop-down

The name of the output table. The tables found in the currently selected environment are provided to choose from.

You can change the currently selected environment in the Environments section.


Fix Data Type Mismatches = drop-down

  • Yes: If the source column type does not match the target table type, attempt to CAST the value to the required target type.
  • No: Do not cast types. Google BigQuery may still attempt to coerce the types in this case.

Column Mapping = column editor

  • Input Column: The source column from the input flow.
  • Output Column: The target table output column to update.

Unique Keys = drop-down

Selected column(s) from the input table used as unique keys.

The associated target columns are found using the Column Mapping.


Update Strategy = drop-down

  • Merge (Rate Limited): Combines the Insert and Update operations into a single statement and performs them in isolation (separately from other operations that may be going on). Uses the BigQuery Data Manipulation Language (DML). The number of requests using BigQuery DML is severely limited, and exceeding these limits will cause jobs to fail; Merge is subject to the BigQuery DML limitations and quotas. This strategy is sketched below.
  • Delete/Insert (Rate Limited): Removes overlapping rows (matching on Unique Keys) and then inserts all incoming rows. This is effectively an update, and is very fast. However, if you don't have incoming values for all target columns, replaced rows will be NULL for those missing columns. Uses BigQuery DML, with the same rate limits as above.
  • Update/Insert (Rate Limited): A traditional update statement, plus an insert of incoming rows that don't match the target table (matching on Unique Keys). This is sometimes referred to as an upsert. Uses BigQuery DML, with the same rate limits as above. The update counts reported will differ between the Delete/Insert and Update/Insert strategies, even for the same datasets, because the database reports the number of affected rows: the first strategy counts all the deletes plus all the inserts, which may overlap, while the second counts the number of updates plus the number of rows added that were not already updated.
  • Rewrite and Append: Rewrites all rows not requiring updates and then appends all rows that are updated. For large tables, or when relatively few rows change, this can be resource-heavy.
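
The following is a hedged sketch of the Merge strategy in BigQuery DML; the dataset, table, and column names are illustrative, not the component's exact generated SQL.

    MERGE `my_dataset.target_table` AS target
    USING `my_dataset.input_flow` AS source
        ON target.id = source.id              -- match on Unique Keys
    WHEN MATCHED THEN
        UPDATE SET amount = source.amount     -- via the Column Mapping
    WHEN NOT MATCHED THEN
        INSERT (id, amount) VALUES (source.id, source.amount);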

Azure Synapse Analytics

Name = string

A human-readable name for the component.


Schema = drop-down

Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on schemas, read the Azure Synapse documentation.


Target Table = drop-down

The name of the output table. The tables found in the currently selected environment are provided to choose from.

You can change the currently selected environment in the Environments section.


Column Mapping = column editor

  • Input Column: The source column from the input flow.
  • Output Column: The target table output column to update.

Unique Keys = drop-down

Selected column(s) from the input table used as unique keys.

The associated target columns are found using the Column Mapping.


Update Strategy = drop-down

  • Update Only: Updates existing data in the table.
  • Update/Insert: Updates existing data in the table and inserts any incoming rows that do not match the target table; this strategy is sketched below.
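
The following is a hedged sketch of the Update/Insert strategy as two T-SQL statements; the table, key, and column names are illustrative.

    -- Update rows that match on the Unique Keys
    UPDATE t
    SET t.amount = s.amount                   -- via the Column Mapping
    FROM target_table t
    JOIN input_flow s ON t.id = s.id;

    -- Insert incoming rows with no match in the target
    INSERT INTO target_table (id, amount)
    SELECT s.id, s.amount
    FROM input_flow s
    WHERE NOT EXISTS (SELECT 1 FROM target_table t WHERE t.id = s.id);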

Fix Data Type Mismatches = drop-down

  • No: Matillion ETL will not CAST data types. This is the default setting.
  • Yes: Where a source column does not match the target table data type, Matillion ETL will attempt to CAST the value to the required type.

For more information, please read this article.


Exports

A full list of common component exports can be found here.

  • Rows Deleted: When the Update Strategy is "Delete/Insert", the number of rows deleted.
  • Rows Updated: When the Update Strategy is "Update/Insert", the number of rows updated.
  • Rows Inserted: For either update strategy, the number of rows inserted.

Strategy

Depends upon the chosen Update Strategy.

