Table Update

Update a target table with a set of input rows. The rows to update are identified by matching keys. It is very important that the keys uniquely identify the rows, and that they are never NULL.
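
For example, a minimal sketch of this kind of key-matched update, assuming a hypothetical target table target_table, an input flow staged as stage_table, and a key column id:

    UPDATE target_table
    SET value = stage_table.value
    FROM stage_table
    WHERE target_table.id = stage_table.id;

If id were NULL or not unique, rows could be missed or updated more than once, which is why the keys must uniquely identify each row.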

Successful validation of this component confirms that the target table exists and that the target columns have been found. However, data is only written to the table when the pipeline containing the table update actually runs. Successful validation avoids most potential problems, but run-time errors can still occur during execution.


Properties

Snowflake

Name = string

A human-readable name for the component.


Warehouse = drop-down

The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.


Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Target Table = drop-down

The table you want to update.


Target Alias = string

The alias for the target table. An alias allows the table to be referred to by a name that is typically shorter and simpler than its actual name.


Source Alias = string

The alias for the source table. An alias allows the table to be referred to by a name that is typically shorter and simpler than its actual name.


Join Expression = column editor

A list of expressions specifying how each join is performed. Each expression must be valid SQL and can use all the built-in Snowflake functions. There is exactly one expression for each JOIN, and the result of the expression is evaluated as true or false, which indicates whether the two records being compared 'match'. Often this will be a simple 'equality' condition, but it could be more complex, for example, where a date falls within a start/end date range.
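
For illustration, two sketches of join expressions, assuming hypothetical aliases target and source: a simple equality match on a key, and a date-range match.

    -- Simple equality: records match when the key values are equal
    target.id = source.id

    -- Date range: records match when the source date falls within
    -- the target's start/end range
    source.event_date BETWEEN target.start_date AND target.end_date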


When Matched = column editor

Select a case as previously defined in the Join Expression property. Add as many rows to the editor as you need, one per case. Choose an operation to perform when the corresponding case matches according to the join expression (see the sketch after this list).

  • Delete: Completely remove the matched row from the target table.
  • Update: Update the matched row in the target table, using the column mappings defined in the Update Mapping property.
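
As a sketch of how these cases translate to SQL, assuming hypothetical aliases target and source, a key column id, and a status column used to distinguish the two cases, the matched cases correspond roughly to MERGE clauses like these:

    MERGE INTO target_table target
    USING stage_table source
      ON target.id = source.id
    -- Update: the matched target row is updated
    WHEN MATCHED AND source.status = 'active' THEN
      UPDATE SET value = source.value
    -- Delete: the matched target row is removed
    WHEN MATCHED AND source.status = 'retired' THEN
      DELETE;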

Update Mapping = column editor

  • Source Column: The input column name from the matched input flow. Add as many rows to the editor as you need, one per input column. Property only available when a matched case results in an update.
  • Target Column: The name of the output column that the corresponding matched input is written to. This can be the same name as the input column if desired.


Include Not Matched = boolean

If Yes, allow non-matched data to continue through to the output. The columns this data is written to are defined in the Insert Mapping property, described below.


Insert Mapping = column editor

  • Source Column: The input column name from the unmatched input flow. Add as many rows to the editor as you need, one per input column. Property only available when an unmatched case is included.
  • Target Column: The name of the output column that the corresponding unmatched input is written to. This can be the same name as the input column if desired.
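
Putting the Snowflake properties together, a minimal sketch of the kind of MERGE statement they can produce, assuming hypothetical table and column names and Include Not Matched set to Yes:

    MERGE INTO target_table target
    USING stage_table source
      ON target.id = source.id                 -- Join Expression
    WHEN MATCHED THEN UPDATE SET               -- Update Mapping
      name  = source.name,
      value = source.value
    WHEN NOT MATCHED THEN INSERT               -- Insert Mapping
      (id, name, value)
      VALUES (source.id, source.name, source.value);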

Databricks

Name = string

A human-readable name for the component.


Catalog = drop-down

Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the environment setup. Selecting a catalog will determine which databases are available in the next parameter.


Schema (Database) = drop-down

The Databricks schema. The special value, [Environment Default], will use the schema defined in the environment. Read Create and manage schemas to learn more.


Target Table = drop-down

The table you want to update.


Target Alias = string

The alias for the target table. An alias allows the table to be referred to by a name that is typically shorter and simpler than its actual name.


Source Alias = string

The alias for the source table. An alias allows the table to be referred to by a name that is typically shorter and simpler than its actual name.


Join Expression = column editor

A list of expressions specifying how each join is performed. Each expression must be valid SQL and can use all the built-in Databricks SQL functions. There is exactly one expression for each JOIN, and the result of the expression is evaluated as true or false, which indicates whether the two records being compared 'match'. Often this will be a simple 'equality' condition, but it could be more complex, for example, where a date falls within a start/end date range.


When Matched = column editor

Select a case as previously defined in the Join Expression property. Add as many rows to the editor as you need, one per case. Choose an operation to perform when the corresponding case matches according to the join expression.

  • Delete: Completely remove the matched row from the target table.
  • Update: Update the matched row in the target table, using the column mappings defined in the Update Mapping property.

Update Mapping = column editor

  • Source Column: The input column name from the matched input flow. Add as many rows to the editor as you need, one per input column. Property only available when a matched case results in an update.
  • Target Column: The name of the output column that the corresponding matched input is written to. This can be the same name as the input column if desired.


Include Not Matched = boolean

If Yes, allow non-matched data to continue through to the output. The columns this data is written to are defined in the Insert Mapping property.


Insert Mapping = column editor

  • Source Column: The input column name from the unmatched input flow. Add as many rows to the editor as you need, one per input column. Property only available when an unmatched case is included.
  • Target Column: The name of the output column that the corresponding unmatched input is written to. This can be the same name as the input column if desired.
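
As with the Snowflake version above, a minimal sketch of the Databricks SQL these properties map onto, assuming hypothetical names and a Delta target table:

    MERGE INTO target_table AS target
    USING stage_table AS source
      ON target.id = source.id                 -- Join Expression
    WHEN MATCHED THEN
      UPDATE SET name = source.name            -- Update Mapping
    WHEN NOT MATCHED THEN
      INSERT (id, name)                        -- Insert Mapping
      VALUES (source.id, source.name);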

Amazon Redshift

Name = string

A human-readable name for the component.


Schema = drop-down

Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.


Target Table = drop-down

The table you want to update.


Update Mapping = column editor

  • Source Column: The input column name from the matched input flow. Add as many rows to the editor as you need, one per input column. Property only available when a matched case results in an update.
  • Target Column: The name of the output column that the corresponding matched input is written to. This can be the same name as the input column if desired.

Update mode = drop-down

Select how the table will update. Choose from:

  • Update/Insert: A traditional update statement, plus an insert of any incoming rows that don't match the target table (matching on Unique Keys). This is sometimes referred to as an upsert.
  • Delete/Insert: Removes overlapping rows (matching on Unique Keys) and then inserts all incoming rows. This is effectively an update, and is very fast. However, if you don't have incoming values for all target columns, replaced rows will be NULL for those missing columns. Deleting rows in this way ideally requires a vacuum afterwards to recover space. This component does not arrange that vacuum for you, but there is a vacuum tables component available in an Orchestration job. For more information on vacuuming tables, read Vacuuming tables.

The update counts reported will differ between the two strategies, even for the same datasets, because the database reports the number of affected rows. Delete/Insert counts all the deletes plus all the inserts, which may overlap; Update/Insert counts the number of updates plus the number of rows added that weren't already updated. Both strategies are sketched below.
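
For illustration, a minimal sketch of the two strategies in Redshift SQL, assuming a hypothetical target table target_table, a staging table stage_table, and a unique key id:

    -- Update/Insert: update matching rows, then insert the rest
    UPDATE target_table
    SET value = stage_table.value
    FROM stage_table
    WHERE target_table.id = stage_table.id;

    INSERT INTO target_table (id, value)
    SELECT id, value
    FROM stage_table
    WHERE id NOT IN (SELECT id FROM target_table);

    -- Delete/Insert: delete overlapping rows, then insert all incoming rows
    DELETE FROM target_table
    USING stage_table
    WHERE target_table.id = stage_table.id;

    INSERT INTO target_table (id, value)
    SELECT id, value
    FROM stage_table;

Note how the Delete/Insert sketch touches an overlapping row twice (once deleted, once re-inserted), producing the higher reported count described above.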

Unique Keys = drop-down

The column(s) from the input table to use as unique keys when matching rows against the target table.


Fix Data Type Mismatches = drop-down

  • Yes: If the source column type does not match the target table type, attempt to CAST the value to the required target type (see the sketch after this list).
  • No: Do not cast types. Amazon Redshift may still attempt to coerce the types in this case.
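
For example, a hedged sketch of the generated cast, assuming a hypothetical source column price arriving as a string and a NUMERIC(10,2) target column:

    -- With Yes selected, the component wraps the source value in a cast
    CAST(source.price AS NUMERIC(10,2))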
