Rewrite Table

Write the input data flow out to a new table. Runtime errors may occur, for example, if a data value overflows the maximum allowed size of a field.

The output table is overwritten each time the component is executed; therefore, don't use this component to output permanent data that you do not want to overwrite.


Properties

Snowflake

Name = string

A human-readable name for the component.


Warehouse = drop-down

The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.


Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Target Table = string

A name for the new table.


Order By = column editor

Select the columns to order by. Set the corresponding column to be ordered ascending (default) or descending.
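Conceptually, the Order By setting contributes an ORDER BY clause to the statement that populates the new table. A minimal sketch of that mapping, assuming (column, direction) pairs where a missing direction defaults to ascending; the function and column names are illustrative, not the component's actual implementation:

```python
def build_order_by(columns):
    """Render an ORDER BY clause from (column, direction) pairs.

    A missing direction defaults to ascending, matching the component.
    """
    parts = []
    for name, direction in columns:
        direction = (direction or "ASC").upper()
        if direction not in ("ASC", "DESC"):
            raise ValueError(f"invalid sort direction: {direction}")
        parts.append(f'"{name}" {direction}')
    return "ORDER BY " + ", ".join(parts)

# Example: order by invoice date descending, then customer id ascending.
clause = build_order_by([("invoice_date", "DESC"), ("customer_id", None)])
# clause == 'ORDER BY "invoice_date" DESC, "customer_id" ASC'
```

Quoting the column names, as above, preserves case-sensitive identifiers in Snowflake.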

Databricks

Name = string

A human-readable name for the component.


Catalog = drop-down

Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Data Productivity Cloud environment setup. Selecting a catalog will determine which databases are available in the next parameter.


Schema (Database) = drop-down

The Databricks schema. The special value, [Environment Default], will use the schema defined in the environment. Read Create and manage schemas to learn more.


Table = string

A name for the new table.


Partition Keys = dual listbox

Select any input columns to be used as partition keys.


Table Properties = column editor

  • Key: A metadata property within the table. These are expressed as key=value pairs.
  • Value: The value of the corresponding row's key.

Comment = string

A descriptive comment for the table.
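The partition keys, table properties, and comment all surface in the table DDL that Databricks executes. A hypothetical sketch of how these settings could map onto a CREATE OR REPLACE TABLE statement (the function, catalog, and table names are illustrative assumptions, not the component's real SQL):

```python
def build_table_ddl(catalog, schema, table, select_sql,
                    partition_keys=(), properties=None, comment=None):
    """Compose a Databricks-style CREATE OR REPLACE TABLE ... AS SELECT."""
    ddl = f"CREATE OR REPLACE TABLE {catalog}.{schema}.{table}"
    if partition_keys:
        ddl += " PARTITIONED BY (" + ", ".join(partition_keys) + ")"
    if properties:
        # Table properties are expressed as key=value pairs.
        pairs = ", ".join(f"'{k}' = '{v}'" for k, v in properties.items())
        ddl += f" TBLPROPERTIES ({pairs})"
    if comment:
        ddl += f" COMMENT '{comment}'"
    return ddl + f" AS {select_sql}"

ddl = build_table_ddl(
    "main", "analytics", "daily_sales", "SELECT * FROM staging_sales",
    partition_keys=["sale_date"],
    properties={"delta.appendOnly": "false"},
    comment="Rewritten nightly",
)
```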

Amazon Redshift (preview)

Name = string

A human-readable name for the component.


Schema = drop-down

Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.


Target Table = string

A name for the new table.


Table Sort Key = column editor

Specify the columns from the input that should be set as the table's sort key.

Sort keys are critical to good performance. Read the Amazon Redshift documentation for more information.


Table Distribution Style = drop-down

Select how the data in the table will be distributed. Choose from ALL, EVEN, or KEY. For more information, read Distribution styles.


Table Distribution Key = drop-down

Only displayed when Table Distribution Style is set to KEY. This is the column used to determine which cluster node each row is stored on.
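The distribution style, distribution key, and sort key all translate into clauses of the CREATE TABLE statement that Redshift runs. A sketch of that translation, assuming a CTAS-style statement (the function and table names are illustrative; the component's real SQL may differ):

```python
def build_redshift_ddl(schema, table, select_sql,
                       dist_style="EVEN", dist_key=None, sort_keys=()):
    """Compose a Redshift CREATE TABLE ... AS with distribution and sort keys."""
    ddl = f"CREATE TABLE {schema}.{table} DISTSTYLE {dist_style}"
    if dist_style == "KEY":
        # A distribution key is only meaningful (and required) for KEY style.
        if not dist_key:
            raise ValueError("DISTSTYLE KEY requires a distribution key")
        ddl += f" DISTKEY({dist_key})"
    if sort_keys:
        ddl += " SORTKEY(" + ", ".join(sort_keys) + ")"
    return ddl + f" AS {select_sql}"

ddl = build_redshift_ddl("public", "orders_rewrite",
                         "SELECT * FROM orders_in",
                         dist_style="KEY", dist_key="customer_id",
                         sort_keys=["order_date"])
```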


Strategy

Drop and recreate the target table, then perform a bulk insert from the input flow at runtime.
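The strategy above can be sketched as two statements issued in order: a drop of any existing target, then a CREATE TABLE ... AS SELECT that recreates the table and bulk-inserts the input flow in one step. A minimal sketch, where `execute` is a stand-in for the warehouse's SQL executor and all names are illustrative:

```python
def rewrite_table(execute, target, select_sql):
    """Drop-and-recreate strategy: the existing target table (if any) is
    removed, and the CTAS recreates it populated from the input flow."""
    execute(f"DROP TABLE IF EXISTS {target}")
    execute(f"CREATE TABLE {target} AS {select_sql}")

# Capture the generated statements instead of running them.
statements = []
rewrite_table(statements.append, "analytics.daily_totals",
              "SELECT region, SUM(amount) AS total FROM sales GROUP BY region")
# statements now holds the DROP and CREATE statements, in order.
```

Because the target is dropped first, any data already in it is lost on every run, which is why the component should not feed tables holding permanent data.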


This component is available for Snowflake, Databricks, and Amazon Redshift (preview).