Create Table

Create or replace a table. The component generates any DDL required to handle an existing table of the same name, followed by a CREATE TABLE statement.

Warning

This component is potentially destructive. Take care when running this component as it may remove existing data.


Properties

Snowflake

Name = string

A human-readable name for the component.


Create/Replace = drop-down

Select one of:

  • Create: (default). Creates a new table. This option generates an error if a table of the same name already exists, but existing data will not be destroyed.
  • Create if not exists: Creates a new table only when a table of the same name does not already exist. Existing data will not be destroyed, but if the schema of the existing table does not match the schema defined in this component, no attempt is made to fix or correct it. This could lead to errors later in the pipeline if:

    • Users did not expect a table to already exist.
    • Users did not expect the existing table's schema to differ from the one defined in this component.
  • Replace: Drops any existing table of the same name, then creates a new table. Selecting this option guarantees that after the component succeeds, the table matches the schema defined in this component. However, any data in the existing (replaced) table will be lost. Since other database objects might depend upon this table, drop ... cascade is used, which may remove many other database objects.
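
As an illustration, the three options correspond to the following kinds of generated DDL. This is a minimal sketch using a hypothetical table name; the component generates the exact statements for you.

    -- Create: fails if a table named "my_table" already exists
    CREATE TABLE "my_table" ("id" NUMBER, "name" VARCHAR);

    -- Create if not exists: a no-op if "my_table" already exists
    CREATE TABLE IF NOT EXISTS "my_table" ("id" NUMBER, "name" VARCHAR);

    -- Replace: drops any existing "my_table", destroying its data
    CREATE OR REPLACE TABLE "my_table" ("id" NUMBER, "name" VARCHAR);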


Database = drop-down

The Snowflake database that the newly created table will belong to. The special value, [Environment Default], will use the database defined in the environment. Read Database, Schema, and Share DDL to learn more.


Schema = drop-down

The Snowflake schema that the newly created table will belong to. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


New Table Name = string

The name of the table to create or replace. This field is case-sensitive by default, since Designer uses quoted identifiers. To change this behavior, please consult the Snowflake documentation.
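
Because identifiers are quoted, case matters when the table is referenced later. A minimal sketch, assuming a hypothetical table named MyTable:

    CREATE TABLE "MyTable" ("id" NUMBER);

    SELECT * FROM "MyTable";  -- matches the quoted name exactly
    SELECT * FROM MyTable;    -- fails: unquoted names resolve to uppercase MYTABLE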


Table Type = drop-down

Select the type of Snowflake table to create. A Permanent table holds data indefinitely, and can be restored using Snowflake's Time Travel.


Columns = column editor

Enter the following details for each table column.

Column Name: The name of the new column.

Data Type: Select one of:

  • Varchar: This type is suitable for numbers and letters. A varchar, or Variable Character Field, is a set of character data of indeterminate length.
  • Number: This type is suitable for numeric types, with or without decimals.
  • Float: Values of this type are approximate numeric values with fractional components.
  • Boolean: This type is suitable for data that is either "true" or "false".
  • Date: This type is suitable for dates without times.
  • Time: This type is suitable for times, independent of a specific date and timezone.
  • Timestamp: This type is a timestamp left unformatted (stored as Unix/Epoch Time).
  • Variant: A tagged universal type that can hold up to 16 MB of any data type supported by Snowflake.

Size: For Text types, this is the maximum length. This is a limit on the number of bytes, not characters. For Numeric types, this is the total number of digits allowed, whether before or after the decimal point.

Precision: The precision of the data in the column. Will be 0 (zero) for non-applicable types.

Default Value: The default value for the column, used by any row that does not supply one. You can enter AUTOINCREMENT or IDENTITY as the default value for any numeric-type column, and the column will automatically be given an incremented value on each new row, starting from 1. This feature is useful when creating a unique key on the table.

Not Null: True if this column does not accept null values.

Unique: Mark this column as unique, declaring that all values in it are unique with respect to one another. You may have multiple unique columns per table.

Comment: A location to store descriptive VARCHAR comments against columns. Used to contextualize the content being stored in tables in your database.
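
Taken together, these settings become the column definitions in the generated CREATE TABLE statement. A minimal sketch with hypothetical columns, assuming Size and Precision map to NUMBER(precision, scale), and noting that entering AUTOINCREMENT as the Default Value produces Snowflake's AUTOINCREMENT clause rather than a literal default:

    CREATE TABLE "orders" (
        "order_id" NUMBER(10,0) AUTOINCREMENT NOT NULL UNIQUE,       -- increments from 1 on each new row
        "status"   VARCHAR(20) COMMENT 'order state' DEFAULT 'NEW',  -- Size 20
        "amount"   NUMBER(12,2)                                      -- Size 12, Precision 2
    );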


Default DDL Collation = string

Set the default DDL collation. Setting this parameter forces all subsequently created columns in the affected table to have the specified collation as the default, unless the collation for the column is explicitly defined in the DDL. For more information, refer to the Snowflake documentation.
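
For example, a case-insensitive English default collation might be set like this (a sketch with a hypothetical table):

    CREATE TABLE "t" ("c" VARCHAR)
    DEFAULT_DDL_COLLATION = 'en-ci';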


Primary Keys = dual listbox

Select the column, or combination of columns, to form the table's primary key. Primary keys are, by default, unique. A table can have only one primary key.


Clustering Keys = dual listbox

Specify clustering key(s) to use. Defining clustering keys can allow for more efficient sorting and scanning of large tables. For more information, refer to the Snowflake documentation.
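
For example, a large event table might be clustered on date and account (a sketch with hypothetical columns):

    CREATE TABLE "events" (
        "event_date" DATE,
        "account_id" NUMBER
    )
    CLUSTER BY ("event_date", "account_id");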


Data Retention Time in Days = integer

Set a number of days for which data is retained after deletion. For more information, refer to the Snowflake documentation.


Comment = string

Attach a comment to the table. For more information, refer to the Snowflake documentation.
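
Putting the remaining table-level properties together, the generated DDL would resemble the following sketch (hypothetical names; the primary key comes from the Primary Keys property, retention and comment from the two properties above):

    CREATE TABLE "customers" (
        "customer_id" NUMBER NOT NULL,
        "email"       VARCHAR(320),
        PRIMARY KEY ("customer_id")
    )
    DATA_RETENTION_TIME_IN_DAYS = 7
    COMMENT = 'dimension table for customer records';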

Databricks

Name = string

A human-readable name for the component.


Create Method = drop-down

Select one of:

  • Create: (default). Creates a new table. This option generates an error if a table of the same name already exists, but existing data will not be destroyed.
  • Create If Not Exists: Creates a new table only when a table of the same name does not already exist. Existing data will not be destroyed, but if the schema of the existing table does not match the schema defined in this component, no attempt is made to fix or correct it. This could lead to errors later in the pipeline if:

    • Users did not expect a table to already exist.
    • Users did not expect the existing table's schema to differ from the one defined in this component.
  • Replace: Drops any existing table of the same name, then creates a new table. Selecting this option guarantees that after the component succeeds, the table matches the schema defined in this component. However, any data in the existing (replaced) table will be lost. Since other database objects might depend upon this table, drop ... cascade is used, which may remove many other database objects.


Catalog = drop-down

Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Data Productivity Cloud environment setup. Selecting a catalog will determine which databases are available in the next parameter.


Schema (Database) = drop-down

The Databricks schema. The special value, [Environment Default], will use the schema defined in the environment. Read Create and manage schemas to learn more.


Table = string

The name of the table to create or replace. This field is case-sensitive by default, since Designer uses quoted identifiers. To change this behavior, please consult the Databricks documentation.
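
Catalog, schema, and table form a three-level namespace in Databricks, where quoted identifiers use backticks. A minimal sketch with hypothetical names:

    CREATE TABLE `main`.`analytics`.`my_table` (
        `id` INT
    );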


Table Metadata = column editor

  • Name: The name of the new column.
  • Type: The data type of each column. For more information, read Data types.
    • INTEGER: This type is suitable for whole numbers (no decimals). For more information, read Databricks INT type.
    • NUMBER: This type is suitable for numeric types, with or without decimals. For more information, read Databricks Numeric Data types.
    • FLOAT: Values of this type are approximate numeric values with fractional components. For more information, read Databricks Float type.
    • TEXT: Represents character string values. For more information, read Databricks String type.
    • TIMESTAMP: This type is a timestamp left unformatted (stored as Unix/Epoch Time). For more information, read Databricks timestamp type.
    • DATE: This type is suitable for dates without times. For more information, read Databricks Date type.
    • BOOLEAN: This type is suitable for data that is either "true" or "false" (1 or 0, respectively). For more information, read Databricks Boolean type.
    • BINARY: This data type represents byte sequence values. For more information, read Databricks Binary type.
  • Size: For text types, this is the maximum length. This is a limit on the number of bytes, not characters. For numeric types, this is the total number of digits allowed, whether before or after the decimal point.
  • Scale: Set the data type scale.
  • Comment: A location to store descriptive text comments against columns. Used to contextualize the content being stored in tables in your database.

Table Type = drop-down

Select whether the new table will be of type Managed or External. The default setting is Managed. Read Managed table for more information.


Using = drop-down

Select the file type. Available types:

  • CSV
  • Parquet
  • JSON
  • ORC
  • Delta

This property is only available when Table Type is set to External.


Location = string

(AWS only) Specify the S3 bucket location. This property is only available when Table Type is set to External.


Storage Account = drop-down

(Azure only) Select a storage account. An Azure storage account contains all of your Azure storage data objects: blobs, files, queues, tables, and disks. For more information, read Storage account overview. This property is only available when Table Type is set to External.


Blob Container = string

(Azure only) A Blob Storage location. The available blob containers will depend on the selected storage account. This property is only available when Table Type is set to External.


Partition Columns = dual listbox

Specify one or more columns by which the table will be partitioned. See the example DDL after the Comment property below.


Table Properties = column editor

Specify any table metadata using key=value pairs:

  • Key: Specify the key.
  • Value: Specify the key's value.

Comment = string

Attach a comment to the table.
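
As an illustration, an external table combining the Using, Location, Partition Columns, Table Properties, and Comment settings might generate DDL along these lines (a sketch with hypothetical names and an AWS location):

    CREATE TABLE `main`.`analytics`.`events` (
        `event_date` DATE,
        `payload`    STRING
    )
    USING PARQUET
    PARTITIONED BY (`event_date`)
    LOCATION 's3://my-bucket/events/'
    TBLPROPERTIES ('owner' = 'data-eng')
    COMMENT 'raw event data';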

Amazon Redshift

Name = string

A human-readable name for the component.


Create/Replace = drop-down

Select one of:

  • Create: (default). Creates a new table. This option generates an error if a table of the same name already exists, but existing data will not be destroyed.
  • Create if not exists: Creates a new table only when a table of the same name does not already exist. Existing data will not be destroyed, but if the schema of the existing table does not match the schema defined in this component, no attempt is made to fix or correct it. This could lead to errors later in the pipeline if:

    • Users did not expect a table to already exist.
    • Users did not expect the existing table's schema to differ from the one defined in this component.
  • Replace: Drops any existing table of the same name, then creates a new table. Selecting this option guarantees that after the component succeeds, the table matches the schema defined in this component. However, any data in the existing (replaced) table will be lost. Since other database objects might depend upon this table, drop ... cascade is used, which may remove many other database objects.


Schema = drop-down

Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.


New Table Name = string

The name of the table to create or replace. This field is case-sensitive by default, since Designer uses quoted identifiers. To change this behavior, please consult the Amazon Redshift documentation.


Table Metadata = column editor

  • Column Name: The name of the new column.
  • Data Type: The data type of each column. For more information, read Data types.
    • Text: Can hold any type of data, subject to a maximum size.
    • Integer: Suitable for whole-number types (no decimals).
    • Numeric: Suitable for numeric types, with or without decimals.
    • Real: Suitable for data of a single precision floating-point number.
    • Double Precision: Suitable for data of a double precision floating-point number.
    • SUPER: Use the SUPER data type to store semi-structured data or documents as values.
    • Boolean: Suitable for data that is either true or false.
    • Date: Suitable for dates without times.
    • DateTime: Suitable for dates, times, or timestamps (both date and time).
  • Size: For text types, this is the maximum length. This is a limit on the number of bytes, not characters. With Redshift, since all data is stored using UTF-8, any non-ASCII character will count as 2 or more bytes. For numeric types, this is the total number of digits allowed, whether before or after the decimal point.
  • Decimal Places: Relevant only for numeric data, it is the maximum number of digits that may appear to the right of the decimal point.
  • Encoding Type: The Redshift compression encoding. Read Compression encodings for details of the various available encodings. Although users can manually specify the encoding type here, it is advised to use automatic compression analysis to select the optimal compression. When loading into empty tables, this is performed by the S3 Load (unless you disable COMPUPDATE). It can also be performed by the Table Output, provided you are truncating the table.
  • Allow Nullable: When "True", specifies that the column accepts null values. When "False", specifies that the column is not allowed to contain null values. Default is "True".
  • Comment: A location to store descriptive text comments against columns. Used to contextualize the content being stored in tables in your database.

Distribution Style = drop-down

  • All: Copy rows to all nodes in the Redshift cluster.
  • Auto: (Default) Allow Redshift to manage your distribution style.
  • Even: Distribute rows around the Redshift cluster evenly.
  • Key: Distribute rows around the Redshift cluster according to the value of a key column.

Note

Table distribution is critical to good performance. Read the Distribution styles documentation for more information.


Distribution Key = drop-down

If Distribution Style is Key, select the column to use as the distribution key.
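
For example, distributing on a join column might generate DDL like this sketch (hypothetical names):

    CREATE TABLE sales (
        sale_id     INTEGER,
        customer_id INTEGER
    )
    DISTSTYLE KEY
    DISTKEY (customer_id);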


Sort Key = dual listbox

This is optional, and lets users specify one or more columns from the input that should be set as the table's sort key.

Note

Sort keys are critical to good performance. Read Working with sort keys for more information.


Sort Key Options = drop-down

Decide whether the sort key is of a compound or interleaved variety.
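
A minimal sketch of both varieties, using hypothetical columns:

    CREATE TABLE page_views (
        view_date DATE,
        url       VARCHAR(2048)
    )
    COMPOUND SORTKEY (view_date, url);

    -- or, for queries that filter on either column independently:
    -- INTERLEAVED SORTKEY (view_date, url)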


Primary Key = dual listbox

Select one or more columns to be designated as the table's primary key.


Identity Columns = column editor

  • Column Name: The name of the column. It must match the name of a column defined in the Table Metadata property, which will be set as an identity column. An identity column contains automatically generated values. The specified column must be of data type "Integer" or "Numeric" with zero (0) decimal places.
  • Seed: Specifies the starting value.
  • Step: Specifies the increment between values.
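
For example, a seed of 1 and a step of 1 on an integer column would generate DDL like this sketch (hypothetical names):

    CREATE TABLE invoices (
        invoice_id INTEGER IDENTITY(1, 1),  -- seed 1, step 1
        total      NUMERIC(12, 2)
    );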

Backup Table = drop-down

Specify whether the created table is included in automated and manual cluster snapshots. Setting "No" has no effect on the automatic replication of data to other nodes within the cluster, so tables set to "No" are still restored in the event of a node failure.

The default setting is "Yes".
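
For example, excluding a staging table from snapshots, with explicit column encodings from the Table Metadata property, might look like this sketch (hypothetical names):

    CREATE TABLE staging_events (
        event_id INTEGER ENCODE az64,
        raw_line VARCHAR(65535) ENCODE lzo
    )
    BACKUP NO;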

