Destination

Select your cloud data warehouse.

Type = drop-down

  • Standard: The data will be staged in your storage location before being loaded into a table. This is the only setting currently available.

Warehouse = drop-down

The Snowflake warehouse used to run the queries. The special value [Environment Default] uses the warehouse defined in the environment. Read Overview of Warehouses to learn more.


Database = drop-down

The Snowflake database. The special value [Environment Default] uses the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value [Environment Default] uses the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Target Table = string

The name of the table to be created in your Snowflake database. This table will be recreated on each run, and any existing table of the same name will be dropped.

You can use a Table Input component in a transformation pipeline to access and transform this data after it has been loaded.


Primary Keys = dual listbox

Select one or more columns to be designated as the table's primary key.


Stage = drop-down

Select a managed stage. The special value [Custom] creates a stage "on the fly" for use solely within this component.


Stage Platform = drop-down

Use the drop-down menu to choose where the data is staged before being loaded into your Snowflake table.

  • Existing Amazon S3 Location: Activates the S3 Staging Area property, allowing users to specify a custom staging area on Amazon S3. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.
  • Existing Azure Blob Storage Location: Activates the Storage Account and Blob Container properties, allowing users to specify a custom staging location on Azure. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.
  • Existing Google Cloud Storage Location: Activates the Storage Integration and GCS Staging Area properties, allowing users to specify a custom staging area within Google Cloud Storage.
  • Snowflake Managed: Create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete. This is the default setting.
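For background, the Snowflake Managed option corresponds to Snowflake's standard internal-stage loading pattern: create a temporary stage, PUT files into it, then COPY INTO the target table. The sketch below illustrates that pattern with the Snowflake Python connector; the connection details, file path, and table name are illustrative assumptions, not the component's literal implementation.

```python
import snowflake.connector  # assumes the snowflake-connector-python package

# Illustrative connection details; replace with your own account values.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="my_warehouse",
    database="my_database",
    schema="my_schema",
)
cur = conn.cursor()

# A temporary internal stage exists only for this session, mirroring how the
# Snowflake Managed option discards the stage and staged data after loading.
cur.execute("CREATE TEMPORARY STAGE tmp_load_stage")
cur.execute("PUT file:///tmp/data.csv @tmp_load_stage")   # upload a local file to the stage
cur.execute("COPY INTO my_table FROM @tmp_load_stage")    # load the staged data into the table
```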

Stage Authentication = drop-down

Select an authentication method for data staging. Only available when Stage Platform is set to either Existing Amazon S3 Location or Existing Azure Blob Storage Location.

  • Credentials: Uses the credentials configured in the environment. If no credentials have been configured, an error will occur.
  • Storage Integration: Use a Snowflake storage integration to authenticate data staging. A storage integration is a Snowflake object that stores a generated identity and access management (IAM) entity for your external cloud storage, along with an optional set of allowed or blocked storage locations. To learn more, read Create Storage Integration.

Storage Integration = drop-down

Select a Snowflake storage integration from the drop-down list. Storage integrations are required to permit Snowflake to read data from and write to your cloud storage location (Amazon S3, Azure Blob Storage, Google Cloud Storage) and must be set up in advance of selection.

To use this property with an Amazon S3 or Azure Blob Storage location, set Stage Authentication to Storage Integration. For Google Cloud Storage, Storage Integration is the only stage authentication method.
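As a rough sketch of the setup referenced above (not the component's own configuration), a storage integration for an S3 location is created in Snowflake with DDL along these lines; the integration name, role ARN, and bucket path are placeholder assumptions.

```python
import snowflake.connector  # assumes the snowflake-connector-python package

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password"
)
cur = conn.cursor()

# A storage integration stores the IAM entity Snowflake uses to reach your
# external storage, plus the locations it is allowed to access. Creating one
# typically requires the ACCOUNTADMIN role or the CREATE INTEGRATION privilege.
cur.execute("""
    CREATE STORAGE INTEGRATION my_s3_integration
      TYPE = EXTERNAL_STAGE
      STORAGE_PROVIDER = 'S3'
      ENABLED = TRUE
      STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my-snowflake-role'
      STORAGE_ALLOWED_LOCATIONS = ('s3://my-staging-bucket/staging/')
""")
```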


S3 Staging Area = drop-down

Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Secrets and secret definitions for details on setting up access. The temporary objects created in this bucket are removed after the load completes; they are not kept.


Use Accelerated Endpoint = boolean

When True, data will be loaded via the s3-accelerate endpoint. Consider the following information:

  • Enabling acceleration can enhance the speed at which data is transferred to the chosen S3 bucket. However, enhanced speed is not always guaranteed. Read Amazon S3 Transfer Acceleration Speed Comparison to compare S3 Direct versus S3 Accelerated Transfer speeds.
  • Users must manually set the acceleration configuration of an existing bucket, as sketched after this list. To learn more, read PutBucketAccelerateConfiguration in the AWS API Reference.
  • This property is only available if the selected S3 bucket has Amazon S3 Transfer Acceleration enabled. For more information, including how to enable this feature, read Getting started with Amazon S3 Transfer Acceleration.
  • In some cases, Data Productivity Cloud can't determine whether the chosen S3 bucket has Amazon S3 Transfer Acceleration enabled. Designer will then reveal this property for user input on a "just in case" basis and may return a validation message reading "OK - Bucket could not be validated." The same applies if you don't have permission to read the acceleration configuration status (namely, the GetAccelerateConfiguration permission).
  • The default setting is False.
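As an illustrative sketch (the bucket name is an assumption), Transfer Acceleration can be enabled and checked on an existing bucket with boto3, which is what the PutBucketAccelerateConfiguration call and GetAccelerateConfiguration permission above refer to:

```python
import boto3  # assumes AWS credentials are already configured in your environment

s3 = boto3.client("s3")
bucket = "my-staging-bucket"  # illustrative bucket name

# Enable Transfer Acceleration on the bucket (PutBucketAccelerateConfiguration).
s3.put_bucket_accelerate_configuration(
    Bucket=bucket,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Check the current status; the response has no "Status" key at all if
# acceleration has never been configured on the bucket.
response = s3.get_bucket_accelerate_configuration(Bucket=bucket)
print(response.get("Status", "Not configured"))
```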

Storage Account = drop-down

Select a storage account with your desired blob container to be used for staging the data. For more information, read Storage account overview.


Blob Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.


GCS Staging Area = drop-down

The URL and path of the target Google Cloud Storage bucket to be used for staging the queried data. For more information, read Creating storage buckets.

Catalog = drop-down

Select a Databricks Unity Catalog. The special value [Environment Default] uses the catalog defined in the environment. Selecting a catalog will determine which databases are available in the next parameter.


Schema (Database) = drop-down

The Databricks schema. The special value [Environment Default] uses the schema defined in the environment. Read Create and manage schemas to learn more.


Table = string

The name of the table to be created in Databricks. This table will be recreated on each run, and any existing table of the same name will be dropped.

You can use a Table Input component in a transformation pipeline to access and transform this data after it has been loaded.


Stage Platform = drop-down

Use the drop-down menu to choose where the data is staged before being loaded into your Databricks table.

  • AWS S3: Lets users specify a custom staging area on Amazon S3.
  • Azure Blob: Lets users specify a custom staging area on Azure Blob storage.
  • Personal Staging (deprecated): Uses a Databricks personal staging location. This option has been deprecated by Databricks.
  • Volume: Use a pre-existing Databricks volume to stage your data. Only external volumes are available.
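For reference, an external volume must already exist in Unity Catalog before it can be selected here. Below is a minimal sketch of creating one with the Databricks SQL connector; the hostname, HTTP path, token, and object names are placeholder assumptions, and the LOCATION must fall within an external location already defined in Unity Catalog.

```python
from databricks import sql  # assumes the databricks-sql-connector package

# Illustrative connection details for a Databricks SQL warehouse.
with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",
    http_path="/sql/1.0/warehouses/abc123",
    access_token="dapiXXXXXXXX",
) as conn, conn.cursor() as cur:
    # External volumes point Unity Catalog at a pre-existing cloud storage path.
    cur.execute("""
        CREATE EXTERNAL VOLUME IF NOT EXISTS my_catalog.my_schema.staging_volume
        LOCATION 'abfss://container@storageaccount.dfs.core.windows.net/staging'
    """)
```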

S3 Staging Area = drop-down

Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Secrets and secret definitions for details on setting up access. The temporary objects created in this bucket are removed after the load completes; they are not kept.


Storage Account = drop-down

Select a storage account with your desired Blob container to be used for staging the data. For more information, read Storage account overview.


Blob Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.


Volume = drop-down

Select a Databricks volume.

Type = drop-down

  • Standard: The data will be staged in your storage location before being loaded into a table. This is the only setting currently available.

Use Accelerated Endpoint = boolean

When True, data will be loaded via the s3-accelerate endpoint.

  • Enabling acceleration can enhance the speed at which data is transferred to the chosen S3 bucket. However, enhanced speed is not always guaranteed. Read Amazon S3 Transfer Acceleration Speed Comparison to compare S3 Direct versus S3 Accelerated Transfer speeds.
  • Users must manually set the acceleration configuration of an existing bucket. To learn more, read PutBucketAccelerateConfiguration in the AWS API Reference.
  • This property is only available if the selected S3 bucket has Amazon S3 Transfer Acceleration enabled. For more information, including how to enable this feature, read Getting started with Amazon S3 Transfer Acceleration.
  • In some cases, Data Productivity Cloud can't determine whether the chosen S3 bucket has Amazon S3 Transfer Acceleration enabled. Designer will then reveal this property for user input on a "just in case" basis and may return a validation message reading "OK - Bucket could not be validated." The same applies if you don't have permission to read the acceleration configuration status (namely, the GetAccelerateConfiguration permission).
  • The default setting is False.

Schema = drop-down

Select the table schema. The special value [Environment Default] uses the schema defined in the environment. For more information on using multiple schemas, read Schemas.


Target Table = string

The name of the table to be created in your Amazon Redshift schema. This table will be recreated on each run, and any existing table of the same name will be dropped.

You can use a Table Input component in a transformation pipeline to access and transform this data after it has been loaded.


S3 Staging Area = S3 bucket

Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Secrets and secret definitions for details on setting up access. The temporary objects created in this bucket are removed after the load completes; they are not kept.


Distribution Style = drop-down

Table distribution is critical to good performance. Read the Distribution styles documentation for more information.

  • All: Copy rows to all nodes in the Redshift cluster.
  • Auto: (Default) Allow Redshift to manage your distribution style.
  • Even: Distribute rows around the Redshift cluster evenly.
  • Key: Distribute rows around the Redshift cluster according to the value of a key column.

Sort Key = dual listbox

Sort keys are critical to good performance. Read Working with sort keys for more information.

This property is optional and lets you specify one or more columns from the input to be set as the table's sort key.


Sort Key Options = drop-down

Choose whether the sort key is compound or interleaved.
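To show what these three properties map to in Redshift DDL, here is a sketch of a key-distributed table with a compound sort key; the table, columns, and connection details are assumptions, not the exact DDL the component generates.

```python
import redshift_connector  # assumes the redshift_connector package

# DISTSTYLE KEY / DISTKEY corresponds to Distribution Style = Key, and
# COMPOUND SORTKEY corresponds to Sort Key plus Sort Key Options = Compound.
ddl = """
CREATE TABLE sales (
    sale_id   INTEGER,
    region    VARCHAR(32),
    sale_date DATE,
    amount    DECIMAL(10, 2)
)
DISTSTYLE KEY
DISTKEY (region)
COMPOUND SORTKEY (sale_date, region);
"""

conn = redshift_connector.connect(
    host="my-cluster.abc123.eu-west-1.redshift.amazonaws.com",  # illustrative endpoint
    database="dev",
    user="my_user",
    password="my_password",
)
cur = conn.cursor()
cur.execute(ddl)
conn.commit()
```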


Primary Key = dual listbox

Select one or more columns to be designated as the table's primary key.