Destination

`Destination` = _drop-down_

Select the destination for your data: either a table in Snowflake or files in cloud storage.

- **Snowflake:** Load your data into a table in Snowflake. The data must first be staged, either in Snowflake or in a cloud storage solution.
- **Cloud Storage:** Load your data directly into files in your preferred cloud storage location. The format of these files can differ between source systems, and the files will not have a file extension, so we suggest inspecting the output to determine the format of the data.

Click either the **Snowflake** or **Cloud Storage** tab on this page for documentation applicable to that destination type.

=== "Snowflake"

    <!-- param-start:[snowflake-output-connector-v0.warehouse] | warehouses: [snowflake] -->
    `Warehouse` = _drop-down_

    The Snowflake warehouse used to run the queries. The special value `[Environment Default]` uses the warehouse defined in the environment. Read [Overview of Warehouses](https://docs.snowflake.com/en/user-guide/warehouses-overview.html) to learn more.
    <!-- param-end:[snowflake-output-connector-v0.warehouse] -->

    ---

    <!-- param-start:[snowflake-output-connector-v0.database] | warehouses: [snowflake] -->
    `Database` = _drop-down_

    The Snowflake database to access. The special value `[Environment Default]` uses the database defined in the environment. Read [Databases, Tables and Views - Overview](https://docs.snowflake.com/en/guides-overview-db) to learn more.
    <!-- param-end:[snowflake-output-connector-v0.database] -->

    ---

    <!-- param-start:[snowflake-output-connector-v0.schema] | warehouses: [snowflake] -->
    `Schema` = _drop-down_

    The Snowflake schema. The special value `[Environment Default]` uses the schema defined in the environment. Read [Database, Schema, and Share DDL](https://docs.snowflake.com/en/sql-reference/ddl-database.html) to learn more.
    <!-- param-end:[snowflake-output-connector-v0.schema] -->

    ---

    <!-- param-start:[snowflake-output-connector-v0.tableName] | warehouses: [snowflake] -->
    `Table Name` = _string_

    The name of the table to be created in your Snowflake database. You can use a [Table Input](/data-productivity-cloud/designer/docs/table-input/) component in a transformation pipeline to access and transform this data after it has been loaded.
    <!-- param-end:[snowflake-output-connector-v0.tableName] -->

    ---

    <!-- param-start:[snowflake-output-connector-v0.createTableMode] | warehouses: [snowflake] -->
    `Load Strategy` = _drop-down_

    Define what happens if the table name already exists in the specified Snowflake database and schema.

    - **Replace:** If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
    - **Truncate and Insert:** If the specified table name already exists, all rows within the table will be removed and the new rows will be inserted each time this pipeline runs.
    - **Fail if Exists:** If the specified table name already exists, this pipeline will fail to run.
    - **Append:** If the specified table name already exists, the new data is appended to the end of the table without altering or deleting the existing rows. If the table doesn't exist, it will be created and your data inserted into it. For example, if your source holds 100 records, the first pipeline run creates the target table and inserts 100 rows; the second run appends the same 100 records, bringing the table to 200 rows; a third run brings it to 300 rows, and so on. A rough sketch of these strategies follows this parameter.
    <!-- param-end:[snowflake-output-connector-v0.createTableMode] -->
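
    As a rough illustration only, and not the component's actual implementation, these strategies loosely map to the kind of SQL sketched below with the Snowflake Python connector. The connection details, `my_table`, and `my_staged_data` are placeholders.

    ```python
    # Hypothetical sketch of the SQL each load strategy loosely corresponds to.
    # Connection details and object names are placeholders, not component internals.
    import snowflake.connector

    def load(strategy: str, cur) -> None:
        """Run a simplified version of the chosen load strategy against my_table."""
        if strategy == "Replace":
            cur.execute("CREATE OR REPLACE TABLE my_table (id INT, name STRING)")
        elif strategy == "Truncate and Insert":
            cur.execute("TRUNCATE TABLE IF EXISTS my_table")
        elif strategy == "Fail if Exists":
            cur.execute("CREATE TABLE my_table (id INT, name STRING)")  # errors if it exists
        # Append needs no DDL when the table exists; rows simply accumulate
        # across runs (100 after run 1, 200 after run 2, 300 after run 3, ...).
        cur.execute("INSERT INTO my_table SELECT * FROM my_staged_data")

    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="...",
        warehouse="MY_WH", database="MY_DB", schema="MY_SCHEMA",
    )
    load("Append", conn.cursor())
    ```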

    ---

    <!-- param-start:[snowflake-output-connector-v0.primaryKeys] | warehouses: [snowflake] -->
    `Primary Keys` = _dual listbox_ _(optional)_

    Select one or more columns to be designated as the table's primary key.
    <!-- param-end:[snowflake-output-connector-v0.primaryKeys] -->

    ---

    <!-- param-start:[snowflake-output-connector-v0.cleanStagedFiles] | warehouses: [snowflake] -->
    `Clean Staged Files` = _boolean_

    - **Yes:** Staged files will be destroyed after data is loaded. This is the default setting.
    - **No:** Staged files are retained in the staging area after data is loaded.
    <!-- param-end:[snowflake-output-connector-v0.cleanStagedFiles] -->

    ---

    <!-- param-start:[snowflake-output-connector-v0.stageAccessStrategyForGcs, snowflake-output-connector-v0.stageAccessStrategyForAzure, snowflake-output-connector-v0.stageAccessStrategyForS3] | warehouses: [snowflake] -->
    `Stage Access Strategy` = _drop-down_ _(optional)_

    Select the stage access strategy. The strategies available depend on the cloud platform you select in **Stage Platform**.

    - **Credentials:** Connects to the external stage (AWS, Azure) using your configured [cloud provider credentials](/data-productivity-cloud/designer/docs/cloud-credentials/). Not available for Google Cloud Storage.
    - **Storage Integration:** Use a Snowflake storage integration to grant Snowflake access to read data from, and write data to, a cloud storage location. Selecting this option reveals the **Storage Integration** property, through which you can select any of your existing Snowflake storage integrations. A sketch of creating a storage integration follows this parameter.
    <!-- param-end:[snowflake-output-connector-v0.stageAccessStrategyForGcs, snowflake-output-connector-v0.stageAccessStrategyForAzure, snowflake-output-connector-v0.stageAccessStrategyForS3] -->
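
    If you don't already have a storage integration, one is typically created by a Snowflake administrator using SQL along the following lines. This is a hedged sketch with the Snowflake Python connector; the integration name, IAM role ARN, and bucket path are placeholders that must match your own cloud setup.

    ```python
    # Hypothetical sketch: creating a Snowflake storage integration for an S3 location.
    # The integration name, role ARN, and bucket path are placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(account="my_account", user="my_admin_user", password="...")
    conn.cursor().execute("""
        CREATE STORAGE INTEGRATION my_s3_integration
          TYPE = EXTERNAL_STAGE
          STORAGE_PROVIDER = 'S3'
          ENABLED = TRUE
          STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my-snowflake-role'
          STORAGE_ALLOWED_LOCATIONS = ('s3://my-staging-bucket/')
    """)
    ```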

    ---

    <!-- param-start:[snowflake-output-connector-v0.stagePlatform] | warehouses: [snowflake] -->
    `Stage Platform` = _drop-down_

    Use the drop-down menu to choose where the data is staged before being loaded into your Snowflake table.

    - **Amazon S3:** Stage your data on an AWS S3 bucket.
    - **Snowflake:** Stage your data on a Snowflake internal stage.
    - **Azure Storage:** Stage your data in an Azure Blob Storage container.
    - **Google Cloud Storage:** Stage your data in a Google Cloud Storage bucket.

    Click one of the tabs below for documentation applicable to that staging platform.
    <!-- param-end:[snowflake-output-connector-v0.stagePlatform] -->

    === "Amazon S3"

        <!-- param-start:[snowflake-output-connector-v0.storageIntegrationForS3] | warehouses: [snowflake] -->
        `Storage Integration` = _drop-down_

        Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
        <!-- param-end:[snowflake-output-connector-v0.storageIntegrationForS3] -->

        ---

        <!-- param-start:[snowflake-output-connector-v0.amazonS3Bucket] | warehouses: [snowflake] -->
        `Amazon S3 Bucket` = _drop-down_

        An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the [cloud provider credentials](/data-productivity-cloud/designer/docs/cloud-credentials/) that you have associated with your [environment](/data-productivity-cloud/designer/docs/environments/).
        <!-- param-end:[snowflake-output-connector-v0.amazonS3Bucket] -->

    === "Snowflake"

        <!-- param-start:[snowflake-output-connector-v0.snowflake#internalStageType] | warehouses: [snowflake] -->
        `Internal Stage Type` = _drop-down_

        Select the Snowflake internal stage type. Use the Snowflake links provided to learn more about each type of stage.

        - **User:** Each Snowflake user has a [user stage](https://docs.snowflake.com/en/user-guide/data-load-local-file-system-create-stage#user-stages) allocated to them by default for file storage. You may find the user stage convenient if your files will only be accessed by a single user, but need to be copied into multiple tables.
        - **Named:** A [named stage](https://docs.snowflake.com/en/user-guide/data-load-local-file-system-create-stage#named-stages) provides high flexibility for data loading. Users with the appropriate privileges on the stage can load data into any table. Furthermore, because the stage is a database object, any security or access rules that apply to all objects will apply to the named stage.

        Named stages can be altered and dropped. User stages cannot.
        <!-- param-end:[snowflake-output-connector-v0.snowflake#internalStageType] -->
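
        As an illustrative sketch only (nothing the component requires you to run), the two stage types can be inspected with the Snowflake Python connector; `my_named_stage` and the connection details are placeholders.

        ```python
        # Hypothetical sketch: comparing user and named internal stages.
        # "@~" refers to the current user's stage; my_named_stage is a placeholder.
        import snowflake.connector

        conn = snowflake.connector.connect(
            account="my_account", user="my_user", password="...",
            database="MY_DB", schema="MY_SCHEMA",
        )
        cur = conn.cursor()
        cur.execute("CREATE STAGE IF NOT EXISTS my_named_stage")  # a named stage is a database object
        cur.execute("LIST @~")               # files currently in the user stage
        print(cur.fetchall())
        cur.execute("LIST @my_named_stage")  # files currently in the named stage
        print(cur.fetchall())
        ```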

        ---

        <!-- param-start:[snowflake-output-connector-v0.snowflake#internalNamedStage] | warehouses: [snowflake] -->
        `Named Stage` = _drop-down_

        Select your named stage. Read [Creating a named stage](https://docs.snowflake.com/en/user-guide/data-load-local-file-system-create-stage#creating-a-named-stage) to learn how to create a new named stage.

        !!! warning

            There is a known issue where named stages that include special characters or spaces are not supported.
        <!-- param-end:[snowflake-output-connector-v0.snowflake#internalNamedStage] -->

    === "Azure Storage"

        <!-- param-start:[snowflake-output-connector-v0.storageIntegrationForAzure] | warehouses: [snowflake] -->
        `Storage Integration` = _drop-down_

        Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
        <!-- param-end:[snowflake-output-connector-v0.storageIntegrationForAzure] -->

        ---

        <!-- param-start:[snowflake-output-connector-v0.azureBlobAccount] | warehouses: [snowflake] -->
        `Storage Account` = _drop-down_

        Select a storage account linked to your desired Blob container to be used for staging the data. For more information, read [Storage account overview](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview).
        <!-- param-end:[snowflake-output-connector-v0.azureBlobAccount] -->

        ---

        <!-- param-start:[snowflake-output-connector-v0.azureBlobContainer] | warehouses: [snowflake] -->
        `Container` = _drop-down_

        Select a Blob container to be used for staging the data. For more information, read [Introduction to Azure Blob storage](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction).
        <!-- param-end:[snowflake-output-connector-v0.azureBlobContainer] -->

    === "Google Cloud Storage"

        <!-- param-start:[snowflake-output-connector-v0.stageAccessStrategyForGcs] | warehouses: [snowflake] -->
        `Stage Access Strategy` = _drop-down_ _(optional)_

        Select the stage access strategy. The strategies available depend on the cloud platform you select in **Stage Platform**.

        - **Storage Integration:** Use a Snowflake storage integration to grant Snowflake access to read data from, and write data to, a cloud storage location. This will reveal the **Storage Integration** property, through which you can select any of your existing Snowflake storage integrations.
        <!-- param-end:[snowflake-output-connector-v0.stageAccessStrategyForGcs] -->

        ---

        <!-- param-start:[snowflake-output-connector-v0.storageIntegration] | warehouses: [snowflake] -->
        `Storage Integration` = _drop-down_

        Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
        <!-- param-end:[snowflake-output-connector-v0.storageIntegration] -->

        ---

        <!-- param-start:[snowflake-output-connector-v0.gcsBucket] | warehouses: [snowflake] -->
        `GCS Bucket` = _drop-down_

        The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the [cloud provider credentials](/data-productivity-cloud/designer/docs/cloud-credentials/) that you have associated with your [environment](/data-productivity-cloud/designer/docs/environments/).
        <!-- param-end:[snowflake-output-connector-v0.gcsBucket] -->

        ---

        <!-- param-start:[snowflake-output-connector-v0.gcs#overwriteAllowed] | warehouses: [snowflake] -->
        `Overwrite` = _boolean_

        Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
        <!-- param-end:[snowflake-output-connector-v0.gcs#overwriteAllowed] -->

=== "Cloud Storage"

    <!-- param-start:[storage-only-output-v0.prepareStageStrategy] | warehouses: [snowflake] -->
    `Load Strategy` = _drop-down_ _(optional)_

    - **Append Files in Folder:** Appends files to the storage folder. This is the default setting.
    - **Overwrite Files in Folder:** Overwrites existing files that have a matching structure.

    See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:

    |Configuration|Description|
    |---|---|
    |Append files in folder with defined folder path and file prefix.|Files will be stored under the structure `folder/prefix-timestamp-partX`, where X is the part number, starting from 1. For example, `folder/prefix-20240229100736969-part1`.|
    |Append files in folder without defined folder path and file prefix.|Files will be stored under the structure `uniqueID/timestamp-partX`, where X is the part number, starting from 1. For example, `1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1`.|
    |Overwrite files in folder with defined folder path and file prefix.|Files will be stored under the structure `folder/prefix-partX` where X is the part number, starting from 1. All files with matching structures will be overwritten.|
    |Overwrite files in folder without defined folder path and file prefix.|Validation will fail. Folder path and file prefix must be supplied for this load strategy.|
    <!-- param-end:[storage-only-output-v0.prepareStageStrategy] -->
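
    Because the written files have no extension, you may want to list the output parts and peek at their first bytes to confirm the format. Below is a minimal sketch with boto3, assuming an Amazon S3 destination; the bucket, folder, and prefix are placeholders.

    ```python
    # Hypothetical sketch: listing written parts and sampling their first bytes.
    # Bucket, folder, and prefix are placeholders for your own configuration.
    import boto3

    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket="my-output-bucket", Prefix="folder/prefix-")
    for obj in resp.get("Contents", []):
        sample = s3.get_object(
            Bucket="my-output-bucket", Key=obj["Key"], Range="bytes=0-255"
        )["Body"].read()
        # e.g. a leading '{' suggests JSON lines, while 'PAR1' suggests Parquet
        print(obj["Key"], sample[:64])
    ```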

    ---

    <!-- param-start:[storage-only-output-v0.folderPath] | warehouses: [snowflake] -->
    `Folder Path` = _string_ _(optional)_

    The folder path for the files to be written to. Note that this path follows, but does not include, the bucket or container name.
    <!-- param-end:[storage-only-output-v0.folderPath] -->

    ---

    <!-- param-start:[storage-only-output-v0.filePrefix] | warehouses: [snowflake] -->
    `File Prefix` = _string_ _(optional)_

    A string of characters that precedes the names of the written files. This can be useful for organizing your output files.
    <!-- param-end:[storage-only-output-v0.filePrefix] -->

    ---

    <!-- param-start:[storage-only-output-v0.storage] | warehouses: [snowflake] -->
    `Storage` = _drop-down_

    The cloud storage location where your data will be written as files. Choose either Amazon S3, Azure Storage, or Google Cloud Storage.

    Click the tab that corresponds to your chosen cloud storage service.
    <!-- param-end:[storage-only-output-v0.storage] -->

    === "Amazon S3"

        <!-- param-start:[storage-only-output-v0.s3#bucket] | warehouses: [snowflake] -->
        `Amazon S3 Bucket` = _drop-down_

        An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the [cloud provider credentials](/data-productivity-cloud/designer/docs/cloud-credentials/) that you have associated with your [environment](/data-productivity-cloud/designer/docs/environments/).
        <!-- param-end:[storage-only-output-v0.s3#bucket] -->

    === "Azure Storage"

        <!-- param-start:[storage-only-output-v0.azure#account] | warehouses: [snowflake] -->
        `Storage Account` = _drop-down_

        Select a storage account linked to your desired Blob container to be used for staging the data. For more information, read [Storage account overview](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview).
        <!-- param-end:[storage-only-output-v0.azure#account] -->

        ---

        <!-- param-start:[storage-only-output-v0.azure#container] | warehouses: [snowflake] -->
        `Container` = _drop-down_

        Select a Blob container to be used for staging the data. For more information, read [Introduction to Azure Blob storage](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction).
        <!-- param-end:[storage-only-output-v0.azure#container] -->

    === "Google Cloud Storage"

        <!-- param-start:[storage-only-output-v0.gcs#bucket] | warehouses: [snowflake] -->
        `GCS Bucket` = _drop-down_

        The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the [cloud provider credentials](/data-productivity-cloud/designer/docs/cloud-credentials/) that you have associated with your [environment](/data-productivity-cloud/designer/docs/environments/).
        <!-- param-end:[storage-only-output-v0.gcs#bucket] -->

        ---

        <!-- param-start:[storage-only-output-v0.gcs#overwriteAllowed] | warehouses: [snowflake] -->
        `Overwrite` = _boolean_

        Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
        <!-- param-end:[storage-only-output-v0.gcs#overwriteAllowed] -->

`Destination` = _drop-down_

Select the destination for your data: either a table in Databricks or files in cloud storage. If you choose cloud storage, the format of the files can differ between source systems and the files will not have a file extension, so we suggest inspecting the output to determine the format of the data.

- **Databricks:** Load your data into Databricks. You'll need to set a cloud storage location for temporary staging of the data.
- **Cloud Storage:** Load your data directly into files in your preferred cloud storage location.

Click either the **Databricks** or **Cloud Storage** tab on this page for documentation applicable to that destination type.

=== "Databricks"

    <!-- param-start:[databricks-output-connector-v0.catalog] | warehouses: [databricks] -->
    `Catalog` = _drop-down_

    Select a [Databricks Unity Catalog](https://docs.databricks.com/en/data-governance/unity-catalog/index.html). The special value `[Environment Default]` uses the catalog defined in the environment. Selecting a catalog determines which schemas are available in the next parameter.
    <!-- param-end:[databricks-output-connector-v0.catalog] -->
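
    If you want to check which catalogs and schemas exist before configuring this parameter, the following is a minimal sketch with the Databricks SQL connector; the workspace hostname, HTTP path, token, and catalog name are placeholders.

    ```python
    # Hypothetical sketch: browsing Unity Catalog objects outside the component.
    # Hostname, HTTP path, token, and catalog name are placeholders.
    from databricks import sql

    with sql.connect(
        server_hostname="dbc-example.cloud.databricks.com",
        http_path="/sql/1.0/warehouses/abc123",
        access_token="dapi-...",
    ) as conn:
        with conn.cursor() as cur:
            cur.execute("SHOW CATALOGS")
            print(cur.fetchall())
            cur.execute("SHOW SCHEMAS IN my_catalog")  # schemas offered once a catalog is chosen
            print(cur.fetchall())
    ```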

    ---

    <!-- param-start:[databricks-output-connector-v0.schema] | warehouses: [databricks] -->
    `Schema` = _drop-down_

    The Databricks schema. The special value `[Environment Default]` uses the schema defined in the environment. Read [Create and manage schemas](https://docs.databricks.com/en/data-governance/unity-catalog/create-schemas.html) to learn more.
    <!-- param-end:[databricks-output-connector-v0.schema] -->

    ---

    <!-- param-start:[databricks-output-connector-v0.tableName] | warehouses: [databricks] -->
    `Table Name` = _string_

    The name of the table to be created in your Databricks schema. You can use a [Table Input](/data-productivity-cloud/designer/docs/table-input/) component in a transformation pipeline to access and transform this data after it has been loaded.
    <!-- param-end:[databricks-output-connector-v0.tableName] -->

    ---

    <!-- param-start:[databricks-output-connector-v0.loadStrategy] | warehouses: [databricks] -->
    `Load Strategy` = _drop-down_

    Define what happens if the table name already exists in the specified Databricks schema.

    - **Fail if Exists:** If the specified table name already exists, this pipeline will fail to run.
    - **Replace:** If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
    - **Truncate and Insert:** If the specified table name already exists, all rows within the table will be removed and the new rows will be inserted each time this pipeline runs.
    - **Append:** If the specified table name already exists, the new data is appended to the end of the table without altering or deleting the existing rows. If the table doesn't exist, it will be created and your data inserted into it. For example, if your source holds 100 records, the first pipeline run creates the target table and inserts 100 rows; the second run appends the same 100 records, bringing the table to 200 rows; a third run brings it to 300 rows, and so on.
    <!-- param-end:[databricks-output-connector-v0.loadStrategy] -->

    ---

    <!-- param-start:[databricks-output-connector-v0.cleanStagedFiles] | warehouses: [databricks] -->
    `Clean Staged Files` = _boolean_

    - **Yes:** Staged files will be destroyed after data is loaded. This is the default setting.
    - **No:** Staged files are retained in the staging area after data is loaded.
    <!-- param-end:[databricks-output-connector-v0.cleanStagedFiles] -->

    ---

    <!-- param-start:[databricks-output-connector-v0.stagePlatform] | warehouses: [databricks] -->
    `Stage Platform` = _drop-down_

    Use the drop-down menu to choose where the data is staged before being loaded into your Databricks table.

    - **Amazon S3:** Stage your data on an AWS S3 bucket.
    - **Azure Storage:** Stage your data in an Azure Blob Storage container.

    Click one of the tabs below for documentation applicable to that staging platform.
    <!-- param-end:[databricks-output-connector-v0.stagePlatform] -->

    === "Amazon S3"

        <!-- param-start:[databricks-output-connector-v0.s3#bucket] | warehouses: [databricks] -->
        `Amazon S3 Bucket` = _drop-down_

        An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the [cloud provider credentials](/data-productivity-cloud/designer/docs/cloud-credentials/) that you have associated with your [environment](/data-productivity-cloud/designer/docs/environments/).
        <!-- param-end:[databricks-output-connector-v0.s3#bucket] -->

    === "Azure Storage"

        <!-- param-start:[databricks-output-connector-v0.azure#account] | warehouses: [databricks] -->
        `Storage Account` = _drop-down_

        Select a storage account linked to your desired Blob container to be used for staging the data. For more information, read [Storage account overview](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview).
        <!-- param-end:[databricks-output-connector-v0.azure#account] -->

        ---

        <!-- param-start:[databricks-output-connector-v0.azure#container] | warehouses: [databricks] -->
        `Container` = _drop-down_

        Select a Blob container to be used for staging the data. For more information, read [Introduction to Azure Blob storage](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction).
        <!-- param-end:[databricks-output-connector-v0.azure#container] -->

=== "Cloud Storage"

    <!-- param-start:[storage-only-output-v0.prepareStageStrategy] | warehouses: [databricks] -->
    `Load Strategy` = _drop-down_ _(optional)_

    - **Append Files in Folder:** Appends files to the storage folder. This is the default setting.
    - **Overwrite Files in Folder:** Overwrites existing files that have a matching structure.

    See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:

    |Configuration|Description|
    |---|---|
    |Append files in folder with defined folder path and file prefix.|Files will be stored under the structure `folder/prefix-timestamp-partX`, where X is the part number, starting from 1. For example, `folder/prefix-20240229100736969-part1`.|
    |Append files in folder without defined folder path and file prefix.|Files will be stored under the structure `uniqueID/timestamp-partX`, where X is the part number, starting from 1. For example, `1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1`.|
    |Overwrite files in folder with defined folder path and file prefix.|Files will be stored under the structure `folder/prefix-partX` where X is the part number, starting from 1. All files with matching structures will be overwritten.|
    |Overwrite files in folder without defined folder path and file prefix.|Validation will fail. Folder path and file prefix must be supplied for this load strategy.|
    <!-- param-end:[storage-only-output-v0.prepareStageStrategy] -->

    ---

    <!-- param-start:[storage-only-output-v0.folderPath] | warehouses: [databricks] -->
    `Folder Path` = _string_ _(optional)_

    The folder path for the files to be written to. Note that this path follows, but does not include, the bucket or container name.
    <!-- param-end:[storage-only-output-v0.folderPath] -->

    ---

    <!-- param-start:[storage-only-output-v0.filePrefix] | warehouses: [databricks] -->
    `File Prefix` = _string_ _(optional)_

    A string of characters that precedes the names of the written files. This can be useful for organizing your output files.
    <!-- param-end:[storage-only-output-v0.filePrefix] -->

    ---

    <!-- param-start:[storage-only-output-v0.storage] | warehouses: [databricks] -->
    `Storage` = _drop-down_

    The cloud storage location where your data will be written as files. Choose either Amazon S3, Azure Storage, or Google Cloud Storage.

    Click the tab that corresponds to your chosen cloud storage service.
    <!-- param-end:[storage-only-output-v0.storage] -->

    === "Amazon S3"

        <!-- param-start:[storage-only-output-v0.s3#bucket] | warehouses: [databricks] -->
        `Amazon S3 Bucket` = _drop-down_

        An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the [cloud provider credentials](/data-productivity-cloud/designer/docs/cloud-credentials/) that you have associated with your [environment](/data-productivity-cloud/designer/docs/environments/).
        <!-- param-end:[storage-only-output-v0.s3#bucket] -->

    === "Azure Storage"

        <!-- param-start:[storage-only-output-v0.azure#account] | warehouses: [databricks] -->
        `Storage Account` = _drop-down_

        Select a storage account linked to your desired Blob container to be used for staging the data. For more information, read [Storage account overview](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview).
        <!-- param-end:[storage-only-output-v0.azure#account] -->

        ---

        <!-- param-start:[storage-only-output-v0.azure#container] | warehouses: [databricks] -->
        `Container` = _drop-down_

        Select a Blob container to be used for staging the data. For more information, read [Introduction to Azure Blob storage](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction).
        <!-- param-end:[storage-only-output-v0.azure#container] -->

    === "Google Cloud Storage"

        <!-- param-start:[storage-only-output-v0.gcs#bucket] | warehouses: [databricks] -->
        `GCS Bucket` = _drop-down_

        The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the [cloud provider credentials](/data-productivity-cloud/designer/docs/cloud-credentials/) that you have associated with your [environment](/data-productivity-cloud/designer/docs/environments/).
        <!-- param-end:[storage-only-output-v0.gcs#bucket] -->

        ---

        <!-- param-start:[storage-only-output-v0.gcs#overwriteAllowed] | warehouses: [databricks] -->
        `Overwrite` = _boolean_

        Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
        <!-- param-end:[storage-only-output-v0.gcs#overwriteAllowed] -->

`Destination` = _drop-down_

Select the destination for your data: either a table in Amazon Redshift or files in cloud storage. If you choose cloud storage, the format of the files can differ between source systems and the files will not have a file extension, so we suggest inspecting the output to determine the format of the data.

- **Amazon Redshift:** Load your data into Amazon Redshift. You'll need to set a cloud storage location for temporary staging of the data.
- **Cloud Storage:** Load your data directly into files in your preferred cloud storage location.

Click either the **Amazon Redshift** or **Cloud Storage** tab on this page for documentation applicable to that destination type.

=== "Amazon Redshift"

    <!-- param-start:[redshift-output-connector-v0.schema] | warehouses: [redshift] -->
    `Schema` = _drop-down_

    Select the Amazon Redshift schema that will contain your table. The special value `[Environment Default]` uses the schema defined in the environment. For information about using multiple schemas, read [Schemas](https://docs.aws.amazon.com/redshift/latest/dg/r_Schemas_and_tables.html).
    <!-- param-end:[redshift-output-connector-v0.schema] -->

    ---

    <!-- param-start:[redshift-output-connector-v0.table] | warehouses: [redshift] -->
    `Table Name` = _string_

    The name of the table to be created in your Amazon Redshift database. You can use a [Table Input](/data-productivity-cloud/designer/docs/table-input/) component in a transformation pipeline to access and transform this data after it has been loaded.
    <!-- param-end:[redshift-output-connector-v0.table] -->

    ---

    <!-- param-start:[redshift-output-connector-v0.createTableMode] | warehouses: [redshift] -->
    `Load Strategy` = _drop-down_

    Define what happens if the table name already exists in the specified Amazon Redshift database and schema.

    - **Replace:** If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
    - **Fail if Exists:** If the specified table name already exists, this pipeline will fail to run.
    - **Truncate and Insert:** If the specified table name already exists, all rows within the table will be removed and the new rows will be inserted each time this pipeline runs.
    - **Append:** If the specified table name already exists, the new data is appended to the end of the table without altering or deleting the existing rows. If the table doesn't exist, it will be created and your data inserted into it. For example, if your source holds 100 records, the first pipeline run creates the target table and inserts 100 rows; the second run appends the same 100 records, bringing the table to 200 rows; a third run brings it to 300 rows, and so on.
    <!-- param-end:[redshift-output-connector-v0.createTableMode] -->

    ---

    <!-- param-start:[redshift-output-connector-v0.cleanStagedFiles] | warehouses: [redshift] -->
    `Clean Staged Files` = _boolean_

    - **Yes:** Staged files will be destroyed after data is loaded. This is the default setting.
    - **No:** Staged files are retained in the staging area after data is loaded.
    <!-- param-end:[redshift-output-connector-v0.cleanStagedFiles] -->

    ---

    <!-- param-start:[redshift-output-connector-v0.s3#bucket] | warehouses: [redshift] -->
    `Amazon S3 Bucket` = _drop-down_

    An AWS S3 bucket to stage data into before it is loaded into your Amazon Redshift table. The drop-down menu will include buckets tied to the [cloud provider credentials](/data-productivity-cloud/designer/docs/cloud-credentials/) that you have associated with your [environment](/data-productivity-cloud/designer/docs/environments/).
    <!-- param-end:[redshift-output-connector-v0.s3#bucket] -->
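
    For context only: data staged in S3 is typically moved into Amazon Redshift with a `COPY` statement. The sketch below, using the `redshift_connector` package, is a hedged illustration of that pattern rather than the component's internal mechanism; the connection details, table, bucket, IAM role, and file format are placeholders.

    ```python
    # Hypothetical sketch of loading staged S3 files into Redshift with COPY.
    # Connection details, table, bucket, IAM role, and format are placeholders.
    import redshift_connector

    conn = redshift_connector.connect(
        host="my-cluster.abc123.eu-west-1.redshift.amazonaws.com",
        database="my_db", user="my_user", password="...",
    )
    cur = conn.cursor()
    cur.execute("""
        COPY my_schema.my_table
        FROM 's3://my-staging-bucket/folder/prefix-'
        IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
        FORMAT AS CSV
    """)
    conn.commit()
    ```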

=== "Cloud Storage"

    <!-- param-start:[storage-only-output-v0.prepareStageStrategy] | warehouses: [redshift] -->
    `Load Strategy` = _drop-down_ _(optional)_

    - **Append Files in Folder:** Appends files to the storage folder. This is the default setting.
    - **Overwrite Files in Folder:** Overwrites existing files that have a matching structure.

    See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:

    |Configuration|Description|
    |---|---|
    |Append files in folder with defined folder path and file prefix.|Files will be stored under the structure `folder/prefix-timestamp-partX`, where X is the part number, starting from 1. For example, `folder/prefix-20240229100736969-part1`.|
    |Append files in folder without defined folder path and file prefix.|Files will be stored under the structure `uniqueID/timestamp-partX`, where X is the part number, starting from 1. For example, `1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1`.|
    |Overwrite files in folder with defined folder path and file prefix.|Files will be stored under the structure `folder/prefix-partX` where X is the part number, starting from 1. All files with matching structures will be overwritten.|
    |Overwrite files in folder without defined folder path and file prefix.|Validation will fail. Folder path and file prefix must be supplied for this load strategy.|
    <!-- param-end:[storage-only-output-v0.prepareStageStrategy] -->

    ---

    <!-- param-start:[storage-only-output-v0.folderPath] | warehouses: [redshift] -->
    `Folder Path` = _string_ _(optional)_

    The folder path for the files to be written to. Note that this path follows, but does not include, the bucket or container name.
    <!-- param-end:[storage-only-output-v0.folderPath] -->

    ---

    <!-- param-start:[storage-only-output-v0.filePrefix] | warehouses: [redshift] -->
    `File Prefix` = _string_ _(optional)_

    A string of characters that precedes the names of the written files. This can be useful for organizing your output files.
    <!-- param-end:[storage-only-output-v0.filePrefix] -->

    ---

    <!-- param-start:[storage-only-output-v0.storage] | warehouses: [redshift] -->
    `Storage` = _drop-down_

    The cloud storage location where your data will be written as files. Choose either Amazon S3, Azure Storage, or Google Cloud Storage.

    Click the tab that corresponds to your chosen cloud storage service.
    <!-- param-end:[storage-only-output-v0.storage] -->

    === "Amazon S3"

        <!-- param-start:[storage-only-output-v0.s3#bucket] | warehouses: [redshift] -->
        `Amazon S3 Bucket` = _drop-down_

        An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the [cloud provider credentials](/data-productivity-cloud/designer/docs/cloud-credentials/) that you have associated with your [environment](/data-productivity-cloud/designer/docs/environments/).
        <!-- param-end:[storage-only-output-v0.s3#bucket] -->

    === "Azure Storage"

        <!-- param-start:[storage-only-output-v0.azure#account] | warehouses: [redshift] -->
        `Storage Account` = _drop-down_

        Select a storage account linked to your desired Blob container to be used for staging the data. For more information, read [Storage account overview](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview).
        <!-- param-end:[storage-only-output-v0.azure#account] -->

        ---

        <!-- param-start:[storage-only-output-v0.azure#container] | warehouses: [redshift] -->
        `Container` = _drop-down_

        Select a Blob container to be used for staging the data. For more information, read [Introduction to Azure Blob storage](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction).
        <!-- param-end:[storage-only-output-v0.azure#container] -->

    === "Google Cloud Storage"

        <!-- param-start:[storage-only-output-v0.gcs#bucket] | warehouses: [redshift] -->
        `GCS Bucket` = _drop-down_

        The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the [cloud provider credentials](/data-productivity-cloud/designer/docs/cloud-credentials/) that you have associated with your [environment](/data-productivity-cloud/designer/docs/environments/).
        <!-- param-end:[storage-only-output-v0.gcs#bucket] -->

        ---

        <!-- param-start:[storage-only-output-v0.gcs#overwriteAllowed] | warehouses: [redshift] -->
        `Overwrite` = _boolean_

        Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.
        <!-- param-end:[storage-only-output-v0.gcs#overwriteAllowed] -->