
Mailchimp

The Mailchimp Query component uses the Mailchimp API to retrieve data and load it into a table. This action stages the data, so the table is reloaded each time the pipeline runs. Users can then transform their data with transformation components.


Properties

Name = string

A human-readable name for the component.


Authentication Type = drop-down

Currently, only API key authentication is supported.


API Key = drop-down

Drop-down list of secret definitions. Create a secret definition to reference your corresponding Mailchimp API key.

To create a secret definition:

  1. Click Manage to open the Secret definitions tab in a new browser tab.
  2. Click Add secret definition.
  3. Name your secret definition. Names must be unique.
  4. Add a description if you wish. This is optional.
  5. Toggle Yes if using a multi-line input password. Otherwise, keep the toggle set to No.
  6. Add your Mailchimp API key to the Secret value field.
  7. Click Create secret. You will return to the Secret definitions tab.
  8. Close the browser tab to return to Designer.
  9. Select your new secret definition from the drop-down menu. If the secret has not propagated, close the dialog and try again.

Mode = drop-down

  • Basic: This mode will build a query for you using settings from the Schema, Data Source, Data Selection, Data Source Filter, Combine Filters, and Limit parameters. In most cases, this mode will be sufficient.
  • Advanced: This mode will require you to write an SQL-like query to retrieve data from Mailchimp. The available fields and their descriptions are documented in the data model.

There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.

Note

While the query is exposed in an SQL-like language, the exact semantics can be surprising. For example, filtering on a column can return more data than not filtering on it, which is impossible in regular SQL.


Connection Options = column editor

  • Parameter: A JDBC parameter supported by the database driver. The available parameters are explained in the data model. Manual setup is not usually required, since sensible defaults are assumed.
  • Value: A value for the given parameter.

SQL Query = code editor

This is an SQL-like SELECT query. Treat collections as table names, and fields as columns. Only available in Advanced mode.
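
For example, a query in Advanced mode might look like the sketch below. The collection name (members) and field names (id, email_address, status) are illustrative only; the collections and fields actually available are documented in the data model.

    SELECT id, email_address, status
    FROM members
    WHERE status = 'subscribed'
    LIMIT 100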


Data Source = drop-down

Select a data source.


Data Selection = dual listbox

Choose one or more columns to return from the query. The columns available are dependent upon the data source selected. Move columns left-to-right to include in the query.


Data Source Filter = column editor

  • Input Column: Select an input column. The available input columns vary depending upon the data source.
  • Qualifier:
    • Is: Compares the column to the value using the comparator.
    • Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
  • Comparator: Choose a method of comparing the column to the value. Possible comparators include "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", and "Null". "Equal to" can match exact strings and numeric values, while other comparators, such as "Greater than" and "Less than", work only with numerics. The "Like" comparator accepts the wildcard character % at the start and end of a string value to match a column. The "Null" comparator matches only null values, ignoring whatever the Value field is set to. Not all data sources support all comparators, so it is likely that only a subset of the above will be available to choose from. A sketch of equivalent filter expressions follows this list.
  • Value: The value to be compared.
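
As a rough illustration, the filter settings in Basic mode correspond to an SQL-like WHERE clause similar to the sketches below. The column names (email_address, status) and values are hypothetical, and the exact expression generated by the component may differ.

    -- Qualifier "Is", Comparator "Like", Value "%@example.com"
    WHERE email_address LIKE '%@example.com'

    -- Qualifier "Not", Comparator "Equal to", Value "subscribed"
    WHERE NOT (status = 'subscribed')

    -- Two filters with Combine Filters set to And
    WHERE email_address LIKE '%@example.com' AND NOT (status = 'subscribed')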

Combine Filters = drop-down

Select whether the defined filters are combined with one another using And or Or.


Limit = integer

Set a numeric value to limit the number of rows that are loaded.


Select your cloud data warehouse. The component supports Snowflake, Databricks, and Amazon Redshift (preview) as destinations.

Destination = drop-down

  • Snowflake: Load your data into Snowflake. You'll need to set a cloud storage location for temporary staging of the data.
  • Cloud Storage: Load your data directly into your preferred cloud storage location.

Click either the Snowflake or Cloud Storage tab on this page for documentation applicable to that destination type.

Warehouse = drop-down

The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.


Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Table Name = string

The name of the table to be created.


Load Strategy = drop-down

  • Replace if exists: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Truncate and insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted when this pipeline next runs.
  • Fail if exists: If the specified table name already exists, this pipeline will fail to run.

Clean Staged Files = boolean

  • Yes: Staged files will be destroyed after data is loaded. This is the default setting.
  • No: Staged files are retained in the staging area after data is loaded.

Stage Platform = drop-down

Choose a data staging platform using the drop-down menu.

  • S3: Stage your data on an AWS S3 bucket.
  • Snowflake: Stage your data on a Snowflake internal stage.
  • Azure Storage: Stage your data in an Azure Blob Storage container.
  • Google Cloud Storage: Stage your data in a Google Cloud Storage bucket.

Click one of the tabs below for documentation applicable to that staging platform.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Internal Stage Type = drop-down

A Snowflake internal stage type. Currently, only type User is supported.

Read Choosing an Internal Stage for Local Files to learn more about internal stage types and the usage of each.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

Folder Path = string

The folder path of the written files.


File Prefix = string

A string of characters to include at the beginning of the written files. Often used for organizing database objects.


Storage = drop-down

A cloud storage location to load your data into. Choose either Amazon S3, Azure Blob Storage, or Google Cloud Storage.

Click the tab that corresponds to your chosen cloud storage service.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.


Overwrite = boolean

Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.

Destination = drop-down

  • Databricks: Load your data into Databricks. You'll need to set a cloud storage location for temporary staging of the data.
  • Cloud Storage: Load your data directly into your preferred cloud storage location.

Click either the Databricks or Cloud Storage tab on this page for documentation applicable to that destination type.

Catalog = drop-down

Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Data Productivity Cloud environment setup. Selecting a catalog will determine which schemas are available in the next parameter.


Schema (Database) = drop-down

The Databricks schema. The special value, [Environment Default], will use the schema defined in the environment. Read Create and manage schemas to learn more.


Table Name = string

The name of the table to be created.


Load Strategy = drop-down

  • Replace if exists: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Truncate and insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted when this pipeline next runs.
  • Fail if exists: If the specified table name already exists, this pipeline will fail to run.

Clean Staged Files = boolean

  • Yes: Staged files will be destroyed after data is loaded. This is the default setting.
  • No: Staged files are retained in the staging area after data is loaded.

Stage Platform = drop-down

Choose a data staging platform using the drop-down menu.

  • S3: Stage your data on an AWS S3 bucket.
  • Azure Storage: Stage your data in an Azure Blob Storage container.

Click one of the tabs below for documentation applicable to that staging platform.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

Folder Path = string

The folder path of the written files.


File Prefix = string

A string of characters to include at the beginning of the written files. Often used for organizing database objects.


Storage = drop-down

A cloud storage location to load your data into. Choose either Amazon S3, Azure Blob Storage, or Google Cloud Storage.

Click the tab that corresponds to your chosen cloud storage service.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.


Overwrite = boolean

Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.

Destination = drop-down

  • Amazon Redshift: Load your data into Amazon Redshift. You'll need to set a cloud storage location for temporary staging of the data.
  • Cloud Storage: Load your data directly into your preferred cloud storage location.

Click either the Amazon Redshift or Cloud Storage tab on this page for documentation applicable to that destination type.

Schema = drop-down

Select the Redshift schema. The special value, [Environment Default], will use the schema defined in the environment. For information about using multiple schemas, read Schemas.


Table Name = string

The name of the table to be created.


Load Strategy = drop-down

  • Replace if exists: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Truncate and insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted when this pipeline next runs.
  • Fail if exists: If the specified table name already exists, this pipeline will fail to run.

Clean Staged Files = boolean

  • Yes: Staged files will be destroyed after data is loaded. This is the default setting.
  • No: Staged files are retained in the staging area after data is loaded.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Folder Path = string

The folder path of the written files.


File Prefix = string

A string of characters to include at the beginning of the written files. Often used for organizing database objects.


Storage = drop-down

A cloud storage location to load your data into. Choose either Amazon S3, Azure Blob Storage, or Google Cloud Storage.

Click the tab that corresponds to your chosen cloud storage service.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.


Overwrite = boolean

Select whether to overwrite files of the same name when this pipeline runs. Default is Yes.


Strategy

  1. Connect to Mailchimp and issue the query.
  2. Stream the results into objects in cloud storage.
  3. Create or truncate the target table and issue a COPY command to load the cloud storage objects into the table.
  4. Finally, clean up the temporary cloud storage objects.
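
As an illustrative sketch only, steps 3 and 4 for a Snowflake destination are broadly equivalent to statements such as the following. The table name, columns, and stage path are hypothetical, and the exact statements issued by the component may differ.

    -- Step 3 (with Load Strategy "Replace if exists"): create the target table,
    -- then load the staged objects into it
    CREATE OR REPLACE TABLE mailchimp_members (
        id VARCHAR,
        email_address VARCHAR,
        status VARCHAR
    );

    COPY INTO mailchimp_members
    FROM @~/mailchimp_staging/      -- user internal stage; an external stage would point at S3, Azure, or GCS
    FILE_FORMAT = (TYPE = 'CSV');

    -- Step 4: remove the temporary staged files (when Clean Staged Files is Yes)
    REMOVE @~/mailchimp_staging/;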
