
CircleCI

This page describes how to configure the CircleCI connector as part of a data pipeline within Designer.

The CircleCI connector is a Flex connector. In the Data Productivity Cloud, Flex connectors let you connect to a curated set of endpoints to load data.

You can use the CircleCI connector in its preconfigured state, or you can edit the connector by adding or amending the available CircleCI endpoints to suit your use case. You can edit Flex connectors in the Custom Connector user interface.

For detailed information about authentication, specific endpoint parameters, pagination, and other aspects of the CircleCI API, read the CircleCI API documentation.


Properties

Name = string

A human-readable name for the component.


Data Source = drop-down

The data source to load data from in this pipeline. The drop-down menu lists the CircleCI API endpoints available in the connector. For detailed information about specific endpoints, read the CircleCI API documentation.

  • Get List of Pipelines (GET): List of all pipelines.
  • Get Summary Metrics, Trends for Project (GET): Retrieve summary metrics and trends for a project across its workflows and branches.
  • Get a Pipeline Configuration (GET): Retrieve a pipeline's configuration.
  • Get Workflows for a Pipeline (GET): Retrieve a pipeline's workflows.
  • Get Jobs for a Workflow (GET): Retrieve a workflow's jobs.
  • Get List of Contexts (GET): List contexts.
  • Get a Context (GET): Retrieve a context.
  • Get Flaky Tests for a Project (GET): Retrieve flaky tests for a project.
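
For orientation, the sketch below maps each data source to the CircleCI API v2 path it is assumed to call. The paths and placeholder names are assumptions drawn from the public CircleCI API v2 reference; confirm the exact paths in the CircleCI API documentation.

    # Assumed CircleCI API v2 paths behind each data source (illustrative only).
    # The placeholders in braces correspond to the URI Parameters described
    # later on this page.
    DATA_SOURCE_PATHS = {
        "Get List of Pipelines": "/project/{project_slug}/pipeline",
        "Get Summary Metrics, Trends for Project": "/insights/{project_slug}/workflows",
        "Get a Pipeline Configuration": "/pipeline/{pipeline_id}/config",
        "Get Workflows for a Pipeline": "/pipeline/{pipeline_id}/workflow",
        "Get Jobs for a Workflow": "/workflow/{workflow_id}/job",
        "Get List of Contexts": "/context",
        "Get a Context": "/context/{context_id}",
        "Get Flaky Tests for a Project": "/insights/{project_slug}/flaky-tests",
    }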

Authentication Type = drop-down

The authentication method used to authorize access to your CircleCI data. Currently, only API Key is supported.


Key = string

The key of a working API key:value pair.

You are required to authenticate through the header parameter Circle-Token. Enter Circle-Token in the Key field.


Value = drop-down

Use the drop-down menu to select the corresponding secret definition that denotes the value of a working API key:value pair.

Read Secret definitions to learn how to create a new secret definition.

Read the CircleCI API documentation to learn how to acquire an API key.
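
To show how the key:value pair is used, the following minimal Python sketch makes the same kind of request outside the connector. The requests library, the token placeholder, and the example project slug are illustrative assumptions; within the connector, the token value is supplied by the secret definition selected above.

    import requests

    CIRCLE_TOKEN = "<your-personal-api-token>"            # Value (from a secret definition)
    PROJECT_SLUG = "gh/CircleCI-Public/api-preview-docs"  # example project slug

    # Key = Circle-Token, sent as a header parameter on the request.
    response = requests.get(
        f"https://circleci.com/api/v2/project/{PROJECT_SLUG}/pipeline",
        headers={"Circle-Token": CIRCLE_TOKEN},
        timeout=30,
    )
    response.raise_for_status()
    print(len(response.json().get("items", [])), "pipeline records returned")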


URI Parameters = column editor

  • Parameter Name: The name of a URI parameter.
  • Parameter Value: The value of the corresponding parameter.
The following URI parameters are required, depending on the selected endpoint:

  • project_slug (Get List of Pipelines; Get Summary Metrics, Trends for Project; Get a Pipeline Configuration; Get Workflows for a Pipeline; Get Jobs for a Workflow; Get Flaky Tests for a Project): The project slug, in the form vcs-slug/org-name/repo-name. The / characters may be URL-escaped. For projects that use GitLab or GitHub App, use circleci as the vcs-slug, replace org-name with the organization ID (found in Organization Settings), and replace repo-name with the project ID (found in Project Settings). For example, gh/CircleCI-Public/api-preview-docs.
  • pipeline_id (Get a Pipeline Configuration; Get Workflows for a Pipeline; Get Jobs for a Workflow): The unique ID of the pipeline. For example, 5034460f-c7c4-4c43-9457-de07e2029e7b.
  • workflow_id (Get Jobs for a Workflow): The unique ID of the workflow.
  • context_id (Get a Context): The unique ID of the context.
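
Because project_slug contains / characters, it may be URL-escaped before being placed into the request path. A small sketch using Python's standard library, with the Get List of Pipelines endpoint assumed as the target:

    from urllib.parse import quote

    project_slug = "gh/CircleCI-Public/api-preview-docs"
    escaped = quote(project_slug, safe="")  # "gh%2FCircleCI-Public%2Fapi-preview-docs"

    # Illustrative request path for the Get List of Pipelines endpoint.
    url = f"https://circleci.com/api/v2/project/{escaped}/pipeline"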

Query Parameters = column editor

  • Parameter Name: The name of a query parameter.
  • Parameter Value: The value of the corresponding parameter.
The following query parameter is required, depending on the selected endpoint:

  • owner-id (Get List of Contexts): The unique ID of the owner of the context. Specify either this or owner-slug.

Note

owner-slug is a string that represents an organization. Specify either this or owner-id. For more information, read List contexts.
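
As an illustration, the Get List of Contexts request takes exactly one of these two parameters. The values below are placeholders only:

    # Query parameters for Get List of Contexts: use one or the other, not both.
    params_by_id = {"owner-id": "<organization-id>"}    # the unique ID of the owner
    params_by_slug = {"owner-slug": "gh/<org-name>"}    # a string that represents an organization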


Header Parameters = column editor

  • Parameter Name: The name of a header parameter.
  • Parameter Value: The value of the corresponding parameter.

Post Body = JSON

A JSON body as part of a POST request.


Page Limit = integer

A numeric value to limit the maximum number of records per page.
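
Page Limit caps how many records the connector reads per page. The CircleCI API itself pages results with a next_page_token, returned in each response and passed back as the page-token query parameter. The following Python sketch shows that pattern; the helper name and the way the per-page cap is applied are illustrative, not the connector's internal implementation.

    import requests

    def fetch_pipelines(project_slug, token, page_limit=None):
        """Illustrative paging loop over the Get List of Pipelines endpoint."""
        url = f"https://circleci.com/api/v2/project/{project_slug}/pipeline"
        params, records = {}, []
        while True:
            page = requests.get(url, headers={"Circle-Token": token},
                                params=params, timeout=30).json()
            items = page.get("items", [])
            if page_limit is not None:
                items = items[:page_limit]           # cap the records taken per page
            records.extend(items)
            next_token = page.get("next_page_token")
            if not next_token:
                break                                # no further pages
            params["page-token"] = next_token        # request the next page
        return records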


Select your cloud data warehouse: Snowflake, Databricks (preview), or Amazon Redshift. The properties described below depend on which warehouse you select.

Destination = drop-down

  • Snowflake: Load your data into Snowflake. You'll need to set a cloud storage location for temporary staging of the data.
  • Cloud Storage: Load your data directly into your preferred cloud storage location.

Click either the Snowflake or Cloud Storage tab on this page for documentation applicable to that destination type.

Warehouse = drop-down

The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.


Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Table Name = string

The name of the table to be created.


Load Strategy = drop-down

  • Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
  • Fail if Exists: If the specified table name already exists, this pipeline will fail to run.

Clean Staged Files = boolean

  • Yes: Staged files will be destroyed after data is loaded. This is the default setting.
  • No: Staged files are retained in the staging area after data is loaded.

Stage Platform = drop-down

Choose a data staging platform using the drop-down menu.

  • S3: Stage your data on an AWS S3 bucket.
  • Snowflake: Stage your data on a Snowflake internal stage.
  • Azure Storage: Stage your data in an Azure Blob Storage container.

Click one of the tabs below for documentation applicable to that staging platform.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Internal Stage Type = drop-down

A Snowflake internal stage type. Currently, only type User is supported.

Read Choosing an Internal Stage for Local Files to learn more about internal stage types and the usage of each.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

Folder Path = string

The folder path of the written files.


Load Strategy = drop-down

  • Append Files in Folder: Appends files to the storage folder. This is the default setting.
  • Overwrite Files in Folder: Overwrites existing files that have a matching structure.

See the following for how this parameter works with the Folder Path and File Prefix parameters:

  • Append Files in Folder, with a folder path and file prefix defined: Files will be stored under the structure folder/prefix-timestamp-partX, where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1.
  • Append Files in Folder, without a folder path and file prefix defined: Files will be stored under the structure uniqueID/timestamp-partX, where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1.
  • Overwrite Files in Folder, with a folder path and file prefix defined: Files will be stored under the structure folder/prefix-partX, where X is the part number, starting from 1. All files with a matching structure will be overwritten.
  • Overwrite Files in Folder, without a folder path and file prefix defined: Validation will fail. A folder path and file prefix must be supplied for this load strategy.
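
As a worked illustration of these naming rules, the following sketch reproduces them in Python. The function name and arguments are hypothetical; the actual object keys are generated by the pipeline itself.

    def staged_file_name(load_strategy, part, folder=None, prefix=None,
                         unique_id=None, timestamp=None):
        """Sketch of the file-naming behaviour described in the list above."""
        if folder and prefix:
            if load_strategy == "append":
                return f"{folder}/{prefix}-{timestamp}-part{part}"
            return f"{folder}/{prefix}-part{part}"        # overwrite: replaced on each run
        if load_strategy == "append":
            return f"{unique_id}/{timestamp}-part{part}"  # no folder path or prefix defined
        raise ValueError("Overwrite Files in Folder requires a folder path and file prefix")

For example, staged_file_name("append", 1, folder="folder", prefix="prefix", timestamp="20240229100736969") returns folder/prefix-20240229100736969-part1, matching the first case above.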

File Prefix = string

A string of characters to include at the beginning of the written files. Often used for organizing database objects.


Storage = drop-down

A cloud storage location to load your data into for storage. Choose either Amazon S3, Azure Blob Storage, or Google Cloud Storage.

Click the tab that corresponds to your chosen cloud storage service.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.

Destination = drop-down

  • Databricks: Load your data into Databricks. You'll need to set a cloud storage location for temporary staging of the data.
  • Cloud Storage: Load your data directly into your preferred cloud storage location.

Click either the Databricks or Cloud Storage tab on this page for documentation applicable to that destination type.

Catalog = drop-down

Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Data Productivity Cloud environment setup. Selecting a catalog will determine which schemas are available in the next parameter.


Schema (Database) = drop-down

The Databricks schema. The special value, [Environment Default], will use the schema defined in the environment. Read Create and manage schemas to learn more.


Table Name = string

The name of the table to be created.


Load Strategy = drop-down

  • Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
  • Fail if Exists: If the specified table name already exists, this pipeline will fail to run.

Clean Staged Files = boolean

  • Yes: Staged files will be destroyed after data is loaded. This is the default setting.
  • No: Staged files are retained in the staging area after data is loaded.

Stage Platform = drop-down

Choose a data staging platform using the drop-down menu.

  • S3: Stage your data on an AWS S3 bucket.
  • Azure Storage: Stage your data in an Azure Blob Storage container.

Click one of the tabs below for documentation applicable to that staging platform.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

Folder Path = string

The folder path of the written files.


Load Strategy = drop-down

  • Append Files in Folder: Appends files to the storage folder. This is the default setting.
  • Overwrite Files in Folder: Overwrites existing files that have a matching structure.

See the following for how this parameter works with the Folder Path and File Prefix parameters:

  • Append Files in Folder, with a folder path and file prefix defined: Files will be stored under the structure folder/prefix-timestamp-partX, where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1.
  • Append Files in Folder, without a folder path and file prefix defined: Files will be stored under the structure uniqueID/timestamp-partX, where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1.
  • Overwrite Files in Folder, with a folder path and file prefix defined: Files will be stored under the structure folder/prefix-partX, where X is the part number, starting from 1. All files with a matching structure will be overwritten.
  • Overwrite Files in Folder, without a folder path and file prefix defined: Validation will fail. A folder path and file prefix must be supplied for this load strategy.

File Prefix = string

A string of characters to include at the beginning of the written files. Often used for organizing database objects.


Storage = drop-down

A cloud storage location to load your data into for storage. Choose either Amazon S3, Azure Blob Storage, or Google Cloud Storage.

Click the tab that corresponds to your chosen cloud storage service.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.

Destination = drop-down

  • Amazon Redshift: Load your data into Amazon Redshift. You'll need to set a cloud storage location for temporary staging of the data.
  • Cloud Storage: Load your data directly into your preferred cloud storage location.

Click either the Amazon Redshift or Cloud Storage tab on this page for documentation applicable to that destination type.

Schema = drop-down

Select the Redshift schema. The special value, [Environment Default], will use the schema defined in the environment. For information about using multiple schemas, read Schemas.


Table Name = string

The name of the table to be created.


Load Strategy = drop-down

  • Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
  • Fail if Exists: If the specified table name already exists, this pipeline will fail to run.

Clean Staged Files = boolean

  • Yes: Staged files will be destroyed after data is loaded. This is the default setting.
  • No: Staged files are retained in the staging area after data is loaded.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Folder Path = string

The folder path of the written files.


Load Strategy = drop-down

  • Append Files in Folder: Appends files to the storage folder. This is the default setting.
  • Overwrite Files in Folder: Overwrites existing files that have a matching structure.

See the following for how this parameter works with the Folder Path and File Prefix parameters:

  • Append Files in Folder, with a folder path and file prefix defined: Files will be stored under the structure folder/prefix-timestamp-partX, where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1.
  • Append Files in Folder, without a folder path and file prefix defined: Files will be stored under the structure uniqueID/timestamp-partX, where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1.
  • Overwrite Files in Folder, with a folder path and file prefix defined: Files will be stored under the structure folder/prefix-partX, where X is the part number, starting from 1. All files with a matching structure will be overwritten.
  • Overwrite Files in Folder, without a folder path and file prefix defined: Validation will fail. A folder path and file prefix must be supplied for this load strategy.

File Prefix = string

A string of characters to include at the beginning of the written files. Often used for organizing database objects.


Storage = drop-down

A cloud storage location to load your data into for storage. Choose either Amazon S3, Azure Blob Storage, or Google Cloud Storage.

Click the tab that corresponds to your chosen cloud storage service.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.


Log Level = drop-down

Set the severity level of logging. Choose from Error, Warn, Info, Debug, or Trace.


Deactivate soft delete for Azure blobs (Databricks)

If you intend to set your destination as Databricks and your stage platform as Azure Storage, you must turn off the "Enable soft delete for blobs" setting in your Azure account for your pipeline to run successfully. To do this:

  1. Log in to the Azure portal.
  2. In the top-left, click ☰ → Storage Accounts.
  3. Select the intended storage account.
  4. In the menu, under Data management, click Data protection.
  5. Untick Enable soft delete for blobs. For more information, read Soft delete for blobs.
