
Asana

This page describes how to configure the Asana connector as part of a data pipeline within Designer.

The Asana connector is a Flex connector. In the Data Productivity Cloud, Flex connectors let you connect to a curated set of endpoints to load data.

You can use the Asana connector in its preconfigured state, or you can edit it by adding or amending the available Asana endpoints to suit your use case. Flex connectors are edited in the Custom Connector user interface.

For detailed information about authentication, specific endpoint parameters, pagination, and other aspects of the Asana API, read the Asana API documentation.


Properties

Name = string

A human-readable name for the component.


Data Source = drop-down

The data source to load data from in this pipeline. The drop-down menu lists the Asana API endpoints available in the connector. For detailed information about specific endpoints, read the Asana API documentation.

Endpoint Method Description
Attachments GET Get attachments from an object
Project Custom Fields GET Get a project's custom fields
Portfolio Custom Fields GET Get a portfolio's custom fields
Workspace Custom Fields GET Get a workspace's custom fields
Goal Relationships GET Get goal relationships
Goals GET Get goals
Memberships GET Get multiple memberships
Portfolio Memberships GET Get multiple portfolio memberships
Portfolios GET Get multiple portfolios
Portfolio Items GET Get portfolio items
Project Brief GET Get a project brief
Project Memberships GET Get memberships from a project
Project Statuses GET Get statuses from a project
Project Templates GET Get multiple project templates
Projects GET Get multiple projects
Project Sections GET Get sections in a project
Status Updates GET Get status updates from an object
Story GET Get a story
Task Stories GET Get stories from a task
Tags GET Get multiple tags
Task Templates GET Get multiple task templates
Tasks GET Get multiple tasks
Team Memberships GET Get team memberships
Workspace Teams GET Get teams in a workspace
Time Periods GET Get time periods
Typeahead Objects GET Get objects via typeahead
Users GET Get multiple users
Workspace Memberships GET Get the workspace memberships for a workspace
Workspaces GET Get multiple workspaces
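All of the endpoints above are plain GET requests against the Asana REST API. As a point of reference only (the connector issues the equivalent requests for you), the following minimal Python sketch calls the Workspaces endpoint directly; it assumes the requests library and uses a placeholder bearer token.

```python
import requests

# Illustrative sketch only: the connector issues equivalent GET requests for you.
ASANA_TOKEN = "0/1234567890abcdef"  # placeholder personal access token (see Token below)

response = requests.get(
    "https://app.asana.com/api/1.0/workspaces",   # the "Workspaces" endpoint (GET)
    headers={"Authorization": f"Bearer {ASANA_TOKEN}"},
)
response.raise_for_status()

# Asana wraps results in a top-level "data" array.
for workspace in response.json()["data"]:
    print(workspace["gid"], workspace["name"])
```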

Authentication Type = drop-down

The authentication method used to authorize access to your Asana data. Currently, only bearer token authentication is supported.


Token = string

Use the drop-down menu to select the secret definition that stores your Asana bearer token.

Read Secret definitions to learn how to create a new secret definition.

To acquire an Asana bearer token:

  1. Log in to Asana.
  2. Click your profile picture at the top.
  3. Click Settings.
  4. Open the Apps tab.
  5. Click Manage Developer Apps.
  6. Click + Create new token.
  7. In the Create new token dialog, enter a description of what the token will be used for.
  8. Click the check box to agree to Asana's API terms.
  9. Click Create token.

    Warning

    Make sure to copy this access token now. You won't see it again.

  10. Paste the token in the Secret value field in the Add new secret definition dialog.

For more information about authentication, read the Asana API documentation.
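If you want to confirm that a newly created token works before storing it as a secret definition, a quick check like the sketch below will do. It assumes Python with the requests library and a placeholder token value, and calls Asana's GET /users/me endpoint, which returns the user the token belongs to.

```python
import requests

token = "0/abcdef1234567890"  # placeholder: paste your personal access token here

# GET /users/me returns the user record associated with the bearer token.
resp = requests.get(
    "https://app.asana.com/api/1.0/users/me",
    headers={"Authorization": f"Bearer {token}"},
)

if resp.ok:
    print("Token is valid for:", resp.json()["data"]["name"])
else:
    print("Token check failed:", resp.status_code, resp.text)
```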


URI Parameters = column editor

  • Parameter Name: The name of a URI parameter.
  • Parameter Value: The value of the corresponding parameter.
Required parameter Endpoints Description
api_version All endpoints The Asana API version to use. Currently 1.0.
project_gid Project Custom Fields, Project Memberships, Project Statuses, Project Sections Globally unique identifier for the project.
portfolio_gid Portfolio Custom Fields, Portfolio Items Globally unique identifier for the portfolio.
workspace_gid Workspace Custom Fields, Workspace Teams, Typeahead Objects, Workspace Memberships Globally unique identifier for the workspace or organization.
project_brief_gid Project Brief Globally unique identifier for the project brief.
story_gid Story Globally unique identifier for the story.
task_gid Task Stories The task to operate on.
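URI parameters are substituted into the path of the request the connector makes. As an illustration only (placeholder GID and token; assumes the requests library), the sketch below shows the Task Stories endpoint, where task_gid becomes part of the path and api_version forms the /1.0/ segment.

```python
import requests

token = "0/abcdef1234567890"   # placeholder personal access token
api_version = "1.0"            # the api_version URI parameter
task_gid = "1204501234567890"  # placeholder value for the task_gid URI parameter

# URI parameters are substituted directly into the endpoint path.
url = f"https://app.asana.com/api/{api_version}/tasks/{task_gid}/stories"

resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

for story in resp.json()["data"]:
    print(story["gid"], story.get("resource_subtype"), story.get("text"))
```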

Query Parameters = column editor

  • Parameter Name: The name of a query parameter.
  • Parameter Value: The value of the corresponding parameter.
Required parameter Endpoints Description
parent Attachments, Memberships, Status Updates Globally unique identifier for the object to fetch records from. Must be a GID for a project, project_brief, or task.
supported_goal Goal Relationships Globally unique identifier for the supported goal in the goal relationship.
workspace Goals, Portfolios, Projects, Tags, Time Periods, Users Globally unique identifier for the workspace.
portfolio Portfolio Memberships The portfolio to filter results on.
team Project Templates, Team Memberships The team to filter results on.
project Task Templates, Tasks The project to filter results on.
resource_type Typeahead Objects The type of values the typeahead should return. You can choose from one of the following: custom_field, goal, project, project_template, portfolio, tag, task, team, and user. Note that unlike in the names of endpoints, the types listed here are in singular form (e.g. task). Using multiple types is not yet supported.
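Query parameters, by contrast, are appended to the URL as name=value pairs. The sketch below is an illustration only (placeholder token and project GID; assumes the requests library) of the Tasks endpoint filtered with the project query parameter.

```python
import requests

token = "0/abcdef1234567890"      # placeholder personal access token
project_gid = "1205009876543210"  # placeholder value for the "project" query parameter

resp = requests.get(
    "https://app.asana.com/api/1.0/tasks",
    headers={"Authorization": f"Bearer {token}"},
    params={"project": project_gid},  # query parameters are sent as ?project=<gid>
)
resp.raise_for_status()

for task in resp.json()["data"]:
    print(task["gid"], task["name"])
```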

Header Parameters = column editor

  • Parameter Name: The name of a header parameter.
  • Parameter Value: The value of the corresponding parameter.

Post Body = JSON

A JSON body as part of a POST request.


Page Limit = integer

A numeric value to limit the maximum number of records per page.
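Asana pages results using limit and offset query parameters: each page includes a next_page object with an offset token while more data remains. The sketch below is an illustration of that mechanism only, not the connector's internal code; it assumes the requests library and uses placeholder token and project values.

```python
import requests

token = "0/abcdef1234567890"      # placeholder personal access token
project_gid = "1205009876543210"  # placeholder project GID
page_limit = 50                   # corresponds to the Page Limit property

params = {"project": project_gid, "limit": page_limit}
headers = {"Authorization": f"Bearer {token}"}

tasks = []
while True:
    resp = requests.get("https://app.asana.com/api/1.0/tasks",
                        headers=headers, params=params)
    resp.raise_for_status()
    body = resp.json()
    tasks.extend(body["data"])

    # next_page holds an offset token while more pages remain; it is
    # null (or absent) on the final page.
    next_page = body.get("next_page")
    if not next_page:
        break
    params["offset"] = next_page["offset"]

print(f"Fetched {len(tasks)} tasks in pages of up to {page_limit} records")
```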


Select your cloud data warehouse. The following properties differ depending on whether your destination is Snowflake, Databricks (preview), or Amazon Redshift.

Destination = drop-down

  • Snowflake: Load your data into Snowflake. You'll need to set a cloud storage location for temporary staging of the data.
  • Cloud Storage: Load your data directly into your preferred cloud storage location.

Click either the Snowflake or Cloud Storage tab on this page for documentation applicable to that destination type.

Warehouse = drop-down

The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.


Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Table Name = string

The name of the table to be created.


Load Strategy = drop-down

  • Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
  • Fail if Exists: If the specified table name already exists, this pipeline will fail to run.

Clean Staged Files = boolean

  • Yes: Staged files will be destroyed after data is loaded. This is the default setting.
  • No: Staged files are retained in the staging area after data is loaded.

Stage Platform = drop-down

Choose a data staging platform using the drop-down menu.

  • S3: Stage your data on an AWS S3 bucket.
  • Snowflake: Stage your data on a Snowflake internal stage.
  • Azure Storage: Stage your data in an Azure Blob Storage container.

Click one of the tabs below for documentation applicable to that staging platform.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Internal Stage Type = drop-down

A Snowflake internal stage type. Currently, only type User is supported.

Read Choosing an Internal Stage for Local Files to learn more about internal stage types and the usage of each.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

Folder Path = string

The folder path of the written files.


Load Strategy = drop-down

  • Append Files in Folder: Appends files to the storage folder. This is the default setting.
  • Overwrite Files in Folder: Overwrites existing files that have a matching structure.

See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:

Configuration Description
Append files in folder with defined folder path and file prefix. Files will be stored under the structure folder/prefix-timestamp-partX where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1.
Append files in folder without defined folder path and file prefix. Files will be stored under the structure uniqueID/timestamp-partX where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1.
Overwrite files in folder with defined folder path and file prefix. Files will be stored under the structure folder/prefix-partX where X is the part number, starting from 1. All files with matching structures will be overwritten.
Overwrite files in folder without defined folder path and file prefix. Validation will fail. Folder path and file prefix must be supplied for this load strategy.
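The Python sketch below is a hypothetical illustration of the naming rules in the table above, not the connector's own code; it shows how the folder path, file prefix, timestamp, and part number combine under each load strategy.

```python
import uuid
from datetime import datetime, timezone


def staged_file_name(load_strategy, part, folder=None, prefix=None):
    """Illustrates the file naming rules described in the table above."""
    # Millisecond-precision timestamp, e.g. 20240229100736969.
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S%f")[:-3]

    if load_strategy == "append":
        if folder and prefix:
            return f"{folder}/{prefix}-{timestamp}-part{part}"
        # Without a folder path and file prefix, a unique ID is used instead.
        return f"{uuid.uuid4()}/{timestamp}-part{part}"

    if load_strategy == "overwrite":
        if folder and prefix:
            return f"{folder}/{prefix}-part{part}"
        raise ValueError("Folder path and file prefix are required when overwriting files in folder.")

    raise ValueError(f"Unknown load strategy: {load_strategy}")


print(staged_file_name("append", 1, folder="folder", prefix="prefix"))
print(staged_file_name("overwrite", 1, folder="folder", prefix="prefix"))
```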

File Prefix = string

A string of characters to include at the beginning of the written files. Often used for organizing files in the storage location.


Storage = drop-down

A cloud storage location to load your data into. Choose from Amazon S3, Azure Blob Storage, or Google Cloud Storage.

Click the tab that corresponds to your chosen cloud storage service.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.

Destination = drop-down

  • Databricks: Load your data into Databricks. You'll need to set a cloud storage location for temporary staging of the data.
  • Cloud Storage: Load your data directly into your preferred cloud storage location.

Click either the Databricks or Cloud Storage tab on this page for documentation applicable to that destination type.

Catalog = drop-down

Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Data Productivity Cloud environment setup. Selecting a catalog will determine which schemas are available in the next parameter.


Schema (Database) = drop-down

The Databricks schema. The special value, [Environment Default], will use the schema defined in the environment. Read Create and manage schemas to learn more.


Table Name = string

The name of the table to be created.


Load Strategy = drop-down

  • Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
  • Fail if Exists: If the specified table name already exists, this pipeline will fail to run.

Clean Staged Files = boolean

  • Yes: Staged files will be destroyed after data is loaded. This is the default setting.
  • No: Staged files are retained in the staging area after data is loaded.

Stage Platform = drop-down

Choose a data staging platform using the drop-down menu.

  • S3: Stage your data on an AWS S3 bucket.
  • Azure Storage: Stage your data in an Azure Blob Storage container.

Click one of the tabs below for documentation applicable to that staging platform.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

Folder Path = string

The folder path of the written files.


Load Strategy = drop-down

  • Append Files in Folder: Appends files to the storage folder. This is the default setting.
  • Overwrite Files in Folder: Overwrites existing files that have a matching structure.

See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:

Configuration Description
Append files in folder with defined folder path and file prefix. Files will be stored under the structure folder/prefix-timestamp-partX where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1.
Append files in folder without defined folder path and file prefix. Files will be stored under the structure uniqueID/timestamp-partX where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1.
Overwrite files in folder with defined folder path and file prefix. Files will be stored under the structure folder/prefix-partX where X is the part number, starting from 1. All files with matching structures will be overwritten.
Overwrite files in folder without defined folder path and file prefix. Validation will fail. Folder path and file prefix must be supplied for this load strategy.

File Prefix = string

A string of characters to include at the beginning of the written files. Often used for organizing files in the storage location.


Storage = drop-down

A cloud storage location to load your data into. Choose from Amazon S3, Azure Blob Storage, or Google Cloud Storage.

Click the tab that corresponds to your chosen cloud storage service.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.

Destination = drop-down

  • Amazon Redshift: Load your data into Amazon Redshift. You'll need to set a cloud storage location for temporary staging of the data.
  • Cloud Storage: Load your data directly into your preferred cloud storage location.

Click either the Amazon Redshift or Cloud Storage tab on this page for documentation applicable to that destination type.

Schema = drop-down

Select the Redshift schema. The special value, [Environment Default], will use the schema defined in the environment. For information about using multiple schemas, read Schemas.


Table Name = string

The name of the table to be created.


Load Strategy = drop-down

  • Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
  • Fail if Exists: If the specified table name already exists, this pipeline will fail to run.

Clean Staged Files = boolean

  • Yes: Staged files will be destroyed after data is loaded. This is the default setting.
  • No: Staged files are retained in the staging area after data is loaded.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Folder Path = string

The folder path of the written files.


Load Strategy = drop-down

  • Append Files in Folder: Appends files to the storage folder. This is the default setting.
  • Overwrite Files in Folder: Overwrites existing files that have a matching structure.

See the configuration table for how this parameter works with the Folder Path and File Prefix parameters:

Configuration Description
Append files in folder with defined folder path and file prefix. Files will be stored under the structure folder/prefix-timestamp-partX where X is the part number, starting from 1. For example, folder/prefix-20240229100736969-part1.
Append files in folder without defined folder path and file prefix. Files will be stored under the structure uniqueID/timestamp-partX where X is the part number, starting from 1. For example, 1da27ea6-f0fa-4d15-abdb-d4e990681839/20240229100736969-part1.
Overwrite files in folder with defined folder path and file prefix. Files will be stored under the structure folder/prefix-partX where X is the part number, starting from 1. All files with matching structures will be overwritten.
Overwrite files in folder without defined folder path and file prefix. Validation will fail. Folder path and file prefix must be supplied for this load strategy.

File Prefix = string

A string of characters to include at the beginning of the written files. Often used for organizing files in the storage location.


Storage = drop-down

A cloud storage location to load your data into.

Choose from Amazon S3, Azure Blob Storage, or Google Cloud Storage.

Click the tab that corresponds to your chosen cloud storage service.

Amazon S3 Bucket = drop-down

An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

GCS Bucket = drop-down

The drop-down menu will include Google Cloud Storage (GCS) buckets tied to the cloud provider credentials that you have associated with your environment.


Log Level = drop-down

Set the severity level of logging. Choose from Error, Warn, Info, Debug, or Trace.


Deactivate soft delete for Azure blobs (Databricks)

If you intend to set your destination as Databricks and your stage platform as Azure Storage, you must turn off the "Enable soft delete for blobs" setting in your Azure account for your pipeline to run successfully. To do this:

  1. Log in to the Azure portal.
  2. In the top-left, click ☰ → Storage Accounts.
  3. Select the intended storage account.
  4. In the menu, under Data management, click Data protection.
  5. Untick Enable soft delete for blobs. For more information, read Soft delete for blobs.
