Freshdesk

This page describes how to configure a Freshdesk connector as part of a data pipeline within Designer.

The Freshdesk connector is a Flex connector. In the Data Productivity Cloud, Flex connectors let you connect to a curated set of endpoints to load data.

You can use the Freshdesk connector in its preconfigured state, or you can edit the connector by adding or amending available Freshdesk endpoints to suit your use case. You can edit Flex connectors in the Custom Connector user interface.

For detailed information about authentication, specific endpoint parameters, pagination, and other aspects of the Freshdesk API, read the Freshdesk API documentation.
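As a quick orientation, the sketch below shows the style of request the connector issues: the Freshdesk API is served from https://<yourdomain>.freshdesk.com/api/v2/ and accepts HTTP basic authentication, with your Freshdesk API key supplied as the username and the literal string "X" as the password (a regular username/password pair also works). The domain and key below are placeholders.

```python
import requests

DOMAIN = "yourcompany"    # placeholder: <yourcompany>.freshdesk.com
API_KEY = "your-api-key"  # placeholder: found in your Freshdesk profile settings

# List tickets from the /api/v2/tickets endpoint using basic authentication.
response = requests.get(
    f"https://{DOMAIN}.freshdesk.com/api/v2/tickets",
    auth=(API_KEY, "X"),  # API key as username, "X" as a dummy password
)
response.raise_for_status()
for ticket in response.json():
    print(ticket["id"], ticket["subject"])
```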


Properties

Name = string

A human-readable name for the component.


Data Source = drop-down

The data source to load data from in this pipeline. The drop-down menu lists the Freshdesk API endpoints available in the connector. For detailed information about specific endpoints, read the Freshdesk API documentation.


Authentication Type = drop-down

The authentication method to authorize access to your Freshdesk data.


Username = string

Your Freshdesk username.


Password = drop-down

Use the drop-down menu to select the corresponding secret definition that denotes your Freshdesk password.

Read Secret definitions to learn how to create a new secret definition.


URI Parameters = column editor

  • Parameter Name: The name of a URI parameter.
  • Parameter Value: The value of the corresponding parameter.

Query Parameters = column editor

  • Parameter Name: The name of a query parameter.
  • Parameter Value: The value of the corresponding parameter.

Header Parameters = column editor

  • Parameter Name: The name of a header parameter.
  • Parameter Value: The value of the corresponding parameter.
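To make the three parameter editors above concrete, the sketch below assembles a hypothetical request to the Freshdesk tickets endpoint: the URI parameter is substituted into the endpoint path, query parameters are appended as ?name=value pairs, and header parameters become HTTP request headers. The parameter names and values are illustrative; check the Freshdesk API documentation for what each endpoint accepts.

```python
import requests

DOMAIN = "yourcompany"    # placeholder Freshdesk domain
API_KEY = "your-api-key"  # placeholder credential

# URI parameter: substituted into the endpoint path itself.
ticket_id = 123
url = f"https://{DOMAIN}.freshdesk.com/api/v2/tickets/{ticket_id}"

# Query parameters: appended to the URL as ?name=value pairs.
params = {"include": "conversations"}

# Header parameters: sent as HTTP request headers.
headers = {"Content-Type": "application/json"}

response = requests.get(url, auth=(API_KEY, "X"), params=params, headers=headers)
response.raise_for_status()
print(response.json())
```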

Post Body = JSON

A JSON-formatted body to send as part of a POST request.
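For example, a ticket-creation call (POST /api/v2/tickets) carries a JSON body like the sketch below. The field names come from the Freshdesk API documentation; the values are placeholders.

```python
import requests

DOMAIN = "yourcompany"    # placeholder Freshdesk domain
API_KEY = "your-api-key"  # placeholder credential

# Example JSON body for POST /api/v2/tickets; all values are placeholders.
post_body = {
    "subject": "Support needed",
    "description": "Details of the issue...",
    "email": "customer@example.com",
    "priority": 1,  # 1 = Low in the Freshdesk API
    "status": 2,    # 2 = Open in the Freshdesk API
}

response = requests.post(
    f"https://{DOMAIN}.freshdesk.com/api/v2/tickets",
    auth=(API_KEY, "X"),
    json=post_body,  # requests serializes the dict and sets the JSON content type
)
response.raise_for_status()
```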


Page Limit = integer

The maximum number of records to return per page of results.
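For background, the Freshdesk API pages its results through the per_page and page query parameters (per_page is capped at 100 on most endpoints) and signals that more pages remain with a Link response header; within the pipeline, Page Limit caps the records requested per page. A rough sketch of the underlying pattern, with placeholder credentials:

```python
import requests

DOMAIN = "yourcompany"    # placeholder Freshdesk domain
API_KEY = "your-api-key"  # placeholder credential

records, page = [], 1
while True:
    response = requests.get(
        f"https://{DOMAIN}.freshdesk.com/api/v2/tickets",
        auth=(API_KEY, "X"),
        params={"per_page": 100, "page": page},  # page limit of 100 records
    )
    response.raise_for_status()
    batch = response.json()
    records.extend(batch)
    # Freshdesk sends a Link header while further pages remain.
    if not batch or "link" not in response.headers:
        break
    page += 1
```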


Destination = drop-down

  • Cloud Storage: Load your data into your preferred cloud storage location.
  • Snowflake: Load your data into your preferred cloud data warehouse. A cloud storage location will be used for temporary staging of the data.

Click either the Cloud Storage or Snowflake tab on this page for documentation applicable to that destination type.

Folder Path = string

The folder path of the written files.


File Prefix = string

A string of characters to include at the beginning of the written files. Often used for organizing the output files.


Storage = drop-down

A cloud storage location to load your data into for storage.

Choose either Amazon S3 or Azure Blob Storage.

Click either the Amazon S3 or Azure Blob Storage tab below for documentation applicable to that storage type.

AWS Bucket = drop-down

An AWS S3 bucket to load data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.

Warehouse = drop-down

The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.


Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Table Name = string

The name of the table to be created.


Create Table Mode = drop-down

  • Replace if exists: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Truncate and insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted each time this pipeline runs.
  • Fail if exists: If the specified table name already exists, this pipeline will fail to run.
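Conceptually, the three modes correspond to the Snowflake statements sketched below. This is an illustration of the semantics, not the connector's actual implementation; the connection values, table name, and columns are placeholders.

```python
import snowflake.connector

# Placeholder connection values for illustration only.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password"
)
cur = conn.cursor()
table = "MY_DB.MY_SCHEMA.FRESHDESK_TICKETS"  # database.schema.table target

mode = "Replace if exists"
if mode == "Replace if exists":
    # Drop and recreate the table on every run.
    cur.execute(f"CREATE OR REPLACE TABLE {table} (ID NUMBER, SUBJECT VARCHAR)")
elif mode == "Truncate and insert":
    # Keep the table definition but clear all rows before loading.
    cur.execute(f"TRUNCATE TABLE IF EXISTS {table}")
elif mode == "Fail if exists":
    # A plain CREATE TABLE errors out if the table already exists.
    cur.execute(f"CREATE TABLE {table} (ID NUMBER, SUBJECT VARCHAR)")
```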

Clean Staged Files = boolean

  • Yes: Staged files will be destroyed after data is loaded.
  • No: Staged files are retained in the staging area after data is loaded.

Stage Platform = drop-down

Choose a data staging platform using the drop-down menu.

  • S3: Stage your data on an AWS S3 bucket.
  • Snowflake: Stage your data on a Snowflake internal stage.

Click one of the tabs below for documentation applicable to that staging platform.

AWS Bucket = drop-down

An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.

Internal Stage Type = drop-down

A Snowflake internal stage type. Currently, only type User is supported.

Read Choosing an Internal Stage for Local Files to learn more about internal stage types and the usage of each.
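For context, every Snowflake user has a user stage, referenced in SQL as @~. A minimal sketch of listing the files staged there, with placeholder connection values:

```python
import snowflake.connector

# Placeholder connection values for illustration only.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password"
)
cur = conn.cursor()

# LIST @~ returns one row per staged file: name, size, md5, last_modified.
for name, size, md5, last_modified in cur.execute("LIST @~"):
    print(name, size)
```

Files staged here are removed after loading when Clean Staged Files is set to Yes.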

Storage Account = drop-down

Select a storage account linked to your desired blob container to be used for staging the data. For more information, read Storage account overview.


Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.