
Datadog

This page describes how to configure a Datadog data source. With Data Loader, you can replicate and load your source data into your target destination.

Datadog is a Flex connector. In the Data Productivity Cloud, Flex connectors are preconfigured connectors that support a number of endpoints (listed below).

You can use the Datadog connector in its preconfigured state, or edit the connector by adding or amending the available endpoints to suit your use case. You can edit Flex connectors in the Custom Connector user interface.

Schema Drift Support: Yes. Read Schema Drift to learn more.

Return to any page of this wizard by clicking Previous.

Click X in the upper-right of the UI and then click Yes, discard to close the pipeline creation wizard.


Prerequisites

Read the Allowed IP addresses topic before you begin. You may not be able to connect to certain data sources without first allowing the Batch IP addresses. In these circumstances, connection tests will always fail and you will not be able to complete the pipeline.


Endpoints

The following endpoints are available by default for the Datadog Flex connector:

| Endpoint Name | Description | Documentation |
| --- | --- | --- |
| Get All Dashboard Lists | Get all dashboard lists. | Click here |
| Get All Downtimes | Get all scheduled downtimes. | Click here |
| Get All Teams | Get all teams. | Click here |
| List All Users | Get the list of all users in the organization. | Click here |
| Search Monitors | Search and filter your monitor details. | Click here |
| Get All Monitor Details | Get details about the specified monitor from your organization. | Click here |
| Get All Synthetics Tests | Get the list of all Synthetic tests. | Click here |
| Get All Dashboards | Get all dashboards. | Click here |
| Get List of Events | Query and filter the event stream by time, priority, sources, and tags. | Click here |
| Get All Hosts | Search hosts by name, alias, or tag. | Click here |
| List IP Ranges | Get information about Datadog IP ranges. | Click here |
| Get All API Keys | List all API keys available for your account. | Click here |
| Get All Application Keys | List all application keys available for your organization. | Click here |
| Get All SLOs | Get a list of service level objective objects for your organization. | Click here |
| Search Logs | Return logs that match a log search query. | Click here |
| Get Metrics | Get metadata about a specific metric. | Click here |
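
Each endpoint corresponds to a REST call against the Datadog API, which the connector issues for you. Purely as an illustration of what one of these endpoints returns, here is a minimal Python sketch, assuming the `requests` library, the US Datadog site (`api.datadoghq.com`), and placeholder credentials, calling the Get All Dashboards endpoint:

```python
import requests

# Placeholder credentials: substitute your own Datadog API and application keys.
HEADERS = {
    "DD-API-KEY": "<your-api-key>",
    "DD-APPLICATION-KEY": "<your-application-key>",
}

# Get All Dashboards maps to GET /api/v1/dashboard on the Datadog API.
response = requests.get("https://api.datadoghq.com/api/v1/dashboard", headers=HEADERS)
response.raise_for_status()

# The response body is JSON; the connector flattens rows like these into your target table.
for dashboard in response.json().get("dashboards", []):
    print(dashboard["id"], dashboard["title"])
```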

Create pipeline

  1. In Data Loader, click Add pipeline.
  2. Choose Datadog from the grid of data sources. You can also use the search bar.

:::info{title='Note'} When you create a Datadog Flex connector, it will become accessible from the Custom Connectors tab of the Choose sources menu. Read Flex connector setup for more information. :::


Choose endpoints

Select one or more endpoints to use. Use the arrow buttons to move endpoints into the Endpoints to extract and load listbox, and drag and drop to reorder them. Hold SHIFT to select multiple endpoints at once.

Click Continue with X endpoints to move forward.


Configure your endpoints

You need to configure each endpoint you wish to use. These instructions assume you have kept Configuration Mode set to "Basic".

General

The General tab displays the endpoint URL. In this tab, you can set your data warehouse table name and choose either "Basic" or "Advanced" configuration.

  1. Provide a data warehouse table name.
  2. Choose either Basic or Advanced. Advanced configuration requires more manual user input. Read Custom Connector batch pipeline to learn more.

Authentication

Datadog uses an API key for authentication.

  1. Read the Datadog documentation to learn how to acquire an API key.
  2. Specify the required key.
  3. Use the Manage Passwords dialog to save your API key value as a password entry.
    1. Click Manage.
    2. Click + Add new password.
    3. In the Add new password dialog, provide a unique, descriptive password label.
    4. Provide the literal value of the password.
    5. Click Save password.
    6. Click Done.
    7. In the API Key Value drop-down menu, choose your newly created password.
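
As an optional sanity check outside Data Loader, you can confirm that an API key works before saving it as a password entry. A minimal sketch, assuming Python with the `requests` library and the US Datadog site; Datadog's `/api/v1/validate` endpoint requires only the API key:

```python
import requests

API_KEY = "<your-api-key>"  # placeholder; use the key you generated in Datadog

# Datadog's validate endpoint returns {"valid": true} for a working API key.
response = requests.get(
    "https://api.datadoghq.com/api/v1/validate",
    headers={"DD-API-KEY": API_KEY},
)
print(response.status_code, response.json())  # expect 200 and {"valid": true}
```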

Behaviour (advanced mode)

In the Behaviour tab you can choose which elements you want to include as columns in the target table. By default, all elements are selected.


Parameters

  1. The Query and Body tabs are not required.
  2. In the URI tab, set the Datadog API version number, for example v1. Refer to the Datadog documentation for the latest version of the API to use.
  3. In the Header tab, enter the Application key. Refer to Application Keys for information on adding an Application key.
  4. Click Continue when you are happy with the configuration. A correctly configured endpoint displays a green checkmark, and red warnings disappear from each tab.
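
To see how these two settings fit into the request the connector builds, here is a hedged sketch, again assuming the `requests` library and the US Datadog site, of the Get All Monitor Details endpoint with the version segment (`v1`) in the URI and the application key sent as a header:

```python
import requests

API_VERSION = "v1"  # the value set in the URI tab

# The API key authenticates the request; the application key (Header tab) authorizes it.
headers = {
    "DD-API-KEY": "<your-api-key>",
    "DD-APPLICATION-KEY": "<your-application-key>",
}

# The version number becomes a path segment in the endpoint URL.
url = f"https://api.datadoghq.com/api/{API_VERSION}/monitor"
monitors = requests.get(url, headers=headers).json()
print(len(monitors), "monitors returned")
```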

Parameters (advanced mode)

  1. The Query and Body tabs are not required.
  2. In the URI tab, set the Datadog API version number, for example v1.
  3. In the Header tab, enter the Application key. Refer to Application Keys for information on adding an Application key.
  4. Use the Dynamic? toggle to select whether the parameter is dynamic (toggle on/green) or static (toggle off/gray).

    • Static parameters have an unchanging value that you must type into the Value field.
    • Dynamic parameters have a behaviour dependent on which Parser is selected from the drop-down: DATE or STRING.

    The DATE parser retrieves source data based on some date in your current data set. To define which date is used, create a variable (see step 5, below) mapping to that date. The MAX variable type identifies the maximum value of the mapped field in your data set and loads new data based on that value. For example, if the maximum (latest) date in the current data set is one week ago, all data newer than one week ago will be loaded when the pipeline runs. A minimal sketch of this incremental pattern follows this list.

    The STRING parser allows you to write a custom query that will be used in the API call.

  5. Add or select a Variable after selecting a parser type. To set a new variable:

    1. Click Add new variable.
    2. Select whether the variable type is MAX or DATE. This will determine how it is used with a parameter (see below).
    3. Name the variable. Names must be unique.
    4. Select which data field the variable maps to.
    5. Click Add.
  6. Configure the parser if required. Click the cog wheel icon next to the variable you have created to open the Version settings.
    1. Choose the required variable, if not already selected.
    2. Choose an appropriate API format.
    3. Choose a suitable timeframe from the Shift drop-down menu.
    4. Choose the required Data warehouse format from the drop-down menu.
    5. Click Save when finished.
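
The DATE parser with a MAX variable implements a simple incremental-load pattern: find the latest date already loaded, then request only newer records. The following is a minimal Python sketch of that logic under stated assumptions; the rows and the `fetch_events_since` helper are hypothetical stand-ins for your target table and the connector's API call, which Data Loader performs for you:

```python
from datetime import datetime, timezone

# Illustrative rows already in the target table, with the mapped date field.
loaded_rows = [
    {"id": 1, "date_happened": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "date_happened": datetime(2024, 1, 8, tzinfo=timezone.utc)},
]

# A MAX variable resolves to the latest value of the mapped field in the data set.
max_loaded = max(row["date_happened"] for row in loaded_rows)

def fetch_events_since(since: datetime) -> list[dict]:
    """Hypothetical stand-in for the connector's API call, e.g. Get List of
    Events with a start parameter derived from `since`."""
    return []  # the real call would return only events newer than `since`

# On the next run, only data newer than the MAX value is requested and loaded.
new_rows = fetch_events_since(max_loaded)
```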

Keys (advanced mode)

In the Keys tab:

  1. Add a key column by selecting a property from the drop-down. Use the + Add property button to add further properties as required.
  2. Select required parameters from the drop-down menu.
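
Key columns typically identify each record uniquely so that re-extracted rows update existing records rather than duplicating them. A minimal sketch of that upsert behaviour, assuming (this is an assumption, not a description of Data Loader internals) that matching keys replace existing rows, using a hypothetical `id` property as the key column:

```python
# Existing rows in the target table, keyed on the "id" property chosen in the Keys tab.
existing = {
    101: {"id": 101, "name": "checkout latency", "status": "OK"},
}

# Newly extracted rows; id 101 overlaps with an existing record.
incoming = [
    {"id": 101, "name": "checkout latency", "status": "Alert"},
    {"id": 102, "name": "signup errors", "status": "OK"},
]

# Upsert: rows with a matching key replace existing rows; new keys are appended.
for row in incoming:
    existing[row["id"]] = row

print(sorted(existing))  # [101, 102]
```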

Choose destination

Choose an existing destination or click Add a new destination.

- Read Set up Snowflake to configure your Snowflake account to use Snowflake as a destination within Data Loader.
- Read Connect to Snowflake to use Snowflake as your destination for batch-loading a pipeline.


Set frequency

| Property | Description |
| --- | --- |
| Pipeline name | A descriptive label for your pipeline. This is how the pipeline appears on the pipeline dashboard and how Data Loader refers to the pipeline. |
| Sync every | The frequency at which the pipeline should sync. The minimum frequency is every 5 minutes. Day values include 1-7. Hour values include 1-23. Minute values include 5-59. The input is also the length of delay before the first sync. |

:::info{title='Note'} Currently, you can't specify a start time. :::

Once you are happy with your pipeline configuration, click Create pipeline to complete the process and add the pipeline to your dashboard.


Support

For any queries or assistance, read Getting support or visit our support portal.