
Marketo

This page describes how to configure a Marketo data source. With Data Loader, you can replicate and load your source data into your target destination.

Schema Drift Support: Yes. Read Schema Drift to learn more.

Return to any page of this wizard by clicking Previous.

Click X in the upper-right of the UI and then click Yes, discard to close the pipeline creation wizard.


Prerequisites

  • Read the Allowed IP addresses topic before you begin. You may not be able to connect to certain data sources without first allowing the Batch IP addresses. In these circumstances, connection tests will always fail and you will not be able to complete the pipeline.
  • Kafka Broker: Confluent Platform 3.3.0 or above, or Kafka 0.11.0 or above.
  • Kafka Connect: Confluent Platform 4.1.0 or above, or Kafka 1.1.0 or above.
  • Java 1.8.
  • This pipeline queries system tables to find metadata for your data sources, and can result in more API calls than expected. We advise keeping the frequency of these jobs as low as possible. Contact your Marketo administrator to gauge your API usage.
  • We advise against creating pipelines with more than 5 sources, because the Marketo API enforces a limit of 100 calls per 20 seconds per instance.
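If you also call the Marketo API from your own scripts, a sliding-window throttle can keep you under the 100-calls-per-20-seconds limit. The sketch below is illustrative only; the class and parameter names are assumptions, not part of Data Loader:

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window throttle: allow at most `max_calls` per `period` seconds."""

    def __init__(self, max_calls=100, period=20.0,
                 clock=time.monotonic, sleep=time.sleep):
        self.max_calls = max_calls
        self.period = period
        self.clock = clock    # injectable for testing
        self.sleep = sleep
        self.calls = deque()  # timestamps of recent calls

    def acquire(self):
        """Block until another call is allowed, then record it."""
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # Wait until the oldest call leaves the window.
            self.sleep(self.period - (now - self.calls[0]))
            now = self.clock()
            while self.calls and now - self.calls[0] >= self.period:
                self.calls.popleft()
        self.calls.append(now)
```

Call `acquire()` before each Marketo request; the deque keeps only timestamps inside the current window, so memory stays bounded by `max_calls`.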

Create pipeline

  1. In Data Loader, click Add pipeline.
  2. Choose Marketo from the grid of data sources.
  3. Choose Batch Loading.

Connect to Marketo

Configure the Marketo connection settings, specifying the following:

Marketo Connection: Select a connection from the drop-down menu, or click Add Connection if one doesn't exist.

Connection Name: Give a unique name for the connection, and click Connect. A new browser tab will open, where Marketo will ask you to confirm authorization using valid credentials.

Endpoint: Enter the REST endpoint for your Marketo data source.

Advanced settings: Additional JDBC parameters or connection settings. Expand the Advanced settings, and choose a parameter from the drop-down menu. Enter a value for the parameter, and click Add parameter for any extra parameters you want to add. For a list of compatible connection properties, read Allowed connection properties.

To create a new connection, you will need to enter a Client ID and Client Secret, which you can obtain from your Marketo account. To obtain these details, log in to the Marketo Developer Portal and follow the instructions in Marketo Query Authentication Guide.
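Under Marketo's REST authentication flow, the Client ID and Client Secret are exchanged for an OAuth access token at your instance's identity service (the `/identity/oauth/token` endpoint with the `client_credentials` grant). A minimal sketch of that exchange; the function names and example base URL are illustrative:

```python
import json
import urllib.parse
import urllib.request

def token_url(identity_base, client_id, client_secret):
    """Build the identity-service URL that exchanges a Client ID and
    Client Secret for an access token (client_credentials grant)."""
    params = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    return f"{identity_base.rstrip('/')}/oauth/token?{params}"

def fetch_token(identity_base, client_id, client_secret):
    """Network call: requires a reachable Marketo instance."""
    url = token_url(identity_base, client_id, client_secret)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["access_token"]
```

The returned token is short-lived, which is one reason Data Loader stores the Client ID and Client Secret rather than a token.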

Click Test and Continue to test your settings and move forward. You can't continue if the test fails.


Choose tables

Choose a Marketo schema from the drop-down list.

Choose any tables you wish to include in the pipeline. Use the arrow buttons to move tables to the Tables to extract and load listbox and then reorder any tables with click-and-drag. Additionally, select multiple tables using the SHIFT key.

Click Continue with X tables to move forward.


Review your data set

Choose the columns from each table to include in the pipeline. By default, Data Loader selects all columns from a table.

Click Configure on a table to open Configure table. This dialog lists columns in a table and the data type of each column. Additionally, you can set a primary key and assign an incremental column state to a column.

  • Primary Key columns should represent a true PRIMARY KEY that uniquely identifies each record in a table. Composite keys are supported, but you must specify every column that makes up the key. Data Loader uses the primary key to prevent duplicate records; jobs may fail or replicate data incorrectly if the key does not uniquely identify each record.
  • Make sure an Incremental column is a true change data capture (CDC) column that can identify whether there has been a change for each record in the table. This column should be a TIMESTAMP/DATE/DATETIME type or an INTEGER type representing a date key or UNIX timestamp.
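Conceptually, an incremental column works as a watermark: each sync extracts only records whose value exceeds the last one seen, then advances the watermark. A schematic of that logic (the table data, column name, and helper function are hypothetical, not Data Loader internals):

```python
from datetime import datetime

def incremental_filter(rows, column, watermark):
    """Return rows whose incremental column is newer than the stored
    watermark, plus the new high-water mark to persist for the next sync."""
    fresh = [r for r in rows if r[column] > watermark]
    new_watermark = max((r[column] for r in fresh), default=watermark)
    return fresh, new_watermark

rows = [
    {"id": 1, "updatedAt": datetime(2024, 1, 1)},
    {"id": 2, "updatedAt": datetime(2024, 1, 5)},
    {"id": 3, "updatedAt": datetime(2024, 1, 9)},
]
fresh, wm = incremental_filter(rows, "updatedAt", datetime(2024, 1, 3))
# `fresh` holds rows 2 and 3; `wm` advances to 2024-01-09.
```

This is why the column must be monotonic for changed records: if updates don't advance the column's value, they fall behind the watermark and are silently skipped.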

Click Add and remove columns to modify a table before a load. Use the arrow buttons to move columns out of the Columns to extract and load listbox. Order columns with click-and-drag. Select multiple columns using SHIFT.

Click Done adding and removing to continue and then click Done.

Click Continue once you have configured each table.


Choose destination

  1. Choose an existing destination or click Add a new destination.
  2. Select a destination from Snowflake, Amazon Redshift, or Google BigQuery.

Set frequency

Pipeline name: A descriptive label for your pipeline. This is how the pipeline appears on the pipeline dashboard and how Data Loader refers to the pipeline.

Sync every: The frequency at which the pipeline should sync. Day values range from 1 to 7, hour values from 1 to 23, and minute values from 5 to 59. This value also sets the delay before the first sync.

Currently, you can't specify a start time.

Once you are happy with your pipeline configuration, click Create pipeline to complete the process and add the pipeline to your dashboard.