Facebook AdAccounts
This page describes how to configure a Facebook AdAccounts data source. With Data Loader, you can replicate and load your source data into your target destination.
- Any existing Facebook OAuth connection must be deleted within the Manage Connections dialog, and recreated.
- Once recreated, all existing Facebook AdAccounts, AdInsights, and Content Insights pipelines that use the OAuth connection should run successfully.
- If you encounter Facebook API rate-limit issues, recreate the pipeline with fewer views/tables per pipeline, and schedule pipelines to run at wider intervals so the rate limit can recover between runs.
- One pipeline will be created per target; this is passed as the variable `key_value`.
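The rate-limit guidance above can be sketched as a simple chunking step: split the views you want to replicate across several smaller pipelines so each one makes fewer API calls per run. The function name and default chunk size below are assumptions for illustration; Data Loader itself does not expose this helper.

```python
# Hypothetical helper: group source views so each pipeline extracts at most
# max_views_per_pipeline views, reducing API calls per scheduled run.
def split_views_into_pipelines(views, max_views_per_pipeline=2):
    """Return a list of view groups, one group per pipeline."""
    return [
        views[i:i + max_views_per_pipeline]
        for i in range(0, len(views), max_views_per_pipeline)
    ]
```

Pairing smaller pipelines with wider sync intervals gives the Facebook API rate limit time to recover between runs.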
Schema Drift Support: Yes. Read Schema Drift to learn more.
Return to any page of this wizard by clicking Previous.
Click X in the upper-right of the UI and then click Yes, discard to close the pipeline creation wizard.
Prerequisites
- Read the Allowed IP addresses topic before you begin. You may not be able to connect to certain data sources without first allowing the Batch IP addresses. In these circumstances, connection tests will always fail and you will not be able to complete the pipeline.
Create pipeline
- In Data Loader, click Add pipeline.
- Choose Facebook AdAccounts from the grid of data sources.
- Choose Batch Loading.
Connect to Facebook AdAccounts
Configure the Facebook AdAccounts database connection settings, specifying the following:
Property | Description |
---|---|
Facebook AdAccounts Connection | Select a connection from the drop-down menu, or click Add Connection if one doesn't exist. |
Connection Name | Give a unique name for the connection, and click Connect. A new browser tab will open, where Facebook will ask you to confirm authorization using valid credentials. |
Account Value | The ID for your Facebook Ad account. It should take the form `act_XXXXXXXXXXXXXXXXXX`. |
Advanced settings | Additional JDBC parameters or connection settings. Expand the Advanced settings, and choose a parameter from the drop-down menu. Enter a value for the parameter, and click Add parameter for any extra parameters you want to add. For a list of compatible connection properties, read Allowed connection properties. |
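If you want to sanity-check the Account Value before pasting it into the wizard, the `act_<digits>` shape from the table above can be verified with a short pattern check. This helper is a hypothetical sketch, not part of Data Loader; the pattern only requires one or more digits, since the digit count can vary by account.

```python
import re

# act_ prefix followed by digits, per the Account Value format above.
ACCOUNT_VALUE_PATTERN = re.compile(r"act_\d+")

def is_valid_account_value(value: str) -> bool:
    """Return True if the value looks like a Facebook Ad account ID."""
    return ACCOUNT_VALUE_PATTERN.fullmatch(value) is not None
```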
Click Test and Continue to test your settings and move forward. You can't continue if the test fails.
Choose tables
Choose any tables (data sources) you wish to include in the pipeline. Use the arrow buttons to move tables to the Tables to extract and load listbox, and reorder tables with click-and-drag. Additionally, select multiple tables using the SHIFT key.
Note
Data Loader will extract all available (historic) data for the Campaigns, AdAccounts, and AdCreatives views you select, for all the columns you select within each view. For the Ads and AdSets views, one year of historic data is extracted by default for the columns you select within each view.
If you encounter a rate-limit issue when querying the AdCreatives, AdSets, or Ads views, you can instead query those views using an ad set ID or ad ID. This replaces the account value (`act_XXXXXXXXXXXXX`).
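The two query shapes above differ only in which node ID anchors the request. A minimal sketch using the public Graph API URL layout is below; the API version, function name, and example IDs are assumptions for illustration, and Data Loader performs these calls for you.

```python
# The Graph API version here is an assumption; use the version your
# integration targets.
GRAPH_API_BASE = "https://graph.facebook.com/v19.0"

def build_edge_url(node_id: str, edge: str) -> str:
    """Build a Graph API URL for an edge under a node (account, ad set, or ad)."""
    return f"{GRAPH_API_BASE}/{node_id}/{edge}"

# Account-wide query, prone to rate limits on large accounts:
account_url = build_edge_url("act_123456789012345", "adcreatives")
# The same edge scoped to a single (hypothetical) ad set ID instead:
adset_url = build_edge_url("23850000000000000", "adcreatives")
```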
The AdAccounts, Campaigns, and AdCreatives views are truncate-loaded on each run. This means that every time the process runs, the table is fully reloaded: the target table is completely deleted, and data from Facebook AdAccounts is inserted afresh.
The Ads and AdSets views are loaded via APPEND, using UpdatedTime as the high tide mark: the maximum date/timestamp seen for the chosen incremental column, which defines change data capture. Only rows where the incremental column is equal to or greater than the high tide mark are loaded.
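The APPEND behaviour can be sketched as a filter on the incremental column. The row shape and the `UpdatedTime` field name follow the text above; the function itself is hypothetical, since Data Loader applies this logic internally.

```python
from datetime import datetime

def rows_to_append(rows, high_tide_mark):
    """Keep rows whose UpdatedTime is at or past the stored high tide mark."""
    return [row for row in rows if row["UpdatedTime"] >= high_tide_mark]

# The mark is the maximum UpdatedTime seen on the previous run.
high_tide_mark = datetime(2024, 1, 15)
rows = [
    {"id": 1, "UpdatedTime": datetime(2024, 1, 10)},  # before the mark: skipped
    {"id": 2, "UpdatedTime": datetime(2024, 1, 20)},  # after the mark: appended
]
```

On each run, only rows like `id` 2 would be appended, and the mark then advances to the newest `UpdatedTime` loaded.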
Click Continue with X tables to move forward.
Review your data set
Choose the columns from each table to include in the pipeline. By default, Data Loader selects all columns from a table.
Click Configure on a table to open Configure table. This dialog lists columns in a table and the data type of each column. Additionally, you can set a primary key and assign an incremental column state to a column.
Click Add and remove columns to modify a table before a load. Use the arrow buttons to move columns out of the Columns to extract and load listbox. Order columns with click-and-drag. Select multiple columns using the SHIFT key.
Click Done adding and removing to continue, and then click Done.
Click Continue once you have configured each table.
Choose destination
- Choose an existing destination or click Add a new destination.
- Select a destination from Snowflake, Amazon Redshift, or Google BigQuery.
Set frequency
Property | Description |
---|---|
Pipeline name | A descriptive label for your pipeline. This is how the pipeline appears on the pipeline dashboard and how Data Loader refers to the pipeline. |
Sync every | The frequency at which the pipeline should sync. Day values include 1-7. Hour values include 1-23. Minute values include 5-59. This value also sets the delay before the first sync runs. |
Currently, you can't specify a start time.
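The allowed ranges in the table above can be expressed as a small lookup; the helper and its names are assumptions for illustration, since the wizard enforces these limits for you.

```python
# Allowed "Sync every" values per unit, per the frequency table above.
VALID_SYNC_RANGES = {
    "days": range(1, 8),      # 1-7
    "hours": range(1, 24),    # 1-23
    "minutes": range(5, 60),  # 5-59
}

def is_valid_sync_interval(unit: str, value: int) -> bool:
    """Check a Sync every value against the allowed range for its unit."""
    return value in VALID_SYNC_RANGES.get(unit, ())
```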
Once you are happy with your pipeline configuration, click Create pipeline to complete the process and add the pipeline to your dashboard.