
Facebook Content Insights

This page describes how to configure a Facebook Content Insights data source. With Data Loader, you can replicate and load your source data into your target destination.

:::info{title='Note'}

  • Any existing Facebook OAuth connections must be deleted within the Manage Connections dialog, and recreated.
  • Once recreated, all existing Facebook AdAccounts, AdInsights, and Content Insights pipelines that use the OAuth connection should run successfully.
  • If you encounter issues with Facebook API rate limits, recreate the pipeline with fewer views/tables specified per pipeline and schedule the pipelines to run at wider intervals so that the rate limit can recover between runs (see the sketch after this note).
  • One pipeline will be created per target; the target is passed as the variable key_value.
:::
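If you want to gauge how close the app is to its limit, the Facebook Graph API reports usage in the X-App-Usage response header. The following is a minimal sketch for inspecting that header outside of Data Loader; the access token, Page ID, and API version are placeholders, and Data Loader itself issues these requests for you through the OAuth connection.

```python
# Minimal sketch: checking Graph API app-level rate-limit usage.
# ACCESS_TOKEN, PAGE_ID, and the API version are placeholders for illustration.
import json
import requests

ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # placeholder
PAGE_ID = "YOUR_PAGE_ID"                 # placeholder

resp = requests.get(
    f"https://graph.facebook.com/v19.0/{PAGE_ID}",
    params={"access_token": ACCESS_TOKEN},
    timeout=30,
)

# X-App-Usage holds JSON percentages; values near 100 mean further
# calls may be throttled until the usage window recovers.
usage = json.loads(resp.headers.get("x-app-usage", "{}"))
print("call_count %:", usage.get("call_count"))
print("total_time %:", usage.get("total_time"))
print("total_cputime %:", usage.get("total_cputime"))
```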

Schema Drift Support: Yes. Read Schema Drift to learn more.

Return to any page of this wizard by clicking Previous.

Click X in the upper-right of the UI and then click Yes, discard to close the pipeline creation wizard.


Prerequisites

  • Read the Allowed IP addresses topic before you begin. You may not be able to connect to certain data sources without first allowing the Batch IP addresses. In these circumstances, connection tests will always fail and you will not be able to complete the pipeline.

Create pipeline

  1. In Data Loader, click Add pipeline.
  2. Choose Facebook Content Insights from the grid of data sources.
  3. Choose Batch Loading.

Connect to Facebook Content Insights

Configure the Facebook Content Insights connection settings, specifying the following:

| Property | Description |
|---|---|
| Facebook Content Insights Connection | Select a connection from the drop-down menu, or click Add Connection if one doesn't exist. |
| Connection Name | Give the connection a unique name and click Connect. A new browser tab will open, where Facebook will ask you to confirm authorization using valid credentials. |
| Target Value | Enter the Page ID or Post ID for the data you wish to retrieve (see the sketch after this table). |
| Period | Use the drop-down menu to select a time period of content insights data to load. The data source InsightsByTabType isn't compatible with the periods day_28 or lifetime. |
| Advanced settings | Additional JDBC parameters or connection settings. Expand Advanced settings, choose a parameter from the drop-down menu, enter a value for the parameter, and click Add parameter for any extra parameters you want to add. For a list of compatible connection properties, read Allowed connection properties. |
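
To illustrate what the Target Value and Period settings map to, the sketch below shows the kind of Page Insights request the pipeline issues against the Facebook Graph API. The metric, API version, Page ID, and access token shown are placeholders only; Data Loader constructs the real requests from your configuration.

```python
# Minimal sketch of a Page Insights request built from a page target and a period.
# PAGE_ID, ACCESS_TOKEN, the metric, and the API version are placeholders.
import requests

ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # placeholder
PAGE_ID = "YOUR_PAGE_ID"                 # placeholder (the Target Value)

resp = requests.get(
    f"https://graph.facebook.com/v19.0/{PAGE_ID}/insights",
    params={
        "metric": "page_impressions",  # example metric only
        "period": "day",               # corresponds to the Period setting
        "access_token": ACCESS_TOKEN,
    },
    timeout=30,
)
resp.raise_for_status()

# Each entry contains the metric name, the period, and a list of values.
for row in resp.json().get("data", []):
    print(row["name"], row["period"], row["values"])
```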

Click Test and Continue to test your settings and move forward. You can't continue if the test fails.


Choose tables

Choose any tables you wish to include in the pipeline. Use the arrow buttons to move tables into the Tables to extract and load listbox, and reorder tables by clicking and dragging them. You can select multiple tables using the SHIFT key.

Click Continue with X tables to move forward.


Review your data set

Choose the columns from each table to include in the pipeline. By default, Data Loader selects all columns from a table.

Click Configure on a table to open Configure table. This dialog lists columns in a table and the data type of each column. Additionally, you can set a primary key and assign an incremental column state to a column.

:::info{title='Note'}

  • Primary Key columns should represent a true PRIMARY KEY that uniquely identifies each record in a table. Composite keys work, but you must specify all columns that compose the key. Duplicate records, as identified by the primary key, aren't permitted. Jobs may fail or replicate data incorrectly if these rules aren't applied.
  • Make sure an Incremental column is a true change data capture (CDC) column that can identify whether there has been a change for each record in the table. This column should be a TIMESTAMP/DATE/DATETIME type or an INTEGER type representing a date key or UNIX timestamp (see the sketch after this note).
:::
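
As a rough illustration of why the incremental column matters, the sketch below shows the high-watermark pattern such a column enables: only records whose incremental column is newer than the value recorded after the previous run are extracted. The column and record names are invented for this example and are not names Data Loader uses.

```python
# Minimal sketch of a high-watermark (incremental/CDC) load.
# Field names such as updated_time are illustrative placeholders.
from datetime import datetime

last_watermark = datetime(2024, 1, 1)  # stored after the previous run

records = [
    {"post_id": "1", "impressions": 120, "updated_time": datetime(2023, 12, 30)},
    {"post_id": "2", "impressions": 340, "updated_time": datetime(2024, 1, 5)},
]

# Extract only rows changed since the last run, then advance the watermark.
changed = [r for r in records if r["updated_time"] > last_watermark]
new_watermark = max((r["updated_time"] for r in changed), default=last_watermark)

print(changed)        # rows to load in this run
print(new_watermark)  # stored for the next run
```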

Click Add and remove columns to modify a table before a load. Use the arrow buttons to move columns out of the Columns to extract and load listbox. Reorder columns by clicking and dragging them. Select multiple columns using SHIFT.

Click Done adding and removing to continue and then click Done.

Click Continue once you have configured each table.


Choose destination

  1. Choose an existing destination or click Add a new destination.
  2. Select a destination from Snowflake, Amazon Redshift, or Google BigQuery.

Set frequency

| Property | Description |
|---|---|
| Pipeline name | A descriptive label for your pipeline. This is how the pipeline appears on the pipeline dashboard and how Data Loader refers to the pipeline. |
| Sync every | The frequency at which the pipeline syncs. Day values range from 1 to 7, hour values from 1 to 23, and minute values from 5 to 59. This value also sets the delay before the first sync; for example, a pipeline set to sync every 12 hours first runs 12 hours after it is created. |

Currently, you can't specify a start time.

Once you are happy with your pipeline configuration, click Create pipeline to complete the process and add the pipeline to your dashboard.