Google Custom Search

This page describes how to configure a Google Custom Search data source. With Data Loader, you can replicate and load your source data into your target destination.

Schema Drift Support: Yes. Read Schema Drift to learn more.

Return to any page of this wizard by clicking Previous.

Click X in the upper-right of the UI and then click Yes, discard to close the pipeline creation wizard.


Create pipeline

  1. In Data Loader, click Add pipeline.
  2. Choose Google Custom Search from the grid of data sources.
  3. Choose Batch Loading.

Configure the Google Custom Search connection settings, specifying the following:

API Key: A valid Google Custom Search API Key. You can acquire one while logged in to the Google Cloud console. Visit Custom Search JSON API: Introduction and select Get a Key to acquire an API Key.
Custom search ID: A Custom Search engine identifier. Visit Google's Programmable Search Engine site to get started or log in. Matillion offers additional documentation for acquiring your Custom Search ID.
Advanced settings: Additional JDBC parameters or connection settings. Click Advanced settings, then choose a parameter from the dropdown menu and enter a value for it. Click Add parameter for each extra parameter you want to add. For a list of compatible connection properties, read Allowed connection properties.

Click Test and Continue to test your settings and move forward. You can't continue if the test fails for any reason.
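Under the hood, the connection test presumably amounts to verifying that your API Key and Custom search ID can authenticate against Google's public Custom Search JSON API. As an illustration only (the placeholder credentials are assumptions, and Data Loader performs this check for you), an equivalent test using Python's standard library might look like this:

```python
import json
import urllib.error
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"          # assumption: replace with your real API Key
SEARCH_ENGINE_ID = "YOUR_CX_ID"   # assumption: replace with your Custom search ID

# A one-result query is enough to confirm the key and engine ID are valid.
params = urllib.parse.urlencode({
    "key": API_KEY,
    "cx": SEARCH_ENGINE_ID,
    "q": "test",
    "num": 1,
})
url = f"https://www.googleapis.com/customsearch/v1?{params}"

try:
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    print("Connection OK:", data["searchInformation"]["totalResults"], "results")
except urllib.error.HTTPError as err:
    # A 400 or 403 response typically indicates a bad key or engine ID.
    print("Connection failed:", err.code, err.reason)
```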


Filter out search results

Search terms: One or more search terms for your search query. At least one term is required. A search term can be as short as a single character.
Search safety: Choose an appropriate safety setting for your search query. The default is off.
Site: The domain of the site you wish to search, for example, https://news.google.com/. This parameter is optional.
Language restriction: Enter a language code to restrict the search results to that language only. The full range of available languages is listed below.
Truncate tables: Flattens and converts a nested data layer object into a new object with only one layer of key/value pairs. The default is No. A sketch of this flattening follows this list.
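As a minimal sketch of the kind of flattening the Truncate tables option describes (the underscore-joined key naming is an assumption for illustration; Data Loader's actual column naming may differ):

```python
def flatten(obj: dict, parent_key: str = "", sep: str = "_") -> dict:
    """Collapse a nested dict into a single layer of key/value pairs.

    Assumption: nested keys are joined with a separator; the real
    naming scheme used by Data Loader may differ.
    """
    flat = {}
    for key, value in obj.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, new_key, sep))
        else:
            flat[new_key] = value
    return flat

# Example: a nested search result item collapses to one layer.
nested = {"title": "Example", "pagemap": {"metatags": {"og:type": "article"}}}
print(flatten(nested))
# {'title': 'Example', 'pagemap_metatags_og:type': 'article'}
```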

Available languages

A-G: lang_ar, lang_bg, lang_ca, lang_cs, lang_da, lang_de, lang_el, lang_en, lang_es, lang_et, lang_fi, lang_fr
H-N: lang_hr, lang_hu, lang_id, lang_is, lang_it, lang_iw, lang_ja, lang_ko, lang_lt, lang_lv, lang_nl, lang_no
O-Z: lang_pl, lang_pt, lang_ro, lang_ru, lang_sk, lang_sl, lang_sr, lang_sv, lang_tr, lang_zh-cn, lang_zh-tw
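These filter properties correspond closely to parameters of Google's public Custom Search JSON API: the search terms map to q, the safety setting to safe, the site to siteSearch, and the language code to lr. Data Loader builds the request for you; the sketch below shows the equivalent direct call, with placeholder credentials and example filter values as assumptions:

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"          # assumption: your real API Key
SEARCH_ENGINE_ID = "YOUR_CX_ID"   # assumption: your Custom search ID

# Each pipeline filter maps onto a Custom Search JSON API parameter.
params = urllib.parse.urlencode({
    "key": API_KEY,
    "cx": SEARCH_ENGINE_ID,
    "q": "data loader",               # Search terms
    "safe": "off",                    # Search safety (default: off)
    "siteSearch": "news.google.com",  # Site (optional)
    "lr": "lang_en",                  # Language restriction
})
url = f"https://www.googleapis.com/customsearch/v1?{params}"

with urllib.request.urlopen(url) as response:
    results = json.load(response)
for item in results.get("items", []):
    print(item["title"], item["link"])
```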

Choose tables

Choose any tables you wish to include in the pipeline. Use the arrow buttons to move tables to the Tables to extract and load listbox and then reorder any tables with click-and-drag. Additionally, select multiple tables using the SHIFT key.

Click Continue with X tables to move forward.


Review your data set

Choose the columns from each table to include in the pipeline. By default, Data Loader selects all columns from a table.

Click Configure on a table to open Configure table. This dialog lists columns in a table and the data type of each column. Additionally, you can set a primary key and assign an incremental column state to a column.

Use the arrow buttons to move columns out of the Columns to extract and load listbox. Order columns with click-and-drag. Select multiple columns using SHIFT.

When you have finished configuring the table, click Done.

Click Continue once you have configured each table.
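This page doesn't describe how Data Loader uses the incremental column internally, but conceptually an incremental column typically acts as a high-watermark: each sync extracts only rows whose column value exceeds the largest value seen on the previous run. A rough sketch of that idea (the column name, timestamp type, and state handling are assumptions for illustration):

```python
from datetime import datetime, timezone

class HighWatermark:
    """Track the largest incremental-column value seen so far.

    Assumption: the incremental column is a timestamp named "updated_at";
    any monotonically increasing column works the same way.
    """

    def __init__(self, start: datetime):
        self.value = start

    def filter_new(self, rows, column="updated_at"):
        # Keep only rows newer than the stored watermark, then advance it.
        new_rows = [r for r in rows if r[column] > self.value]
        if new_rows:
            self.value = max(r[column] for r in new_rows)
        return new_rows

# Later syncs only pick up rows added or updated since the previous run.
wm = HighWatermark(datetime(2024, 1, 1, tzinfo=timezone.utc))
rows = [
    {"id": 1, "updated_at": datetime(2024, 1, 2, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2023, 12, 30, tzinfo=timezone.utc)},
]
print(wm.filter_new(rows))  # only the row with id 1
```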


Choose destination

  1. Choose an existing destination or click Add a new destination.
  2. Select a destination from Snowflake, Amazon Redshift, or Google BigQuery.

Set frequency

Pipeline name: A descriptive label for your pipeline. This is how the pipeline appears on the pipeline dashboard and how Data Loader refers to the pipeline.
Sync every: The frequency at which the pipeline should sync. Day values are 1-7, hour values are 1-23, and minute values are 5-59. This value also sets the delay before the first sync.

Currently, you can't specify a start time.
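Because the interval doubles as the initial delay, a pipeline set to sync every 6 hours first runs six hours after creation, then every six hours thereafter. A small sketch of that arithmetic (the 6-hour interval is just an example):

```python
from datetime import datetime, timedelta, timezone

created_at = datetime.now(timezone.utc)
interval = timedelta(hours=6)  # "Sync every 6 hours"

# The interval is also the delay before the first sync.
first_sync = created_at + interval
second_sync = first_sync + interval
print(first_sync, second_sync)
```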

Once you are happy with your pipeline configuration, click Create pipeline to complete the process and add the pipeline to your dashboard.