Google Custom Search Query
Overview
The Google Custom Search Query component uses the Google Custom Search API to retrieve data and load it into a table—this stages the data, so the table is reloaded each time. You can then use transformation components to enrich and manage the data in permanent tables.
Warning
This component is potentially destructive. If the target table undergoes a change in structure, it will be recreated. Otherwise, the target table is truncated. Setting the load option Recreate Target Table to Off will prevent both recreation and truncation. Do not modify the target table structure manually.
Properties
Snowflake Properties
Name
= string
A human-readable name for the component.
Basic/Advanced Mode
= drop-down
- Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this mode will be sufficient.
- Advanced: This mode will require you to write an SQL-like query to call data from Google Custom Search. The available fields and their descriptions are documented in the data model.
There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.
Note
While the query is exposed in an SQL-like language, the exact semantics can be surprising. For example, filtering on a column can return more data than not filtering on it, which would be impossible in regular SQL.
API Key
= string
Google Custom Search uses an API key to authenticate third-party applications. Enter the API key for your GCP project, which must be set up in advance. For help acquiring an API key, read the Google Custom Search Query authentication guide.
Custom Search ID
= string
The ID of the Custom Search Engine you wish to use. If you are unsure how to set up a Custom Search Engine or find your ID, refer to the Custom Search ID section below.
Connection Options
= column editor
- Parameter: A JDBC parameter supported by the database driver. The available parameters are explained in the data model. Manual setup is not usually required, since sensible defaults are assumed.
- Value: A value for the given Parameter.
SQL Query
= code editor
Input an SQL-like query, written according to the data model.
This property is only available when Basic/Advanced Mode is set to Advanced.
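As a purely illustrative sketch, an Advanced-mode query might look like the following. The table and column names (SearchResults, Title, Link, Snippet, SearchTerms) are hypothetical placeholders, not confirmed field names; the actual objects and supported syntax are defined in the data model.

```sql
-- Hypothetical Advanced-mode query; object names are placeholders, not the real data model.
-- "SearchTerms" stands in for the pseudo column that supplies the search text.
SELECT Title, Link, Snippet
FROM SearchResults
WHERE SearchTerms = 'matillion etl'
LIMIT 50
```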
Data Source
= drop-down
Select a data source.
Data Selection
= dual listbox
Choose one or more columns to return from the query. The columns available are dependent upon the data source selected. Move columns left-to-right to include in the query.
Data Source Filter
= column editor
- Input Column: Select an input column. The available input columns vary depending upon the data source.
- Qualifier:
- Is: Compares the column to the value using the comparator.
- Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
- Comparator: Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", and "Null". "Equal to" can match exact strings and numeric values, while other comparators, such as "Greater than" and "Less than", work only with numerics. The "Like" operator allows the wildcard character [%] to be used at the start and end of a string value to match a column. The "Null" operator matches only null values, ignoring whatever the value is set to. Not all data sources support all comparators, so only a subset of those listed may be available to choose from.
- Value: The value to be compared.
Combine Filters
= drop-down
Select whether to use the defined filters in combination with one another according to either And or Or.
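To illustrate how Basic mode assembles these settings, the sketch below shows how two Data Source Filters might be combined into a query's WHERE clause. The column names are hypothetical placeholders from an assumed data model, not actual fields.

```sql
-- Hypothetical result of two filters with Combine Filters set to And.
-- Filter 1: Input Column = Title, Qualifier = Is,  Comparator = Like,         Value = %weather%
-- Filter 2: Input Column = Rank,  Qualifier = Not, Comparator = Greater than, Value = 10
SELECT Title, Link
FROM SearchResults
WHERE Title LIKE '%weather%'
  AND NOT (Rank > 10)   -- with Combine Filters = Or, this clause would be joined with OR instead
```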
Limit
= integer
Set a numeric value to limit the number of rows that are loaded.
Type
= drop-down
- External: The data will be put into your storage location and referenced by an external table.
- Standard: The data will be staged in your storage location before being loaded into a table. This is the default setting.
Primary Keys
= dual listbox
Select one or more columns to be designated as the table's primary key.
Warehouse
= drop-down
The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.
Database
= drop-down
The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
= drop-down
The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Target Table
= string
The name of the table to be created.
Warning
This table will be recreated and will drop any existing table of the same name.
Stage
= drop-down
Select a managed stage. The special value, [Custom], will create a stage "on the fly" for use solely within this component.
Stage Platform
= drop-down
Select a staging setting.
- Snowflake Managed: Create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete.
- Existing Amazon S3 Location: Activates the S3 Staging Area property, allowing users to specify a custom staging area on Amazon S3. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.
- Existing Azure Blob Storage Location: Activates the Storage Account and Blob Container properties, allowing users to specify a custom staging location on Azure. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.
- Existing Google Cloud Storage Location: Activates the GCS Staging Area property, allowing users to specify a custom staging area within Google Cloud Storage.
Stage Authentication
= drop-down
Select an authentication method for data staging.
- Credentials: Uses the credentials configured in the environment. If no credentials have been configured, an error will occur.
- Storage Integration: Use a Snowflake storage integration to authenticate data staging. A storage integration is a Snowflake object that stores a generated identity and access management (IAM) entity for your external cloud storage, along with an optional set of allowed or blocked storage locations. To learn more, read Create Storage Integration. A minimal sketch is shown below.
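The following is a minimal sketch of creating such a storage integration in Snowflake, assuming an S3 staging bucket. The integration name, IAM role ARN, and bucket path are placeholders you would replace with your own values.

```sql
-- Minimal sketch: a Snowflake storage integration for an S3 staging location.
-- The name, role ARN, and bucket path are placeholders.
CREATE STORAGE INTEGRATION my_s3_integration
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my-matillion-staging-role'
  STORAGE_ALLOWED_LOCATIONS = ('s3://my-staging-bucket/matillion/');
```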
Storage Integration
= drop-down
Select a Snowflake storage integration from the drop-down list. Storage integrations are required to permit Snowflake to read data from and write to your cloud storage location (Amazon S3, Azure Blob Storage, Google Cloud Storage) and must be set up in advance of selection. To learn more about setting up a storage integration for use in Matillion ETL, read Storage Integration Setup Guide. Only available when Stage Authentication is set to Storage Integration.
S3 Staging Area
= S3 bucket
Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Manage Credentials for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
Use Accelerated Endpoint
= boolean
When True, data will be loaded via the s3-accelerate endpoint. Please consider the following information:
- Enabling acceleration can enhance the speed at which data is transferred to the chosen S3 bucket. However, enhanced speed is not always guaranteed. Read Using the Amazon S3 Transfer Acceleration Speed Comparison tool to learn how you can compare accelerated and non-accelerated upload speeds across Amazon S3 Regions.
- Users must manually set the acceleration configuration of an existing bucket. To learn more, see PutBucketAccelerateConfiguration in the API Reference, available in the AWS documentation.
- This property is only available if the selected S3 bucket has Amazon S3 Transfer Acceleration enabled. For more information, including how to enable this feature, read Getting started with Amazon S3 Transfer Acceleration.
- Cases may arise where Matillion ETL cannot determine whether the chosen S3 bucket has Amazon S3 Transfer Acceleration enabled. In these cases, Matillion ETL will reveal this property for user input on a "just in case" basis and may return a validation message that reads "OK - Bucket could not be validated." Similarly, if you do not have permission to get the status of the acceleration configuration (namely, the GetAccelerateConfiguration permission), Matillion ETL will again show this property "just in case".
- The default setting is False.
Storage Account
= drop-down
Select a storage account with your desired blob container to be used for staging the data. For more information, read Storage account overview.
Blob Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
GCS Staging Area
= drop-down
The URL and path of the target Google Storage bucket to be used for staging the queried data. For more information, read Creating storage buckets.
Encryption
= drop-down
Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.
- None: No encryption.
- SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more.
- SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
KMS Key ID
= drop-down
The ID of the KMS encryption key you have chosen to use in the Encryption property.
Load Options
= multiple drop-downs
- Clean Staged Files: Destroy staged files after loading data. Default is On.
- String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is Off.
- Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the existing table will be used. Default is On.
- File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
- Trim String Columns: Remove leading and trailing characters from a string column. Default is On.
- Compression Type: Set the compression type to either gzip (default) or None.
- Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
New Table Name
= string
The name of the new table to be created. Only available when Type is set to External.
Stage Database
= drop-down
Specify the stage database. The special value, [Environment Default], will use the database defined in the environment. Only available when Type is set to External.
Stage Schema
= drop-down
Specify the stage schema. The special value, [Environment Default], will use the schema defined in the environment. Only available when Type is set to External.
Stage
= drop-down
Select a stage. Only available when Type is set to External.
Auto Debug
= drop-down
Choose whether to automatically log debug information about your load. These logs can be found in the task history and should be included in support requests concerning the component. Turning this on will override any debugging connection options.
Debug Level
= drop-down
The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
- Level 1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
- Level 2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
- Level 3: Will additionally log the body of the request and the response.
- Level 4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
- Level 5: Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.
Delta Lake on Databricks Properties
Name
= string
A human-readable name for the component.
Basic/Advanced Mode
= drop-down
- Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this mode will be sufficient.
- Advanced: This mode will require you to write an SQL-like query to call data from Google Custom Search. The available fields and their descriptions are documented in the data model.
There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.
Note
While the query is exposed in an SQL-like language, the exact semantics can be surprising. For example, filtering on a column can return more data than not filtering on it, which would be impossible in regular SQL.
API Key
= string
Google Custom Search uses an API key to authenticate third-party applications. Enter the API key for your GCP project, which must be set up in advance. For help acquiring an API key, read the Google Custom Search Query authentication guide.
Custom Search ID
= string
The ID of the Custom Search Engine you wish to use. If you are unsure how to set up a Custom Search Engine or find your ID, refer to the Custom Search ID section below.
Connection Options
= column editor
- Parameter: A JDBC parameter supported by the database driver. The available parameters are explained in the data model. Manual setup is not usually required, since sensible defaults are assumed.
- Value: A value for the given Parameter.
SQL Query
= code editor
Input an SQL-like query, written according to the data model.
This property is only available when Basic/Advanced Mode is set to Advanced.
Data Source
= drop-down
Select a data source.
Data Selection
= dual listbox
Choose one or more columns to return from the query. The columns available are dependent upon the data source selected. Move columns left-to-right to include in the query.
Data Source Filter
= column editor
- Input Column: Select an input column. The available input columns vary depending upon the data source.
- Qualifier:
- Is: Compares the column to the value using the comparator.
- Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
- Comparator: Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", and "Null". "Equal to" can match exact strings and numeric values, while other comparators, such as "Greater than" and "Less than", work only with numerics. The "Like" operator allows the wildcard character [%] to be used at the start and end of a string value to match a column. The "Null" operator matches only null values, ignoring whatever the value is set to. Not all data sources support all comparators, so only a subset of those listed may be available to choose from.
- Value: The value to be compared.
Combine Filters
= drop-down
Select whether to use the defined filters in combination with one another according to either And or Or.
Limit
= integer
Set a numeric value to limit the number of rows that are loaded.
Catalog
= drop-down
Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Matillion ETL environment setup. Selecting a catalog will determine which databases are available in the next parameter.
Database
= drop-down
Select the Delta Lake database. The special value, [Environment Default], will use the database specified in the Matillion ETL environment setup.
Target Table
= string
The name of the table to be created.
Warning
This table will be recreated and will drop any existing table of the same name.
Stage Platform
= drop-down
Select a staging setting.
- AWS S3: Activates the S3 Staging Area property, allowing users to specify a custom staging area on Amazon S3.
- Azure Blob: Activates the Storage Account and Blob Container properties, allowing users to specify a custom staging location on Azure.
- Personal Staging: Uses a Databricks personal staging location. Your Matillion ETL environment connection to Delta Lake on Databricks requires the username to be "token" and the corresponding password to be a masked entry for a Databricks personal access token (AWS). If you're on Azure, the token is already set. Read Authentication using Azure Databricks personal access tokens.
Additionally, read Configure Unity Catalog storage account for CORS to learn how to configure CORS to enable Databricks to manage personal staging locations in Unity Catalog (AWS). If you're using Azure, read Configure Unity Catalog storage account for CORS.
S3 Staging Area
= S3 bucket
Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Manage Credentials for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
Storage Account
= drop-down
Select a storage account with your desired blob container to be used for staging the data. For more information, read Storage account overview.
Blob Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
Encryption
= drop-down
Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.
- None: No encryption.
- SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more.
- SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
KMS Key ID
= drop-down
The ID of the KMS encryption key you have chosen to use in the Encryption property.
Load Options
= multiple drop-downs
- Clean Staged Files: Destroy staged files after loading data. Default is On.
- String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is Off.
- Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the existing table will be used. Default is On.
- File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
- Compression Type: Set the compression type to either gzip (default) or None.
- Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
Auto Debug
= drop-down
Choose whether to automatically log debug information about your load. These logs can be found in the task history and should be included in support requests concerning the component. Turning this on will override any debugging connection options.
Debug Level
= drop-down
The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
- Level 1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
- Level 2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
- Level 3: Will additionally log the body of the request and the response.
- Level 4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
- Level 5: Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.
Amazon Redshift Properties
Name
= string
A human-readable name for the component.
Basic/Advanced Mode
= drop-down
- Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this mode will be sufficient.
- Advanced: This mode will require you to write an SQL-like query to call data from Google Custom Search. The available fields and their descriptions are documented in the data model.
There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.
Note
While the query is exposed in an SQL-like language, the exact semantics can be surprising. For example, filtering on a column can return more data than not filtering on it, which would be impossible in regular SQL.
API Key
= string
Google Custom Search uses an API key to authenticate third-party applications. Enter the API key for your GCP project, which must be set up in advance. For help acquiring an API key, read the Google Custom Search Query authentication guide.
Custom Search ID
= string
The ID of the Custom Search Engine you wish to use. If you are unsure how to set up a Custom Search Engine or find your ID, refer to the Custom Search ID section below.
Connection Options
= column editor
- Parameter: A JDBC parameter supported by the database driver. The available parameters are explained in the data model. Manual setup is not usually required, since sensible defaults are assumed.
- Value: A value for the given Parameter.
SQL Query
= code editor
Input an SQL-like query, written according to the data model.
This property is only available when Basic/Advanced Mode is set to Advanced.
Data Source
= drop-down
Select a data source.
Data Selection
= dual listbox
Choose one or more columns to return from the query. The columns available are dependent upon the data source selected. Move columns left-to-right to include in the query.
Data Source Filter
= column editor
- Input Column: Select an input column. The available input columns vary depending upon the data source.
- Qualifier:
- Is: Compares the column to the value using the comparator.
- Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
- Comparator: Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", and "Null". "Equal to" can match exact strings and numeric values, while other comparators, such as "Greater than" and "Less than", work only with numerics. The "Like" operator allows the wildcard character [%] to be used at the start and end of a string value to match a column. The "Null" operator matches only null values, ignoring whatever the value is set to. Not all data sources support all comparators, so only a subset of those listed may be available to choose from.
- Value: The value to be compared.
Combine Filters
= drop-down
Select whether to use the defined filters in combination with one another according to either And or Or.
Limit
= integer
Set a numeric value to limit the number of rows that are loaded.
Type
= drop-down
- External: The data will be put into your chosen S3 bucket and referenced by an external table.
- Standard: The data will be staged on your chosen S3 bucket before being loaded into a table. This is the default setting.
Schema
= drop-down
Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.
Note
An external schema is required if the Type property is set to External.
Target Table
= string
The name of the table to be created.
Warning
This table will be recreated and will drop any existing table of the same name.
Location
= S3 bucket
An S3 bucket path that will be used to store the data. Once the data is on an S3 bucket, it can be referenced by an external table. This property is only available when the Type property is set to External.
S3 Staging Area
= S3 bucket
Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Manage Credentials for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
Use Accelerated Endpoint
= boolean
When True, data will be loaded via the s3-accelerate endpoint. Please consider the following information:
- Enabling acceleration can enhance the speed at which data is transferred to the chosen S3 bucket. However, enhanced speed is not always guaranteed. Read Using the Amazon S3 Transfer Acceleration Speed Comparison tool to learn how you can compare accelerated and non-accelerated upload speeds across Amazon S3 Regions.
- Users must manually set the acceleration configuration of an existing bucket. To learn more, see PutBucketAccelerateConfiguration in the API Reference, available in the AWS documentation.
- This property is only available if the selected S3 bucket has Amazon S3 Transfer Acceleration enabled. For more information, including how to enable this feature, read Getting started with Amazon S3 Transfer Acceleration.
- Cases may arise where Matillion ETL cannot determine whether the chosen S3 bucket has Amazon S3 Transfer Acceleration enabled. In these cases, Matillion ETL will reveal this property for user input on a "just in case" basis and may return a validation message that reads "OK - Bucket could not be validated." Similarly, if you do not have permission to get the status of the acceleration configuration (namely, the GetAccelerateConfiguration permission), Matillion ETL will again show this property "just in case".
- The default setting is False.
Distribution Style
= drop-down
- All: Copy rows to all nodes in the Redshift cluster.
- Auto: (Default) Allow Redshift to manage your distribution style.
- Even: Distribute rows around the Redshift cluster evenly.
- Key: Distribute rows around the Redshift cluster according to the value of a key column.
Note
Table distribution is critical to good performance. Read the Distribution styles documentation for more information.
Sort Key
= dual listbox
This is optional, and lets users specify one or more columns from the input that should be set as the table's sort key.
Note
Sort keys are critical to good performance. Read Working with sort keys for more information.
Sort Key Options
= drop-down
Decide whether the sort key is of a compound or interleaved variety.
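Taken together, Distribution Style, Sort Key, and Sort Key Options correspond to the distribution and sort-key clauses of the Redshift DDL that the component generates on your behalf. The sketch below is only an illustration of that mapping, with hypothetical table and column names.

```sql
-- Hypothetical illustration of Distribution Style = Key plus a compound sort key.
-- Table and column names are placeholders; the component generates the real DDL.
CREATE TABLE custom_search_results (
    title       VARCHAR(512),
    link        VARCHAR(1024),
    search_rank INTEGER
)
DISTSTYLE KEY
DISTKEY (search_rank)
COMPOUND SORTKEY (search_rank, title);
```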
Primary Keys
= dual listbox
Select one or more columns to be designated as the table's primary key.
Load Options
= multiple drop-downs
- Comp Update: Apply automatic compression to the target table. Default is On.
- Stat Update: Automatically update statistics when filling a table. Default is On. In this case, it is updating the statistics of the target table.
- Clean S3 Objects: Automatically remove UUID-based objects on the S3 bucket. Default is On. Effectively, users decide here whether to keep the staged data in the S3 bucket or not.
- String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is On.
- Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the existing table will be used. Default is On.
- File Prefix: Give staged file names a prefix of your choice. When this Load Option is selected, users should set their preferred prefix in the text field.
- Compression Type: Set the compression type to either gzip (default) or None.
- Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
Encryption
= drop-down
Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.
- None: No encryption.
- SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more.
- SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
KMS Key ID
= drop-down
The ID of the KMS encryption key you have chosen to use in the Encryption property.
Auto Debug
= drop-down
Choose whether to automatically log debug information about your load. These logs can be found in the task history and should be included in support requests concerning the component. Turning this on will override any debugging connection options.
Debug Level
= drop-down
The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
- Level 1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
- Level 2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
- Level 3: Will additionally log the body of the request and the response.
- Level 4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
- Level 5: Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.
Google BigQuery Properties
Name
= string
A human-readable name for the component.
Basic/Advanced Mode
= drop-down
- Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this mode will be sufficient.
- Advanced: This mode will require you to write an SQL-like query to call data from Google Custom Search. The available fields and their descriptions are documented in the data model.
There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.
Note
While the query is exposed in an SQL-like language, the exact semantics can be surprising. For example, filtering on a column can return more data than not filtering on it, which would be impossible in regular SQL.
API Key
= string
Google Custom Search uses an API key to authenticate third-party applications. Enter the API key for your GCP project, which must be set up in advance. For help acquiring an API key, read the Google Custom Search Query authentication guide.
Custom Search ID
= string
The ID of the Custom Search Engine you wish to use. If you are unsure how to set up a Custom Search Engine or find your ID, refer to the Custom Search ID section below.
Connection Options
= column editor
- Parameter: A JDBC parameter supported by the database driver. The available parameters are explained in the data model. Manual setup is not usually required, since sensible defaults are assumed.
- Value: A value for the given Parameter.
SQL Query
= code editor
Input an SQL-like query, written according to the data model.
This property is only available when Basic/Advanced Mode is set to Advanced.
Data Source
= drop-down
Select a data source.
Data Selection
= dual listbox
Choose one or more columns to return from the query. The columns available are dependent upon the data source selected. Move columns left-to-right to include in the query.
Data Source Filter
= column editor
- Input Column: Select an input column. The available input columns vary depending upon the data source.
- Qualifier:
- Is: Compares the column to the value using the comparator.
- Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
- Comparator: Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", and "Null". "Equal to" can match exact strings and numeric values, while other comparators, such as "Greater than" and "Less than", work only with numerics. The "Like" operator allows the wildcard character [%] to be used at the start and end of a string value to match a column. The "Null" operator matches only null values, ignoring whatever the value is set to. Not all data sources support all comparators, so only a subset of those listed may be available to choose from.
- Value: The value to be compared.
Combine Filters
= drop-down
Select whether to use the defined filters in combination with one another according to either And or Or.
Limit
= integer
Set a numeric value to limit the number of rows that are loaded.
Table Type
= drop-down
Select whether the table is a native BigQuery table (the default) or an external table.
Project
= drop-down
Select the Google Cloud project. The special value, [Environment Default], will use the project defined in the environment. For more information, read Creating and managing projects.
Dataset
= drop-down
Select the Google BigQuery dataset to load data into. The special value, [Environment Default], will use the dataset defined in the environment. For more information, read Introduction to datasets.
Target Table
= string
A name for the table. Only available when the table type is Native.
Warning
This table will be recreated and will drop any existing table of the same name.
New Target Table
= string
A name for the new external table. Only available when the table type is External.
Cloud Storage Staging Area
= Google Cloud Storage bucket
The URL and path of the target Google Cloud Storage bucket to be used for staging the queried data. Only available when the table type is Native.
Location
= Google Cloud Storage bucket
The URL and path of the target Google Cloud Storage bucket. Only available when the table type is External.
Load Options
= multiple drop-downs
- Clean Cloud Storage Files: Destroy staged files on Google Cloud Storage after loading data. Default is On.
- Cloud Storage File Prefix: Give staged file names a prefix of your choice. The default setting is an empty field.
- Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the component will use an existing table or create one if it does not exist. Default is On.
- Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
Auto Debug
= drop-down
Choose whether to automatically log debug information about your load. These logs can be found in the task history and should be included in support requests concerning the component. Turning this on will override any debugging connection options.
Debug Level
= drop-down
The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
- Level 1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
- Level 2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
- Level 3: Will additionally log the body of the request and the response.
- Level 4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
- Level 5: Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.
Azure Synapse Analytics Properties
Name
= string
A human-readable name for the component.
Basic/Advanced Mode
= drop-down
- Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this mode will be sufficient.
- Advanced: This mode will require you to write an SQL-like query to call data from Google Custom Search. The available fields and their descriptions are documented in the data model.
There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.
Note
While the query is exposed in an SQL-like language, the exact semantics can be surprising. For example, filtering on a column can return more data than not filtering on it, which would be impossible in regular SQL.
API Key
= string
Google Custom Search uses an API key to authenticate third-party applications. Enter the API key for your GCP project, which must be set up in advance. For help acquiring an API key, read the Google Custom Search Query authentication guide.
Custom Search ID
= string
The ID of the Custom Search Engine you wish to use. If you are unsure how to set up a Custom Search Engine or find your ID, refer to the Custom Search ID section below.
Connection Options
= column editor
- Parameter: A JDBC parameter supported by the database driver. The available parameters are explained in the data model. Manual setup is not usually required, since sensible defaults are assumed.
- Value: A value for the given Parameter.
SQL Query
= code editor
Input an SQL-like query, written according to the data model.
This property is only available when Basic/Advanced Mode is set to Advanced.
Data Source
= drop-down
Select a data source.
Data Selection
= dual listbox
Choose one or more columns to return from the query. The columns available are dependent upon the data source selected. Move columns left-to-right to include in the query.
Data Source Filter
= column editor
- Input Column: Select an input column. The available input columns vary depending upon the data source.
- Qualifier:
- Is: Compares the column to the value using the comparator.
- Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
- Comparator: Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", and "Null". "Equal to" can match exact strings and numeric values, while other comparators, such as "Greater than" and "Less than", work only with numerics. The "Like" operator allows the wildcard character [%] to be used at the start and end of a string value to match a column. The "Null" operator matches only null values, ignoring whatever the value is set to. Not all data sources support all comparators, so only a subset of those listed may be available to choose from.
- Value: The value to be compared.
Combine Filters
= drop-down
Select whether to use the defined filters in combination with one another according to either And or Or.
Limit
= integer
Set a numeric value to limit the number of rows that are loaded.
Schema
= drop-down
Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on schemas, read the Azure Synapse documentation.
Target Table
= string
The name of the table to be created.
Warning
This table will be recreated and will drop any existing table of the same name.
Storage Account
= drop-down
Select an Azure blob storage account with your desired blob container to be used for staging the data. Read the Azure documentation for help: creating an Azure Storage Account.
Blob Container
= drop-down
Select a blob container to be used for staging the data. The blob containers available for selection depend on the chosen storage account.
Load Options
= multiple drop-downs
- Clean Staged Files: Destroy staged files after loading data. Default is On.
- String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is Off.
- Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the existing table will be used. Default is On.
- File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
- Compression Type: Set the compression type to either gzip (default) or None.
- Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
Distribution Style
= drop-down
Select the distribution style.
- Hash: This setting assigns each row to one distribution by hashing the value stored in the distribution_column_name. The algorithm is deterministic, meaning it always hashes the same value to the same distribution. The distribution column should be defined as NOT NULL, because all rows that have NULL are assigned to the same distribution.
- Replicate: This setting stores one copy of the table on each compute node. For SQL Data Warehouse, the table is stored on a distribution database on each compute node. For Parallel Data Warehouse, the table is stored in an SQL Server file group that spans the compute node. This behavior is the default for Parallel Data Warehouse.
- Round Robin: Distributes the rows evenly in a round-robin fashion. This is the default behavior.
For more information, read Guidance for designing distributed tables using dedicated SQL pool in Azure Synapse Analytics.
Distribution Column
= drop-down
Select the column to act as the distribution column. This property is only available when Distribution Style is set to "Hash".
Index Type
= drop-down
Select the table indexing type. Options include:
- Clustered: A clustered index may outperform a clustered columnstore table when a single row needs to be retrieved quickly. The disadvantage to using a clustered index is that the only queries that benefit are the ones that use a highly selective filter on the clustered index column. Choosing this option prompts the Index Column Grid property.
- Clustered Column Store: This is the default setting. Clustered columnstore tables offer both the highest level of data compression and the best overall query performance, especially for large tables. Choosing this option prompts the Index Column Order property.
- Heap: Users may find that using a heap table is faster for temporarily landing data in Synapse SQL pool. This is because loads to heaps are faster than to index tables, and in some cases, the subsequent read can be done from cache. When a user is loading data only to stage it before running additional transformations, loading the table to a heap table is much faster than loading the data to a clustered columnstore table.
To learn more, read Indexes on dedicated SQL pool tables in Azure Synapse Analytics.
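For orientation, the sketch below shows how Distribution Style and Index Type map onto the WITH clause of a dedicated SQL pool table. It is only an illustration with hypothetical schema, table, and column names; the component generates the actual DDL.

```sql
-- Hypothetical illustration of Distribution Style = Hash with a clustered columnstore index.
-- Schema, table, and column names are placeholders.
CREATE TABLE dbo.custom_search_results
(
    title       NVARCHAR(512),
    link        NVARCHAR(1024),
    search_rank INT NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(search_rank),
    CLUSTERED COLUMNSTORE INDEX
);
```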
Index Column Grid
= column editor
- Name: The name of each column.
- Sort: Assign a sort orientation of either ascending (Asc) or descending (Desc).
Index Column Order
= dual listbox
Select the columns in the order to be indexed.
Partition Key
= drop-down
Select the table's partition key. Table partitions determine how rows are grouped and stored within a distribution. To learn more, read Partitioning tables in dedicated SQL pool.
Auto Debug
= drop-down
Choose whether to automatically log debug information about your load. These logs can be found in the task history and should be included in support requests concerning the component. Turning this on will override any debugging connection options.
Debug Level
= drop-down
The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
- Level 1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
- Level 2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
- Level 3: Will additionally log the body of the request and the response.
- Level 4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
- Level 5: Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.
Variable exports
This component makes the following values available to export into variables:
Source | Description |
---|---|
Time taken to stage | The amount of time (in seconds) taken to fetch the data from the data source and upload it to cloud storage. |
Time taken to load | The amount of time (in seconds) taken to execute the COPY statement to load the data into the target table from cloud storage. |
Strategy
Connect to Google Custom Search and issue the query. Stream the results into objects in cloud storage. Next, create or truncate the target table and issue a COPY command to load the cloud storage objects into the table. Finally, clean up the temporary cloud storage objects.
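A minimal, Snowflake-flavored sketch of this pattern is shown below. The stage, table, and file-format details are hypothetical placeholders, and the component performs all of these steps automatically.

```sql
-- Simplified sketch of the load strategy; all object names are placeholders.
CREATE TABLE IF NOT EXISTS custom_search_results (title VARCHAR, link VARCHAR, snippet VARCHAR);
TRUNCATE TABLE custom_search_results;

-- Load the staged objects from cloud storage into the target table.
COPY INTO custom_search_results
  FROM @my_temporary_stage/google_custom_search/
  FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' COMPRESSION = GZIP);

-- Clean up the temporary staged objects.
REMOVE @my_temporary_stage/google_custom_search/;
```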
Custom Search ID
The Google Custom Search Query component requires a Custom Search ID that identifies the Custom Search Engine to be used. To find this key or to make a new Custom Search Engine, log in to your Google account and visit the Custom Search Engine site.
If creating a new CSE, click Create a custom search engine in the upper-right of the page. In the CSE creation page, enter the sites this search needs to encompass and give it a language and a name. Click Create.
Otherwise, on the left of the UI, the Edit search engine option will allow you to browse your current CSEs and access details for each. If you have no CSEs, you can create a new one from here using the Add button.
Selecting a CSE will bring you to its configuration page. Next to Details, click Search engine id to bring up a dialog with the CSE ID that can be copied and pasted into the Google Custom Search Query component under the Custom Search ID property.
Snowflake | Delta Lake on Databricks | Amazon Redshift | Google BigQuery | Azure Synapse Analytics |
---|---|---|---|---|
✅ | ✅ | ✅ | ✅ | ✅ |