Salesforce Bulk Query
Overview
The Salesforce Bulk Query component uses the Salesforce Bulk REST API to retrieve data and load it into a table. This stages the data, so the table is reloaded each time the component runs. You can then use transformation components to enrich and manage the data in permanent tables.
To learn how to authenticate this component, read our Salesforce Bulk Query authentication guide.
Warning
This component is potentially destructive. If the target table undergoes a change in structure, it will be recreated. Otherwise, the target table is truncated. Setting the load option Recreate Target Table to Off will prevent both recreation and truncation. Do not modify the target table structure manually.
Properties
Name
= string
A human-readable name for the component.
Version
= drop-down
A list of supported versions of Salesforce.
Authentication
= drop-down
Select an OAuth entry to authenticate this component. An OAuth entry must be set up in advance. To learn how to create and authorize an OAuth entry, please read our Salesforce Bulk Query authentication guide.
SOQL Query
= code editor
A query written in Salesforce Object Query Language (SOQL). The query is not checked for errors when the component is validated; any errors found are returned at runtime. For more information, read Introduction to SOQL and SOSL.
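For illustration, here is a minimal SOQL query against the standard Account object. The fields and filter are examples only; substitute your own objects and fields:

```sql
-- Example SOQL: retrieve basic fields from the standard Account object,
-- filtered to records created since the start of 2023.
-- Note that SOQL date-time literals are written unquoted, in ISO 8601 format.
SELECT Id, Name, CreatedDate
FROM Account
WHERE CreatedDate >= 2023-01-01T00:00:00Z
```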
Page Size
= integer
Specify the size of each page requested. Must be a positive integer. The default is 50,000. The maximum is 2,147,483,647 (the largest value of a Java int).
Max Wait Minutes
= integer
An integer value for the maximum time to wait (in minutes) for the bulk query to run before a failure is returned.
- Must be a positive integer.
- The default is 25 minutes.
- The maximum is 10,080 minutes (7 days).
Type
= drop-down
- External: The data will be put into your storage location and referenced by an external table.
- Standard: The data will be staged in your storage location before being loaded into a table. This is the default setting.
Warehouse
= drop-down
The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.
Database
= drop-down
The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
= drop-down
The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Target Table
= string
The name of the table to be created.
Warning
This table will be recreated on each run, and any existing table of the same name will be dropped.
Stage
= drop-down
Select a managed stage. The special value, [Custom], will create a stage "on the fly" for use solely within this component.
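As a rough sketch, a Snowflake named stage of the kind this property refers to is created with DDL along these lines. The stage name, bucket URL, and storage integration below are placeholders, not values the component requires:

```sql
-- Sketch only: a named external stage pointing at an S3 location.
-- All names and the URL below are placeholders.
CREATE STAGE my_managed_stage
  URL = 's3://my-staging-bucket/matillion/'
  STORAGE_INTEGRATION = my_s3_integration;
```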
Stage Platform
= drop-down
Select a staging setting.
- Snowflake Managed: Create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete.
- Existing Amazon S3 Location: Activates the S3 Staging Area property, allowing users to specify a custom staging area on Amazon S3. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.
- Existing Azure Blob Storage Location: Activates the Storage Account and Blob Container properties, allowing users to specify a custom staging location on Azure. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.
- Existing Google Cloud Storage Location: Activates the GCS Staging Area property, allowing users to specify a custom staging area within Google Cloud Storage.
Stage Authentication
= drop-down
Select an authentication method for data staging.
- Credentials: Uses the credentials configured in the environment. If no credentials have been configured, an error will occur.
- Storage Integration: Use a Snowflake storage integration to authenticate data staging. A storage integration is a Snowflake object that stores a generated identity and access management (IAM) entity for your external cloud storage, along with an optional set of allowed or blocked storage locations. To learn more, read Create Storage Integration.
Storage Integration
= drop-down
Select a Snowflake storage integration from the drop-down list. Storage integrations are required to permit Snowflake to read data from and write to your cloud storage location (Amazon S3, Azure Blob Storage, Google Cloud Storage) and must be set up in advance of selection. To learn more about setting up a storage integration for use in Matillion ETL, read Storage Integration Setup Guide. Only available when Stage Authentication is set to Storage Integration.
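For reference, a storage integration is created on the Snowflake side with DDL along these lines. This is a sketch only; the integration name, IAM role ARN, and bucket path are placeholders to replace with your own values:

```sql
-- Sketch only: a storage integration granting Snowflake access to an
-- S3 staging location via an IAM role. All names and ARNs are placeholders.
CREATE STORAGE INTEGRATION my_s3_integration
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my-snowflake-role'
  STORAGE_ALLOWED_LOCATIONS = ('s3://my-staging-bucket/matillion/');
```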
S3 Staging Area
= S3 bucket
Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Manage Credentials for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
Use Accelerated Endpoint
= boolean
When True, data will be loaded via the s3-accelerate endpoint. Please consider the following information:
- Enabling acceleration can enhance the speed at which data is transferred to the chosen S3 bucket. However, enhanced speed is not always guaranteed. Please consult Amazon S3 Transfer Acceleration Speed Comparison to compare S3 Direct versus S3 Accelerated Transfer speeds.
- Users must manually set the acceleration configuration of an existing bucket. To learn more, see PutBucketAccelerateConfiguration in the AWS API Reference.
- This property is only available if the selected S3 bucket has Amazon S3 Transfer Acceleration enabled. For more information, including how to enable this feature, read Getting started with Amazon S3 Transfer Acceleration.
- Cases may arise where Matillion ETL cannot determine whether the chosen S3 bucket has Amazon S3 Transfer Acceleration enabled. In such cases, Matillion ETL will reveal this property for user input on a "just in case" basis, and may return a validation message that reads "OK - Bucket could not be validated." Similarly, if you do not have permission to get the status of the acceleration configuration (namely, the GetAccelerateConfiguration permission), Matillion ETL will again show this property "just in case".
- The default setting is False.
Storage Account
= drop-down
Select a storage account with your desired blob container to be used for staging the data. For more information, read Storage account overview.
Blob Container
= drop-down
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
GCS Staging Area
= drop-down
The URL and path of the target Google Storage bucket to be used for staging the queried data. For more information, read Creating storage buckets.
Encryption
= drop-down
Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.
- None: No encryption.
- SSE KMS: Encrypt the data according to a key stored in AWS Key Management Service. Read AWS Key Management Service (AWS KMS) to learn more.
- SSE S3: Encrypt the data using Amazon S3-managed encryption keys. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
KMS Key ID
= drop-down
The ID of the KMS encryption key you have chosen to use in the Encryption property.
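To make the effect of these settings concrete, a load from an encrypted S3 staging area corresponds roughly to a Snowflake COPY statement of the following shape. This is a sketch under assumed names, not the component's literal SQL; the table, bucket, integration, and KMS key ARN are all placeholders:

```sql
-- Sketch only: copy staged files into a table, specifying SSE-KMS
-- encryption options for the S3 location. All names are placeholders.
COPY INTO my_target_table
FROM 's3://my-staging-bucket/matillion/'
STORAGE_INTEGRATION = my_s3_integration
ENCRYPTION = (TYPE = 'AWS_SSE_KMS' KMS_KEY_ID = 'arn:aws:kms:eu-west-1:123456789012:key/example-key-id')
FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP);
```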
Load Options
= multiple drop-downs
- Clean Staged Files: Destroy staged files after loading data. Default is On.
- String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is Off.
- Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the existing table will be used. Default is On.
- File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
- Trim String Columns: Remove leading and trailing characters from a string column. Default is On.
- Compression Type: Set the compression type to either gzip (default) or None.
- Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
New Table Name
= string
The name of the new table to be created. Only available when Type is set to External.
Stage Database
= drop-down
Specify the stage database. The special value, [Environment Default], will use the database defined in the environment. Only available when Type is set to External.
Stage Schema
= drop-down
Specify the stage schema. The special value, [Environment Default], will use the schema defined in the environment. Only available when Type is set to External.
Stage
= drop-down
Select a stage. Only available when Type is set to External.
| Snowflake | Delta Lake on Databricks | Amazon Redshift | Google BigQuery | Azure Synapse Analytics |
|---|---|---|---|---|
| ✅ | ❌ | ❌ | ❌ | ❌ |