Salesforce Query

The Salesforce Query component connects to the Salesforce API to retrieve data and load it into a table. This stages the data, so the table is reloaded each time the component runs. You can then use transformations to enrich and manage the data in permanent tables.

Please read the full component documentation for guidance on the following topics:

  • Using LoginURL
  • Fetching Deleted Records
  • Salesforce data model

Warning

This component is potentially destructive. If the target table undergoes a change in structure, it will be recreated. Otherwise, the target table is truncated. Setting the load option Recreate Target Table to Off will prevent both recreation and truncation. Do not modify the target table structure manually.


Properties

Snowflake

Name = string

A human-readable name for the component.


Basic/Advanced Mode = drop-down

  • Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this mode will be sufficient.
  • Advanced: This mode will require you to write an SQL-like query to call data from Salesforce. The available fields and their descriptions are documented in the Salesforce data model.

There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.

While the query is exposed in an SQL-like language, the exact semantics can be surprising. For example, filtering on a column can return more data than not filtering on it, which is impossible in regular SQL.
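The IsDeleted pseudo column described under Fetching deleted records below is one such case: deleted rows are only returned when you filter on the column explicitly. A minimal sketch, assuming the standard Account object:

SELECT Id, Name
FROM Account
WHERE IsDeleted = true

The same query without the filter returns only live records, so adding the filter surfaces rows the unfiltered query would never return.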


Authentication Method = drop-down

The method of authenticating the Salesforce Query component.

When setting Authentication Method to Other, the following connection options must be set in Connection Options:

  • initiateoauth = OFF
  • OAuthAccessToken = your access token.
  • LoginURL = your Salesforce login URL.

Use Sandbox = boolean

  • No: Connect to a live Salesforce account. This is the default setting.
  • Yes: Connect to a sandbox Salesforce account.

Authentication = drop-down

Opens a dialog to select an OAuth connection. Click Manage to navigate to the OAuth tab to review OAuth connections and to add new connections. Read OAuth to learn how to create a Salesforce OAuth connection.

Note

Salesforce Query and Salesforce Marketing Cloud Query use different types of OAuth, and the two are not compatible. Ensure that you select the correct OAuth for this query component.


Username = string

Enter a valid Salesforce username. Only available when users select User/Password in the Authentication Method property.


Password = drop-down

The secret definition denoting your password tied to your username for your Salesforce account. Your password should be saved as a secret definition before using this component. Only available when users select User/Password in the Authentication Method property.


Security Token = drop-down

The secret definition denoting your Salesforce security token. Your security token should be saved as a secret definition before using this component. Only available when users select User/Password in the Authentication Method property.


Connection Options = column editor

  • Parameter: A JDBC parameter supported by the database driver. The available parameters are explained in the data model. Manual setup is not usually required, since sensible defaults are assumed.
  • Value: A value for the given parameter.

SQL Query = code editor

This is an SQL-like SELECT query. Treat collections as table names, and fields as columns. Only available in Advanced mode.
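For example, a minimal query might look like the following (the object and field names are illustrative, not required by the component):

SELECT Id, Name, CreatedDate
FROM Account
WHERE CreatedDate > '2023-01-01'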


Data Source = drop-down

Select a data source.


Data Selection = dual listbox

Choose one or more columns to return from the query. The columns available depend on the data source selected. Move columns from left to right to include them in the query.


Data Source Filter = column editor

  • Input Column: Select an input column. The available input columns vary depending upon the data source.
  • Qualifier:
    • Is: Compares the column to the value using the comparator.
    • Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
  • Comparator: Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", "Null". "Equal to" can match exact strings and numeric values, while other comparators, such as "Greater than" and "Less than", work only with numerics. The "Like" operator allows the wildcard character % at the start and end of a string value to match a column. The "Null" operator matches only null values, ignoring whatever the value is set to. Not all data sources support all comparators, so it is likely that only a subset of the above comparators will be available to choose from. A worked sketch follows this list.
  • Value: The value to be compared.
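As a sketch of how Basic mode assembles these settings (the column and value are hypothetical), a filter of Input Column Name, Qualifier Is, Comparator Like, Value %Ltd contributes a condition equivalent to:

SELECT Id, Name
FROM Account
WHERE Name LIKE '%Ltd'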

Combine Filters = drop-down

Select whether the defined filters are combined with one another using And or Or.


Limit = integer

Set a numeric value to limit the number of rows that are loaded.


Type = drop-down

  • Standard: The data will be staged in your storage location before being loaded into a table. This is the only setting currently available.

Primary Keys = dual listbox

Select one or more columns to be designated as the table's primary key.


Warehouse = drop-down

The Snowflake warehouse used to run the queries. The special value, [Environment Default], will use the warehouse defined in the environment. Read Overview of Warehouses to learn more.


Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Target Table = string

The name of the table to be created. Recreating this table will drop any existing table of the same name.


Stage = drop-down

Select a managed stage. The special value, [Custom], will create a stage "on the fly" for use solely within this component.


Stage Platform = drop-down

Select a staging setting.

  • Snowflake Managed: Create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete.
  • Existing Amazon S3 Location: Activates the S3 Staging Area property, allowing users to specify a custom staging area on Amazon S3. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.
  • Existing Azure Blob Storage Location: Activates the Storage Account and Blob Container properties, allowing users to specify a custom staging location on Azure. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.

Stage Authentication = drop-down

Select an authentication method for data staging.

  • Credentials: Uses the credentials configured in the environment. If no credentials have been configured, an error will occur.
  • Storage Integration: Use a Snowflake storage integration to authenticate data staging. A storage integration is a Snowflake object that stores a generated identity and access management (IAM) entity for your external cloud storage, along with an optional set of allowed or blocked storage locations. To learn more, read Create Storage Integration.

Storage Integration = drop-down

Select a Snowflake storage integration from the drop-down list. Storage integrations are required to permit Snowflake to read data from and write to your cloud storage location (Amazon S3, Azure Blob Storage, Google Cloud Storage) and must be set up in advance of selection. Only available when Stage Authentication is set to Storage Integration.
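As a sketch only (the integration name, role ARN, and bucket are placeholders), a storage integration for an S3 staging location is created in Snowflake along these lines; read Create Storage Integration for the authoritative steps:

CREATE STORAGE INTEGRATION my_s3_integration
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my-snowflake-role'
  STORAGE_ALLOWED_LOCATIONS = ('s3://my-staging-bucket/');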


S3 Staging Area = drop-down

Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Secret definitions for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.


Use Accelerated Endpoint = boolean

When True, data will be loaded via the s3-accelerate endpoint. Consider the following information:

  • Enabling acceleration can enhance the speed at which data is transferred to the chosen S3 bucket. However, enhanced speed is not always guaranteed. Read Amazon S3 Transfer Acceleration Speed Comparison to compare S3 Direct versus S3 Accelerated Transfer speeds.
  • Users must manually set the acceleration configuration of an existing bucket. To learn more, read PutBucketAccelerateConfiguration in the AWS API Reference.
  • This property is only available if the selected S3 bucket has Amazon S3 Transfer Acceleration enabled. For more information, including how to enable this feature, read Getting started with Amazon S3 Transfer Acceleration.
  • Cases may arise where Data Productivity Cloud can't determine whether the chosen S3 bucket has Amazon S3 Transfer Acceleration enabled. In such cases, Designer will reveal this property for user input on a "just in case" basis and may return a validation message that reads "OK - Bucket could not be validated." The same applies if you do not have permission to get the status of the acceleration configuration (namely, the GetAccelerateConfiguration permission).
  • The default setting is False.

Storage Account = drop-down

Select a storage account with your desired blob container to be used for staging the data. For more information, read Storage account overview.


Blob Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.


Encryption = drop-down

Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.


KMS Key ID = drop-down

The ID of the KMS encryption key you have chosen to use in the Encryption property.


Load Options = multiple drop-downs

  • Clean Staged Files: Destroy staged files after loading data. Default is On.
  • String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is Off.
  • Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the existing table will be used. Default is On.
  • File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
  • Trim String Columns: Remove leading and trailing characters from a string column. Default is On.
  • Compression Type: Set the compression type to either gzip (default) or None.

Auto Debug = drop-down

Choose whether to automatically log debug information about your load. These logs can be found in the task history and should be included in support requests concerning the component. Turning this on will override any debugging connection options.


Debug Level = drop-down

The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.

  1. Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
  2. Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
  3. Will additionally log the body of the request and the response.
  4. Will additionally log transport-level communication with the data source. This includes SSL negotiation.
  5. Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.

Databricks

Name = string

A human-readable name for the component.


Basic/Advanced Mode = drop-down

  • Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this mode will be sufficient.
  • Advanced: This mode will require you to write an SQL-like query to call data from Salesforce. The available fields and their descriptions are documented in the Salesforce data model.

There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.

While the query is exposed in an SQL-like language, the exact semantics can be surprising. For example, filtering on a column can return more data than not filtering on it, which is impossible in regular SQL.


Authentication Method = drop-down

The method of authenticating the Salesforce Query component.

When setting Authentication Method to Other, the following connection options must be set in Connection Options:

  • initiateoauth = OFF
  • OAuthAccessToken = your access token.
  • LoginURL = your Salesforce login URL.

Use Sandbox = boolean

  • No: Connect to a live Salesforce account. This is the default setting.
  • Yes: Connect to a sandbox Salesforce account.

Authentication = drop-down

Opens a dialog to select an OAuth connection. Click Manage to navigate to the OAuth tab to review OAuth connections and to add new connections. Read OAuth to learn how to create an OAuth connection.

Note

Salesforce Query and Salesforce Marketing Cloud Query use different types of OAuth, and the two are not compatible. Ensure that you select the correct OAuth for this query component.


Username = string

Enter a valid Salesforce username. Only available when users select User/Password in the Authentication Method property.


Password = drop-down

The secret definition denoting your password tied to your username for your Salesforce account. Your password should be saved as a secret definition before using this component. Only available when users select User/Password in the Authentication Method property.


Security Token = drop-down

The secret definition denoting your Salesforce security token. Your security token should be saved as a secret definition before using this component. Only available when users select User/Password in the Authentication Method property.


Connection Options = column editor

  • Parameter: A JDBC parameter supported by the database driver. The available parameters are explained in the data model. Manual setup is not usually required, since sensible defaults are assumed.
  • Value: A value for the given parameter.

SQL Query = code editor

This is an SQL-like SELECT query. Treat collections as table names, and fields as columns. Only available in Advanced mode.


Data Source = drop-down

Select a data source.


Data Selection = dual listbox

Choose one or more columns to return from the query. The columns available depend on the data source selected. Move columns from left to right to include them in the query.


Data Source Filter = column editor

  • Input Column: Select an input column. The available input columns vary depending upon the data source.
  • Qualifier:
    • Is: Compares the column to the value using the comparator.
    • Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
  • Comparator: Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", "Null". "Equal to" can match exact strings and numeric values, while other comparators, such as "Greater than" and "Less than", work only with numerics. The "Like" operator allows the wildcard character % at the start and end of a string value to match a column. The "Null" operator matches only null values, ignoring whatever the value is set to. Not all data sources support all comparators, so it is likely that only a subset of the above comparators will be available to choose from.
  • Value: The value to be compared.

Combine Filters = drop-down

Select whether the defined filters are combined with one another using And or Or.


Limit = integer

Set a numeric value to limit the number of rows that are loaded.


Catalog = drop-down

Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Data Productivity Cloud environment setup. Selecting a catalog will determine which databases are available in the next parameter.


Schema (Database) = drop-down

The Databricks schema. The special value, [Environment Default], will use the schema defined in the environment. Read Create and manage schemas to learn more.


Table = string

The name of the table to be created. Recreating this table will drop any existing table of the same name.


Stage Platform = drop-down

Select a staging setting.

  • AWS S3: Lets users specify a custom staging area on Amazon S3.
  • Azure Blob: Lets users specify a custom staging area on Azure Blob storage.
  • Personal Staging: Uses a Databricks personal staging location. Your Data Productivity Cloud environment connection to Delta Lake on Databricks requires the username to be the literal value token, and the corresponding password to be a masked entry for a Databricks access token (AWS).

Additionally, read Configure Unity Catalog storage account for CORS to learn how to configure CORS to enable Databricks to manage personal staging locations in Unity Catalog (AWS).


S3 Staging Area = drop-down

Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Secret definitions for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.


Storage Account = drop-down

Select a storage account with your desired Blob container to be used for staging the data. For more information, read Storage account overview.


Blob Container = drop-down

Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.


Encryption = drop-down

Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.


KMS Key ID = drop-down

The ID of the KMS encryption key you have chosen to use in the Encryption property.


Load Options = multiple drop-downs

  • Clean Staged Files: Destroy staged files after loading data. Default is On.
  • String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is Off.
  • Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the existing table will be used. Default is On.
  • File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
  • Compression Type: Set the compression type to either gzip (default) or None.

Auto Debug = drop-down

Choose whether to automatically log debug information about your load. These logs can be found in the task history and should be included in support requests concerning the component. Turning this on will override any debugging connection options.


Debug Level = drop-down

The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.

  1. Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
  2. Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
  3. Will additionally log the body of the request and the response.
  4. Will additionally log transport-level communication with the data source. This includes SSL negotiation.
  5. Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.

Amazon Redshift

Name = string

A human-readable name for the component.


Basic/Advanced Mode = drop-down

  • Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this mode will be sufficient.
  • Advanced: This mode will require you to write an SQL-like query to call data from Salesforce. The available fields and their descriptions are documented in the Salesforce data model.

There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.

While the query is exposed in an SQL-like language, the exact semantics can be surprising. For example, filtering on a column can return more data than not filtering on it, which is impossible in regular SQL.


Authentication Method = drop-down

The method of authenticating the Salesforce Query component.

When setting Authentication Method to Other, the following connection options must be set in Connection Options:

  • initiateoauth = OFF
  • OAuthAccessToken = your access token.
  • LoginURL = your Salesforce login URL.

Use Sandbox = boolean

  • No: Connect to a live Salesforce account. This is the default setting.
  • Yes: Connect to a sandbox Salesforce account.

Authentication = drop-down

Opens a dialog to select an OAuth connection. Click Manage to navigate to the OAuth tab to review OAuth connections and to add new connections. Read OAuth to learn how to create an OAuth connection.

Note

Salesforce Query and Salesforce Marketing Cloud Query use different types of OAuth, and the two are not compatible. Ensure that you select the correct OAuth for this query component.


Username = string

Enter a valid Salesforce username. Only available when users select User/Password in the Authentication Method property.


Password = drop-down

The secret definition denoting your password tied to your username for your Salesforce account. Your password should be saved as a secret definition before using this component. Only available when users select User/Password in the Authentication Method property.


Security Token = drop-down

The secret definition denoting your Salesforce security token. Your security token should be saved as a secret definition before using this component. Only available when users select User/Password in the Authentication Method property.


Connection Options = column editor

  • Parameter: A JDBC parameter supported by the database driver. The available parameters are explained in the data model. Manual setup is not usually required, since sensible defaults are assumed.
  • Value: A value for the given parameter.

SQL Query = code editor

This is an SQL-like SELECT query. Treat collections as table names, and fields as columns. Only available in Advanced mode.


Data Source = drop-down

Select a data source.


Data Selection = dual listbox

Choose one or more columns to return from the query. The columns available depend on the data source selected. Move columns from left to right to include them in the query.


Data Source Filter = column editor

  • Input Column: Select an input column. The available input columns vary depending upon the data source.
  • Qualifier:
    • Is: Compares the column to the value using the comparator.
    • Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
  • Comparator: Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", "Null". "Equal to" can match exact strings and numeric values, while other comparators, such as "Greater than" and "Less than", work only with numerics. The "Like" operator allows the wildcard character % at the start and end of a string value to match a column. The "Null" operator matches only null values, ignoring whatever the value is set to. Not all data sources support all comparators, so it is likely that only a subset of the above comparators will be available to choose from.
  • Value: The value to be compared.

Combine Filters = drop-down

Select whether the defined filters are combined with one another using And or Or.


Limit = integer

Set a numeric value to limit the number of rows that are loaded.


Type = drop-down

  • Standard: The data will be staged in your storage location before being loaded into a table. This is the only setting currently available.

Schema = drop-down

Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.


Target Table = string

The name of the table to be created. Recreating this table will drop any existing table of the same name.


S3 Staging Area = drop-down

Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Secret definitions for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.


Use Accelerated Endpoint = boolean

When True, data will be loaded via the s3-accelerate endpoint. Consider the following information:

  • Enabling acceleration can enhance the speed at which data is transferred to the chosen S3 bucket. However, enhanced speed is not always guaranteed. Read Amazon S3 Transfer Acceleration Speed Comparison to compare S3 Direct versus S3 Accelerated Transfer speeds.
  • Users must manually set the acceleration configuration of an existing bucket. To learn more, read PutBucketAccelerateConfiguration in the AWS API Reference.
  • This property is only available if the selected S3 bucket has Amazon S3 Transfer Acceleration enabled. For more information, including how to enable this feature, read Getting started with Amazon S3 Transfer Acceleration.
  • Cases may arise where Data Productivity Cloud can't determine whether the chosen S3 bucket has Amazon S3 Transfer Acceleration enabled. In such cases, Designer will reveal this property for user input on a "just in case" basis and may return a validation message that reads "OK - Bucket could not be validated." The same applies if you do not have permission to get the status of the acceleration configuration (namely, the GetAccelerateConfiguration permission).
  • The default setting is False.

Distribution Style = drop-down

  • All: Copy rows to all nodes in the Redshift cluster.
  • Auto: (Default) Allow Redshift to manage your distribution style.
  • Even: Distribute rows around the Redshift cluster evenly.
  • Key: Distribute rows around the Redshift cluster according to the value of a key column.

Note

Table distribution is critical to good performance. Read the Distribution styles documentation for more information.


Sort Key = dual listbox

This is optional, and lets users specify one or more columns from the input that should be set as the table's sort key.

Note

Sort keys are critical to good performance. Read Working with sort keys for more information.


Sort Key Options = drop-down

Decide whether the sort key is of a compound or interleaved variety.
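For illustration only (the table and column names are hypothetical, not the component's exact output), a key distribution style with a compound sort key corresponds to Redshift DDL along these lines:

CREATE TABLE salesforce_account (
  id VARCHAR(18),
  name VARCHAR(255),
  created_date TIMESTAMP
)
DISTSTYLE KEY
DISTKEY (id)
COMPOUND SORTKEY (created_date);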


Primary Key = dual listbox

Select one or more columns to be designated as the table's primary key.


Encryption = drop-down

Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.


KMS Key ID = drop-down

The ID of the KMS encryption key you have chosen to use in the Encryption property.


Load Options = multiple drop-downs

  • Comp Update: Apply automatic compression to the target table. Default is On.
  • Stat Update: Automatically update statistics when filling a table. Default is On. In this case, the statistics of the target table are updated.
  • Clean S3 Objects: Automatically remove UUID-based objects on the S3 bucket after loading. Default is On. Effectively, this decides whether the staged data is kept in the S3 bucket.
  • String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is On.
  • Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the existing table will be used. Default is On.
  • File Prefix: Give staged file names a prefix of your choice. When this Load Option is selected, users should set their preferred prefix in the text field.
  • Compression Type: Set the compression type to either gzip (default) or None.

Auto Debug = drop-down

Choose whether to automatically log debug information about your load. These logs can be found in the task history and should be included in support requests concerning the component. Turning this on will override any debugging connection options.


Debug Level = drop-down

The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.

  1. Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
  2. Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
  3. Will additionally log the body of the request and the response.
  4. Will additionally log transport-level communication with the data source. This includes SSL negotiation.
  5. Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.

Strategy

  1. Connect to the target database and issue the query.
  2. Stream the results into objects in cloud storage.
  3. Create or truncate the target table and issue a COPY command to load the cloud storage objects into the table (a sketch follows this list).
  4. Finally, clean up the temporary cloud storage objects.
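On Snowflake, for example, step 3 amounts to a COPY statement along these lines (the table, stage, and file format settings are placeholders, not the component's exact output):

COPY INTO salesforce_account
FROM @my_temporary_stage
FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP);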

Fetching deleted records

The Salesforce Query component supports fetching "deleted" records from Salesforce. To fetch these records in Basic mode, add the following in the Data Source Filter property:

  • Input Column: IsDeleted
  • Qualifier: Is
  • Comparator: Equal to
  • Value: true

To fetch "deleted" records using Advanced mode, use the following query:

GETDELETED
FROM ${object_name}
WHERE systemmodstamp > '${sub_30daysago}'

Note

This advanced query will error when sampling the component. For the advanced query to run successfully, users must run the orchestration pipeline.


Using LoginURL (optional)

Some users may wish to use a custom login URL for authentication.

In Connection Options, select the Other parameter and set the value to:

LoginURL=https://<DevSiteSubdomain>.salesforce.com/services/Soap/c/37.0;

Where <DevSiteSubdomain> is the same subdomain used when logging in to the Salesforce developer site for your account.

