
Salesforce Output

Overview

The Salesforce Output component uses the Salesforce API to write the contents of a source table (or view) in your cloud data warehouse back into a table (object) in Salesforce. The component is available for Snowflake, Delta Lake on Databricks, Amazon Redshift, Google BigQuery, and Azure Synapse Analytics.

Properties

Name = string

A human-readable name for the component.


Authentication Method = select

Select the authentication method. Users can choose between a username/password combination and an OAuth entry.


Use Sandbox = drop-down

  • No: Connect to a live Salesforce account. This is the default setting.
  • Yes: Connect to a sandbox Salesforce account.

This property is only available when Authentication Method is set to "User/Password".


Username = drop-down

Provide a valid Salesforce username.

This property is only available when Authentication Method is set to "User/Password".


Password = string

The corresponding password. Store the password in the component, or create a managed entry for the password using Manage Passwords (recommended).


Security Token = string

Provide a valid Salesforce security token.

This property is only available when Authentication Method is set to "User/Password".


Authentication = drop-down

Select an OAuth entry to authenticate this component. An OAuth entry must be set up in advance. To learn how to create and authorise an OAuth entry, please read our Salesforce Output Authentication Guide.

This property is only available when Authentication Method is set to "OAuth".


Use Bulk API = drop-down

  • No: Write up to 200 rows in real time. This is the default setting.
  • Yes: Write up to 10,000 rows asynchronously in the background. The bulk load can't be cancelled before completion.
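
These two modes map onto Salesforce's synchronous REST interface (up to 200 records per request) and its asynchronous bulk-load interface. The sketch below is illustrative only: it uses the sObject Collections endpoint and a Bulk API 2.0 ingest job for brevity, with a hypothetical instance URL, API version, access token, and object name; the ZIP-XML Content Type option below suggests the component may use a different Bulk API version internally.

```python
# Illustrative sketch only, not the component's actual implementation.
# Shows the difference between the two modes: a synchronous sObject Collections
# request (up to 200 records per call) versus an asynchronous Bulk API 2.0 ingest job.
import requests

INSTANCE = "https://example.my.salesforce.com"   # hypothetical instance URL
HEADERS = {"Authorization": "Bearer <ACCESS_TOKEN>", "Content-Type": "application/json"}

def realtime_insert(records):
    """Use Bulk API = No: results are returned immediately, max 200 records per call."""
    # Each record needs an "attributes" entry, e.g. {"attributes": {"type": "Account"}, "Name": "Acme"}.
    body = {"allOrNone": False, "records": records[:200]}
    resp = requests.post(f"{INSTANCE}/services/data/v57.0/composite/sobjects",
                         headers=HEADERS, json=body)
    return resp.json()

def bulk_insert(csv_payload):
    """Use Bulk API = Yes: the load runs asynchronously in the background."""
    job = requests.post(f"{INSTANCE}/services/data/v57.0/jobs/ingest",
                        headers=HEADERS,
                        json={"object": "Account", "operation": "insert"}).json()
    requests.put(f"{INSTANCE}/services/data/v57.0/jobs/ingest/{job['id']}/batches",
                 headers={**HEADERS, "Content-Type": "text/csv"}, data=csv_payload)
    # Marking the job UploadComplete hands it to Salesforce for background processing,
    # which is why the load can't be cancelled before completion.
    requests.patch(f"{INSTANCE}/services/data/v57.0/jobs/ingest/{job['id']}",
                   headers=HEADERS, json={"state": "UploadComplete"})
    return job["id"]
```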


Content Type = drop-down

Select the content type used for the bulk load. When set to ZIP-XML (the default), data is loaded into Salesforce and the Content Type shown in the Salesforce UI is ZIP-XML; the same applies to each of the other content types. This property is only available when Use Bulk API is set to Yes.


Connection Options = column editor

Manual setup is not usually required, since sensible defaults are assumed.

  • Parameter: A JDBC connection parameter supported by the database driver. Available parameters are explained in the Data Model.
  • Value: A value for the given Parameter.

Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.


Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.


Source Table = drop-down

The table (or view) in your cloud data warehouse whose contents will be output to Salesforce.


Target Object = drop-down

Select the Salesforce object (table) into which local data will be loaded (input).


Output Operation = drop-down

Select the output operation to be performed into the target object. Available operations are Delete, Insert, Update, and Upsert.


Salesforce ID = drop-down

Select the unique ID of the row within the Target Object into which the local data will be written.

This property is only available when Output Operation is set to "Delete", "Update", or "Upsert" (not "Insert").
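
For context, the Update and Delete operations are keyed on a Salesforce record ID, which is what this property identifies, while Insert needs no ID. The sketch below is illustrative only and uses Salesforce's REST sObject endpoint with a hypothetical instance URL, access token, object, field values, and record ID.

```python
# Illustrative sketch only: how Update and Delete are keyed on a Salesforce record ID.
import requests

INSTANCE = "https://example.my.salesforce.com"   # hypothetical
HEADERS = {"Authorization": "Bearer <ACCESS_TOKEN>", "Content-Type": "application/json"}
record_id = "001XXXXXXXXXXXXXXX"   # a hypothetical 18-character Salesforce record ID

# Update: modify an existing record identified by its Salesforce ID.
requests.patch(f"{INSTANCE}/services/data/v57.0/sobjects/Account/{record_id}",
               headers=HEADERS, json={"Phone": "0123 456 789"})

# Delete: remove the record identified by its Salesforce ID.
requests.delete(f"{INSTANCE}/services/data/v57.0/sobjects/Account/{record_id}",
                headers=HEADERS)

# Insert needs no ID; Salesforce generates one, which is why the Salesforce ID
# property is hidden when Output Operation is set to Insert.
requests.post(f"{INSTANCE}/services/data/v57.0/sobjects/Account",
              headers=HEADERS, json={"Name": "Acme"})
```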


Column Mappings = columns editor

  • Source Columns: Specify the columns in the source table that will be unloaded (output).
  • Target Columns: Specify columns in the target object where the source columns will be output to.

On Warnings = drop-down

  • Continue: Loads data despite records that return an error or that are rejected. This is the default setting.
  • Fail: The load will fail at the point that a single record errors or is rejected, aborting the Salesforce bulk job.

Batch Size = integer

The maximum number of records per batch. When Use Bulk API is set to Yes, accepts an integer between 0 and 10,000; the default is 10,000.

When Use Bulk API is set to No, accepts an integer between 0 and 2,000; the default is 2,000.


Records Per Ingest Job = integer

An integer for the number of records to load per bulk job. The component loads the entered number of rows (to a maximum of 10,000 per batch) into each Salesforce Ingest Job, and these jobs will load concurrently.

If this field is empty, Salesforce Output will load all records and batches into Salesforce using a single Salesforce Ingest Job. This field is empty by default.

If a negative value is specified, the job will fail at runtime.

This property is only available when Use Bulk API is set to Yes.
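
As a hypothetical worked example of how Records Per Ingest Job interacts with Batch Size (the row count is invented):

```python
# Hypothetical sizing example: how Batch Size and Records Per Ingest Job interact.
import math

total_rows = 25_000              # rows in the source table (hypothetical)
records_per_ingest_job = 10_000  # this property
batch_size = 10_000              # Batch Size (the maximum when Use Bulk API = Yes)

ingest_jobs = math.ceil(total_rows / records_per_ingest_job)           # 3 concurrent ingest jobs
batches_per_full_job = math.ceil(records_per_ingest_job / batch_size)  # 1 batch per full job

# With Records Per Ingest Job left empty, all 25,000 rows would instead be loaded
# through a single ingest job, split into ceil(25000 / batch_size) = 3 batches.
print(ingest_jobs, batches_per_full_job)   # -> 3 1
```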


Relationship Columns = columns editor

  • Parent Object: Drop-down list of available parent target objects. For example, if the "child" is Account, the "parent" could be User.
  • Relationships: The relationship name of the column that links to the Parent Object. For example, OwnerId is a relationship column in Account, but the relationship is named Owner.
  • Type: Relationship columns refer to specific target objects. For example, OwnerId in Account refers to User. However, polymorphic fields like OwnerId in Event can refer to Calendar or User, but only one may be used. For custom relationship fields, only one target object is possible, so this is fixed.
  • Index Column: The name of the column that uniquely identifies records in the Parent Object. For example, a parent User record can be identified by the Email column in User (see the sketch below).
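
Taken together, the Account-to-User example used in the bullets above would correspond to an editor row along these lines (a hypothetical illustration of how the four columns fit together, not the component's internal format):

```python
# Hypothetical Relationship Columns entry, following the Account -> User example above.
relationship_column = {
    "Parent Object": "User",    # the parent target object being referenced
    "Relationships": "Owner",   # OwnerId on the child is exposed through the Owner relationship
    "Type": "User",             # the object the relationship resolves to; this only varies
                                # for polymorphic fields such as OwnerId on Event
    "Index Column": "Email",    # the column that uniquely identifies the parent User record
}
```
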

Capture Rejected Entries = drop-down

Set this property to On to enable the capture of any rejected/errored records into an exception table, so that they can be flagged for further analysis. This is set to Off by default.

This property is only available when Use Bulk API is set to Yes.


Truncate Rejected Entries = drop-down

When set to Yes, errored results are replaced on each run. When set to No, errored results are appended as additional entries on each run. This property is only available when Use Bulk API is set to Yes.


Rejected Entries Database = drop-down

Select a database to hold the Rejected Entries table. The default is [Environment Default].

This property is only available when Capture Rejected Entries is set to On.


Rejected Entries Schema = drop-down

Select a schema from the chosen Rejected Entries Database. The default is [Environment Default].

This property is only available when Capture Rejected Entries is set to On.


Rejected Entries Table = drop-down

Enter a name for the table that Rejected Entries will be written to. If the table does not already exist, it will be created.

This property is only available when Capture Rejected Entries is set to On.


Capture Batch Results = drop-down

When set to Yes, enables the capture of batch results. This property is only available when Use Bulk API is set to Yes.


Truncate Batch Results = drop-down

When set to Yes, enables the truncation of batch results. This property is only available when Use Bulk API is set to Yes.


Batch Results Database = drop-down

Select a database to hold the batch results. The default is [Environment Default].

This property is only available when Capture Batch Results is set to On.


Batch Results Schema = drop-down

Select a schema from the chosen Batch Results Database. The default is [Environment Default].

This property is only available when Capture Batch Results is set to On.


Batch Results Table = string

Enter a name for the table that batch results will be written to. If the table does not already exist, it will be created.

This property is only available when Capture Batch Results is set to On.


Auto Debug = drop-down

Choose whether to automatically log debug information about your load. These logs can be found in the task history and should be included in support requests concerning the component. Turning this on will override any debugging connection options.


Debug Level = drop-down

The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.

  1. Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
  2. Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
  3. Will additionally log the body of the request and the response.
  4. Will additionally log transport-level communication with the data source. This includes SSL negotiation.
  5. Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.

Platform Variants

The property list above is shown for Snowflake, where the source data is selected using the Database, Schema, and Source Table properties. On the other supported platforms, only the properties used to select the source data differ, as listed below; the remaining properties behave as described above where they are available.


Amazon Redshift and Azure Synapse Analytics

Source Schema = drop-down

Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas; for Azure Synapse Analytics, see the Azure Synapse documentation.


Source Table = drop-down

Select the source table from which data will be unloaded (output). The tables available in the dropdown selection depend on the source schema.


Google BigQuery

Project = drop-down

Select the Google Cloud project. The special value, [Environment Default], will use the project defined in the environment. For more information, read Creating and managing projects.


Dataset = drop-down

Select the Google BigQuery dataset to load data into. The special value, [Environment Default], will use the dataset defined in the environment. For more information, read Introduction to datasets.


Source Table = drop-down

Select the source table from which data will be unloaded (output). The tables available in the dropdown selection depend on the selected project and dataset.


