RDS Bulk Output

RDS Bulk Output lets users load the contents of a table (or view) into a table in an Amazon RDS database.


If you're using a Matillion Full SaaS solution, you may need to allowlist the IP address ranges from which Matillion Full SaaS agents call out to their source systems or to cloud data platforms.


Currently, you may need to enter the endpoint manually.


Name = string

A human-readable name for the component.

RDS Type = drop-down

Select the database type.

RDS Endpoint = drop-down

Your RDS database endpoint. If the IAM role attached to the instance (or the manually entered credentials associated with the current environment) has permission to query the RDS endpoints, you can select the RDS endpoint from a list. Otherwise, you must enter it manually. The endpoint can be found in the RDS console and consists of a long dotted hostname and a port number, separated by a colon. To acquire your database endpoint and provide it manually, follow these steps:

  1. Log in to the AWS Console.
  2. In the Find Services search bar, search for RDS.
  3. In the Amazon RDS navigation column on the left side of your screen, click Databases.
  4. Select a database.
  5. Locate the endpoint for that database in the Connectivity & security section.

Users must include the port number when manually typing the endpoint.
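When entered manually, the endpoint takes the form hostname:port. As a minimal sketch (the endpoint string below is a made-up example, not a real instance), a check for that format might look like:

```python
def parse_rds_endpoint(endpoint: str) -> tuple[str, int]:
    """Split a manually entered endpoint of the form host:port."""
    host, sep, port = endpoint.rpartition(":")
    if not sep or not port.isdigit():
        raise ValueError("Endpoint must include a numeric port, e.g. host:5432")
    return host, int(port)

# Illustrative endpoint only -- yours comes from the RDS console.
host, port = parse_rds_endpoint("mydb.cluster-abc123.eu-west-1.rds.amazonaws.com:5432")
```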

Database Name = string

Provide the name of the database within your RDS instance. In the AWS Console, this is the "DB identifier".

Username = string

The RDS connection username.

Password = drop-down

The secret definition containing the password for the username above. Save your password as a secret definition before using this component. Only available when User/Password is selected in the Authentication Method property.

JDBC Options = column editor

  • Parameter: A JDBC parameter supported by the database driver. The available parameters are explained in the data model. Manual setup is not usually required, since sensible defaults are assumed.
  • Value: A value for the given parameter.

Database = drop-down

The Snowflake database. The special value, [Environment Default], will use the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.

Schema = drop-down

The Snowflake schema. The special value, [Environment Default], will use the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.

Source Table = drop-down

The table (or view) on your cluster to copy to RDS.

Target Table = string

A name for the new table.

Target Schema = string

A schema in the target database. Required if the RDS Type is PostgreSQL or SQL Server.

Load Columns = dual listbox

Choose the columns to load into RDS. If you leave this parameter empty, all columns will be loaded.

Table Maintenance = drop-down

  • None: Assume the RDS database already has the table defined with the correct structure.
  • Create if not exists: Only create the table if it doesn't already exist.
  • Replace: Always drop and recreate the table.
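The three options correspond roughly to the following DDL behaviour. This is a sketch in generic SQL; the exact statements the component issues depend on the RDS type:

```python
def table_maintenance_sql(mode: str, table: str, columns_ddl: str) -> list[str]:
    """Return the DDL statements each Table Maintenance option implies (sketch)."""
    if mode == "None":
        return []  # table is assumed to exist with the correct structure
    if mode == "Create if not exists":
        return [f"CREATE TABLE IF NOT EXISTS {table} ({columns_ddl})"]
    if mode == "Replace":
        return [f"DROP TABLE IF EXISTS {table}",
                f"CREATE TABLE {table} ({columns_ddl})"]
    raise ValueError(f"Unknown mode: {mode}")
```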

Primary Key = dual listbox

Select one or more columns to be designated as the table's primary key.

Update Strategy = drop-down

Select how the output will handle replacing rows with matching primary keys. Options are:

  • Ignore: Existing rows in the target table that match on the primary key are kept, and the matching rows from the source table are not loaded.
  • Replace: Rows in the target table that match the primary key are replaced with the matching rows from the source table.
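On PostgreSQL, for example, the two strategies map naturally onto INSERT ... ON CONFLICT clauses. This is a sketch of that mapping; the SQL the component actually generates may differ:

```python
def upsert_sql(table: str, columns: list[str], keys: list[str], strategy: str) -> str:
    """Sketch of the two Update Strategy options in PostgreSQL upsert syntax."""
    col_list = ", ".join(columns)
    params = ", ".join(["%s"] * len(columns))
    sql = f"INSERT INTO {table} ({col_list}) VALUES ({params}) ON CONFLICT ({', '.join(keys)})"
    if strategy == "Ignore":
        return sql + " DO NOTHING"
    # Replace: overwrite non-key columns with the incoming row's values
    updates = ", ".join(f"{c} = EXCLUDED.{c}" for c in columns if c not in keys)
    return sql + f" DO UPDATE SET {updates}"
```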

Truncate Target Table = yes/no

Whether or not to truncate the target table before loading data.

On Warnings = drop-down

Choose whether to continue with the load if an error is raised, or to fail the run.

Additional Copy Options = column editor

Any additional options that you want to apply to the copy operation. Some of these may conflict with options the component already sets. In particular, the component escapes the data so that it loads into the target database even if it contains row and/or column delimiters, so you should never override the escape or delimiter options.

Batch Size = integer

Optional. Specifies the number of rows to load into the target between commits. On a very large export, this can keep the Amazon RDS log files from growing too large before the data is committed.
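Conceptually, the exported rows are split into chunks of Batch Size, and a commit is issued after each chunk. A minimal sketch of that chunking:

```python
from itertools import islice

def batches(rows, batch_size):
    """Yield successive chunks of at most batch_size rows."""
    it = iter(rows)
    while chunk := list(islice(it, batch_size)):
        yield chunk

# Each chunk would be inserted and committed before the next is read,
# so the RDS transaction log stays small on large exports.
```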
