Oracle Unload
Oracle Unload is an orchestration component that writes Redshift data to an Oracle database. In a Redshift project, the Redshift input credentials are taken from the environment, so you only need to configure properties that identify the data to be written and set the connection to the Oracle destination.
Properties
Reference material is provided below for the Configure and Destination properties.
Configure
Mode
= drop-down
- Basic: This mode will build a query for you using settings from the Data Source, Data Selection, Data Source Filter, Combine Filters, and Row Limit parameters. In most cases, this mode will be sufficient.
- Advanced: This mode will require you to write an SQL-like query to call data from the service you're connecting to. The available fields and their descriptions are documented in the data model.
There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.
Note
While the query is exposed in an SQL-like language, the exact semantics can be surprising. For example, filtering on a column can return more data than not filtering on it, which would be impossible in regular SQL.
SQL Query
= code editor
This is an SQL-like SELECT query, written in the SQL accepted by your cloud data warehouse. Treat collections as table names, and fields as columns. Only available in Advanced mode.
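As a hedged illustration, an Advanced-mode query might look like the following. The data source (orders) and column names here are hypothetical; the collections and fields actually available are listed in the data model.

```sql
-- Illustrative only: collection and field names depend on your data source.
SELECT id, amount, created_date
FROM orders
WHERE created_date >= '2023-01-01'
LIMIT 100;
```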
Data Source
= drop-down
Select a single data source to be extracted from the source system and loaded into a table in the destination. The source system defines the data sources available. Use multiple components to load multiple data sources.
Data Selection
= dual listbox
Choose one or more columns to return from the query. The columns available are dependent upon the data source selected. Move columns left-to-right to include in the query.
To use grid variables, tick the Use Grid Variable checkbox at the bottom of the Data Selection dialog.
Data Source Filter
= column editor
Define one or more filter conditions that each row of data must meet to be included in the load.
- Input Column: Select an input column. The available input columns vary depending upon the data source.
- Qualifier:
- Is: Compares the column to the value using the comparator.
- Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
- Comparator: Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", and "Null". "Equal to" can match exact strings and numeric values, while comparators such as "Greater than" and "Less than" work only with numerics. The "Like" comparator allows the wildcard character % to be used at the start and end of a string value to match a column. The "Null" comparator matches only null values, ignoring whatever the value is set to. Not all data sources support all comparators, so only a subset of those listed above may be available to choose from.
- Value: The value to be compared.
Click the Text Mode toggle at the bottom of the Data Source Filter dialog to open a multi-line editor that lets you add items in a single block. For more information, read Text mode.
Combine Filters
= drop-down
The data source filters you have defined can be combined using either And or Or logic. With And, all filter conditions must be satisfied for a data row to be loaded; with Or, satisfying any single filter condition is sufficient. The default is And. An illustrative SQL equivalent is shown below.
If you have only one filter, or no filters, this parameter has no effect.
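As a sketch of the combined behavior, with hypothetical column names: two filters—status Is Equal to active, and amount Is Greater than 100—combined with And act roughly like the following SQL. With Or, the AND below would become OR, so a row satisfying either condition is loaded.

```sql
-- Illustrative only: the component builds this filter for you.
SELECT *
FROM orders
WHERE status = 'active'
  AND amount > 100;
```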
Row Limit
= integer
Set a numeric value to limit the number of rows that are loaded. The default is an empty field, which will load all rows.
Destination
Authentication Type
= drop-down
Currently, only Username & Password credentials are supported.
Username
= string
Your Oracle login username.
Password
= string
Your Oracle password. This is stored as a secret definition. Read Secrets and secret definitions to learn how to store a password as a secret definition.
Endpoint
= string
The Oracle database endpoint. Typically, this is your Oracle database IP address. For example, 10.12.2.15.
Port Number
= integer
The port number that follows your Oracle database endpoint. The default value is 1521.
Connection Options
= column editor (optional)
- Parameter: A JDBC parameter supported by the database driver. The parameters available to your database are usually described in the database driver's documentation.
- Value: A value for the given parameter.
Toggle Text Mode to add information to the Connection Options dialog. For more information, read Text mode.
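For example, assuming Text Mode accepts one parameter=value pair per line, a standard Oracle JDBC driver timeout property could be set as follows (the value is illustrative):

```
oracle.net.CONNECT_TIMEOUT=10000
```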
Database
= string
The name of your Oracle database.
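For orientation, Endpoint, Port Number, and Database together identify the target in the same way as a standard Oracle JDBC thin connection string. A sketch, assuming the database is addressed by the hypothetical service name ORCLPDB1 (the component may assemble the URL differently):

```
jdbc:oracle:thin:@//10.12.2.15:1521/ORCLPDB1
```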
Schema
= string
The name of the Oracle schema to output the target table into. For information about using Oracle schemas, read Schemas and Schema Objects.
Table Name
= string
A name for the output table that is to be created in the Oracle database.
Load Strategy
= drop-down
Define what happens if the table name already exists in the specified database and schema.
- Replace: If the specified table already exists, that table will be dropped and replaced by a table created during this pipeline run.
- Truncate and Insert: If the specified table already exists, all rows within it will be removed and the rows produced by this pipeline run will be inserted.
- Append: If the specified table already exists, new rows will be appended to it.
In each case, if the specified table doesn't already exist, a new table will be created.
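As a rough sketch (not the component's actual implementation), the three strategies correspond approximately to the following SQL, using a hypothetical table orders_out:

```sql
-- Illustrative only; table and column definitions are hypothetical.

-- Replace: drop the existing table, then recreate and load it.
DROP TABLE orders_out;
CREATE TABLE orders_out (id NUMBER, amount NUMBER);

-- Truncate and Insert: keep the table definition, remove all rows, then load.
TRUNCATE TABLE orders_out;

-- Append: insert the new rows into the existing table.
INSERT INTO orders_out (id, amount) VALUES (1, 42);
```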
Primary Keys
= dual listbox (optional)
Select one or more columns to be designated as the table's primary key.
To use grid variables, tick the Use Grid Variable checkbox at the bottom of the Primary Keys dialog.
Update Strategy
= drop-down
If Load Strategy is Append and Primary Keys have been specified, this property determines the behavior if there are existing rows with the same primary keys as an appended row.
- Ignore: Existing rows with the same primary key values will be ignored. This is the default behavior.
- Replace: Existing rows with the same primary key values will be replaced.
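Conceptually, Append with the Replace update strategy behaves like an Oracle MERGE on the primary key columns. A minimal sketch, with hypothetical table and column names:

```sql
-- Illustrative only: the component performs this matching for you.
MERGE INTO orders_out tgt
USING incoming_rows src
  ON (tgt.id = src.id)
WHEN MATCHED THEN
  UPDATE SET tgt.amount = src.amount   -- Replace: overwrite the existing row
WHEN NOT MATCHED THEN
  INSERT (id, amount) VALUES (src.id, src.amount);
```

With Ignore, matched rows would instead be left unchanged and only unmatched rows inserted.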
Stage Platform
= drop-down
Data read from the Redshift source will be written directly to the Oracle target, without using an intermediate stage. Currently, therefore, the only option here is Batch Load.
Batch Size
= integer (optional)
The number of rows batched for passing to the Oracle database. The default is 1000.
This component is available for the following cloud data warehouses:

| Snowflake | Databricks | Amazon Redshift |
|---|---|---|
| ❌ | ❌ | ✅ |