Oracle Unload from Snowflake
The Oracle Unload orchestration component lets you write data from your Snowflake source to a target Oracle database. In a Snowflake Data Productivity Cloud project, the Snowflake account credentials are taken from the environment the current branch is connected to. This means you only need to configure parameters to identify the data being written and to set up the connection to the target Oracle database.
Properties
Reference material is provided below for the Configure and Destination properties.
Configure
Mode
= drop-down
- Basic: This mode will build a query for you using settings from the Schema, Data Source, Data Selection, Data Source Filter, Combine Filters, and Limit parameters. In most cases, this mode will be sufficient.
- Advanced: This mode will require you to write an SQL-like query to call data from the service you're connecting to. The available fields and their descriptions are documented in the data model.
There are some special pseudo columns that can form part of a query filter, but are not returned as data. This is fully described in the data model.
Note
While the query is exposed in an SQL-like language, the exact semantics can be surprising, for example, filtering on a column can return more data than not filtering on it. This is an impossible scenario with regular SQL.
Warehouse
= drop-down
The Snowflake source warehouse. The special value [Environment Default] uses the warehouse defined in the environment. Read Snowflake's Virtual Warehouses documentation to learn more.
Database
= drop-down
The Snowflake source database. The special value [Environment Default] uses the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
= drop-down
The Snowflake source schema. The special value [Environment Default] uses the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Data Source
= drop-down
Select a single data source to be extracted from the source system and loaded into a table in the destination. The source system defines the data sources available. Use multiple components to load multiple data sources.
Data Selection
= dual listbox
Choose one or more columns to return from the query. The columns available are dependent upon the data source selected. Move columns left-to-right to include in the query.
To use grid variables, tick the Use Grid Variable checkbox at the bottom of the Data Selection dialog.
Data Source Filter
= column editor
Define one or more filter conditions that each row of data must meet to be included in the load.
- Input Column: Select an input column. The available input columns vary depending upon the data source.
- Qualifier:
  - Is: Compares the column to the value using the comparator.
  - Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
- Comparator: Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", and "Null". "Equal to" can match exact strings and numeric values, while other comparators, such as "Greater than" and "Less than", work only with numerics. The "Like" operator allows the wildcard character % to be used at the start and end of a string value to match a column. The "Null" operator matches only null values, ignoring whatever the value is set to. Not all data sources support all comparators, so it is likely that only a subset of the above comparators will be available to choose from.
- Value: The value to be compared.
Click the Text Mode toggle at the bottom of the Data Source Filter dialog to open a multi-line editor that lets you add items in a single block. For more information, read Text mode.
Combine Filters
= drop-down
The data source filters you have defined can be combined using either And or Or logic. If And, then all filter conditions must be satisfied to load the data row. If Or, then only a single filter condition must be satisfied. The default is And.
If you have only one filter, or no filters, this parameter is essentially ignored.
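The qualifier, comparator, and combine logic described above can be sketched in a few lines. This is an illustrative model only, not the component's internals; the comparator names match the documentation, but the function and variable names are hypothetical.

```python
# Hypothetical sketch: evaluate one filter row (Input Column, Qualifier,
# Comparator, Value) against a data row, then combine results with And/Or.
import operator

COMPARATORS = {
    "Equal to": operator.eq,
    "Greater than": operator.gt,
    "Less than": operator.lt,
    "Greater than or equal to": operator.ge,
    "Less than or equal to": operator.le,
    "Null": lambda col, _value: col is None,  # ignores the Value field
}

def evaluate(row, input_column, qualifier, comparator, value):
    result = COMPARATORS[comparator](row[input_column], value)
    # "Not" reverses the effect of the comparison
    return not result if qualifier == "Not" else result

def row_passes(row, filters, combine="And"):
    results = (evaluate(row, *f) for f in filters)
    # And: all conditions must hold; Or: any single condition suffices
    return all(results) if combine == "And" else any(results)

filters = [
    ("status", "Is", "Equal to", "active"),
    ("amount", "Is", "Greater than", 100),
]
print(row_passes({"status": "active", "amount": 250}, filters, "And"))    # True
print(row_passes({"status": "inactive", "amount": 250}, filters, "Or"))   # True
```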
Row Limit
= integer
Set a numeric value to limit the number of rows that are loaded. The default is an empty field, which will load all rows.
Destination
Authentication Type
= drop-down
Currently, only Username & Password credentials are supported.
Username
= string
Your Oracle login username.
Password
= string
Your Oracle password. This is stored as a secret definition. Read Secrets and secret definitions to learn how to store a password as a secret definition.
Endpoint
= string
The Oracle database endpoint. Typically, this is your Oracle database IP address. For example, 10.12.2.15.
Port Number
= integer
The port number that follows your Oracle database endpoint. The default value is 1521.
Connection Identifier
= drop-down
Select a method of identifying the Oracle database to connect to.
- Service Name: In Oracle, the SERVICE_NAMES parameter specifies one or more names by which clients can connect to the Oracle instance.
- SID: The Oracle System Identifier (SID) prefix is the first 8 characters of the database name. This prefix can only contain the characters a-z, A-Z, and 0-9.
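To see how the two identifier methods differ, here is how each typically appears in an Oracle JDBC thin connection string. The component builds the connection for you, so this is purely illustrative; the host, port, and identifier values are placeholders.

```python
# Illustrative only: standard Oracle JDBC thin URL forms.
# Service name uses "//host:port/service_name"; SID uses "host:port:SID".
def jdbc_url(endpoint, port, identifier, use_service_name=True):
    if use_service_name:
        return f"jdbc:oracle:thin:@//{endpoint}:{port}/{identifier}"
    return f"jdbc:oracle:thin:@{endpoint}:{port}:{identifier}"

print(jdbc_url("10.12.2.15", 1521, "ORCLPDB1"))
# jdbc:oracle:thin:@//10.12.2.15:1521/ORCLPDB1
print(jdbc_url("10.12.2.15", 1521, "ORCL", use_service_name=False))
# jdbc:oracle:thin:@10.12.2.15:1521:ORCL
```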
Database
= string
Specify the full name of your Oracle database. This is the name of the database as it is known to the Oracle instance. It can be up to 30 characters long and can contain alphanumeric, underscore (_), dollar ($), and pound (#) characters, but must begin with an alphabetic character. No other special characters are permitted in a database name.
Read Selecting a Database Name to learn more about Oracle database names.
Connection Options
= column editor (optional)
- Parameter: A JDBC parameter supported by the database driver. The parameters available to your database will usually be explained in the database's data model.
- Value: A value for the given parameter.
Toggle Text Mode to add information to the Connection Options dialog. For more information, read Text mode.
Schema
= string
The name of the Oracle schema to output the target table into. For information about using Oracle schemas, read Schemas and Schema Objects.
Table Name
= string
A name for the output table that is to be created in the Oracle database.
Load Strategy
= drop-down
Define what happens if the table name already exists in the specified database and schema.
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Truncate and Insert: If the specified table name already exists, all rows within that table will be removed and new rows will be inserted by this pipeline run.
- Append: If the specified table name already exists, new rows will be appended to that existing table.
In each case, if the specified table name doesn't already exist then a new table will be created.
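The three strategies can be sketched with in-memory lists standing in for database tables. The real component issues DDL/DML against Oracle; this minimal model only illustrates the resulting row sets.

```python
# Hypothetical sketch of Replace / Truncate and Insert / Append semantics,
# using a dict of lists in place of real tables.
def load(tables, name, new_rows, strategy):
    if name not in tables:
        tables[name] = list(new_rows)      # all strategies create a missing table
    elif strategy == "Replace":
        tables[name] = list(new_rows)      # drop and recreate
    elif strategy == "Truncate and Insert":
        tables[name].clear()
        tables[name].extend(new_rows)      # same table, rows replaced
    elif strategy == "Append":
        tables[name].extend(new_rows)      # keep existing rows

tables = {"t": [1, 2]}
load(tables, "t", [3], "Append")
print(tables["t"])  # [1, 2, 3]
load(tables, "t", [9], "Truncate and Insert")
print(tables["t"])  # [9]
```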
Primary Keys
= dual listbox (optional)
Select one or more columns to be designated as the table's primary key.
To use grid variables, tick the Use Grid Variable checkbox at the bottom of the Primary Keys dialog.
Update Strategy
= drop-down
If Load Strategy is Append and Primary Keys have been specified, this property determines the behavior if there are existing rows with the same primary keys as an appended row.
- Ignore: Existing rows with the same primary key values will be ignored. This is the default behavior.
- Replace: Existing rows with the same primary key values will be replaced.
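A small sketch of how Ignore and Replace behave when an appended row collides with an existing primary key. The function and data below are illustrative placeholders, not the component's implementation.

```python
# Hypothetical sketch: Append with primary keys. "Ignore" skips incoming
# rows whose key already exists; "Replace" overwrites the existing row.
def append_rows(table, incoming, key, strategy="Ignore"):
    existing = {row[key]: row for row in table}
    for row in incoming:
        if row[key] in existing:
            if strategy == "Replace":
                existing[row[key]].update(row)  # overwrite in place
            # "Ignore": leave the existing row untouched
        else:
            table.append(row)

table = [{"id": 1, "v": "old"}]
append_rows(table, [{"id": 1, "v": "new"}, {"id": 2, "v": "x"}], "id", "Ignore")
print(table)  # [{'id': 1, 'v': 'old'}, {'id': 2, 'v': 'x'}]
```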
Stage Platform
= drop-down
Data read from the Snowflake source will be written directly to the Oracle target, without using an intermediate stage. Currently, therefore, the only option here is Batch Load.
Batch Size
= integer (optional)
The number of rows batched for passing to the Oracle database. The default is 1000.
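Batching simply groups the unloaded rows before each write. This sketch shows the grouping only; the default of 1000 matches the component, but everything else here is illustrative.

```python
# Illustrative batching: yield rows in groups of batch_size.
def batches(rows, batch_size=1000):
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

rows = list(range(2500))
print([len(b) for b in batches(rows, 1000)])  # [1000, 1000, 500]
```

A smaller batch size lowers per-call memory use at the cost of more round trips to Oracle; a larger one does the opposite.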
| Snowflake | Databricks | Amazon Redshift |
|---|---|---|
| ✅ | ❌ | ❌ |