
Configure your Oracle database

The steps in this guide for configuring an Oracle database are necessary to use the Oracle connector. These steps are based on advice in the Oracle Database Administration documentation.

Note

  • Setting up Oracle for streaming may require a database restart, depending on the archive log configuration.
  • Streaming has been tested against a multitenancy configuration with a container database, and against non-CDB database sources. If you need additional advice or guidance, contact support.

Prerequisites

To complete the steps detailed in this guide, you'll need:

  • Access to the SYSDBA account. Many of the steps require administrator privileges.
  • An understanding of which database and schemas you will be targeting for streaming.
  • A list of tables that you want to monitor and capture changes for.

Configuring your Oracle database for streaming

First, confirm the current database log mode by running the following SQL*Plus command:

ARCHIVE LOG LIST;

Example output:

-----------------------------------------------------------------------------
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     8
Next log sequence to archive   9
Current log sequence           9

If Database log mode is No Archive Mode and Automatic archival is Disabled, adjust these settings to enable archiving of logs, as follows.

  1. Enable the database archive log. The database must be restarted into the MOUNT state before it can be switched to ARCHIVELOG mode:

    SHUTDOWN IMMEDIATE;
    STARTUP MOUNT;
    ALTER DATABASE ARCHIVELOG;
    ALTER DATABASE OPEN;
    
    -- You can then view the log status again which should have updated:
    ARCHIVE LOG LIST;
    
  2. To adjust the size of the recovery file destination or the path to this destination, run the following SQL statements:

    ALTER SYSTEM SET db_recovery_file_dest_size = <REPLACE_WITH_YOUR_SIZE>;
    ALTER SYSTEM SET db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
    
    -- If the above has been set, you will need to restart your database
    SHUTDOWN IMMEDIATE;
    STARTUP;
    
  3. Enable SUPPLEMENTAL logging for any tables you would like to monitor and consume changes from, using the following commands:

    -- It is recommended to enable supplemental logging for individual tables to minimize the amount of information captured in the redo logs
    ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
    
    -- Minimal supplemental logging must be enabled at the database level and can be configured as follows.
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
    
    -- You can view the status of logging for all tables owned by the current user using the following query
    SELECT OWNER, LOG_GROUP_NAME, TABLE_NAME, LOG_GROUP_TYPE, "ALWAYS", "GENERATED"
    FROM DBA_LOG_GROUPS
    WHERE OWNER = '<YOUR_USER>';
    

    To enable supplemental logging for your selected tables, you may need to log in with a different user who has access to the correct schemas and tables.

    Note

    LogMiner requires table or column names that are 30 characters or less. The LogMiner utility (DBMS_LOGMNR) doesn't support long table or column names when supplemental logging is enabled. For more information, read the Oracle documentation related to your database version.

  4. Create a new TABLESPACE with these specific size and auto extend settings:

    -- Note that the size properties can be adjusted as required
    -- Additionally, there is an option to configure MAXSIZE UNLIMITED. Monitoring of disk usage should be considered if applying this setting
    CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/logminer_tbs.dbf' SIZE 50M REUSE AUTOEXTEND ON NEXT 10M;
    

The preparation work is now complete. The next step is to create a common user account and grant the necessary permissions to enable the user to perform streaming operations. These steps are covered in the next section.
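Before moving on, you can optionally confirm the configuration from a SQL*Plus session. The queries below use standard Oracle data dictionary views; LOGMINER_TBS matches the tablespace name created in step 4 above (adjust if you chose a different name).

```sql
-- Minimal supplemental logging should report YES at the database level
SELECT supplemental_log_data_min, supplemental_log_data_all FROM v$database;

-- Confirm the LogMiner tablespace exists and is online
SELECT tablespace_name, status
FROM dba_tablespaces
WHERE tablespace_name = 'LOGMINER_TBS';
```

If supplemental_log_data_min returns NO, re-run the ALTER DATABASE ADD SUPPLEMENTAL LOG DATA statement from step 3.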


Setting up permissions

Create a common user account in the database. This user account will be used by the streaming agent for the monitoring and consumption of changes as they occur within the database. To continue with the setup, a user account with SYSDBA administrative privilege is required.

Note

  • If you follow this method of setting up permissions, it assumes the use of a multitenancy configuration with a container database, and at least one connectable database. If you don't intend to use a multitenancy configuration, refer to Setting up permissions - Non-CDB environment.
  • Common user accounts are created in cdb$root and must use the convention: c##<name> or C##<name>. The user c##cdcuser is used throughout this guide.
  1. Create a new user following the recommended naming convention:

    -- Ensure a secure password is provided
    -- The tablespace created earlier should be set as the default
    CREATE USER c##cdcuser IDENTIFIED BY password DEFAULT TABLESPACE logminer_tbs QUOTA UNLIMITED ON logminer_tbs CONTAINER=ALL;
    
  2. A number of permissions will need to be granted to this user to enable correct monitoring of the archive logs. These are detailed below:

    GRANT CREATE SESSION TO c##cdcuser CONTAINER=ALL;
    GRANT SET CONTAINER TO c##cdcuser CONTAINER=ALL;
    GRANT SELECT ON V_$DATABASE to c##cdcuser CONTAINER=ALL;
    GRANT FLASHBACK ANY TABLE TO c##cdcuser CONTAINER=ALL;
    GRANT SELECT ANY TABLE TO c##cdcuser CONTAINER=ALL;
    GRANT SELECT_CATALOG_ROLE TO c##cdcuser CONTAINER=ALL;
    GRANT EXECUTE_CATALOG_ROLE TO c##cdcuser CONTAINER=ALL;
    GRANT SELECT ANY TRANSACTION TO c##cdcuser CONTAINER=ALL;
    GRANT LOGMINING TO c##cdcuser CONTAINER=ALL;
    GRANT CREATE TABLE TO c##cdcuser CONTAINER=ALL;
    GRANT LOCK ANY TABLE TO c##cdcuser CONTAINER=ALL;
    GRANT CREATE SEQUENCE TO c##cdcuser CONTAINER=ALL;
    GRANT EXECUTE ON DBMS_LOGMNR TO c##cdcuser CONTAINER=ALL;
    GRANT EXECUTE ON DBMS_LOGMNR_D TO c##cdcuser CONTAINER=ALL;
    GRANT SELECT ON V_$LOG TO c##cdcuser CONTAINER=ALL;
    GRANT SELECT ON V_$LOG_HISTORY TO c##cdcuser CONTAINER=ALL;
    GRANT SELECT ON V_$LOGMNR_LOGS TO c##cdcuser CONTAINER=ALL;
    GRANT SELECT ON V_$LOGMNR_CONTENTS TO c##cdcuser CONTAINER=ALL;
    GRANT SELECT ON V_$LOGMNR_PARAMETERS TO c##cdcuser CONTAINER=ALL;
    GRANT SELECT ON V_$LOGFILE TO c##cdcuser CONTAINER=ALL;
    GRANT SELECT ON V_$ARCHIVED_LOG TO c##cdcuser CONTAINER=ALL;
    GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO c##cdcuser CONTAINER=ALL;
    GRANT SELECT ON V_$TRANSACTION TO c##cdcuser CONTAINER=ALL;
    

Once all the appropriate permissions have been granted, this user is now ready to be used for a streaming pipeline.
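As a sanity check, you can list what was actually granted to the user. Run the following as an administrator in cdb$root; note that grantee names are stored in upper case in the data dictionary.

```sql
-- System privileges granted directly to the user
SELECT privilege FROM dba_sys_privs WHERE grantee = 'C##CDCUSER' ORDER BY privilege;

-- Roles granted to the user (should include SELECT_CATALOG_ROLE and EXECUTE_CATALOG_ROLE)
SELECT granted_role FROM dba_role_privs WHERE grantee = 'C##CDCUSER';

-- Object privileges, such as SELECT on the V_$ views and EXECUTE on the LogMiner packages
SELECT table_name, privilege FROM dba_tab_privs WHERE grantee = 'C##CDCUSER';
```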


Setting up permissions in a non-CDB environment

First, create a local user account in the non-CDB database. This user account will be used by the streaming agent for the monitoring and consumption of changes as they occur in the database. To continue with the setup, a user account with the SYSDBA administrative privilege is required.

Note

The local user cdcuser is used throughout this guide.

  1. Create a new local user:

    -- Ensure a secure password is provided
    -- The tablespace created earlier should be set as the default
    CREATE USER cdcuser IDENTIFIED BY password DEFAULT TABLESPACE logminer_tbs QUOTA UNLIMITED ON logminer_tbs;
    
  2. A number of permissions will need to be granted to this user to enable correct monitoring of the archive logs. These are detailed below:

    GRANT CREATE SESSION TO cdcuser;
    GRANT SELECT ON V_$DATABASE to cdcuser;
    GRANT FLASHBACK ANY TABLE TO cdcuser;
    GRANT SELECT ANY TABLE TO cdcuser;
    GRANT SELECT_CATALOG_ROLE TO cdcuser;
    GRANT EXECUTE_CATALOG_ROLE TO cdcuser;
    GRANT SELECT ANY TRANSACTION TO cdcuser;
    GRANT LOGMINING TO cdcuser;
    GRANT CREATE TABLE TO cdcuser;
    GRANT LOCK ANY TABLE TO cdcuser;
    GRANT CREATE SEQUENCE TO cdcuser;
    GRANT EXECUTE ON DBMS_LOGMNR TO cdcuser;
    GRANT EXECUTE ON DBMS_LOGMNR_D TO cdcuser;
    GRANT SELECT ON V_$LOG TO cdcuser;
    GRANT SELECT ON V_$LOG_HISTORY TO cdcuser;
    GRANT SELECT ON V_$LOGMNR_LOGS TO cdcuser;
    GRANT SELECT ON V_$LOGMNR_CONTENTS TO cdcuser;
    GRANT SELECT ON V_$LOGMNR_PARAMETERS TO cdcuser;
    GRANT SELECT ON V_$LOGFILE TO cdcuser;
    GRANT SELECT ON V_$ARCHIVED_LOG TO cdcuser;
    GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO cdcuser;
    GRANT SELECT ON V_$TRANSACTION TO cdcuser;
    

Once all the appropriate permissions have been granted, this user is now ready to be used for a streaming pipeline in a non-CDB environment.
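A quick way to verify the setup is to connect as the new user and inspect the privileges active in that session, using the standard SESSION_PRIVS and USER_ROLE_PRIVS dictionary views:

```sql
-- Run while connected as cdcuser: system privileges in effect for this session
SELECT privilege FROM session_privs ORDER BY privilege;

-- Roles granted to the current user
SELECT granted_role FROM user_role_privs;
```

If any of the granted privileges from step 2 are missing here, reconnect as an administrator and re-run the corresponding GRANT statement.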

Note

In Oracle Database 19c (19.1), the continuous_mine option of the dbms_logmnr.start_logmnr package is deprecated and no longer available. Matillion's streaming offering does not rely on the continuous_mine functionality of LogMiner, so it is unaffected by this deprecation.


Descriptions of grants

The following table provides additional context for each role.

Role name Description
CREATE SESSION Allows the connector to establish a connection with the Oracle database.
SET CONTAINER Enables the connector to switch between pluggable databases. This is necessary only when the Oracle setup supports container database (CDB) functionality.
SELECT ON V_$DATABASE Allows the connector to retrieve data from the V$DATABASE view.
FLASHBACK ANY TABLE Allows the connector to execute Flashback queries, used for the initial data snapshot process.
SELECT ANY TABLE Grants the connector permission to read data from any table.
SELECT_CATALOG_ROLE Provides access to the data dictionary for the connector, essential for Oracle LogMiner sessions.
EXECUTE_CATALOG_ROLE Allows the connector to write the data dictionary to the Oracle redo logs, enabling tracking of schema modifications.
SELECT ANY TRANSACTION Enables the snapshot process to conduct a Flashback snapshot query on any transaction. This grant is necessary alongside FLASHBACK ANY TABLE.
LOGMINING This role is applicable in newer Oracle versions, granting comprehensive access to Oracle LogMiner and its associated packages. In older Oracle versions lacking this role, this grant can be disregarded.
CREATE TABLE Permits the connector to create a flush table within its default tablespace. The flush table facilitates explicit control over flushing LGWR internal buffers to disk.
LOCK ANY TABLE Allows the connector to lock tables during schema snapshots. If snapshot locks are disabled through configuration, this grant can be ignored.
CREATE SEQUENCE Allows the connector to create a sequence within its default tablespace.
EXECUTE ON DBMS_LOGMNR Allows the connector to execute methods within the DBMS_LOGMNR package, crucial for interaction with Oracle LogMiner. In newer Oracle versions, this is granted via the LOGMINING role, but in older versions, explicit granting is necessary.
EXECUTE ON DBMS_LOGMNR_D Allows the connector to execute methods within the DBMS_LOGMNR_D package, vital for interaction with Oracle LogMiner. In newer Oracle versions, this is granted via the LOGMINING role, but in older versions, explicit granting is necessary.
SELECT ON V_$… Allows the connector to read these specific views. The connector must be able to read information about the Oracle redo and archive logs, and the current transaction state, to prepare the Oracle LogMiner session. Without these grants, the connector can't operate.
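For context, the LogMiner-related grants above are what allow the connector to run sessions along the lines of the sketch below. This is illustrative only, not something you need to run yourself: the log file path is a placeholder, and because continuous_mine is unavailable in 19c, each redo or archive log must be registered explicitly before mining.

```sql
-- Illustrative LogMiner session (placeholder log file path):
-- register a redo/archive log explicitly, then start mining
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/path/to/archivelog.arc', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- change records become visible through V$LOGMNR_CONTENTS
SELECT scn, operation, sql_redo
FROM v$logmnr_contents
WHERE seg_owner = 'INVENTORY';

-- always end the session to release resources
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```

This is why the user needs EXECUTE on DBMS_LOGMNR and SELECT on the V_$LOGMNR_* views: without them, neither the session control calls nor the V$LOGMNR_CONTENTS query would succeed.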