Db2 for IBM i streaming connector

The Db2 for IBM i connector can monitor and capture row-level changes within Db2 for IBM i schemas. The connector produces a change event record for every insert, update, and delete event occurring in any tables being monitored by the streaming agent.

Note

We are using the Debezium Db2 for IBM i connector to connect to this source. For more information, read Debezium connector for Db2 for IBM i.


How the connector works

The connector captures row-level changes from your Db2 for IBM i instance. This works by reading journal entries for the tracked tables. The tracked tables must therefore be journaled, and the Db2 for IBM i user requires permissions to query these journals.

The connector initially queries the database to determine which journals capture the selected tables—and when streaming, will read from each of the relevant journal receivers.
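
If you want to see what the connector is working from, the IBM i SQL service QSYS2.JOURNAL_INFO can be used to list the journals in a library together with their attached receivers. The following is a minimal sketch; MYLIB is a placeholder library name, and the columns returned vary by IBM i release:

```sql
-- Illustrative only: list journals in the MYLIB library (placeholder name).
-- QSYS2.JOURNAL_INFO is an IBM i SQL service; SELECT * is used here because
-- the available columns depend on the IBM i release.
SELECT *
FROM QSYS2.JOURNAL_INFO
WHERE JOURNAL_LIBRARY = 'MYLIB';
```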

The streaming process uses a Hybrid SaaS architecture and requires an agent container to be deployed within your virtual private cloud (VPC).

The first time the connector connects to the Db2 for IBM i instance, it performs a full load snapshot of the selected tables for which it is configured to capture changed data. This ensures the historical data of the selected tables is captured for replication to your chosen destination.

Note

Snapshot completion times can vary depending on the dataset selection and infrastructure speeds.

Once the full load snapshot process has completed, the connector begins streaming changes by querying the journal for any new entries.
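
To get a feel for the journal entries the connector consumes, you can inspect them manually with the IBM i table function QSYS2.DISPLAY_JOURNAL. This is a sketch only; MYLIB and MYJRN are placeholders, and the time window is arbitrary:

```sql
-- Illustrative only: show the last 24 hours of entries from journal MYJRN in
-- library MYLIB (placeholder names). The connector reads equivalent journal
-- data to build change events; this query is just for manual inspection.
SELECT *
FROM TABLE (
    QSYS2.DISPLAY_JOURNAL('MYLIB', 'MYJRN',
                          STARTING_TIMESTAMP => CURRENT TIMESTAMP - 24 HOURS)
) AS journal_entries;
```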

For a guide to setting up user permissions and configuring journaling in Db2 for IBM i, read Configure your Db2 for IBM i database.


Source setup

Refer to this section when you create a streaming pipeline.

Server address = string

The server address of your Db2 for IBM i database.


Port = integer

The port number used to access your Db2 for IBM i database. The default is 8471.


Username = string

The username used to log in to the specified database.


Secrets Manager = drop-down

Choose the service you use to manage secrets.


Secret name = string

The name of the secret in your secret manager service that references your Db2 for IBM i password.


JDBC parameters and specific connection settings = column editor

This parameter is optional.

Specify any parameter:value pairs as part of an advanced connection. Click Save to finish adding parameters.
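
For example, with the IBM Toolbox for Java (jt400) JDBC driver commonly used for Db2 for IBM i, you might add pairs such as secure:true (to require a TLS connection) or libraries:MYLIB (to set the library list). These values are illustrative; check your driver's documentation for the properties it supports.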

Click Connect to establish the connection to your source database.


Create a signal table

Follow this section to create a Db2 for IBM i signal table.

Create the table

Create a signal table with the following SQL statement:

    CREATE TABLE IF NOT EXISTS <schema>.<table> (
        id VARCHAR(42) PRIMARY KEY,
        type VARCHAR(32) NOT NULL,
        data VARCHAR(2048) NULL
    );

The signal table should have three columns:

  • id: An identifier for each signal execution, of type VARCHAR(42).
  • type: Indicates the type of signal being sent, of type VARCHAR(32).
  • data: Stores additional data related to the signal, of type VARCHAR(2048).

Commence signal table journaling

Start journaling for the signal table, where <schema> is the schema name, <table> is the signal table name, and <journal-name> is the name of the journal that will record changes to the signal table:

```sql
STRJRNPF FILE(<schema>/<table>) JRN(<schema>/<journal-name>) IMAGES(*BOTH)
```

Grant database role privileges

The database role configured to run the pipeline needs the expected privileges on the signal table: an on-demand snapshot requires USAGE on the schema containing the signal table and INSERT on the signal table itself.

Grant SQL USAGE privileges on the schema containing the signal table:

    GRANT USAGE ON <schema> TO <streaming_user>;

Where <schema> is the schema name and <streaming_user> is the user account name.

Grant SQL INSERT privileges on the signal table:

    GRANT INSERT ON <schema>.<table> TO <streaming_user>;

Where <schema> is the schema name, <table> is the signal table name, and <streaming_user> is the user account name.

Once you set up the signal table, on-demand snapshots can be enabled, and new tables added to the pipeline after the initial startup will be snapshotted.
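
Because this connector is based on the Debezium Db2 for IBM i connector, an on-demand (ad hoc) snapshot is typically requested by inserting a row into the signal table that follows Debezium's signaling convention. The statement below is a sketch only; the id value and the table listed in data are placeholders, and the exact payload accepted by your pipeline may differ:

```sql
-- Illustrative only: a Debezium-style execute-snapshot signal row.
-- <schema>.<table> is the signal table created above; id is any unique value,
-- and data names the fully qualified tables to snapshot.
INSERT INTO <schema>.<table> (id, type, data)
VALUES ('ad-hoc-snapshot-1',
        'execute-snapshot',
        '{"data-collections": ["MYSCHEMA.MYTABLE"]}');
```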


Configuring new tables

When you add tables to a pipeline, they must be correctly configured so that the on-demand snapshot can take place and future changes to those tables are streamed. Read Configure your Db2 for IBM i database for a more comprehensive outline of these requirements.

If on-demand snapshots are disabled, any new tables added to a pipeline will have their changes captured from that point onwards, but will not synchronize their historical data with the target, and the pipeline will immediately have a STREAMING status.

Migrating tables to a single journal

You may already have tables configured to be tracked by different journals, even though on-demand snapshots currently only support a single journal. You can change the journal associated with a table by running the following commands (a SQL-based alternative is sketched after these steps):

  1. End journaling for a table:

        ENDJRNPF FILE(<schema>/<table>)
    

    Where <schema> is the schema name and <table> is the table name to end journal tracking for.

  2. Restart journal tracking for a table under a predefined journal:

        STRJRNPF FILE(<schema>/<table>) JRN(<schema>/<journal-name>) IMAGES(*BOTH)
    

    Where <schema> is the schema name, <table> is the name of the table being moved to the journal, and <journal-name> is the name of the journal that will record changes to the table going forward.
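
If you prefer to run these steps from an SQL session instead of a 5250 command line, the same CL commands can be passed to the QSYS2.QCMDEXC procedure. A minimal sketch, assuming placeholder names MYSCHEMA, MYTABLE, and MYJRN:

```sql
-- Illustrative only: end journaling for the table and restart it under the
-- target journal by passing the CL commands to QSYS2.QCMDEXC.
CALL QSYS2.QCMDEXC('ENDJRNPF FILE(MYSCHEMA/MYTABLE)');
CALL QSYS2.QCMDEXC('STRJRNPF FILE(MYSCHEMA/MYTABLE) JRN(MYSCHEMA/MYJRN) IMAGES(*BOTH)');
```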