{prodname}'s Oracle connector captures and records row-level changes that occur in databases on an Oracle server,
including tables that are added while the connector is running.
You can configure the connector to emit change events for specific subsets of schemas and tables, or to ignore, mask, or truncate values in specific columns.
For information about the Oracle Database versions that are compatible with this connector, see the link:https://debezium.io/releases/[{prodname} release overview].
For information about the Oracle Database versions that are compatible with this connector, see the link:{LinkDebeziumSupportedConfigurations}[{NameDebeziumSupportedConfigurations}].
{prodname} ingests change events from Oracle by using the native LogMiner database package, the https://docs.oracle.com/database/121/XSTRM/xstrm_intro.htm#XSTRM72647[XStream API], or https://www.bersler.com/openlogreplicator/[OpenLogReplicator].
To optimally configure and run a {prodname} Oracle connector, it is helpful to understand how the connector performs snapshots, streams change events, determines Kafka topic names, uses metadata, and implements event buffering.
Typically, the redo logs on an Oracle server are configured to not retain the complete history of the database.
As a result, the {prodname} Oracle connector cannot retrieve the entire history of the database from the logs.
To enable the connector to establish a baseline for the current state of the database, the first time that the connector starts, it performs an initial _consistent snapshot_ of the database.
If the time needed to complete the initial snapshot exceeds the `UNDO_RETENTION` time that is set for the database (fifteen minutes, by default), an ORA-01555 exception can occur.
For more information about the error, and about the steps that you can take to recover from it, see the xref:what-causes-ora-01555-and-how-to-handle-it[Frequently asked questions].
During a table's snapshot, Oracle can raise an ORA-01466 exception.
This happens when a user modifies the schema of the table, or adds, changes, or drops an index or other object that is associated with the table being snapshotted.
If this happens, the connector stops, and the initial snapshot must be taken again from the beginning.
To remediate the problem, configure the xref:oracle-property-snapshot-database-errors-max-retries[`snapshot.database.errors.max.retries`] property with a value greater than `0` so that the snapshot of the affected table restarts.
Although the entire snapshot does not start over when a retry occurs, the affected table is re-read from the beginning, and the table's topic contains duplicate snapshot events.
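For example, the following fragment of a connector configuration permits a table snapshot to be retried; the retry count of `3` is illustrative.

[source,json,indent=0]
----
{
    "snapshot.database.errors.max.retries": "3"
}
----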
The following workflow lists the steps that {prodname} takes to create a snapshot.
These steps describe the process for a snapshot when the xref:{context}-property-snapshot-mode[`snapshot.mode`] configuration property is set to its default value, which is `initial`.
By default, the connector captures all tables except those with xref:schemas-that-the-debezium-oracle-connector-excludes-when-capturing-change-events[schemas that exclude them from capture].
After the snapshot completes, the connector continues to stream data for the specified tables.
If you want the connector to capture data only from specific tables, you can direct it to capture only a subset of tables or table elements by setting properties such as xref:{context}-property-table-include-list[`table.include.list`] or xref:{context}-property-table-exclude-list[`table.exclude.list`].
3. Obtain a `ROW SHARE MODE` lock on each of the captured tables to prevent structural changes from occurring during creation of the snapshot.
{prodname} holds the locks for only a short time.
4. Read the current system change number (SCN) position from the server's redo log.
5. Capture the structure of all database tables, or all tables that are designated for capture.
The connector persists schema information in its internal database schema history topic.
The schema history provides information about the structure that is in effect when a change event occurs. +
+
[NOTE]
====
By default, the connector captures the schema of every table in the database, including tables that are not configured for capture.
If tables are not configured for capture, the initial snapshot captures only their structure; it does not capture any table data.
For more information about why snapshots persist schema information for tables that you did not include in the initial snapshot, see xref:understanding-why-initial-snapshots-capture-the-schema-history-for-all-tables[Understanding why initial snapshots capture the schema for all tables].
====
6. Release the locks obtained in Step 3.
Other database clients can now write to any previously locked tables.
7. At the SCN position that was read in Step 4, the connector scans the tables that are designated for capture (`SELECT * FROM ... AS OF SCN 123`).
During the scan, the connector completes the following tasks:
.. Confirms that the table was created before the snapshot began.
If the table was created after the snapshot began, the connector skips the table.
After the snapshot is complete, and the connector transitions to streaming, it emits change events for any tables that were created after the snapshot began.
.. Produces a `read` event for each row that is captured from a table.
All `read` events contain the same SCN position, which is the SCN position that was obtained in step 4.
.. Emits each `read` event to the Kafka topic for the source table.
.. Releases data table locks, if applicable.
8. Record the successful completion of the snapshot in the connector offsets.
The resulting initial snapshot captures the current state of each row in the captured tables.
From this baseline state, the connector captures subsequent changes as they occur.
If the snapshot process is interrupted by a connector failure, a rebalance, or another reason, the process restarts after the connector restarts.
After the connector completes the initial snapshot, it continues streaming from the position that it read in Step 4 so that it does not miss any updates.
If the connector stops again for any reason, after it restarts, it resumes streaming changes from where it previously left off.
|The connector performs a database snapshot as described in the xref:default-workflow-for-performing-an-initial-snapshot[default workflow for creating an initial snapshot].
After the snapshot completes, the connector begins to stream event records for subsequent database changes.
|The connector performs a database snapshot and then stops before streaming any change event records; subsequent change events are not captured.
|The connector captures the structure of all relevant tables, performing all of the steps described in the xref:default-workflow-for-performing-an-initial-snapshot[default snapshot workflow], except that it does not create `READ` events to represent the data set at the point of the connector's start-up (Step 7).
|Set the snapshot mode to `configuration_based` to control snapshot behavior through the set of connector properties that have the prefix `snapshot.mode.configuration.based`.
==== Understanding why initial snapshots capture the schema history for all tables
The initial snapshot that a connector runs captures two types of information:
Table data::
Information about `INSERT`, `UPDATE`, and `DELETE` operations in tables that are named in the connector's xref:{context}-property-table-include-list[`table.include.list`] property.
Schema data::
DDL statements that describe the structural changes that are applied to tables.
Schema data is persisted to both the internal schema history topic, and to the connector's schema change topic, if one is configured.
After you run an initial snapshot, you might notice that the snapshot captures schema information for tables that are not designated for capture.
By default, initial snapshots are designed to capture schema information for every table that is present in the database, not only from tables that are designated for capture.
Connectors require that the table's schema is present in the schema history topic before they can capture a table.
By enabling the initial snapshot to capture schema data for tables that are not part of the original capture set, {prodname} prepares the connector to readily capture event data from these tables should that later become necessary.
If the initial snapshot does not capture a table's schema, you must add the schema to the history topic before the connector can capture data from the table.
In some cases, you might want to limit schema capture in the initial snapshot.
This can be useful when you want to reduce the time required to complete a snapshot, or when {prodname} connects to the database instance through a user account that has access to multiple logical databases, but you want the connector to capture changes only from tables in a specific logical database.
* xref:oracle-capturing-data-from-tables-not-captured-by-the-initial-snapshot-no-schema-change[Capturing data from tables not captured by the initial snapshot (no schema change)]
* xref:oracle-capturing-data-from-new-tables-with-schema-changes[Capturing data from tables not captured by the initial snapshot (schema change)]
* Setting the xref:{context}-property-database-history-store-only-captured-tables-ddl[`schema.history.internal.store.only.captured.tables.ddl`] property to specify the tables from which to capture schema information.
* Setting the xref:{context}-property-database-history-store-only-captured-databases-ddl[`schema.history.internal.store.only.captured.databases.ddl`] property to specify the logical databases from which to capture schema changes.
==== Capturing data from tables not captured by the initial snapshot (no schema change)
In some cases, you might want the connector to capture data from a table whose schema was not captured by the initial snapshot.
Depending on the connector configuration, the initial snapshot might capture the table schema only for specific tables in the database.
If the table schema is not present in the history topic, the connector fails to capture the table, and reports a missing schema error.
You might still be able to capture data from the table, but you must perform additional steps to add the table schema.
.Prerequisites
* You want to capture data from a table with a schema that the connector did not capture during the initial snapshot.
* All entries for the table in the transaction log use the same schema.
For information about capturing data from a new table that has undergone structural changes, see xref:oracle-capturing-data-from-new-tables-with-schema-changes[].
.Procedure
1. Stop the connector.
2. Remove the internal database schema history topic that is specified by the xref:{context}-property-database-history-kafka-topic[`schema.history.internal.kafka.topic`] property.
3. In the connector configuration:
.. Set the xref:{context}-property-snapshot-mode[`snapshot.mode`] to `schema_only_recovery`.
.. (Optional) Set the value of xref:{context}-property-database-history-store-only-captured-tables-ddl[`schema.history.internal.store.only.captured.tables.ddl`] to `false` to ensure that in the future the connector can readily capture data for tables that are not currently designated for capture.
Connectors can capture data from a table only if the table's schema history is present in the history topic.
The incremental snapshot first streams the historical data of the newly added tables, and then resumes reading changes from the redo and archive logs for previously configured tables, including changes that occurred while the connector was offline.
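As a minimal sketch, the configuration changes described in Step 3 of the preceding procedure might look like the following fragment:

[source,json,indent=0]
----
{
    "snapshot.mode": "schema_only_recovery",
    "schema.history.internal.store.only.captured.tables.ddl": "false"
}
----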
==== Capturing data from tables not captured by the initial snapshot (schema change)
If a schema change is applied to a table, records that are committed before the schema change have different structures than those that were committed after the change.
When {prodname} captures data from a table, it reads the schema history to ensure that it applies the correct schema to each event.
If the schema is not present in the schema history topic, the connector is unable to capture the table, and an error results.
If you want to capture data from a table that was not captured by the initial snapshot, and the schema of the table was modified, you must add the schema to the history topic, if it is not already available.
You can add the schema by running a new schema snapshot, or by running an initial snapshot for the table.
.Prerequisites
* You want to capture data from a table with a schema that the connector did not capture during the initial snapshot.
* A schema change was applied to the table so that the records to be captured do not have a uniform structure.
.Procedure
Initial snapshot captured the schema for all tables (`store.only.captured.tables.ddl` was set to `false`)::
Initial snapshot did not capture the schema for all tables (`store.only.captured.tables.ddl` was set to `true`)::
If the initial snapshot did not save the schema of the table that you want to capture, complete one of the following procedures:
Procedure 1: Schema snapshot, followed by incremental snapshot:::
In this procedure, the connector first performs a schema snapshot.
You can then initiate an incremental snapshot to enable the connector to synchronize data.
1. Stop the connector.
2. Remove the internal database schema history topic that is specified by the xref:{context}-property-database-history-kafka-topic[`schema.history.internal.kafka.topic`] property.
3. Clear the offsets in the configured Kafka Connect link:{link-kafka-docs}/#connectconfigs_offset.storage.topic[`offset.storage.topic`].
For more information about how to remove offsets, see the link:https://debezium.io/documentation/faq/#how_to_remove_committed_offsets_for_a_connector[{prodname} community FAQ].
+
[WARNING]
====
Removing offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data.
This operation is potentially destructive, and should be performed only as a last resort.
====
4. Set values for properties in the connector configuration as described in the following steps:
2. Remove the internal database schema history topic that is specified by the xref:oracle-property-database-history-kafka-topic[`schema.history.internal.kafka.topic`] property.
3. Clear the offsets in the configured Kafka Connect link:{link-kafka-docs}/#connectconfigs_offset.storage.topic[`offset.storage.topic`].
For more information about how to remove offsets, see the link:https://debezium.io/documentation/faq/#how_to_remove_committed_offsets_for_a_connector[{prodname} community FAQ].
+
[WARNING]
====
Removing offsets should be performed only by advanced users who have experience in manipulating internal Kafka Connect data.
This operation is potentially destructive, and should be performed only as a last resort.
====
4. Edit the xref:{context}-property-table-include-list[`table.include.list`] to add the tables that you want to capture.
5. Set values for properties in the connector configuration as described in the following steps:
.. Set the value of the xref:{context}-property-snapshot-mode[`snapshot.mode`] property to `initial`.
.. (Optional) Set xref:{context}-property-database-history-store-only-captured-tables-ddl[`schema.history.internal.store.only.captured.tables.ddl`] to `false`.
6. Restart the connector.
The connector takes a full database snapshot.
After the snapshot completes, the connector transitions to streaming.
7. (Optional) To capture any data that changed while the connector was off-line, initiate an xref:debezium-oracle-incremental-snapshots[incremental snapshot].
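As a minimal sketch, the configuration changes described in Steps 4 and 5 of the preceding procedure might look like the following fragment; the table names in `table.include.list` are hypothetical placeholders that you replace with the tables that you want to capture.

[source,json,indent=0]
----
{
    "snapshot.mode": "initial",
    "schema.history.internal.store.only.captured.tables.ddl": "false",
    "table.include.list": "DEBEZIUM.CUSTOMERS,DEBEZIUM.ORDERS"
}
----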
By default, the Oracle connector writes change events for all `INSERT`, `UPDATE`, and `DELETE` operations that occur in a table to a single Apache Kafka topic that is specific to that table.
The connector uses the following convention to name change event topics: `_<topicPrefix>_._<schemaName>_._<tableName>_`.
For example, if `fulfillment` is the server name, `inventory` is the schema name, and the database contains tables with the names `orders`, `customers`, and `products`,
the {prodname} Oracle connector emits events to the following Kafka topics, one for each table in the database:
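Assuming that Oracle stores the schema and table identifiers in uppercase, which is the typical default, the topic names would resemble the following:

[source,indent=0]
----
fulfillment.INVENTORY.ORDERS
fulfillment.INVENTORY.CUSTOMERS
fulfillment.INVENTORY.PRODUCTS
----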
The connector applies similar naming conventions to label its internal database schema history topics, xref:oracle-schema-change-topic[schema change topics], and xref:oracle-transaction-metadata[transaction metadata topics].
For more information about using the logical topic routing SMT to customize topic naming, see {link-prefix}:{link-topic-routing}#topic-routing[Topic routing].
// Title: How {prodname} Oracle connectors handle database schema changes
[[oracle-schema-history-topic]]
=== Schema history topic
When a database client queries a database, the client uses the database’s current schema.
However, the database schema can be changed at any time, which means that the connector must be able to identify what the schema was at the time each insert, update, or delete operation was recorded.
Also, a connector cannot necessarily apply the current schema to every event.
If an event is relatively old, it's possible that it was recorded before the current schema was applied.
To ensure correct processing of events that occur after a schema change, Oracle includes in the redo log not only the row-level changes that affect the data, but also the DDL statements that are applied to the database.
As the connector encounters these DDL statements in the redo log, it parses them and updates an in-memory representation of each table’s schema.
The connector uses this schema representation to identify the structure of the tables at the time of each insert, update, or delete operation and to produce the appropriate change event.
In a separate database schema history Kafka topic, the connector records all DDL statements along with the position in the redo log where each DDL statement appeared.
When the connector restarts after either a crash or a graceful stop, it starts reading the redo log from a specific position, that is, from a specific point in time.
The connector rebuilds the table structures that existed at this point in time by reading the database schema history Kafka topic and parsing all DDL statements up to the point in the redo log where the connector is starting.
This database schema history topic is for internal connector use only.
Optionally, the connector can also xref:oracle-schema-change-topic[emit schema change events to a different topic that is intended for consumer applications].
.Additional resources
* xref:oracle-topic-names[Default names for topics] that receive {prodname} event records.
You can configure a {prodname} Oracle connector to produce schema change events that describe structural changes that are applied to tables in the database.
The connector writes schema change events to a Kafka topic named `_<serverName>_`, where `_<serverName>_` is the namespace that is specified in the xref:oracle-property-topic-prefix[`topic.prefix`] configuration property.
`tableChanges`:: A structured representation of the entire table schema after the schema change.
The `tableChanges` field contains an array that includes entries for each column of the table.
Because the structured representation presents data in JSON or Avro format, consumers can easily read messages without first processing them through a DDL parser.
You can modify settings so that the schema history topic stores a different subset of tables.
Use one of the following methods to alter the set of tables that the topic stores:
* Change the permissions of the account that {prodname} uses to access the database so that a different set of tables is visible in the `ALL_TABLES` view.
* Set the connector property link:#oracle-property-database-history-store-only-captured-tables-ddl[`schema.history.internal.store.only.captured.tables.ddl`] to `true`.
When the connector is configured to capture a table, it stores the history of the table's schema changes not only in the schema change topic, but also in an internal database schema history topic.
The internal database schema history topic is for connector use only and it is not intended for direct use by consuming applications.
Never partition the database schema history topic.
For the database schema history topic to function correctly, it must maintain a consistent, global order of the event records that the connector emits to it.
* If you create the database schema history topic manually, specify a partition count of `1`.
* If you use the Apache Kafka broker to create the database schema history topic automatically, set the value of the link:{link-kafka-docs}/#brokerconfigs_num.partitions[Kafka `num.partitions`] configuration option to `1`.
In the `source` object, `ts_ms` indicates the time that the change was made in the database. By comparing the value for `payload.source.ts_ms` with the value for `payload.ts_ms`, you can determine the lag between the source database update and {prodname}.
`id`:: String representation of the unique transaction identifier.
`ts_ms`:: The time of a transaction boundary event (`BEGIN` or `END` event) at the data source.
If the data source does not provide {prodname} with the event time, then the field instead represents the time at which {prodname} processes the event.
`data_collections` (for `END` events):: An array of pairs of `data_collection` and `event_count` elements that indicates the number of events that the connector emits for changes that originate from a data collection.
Entries in the Oracle redo logs do not store the original SQL statements that users submit to make DML changes.
Instead, a redo entry holds a set of change vectors and a set of object identifiers that represent the tablespace, table, and columns related to these vectors.
In other words, redo log entries don't include the names of the schemas, tables, or columns affected by DML changes.
The {prodname} Oracle connector uses the xref:oracle-property-log-mining-strategy[`log.mining.strategy`] configuration property to control how Oracle LogMiner handles the lookup of the object identifiers in the change vectors.
In certain situations, one log mining strategy might prove more reliable than another with regard to schema changes.
However, before you choose a log mining strategy, it's important to consider the implications it might have on performance and overhead.
==== Writing the data dictionary to redo logs
The default mining strategy is called `redo_log_catalog`.
In this strategy, the database flushes a copy of the data dictionary to the redo logs immediately after each redo log switch.
This is the most reliable strategy for tracking schema changes that are interwoven with data changes, because Oracle LogMiner has a way to interpolate between the starting and ending data dictionary states across a series of change vectors.
However, the `redo_log_catalog` mode is also the most expensive, because it requires several key steps to function.
First, this mode requires the data dictionary to be flushed to the redo logs after every log switch.
Flushing the logs after each switch can quickly consume valuable space in the archive log, and the high volume of archive logs might exceed the number that database administrators prepared for.
If you intend to use this mode, coordinate with your database administrators to ensure that the database is configured appropriately.
[IMPORTANT]
====
If you configure the connector to use the `redo_log_catalog` mode, do not use multiple {prodname} Oracle connectors to capture changes from the same logical database.
====
==== Using the online catalog directly
The next strategy mode, `online_catalog`, works differently from the `redo_log_catalog` mode.
When the strategy is set to `online_catalog`, the database never flushes the data dictionary to the redo logs.
Instead, Oracle LogMiner always uses the most current data dictionary state to perform comparisons.
By always using the current dictionary, and eliminating flushing to the redo logs, this strategy requires less overhead, and operates more efficiently.
However, these benefits are offset by the inability to parse interwoven schema changes and data changes.
As a result, this strategy can sometimes result in event failures.
If LogMiner was unable to reconstruct the SQL reliably after a schema change, check the redo logs for evidence.
Look for references to tables with names like `OBJ# 123456` (where the number is the table's object identifier), or for columns with names like `COL1` or `COL2`.
When you configure the connector to use the `online_catalog` strategy, take steps to ensure that the table schema and its indices remain static and free from change.
If the {prodname} connector is configured to use the `online_catalog` mode, and you must apply a schema change, perform the following steps:
1. Wait for the connector to capture all existing data changes (DML).
2. Perform the schema (DDL) change, and then wait for the connector to capture the change.
3. Resume data changes (DML) on the table.
Following this procedure helps to ensure that Oracle LogMiner can safely reconstruct the SQL for all data changes.
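For reference, selecting this strategy is a single configuration change; a minimal fragment follows. The property name and value are the ones described above.

[source,json,indent=0]
----
{
    "log.mining.strategy": "online_catalog"
}
----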
ifdef::community[]
==== Hybrid approach
This is a new, experimental strategy that can be enabled by setting the strategy to `hybrid`.
The goal of this strategy is to provide the reliability of the `redo_log_catalog` strategy with the performance and low overhead of the `online_catalog` strategy, without incurring the disadvantages of either strategy.
The `hybrid` strategy works by primarily operating in the `online_catalog` mode, meaning that the {prodname} Oracle connector first delegates event reconstruction to Oracle LogMiner.
If Oracle LogMiner successfully reconstructs the SQL, {prodname} processes the event normally, as if it were configured to use the `online_catalog` strategy.
If the connector detects that Oracle LogMiner could not reconstruct the SQL, the connector attempts to reconstruct the SQL directly by using the schema history for that table object.
The connector reports a failure only if both Oracle LogMiner and the connector are unable to reconstruct the SQL.
[IMPORTANT]
====
You cannot use the `hybrid` mining strategy if the xref:oracle-property-lob-enabled[`lob.enabled`] property is set to `true`.
If you need to stream `CLOB`, `BLOB`, or `XML` data, use the `online_catalog` or `redo_log_catalog` strategy instead.
====
The {prodname} Oracle connector integrates with Oracle LogMiner by default.
This integration requires a specialized set of steps which includes generating a complex JDBC SQL query to ingest the changes recorded in the transaction logs as change events.
The `V$LOGMNR_CONTENTS` view that the JDBC SQL query uses has no indices to improve the query's performance, so the connector supports several query modes that control how the SQL query is generated and thereby improve its execution.
The xref:oracle-property-log-mining-query-filter-mode[`log.mining.query.filter.mode`] connector property can be configured with one of the following to influence how the JDBC SQL query is generated:
`none`:: (Default) This mode creates a JDBC query that only filters based on the different operation types, such as inserts, updates, or deletes, at the database level.
Filtering based on the schema, table, or username include/exclude lists is performed during the processing loop within the connector. +
+
This mode is often useful when capturing a small number of tables from a database that is not heavily saturated with changes.
The generated query is quite simple, and focuses primarily on reading as quickly as possible with low database overhead.
`in`:: This mode creates a JDBC query that filters not only operation types at the database level, but also schema, table, and username include/exclude lists.
The query's predicates are generated using a SQL in-clause based on the values specified in the include/exclude list configuration properties. +
+
This mode is often useful when capturing a large number of tables from a database that is heavily saturated with changes.
The generated query is much more complex than the `none` mode, and focuses on reducing network overhead and performing as much filtering at the database level as possible. +
+
When you use this mode, **do not** specify regular expressions in the schema and table include/exclude configuration properties.
If you specify regular expressions, the connector fails to match changes against these properties, and changes are missed.
For a configuration sketch, see the example that follows this list.
`regex`:: This mode creates a JDBC query that filters not only operation types at the database level, but also schema, table, and username include/exclude lists.
However, unlike the `in` mode, this mode generates a SQL query using the Oracle `REGEXP_LIKE` operator using a conjunction or disjunction depending on whether include or excluded values are specified. +
+
This mode is often useful when capturing a variable number of tables that can be identified using a small number of regular expressions.
The generated query is much more complex than the query for any other mode, and focuses on reducing network overhead and performing as much filtering at the database level as possible.
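As a minimal sketch, the following fragment configures `in` mode with literal (non-regular-expression) include lists; the schema and table names shown are hypothetical.

[source,json,indent=0]
----
{
    "log.mining.query.filter.mode": "in",
    "schema.include.list": "DEBEZIUM",
    "table.include.list": "DEBEZIUM.CUSTOMERS,DEBEZIUM.ORDERS"
}
----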
Oracle writes all changes to the redo logs in the order in which they occur, including changes that are later discarded by a rollback.
As a result, concurrent changes from separate transactions are intertwined.
When the connector first reads the stream of changes, because it cannot immediately determine which changes are committed or rolled back, it temporarily stores the change events in an internal buffer.
After a change is committed, the connector writes the change event from the buffer to Kafka.
The connector drops change events that are discarded by a rollback.
You can configure the buffering mechanism that the connector uses by setting the property xref:oracle-property-log-mining-buffer-type[`log.mining.buffer.type`].
Under the default `memory` setting, the connector uses the heap memory of the JVM process to allocate and manage buffered event records.
If you use the `memory` buffer setting, be sure that the amount of memory that you allocate to the Java process can accommodate long-running and large transactions in your environment.
The {prodname} Oracle connector can also be configured to use Infinispan as its cache provider, supporting cache stores either locally in embedded mode or remotely on a server cluster.
To use Infinispan, set the xref:oracle-property-log-mining-buffer-type[`log.mining.buffer.type`] property to either `infinispan_embedded` or `infinispan_remote`.
To allow flexibility with Infinispan cache configurations, the connector expects a series of cache configuration properties to be supplied when Infinispan is used to buffer event data.
See the xref:oracle-connector-properties[configuration properties] in the `log.mining.buffer.infinispan.cache` namespace.
The contents of these configuration properties depend on whether the connector is to integrate with a remote Infinispan cluster or to use the embedded engine.
For example, the following illustrates what an embedded configuration would look like for the transaction cache property when using Infinispan in embedded mode:
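The following fragment is a minimal sketch rather than a definitive configuration: the exact Infinispan XML elements and attributes depend on the Infinispan version that the connector bundles, and the `path` value is a hypothetical shared location.

[source,json,indent=0]
----
{
    "log.mining.buffer.type": "infinispan_embedded",
    "log.mining.buffer.infinispan.cache.transactions": "<local-cache name=\"transactions\"><persistence><file-store path=\"/shared/buffer/transactions\" preload=\"true\"/></persistence></local-cache>"
}
----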
Looking at the configuration in-depth, the cache is configured to be persistent.
All caches should be configured this way to avoid loss of transaction events across connector restarts if a transaction is in-progress.
Additionally, the location where the cache is kept is defined by the `path` attribute, which should be a shared location that is accessible to all possible runtime environments.
The Infinispan buffer type is considered incubating; the cache formats may change between versions and may require a re-snapshot.
The migration notes will indicate whether this is needed.
Additionally, when removing a {prodname} Oracle connector that uses the Infinispan buffer, the persisted cache files are not removed from disk automatically.
If the same buffer location will be used by a new connector deployment, the files should be removed manually before deploying the new connector.
The {prodname} Oracle connector utilizes the Hotrod client to communicate with the Infinispan cluster.
Any connector property that is prefixed with `log.mining.buffer.infinispan.client.` will be passed directly to the Hotrod client using the `infinispan.client.` namespace, allowing for complete customization of how the client is to interact with the cluster.
There is at least one required configuration property that must be supplied when using this Infinispan mode:
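A likely example of such a property is the Hotrod server list; the following fragment is a sketch that assumes a remote Infinispan cluster listening on the default Hotrod port, with a hypothetical hostname.

[source,json,indent=0]
----
{
    "log.mining.buffer.type": "infinispan_remote",
    "log.mining.buffer.infinispan.client.hotrod.server_list": "infinispan.example.com:11222"
}
----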
When the {prodname} Oracle connector is configured to use LogMiner, it collects change events from Oracle by using a start and end range that is based on system change numbers (SCNs).
The connector manages this range automatically, increasing or decreasing the range depending on whether the connector is able to stream changes in near real-time, or must process a backlog of changes due to the volume of large or bulk transactions in the database.
Under certain circumstances, the Oracle database advances the SCN by an unusually high amount, rather than increasing the SCN value at a constant rate.
Such a jump in the SCN value can occur because of the way that a particular integration interacts with the database, or as a result of events such as hot backups.
If the difference between the current SCN value and the highest SCN value is greater than the minimum gap size, then the connector has potentially detected an SCN gap.
This allows the connector to quickly catch up to the real-time events without mining smaller ranges in between that return no changes because the SCN value was increased by an unexpectedly large number.
When the connector performs the preceding steps in response to an SCN gap, it ignores the value that is specified by the xref:oracle-property-log-mining-batch-size-max[log.mining.batch.size.max] property.
After the connector finishes the mining session and catches back up to real-time events, it resumes enforcement of the maximum log mining batch size.
// Title: How {prodname} manages offsets in databases that change infrequently
[[low-change-frequency-offset-management]]
=== Low change frequency offset management
The {prodname} Oracle connector tracks system change numbers in the connector offsets so that when the connector is restarted, it can begin where it left off.
These offsets are part of each emitted change event; however, when the frequency of database changes is low (every few hours or days), the offsets can become stale and prevent the connector from successfully restarting if the system change number is no longer available in the transaction logs.
For connectors that use non-CDB mode to connect to Oracle, you can enable xref:oracle-property-heartbeat-interval-ms[`heartbeat.interval.ms`] to force the connector to emit a heartbeat event at regular intervals so that offsets remain synchronized.
For connectors that use CDB mode to connect to Oracle, maintaining synchronization is more complicated.
Not only must you set xref:oracle-property-heartbeat-interval-ms[`heartbeat.interval.ms`], but it's also necessary to set xref:oracle-property-heartbeat-action-query[`heartbeat.action.query`].
Specifying both properties is required, because in CDB mode, the connector specifically tracks changes inside the PDB only.
A supplementary mechanism is needed to trigger change events from within the pluggable database.
At regular intervals, the heartbeat action query causes the connector to insert a new table row, or update an existing row in the pluggable database.
{prodname} detects the table changes and emits change events for them, ensuring that offsets remain synchronized, even in pluggable databases that process changes infrequently.
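The following fragment sketches such a configuration. The interval value, and the heartbeat table and column names in the action query, are hypothetical; the table must reside in the pluggable database and be writable by the connector user, as noted below.

[source,json,indent=0]
----
{
    "heartbeat.interval.ms": "30000",
    "heartbeat.action.query": "UPDATE DEBEZIUM.HEARTBEAT SET last_heartbeat_ts = SYSTIMESTAMP WHERE id = 1"
}
----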
[NOTE]
====
For the connector to use the `heartbeat.action.query` with tables that are not owned by the xref:creating-users-for-the-connector[connector user account], you must grant the connector user permission to run the necessary `INSERT` or `UPDATE` queries on those tables.
The {prodname} Oracle connector ensures that all Kafka Connect _schema names_ are http://avro.apache.org/docs/current/spec.html#names[valid Avro schema names].
This means that the logical server name must start with alphabetic characters or an underscore ([a-z,A-Z,\_]),
and the remaining characters in the logical server name and all characters in the schema and table names must be alphanumeric characters or an underscore ([a-z,A-Z,0-9,\_]).
The connector automatically replaces invalid characters with an underscore character.
Unexpected naming conflicts can result when the only distinguishing characters between multiple logical server names, schema names, or table names are not valid characters, and those characters are replaced with underscores.
For each changed table, the change event key is structured such that a field exists for each column in the primary key (or unique key constraint) of the table at the time when the event is created.
The `schema` portion of the key contains a Kafka Connect schema that describes the content of the key portion.
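A sketch of such a key, for a `customers` table in the `DEBEZIUM` schema whose primary key column is `ID`, might look like the following; the schema name and the value `1004` are illustrative.

[source,json,indent=0]
----
{
    "schema": {
        "type": "struct",
        "fields": [
            {
                "type": "int32",
                "optional": false,
                "field": "ID"
            }
        ],
        "optional": false,
        "name": "server1.DEBEZIUM.CUSTOMERS.Key"
    },
    "payload": {
        "ID": 1004
    }
}
----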
In the preceding example, the `payload` value is not optional, the structure is defined by a schema named `server1.DEBEZIUM.CUSTOMERS.Key`, and there is one required field named `ID` of type `int32`.
The value of the key's `payload` field indicates that it is indeed a structure (which in JSON is just an object) with a single `ID` field, whose value is `1004`.
Therefore, you can interpret this key as describing the row in the `inventory.customers` table (output from the connector named `server1`) whose `ID` primary key column had a value of `1004`.
Although the `column.exclude.list` configuration property allows you to remove columns from the event values, all columns in a primary or unique key are always included in the event's key.
If the table does not have a primary or unique key, then the change event's key is null. This makes sense since the rows in a table without a primary or unique key constraint cannot be uniquely identified.
The structure of a value in a change event message mirrors the structure of the xref:oracle-change-event-keys[message key in the change event] in the message, and contains both a _schema_ section and a _payload_ section.
.Payload of a change event value
An _envelope_ structure in the payload sections of a change event value contains the following fields:
`op`:: A mandatory field that contains a string value describing the type of operation.
The `op` field in the payload of an Oracle connector change event value contains one of the following values: `c` (create or insert), `u` (update), `d` (delete), or `r` (read, which indicates a snapshot).
`before`:: An optional field that, if present, describes the state of the row _before_ the event occurred.
The structure is described by the `server1.INVENTORY.CUSTOMERS.Value` Kafka Connect schema, which the `server1` connector uses for all rows in the `inventory.customers` table.
// Whether or not this field and its elements are available is highly dependent on the https://docs.oracle.com/database/121/SUTIL/GUID-D2DDD67C-E1CC-45A6-A2A7-198E4C142FA3.htm#SUTIL1583[Supplemental Logging] configuration applying to the table.
`ts_ms`:: An optional field that, if present, contains the time (based on the system clock in the JVM that runs the Kafka Connect task) at which the connector processed the event.
.Schema of a change event value
The _schema_ portion of the event message's value contains a schema that describes the envelope structure of the payload and the nested fields within it.
ifdef::product[]
For more information about change event values, see the following topics:
The following example shows the value of a _create_ event value from the `customers` table that is described in the xref:oracle-change-event-keys[change event keys] example:
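The complete event is lengthy, so the following is a heavily condensed sketch: the `schema` portion lists only the `before` and `after` field declarations, the `source` block omits several fields, and all values are illustrative.

[source,json,indent=0]
----
{
    "schema": {
        "type": "struct",
        "name": "server1.DEBEZIUM.CUSTOMERS.Envelope",
        "fields": [
            {
                "type": "struct",
                "optional": true,
                "name": "server1.DEBEZIUM.CUSTOMERS.Value",
                "field": "before"
            },
            {
                "type": "struct",
                "optional": true,
                "name": "server1.DEBEZIUM.CUSTOMERS.Value",
                "field": "after"
            }
        ]
    },
    "payload": {
        "before": null,
        "after": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "annek@noanswer.org"
        },
        "source": {
            "connector": "oracle",
            "name": "server1",
            "ts_ms": 1520085154000,
            "snapshot": "false",
            "db": "ORCLPDB1",
            "schema": "DEBEZIUM",
            "table": "CUSTOMERS",
            "scn": "1513734"
        },
        "op": "c",
        "ts_ms": 1532592105975
    }
}
----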
The names of the schemas for the `before` and `after` fields are of the form `_<logicalName>_._<schemaName>_._<tableName>_.Value`, and thus are entirely independent from the schemas for all other tables.
As a result, when you use the {link-prefix}:{link-avro-serialization}#avro-serialization[Avro converter], the Avro schemas for tables in each logical source have their own evolution and history.
The `payload` portion of this event's _value_, provides information about the event.
It describes that a row was created (`op=c`), and shows that the `after` field value contains the values that were inserted into the `ID`, `FIRST_NAME`, `LAST_NAME`, and `EMAIL` columns of the row.
You can use the {link-prefix}:{link-avro-serialization}#avro-serialization[Avro Converter] to decrease the size of messages that the connector writes to Kafka topics.
* The value of the `op` field is `u`, signifying that this row changed because of an update.
* The `before` field shows the former state of the row with the values that were present before the `update` database commit.
* The `after` field shows the updated state of the row, with the `EMAIL` value now set to `anne@example.com`.
* The structure of the `source` field includes the same fields as before, but the values are different, because the connector captured the event from a different position in the redo log.
* The `ts_ms` field shows the timestamp that indicates when {prodname} processed the event.
* The value of the `op` field is `d`, signifying that the row was deleted.
* The `before` field shows the former state of the row that was deleted with the database commit.
* The value of the `after` field is `null`, signifying that the row no longer exists.
* The structure of the `source` field includes many of the keys that exist in _create_ or _update_ events, but the values in the `ts_ms`, `scn`, and `txId` fields are different.
* The `ts_ms` shows a timestamp that indicates when {prodname} processed this event.
When a row is deleted, the _delete_ event value shown in the preceding example still works with log compaction, because Kafka is able to remove all earlier messages that use the same key.
The message value must be set to `null` to instruct Kafka to remove _all messages_ that share the same key.
To make this possible, by default, {prodname}'s Oracle connector always follows a _delete_ event with a special _tombstone_ event that has the same key but `null` value.
You can change the default behavior by setting the connector property xref:oracle-property-tombstones-on-delete[`tombstones.on.delete`].
a|Mandatory field that describes the source metadata for the event. In a _truncate_ event value, the `source` field structure is the same as for _create_, _update_, and _delete_ events for the same table, and provides the following metadata:
* {prodname} version
* Connector type and name
* Database and table that contains the new row
* Schema name
* If the event was part of a snapshot (always `false` for _truncate_ events)
* ID of the transaction in which the operation was performed
* SCN of the operation
* Timestamp for when the change was made in the database
In the `source` object, `ts_ms` indicates the time that the change was made in the database. By comparing the value for `payload.source.ts_ms` with the value for `payload.ts_ms`, you can determine the lag between the source database update and {prodname}.
Because _truncate_ events represent changes made to an entire table, and have no message key, in topics with multiple partitions, there is no guarantee that consumers receive _truncate_ events and change events (_create_, _update_, and so forth) for a table in order.
For example, when a consumer reads events from different partitions, it might receive an _update_ event for a table after it receives a _truncate_ event for the same table.
Ordering can be guaranteed only if a topic uses a single partition.
When the {prodname} {connector-name} connector detects a change in the value of a table row, it emits a change event that represents the change.
Each change event record is structured in the same way as the original table, with the event record containing a field for each column value.
The data type of a table column determines how the connector represents the column's values in change event fields, as shown in the tables in the following sections.
For each column in a table, {prodname} maps the source data type to a _literal type_ and, in some cases, a _semantic type_, in the corresponding event field.
Literal types:: Describe how the value is literally represented, using one of the following Kafka Connect schema types: `INT8`, `INT16`, `INT32`, `INT64`, `FLOAT32`, `FLOAT64`, `BOOLEAN`, `STRING`, `BYTES`, `ARRAY`, `MAP`, and `STRUCT`.
Semantic types:: Describe how the Kafka Connect schema captures the _meaning_ of the field, by using the name of the Kafka Connect schema for the field.
If the default data type conversions do not meet your needs, you can {link-prefix}:{link-custom-converters}#custom-converters[create a custom converter] for the connector.
For some Oracle large object (CLOB, NCLOB, and BLOB) and numeric data types, you can manipulate the way that the connector performs the type mapping by changing default configuration property settings.
For more information about how {prodname} properties control mappings for these data types, see xref:oracle-binary-character-lob-types[Binary and Character LOB types] and xref:oracle-numeric-types[Numeric types].
Support for `BLOB`, `CLOB`, and `NCLOB` is currently in incubating state, that is, the exact semantics, configuration options and so forth might change in future revisions, based on feedback we receive.
Use of the `BLOB`, `CLOB`, and `NCLOB` with the {prodname} Oracle connector is a Technology Preview feature only.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see link:https://access.redhat.com/support/offerings/techpreview[https://access.redhat.com/support/offerings/techpreview].
|Depending on the setting of the xref:oracle-property-binary-handling-mode[`binary.handling.mode`] property in the connector configuration, the connector maps LOB values of this type to one of the following semantic types:
|Depending on the setting of the xref:oracle-property-binary-handling-mode[`binary.handling.mode`] property in the connector configuration, the connector maps LOB values of this type to one of the following semantic types:
If the value of a `CLOB`, `NCLOB`, or `BLOB` column is updated, the new value is placed in the `after` element of the corresponding update change event.
The `before` element contains the unavailable value placeholder.
You can modify the way that the connector maps the Oracle `DECIMAL`, `NUMBER`, `NUMERIC`, and `REAL` data types by changing the value of the connector's xref:oracle-property-decimal-handling-mode[`decimal.handling.mode`] configuration property.
When the property is set to its default value of `precise`, the connector maps these Oracle data types to the Kafka Connect `org.apache.kafka.connect.data.Decimal` logical type, as indicated in the table.
When the value of the property is set to `double` or `string`, the connector uses alternate mappings for some Oracle data types.
For more information, see the _Semantic type and Notes_ column in the following table.
When the `decimal.handling.mode` property is set to `double`, the connector represents `DECIMAL` values as Java `double` values with schema type `FLOAT64`.
When the `decimal.handling.mode` property is set to `string`, the connector represents DECIMAL values as their formatted string representation with schema type `STRING`.
Contains a structure with two fields: `scale` of type `INT32` that contains the scale of the transferred value and `value` of type `BYTES` containing the original value in an unscaled form.
|`FLOAT[(P)]`
|`STRUCT`
|`io.debezium.data.VariableScaleDecimal` +
+
Contains a structure with two fields: `scale` of type `INT32` that contains the scale of the transferred value and `value` of type `BYTES` containing the original value in an unscaled form.
|`INTEGER`, `INT`
|`BYTES`
|`org.apache.kafka.connect.data.Decimal` +
+
`INTEGER` is mapped in Oracle to `NUMBER(38,0)`, and hence can hold values larger than any of the `INT` types can store.
Contains a structure with two fields: `scale` of type `INT32` that contains the scale of the transferred value and `value` of type `BYTES` containing the original value in an unscaled form.
When the `decimal.handling.mode` property is set to `double`, the connector represents `NUMBER` values as Java `double` values with schema type `FLOAT64`.
When the `decimal.handling.mode` property is set to `string`, the connector represents `NUMBER` values as their formatted string representation with schema type `STRING`.
* P - S >= 19, `BYTES` (`org.apache.kafka.connect.data.Decimal`)
When the `decimal.handling.mode` property is set to `double`, the connector represents `NUMBER` values as Java `double` values with schema type `FLOAT64`.
When the `decimal.handling.mode` property is set to `string`, the connector represents `NUMBER` values as their formatted string representation with schema type `STRING`.
When the `decimal.handling.mode` property is set to `double`, the connector represents `NUMERIC` values as Java `double` values with schema type `FLOAT64`.
When the `decimal.handling.mode` property is set to `string`, the connector represents `NUMERIC` values as their formatted string representation with schema type `STRING`.
Contains a structure with two fields: `scale` of type `INT32` that contains the scale of the transferred value and `value` of type `BYTES` containing the original value in an unscaled form.
When the `decimal.handling.mode` property is set to `double`, the connector represents `REAL` values as Java `double` values with schema type `FLOAT64`.
When the `decimal.handling.mode` property is set to `string`, the connector represents `REAL` values as their formatted string representation with schema type `STRING`.
As mentioned above, Oracle allows negative scales in the `NUMBER` type.
This can cause an issue during conversion to the Avro format when the number is represented as the `Decimal`.
`Decimal` type includes scale information, but https://avro.apache.org/docs/1.11.1/specification/#decimal[Avro specification] allows only positive values for the scale.
Depending on the schema registry that you use, this can result in an Avro serialization failure.
To avoid this issue, you can use `NumberToZeroScaleConverter`, which converts sufficiently high numbers (P - S >= 19) with negative scale into `Decimal` type with zero scale.
By default, the number is converted to the `Decimal` type (`zero_scale.decimal.mode=precise`), but for completeness the remaining two modes (`double` and `string`) are supported as well.
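A minimal sketch of this converter's configuration might look like the following fragment; `zero_scale` is the converter prefix used in the property names above, and the fully qualified class name is an assumption based on the converter's documented name.

[source,json,indent=0]
----
{
    "converters": "zero_scale",
    "zero_scale.type": "io.debezium.connector.oracle.converters.NumberToZeroScaleConverter",
    "zero_scale.decimal.mode": "precise"
}
----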
To enable you to convert source columns to Boolean data types, {prodname} provides a `NumberOneToBooleanConverter` {link-prefix}:{link-custom-converters}#custom-converters[custom converter] that you can use in one of the following ways:
To use this type of conversion, you must set the xref:oracle-property-converters[`converters`] configuration property with the `selector` parameter, as shown in the following example:
Other than the Oracle `INTERVAL`, `TIMESTAMP WITH TIME ZONE`, and `TIMESTAMP WITH LOCAL TIME ZONE` data types, the way that the connector converts temporal types depends on the value of the `time.precision.mode` configuration property.
When the `time.precision.mode` configuration property is set to `adaptive` (the default), then the connector determines the literal and semantic type for the temporal types based on the column's data type definition so that events _exactly_ represent the values in the database:
The number of microseconds for a time interval, using the `365.25 / 12.0` formula for the average number of days per month. +
+
`io.debezium.time.Interval` (when `interval.handling.mode` is set to `string`) +
+
The string representation of the interval value that follows the pattern `P<years>Y<months>M<days>DT<hours>H<minutes>M<seconds>S`, for example, `P1Y2M3DT4H5M6.78S`.
The number of microseconds for a time interval, using the `365.25 / 12.0` formula for the average number of days per month. +
+
`io.debezium.time.Interval` (when `interval.handling.mode` is set to `string`) +
+
The string representation of the interval value that follows the pattern `P<years>Y<months>M<days>DT<hours>H<minutes>M<seconds>S`, for example, `P1Y2M3DT4H5M6.78S`.
When the `time.precision.mode` configuration property is set to `connect`, then the connector uses the predefined Kafka Connect logical types.
This can be useful when consumers only know about the built-in Kafka Connect logical types and are unable to handle variable-precision time values.
Because the level of precision that Oracle supports exceeds the level that the logical types in Kafka Connect support, if you set `time.precision.mode` to `connect`, *a loss of precision* results when the _fractional second precision_ value of a database column is greater than 3:
The number of microseconds for a time interval, using the `365.25 / 12.0` formula for the average number of days per month. +
+
`io.debezium.time.Interval` (when `interval.handling.mode` is set to `string`) +
+
The string representation of the interval value that follows the pattern `P<years>Y<months>M<days>DT<hours>H<minutes>M<seconds>S`, for example, `P1Y2M3DT4H5M6.78S`.
The number of microseconds for a time interval, using the `365.25 / 12.0` formula for the average number of days per month. +
+
`io.debezium.time.Interval` (when `interval.handling.mode` is set to `string`) +
+
The string representation of the interval value that follows the pattern `P<years>Y<months>M<days>DT<hours>H<minutes>M<seconds>S`, for example, `P1Y2M3DT4H5M6.78S`.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see link:https://access.redhat.com/support/offerings/techpreview[https://access.redhat.com/support/offerings/techpreview].
====
endif::product[]
The following table describes how the connector maps XMLTYPE data types.
If a default value is specified for a column in the database schema, the Oracle connector attempts to propagate this value to the schema of the corresponding Kafka record field.
If a temporal type uses a function call such as `TO_TIMESTAMP` or `TO_DATE` to represent the default value, the connector resolves the default value by making an additional database call to evaluate the function.
For example, if a `DATE` column is defined with the default value of `TO_DATE('2021-01-02', 'YYYY-MM-DD')`, the column's default value is the number of days since the UNIX epoch for that date, or `18629` in this case.
If a temporal type uses the `SYSDATE` constant to represent the default value, the connector resolves it based on whether the column is defined as `NOT NULL` or `NULL`.
If the column is nullable, no default value is set; however, if the column is not nullable, the default value resolves to either `0` (for `DATE` or `TIMESTAMP(n)` data types) or `1970-01-01T00:00:00Z` (for `TIMESTAMP WITH TIME ZONE` or `TIMESTAMP WITH LOCAL TIME ZONE` data types).
The default value type is numeric, except when the column is a `TIMESTAMP WITH TIME ZONE` or `TIMESTAMP WITH LOCAL TIME ZONE`, in which case it is emitted as a string.
By default, the {prodname} Oracle connector provides several `CustomConverter` implementations specific to Oracle data types.
These custom converters provide alternative mappings for specific data types based on the connector configuration.
To add a `CustomConverter` to the connector, follow the instructions in the link:../development/converters.adoc[Custom Converters documentation].
=== `NUMBER(1)` to Boolean
Beginning with version 23, Oracle database provides a `BOOLEAN` logical data type.
In earlier versions, the database simulates a `BOOLEAN` type by using a `NUMBER(1)` data type, constrained with a value of `0` for false, or a value of `1` for true.
By default, when {prodname} emits change events for source columns that use the `NUMBER(1)` data type, it converts the data to the `INT8` literal type.
If the default mapping for `NUMBER(1)` data types does not meet your needs, you can configure the connector to use the logical `BOOL` type when it emits these columns by configuring the `NumberOneToBooleanConverter`, as shown in the following example:
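The following sketch assumes the fully-qualified converter class name `io.debezium.connector.oracle.converters.NumberOneToBooleanConverter`; verify the class name against your connector version, and adjust the `selector` expression to match your own tables or columns.

[source,properties]
----
# Register a converter instance under the symbolic name "boolean".
converters=boolean
# Assumed fully-qualified class name of the NUMBER(1)-to-boolean converter.
boolean.type=io.debezium.connector.oracle.converters.NumberOneToBooleanConverter
# Optional: restrict the converter to specific tables or columns (illustrative value).
boolean.selector=.*MYTABLE.FLAG
----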
In the preceding example, the `selector` property is optional.
The `selector` property specifies a regular expression that designates which tables or columns the converter applies to.
If you omit the `selector` property, when {prodname} emits an event, every column with the `NUMBER(1)` data type is converted to a field that uses the logical `BOOL` type.
=== `NUMBER` to zero scale
Oracle supports creating `NUMBER`-based columns with a negative scale, for example, `NUMBER(10,-2)`.
Not all systems can process negative scale values, so these values can result in processing problems in your pipeline.
For example, because Apache Avro does not support these values, problems can occur if {prodname} converts events to Avro format.
Similarly, downstream consumers that do not support these values can also encounter errors.
In the preceding example, the `selector` property enables you to define a regular expression that specifies the tables or columns that the converter processes.
If you omit the `selector` property, the converter maps all `RAW` column types to logical `STRING` field types.
For information about using Vagrant to set up Oracle in a virtual machine, see the https://github.com/debezium/oracle-vagrant-box/[Debezium Vagrant Box for Oracle database] GitHub repository.
Oracle AWS RDS does not allow you to execute the preceding commands, nor does it allow you to log in as `sysdba`. AWS provides the following alternative commands to configure LogMiner. Before you execute these commands, ensure that your Oracle AWS RDS instance is enabled for backups.
To confirm that Oracle has backups enabled, execute the following command first. The `LOG_MODE` should be `ARCHIVELOG`. If it is not, you might need to reboot your Oracle AWS RDS instance.
.Configuration needed for Oracle AWS RDS LogMiner
[source,indent=0]
----
SQL> SELECT LOG_MODE FROM V$DATABASE;
LOG_MODE
------------
ARCHIVELOG
----
After `LOG_MODE` is set to `ARCHIVELOG`, execute the following commands to complete the LogMiner configuration. The first command configures archive log retention for the database, and the second adds supplemental logging.
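The commands that follow use the `rdsadmin` package interface that AWS documents for RDS for Oracle; confirm the exact procedure names and the retention value against the AWS documentation for your RDS release.

.Oracle AWS RDS LogMiner configuration (sketch)
[source,indent=0]
----
-- The 24-hour retention value is illustrative; choose a retention window that suits your environment.
exec rdsadmin.rdsadmin_util.set_configuration('archivelog retention hours',24);
exec rdsadmin.rdsadmin_util.alter_supplemental_logging('ADD');
----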
To enable {prodname} to capture the _before_ state of changed database rows, you must also enable supplemental logging for captured tables or for the entire database.
The following example illustrates how to configure supplemental logging for all columns in a single `inventory.customers` table.
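[source,indent=0]
----
ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
----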
Administrators can set parameters for each destination to designate it for a specific use, for example, log shipping for physical standbys, or external storage to allow for extended log retention.
Oracle reports details about archive log destinations in the `V$ARCHIVE_DEST_STATUS` view.
If your Oracle environment includes multiple destinations that satisfy that criteria, consult with your Oracle administrator to determine which archive log destination {prodname} should use.
* To specify the archive log destination that you want {prodname} to use, set the xref:oracle-property-log-mining-archive-destination-name[`log.mining.archive.destination.name`] property in the connector configuration. +
+
For example, in an organization with archive destinations `LOG_ARCHIVE_DEST_2` and `LOG_ARCHIVE_DEST_3`, if both destinations satisfy the criteria for use with {prodname} (that is, `status` is `VALID` and `type` is `LOCAL`), to configure the connector to use `LOG_ARCHIVE_DEST_3`, set the value of the `log.mining.archive.destination.name` property as follows:
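[source,properties]
----
log.mining.archive.destination.name=LOG_ARCHIVE_DEST_3
----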
If your Oracle environment includes multiple destinations that satisfy that criteria, and you fail to specify the preferred destination, the {prodname} Oracle connector selects the destination path at random.
Because the retention policy that is configured for each destination might differ, this can lead to errors if the connector selects a path from which the requested log data was deleted.
The connector must be able to read information about the Oracle redo and archive logs, and the current transaction state, to prepare the Oracle LogMiner session.
The ability for the {prodname} Oracle connector to ingest changes from a read-only logical standby database is a Developer Preview feature.
Developer Preview features are not supported by Red{nbsp}Hat in any way and are not functionally complete or production-ready.
Do not use Developer Preview software for production or business-critical workloads.
Developer Preview software provides early access to upcoming product software in advance of its possible inclusion in a Red{nbsp}Hat product offering.
Customers can use this software to test functionality and provide feedback during the development process.
This software might not have any documentation, is subject to change or removal at any time, and has received limited testing.
Red{nbsp}Hat might provide ways to submit feedback on Developer Preview software without an associated SLA.
For more information about the support scope of Red{nbsp}Hat Developer Preview software, see link:https://access.redhat.com/support/offerings/devpreview/[Developer Preview Support Scope].
It is customary for a logical or physical standby to exist in the case of an Oracle production failure.
When a failure occurs and the standby instance is promoted to production, the database must be opened for read/write transactions before the {prodname} Oracle connector can connect to the database.
When using a physical standby, it is sufficient to reconfigure the {prodname} Oracle connector to use the hostname of the standby once the database is open.
In the case of a logical standby, the standby is not an exact copy of the production database, so the SCN offsets in the standby differ from those in the production database.
If you use a logical standby, to help ensure that {prodname} does not miss any change events, after the database is open, configure a new connector and perform a new database snapshot.
To deploy a {prodname} Oracle connector, you install the {prodname} Oracle connector archive, configure the connector, and start the connector by adding its configuration to Kafka Connect.
.Prerequisites
* link:https://zookeeper.apache.org/[Apache ZooKeeper], link:http://kafka.apache.org/[Apache Kafka], and link:{link-kafka-docs}.html#connect[Kafka Connect] are installed.
. Download the link:https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc8/{ojdbc8-version}/ojdbc8-{ojdbc8-version}.jar[JDBC driver for Oracle] from Maven Central and extract the downloaded driver file to the directory that contains the {prodname} Oracle connector JAR file.
. Download the link:https://repo1.maven.org/maven2/com/oracle/database/xml/xdb/{ojdbc8-version}/xdb-{ojdbc8-version}.jar[XDB library for Oracle] from Maven Central and extract the downloaded file to the directory that contains the {prodname} Oracle connector JAR file.
* xref:oracle-example-configuration[Configure the connector] and xref:oracle-adding-connector-configuration[add the configuration to your Kafka Connect cluster.]
Due to licensing requirements, the {prodname} Oracle connector archive does not include the Oracle JDBC driver that the connector requires to connect to an Oracle database.
To enable the connector to access the database, you must add the driver to your connector environment.
For more information, see xref:obtaining-the-oracle-jdbc-driver[Obtaining the Oracle JDBC driver].
Due to licensing requirements, the Oracle JDBC driver file that {prodname} requires to connect to an Oracle database is not included in the {prodname} Oracle connector archive.
The driver is available for download from Maven Central.
Depending on the deployment method that you use, you retrieve the driver by adding a command to the Kafka Connect custom resource or to the Dockerfile that you use to build the connector image.
* If you use {StreamsName} to add the connector to your Kafka Connect image, add the Maven Central location for the driver to `builds.plugins.artifact.url` in the `KafkaConnect` custom resource as shown in xref:using-streams-to-deploy-debezium-oracle-connectors[].
* If you use a Dockerfile to build a container image for the connector, insert a `curl` command in the Dockerfile to specify the URL for downloading the required driver file from Maven Central.
For more information, see xref:deploying-debezium-oracle-connectors[Deploying a {prodname} Oracle connector by building a custom Kafka Connect container image from a Dockerfile].
To deploy a {prodname} Oracle connector, you must build a custom Kafka Connect container image that contains the {prodname} connector archive, and then push this container image to a container registry.
You then need to create the following custom resources (CRs):
* A `KafkaConnect` CR that defines your Kafka Connect instance.
The `image` property in the CR specifies the name of the container image that you create to run your {prodname} connector.
You apply this CR to the OpenShift instance where link:https://access.redhat.com/products/red-hat-amq#streams[Red Hat {StreamsName}] is deployed.
{StreamsName} offers operators and images that bring Apache Kafka to OpenShift.
* A `KafkaConnector` CR that defines your {prodname} Oracle connector.
Apply this CR to the same OpenShift instance where you apply the `KafkaConnect` CR.
* Oracle Database is running and you completed the steps to xref:setting-up-oracle-to-work-with-debezium[set up Oracle to work with a {prodname} connector].
* You have an account and permissions to create and manage containers in the container registry (such as `quay.io` or `docker.io`) to which you plan to add the container that will run your {prodname} connector.
|`metadata.annotations` indicates to the Cluster Operator that `KafkaConnector` resources are used to configure connectors in this Kafka Connect cluster.
|2
|`spec.image` specifies the name of the image that you created to run your Debezium connector.
This property overrides the `STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE` variable in the Cluster Operator.
.. Apply the `KafkaConnect` CR to the OpenShift Kafka Connect environment by entering the following command:
+
[source,shell,options="nowrap"]
----
oc create -f dbz-connect.yaml
----
+
The command adds a Kafka Connect instance that specifies the name of the image that you created to run your {prodname} connector.
. Create a `KafkaConnector` custom resource that configures your {prodname} Oracle connector instance.
+
You configure a {prodname} Oracle connector in a `.yaml` file that specifies the configuration properties for the connector.
The connector configuration might instruct {prodname} to produce events for a subset of the schemas and tables, or it might set properties so that {prodname} ignores, masks, or truncates values in specified columns that are sensitive, too large, or not needed.
+
The following example configures a {prodname} connector that connects to an Oracle host IP address, on port `1521`.
This host has a database named `ORCLCDB`, and `server1` is the server's logical name.
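A minimal sketch of such a `KafkaConnector` resource follows. The property names shown here, such as `topic.prefix` and the `schema.history.internal.*` settings, are typical for recent connector versions; verify them, and replace all placeholder values, by consulting xref:oracle-connector-properties[Oracle connector properties].

[source,yaml]
----
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector                 # Name under which the connector is registered with Kafka Connect.
  labels:
    strimzi.io/cluster: my-connect-cluster  # Assumed name of your Kafka Connect cluster.
spec:
  class: io.debezium.connector.oracle.OracleConnector
  tasksMax: 1
  config:
    database.hostname: <oracle_ip_address>  # IP address of the Oracle host.
    database.port: 1521
    database.user: <connection_user>
    database.password: <connection_password>
    database.dbname: ORCLCDB                # Name of the database.
    topic.prefix: server1                   # Logical name of the server.
    schema.history.internal.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    schema.history.internal.kafka.topic: schema-changes.inventory
----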
|The name of the database schema history topic where the connector writes and recovers DDL statements. This topic is for internal use only and should not be used by consumers.
. Create your connector instance with Kafka Connect.
For example, if you saved your `KafkaConnector` resource in the `inventory-connector.yaml` file, you would run the following command:
+
[source,shell,options="nowrap"]
----
oc apply -f inventory-connector.yaml
----
+
The preceding command registers `inventory-connector` and the connector starts to run against the `server1` database as defined in the `KafkaConnector` CR.
<11> The list of Kafka brokers that this connector uses to write and recover DDL statements to the database schema history topic.
<12> The name of the database schema history topic where the connector writes and recovers DDL statements. This topic is for internal use only and should not be used by consumers.
However, in more complex Oracle deployments, or in deployments that use Transparent Network Substrate (TNS) names, you can use an alternative method in which you specify a JDBC URL.
"database.url": "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=OFF)(FAILOVER=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=<oracle ip 1>)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=<oracle ip 2>)(PORT=1521)))(CONNECT_DATA=SERVICE_NAME=)(SERVER=DEDICATED)))",
For the complete list of the configuration properties that you can set for the {prodname} Oracle connector, see xref:oracle-connector-properties[Oracle connector properties].
When you configure a {prodname} Oracle connector for use with an Oracle CDB, you must specify a value for the property `database.pdb.name`, which names the PDB that you want the connector to capture changes from.
For a non-CDB installation, do *not* specify the `database.pdb.name` property.
====
.Example: {prodname} Oracle connector configuration for non-CDB deployments
* xref:debezium-oracle-connector-database-history-configuration-properties[Database schema history connector configuration properties] that control how {prodname} processes events that it reads from the database schema history topic.
** xref:oracle-pass-through-database-history-properties-for-configuring-producer-and-consumer-clients[Pass-through database schema history properties]
* xref:debezium-oracle-connector-pass-through-database-driver-configuration-properties[Pass-through database driver properties] that control the behavior of the database driver.
|Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.)
|Enumerates a comma-separated list of the symbolic names of the {link-prefix}:{link-custom-converters}#custom-converters[custom converter] instances that the connector can use. +
For each converter that you configure for a connector, you must also add a `.type` property, which specifies the fully-qualified name of the class that implements the converter interface.
If you want to further control the behavior of a configured converter, you can add one or more configuration parameters to pass values to the converter.
To associate any additional configuration parameters with a converter, prefix the parameter names with the symbolic name of the converter. +
For example, to define a `selector` parameter that specifies the subset of columns that the `boolean` converter processes, add the following property: +
|The maximum number of tasks to create for this connector. The Oracle connector always uses a single task and therefore does not use this value, so the default is always acceptable.
If you change the name value, after a restart, instead of continuing to emit events to the original topics, the connector emits subsequent events to topics whose names are based on the new value.
Note that this mode is safe to use only when it is guaranteed that no schema changes occurred between the point in time when the connector was shut down and the point in time when the snapshot is taken.
After the snapshot is complete, the connector continues to read change events from the database's redo logs except when `snapshot.mode` is configured as `initial_only`.
`configuration_based`:: With this option, you control snapshot behavior through a set of connector properties that have the prefix `snapshot.mode.configuration.based`.
`custom`:: The connector performs a snapshot according to the implementation specified by the xref:oracle-property-snapshot-mode-custom-name[`snapshot.mode.custom.name`] property, which defines a custom implementation of the `io.debezium.spi.snapshot.Snapshotter` interface.
|If the `snapshot.mode` is set to `configuration_based`, set this property to specify whether the connector includes table data when it performs a snapshot.
|If the `snapshot.mode` is set to `configuration_based`, set this property to specify whether the connector includes the table schema when it performs a snapshot.
|If the `snapshot.mode` is set to `configuration_based`, set this property to specify whether the connector begins to stream change events after a snapshot completes.
|If the `snapshot.mode` is set to `configuration_based`, set this property to specify whether the connector includes table schema in a snapshot if the schema history topic is not available.
|If the `snapshot.mode` is set to `configuration_based`, this property specifies whether the connector attempts to snapshot table data if it does not find the last committed offset in the transaction log. +
Set the value to `true` to instruct the connector to perform a new snapshot.
|If `snapshot.mode` is set to `custom`, use this setting to specify the name of the custom implementation that is provided in the `name()` method that is defined in the `io.debezium.spi.snapshot.Snapshotter` interface.
After a connector restart, {prodname} calls the specified custom implementation to determine whether to perform a snapshot.
a|Controls whether and for how long the connector holds a table lock. Table locks prevent certain types of table operations from occurring while the connector performs a snapshot.
You can set the following values:
`shared`:: Enables concurrent access to the table, but prevents any session from acquiring an exclusive table lock.
The connector acquires a `ROW SHARE` level lock while it captures table schema.
`none`:: Prevents the connector from acquiring any table locks during the snapshot.
Use this setting only if no schema changes might occur during the creation of the snapshot.
`custom`:: The connector performs a snapshot according to the implementation specified by the xref:oracle-property-snapshot-locking-mode-custom-name[`snapshot.locking.mode.custom.name`] property, which is a custom implementation of the `io.debezium.spi.snapshot.SnapshotLock` interface.
| When `snapshot.locking.mode` is set to `custom`, use this setting to specify the name of the custom implementation provided in the `name()` method that is defined by the `io.debezium.spi.snapshot.SnapshotLock` interface.
For more information, see xref:connector-custom-snapshot[custom snapshotter SPI].
`select_all`:: The connector performs a `select all` query by default, optionally adjusting the columns selected based on the column include and exclude list configurations.
`custom`:: The connector performs a snapshot query according to the implementation specified by the xref:oracle-property-snapshot-snapshot-query-mode-custom-name[`snapshot.query.mode.custom.name`] property, which defines a custom implementation of the `io.debezium.spi.snapshot.SnapshotQuery` interface. +
endif::community[]
This setting enables you to manage snapshot content in a more flexible manner compared to using the xref:oracle-property-snapshot-select-statement-overrides[`snapshot.select.statement.overrides`] property.
| When xref:oracle-property-snapshot-query-mode[`snapshot.query.mode`] is set to `custom`, use this setting to specify the name of the custom implementation provided in the `name()` method that is defined by the `io.debezium.spi.snapshot.SnapshotQuery` interface.
| All tables specified in the connector's xref:{context}-property-table-include-list[`table.include.list`] property.
|An optional, comma-separated list of regular expressions that match the fully-qualified names (`__<databaseName>__.__<schemaName>__.__<tableName>__`) of the tables to include in a snapshot.
In a multitenant container database (CDB) environment, the regular expression must include the xref:oracle-property-database-pdb-name[pluggable database (PDB) name], using the format `__<pdbName>__.__<schemaName>__.__<tableName>__`.
To match the name of a table, {prodname} applies the regular expression that you specify as an _anchored_ regular expression.
That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. +
This property takes effect only if the connector's xref:oracle-property-snapshot-mode[`snapshot.mode`] property is set to a value other than `never`. +
This property does not affect the behavior of incremental snapshots. +
For each table in the list, add a further configuration property that specifies the `SELECT` statement for the connector to run on the table when it takes a snapshot.
The specified `SELECT` statement determines the subset of table rows to include in the snapshot.
Use the following format to specify the name of this `SELECT` statement property: +
From a `customers.orders` table that includes the soft-delete column, `delete_flag`, add the following properties if you want a snapshot to include only those records that are not soft-deleted:
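[source,properties]
----
# Property name format: snapshot.select.statement.overrides.<schemaName>.<tableName>
snapshot.select.statement.overrides=customers.orders
snapshot.select.statement.overrides.customers.orders=SELECT * FROM customers.orders WHERE delete_flag = 0
----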
Any schema name not included in `schema.include.list` is excluded from having its changes captured.
By default, all non-system schemas have their changes captured. +
To match the name of a schema, {prodname} applies the regular expression that you specify as an _anchored_ regular expression.
That is, the specified expression is matched against the entire name string of the schema; it does not match substrings that might be present in a schema name. +
If you include this property in the configuration, do not also set the `schema.exclude.list` property.
|Boolean value that specifies whether the connector parses and publishes table and column comments on metadata objects. Enabling this option has implications for memory usage. The number and size of logical schema objects largely determines how much memory the {prodname} connectors consume, and adding potentially large string data to each of them can be quite expensive.
|An optional, comma-separated list of regular expressions that match names of schemas for which you *do not* want to capture changes.
ifdef::product[]
Only POSIX regular expressions are valid.
endif::product[]
ifdef::community[]
In environments that use the LogMiner implementation, you must use POSIX regular expressions only. +
endif::community[]
Any schema whose name is not included in `schema.exclude.list` has its changes captured, with the exception of system schemas. +
To match the name of a schema, {prodname} applies the regular expression that you specify as an _anchored_ regular expression.
That is, the specified expression is matched against the entire name string of the schema; it does not match substrings that might be present in a schema name. +
If you include this property in the configuration, do not also set the `schema.include.list` property.
By default, the connector monitors every non-system table in each captured database. +
To match the name of a table, {prodname} applies the regular expression that you specify as an _anchored_ regular expression.
That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. +
If you include this property in the configuration, do not also set the `table.exclude.list` property.
To match the name of a table, {prodname} applies the regular expression that you specify as an _anchored_ regular expression.
That is, the specified expression is matched against the entire name string of the table; it does not match substrings that might be present in a table name. +
If you include this property in the configuration, do not also set the `table.include.list` property.
|An optional, comma-separated list of regular expressions that match the fully-qualified names of columns that you want to include in the change event message values.
The primary key column is always included in an event's key, even if you do not use this property to explicitly include its value. +
To match the name of a column, {prodname} applies the regular expression that you specify as an _anchored_ regular expression.
That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name. +
|An optional, comma-separated list of regular expressions that match the fully-qualified names of columns that you want to exclude from change event message values.
The primary key column is always included in an event's key, even if you use this property to explicitly exclude its value. +
To match the name of a column, {prodname} applies the regular expression that you specify as an _anchored_ regular expression.
That is, the specified expression is matched against the entire name string of the column; it does not match substrings that might be present in a column name. +
| Specifies whether to skip publishing messages when there is no change in included columns. In effect, messages are filtered if there is no change in the columns that the `column.include.list` or `column.exclude.list` properties include.
To match the name of a column, {prodname} applies the regular expression that you specify as an _anchored_ regular expression.
That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name. +
A pseudonym consists of the hashed value that results from applying the specified _hashAlgorithm_ and _salt_.
Based on the hash function that is used, referential integrity is maintained, while column values are replaced with pseudonyms.
Supported hash functions are described in the {link-java7-standard-names}[MessageDigest section] of the Java Cryptography Architecture Standard Algorithm Name Documentation. +
+
In the following example, `CzQMA0cB5K` is a randomly selected salt. +
|Specifies how binary (`blob`) columns should be represented in change events. Possible settings: +
* `bytes` represents binary data as a byte array (default). +
* `base64` represents binary data as a base64-encoded String. +
* `base64-url-safe` represents binary data as a base64-url-safe-encoded String. +
* `hex` represents binary data as a hex-encoded (base16) String.
* `avro_unicode` replaces underscores or characters that cannot be used in the Avro type name with the corresponding Unicode escape, for example, `_uxxxx`. Note: `_` is an escape sequence, similar to backslash in Java +
|Specifies how field names should be adjusted for compatibility with the message converter used by the connector. Possible settings: +
* `none` does not apply any adjustment. +
* `avro` replaces the characters that cannot be used in the Avro type name with underscore. +
* `avro_unicode` replaces underscores or characters that cannot be used in the Avro type name with the corresponding Unicode escape, for example, `_uxxxx`. Note: `_` is an escape sequence, similar to backslash in Java +
| Specifies how the connector should handle values for `interval` columns: +
+
`numeric` represents intervals using approximate number of microseconds. +
+
`string` represents intervals exactly by using the string pattern representation `P<years>Y<months>M<days>DT<hours>H<minutes>M<seconds>S`. For example: `P1Y2M3DT4H5M6.78S`.
If xref:oracle-property-max-queue-size[`max.queue.size`] is also set, writing to the queue is blocked when the size of the queue reaches the limit specified by either property.
For example, if you set `max.queue.size=1000`, and `max.queue.size.in.bytes=5000`, writing to the queue is blocked after the queue contains 1000 records, or after the volume of the records in the queue reaches 5000 bytes.
After a source record is deleted, a tombstone event (the default behavior) enables Kafka to completely delete all events that share the key of the deleted row in topics that have {link-kafka-docs}/#compaction[log compaction] enabled.
|A list of expressions that specify the columns that the connector uses to form custom message keys for change event records that it publishes to the Kafka topics for specified tables.
By default, {prodname} uses the primary key column of a table as the message key for records that it emits.
In place of the default, or to specify a key for tables that lack a primary key, you can configure custom message keys based on one or more columns. +
To establish a custom message key for a table, list the table, followed by the columns to use as the message key.
|An optional, comma-separated list of regular expressions that match the fully-qualified names of character-based columns.
Set this property if you want the connector to mask the values for a set of columns, for example, if they contain sensitive data.
Set `_length_` to a positive integer to replace data in the specified columns with the number of asterisk (`*`) characters specified by the _length_ in the property name.
Set _length_ to `0` (zero) to replace data in the specified columns with an empty string.
The fully-qualified name of a column observes the following format: `_<schemaName>_._<tableName>_._<columnName>_`.
To match the name of a column, {prodname} applies the regular expression that you specify as an _anchored_ regular expression.
That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name.
You can specify multiple properties with different lengths in a single configuration.
|An optional comma-separated list of regular expressions for masking column names in change event messages by replacing characters with asterisks (`*`). +
Specify the number of characters to replace in the name of the property, for example, `column.mask.with.8.chars`. +
|An optional, comma-separated list of regular expressions that match the fully-qualified names of columns for which you want the connector to emit extra parameters that represent column metadata.
When this property is set, the connector adds the following fields to the schema of event records:
Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases.
The fully-qualified name of a column observes one of the following formats: `_<tableName>_._<columnName>_`, or `_<schemaName>_._<tableName>_._<columnName>_`. +
To match the name of a column, {prodname} applies the regular expression that you specify as an _anchored_ regular expression.
That is, the specified expression is matched against the entire name string of the column; the expression does not match substrings that might be present in a column name.
|An optional, comma-separated list of regular expressions that specify the fully-qualified names of data types that are defined for columns in a database.
When this property is set, for columns with matching data types, the connector emits event records that include the following extra fields in their schema:
Enabling the connector to emit this extra data can assist in properly sizing specific numeric or character-based columns in sink databases.
The fully-qualified name of a column observes one of the following formats: `_<tableName>_._<typeName>_`, or `_<schemaName>_._<tableName>_._<typeName>_`. +
To match the name of a data type, {prodname} applies the regular expression that you specify as an _anchored_ regular expression.
That is, the specified expression is matched against the entire name string of the data type; the expression does not match substrings that might be present in a type name.
For the list of Oracle-specific data type names, see the xref:oracle-data-type-mappings[Oracle data type mappings].
It can also be useful to set the property in situations where no change events occur in captured tables for an extended period. +
In such a case, although the connector continues to read the redo log, it emits no change event messages, so that the offset in the Kafka topic remains unchanged.
Because the connector does not flush the latest system change number (SCN) that it read from the database, the database might retain the redo log files for longer than necessary.
|Specifies a query that the connector executes on the source database when the connector sends a heartbeat message. +
+
For example: +
+
`INSERT INTO test_heartbeat_table (text) VALUES ('test_heartbeat')` +
+
The connector runs the query after it emits a xref:oracle-property-heartbeat-interval-ms[heartbeat message].
Set this property and create a heartbeat table to receive the heartbeat messages to resolve situations in which xref:low-change-frequency-offset-management[{prodname} fails to synchronize offsets on low-traffic databases that are on the same host as a high-traffic database].
After the connector inserts records into the configured table, it is able to receive changes from the low-traffic database and acknowledge SCN changes in the database, so that offsets can be synchronized with the broker.
|Set the property to `true` if you want {prodname} to generate events with transaction boundaries and to enrich the data event envelope with transaction metadata.
|Specifies the mining strategy that controls how Oracle LogMiner builds and uses a given data dictionary to resolve table and column IDs to names. +
If the captured table(s) schema changes infrequently or never, this is the ideal choice. +
+
`hybrid`:: Uses a combination of the database's current data dictionary and the {prodname} in-memory schema model to resolve table and column names seamlessly.
This mode performs at the level of the `online_catalog` LogMiner strategy with the schema tracking resilience of the `redo_log_catalog` strategy while not incurring the overhead of archive log generation and performance costs of the `redo_log_catalog` strategy.
|Specifies the mining query mode that controls how the Oracle LogMiner query is built. +
+
`none`:: The query is generated without doing any schema, table, or username filtering in the query. +
+
`in`:: The query is generated using a standard SQL in-clause to filter schema, table, and usernames on the database side.
The schema, table, and username configuration include/exclude lists should not specify any regular expressions, because the query is built by using the values directly. +
+
`regex`:: The query is generated using Oracle's `REGEXP_LIKE` operator to filter schema and table names on the database side, along with usernames using a SQL in-clause.
The schema and table configuration include/exclude lists can safely specify regular expressions.
|The maximum number of milliseconds that a LogMiner session can be active before a new session is used. +
+
For low volume systems, a LogMiner session may consume too much PGA memory when the same session is used for a long period of time.
The default behavior is to only use a new LogMiner session when a log switch is detected.
Setting this value to a number greater than `0` specifies the maximum number of milliseconds that a LogMiner session can remain active before it is stopped and restarted to deallocate and reallocate PGA memory.
|The minimum SCN interval size that this connector attempts to read from redo/archive logs. The active batch size is also increased or decreased by this amount to tune connector throughput when needed.
This also serves as a measure for adjusting the batch size: when the difference between the current SCN and the beginning/end SCN of the batch is larger than this value, the batch size is increased or decreased.
|The minimum amount of time that the connector sleeps after reading data from redo/archive logs and before starting to read data again. Value is in milliseconds.
|The maximum amount of time that the connector sleeps after reading data from redo/archive logs and before starting to read data again. Value is in milliseconds.
|The starting amount of time that the connector sleeps after reading data from redo/archive logs and before starting to read data again. Value is in milliseconds.
|The maximum amount of time up or down that the connector uses to tune the optimal sleep time when reading data from LogMiner. Value is in milliseconds.
Redo logs use a circular buffer that can be archived at any point.
In environments where online redo logs are archived frequently, this can lead to LogMiner session failures.
In contrast to redo logs, archive logs are guaranteed to be reliable.
Set this option to `true` to force the connector to mine archive logs only.
After you set the connector to mine only the archive logs, the latency between an operation being committed and the connector emitting an associated change event might increase.
The degree of latency depends on how frequently the database is configured to archive online redo logs.
Any transaction that exceeds this configured value is discarded entirely, and the connector does not emit any messages for the operations that were part of the transaction.
If the difference between the SCN values is greater than the specified value, and the time difference is smaller than xref:oracle-property-log-mining-scn-gap-detection-time-interval-max-ms[`log.mining.scn.gap.detection.time.interval.max.ms`] then an SCN gap is detected, and the connector uses a mining window larger than the configured maximum batch.
|Specifies a value, in milliseconds, that the connector compares to the difference between the current and previous SCN timestamps to determine whether an SCN gap exists.
If the difference between the timestamps is less than the specified value, and the SCN delta is greater than xref:oracle-property-log-mining-scn-gap-detection-gap-size-min[`log.mining.scn.gap.detection.gap.size.min`], then an SCN gap is detected and the connector uses a mining window larger than the configured maximum batch.
Specify the list of RAC nodes by using one of the following methods:
* Specify a value for xref:oracle-property-database-port[`database.port`], and use the specified port value for each address in the `rac.nodes` list.
For example:
+
[source,properties]
----
database.port=1521
rac.nodes=192.168.1.100,192.168.1.101
----
* Specify a value for xref:oracle-property-database-port[`database.port`], and override the default port for one or more entries in the list.
The list can include entries that use the default `database.port` value, and entries that define their own unique port values.
For example:
+
[source,properties]
----
database.port=1521
rac.nodes=192.168.1.100,192.168.1.101:1522
----
If you supply a raw JDBC URL for the database by using the xref:oracle-property-database-url[`database.url`] property, instead of defining a value for `database.port`, each RAC node entry must explicitly specify a port value.
a|Fully-qualified name of the data collection that is used to send {link-prefix}:{link-signalling}#debezium-signaling-enabling-source-signaling-channel[signals] to the connector.
|Specifies the watermarking mechanism that the connector uses during an incremental snapshot to deduplicate events that might be captured by an incremental snapshot and then recaptured after streaming resumes. +
You can specify one of the following options:
`insert_insert`:: When you send a signal to initiate an incremental snapshot, for every chunk that {prodname} reads during the snapshot, it writes an entry to the signaling data collection to record the signal to open the snapshot window.
After the snapshot completes, {prodname} inserts a second entry that records the signal to close the window.
`insert_delete`:: When you send a signal to initiate an incremental snapshot, for every chunk that {prodname} reads, it writes a single entry to the signaling data collection to record the signal to open the snapshot window.
After the snapshot completes, this entry is removed.
No entry is created for the signal to close the snapshot window.
Set this option to prevent rapid growth of the signaling data collection.
|The name of the `TopicNamingStrategy` class that should be used to determine the topic name for data change, schema change, transaction, and heartbeat events. Defaults to `SchemaTopicNamingStrategy`.
|The size of the bounded concurrent hash map that is used to hold topic names. This cache helps to determine the topic name that corresponds to a given data collection.
Parallel initial snapshots is a Technology Preview feature only.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview Features Support Scope].
The {prodname} Oracle connector provides three metric types in addition to the built-in support for JMX metrics that Apache ZooKeeper, Apache Kafka, and Kafka Connect have.
Refer to the {link-prefix}:{link-debezium-monitoring}#monitoring-debezium[monitoring documentation] for details about how to expose these metrics via JMX.
|The number of transactions that were discarded because their size exceeded <<oracle-property-log-mining-buffer-transaction-events-threshold, `log.mining.buffer.transaction.events.threshold`>>.
|The number of times that the system change number was checked for advancement and remains unchanged.
A high value can indicate that a long-running transaction is in progress and is preventing the connector from flushing the most recently processed system change number to the connector's offsets.
When conditions are optimal, the value should be close to or equal to `0`.
|The number of DDL records that have been detected but could not be parsed by the DDL parser.
This value should always be `0`; however, when unparseable DDL statements are allowed to be skipped, this metric can be used to determine whether any warnings have been written to the connector logs.
By default, the connector stops when it encounters a DDL statement that it cannot parse.
You can use {prodname} link:/documentation/reference/configuration/signalling[signaling] to trigger the update of the database schema from such DDL statements.
After the `schema-changes` signal is inserted, the connector must be restarted with an altered configuration that includes specifying the <<{context}-property-database-history-skip-unparseable-ddl, `+schema.history.internal.skip.unparseable.ddl+`>> option as `true`.
After the connector's commit SCN advances beyond the DDL change, to prevent unparseable DDL statements from being skipped unexpectedly, return the connector configuration to its previous state.
The OpenLogReplicator ingestion adapter is currently in an incubating state; exact semantics, configuration options, and other details may change in future revisions based on the feedback we receive.
The {prodname} Oracle connector by default ingests changes using native Oracle LogMiner.
However, the connector can be toggled to use OpenLogReplicator, an open-source and free third-party application that reads Oracle changes directly from the redo and archive logs with low impact on the database.
To configure the connector to use OpenLogReplicator, you must apply specific database and connector configurations that differ from those that you use with LogMiner.
.Prerequisites
* Download and compile OpenLogReplicator for your database environment.
* OpenLogReplicator must be installed with direct access to the archive and redo log files. This does not necessarily require installation on the physical database server if the archive and redo logs can be accessed via some shared filesystem.
=== How OpenLogReplicator works
OpenLogReplicator takes on the role of Oracle LogMiner and Oracle XStream when the {prodname} Oracle connector streams changes.
It is responsible for capturing changes from the redo and archive logs as they occur and for batching those changes into logical transactions.
The {prodname} Oracle connector is a consumer of OpenLogReplicator; it connects to the network endpoint that OpenLogReplicator provides and ingests transactions as they are batched.
After the OpenLogReplicator adapter ingests changes, the {prodname} Oracle connector transforms the events into xref:#oracle-events[data change events] just like any other adapter.
From a network topology perspective, the {prodname} Oracle connector relies on network connections to both the Oracle database and to the OpenLogReplicator.
Similarly, OpenLogReplicator requires a network connection to the Oracle database, as well as direct access to the raw redo and archive logs.
The database must be placed in `ARCHIVELOG` mode before it can archive redo log files.
To place the database in `ARCHIVELOG` mode, you must set specific configuration properties to specify the destination for saving the archive log files.
For {prodname} to generate change events that show the `before` and `after` states of a table row, supplemental logging must be active on the database.
Supplemental logging adds column data to the redo logs to identify the rows that are affected when a table is modified.
For each captured table, you must explicitly configure a higher fidelity supplemental logging, called `(ALL) COLUMNS`.
The `(ALL) COLUMNS` logging level guarantees that Oracle captures the state of a column regardless of whether the column changed when a redo entry is written to the redo log.
Enabling the higher logging level enables {prodname} for Oracle to generate change events that provide the accurate _before_ and _after_ states for a row.
Whenever new tables are added to the {prodname} Oracle connector's configuration, you must configure supplemental logging for each table.
If a table that is configured for capture is not correctly configured for supplemental logging, after the connector begins streaming, it returns a warning message.
The following example shows a possible user account configuration for deploying {prodname} with OpenLogReplicator in a multi-tenant Oracle environment.
You can modify the default settings to configure the connector to use the OpenLogReplicator adapter in place of LogMiner.
In the example that follows, the following properties are added to the connector configuration to enable the connector to use the OpenLogReplicator adapter:
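A sketch of those properties follows; the host and port values are placeholders, and you should verify the `openlogreplicator.*` property names against xref:oracle-connector-properties[Oracle connector properties] for your release.

[source,properties]
----
# Switch the ingestion adapter from the default LogMiner to OpenLogReplicator.
database.connection.adapter=olr
# Must match the source name defined in the OpenLogReplicator configuration file.
openlogreplicator.source=ORACLE
# Network endpoint on which OpenLogReplicator listens for connections (placeholders).
openlogreplicator.host=<openlogreplicator_host>
openlogreplicator.port=<openlogreplicator_port>
----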
OpenLogReplicator is written in C++; however, the author provides a container image to build the source for a variety of operating systems, including RHEL, Fedora, CentOS, Debian, and others.
For information about how to build the tool by using a container, see the https://github.com/bersler/OpenLogReplicator-docker[OpenLogReplicator GitHub repository].
You configure OpenLogReplicator by using a JSON file called `scripts/OpenLogReplicator.json`.
For more information about the required format of this file, see the https://github.com/bersler/OpenLogReplicator/blob/master/documentation/reference-manual/reference-manual.adoc#openlogreplicator-json-file-format[OpenLogReplicator documentation].
<1> This should match the `openlogreplicator.source` connector configuration.
<2> List of file path pairs `[before1,after1,before2,after2,...]`. If a log file path matches one of the `beforeX` prefixes, the prefix is replaced with the corresponding `afterX` path. This is useful when OpenLogReplicator runs on a different host than the source database and the paths to the redo and archive logs differ between the database and the OpenLogReplicator process.
<3> This should match the `database.user` connector property.
<4> This should match the `database.password` connector property.
<5> This should point to the database host, port, and Oracle SID.
<6> This specifies the payload format that the {prodname} Oracle connector ingests. Use these values as specified; they are the only format options that are required beyond the defaults.
<7> This must specify `network`, as {prodname} Oracle connector communicates with OpenLogReplicator via a network connection.
<8> This specifies the bind host and port that OpenLogReplicator listens on for connections. This address must be accessible to the {prodname} Oracle connector.
The connector can be toggled to use Oracle XStream instead.
To configure the connector to use Oracle XStream, you must apply specific database and connector configurations that differ from those that you use with LogMiner.
.Prerequisites
* To use the XStream API, you must have a license for the GoldenGate product.
[source,indent=0]
----
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
alter system set enable_goldengate_replication=true;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Should show "Database log mode: Archive Mode"
archive log list
exit;
----
In addition, supplemental logging must be enabled for captured tables or the database in order for data changes to capture the _before_ state of changed database rows.
The following illustrates how to configure this on a specific table, which is the ideal choice to minimize the amount of information captured in the Oracle redo logs.
[source,indent=0]
----
ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
----
The following configuration example adds the properties `database.connection.adapter` and `database.out.server.name` to enable the connector to use the XStream API implementation.
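A minimal sketch of these additions follows; the out server name `dbzxout` is an example value and must match the XStream outbound server that you created.

[source,properties]
----
# Use the XStream API instead of the default LogMiner adapter.
database.connection.adapter=xstream
# Name of the XStream outbound server (example value).
database.out.server.name=dbzxout
----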
=== Obtaining the Oracle JDBC driver and XStream API files
The {prodname} Oracle connector requires the Oracle JDBC driver (`ojdbc8.jar`) to connect to Oracle databases.
If the connector uses XStream to access the database, you must also have the XStream API (`xstreams.jar`).
Licensing requirements prohibit {prodname} from including these files in the Oracle connector archive.
However, the required files are available for free download as part of the Oracle Instant Client.
The following steps describe how to download the Oracle Instant Client and extract the required files.
.Procedure
. From a browser, download the https://www.oracle.com/database/technologies/instant-client/downloads.html[Oracle Instant Client package] for your operating system.
. Extract the archive, and then open the `instantclient___<version>__` directory.
Oracle provides a database package called `DBMS_LOB` that consists of a collection of programs to operate on BLOB, CLOB, and NCLOB columns.
Most of these programs manipulate the LOB column in its entirety; however, one program, `WRITEAPPEND`, is capable of manipulating a subset of the LOB data buffer.
When using XStream, `WRITEAPPEND` emits a logical change record (LCR) event for each invocation of the program.
These LCR events are not combined into a single change event like they are when using the Oracle LogMiner adapter, and so consumers of the topic should be prepared to receive events with partial column values.
This diverged behavior is captured in https://issues.redhat.com/browse/DBZ-4741[DBZ-4741] and will be addressed in a future release.
No, Oracle only deprecated the continuous mining option with Oracle LogMiner in Oracle 12c and removed that option starting with Oracle 19c.
The {prodname} Oracle connector does not rely on this option to function, and therefore can safely be used with newer versions of Oracle without any impact.
The key for `inventory-connector` is `["inventory-connector",{"server":"server1"}]`, the partition is `11` and the last offset is the contents that follows the key.
. To move back to a previous offset, stop the connector and issue the following command:
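+
The exact command depends on your environment. The following sketch uses `kafkacat` to produce a new offset record to an assumed Kafka Connect offsets topic named `my_connect_offsets`, using the key and partition shown above and placeholder SCN values:
+
[source,shell]
----
echo '["inventory-connector",{"server":"server1"}]|{"scn":"<scn>","commit_scn":"<commit_scn>"}' | \
kafkacat -P -b localhost -t my_connect_offsets -K \| -p 11
----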
*What happens if the connector cannot find logs with a given offset SCN?*::
The {prodname} connector maintains low- and high-watermark SCN values in the connector offsets.
The low-watermark SCN represents the starting position and must exist in the available online redo or archive logs in order for the connector to start successfully.
When the connector reports it cannot find this offset SCN, this indicates that the logs that are still available do not contain the SCN and therefore the connector cannot mine changes from where it left off.
+
When this happens, there are two options.
The first is to remove the history topic and offsets for the connector and restart the connector, taking a new snapshot as suggested.
This will guarantee that no data loss will occur for any topic consumers.
The second is to manually manipulate the offsets, advancing the SCN to a position that is available in the redo or archive logs.
This will cause changes that occurred between the old SCN value and the newly provided SCN value to be lost and not written to the topics.
This is not recommended.
*What's the difference between the various mining strategies?*::
The default is `redo_log_catalog`, which instructs the connector to write the Oracle data dictionary to the redo logs every time a log switch is detected.
This data dictionary is necessary for Oracle LogMiner to track schema changes effectively when parsing the redo and archive logs.
This option generates a higher than usual number of archive logs, but allows the captured tables to be manipulated in real time without any impact on capturing data changes.
This option generally requires more Oracle database memory, and causes the Oracle LogMiner session and process to take slightly longer to start after each log switch.
Instead, Oracle LogMiner will always use the online data dictionary that contains the current state of the table's structure.
This also means that if a table's structure changes and no longer matches the online data dictionary, Oracle LogMiner is unable to resolve table or column names for that table.
This mining strategy option should not be used if the tables being captured are subject to frequent schema changes.
It is important that data changes be performed in lock-step with schema changes: ensure that all data changes for the table have been captured from the logs, stop the connector, apply the schema change, and then restart the connector before resuming data changes on the table.
This option requires less Oracle database memory and Oracle LogMiner sessions generally start substantially faster since the data dictionary does not need to be loaded or primed by the LogMiner process.
The final option, `hybrid`, combines the strengths of the above two strategies with none of their weaknesses.
This strategy harnesses the performance of the `online_catalog` strategy and the schema-tracking resilience of the `redo_log_catalog` strategy, while avoiding the overhead and performance costs of higher than normal archive log generation.
This strategy uses a fallback: if LogMiner fails to reconstruct the SQL for a database change, the {prodname} connector relies on the in-memory schema model that it maintains to reconstruct the SQL in flight.
The intent is that this mode will eventually become the default, and likely the only, mode of operation.
*Are there any limitations with the Hybrid mining strategy with LogMiner?*::
Yes, the Hybrid mode for `log.mining.strategy` is still a work-in-progress strategy, and therefore does not yet support all data types.
At this time, this mode cannot reconstruct SQL statements that include operations against `CLOB`, `NCLOB`, `BLOB`, `XML`, or `JSON` data types.
In short, if you set `lob.enabled` to `true`, you cannot use the hybrid strategy; the connector fails to start because this combination is unsupported.
Due to the https://aws.amazon.com/blogs/networking-and-content-delivery/best-practices-for-deploying-gateway-load-balancer[fixed idle timeout of 350 seconds on the AWS Gateway Load Balancer],
JDBC calls that require more than 350 seconds to complete can hang indefinitely.
In situations where calls to the Oracle LogMiner API take more than 350 seconds to complete, a timeout can be triggered, causing the AWS Gateway Load Balancer to hang.
For example, such timeouts can occur when a LogMiner session that processes large amounts of data runs concurrently with Oracle's periodic checkpointing task.
To prevent timeouts from occurring on the AWS Gateway Load Balancer, enable keep-alive packets from the Kafka Connect or Debezium Server environment by performing the following steps as the root user or as a super-user in the environment that hosts the connector:
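. Configure the TCP network stack to send keep-alive packets every 60 seconds, for example:
+
[source,shell]
----
sysctl -w net.ipv4.tcp_keepalive_time=60
----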
. Reconfigure the {prodname} for Oracle connector to use the `database.url` property rather than `database.hostname` and add the `(ENABLE=broken)` Oracle connect string descriptor as shown in the following example:
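+
The following sketch uses placeholder host and service-name values; substitute values that are appropriate for your environment:
+
[source,json]
----
"database.url": "jdbc:oracle:thin:@(DESCRIPTION=(ENABLE=broken)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<oracle_hostname>)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=<service_name>)))",
----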
The preceding steps configure the TCP network stack to send keep-alive packets every 60 seconds.
As a result, the AWS Gateway Load Balancer does not timeout when JDBC calls to the LogMiner API take more than 350 seconds to complete, enabling the connector to continue to read changes from the database's transaction logs.
*What's the cause for ORA-01555 and how to handle it?*::
The {prodname} Oracle connector uses flashback queries when the initial snapshot phase executes.
A flashback query is a special type of query that relies on the flashback area, which is maintained by the database's `UNDO_RETENTION` parameter, to return the results of a query based on the contents that the table had at a given time, or in this case, at a given SCN.
By default, Oracle maintains the undo or flashback area for only approximately 15 minutes, unless your database administrator has increased or decreased this value.
If the connector captures large tables, the initial snapshot might take longer than 15 minutes, or longer than your configured `UNDO_RETENTION`, which eventually leads to the following exception:
+
```
ORA-01555: snapshot too old: rollback segment number 12345 with name "_SYSSMU11_1234567890$" too small
```
+
The first way to deal with this exception is to work with your database administrator and see whether they can increase the `UNDO_RETENTION` database parameter temporarily.
This does not require a restart of the Oracle database, so this can be done online without impacting database availability.
However, even after the value is increased, the "snapshot too old" exception can still occur if the undo tablespace has inadequate space to store the necessary undo data.
+
The second way to deal with this exception is to avoid the initial consistent snapshot entirely: set `snapshot.mode` to `schema_only`, and then rely on incremental snapshots instead.
An incremental snapshot does not rely on a flashback query and therefore isn't subject to ORA-01555 exceptions.
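+
A minimal configuration sketch, assuming a hypothetical signaling table named `DEBEZIUM.DEBEZIUM_SIGNAL` in a pluggable database named `ORCLPDB1`; adjust the fully-qualified name for your environment:
+
```
# Capture only the schema during the snapshot phase; stream data changes afterwards
snapshot.mode=schema_only
# Designate the signaling table that is used to trigger incremental snapshots
signal.data.collection=ORCLPDB1.DEBEZIUM.DEBEZIUM_SIGNAL
```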
*What's the cause for ORA-04036 and how to handle it?*::
The {prodname} Oracle connector may report an ORA-04036 exception when the database changes occur infrequently.
An Oracle LogMiner session is started and re-used until a log switch is detected.
Re-using the session provides optimal performance with Oracle LogMiner, but a long-running mining session can lead to excessive PGA memory usage, eventually causing an exception like the following:
+
```
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT
```
+
To avoid this exception, specify how frequently Oracle switches redo logs, or limit how long the {prodname} Oracle connector is allowed to re-use a mining session.
The {prodname} Oracle connector provides a configuration option, xref:oracle-property-log-mining-session-max-ms[`log.mining.session.max.ms`], which controls how long the current Oracle LogMiner session can be re-used before it is closed and a new session is started.
This keeps database resources in check, so that the PGA memory that the database permits is not exceeded.
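+
For example, the following sketch caps session re-use at 30 minutes; the value is illustrative and should be tuned for your environment:
+
```
# Close and restart the Oracle LogMiner session after at most 30 minutes of re-use
log.mining.session.max.ms=1800000
```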
*What's the cause for ORA-25191 and how to handle it?*::
The {prodname} Oracle connector automatically ignores index-organized tables (IOT) as they are not supported by Oracle LogMiner.
However, if an ORA-25191 exception is thrown, it might be caused by a unique corner case in such a mapping, and additional rules might be necessary to exclude the affected tables automatically.
An example of an ORA-25191 exception might look like this:
+
```
ORA-25191: cannot reference overflow table of an index-organized table
```
+
If an ORA-25191 exception is thrown, raise a Jira issue that includes details about the table, its mappings, and any related parent tables.
As a workaround, you can adjust the table include/exclude configuration options to prevent the connector from accessing such tables.
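+
A minimal sketch of such a workaround, assuming a hypothetical `INVENTORY` schema with an overflow table named `ORDERS_IOT_OVERFLOW`; the property value is a comma-separated list of regular expressions that match fully-qualified table identifiers:
+
```
# Prevent the connector from accessing the problematic IOT overflow table
table.exclude.list=INVENTORY.ORDERS_IOT_OVERFLOW
```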
*How to solve SAX feature external-general-entities not supported*::
Debezium 2.4 introduced support for Oracle's `XMLTYPE` column type, and this feature requires the Oracle `xdb` and `xmlparserv2` dependencies. +
+
Oracle's `xmlparserv2` dependency provides its own SAX-based parser implementation, and if the runtime finds and uses this implementation rather than the other SAX implementation on the classpath, this error occurs.
To control which SAX implementation is used, the JVM must be started with a specific argument. +
+
When the following JVM argument is provided, the Oracle connector will start successfully without this error.