{prodname} ingests change events from Oracle using the native LogMiner database package or the https://docs.oracle.com/database/121/XSTRM/xstrm_intro.htm#XSTRM72647[XStream API].
To optimally configure and run a {prodname} Oracle connector, it is helpful to understand how the connector performs snapshots, streams change events, determines Kafka topic names, and uses metadata.
3. Read the current SCN ("system change number") position in the server's redo log.
4. Capture the structure of all relevant tables.
5. Release the locks obtained in step 2; that is, the locks are held for only a short period of time.
6. Scan all of the relevant database tables and schemas as valid at the SCN position read in step 3 (`SELECT * FROM ... AS OF SCN 123`), and generate a `READ` event for each row and write that event to the appropriate table-specific Kafka topic.
7. Record the successful completion of the snapshot in the connector offsets.
If the connector fails, is rebalanced, or stops after step 1 begins but before step 7 completes,
upon restart the connector will begin a new snapshot.
|The connector captures the structure of all relevant tables, performing all the steps described above, except it does not create any `READ` events representing the dataset at the point of the connector's start-up.
The {prodname} Oracle connector stores the history of schema changes in a database history topic.
This topic reflects an internal connector state and you should not use it directly.
Applications that require notifications about schema changes should obtain the information from the public schema change topic.
The connector writes all of these events to a Kafka topic named `<serverName>`, where `serverName` is the logical server name that is specified in the `database.server.name` configuration property.
* `id` - string representation of the unique transaction identifier
* `event_count` (for `END` events) - total number of events emitted by the transaction
* `data_collections` (for `END` events) - an array of pairs of `data_collection` and `event_count` that provides the number of events emitted by changes originating from a given data collection
The transaction events are written to the topic named `<database.server.name>.transaction`.
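For example, the `BEGIN` and `END` markers for a single transaction might look like the following hedged sketch; the `status` field name, the transaction identifier, and the data collection name shown here are illustrative assumptions:

[source,json,indent=0]
----
{
    "status": "BEGIN",
    "id": "5.6.641",
    "event_count": null,
    "data_collections": null
}

{
    "status": "END",
    "id": "5.6.641",
    "event_count": 2,
    "data_collections": [
        {
            "data_collection": "ORCLPDB1.DEBEZIUM.CUSTOMERS",
            "event_count": 2
        }
    ]
}
----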
==== Data events enrichment
When transaction metadata is enabled, the data message `Envelope` is enriched with a new `transaction` field.
This field provides information about every event in the form of a composite of fields:
* `id` - string representation of the unique transaction identifier
* `total_order` - the absolute position of the event among all events generated by the transaction
* `data_collection_order` - the per-data collection position of the event among all events that were emitted by the transaction
Following is an example of what a message looks like:
[source,json,indent=0,subs="attributes"]
----
{
    "before": null,
    "after": {
        "pk": "2",
        "aa": "1"
    },
    "source": {
        ...
    },
    "op": "c",
    "ts_ms": "1580390884335",
    "transaction": {
        "id": "5.6.641",
        "total_order": "1",
        "data_collection_order": "1"
    }
}
----
[[oracle-events]]
== Data change events
All data change events produced by the Oracle connector have a key and a value, although the structure of the key and value depend on the table from which the change events originated (see {link-prefix}:{link-oracle-connector}#oracle-topic-names[Topic names]).
[WARNING]
====
The {prodname} Oracle connector ensures that all Kafka Connect _schema names_ are http://avro.apache.org/docs/current/spec.html#names[valid Avro schema names].
This means that the logical server name must start with Latin letters or an underscore (e.g., [a-z,A-Z,\_]),
and the remaining characters in the logical server name and all characters in the schema and table names must be Latin letters, digits, or an underscore (e.g., [a-z,A-Z,0-9,\_]).
If not, then all invalid characters will automatically be replaced with an underscore character.
This can lead to unexpected conflicts when the logical server name, schema names, and table names contain other characters, and the only distinguishing characters between table full names are invalid and thus replaced with underscores.
====
{prodname} and Kafka Connect are designed around _continuous streams of event messages_, and the structure of these events may change over time.
This can be difficult for consumers to deal with, so to make it easier, Kafka Connect makes each event self-contained.
Every message key and value has two parts: a _schema_ and _payload_.
The schema describes the structure of the payload, while the payload contains the actual data.
For a given table, the change event's key will have a structure that contains a field for each column in the primary key (or unique key constraint) of the table at the time the event was created.
Consider a `customers` table defined in the `inventory` database schema:
[source,sql,indent=0]
----
CREATE TABLE customers (
    id NUMBER(9) GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH 1001) NOT NULL PRIMARY KEY,
    first_name VARCHAR2(255) NOT NULL,
    last_name VARCHAR2(255) NOT NULL,
    email VARCHAR2(255) NOT NULL UNIQUE
);
----
If the `database.server.name` configuration property has the value `server1`,
every change event for the `customers` table while it has this definition will feature the same key structure, which in JSON looks like this:
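A hedged reconstruction of that key, based on the description that follows, is shown here; the exact field casing and formatting may differ:

[source,json,indent=0]
----
{
    "schema": {
        "type": "struct",
        "fields": [
            {
                "type": "int32",
                "optional": false,
                "field": "id"
            }
        ],
        "optional": false,
        "name": "server1.DEBEZIUM.CUSTOMERS.Key"
    },
    "payload": {
        "id": 1004
    }
}
----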
The `schema` portion of the key contains a Kafka Connect schema describing what is in the key portion, and in our case that means that the `payload` value is not optional, is a structure defined by a schema named `server1.DEBEZIUM.CUSTOMERS.Key`, and has one required field named `id` of type `int32`.
If you look at the value of the key's `payload` field, you can see that it is indeed a structure (which in JSON is just an object) with a single `id` field, whose value is `1004`.
Therefore, you can interpret this key as describing the row in the `inventory.customers` table (output from the connector named `server1`) whose `id` primary key column had a value of `1004`.
Although the `column.exclude.list` configuration property allows you to remove columns from the event values, all columns in a primary or unique key are always included in the event's key.
If the table does not have a primary or unique key, then the change event's key will be null. This makes sense since the rows in a table without a primary or unique key constraint cannot be uniquely identified.
Like the message key, the value of a change event message has a _schema_ section and _payload_ section.
The payload section of every change event value produced by the Oracle connector has an _envelope_ structure with the following fields:
* `op` is a mandatory field that contains a string value describing the type of operation. Values for the Oracle connector are `c` for create (or insert), `u` for update, `d` for delete, and `r` for read (in the case of a snapshot).
* `before` is an optional field that if present contains the state of the row _before_ the event occurred. The structure will be described by the `server1.INVENTORY.CUSTOMERS.Value` Kafka Connect schema, which the `server1` connector uses for all rows in the `inventory.customers` table.
[WARNING]
====
Whether or not this field and its elements are available is highly dependent on the https://docs.oracle.com/database/121/SUTIL/GUID-D2DDD67C-E1CC-45A6-A2A7-198E4C142FA3.htm#SUTIL1583[Supplemental Logging] configuration applying to the table.
====
* `after` is an optional field that if present contains the state of the row _after_ the event occurred. The structure is described by the same `server1.INVENTORY.CUSTOMERS.Value` Kafka Connect schema used in `before`.
* `source` is a mandatory field that contains a structure describing the source metadata for the event, which in the case of Oracle contains these fields: the {prodname} version, the connector name, whether the event is part of an ongoing snapshot or not, the transaction id (not while snapshotting), the SCN of the change, and a timestamp representing the point in time when the record was changed in the source database (during snapshotting, this is the point in time of snapshotting).
[TIP]
====
The `commit_scn` field is optional and describes the SCN of the transaction commit that the change event participates within.
This field is only present when using the LogMiner connection adapter.
====
* `ts_ms` is optional and if present contains the time (using the system clock in the JVM running the Kafka Connect task) at which the connector processed the event.
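The complete value of such an event is lengthy; the following abbreviated sketch (schema contents and `source` fields elided; row values and timestamps are illustrative) shows the overall shape of an _insert_ event's value for the `customers` table:

[source,json,indent=0]
----
{
    "schema": {
        ...
    },
    "payload": {
        "before": null,
        "after": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "annek@noanswer.org"
        },
        "source": {
            ...
        },
        "op": "c",
        "ts_ms": 1532592105975
    }
}
----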
And of course, the _schema_ portion of the event message's value contains a schema that describes this envelope structure and the nested fields within it.
If we look at the `schema` portion of this event's _value_, we can see the schema for the _envelope_, the schema for the `source` structure (which is specific to the Oracle connector and reused across all events), and the table-specific schemas for the `before` and `after` fields.
[TIP]
====
The names of the schemas for the `before` and `after` fields are of the form _logicalName_._schemaName_._tableName_.Value, and thus are entirely independent from all other schemas for all other tables.
This means that when using the link:/docs/faq/#avro-converter[Avro Converter], the resulting Avro schemas for _each table_ in each _logical source_ have their own evolution and history.
====
If we look at the `payload` portion of this event's _value_, we can see the information in the event, namely that it is describing that the row was created (since `op=c`), and that the `after` field value contains the values of the newly inserted row's `ID`, `FIRST_NAME`, `LAST_NAME`, and `EMAIL` columns.
[TIP]
====
It may appear that the JSON representations of the events are much larger than the rows they describe.
This is true, because the JSON representation must include the _schema_ and the _payload_ portions of the message.
It is possible and even recommended to use the link:/docs/faq/#avro-converter[Avro Converter] to dramatically decrease the size of the actual messages written to the Kafka topics.
====
The value of an _update_ change event on this table will actually have the exact same _schema_, and its payload will be structured the same but will hold different values.
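A hedged sketch of such an update event's `payload` (`source` contents elided; row values and timestamps are illustrative) might look like this:

[source,json,indent=0]
----
{
    "payload": {
        "before": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "annek@noanswer.org"
        },
        "after": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "anne@example.com"
        },
        "source": {
            ...
        },
        "op": "u",
        "ts_ms": 1532592713485
    }
}
----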
When we compare this to the value in the _insert_ event, we see a couple of differences in the `payload` section:
* The `op` field value is now `u`, signifying that this row changed because of an update.
* The `before` field now has the state of the row with the values before the database commit.
* The `after` field now has the updated state of the row, and here we can see that the `EMAIL` value is now `anne@example.com`.
* The `source` field structure has the same fields as before, but the values are different since this event is from a different position in the redo log.
There are several things we can learn by just looking at this `payload` section. We can compare the `before` and `after` structures to determine what actually changed in this row because of the commit.
The `source` structure tells us information about Oracle's record of this change (providing traceability), but more importantly this has information we can compare to other events in this and other topics to know whether this event occurred before, after, or as part of the same Oracle commit as other events.
When the columns for a row's primary/unique key are updated, the value of the row's key has changed so {prodname} will output _three_ events: a `DELETE` event and a {link-prefix}:{link-oracle-connector}#oracle-tombstone-events[tombstone event] with the old key for the row, followed by an `INSERT` event with the new key for the row.
So far we've seen samples of _create_ and _update_ events.
Now, let's look at the value of a _delete_ event for the same table. Once again, the `schema` portion of the value will be exactly the same as with the _create_ and _update_ events:
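A hedged sketch of such a _delete_ event's `payload` (`source` contents elided; row values and timestamps are illustrative):

[source,json,indent=0]
----
{
    "payload": {
        "before": {
            "ID": 1004,
            "FIRST_NAME": "Anne",
            "LAST_NAME": "Kretchmar",
            "EMAIL": "anne@example.com"
        },
        "after": null,
        "source": {
            ...
        },
        "op": "d",
        "ts_ms": 1532592767000
    }
}
----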
When a row is deleted, the _delete_ event value listed above still works with log compaction, since Kafka can still remove all earlier messages with that same key.
But only if the message value is `null` will Kafka know that it can remove _all messages_ with that same key.
To make this possible, {prodname}'s Oracle connector always follows the _delete_ event with a special _tombstone_ event that has the same key but `null` value.
The Oracle connector represents changes to rows with events that are structured like the table in which the row exists. The event contains a field for each column value. How that value is represented in the event depends on the Oracle data type of the column. The following sections describe how the connector maps Oracle data types to a _literal type_ and a _semantic type_ in event fields.
* _literal type_ describes how the value is literally represented using Kafka Connect schema types: `INT8`, `INT16`, `INT32`, `INT64`, `FLOAT32`, `FLOAT64`, `BOOLEAN`, `STRING`, `BYTES`, `ARRAY`, `MAP`, and `STRUCT`.
[NOTE]
====
Support for these data types is currently in an incubating state; that is, the exact semantics, configuration options, and so forth may change in future revisions based on the feedback we receive.
Please let us know if you encounter any problems while using these data types.
====
The following table describes how the connector maps binary and character LOB types.
.Mappings for Oracle binary and character LOB types
|`org.apache.kafka.connect.data.Decimal` if using `BYTES` +
+
Handled equivalently to `NUMBER` (note that S defaults to 0 for `DECIMAL`).
|`DOUBLE PRECISION`
|`STRUCT`
|`io.debezium.data.VariableScaleDecimal` +
+
Contains a structure with two fields: `scale` of type `INT32` that contains the scale of the transferred value and `value` of type `BYTES` containing the original value in an unscaled form.
|`FLOAT[(P)]`
|`STRUCT`
|`io.debezium.data.VariableScaleDecimal` +
+
Contains a structure with two fields: `scale` of type `INT32` that contains the scale of the transferred value and `value` of type `BYTES` containing the original value in an unscaled form.
|`INTEGER`, `INT`
|`BYTES`
|`org.apache.kafka.connect.data.Decimal` +
+
`INTEGER` is mapped in Oracle to `NUMBER(38,0)` and hence can hold values larger than any of the `INT` types could store.
|`NUMBER` columns with a scale of 0 represent integer numbers; a negative scale indicates rounding in Oracle, e.g. a scale of -2 will cause rounding to hundreds. +
Contains a structure with two fields: `scale` of type `INT32` that contains the scale of the transferred value and `value` of type `BYTES` containing the original value in an unscaled form.
Oracle does not natively have support for a `BOOLEAN` data type; however,
it is common practice to use other data types with certain semantics to simulate the concept of a logical `BOOLEAN` data type.
You can configure the out-of-the-box `NumberOneToBooleanConverter` custom converter to map all `NUMBER(1)` columns to `BOOLEAN` or, if its `selector` parameter is set,
to map only the subset of columns enumerated in a comma-separated list of regular expressions.
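For example, a connector configuration might enable the converter as follows; the symbolic converter name (`boolean`), the package-qualified class name, and the `selector` expressions shown here are assumptions for illustration:

[source,json,indent=0]
----
{
    "converters": "boolean",
    "boolean.type": "io.debezium.connector.oracle.converters.NumberOneToBooleanConverter",
    "boolean.selector": ".*MYTABLE.FLAG,.*.IS_ARCHIVED"
}
----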
The setting of the Oracle connector configuration property `decimal.handling.mode` determines how the connector maps decimal types.
When the `decimal.handling.mode` property is set to `precise`, the connector uses Kafka Connect `org.apache.kafka.connect.data.Decimal` logical type for all `DECIMAL` and `NUMERIC` columns.
However, when the `decimal.handling.mode` property is set to `double`, the connector will represent the values as Java double values with schema type `FLOAT64`.
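For example, to emit decimal values as doubles rather than the default precise representation, the connector configuration might include the following fragment:

[source,json,indent=0]
----
{
    "decimal.handling.mode": "double"
}
----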
Other than Oracle's `INTERVAL`, `TIMESTAMP WITH TIME ZONE` and `TIMESTAMP WITH LOCAL TIME ZONE` data types, the other temporal types depend on the value of the `time.precision.mode` configuration property.
When the `time.precision.mode` configuration property is set to `adaptive` (the default), then the connector will determine the literal and semantic type for the temporal types based on the column's data type definition so that events _exactly_ represent the values in the database:
[cols="25%a,20%a,55%a",options="header"]
|===
|Oracle data type |Literal type (schema type) |Semantic type (schema name) and Notes

|`TIMESTAMP WITH TIME ZONE`
|`STRING`
|`io.debezium.time.ZonedTimestamp` +
+
A string representation of a timestamp with timezone information.
|`TIMESTAMP WITH LOCAL TIME ZONE`
|`STRING`
|`io.debezium.time.ZonedTimestamp` +
+
A string representation of a timestamp in UTC.
|===
When the `time.precision.mode` configuration property is set to `connect`, then the connector will use the predefined Kafka Connect logical types.
This may be useful when consumers only know about the built-in Kafka Connect logical types and are unable to handle variable-precision time values.
Since Oracle supports precision that exceeds what Kafka Connect's logical types support, using `connect` time precision will *result in a loss of precision* when the database column has a _fractional second precision_ value that is greater than 3:
[cols="25%a,20%a,55%a",options="header"]
|===
|Oracle data type |Literal type (schema type) |Semantic type (schema name) and Notes
|`DATE`
|`INT32`
|`org.apache.kafka.connect.data.Date` +
+
Represents the number of days since the epoch.
|`INTERVAL DAY[(M)] TO SECOND`
|`FLOAT64`
|`io.debezium.time.MicroDuration` +
+
The number of microseconds for a time interval using the `365.25 / 12.0` formula for the average number of days per month.
|`INTERVAL YEAR[(M)] TO MONTH`
|`FLOAT64`
|`io.debezium.time.MicroDuration` +
+
The number of microseconds for a time interval using the `365.25 / 12.0` formula for the average number of days per month.
|`TIMESTAMP(0 - 3)`
|`INT64`
|`org.apache.kafka.connect.data.Timestamp` +
+
Represents the number of milliseconds since epoch, and does not include timezone information.
|`TIMESTAMP(4 - 6)`
|`INT64`
|`org.apache.kafka.connect.data.Timestamp` +
+
Represents the number of milliseconds since epoch, and does not include timezone information.
|`TIMESTAMP(7 - 9)`
|`INT64`
|`org.apache.kafka.connect.data.Timestamp` +
+
Represents the number of milliseconds since epoch, and does not include timezone information.
|`TIMESTAMP WITH TIME ZONE`
|`STRING`
|`io.debezium.time.ZonedTimestamp` +
+
A string representation of a timestamp with timezone information.
|===
You can find a template for setting up Oracle in a virtual machine (via Vagrant) in the https://github.com/debezium/oracle-vagrant-box/[oracle-vagrant-box/] repository.
In addition, supplemental logging must be enabled for the captured tables or for the database so that data change events can capture the _before_ state of changed database rows.
The following illustrates how to configure this on a specific table, which is the ideal choice to minimize the amount of information captured in the Oracle redo logs.
[source,indent=0]
----
ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
----
To deploy a {prodname} Oracle connector, you install the {prodname} Oracle connector archive, configure the connector, and start the connector by adding its configuration to Kafka Connect.
.Prerequisites
* link:https://zookeeper.apache.org/[Apache ZooKeeper], link:http://kafka.apache.org/[Apache Kafka], and link:{link-kafka-docs}.html#connect[Kafka Connect] are installed.
* Oracle Database is installed, and is {link-prefix}:{link-oracle-connector}#setting-up-oracle[configured to work with the {prodname} connector].
* You have copies of the Oracle JDBC driver and the XStream API JAR.
For more information, see {link-prefix}:{link-oracle-connector}#obtaining-oracle-jdbc-driver-and-xstreams-api-files[Obtaining the Oracle JDBC driver and XStream API files].
. Extract the files into your Kafka Connect environment.
. Add the directory with the JAR files to {link-kafka-docs}/#connectconfigs[Kafka Connect's `plugin.path`].
. {link-prefix}:{link-oracle-connector}#oracle-example-configuration[Configure the connector] and {link-prefix}:{link-oracle-connector}#oracle-adding-connector-configuration[add the configuration to your Kafka Connect cluster.]
. Restart your Kafka Connect process to pick up the new JAR files.
To deploy a {prodname} Oracle connector, you add the connector files to Kafka Connect, create a custom container to run the connector, and then add connector configuration to your container.
For details about deploying the {prodname} Oracle connector, see the following topics:
Depending on your Oracle environment, you must have either the Oracle JDBC driver (`ojdbc8.jar`) or XStream API (`xstreams.jar`) to run the {prodname} Oracle connector.
Licensing requirements prohibit {prodname} from including these files in the Oracle connector archive.
However, the required files are available for free download as part of the Oracle Instant Client.
. From a browser, download the https://www.oracle.com/database/technologies/instant-client/downloads.html[Oracle Instant Client package] for your operating system.
. Extract the archive and then open the `instantclient_<version>` directory.
. If you are using the XStreams implementation, create an environment variable, `LD_LIBRARY_PATH`, and set its value to the path to the Instant Client directory, for example:
To deploy a {prodname} Oracle connector, you must build a custom Kafka Connect container image that contains the {prodname} connector archive, and then push this container image to a container registry.
You then need to create the following custom resources (CRs):
* A `KafkaConnect` CR that defines your Kafka Connect instance.
The `image` property in the CR specifies the name of the container image that you create to run your {prodname} connector.
You apply this CR to the OpenShift instance where link:https://access.redhat.com/products/red-hat-amq#streams[Red Hat {StreamsName}] is deployed.
{StreamsName} offers operators and images that bring Apache Kafka to OpenShift.
* A `KafkaConnector` CR that defines your {prodname} Oracle connector.
Apply this CR to the same OpenShift instance where you apply the `KafkaConnect` CR.
.Prerequisites
* Oracle Database is running and you completed the steps to {LinkDebeziumUserGuide}#setting-up-oracle-for-use-with-the-debezium-oracle-connector[set up Oracle to work with a {prodname} connector].
* {StreamsName} is deployed on OpenShift and is running Apache Kafka and Kafka Connect.
For more information, see link:{LinkDeployStreamsOpenShift}[{NameDeployStreamsOpenShift}]
* You have an account and permissions to create and manage containers in the container registry (such as `quay.io` or `docker.io`) to which you plan to add the container that will run your {prodname} connector.
.. Create a new {prodname} Oracle KafkaConnect custom resource (CR).
For example, create a KafkaConnect CR with the name `dbz-connect.yaml` that specifies `annotations` and `image` properties as shown in the following example:
+
[source,yaml,subs="+attributes"]
----
apiVersion: {KafkaConnectApiVersion}
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true" // <1>
spec:
  #...
  image: debezium-container-for-oracle // <2>
----
<1> `metadata.annotations` indicates to the Cluster Operator that KafkaConnector resources are used to configure connectors in this Kafka Connect cluster.
<2> `spec.image` specifies the name of the image to use to run the connector. This property overrides the `STRIMZI_DEFAULT_KAFKA_CONNECT_IMAGE` variable in the Cluster Operator.
.. Apply the `KafkaConnect` CR to the OpenShift Kafka Connect environment by entering the following command:
+
[source,shell,options="nowrap"]
----
oc create -f dbz-connect.yaml
----
+
The command adds a Kafka Connect instance that specifies the name of the image that you created to run your {prodname} connector.
. Create a `KafkaConnector` custom resource that configures your {prodname} Oracle connector instance.
+
You configure a {prodname} Oracle connector in a `.yaml` file that specifies the configuration properties for the connector.
The connector configuration might instruct {prodname} to produce events for a subset of the schemas and tables, or it might set properties so that {prodname} ignores, masks, or truncates values in specified columns that are sensitive, too large, or not needed.
+
The following example configures a {prodname} connector that connects to an Oracle host IP address, on port `1521`.
This host has a database named `ORCLCDB`, and `server1` is the server's logical name.
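A minimal sketch of such a `KafkaConnector` CR follows; the credentials, broker address, and history topic name are placeholders, and the `apiVersion` value is assumed to match the one used for the `KafkaConnect` CR:

[source,yaml,subs="+attributes"]
----
apiVersion: {KafkaConnectApiVersion}
kind: KafkaConnector
metadata:
  name: inventory-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.oracle.OracleConnector
  tasksMax: 1
  config:
    database.hostname: <oracle_ip>
    database.port: 1521
    database.user: c##dbzuser
    database.password: dbz
    database.dbname: ORCLCDB
    database.server.name: server1
    database.history.kafka.bootstrap.servers: kafka:9092
    database.history.kafka.topic: schema-changes.inventory
----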
|The name of our connector when we register it with a Kafka Connect service.
|2
|The name of this Oracle connector class.
|3
|The address of the Oracle instance.
|4
|The port number of the Oracle instance.
|5
|The name of the Oracle user.
|6
|The password for the Oracle user.
|7
|The name of the database to capture changes from.
|9
|Logical name that identifies and provides a namespace for the Oracle database server from which the connector captures changes.
|10
|The list of Kafka brokers that this connector uses to write and recover DDL statements to the database history topic.
|11
|The name of the database history topic where the connector writes and recovers DDL statements. This topic is for internal use only and should not be used by consumers.
|===
. Create your connector instance with Kafka Connect.
For example, if you saved your `KafkaConnector` resource in the `inventory-connector.yaml` file, you would run the following command:
+
[source,shell,options="nowrap"]
----
oc apply -f inventory-connector.yaml
----
+
The preceding command registers `inventory-connector` and the connector starts to run against the `server1` database as defined in the `KafkaConnector` CR.
. Verify that the connector was created and has started:
.. Display the Kafka Connect log output to verify that the connector was created and has started to capture changes in the specified database:
+
[source,shell,options="nowrap"]
----
oc logs $(oc get pods -o name -l strimzi.io/cluster=my-connect-cluster)
----
<1> The name of our connector when we register it with a Kafka Connect service.
<2> The name of this Oracle connector class.
<3> The address of the Oracle instance.
<4> The port number of the Oracle instance.
<5> The name of the Oracle user.
<6> The password for the Oracle user.
<7> The name of the database to capture changes from.
<8> Logical name that identifies and provides a namespace for the Oracle database server from which the connector captures changes.
<9> The maximum number of tasks to create for this connector.
<10> The name of the Oracle pluggable database that the connector captures changes from. Used in container database (CDB) installations only.
<11> The list of Kafka brokers that this connector will use to write and recover DDL statements to the database history topic.
<12> The name of the database history topic where the connector will write and recover DDL statements. This topic is for internal use only and should not be used by consumers.
In more complex Oracle deployments, and those that use TNS names, you can provide a raw JDBC URL in place of the hostname and port pair in the preceding example.
The following JSON example describes a configuration that is similar to the preceding one, except that it passes a raw JDBC URL instead of a hostname and port:
"database.url": "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=OFF)(FAILOVER=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=<oracle ip 1>)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=<oracle ip 2>)(PORT=1521)))(CONNECT_DATA=SERVICE_NAME=)(SERVER=DEDICATED)))",
The {prodname} Oracle connector supports both deployment models: pluggable databases (CDB mode) and non-pluggable databases (non-CDB mode).
For the complete list of the configuration properties that you can set for the {prodname} Oracle connector, see {link-prefix}:{link-oracle-connector}#oracle-connector-properties[Oracle connector properties].
ifdef::community[]
You can send this configuration with a `POST` command to a running Kafka Connect service.
The service records the configuration and starts a connector task that performs the following operations:
When the connector starts, it {link-prefix}:{link-oracle-connector}#oracle-snapshots[performs a consistent snapshot] of the Oracle databases that the connector is configured for.
The connector then starts generating data change events for row-level operations and streaming the change event records to Kafka topics.
* xref:debezium-{context}-connector-database-history-configuration-properties[Database history connector configuration properties] that control how {prodname} processes events that it reads from the database history topic.
** xref:{context}-pass-through-database-history-properties-for-configuring-producer-and-consumer-clients[Pass-through database history properties]
* xref:debezium-{context}-connector-pass-through-database-driver-configuration-properties[Pass-through database driver properties] that control the behavior of the database driver.
|Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.)
|The maximum number of tasks that should be created for this connector. The Oracle connector always uses a single task and therefore does not use this value, so the default is always acceptable.
|Logical name that identifies and provides a namespace for the particular Oracle database server being monitored. The logical name should be unique across all other connectors, since it is used as a prefix for all Kafka topic names emanating from this connector.
|A mode for taking an initial snapshot of the structure and optionally data of captured tables. Supported values are _initial_ (will take a snapshot of structure and data of captured tables; useful if topics should be populated with a complete representation of the data from the captured tables) and _schema_only_ (will take a snapshot of the structure of captured tables only; useful if only changes happening from now onwards should be propagated to topics). Once the snapshot is complete, the connector will continue reading change events from the database's redo logs.
a|Controls whether and for how long the connector holds a table lock, which prevents certain types of table operations from occurring while the connector is performing a snapshot. Possible settings are: +
`shared` - allows concurrent access to the table, but prevents any session from acquiring an exclusive table lock (specifically, the connector acquires a `ROW SHARE` level lock while capturing the schemas of the tables). +
`none` - prevents the connector from acquiring any table locks during the snapshot. This setting is safe to use only if no schema changes happen while the snapshot is running.
|An optional, comma-separated list of regular expressions that match names of *fully-qualified* table names (`<db-name>.<schema-name>.<name>`) included in `table.include.list` for which you *want* to take the snapshot.
|Controls which rows from tables are included in the snapshot. +
This property contains a comma-separated list of fully-qualified tables _(SCHEMA_NAME.TABLE_NAME)_. Select statements for the individual tables are specified in further configuration properties, one for each table, identified by the id `snapshot.select.statement.overrides.[SCHEMA_NAME].[TABLE_NAME]`. The value of those properties is the SELECT statement to use when retrieving data from the specific table during snapshotting. _A possible use case for large append-only tables is setting a specific point where to start (resume) snapshotting, in case a previous snapshotting was interrupted._ +
*Note*: This setting has impact on snapshots only. Events captured during log reading are not affected by it.
|An optional, comma-separated list of regular expressions that match names of schemas for which you *want* to capture changes. Any schema name not included in `schema.include.list` is excluded from having its changes captured. By default, all non-system schemas have their changes captured. Do not also set the `schema.exclude.list` property. When using LogMiner, only POSIX regular expressions are supported.
|An optional, comma-separated list of regular expressions that match names of schemas for which you *do not* want to capture changes. Any schema whose name is not included in `schema.exclude.list` has its changes captured, with the exception of system schemas. Do not also set the `schema.include.list` property. When using LogMiner, only POSIX regular expressions are supported.
|An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be monitored; any table not included in the include list will be excluded from monitoring. Each identifier is of the form _schemaName_._tableName_. By default the connector will monitor every non-system table in each monitored database. May not be used with `table.exclude.list`. When using LogMiner, only POSIX regular expressions are supported.
|An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be excluded from monitoring; any table not included in the exclude list will be monitored. Each identifier is of the form _schemaName_._tableName_. May not be used with `table.include.list`. When using LogMiner, only POSIX regular expressions are supported.
|An optional comma-separated list of regular expressions that match the fully-qualified names of columns that should be included in the change event message values.
Fully-qualified names for columns are of the form _schemaName_._tableName_._columnName_.
Note that primary key columns are always included in the event's key, even if not included in the value.
Do not also set the `column.exclude.list` property.
|An optional comma-separated list of regular expressions that match the fully-qualified names of columns that should be excluded from change event message values.
Fully-qualified names for columns are of the form _schemaName_._tableName_._columnName_.
Note that primary key columns are always included in the event's key, even if they are excluded from the value.
Do not also set the `column.include.list` property.
|An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be pseudonymized in the change event message values, with a field value that consists of the hashed value obtained by applying the algorithm `_hashAlgorithm_` and salt `_salt_`.
Depending on the hash function that is used, referential integrity is maintained while data is pseudonymized. Supported hash functions are described in the {link-java7-standard-names}[MessageDigest section] of the Java Cryptography Architecture Standard Algorithm Name Documentation.
The hash is automatically shortened to the length of the column.
Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer or zero. Fully-qualified names for columns are of the form _pdbName_._schemaName_._tableName_._columnName_.
Note: Depending on the `_hashAlgorithm_` used, the `_salt_` selected and the actual data set, the resulting masked data set may not be completely anonymized.
| Specifies how the connector should handle floating point values for `NUMBER`, `DECIMAL` and `NUMERIC` columns: `precise` (the default) represents them precisely using `java.math.BigDecimal` values represented in change events in a binary form; `double` represents them using `double` values, which may result in a loss of precision but is far easier to use; `string` encodes values as formatted strings, which are easy to consume but lose the semantic information about the real type. See <<oracle-decimal-types>>.
|Positive integer value that specifies the maximum size of the blocking queue into which change events read from the database log are placed before they are written to Kafka. This queue can provide backpressure to the log reader when, for example, writes to Kafka are slower or if Kafka is not available. Events that appear in the queue are not included in the offsets periodically recorded by this connector. Defaults to 8192, and should always be larger than the maximum batch size specified in the `max.batch.size` property.
|Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector. Defaults to 2048.
|Long value for the maximum size in bytes of the blocking queue. This feature is disabled by default; it becomes active if the property is set to a positive long value.
|Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. Defaults to 1000 milliseconds, or 1 second.
|Controls whether a _delete_ event is followed by a tombstone event. +
+
`true` - a delete operation is represented by a _delete_ event and a subsequent tombstone event. +
+
`false` - only a _delete_ event is emitted. +
+
After a source record is deleted, emitting a tombstone event (the default behavior) allows Kafka to completely delete all events that pertain to the key of the deleted row in case {link-kafka-docs}/#compaction[log compaction] is enabled for the topic.
|An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be truncated in the change event message values if the field values are longer than the specified number of characters. Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer. Fully-qualified names for columns are of the form _pdbName_._schemaName_._tableName_._columnName_.
|An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be replaced in the change event message values with a field value consisting of the specified number of asterisk (`*`) characters. Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer or zero. Fully-qualified names for columns are of the form _pdbName_._schemaName_._tableName_._columnName_.
|An optional comma-separated list of regular expressions that match the fully-qualified names of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages.
The schema parameters `pass:[_]pass:[_]debezium.source.column.type`, `pass:[_]pass:[_]debezium.source.column.length` and `pass:[_]pass:[_]debezium.source.column.scale` will be used to propagate the original type name and length (for variable-width types), respectively.
Useful to properly size corresponding columns in sink databases.
|An optional comma-separated list of regular expressions that match the database-specific data type name of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages.
The schema parameters `pass:[_]pass:[_]debezium.source.column.type`, `pass:[_]pass:[_]debezium.source.column.length` and `pass:[_]pass:[_]debezium.source.column.scale` will be used to propagate the original type name and length (for variable-width types), respectively.
Useful to properly size corresponding columns in sink databases.
|`true` when connector configuration explicitly specifies the `key.converter` or `value.converter` parameters to use Avro, otherwise defaults to `false`.
|The minimum SCN interval size that this connector tries to read from redo/archive logs. The active batch size is also increased or decreased by this amount for tuning connector throughput when needed.
|The minimum amount of time that the connector sleeps after reading data from redo/archive logs and before starting to read data again. Value is in milliseconds.
|The maximum amount of time that the connector sleeps after reading data from redo/archive logs and before starting to read data again. Value is in milliseconds.
|The starting amount of time that the connector sleeps after reading data from redo/archive logs and before starting to read data again. Value is in milliseconds.
|The maximum amount of time up or down that the connector uses to tune the optimal sleep time when reading data from LogMiner. Value is in milliseconds.
|Controls whether or not the connector mines changes from just archive logs or a combination of the online redo logs and archive logs (the default). +
+
For some environments where online redo logs are archived frequently enough to cause LogMiner session failures,
this opt-in feature mines only the archive logs, which are reliable, unlike the online redo logs, which act as a circular buffer and can be archived at any point, leading to LogMiner session errors.
Note that when this opt-in feature is used, events are emitted with a certain amount of latency that depends entirely on how frequently Oracle archives the online redo logs.
Any transaction that exceeds this configured value is discarded entirely, and no messages are emitted for the operations that were part of the transaction.
While this option allows the behavior to be configured on a case-by-case basis,
we plan to enhance this behavior in a future release by adding a scalable transaction buffer (see {link-prefix}:{jira-url}/browse/DBZ-3123[DBZ-3123]).
|List of database users to exclude from the LogMiner query; useful if there are changes from specific users that you always want to exclude from the capturing process.
The {prodname} Oracle connector has three metric types in addition to the built-in support for JMX metrics that ZooKeeper, Kafka, and Kafka Connect have.
* <<oracle-snapshot-metrics, snapshot metrics>>; for monitoring the connector when performing snapshots
* <<oracle-streaming-metrics, streaming metrics>>; for monitoring the connector when processing change events
* <<oracle-schema-history-metrics, schema history metrics>>; for monitoring the status of the connector's schema history
Please refer to the {link-prefix}:{link-debezium-monitoring}#monitoring-debezium[monitoring documentation] for details of how to expose these metrics via JMX.
|The number of hours that transactions will be retained by the connector's in-memory buffer without being committed or rolled back before being discarded.
See <<oracle-property-log-mining-transaction-retention-hours, `log.mining.transaction.retention`>> for more details.
|The number of times the system change number has been checked for advancement and remains unchanged.
This is an indicator that long-running transaction(s) are ongoing and preventing the connector from flushing the latest processed system change number to the connector's offsets.
Under optimal operations, this should always be or remain close to `0`.
|The number of DDL records that have been detected but could not be parsed by the DDL parser.
This should always be `0`; however, when unparseable DDL statements are allowed to be skipped, this metric can be used to determine whether any warnings have been written to the connector logs.
The Oracle connector will automatically track and apply table schema changes by parsing DDL from the redo logs.
In the event that the DDL parser encounters an unsupported statement, the connector offers an alternative way to apply the schema change should the need arise.
By default, the connector stops when it encounters a DDL statement that it cannot parse.
You can apply the DDL change manually by using {prodname} link:/documentation/reference/configuration/signalling[signalling] to trigger the update of the database schema.
After the `schema-changes` signal has been inserted, restart the connector with an altered configuration that sets the <<oracle-property-database-history-skip-unparseable-ddl, `+database.history.skip.unparseable.ddl+`>> option to `true`.
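A hedged fragment of the temporarily altered configuration might look like this:

[source,json,indent=0]
----
{
    "database.history.skip.unparseable.ddl": "true"
}
----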
After the connector's commit SCN advances beyond the DDL change, it is recommended to return the connector's configuration to its previous state so that unparseable DDL statements are not skipped unexpectedly.
The {prodname} Oracle connector by default ingests changes using native Oracle LogMiner.
The connector can be configured to use Oracle XStream instead; doing so requires specific database and connector configuration settings that differ from those used with LogMiner.
[source,indent=0]
----
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
alter system set enable_goldengate_replication=true;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Should show "Database log mode: Archive Mode"
archive log list
exit;
----
In addition, supplemental logging must be enabled for the captured tables or for the database so that data change events can capture the _before_ state of changed database rows.
The following illustrates how to configure this on a specific table, which is the ideal choice to minimize the amount of information captured in the Oracle redo logs.
[source,indent=0]
----
ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
----
This error means that the connector has attempted to execute an operation that must be executed against the parent index-organized table that contains the specified overflow table.
The connector's `table.include.list` or `table.exclude.list` configuration options should then be adjusted to explicitly include or exclude the appropriate tables, to prevent the connector from attempting to capture changes from the child index-organized table.
=== LogMiner adapter does not capture changes made by SYS or SYSTEM
Oracle uses the `SYS` and `SYSTEM` accounts for many internal changes, and therefore the connector automatically filters out changes made by these users when fetching changes from LogMiner.