[id="debezium-connector-for-oracle"]
= {prodname} Connector for Oracle
:context: oracle
:toc:
:toc-placement: macro
:linkattrs:
:icons: font
toc::[]
[NOTE]
====
This connector is currently in incubating state, i.e. exact semantics, configuration options etc. may change in future revisions, based on the feedback we receive. Please let us know if you encounter any problems.
====
{prodname}'s Oracle Connector can monitor and record all of the row-level changes in the databases on an Oracle server.
Most notably, the connector does not yet support changes to the structure of captured tables (e.g. `ALTER TABLE...`) after the initial snapshot has been completed
(see {jira-url}/browse/DBZ-718[DBZ-718]).
Capturing tables that are newly added while the connector is running is supported, though
(provided the new table's name matches the connector's filter configuration).
[[oracle-overview]]
== Overview
{prodname} ingests change events from Oracle using the https://docs.oracle.com/database/121/XSTRM/xstrm_intro.htm#XSTRM72647[XStream API] or, alternatively, by reading the redo logs directly via LogMiner.
In order to use the XStream API, you need to have a license for the GoldenGate product
(though it is not required that GoldenGate itself is installed).
[[setting-up-oracle]]
== Setting up Oracle
The following steps need to be performed in order to prepare the database so the {prodname} connector can be used.
This assumes the multi-tenancy configuration (with a container database and at least one pluggable database);
if you're not using this model, adjust the steps accordingly.
You can find a template for setting up Oracle in a virtual machine (via Vagrant) in the https://github.com/debezium/oracle-vagrant-box/[oracle-vagrant-box/] repository.
=== Preparing the Database
==== XStream
Enable GoldenGate replication and archive log mode:
[source,indent=0]
----
ORACLE_SID=ORCLCDB dbz_oracle sqlplus /nolog
CONNECT sys/top_secret AS SYSDBA
alter system set db_recovery_file_dest_size = 5G;
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
alter system set enable_goldengate_replication=true;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Should show "Database log mode: Archive Mode"
archive log list
exit;
----
Furthermore, in order to capture the _before_ state of changed rows, supplemental logging must be enabled for the captured tables or the database in general.
E.g. like so for a specific table:
[source,indent=0]
----
ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
----
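Alternatively, supplemental logging can be enabled at the database level. The statements below are a sketch of that approach; note that enabling `(ALL) COLUMNS` supplemental logging for the whole database increases the amount of generated redo, so per-table configuration is usually preferable:
[source,sql,indent=0]
----
-- Minimal supplemental logging for the whole database
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- Optionally, all-column supplemental logging for the whole database
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
----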
==== LogMiner
[NOTE]
====
The LogMiner implementation requires the database to be configured with a minimum number of redo log groups and files per group.
It's recommended to have a minimum of `5` redo log groups each with `2` redo files per group.
Please refer to the Oracle documentation on how to manipulate redo log groups and files for your database version.
====
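To inspect the existing redo log groups and, if needed, add another one, statements along these lines can be used (a sketch; the group number, file paths, and size are illustrative and must be adapted to your environment):
[source,sql,indent=0]
----
-- List the current redo log groups, their member count and status
SELECT GROUP#, MEMBERS, BYTES / 1024 / 1024 AS SIZE_MB, STATUS FROM V$LOG;
-- Example: add a further group with two members
ALTER DATABASE ADD LOGFILE GROUP 6
  ('/opt/oracle/oradata/ORCLCDB/redo06a.log', '/opt/oracle/oradata/ORCLCDB/redo06b.log') SIZE 500M;
----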
Enable archive log mode:
[source,indent=0]
----
ORACLE_SID=ORCLCDB dbz_oracle sqlplus /nolog
CONNECT sys/top_secret AS SYSDBA
alter system set db_recovery_file_dest_size = 10G;
alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Should now show "Database log mode: Archive Mode"
archive log list
exit;
----
Furthermore, in order to capture the _before_ state of changed rows, supplemental logging must be enabled for the captured tables or the database in general.
E.g. like so for a specific table:
[source,indent=0]
----
ALTER TABLE inventory.customers ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
----
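To verify the current database-level supplemental logging configuration, a query like the following can be used (a sketch):
[source,sql,indent=0]
----
-- Shows whether minimal and all-column supplemental logging are enabled for the database
SELECT SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_ALL FROM V$DATABASE;
----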
=== Creating Users for the connector
==== XStream
Create an XStream admin user in the container database (used per Oracle's recommendation for administering XStream):
[source,indent=0]
----
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
CREATE TABLESPACE xstream_adm_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/xstream_adm_tbs.dbf'
SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
----
[source,indent=0]
----
sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba
CREATE TABLESPACE xstream_adm_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/xstream_adm_tbs.dbf'
SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
----
[source,indent=0]
----
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
CREATE USER c##xstrmadmin IDENTIFIED BY xsa
DEFAULT TABLESPACE xstream_adm_tbs
QUOTA UNLIMITED ON xstream_adm_tbs
CONTAINER=ALL;
GRANT CREATE SESSION, SET CONTAINER TO c##xstrmadmin CONTAINER=ALL;
BEGIN
DBMS_XSTREAM_AUTH.GRANT_ADMIN_PRIVILEGE(
grantee => 'c##xstrmadmin',
privilege_type => 'CAPTURE',
grant_select_privileges => TRUE,
container => 'ALL'
);
END;
/
exit;
----
Create XStream user (used by the {prodname} connector to connect to the XStream outbound server):
[source,indent=0]
----
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
CREATE TABLESPACE xstream_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/xstream_tbs.dbf'
SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
----
[source,indent=0]
----
sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba
CREATE TABLESPACE xstream_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/xstream_tbs.dbf'
SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
----
[source,indent=0]
----
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
CREATE USER c##xstrm IDENTIFIED BY xs
DEFAULT TABLESPACE xstream_tbs
QUOTA UNLIMITED ON xstream_tbs
CONTAINER=ALL;
GRANT CREATE SESSION TO c##xstrm CONTAINER=ALL;
GRANT SET CONTAINER TO c##xstrm CONTAINER=ALL;
GRANT SELECT ON V_$DATABASE to c##xstrm CONTAINER=ALL;
GRANT FLASHBACK ANY TABLE TO c##xstrm CONTAINER=ALL;
exit;
----
==== LogMiner
Create LogMiner user (used by the {prodname} connector to connect to Oracle):
[source,indent=0]
----
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/logminer_tbs.dbf'
SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
----
[source,indent=0]
----
sqlplus sys/top_secret@//localhost:1521/ORCLPDB1 as sysdba
CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/logminer_tbs.dbf'
SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
exit;
----
[source,indent=0]
----
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
CREATE USER c##logminer IDENTIFIED BY lm
DEFAULT TABLESPACE logminer_tbs
QUOTA UNLIMITED ON logminer_tbs
CONTAINER=ALL;
GRANT CREATE SESSION TO c##logminer CONTAINER=ALL;
GRANT SET CONTAINER TO c##logminer CONTAINER=ALL;
GRANT SELECT ON V_$DATABASE to c##logminer CONTAINER=ALL;
GRANT FLASHBACK ANY TABLE TO c##logminer CONTAINER=ALL;
GRANT LOGMINING TO c##logminer CONTAINER=ALL;
GRANT LOCK ANY TABLE TO c##logminer CONTAINER=ALL;
GRANT CREATE TABLE TO c##logminer CONTAINER=ALL;
GRANT SELECT ON V_$LOG TO c##logminer CONTAINER=ALL;
exit;
----
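To double-check that the grants took effect, you can query the granted system privileges as a DBA (a sketch):
[source,sql,indent=0]
----
-- Lists the system privileges granted to the LogMiner user
SELECT GRANTEE, PRIVILEGE FROM DBA_SYS_PRIVS WHERE GRANTEE = 'C##LOGMINER';
----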
=== Create an XStream Outbound Server
[NOTE]
====
If you're using the LogMiner implementation, this step is not necessary.
====
Create an https://docs.oracle.com/cd/E11882_01/server.112/e16545/xstrm_cncpt.htm#XSTRM1088[XStream Outbound server]
(given the right privileges, this may be done automatically by the connector going forward, see {jira-url}/browse/DBZ-721[DBZ-721]):
[source,indent=0]
----
sqlplus c##xstrmadmin/xsa@//localhost:1521/ORCLCDB
DECLARE
tables DBMS_UTILITY.UNCL_ARRAY;
schemas DBMS_UTILITY.UNCL_ARRAY;
BEGIN
tables(1) := NULL;
schemas(1) := 'debezium';
DBMS_XSTREAM_ADM.CREATE_OUTBOUND(
server_name => 'dbzxout',
table_names => tables,
schema_names => schemas);
END;
/
exit;
----
Alter the XStream Outbound server to allow the `c##xstrm` user to connect to it:
[source,indent=0]
----
sqlplus sys/top_secret@//localhost:1521/ORCLCDB as sysdba
BEGIN
DBMS_XSTREAM_ADM.ALTER_OUTBOUND(
server_name => 'dbzxout',
connect_user => 'c##xstrm');
END;
/
exit;
----
Note that a given outbound server must not be used by multiple connector instances at the same time.
If you wish to set up multiple instances of the {prodname} Oracle connector, a separate XStream Outbound server is needed for each of them.
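For example, a second connector instance would get its own outbound server, created analogously to the one above; in this sketch only the server name `dbzxout2` differs:
[source,indent=0]
----
sqlplus c##xstrmadmin/xsa@//localhost:1521/ORCLCDB
DECLARE
  tables  DBMS_UTILITY.UNCL_ARRAY;
  schemas DBMS_UTILITY.UNCL_ARRAY;
BEGIN
  tables(1)  := NULL;
  schemas(1) := 'debezium';
  -- Same captured schema as before, but a dedicated outbound server for the second connector
  DBMS_XSTREAM_ADM.CREATE_OUTBOUND(
    server_name  => 'dbzxout2',
    table_names  => tables,
    schema_names => schemas);
END;
/
exit;
----
As with the first server, `DBMS_XSTREAM_ADM.ALTER_OUTBOUND` must then be run for the new server to set its connect user.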
=== Supported Configurations
So far, the connector has been tested with the pluggable database set-up (CDB/PDB model).
It should monitor a single PDB in this model.
It should also work with traditional (non-CDB) set-ups, though this could not be tested so far.
[[how-the-oracle-connector-works]]
== How the Oracle Connector Works
[[oracle-database-schema-history]]
=== Database Schema History
tbd.
[[oracle-snapshots]]
=== Snapshots
Most Oracle servers are configured to not retain the complete history of the database in the redo logs,
so the {prodname} Oracle connector would be unable to see the entire history of the database by simply reading the logs.
So, by default (snapshotting mode *initial*) the connector will upon first startup perform an initial _consistent snapshot_ of the database
(meaning the structure and data within any tables to be captured as per the connector's filter configuration).
Each snapshot consists of the following steps:
1. Determine the tables to be captured.
2. Obtain an `IN EXCLUSIVE MODE` lock on each of the monitored tables to ensure that no structural changes can occur to any of the tables.
3. Read the current SCN ("system change number") position in the server's redo log.
4. Capture the structure of all relevant tables.
5. Release the locks obtained in step 2, i.e. the locks are held only for a short period of time.
6. Scan all of the relevant database tables and schemas as valid at the SCN position read in step 3 (`SELECT * FROM ... AS OF SCN 123`), and generate a `READ` event for each row and write that event to the appropriate table-specific Kafka topic.
7. Record the successful completion of the snapshot in the connector offsets.
If the connector fails, is rebalanced, or stops after step 1 begins but before step 7 completes,
upon restart the connector will begin a new snapshot.
Once the Oracle connector does complete its initial snapshot, it continues streaming from the position read during step 3,
ensuring that it does not miss any updates that occurred while the snapshot was taken.
If the connector stops again for any reason, upon restart it will simply continue streaming changes from where it previously left off.
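Expressed in SQL, steps 3 and 6 of the snapshot routine roughly correspond to queries like the following (a sketch; the table name and SCN value are illustrative):
[source,sql,indent=0]
----
-- Step 3: read the current SCN of the database
SELECT CURRENT_SCN FROM V$DATABASE;
-- Step 6: read the table contents as of that SCN (flashback query)
SELECT * FROM inventory.customers AS OF SCN 2122150;
----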
A second snapshotting mode is *schema_only*.
In this case, step 6 from the snapshotting routine described above is not applied.
In other words, the connector still captures the structure of the relevant tables, but it does not create any `READ` events representing the complete dataset at the point of connector start-up.
This can be useful if you are interested in data changes only from now onwards but not the complete current state of all records.
[[oracle-reading-the-log]]
=== Reading the Redo Log
Upon first start-up, the connector takes a snapshot of the structure of the captured tables (DDL)
and persists this information in its internal database history topic.
It then proceeds to listen for change events right from the SCN at which the schema structure was captured.
Processed SCNs are passed as offsets to Kafka Connect and regularly acknowledged with the database server
(allowing it to discard older log files).
After restart, the connector will resume from the offset (SCN) where it left off before.
[[oracle-topic-names]]
=== Topic Names
[[oracle-schema-change-topic]]
=== Schema Change Topic
[WARNING]
====
The format of the schema change topic messages is in an incubating state and may change without notice.
====
The Oracle connector stores the historical schema structure of database tables in a database history topic.
This topic should be considered internal connector state and should not be used directly by applications.
If an application needs to track changes in the source database, it should use the public-facing schema change topic instead.
The topic name is the same as the logical server name configured in the connector configuration.
{prodname} emits a new message to this topic whenever a new table is streamed from or when the structure of a table is altered ({link-prefix}:{link-oracle-connector}#oracle-schema-evolution[schema evolution procedure must be followed]).
The message contains a logical representation of the table schema.
An example message looks like this:
[source,json,indent=0,subs="attributes"]
----
{
"schema": {
...
},
"payload": {
"source": {
"version": "{debezium-version}",
"connector": "oracle",
"name": "server1",
"ts_ms": 1588252618953,
"snapshot": "true",
"db": "ORCLPDB1",
"schema": "DEBEZIUM",
"table": "CUSTOMERS",
"txId" : null,
"scn" : "1513734",
"commit_scn": "1513734",
"lcr_position" : null
},
"databaseName": "ORCLPDB1",
"schemaName": "DEBEZIUM",
"ddl": "CREATE TABLE \"DEBEZIUM\".\"CUSTOMERS\" \n ( \"ID\" NUMBER(9,0) NOT NULL ENABLE, \n \"FIRST_NAME\" VARCHAR2(255), \n \"LAST_NAME" VARCHAR2(255), \n \"EMAIL\" VARCHAR2(255), \n PRIMARY KEY (\"ID\") ENABLE, \n SUPPLEMENTAL LOG DATA (ALL) COLUMNS\n ) SEGMENT CREATION IMMEDIATE \n PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 \n NOCOMPRESS LOGGING\n STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645\n PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1\n BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)\n TABLESPACE \"USERS\" ",
"tableChanges": [
{
"type": "CREATE",
"id": "\"ORCLPDB1\".\"DEBEZIUM\".\"CUSTOMERS\"",
"table": {
"defaultCharsetName": null,
"primaryKeyColumnNames": [
"ID"
],
"columns": [
{
"name": "ID",
"jdbcType": 2,
"nativeType": null,
"typeName": "NUMBER",
"typeExpression": "NUMBER",
"charsetName": null,
"length": 9,
"scale": 0,
"position": 1,
"optional": false,
"autoIncremented": false,
"generated": false
},
{
"name": "FIRST_NAME",
"jdbcType": 12,
"nativeType": null,
"typeName": "VARCHAR2",
"typeExpression": "VARCHAR2",
"charsetName": null,
"length": 255,
"scale": null,
"position": 2,
"optional": false,
"autoIncremented": false,
"generated": false
},
{
"name": "LAST_NAME",
"jdbcType": 12,
"nativeType": null,
"typeName": "VARCHAR2",
"typeExpression": "VARCHAR2",
"charsetName": null,
"length": 255,
"scale": null,
"position": 3,
"optional": false,
"autoIncremented": false,
"generated": false
},
{
"name": "EMAIL",
"jdbcType": 12,
"nativeType": null,
"typeName": "VARCHAR2",
"typeExpression": "VARCHAR2",
"charsetName": null,
"length": 255,
"scale": null,
"position": 4,
"optional": false,
"autoIncremented": false,
"generated": false
}
]
}
}
]
}
}
----
The fields in the message are:
* `databaseName` and `schemaName` describe which database/schema has been affected
* `ddl` contains the DDL responsible for the schema change
* `tableChanges` array contains one or more schema changes generated by the DDL command
* `type` describes the kind of the change: `CREATE` - table created, `ALTER` - table modified, `DROP` - table deleted
* `id` is the full identifier of the table
* `table` represents table metadata after the applied change
* `primaryKeyColumnNames` is a list of columns that compose the table primary key
* `columns` are column metadata of each of the columns
The schema change messages use the name of the database to which the changes apply as the message key:
[source,json,indent=0,subs="attributes"]
----
{
"schema": {
"type": "struct",
"fields": [
{
"type": "string",
"optional": false,
"field": "databaseName"
}
],
"optional": false,
"name": "io.debezium.connector.oracle.SchemaChangeKey"
},
"payload": {
"databaseName": "ORCLPDB1"
}
}
----
[[oracle-events]]
=== Events
All data change events produced by the Oracle connector have a key and a value, although the structure of the key and value depend on the table from which the change events originated (see {link-prefix}:{link-oracle-connector}#oracle-topic-names[Topic names]).
[WARNING]
====
The {prodname} Oracle connector ensures that all Kafka Connect _schema names_ are http://avro.apache.org/docs/current/spec.html#names[valid Avro schema names].
This means that the logical server name must start with Latin letters or an underscore (e.g., [a-z,A-Z,\_]),
and the remaining characters in the logical server name and all characters in the schema and table names must be Latin letters, digits, or an underscore (e.g., [a-z,A-Z,0-9,\_]).
If not, then all invalid characters will automatically be replaced with an underscore character.
This can lead to unexpected conflicts when the logical server name, schema names, and table names contain other characters, and the only distinguishing characters between table full names are invalid and thus replaced with underscores.
====
{prodname} and Kafka Connect are designed around _continuous streams of event messages_, and the structure of these events may change over time.
This could be difficult for consumers to deal with, so to make it easier Kafka Connect makes each event self-contained.
Every message key and value has two parts: a _schema_ and a _payload_.
The schema describes the structure of the payload, while the payload contains the actual data.
[[oracle-change-event-keys]]
==== Change Event Keys
For a given table, the change event's key will have a structure that contains a field for each column in the primary key (or unique key constraint) of the table at the time the event was created.
Consider a `customers` table defined in the `inventory` database schema:
[source,sql,indent=0]
----
CREATE TABLE customers (
id NUMBER(9) GENERATED BY DEFAULT ON NULL AS IDENTITY (START WITH 1001) NOT NULL PRIMARY KEY,
first_name VARCHAR2(255) NOT NULL,
last_name VARCHAR2(255) NOT NULL,
email VARCHAR2(255) NOT NULL UNIQUE
);
----
If the `database.server.name` configuration property has the value `server1`,
every change event for the `customers` table while it has this definition will feature the same key structure, which in JSON looks like this:
[source,json,indent=0]
----
{
"schema": {
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "ID"
}
],
"optional": false,
"name": "server1.INVENTORY.CUSTOMERS.Key"
},
"payload": {
"ID": 1004
}
}
----
The `schema` portion of the key contains a Kafka Connect schema describing what is in the key portion; in this case, it means that the `payload` value is not optional, is a structure defined by a schema named `server1.INVENTORY.CUSTOMERS.Key`, and has one required field named `ID` of type `int32`.
If you look at the value of the key's `payload` field, you can see that it is indeed a structure (which in JSON is just an object) with a single `ID` field, whose value is `1004`.
Therefore, you can interpret this key as describing the row in the `inventory.customers` table (output from the connector named `server1`) whose `ID` primary key column had a value of `1004`.
////
[NOTE]
====
Although the `column.exclude.list` configuration property allows you to remove columns from the event values, all columns in a primary or unique key are always included in the event's key.
====
[WARNING]
====
If the table does not have a primary or unique key, then the change event's key will be null. This makes sense since the rows in a table without a primary or unique key constraint cannot be uniquely identified.
====
////
[[oracle-change-event-values]]
==== Change Event Values
Like the message key, the value of a change event message has a _schema_ section and _payload_ section.
The payload section of every change event value produced by the Oracle connector has an _envelope_ structure with the following fields:
* `op` is a mandatory field that contains a string value describing the type of operation. Values for the Oracle connector are `c` for create (or insert), `u` for update, `d` for delete, and `r` for read (in the case of a snapshot).
* `before` is an optional field that if present contains the state of the row _before_ the event occurred. The structure will be described by the `server1.INVENTORY.CUSTOMERS.Value` Kafka Connect schema, which the `server1` connector uses for all rows in the `inventory.customers` table.
[WARNING]
====
Whether or not this field and its elements are available is highly dependent on the https://docs.oracle.com/database/121/SUTIL/GUID-D2DDD67C-E1CC-45A6-A2A7-198E4C142FA3.htm#SUTIL1583[Supplemental Logging] configuration applying to the table.
====
* `after` is an optional field that if present contains the state of the row _after_ the event occurred. The structure is described by the same `server1.INVENTORY.CUSTOMERS.Value` Kafka Connect schema used in `before`.
* `source` is a mandatory field that contains a structure describing the source metadata for the event, which in the case of Oracle contains these fields: the {prodname} version, the connector name, whether the event is part of an ongoing snapshot or not, the transaction id (not while snapshotting), the SCN of the change, and a timestamp representing the point in time when the record was changed in the source database (during snapshotting, this is the point in time of snapshotting).
[TIP]
====
The `commit_scn` field is optional and describes the SCN of the transaction commit that the change event participates within.
This field is only present when using the LogMiner connection adapter.
====
* `ts_ms` is optional and if present contains the time (using the system clock in the JVM running the Kafka Connect task) at which the connector processed the event.
And of course, the _schema_ portion of the event message's value contains a schema that describes this envelope structure and the nested fields within it.
[[oracle-create-events]]
===== Create events
Let's look at what a _create_ event value might look like for our `customers` table:
[source,json,indent=0,subs="attributes"]
----
{
"schema": {
"type": "struct",
"fields": [
{
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "ID"
},
{
"type": "string",
"optional": false,
"field": "FIRST_NAME"
},
{
"type": "string",
"optional": false,
"field": "LAST_NAME"
},
{
"type": "string",
"optional": false,
"field": "EMAIL"
}
],
"optional": true,
"name": "server1.DEBEZIUM.CUSTOMERS.Value",
"field": "before"
},
{
"type": "struct",
"fields": [
{
"type": "int32",
"optional": false,
"field": "ID"
},
{
"type": "string",
"optional": false,
"field": "FIRST_NAME"
},
{
"type": "string",
"optional": false,
"field": "LAST_NAME"
},
{
"type": "string",
"optional": false,
"field": "EMAIL"
}
],
"optional": true,
"name": "server1.DEBEZIUM.CUSTOMERS.Value",
"field": "after"
},
{
"type": "struct",
"fields": [
{
"type": "string",
"optional": true,
"field": "version"
},
{
"type": "string",
"optional": false,
"field": "name"
},
{
"type": "int64",
"optional": true,
"field": "ts_ms"
},
{
"type": "string",
"optional": true,
"field": "txId"
},
{
"type": "int64",
"optional": true,
"field": "scn"
},
{
"type": "int64",
"optional": true,
"field": "commit_scn"
},
{
"type": "boolean",
"optional": true,
"field": "snapshot"
}
],
"optional": false,
"name": "io.debezium.connector.oracle.Source",
"field": "source"
},
{
"type": "string",
"optional": false,
"field": "op"
},
{
"type": "int64",
"optional": true,
"field": "ts_ms"
}
],
"optional": false,
"name": "server1.DEBEZIUM.CUSTOMERS.Envelope"
},
"payload": {
"before": null,
"after": {
"ID": 1004,
"FIRST_NAME": "Anne",
"LAST_NAME": "Kretchmar",
"EMAIL": "annek@noanswer.org"
},
"source": {
"version": "0.9.0.Alpha1",
"name": "server1",
"ts_ms": 1520085154000,
"txId": "6.28.807",
"scn": 2122185,
"commit_scn": 2122185,
"snapshot": false
},
"op": "c",
"ts_ms": 1532592105975
}
}
----
If we look at the `schema` portion of this event's _value_, we can see the schema for the _envelope_, the schema for the `source` structure (which is specific to the Oracle connector and reused across all events), and the table-specific schemas for the `before` and `after` fields.
[TIP]
====
The names of the schemas for the `before` and `after` fields are of the form _logicalName_._schemaName_._tableName_.Value, and thus are entirely independent from all other schemas for all other tables.
This means that when using the link:/docs/faq/#avro-converter[Avro Converter], the resulting Avro schemas for _each table_ in each _logical source_ have their own evolution and history.
====
If we look at the `payload` portion of this event's _value_, we can see the information in the event, namely that it is describing that the row was created (since `op=c`), and that the `after` field value contains the values of the newly inserted row's `ID`, `FIRST_NAME`, `LAST_NAME`, and `EMAIL` columns.
[TIP]
====
It may appear that the JSON representations of the events are much larger than the rows they describe.
This is true, because the JSON representation must include the _schema_ and the _payload_ portions of the message.
It is possible and even recommended to use the link:/docs/faq/#avro-converter[Avro Converter] to dramatically decrease the size of the actual messages written to the Kafka topics.
====
[[oracle-update-events]]
===== Update events
The value of an _update_ change event on this table will actually have the exact same _schema_, and its payload will be structured the same but will hold different values.
Here's an example:
[source,json,indent=0,subs="attributes"]
----
{
"schema": { ... },
"payload": {
"before": {
"ID": 1004,
"FIRST_NAME": "Anne",
"LAST_NAME": "Kretchmar",
"EMAIL": "annek@noanswer.org"
},
"after": {
"ID": 1004,
"FIRST_NAME": "Anne",
"LAST_NAME": "Kretchmar",
"EMAIL": "anne@example.com"
},
"source": {
"version": "0.9.0.Alpha1",
"name": "server1",
"ts_ms": 1520085811000,
"txId": "6.9.809",
"scn": 2125544,
"commit_scn": 2125544,
"snapshot": false
},
"op": "u",
"ts_ms": 1532592713485
}
}
----
When we compare this to the value in the _insert_ event, we see a couple of differences in the `payload` section:
* The `op` field value is now `u`, signifying that this row changed because of an update
* The `before` field now has the state of the row with the values before the database commit
* The `after` field now has the updated state of the row, and here we can see that the `EMAIL` value is now `anne@example.com`.
* The `source` field structure has the same fields as before, but the values are different since this event is from a different position in the redo log.
* The `ts_ms` shows the timestamp that {prodname} processed this event.
There are several things we can learn by just looking at this `payload` section. We can compare the `before` and `after` structures to determine what actually changed in this row because of the commit.
The `source` structure tells us information about Oracle's record of this change (providing traceability), but more importantly this has information we can compare to other events in this and other topics to know whether this event occurred before, after, or as part of the same Oracle commit as other events.
[NOTE]
====
When the columns for a row's primary/unique key are updated, the value of the row's key has changed so {prodname} will output _three_ events: a `DELETE` event and a {link-prefix}:{link-oracle-connector}#oracle-tombstone-events[tombstone event] with the old key for the row, followed by an `INSERT` event with the new key for the row.
====
[[oracle-delete-events]]
===== Delete events
So far we've seen samples of _create_ and _update_ events.
Now, let's look at the value of a _delete_ event for the same table. Once again, the `schema` portion of the value will be exactly the same as with the _create_ and _update_ events:
[source,json,indent=0,subs="attributes"]
----
{
"schema": { ... },
"payload": {
"before": {
"ID": 1004,
"FIRST_NAME": "Anne",
"LAST_NAME": "Kretchmar",
"EMAIL": "anne@example.com"
},
"after": null,
"source": {
"version": "0.9.0.Alpha1",
"name": "server1",
"ts_ms": 1520085153000,
"txId": "6.28.807",
"scn": 2122184,
"commit_scn": 2122184,
"snapshot": false
},
"op": "d",
"ts_ms": 1532592105960
}
}
----
If we look at the `payload` portion, we see a number of differences compared with the _create_ or _update_ event payloads:
* The `op` field value is now `d`, signifying that this row was deleted
* The `before` field now has the state of the row that was deleted with the database commit.
* The `after` field is null, signifying that the row no longer exists
* The `source` field structure has many of the same values as before, except the `ts_ms`, `scn` and `txId` fields have changed
* The `ts_ms` shows the timestamp that {prodname} processed this event.
This event gives a consumer all kinds of information that it can use to process the removal of this row.
The Oracle connector's events are designed to work with https://cwiki.apache.org/confluence/display/KAFKA/Log+Compaction[Kafka log compaction],
which allows for the removal of some older messages as long as at least the most recent message for every key is kept.
This allows Kafka to reclaim storage space while ensuring the topic contains a complete dataset and can be used for reloading key-based state.
[[oracle-tombstone-events]]
When a row is deleted, the _delete_ event value listed above still works with log compaction, since Kafka can still remove all earlier messages with that same key.
But only if the message value is `null` will Kafka know that it can remove _all messages_ with that same key.
To make this possible, {prodname}'s Oracle connector always follows the _delete_ event with a special _tombstone_ event that has the same key but `null` value.
[[oracle-transaction-metadata]]
=== Transaction Metadata
{prodname} can generate events that represent transaction metadata boundaries and enrich data messages.
==== Transaction boundaries
{prodname} generates events for every transaction `BEGIN` and `END`.
Every event contains
* `status` - `BEGIN` or `END`
* `id` - string representation of unique transaction identifier
* `event_count` (for `END` events) - total number of events emitted by the transaction
* `data_collections` (for `END` events) - an array of pairs of `data_collection` and `event_count` that provides the number of events emitted by changes originating from the given data collection
Following is an example of what a message looks like:
[source,json,indent=0,subs="attributes"]
----
{
"status": "BEGIN",
"id": "5.6.641",
"event_count": null,
"data_collections": null
}
{
"status": "END",
"id": "5.6.641",
"event_count": 2,
"data_collections": [
{
"data_collection": "ORCLPDB1.DEBEZIUM.CUSTOMER",
"event_count": 1
},
{
"data_collection": "ORCLPDB1.DEBEZIUM.ORDER",
"event_count": 1
}
]
}
----
The transaction events are written to the topic named `<database.server.name>.transaction`.
==== Data events enrichment
When transaction metadata is enabled, the data message `Envelope` is enriched with a new `transaction` field.
This field provides information about every event in the form of a composite of fields:
* `id` - string representation of unique transaction identifier
* `total_order` - the absolute position of the event among all events generated by the transaction
* `data_collection_order` - the per-data collection position of the event among all events that were emitted by the transaction
Following is an example of what a message looks like:
[source,json,indent=0,subs="attributes"]
----
{
"before": null,
"after": {
"pk": "2",
"aa": "1"
},
"source": {
...
},
"op": "c",
"ts_ms": "1580390884335",
"transaction": {
"id": "5.6.641",
"total_order": "1",
"data_collection_order": "1"
}
}
----
[[oracle-data-types]]
=== Data Types
As described above, the {prodname} Oracle connector represents changes to rows with events that are structured like the table in which the rows exist.
The event contains a field for each column value, and how that value is represented in the event depends on the Oracle data type of the column.
This section describes this mapping from Oracle's data types to a _literal type_ and _semantic type_ within the events' fields.
Here, the _literal type_ describes how the value is literally represented using Kafka Connect schema types, namely `INT8`, `INT16`, `INT32`, `INT64`, `FLOAT32`, `FLOAT64`, `BOOLEAN`, `STRING`, `BYTES`, `ARRAY`, `MAP`, and `STRUCT`.
The _semantic type_ describes how the Kafka Connect schema captures the _meaning_ of the field using the name of the Kafka Connect schema for the field.
Support for further data types will be added in subsequent releases.
Please file a {jira-url}/browse/DBZ[JIRA issue] for any specific types you are missing.
[[oracle-character-values]]
==== Character Values
[cols="20%a,15%a,30%a,35%a"]
|===
|Oracle Data Type
|Literal type (schema type)
|Semantic type (schema name)
|Notes
|`CHAR[(M)]`
|`STRING`
|n/a
|
|`NCHAR[(M)]`
|`STRING`
|n/a
|
|`VARCHAR[(M)]`
|`STRING`
|n/a
|
|`VARCHAR2[(M)]`
|`STRING`
|n/a
|
|`NVARCHAR2[(M)]`
|`STRING`
|n/a
|
|===
[[oracle-numeric-values]]
==== Numeric Values
[cols="20%a,15%a,30%a,35%a"]
|===
|Oracle Data Type
|Literal type (schema type)
|Semantic type (schema name)
|Notes
|`NUMBER[(P[, *])]`
|`STRUCT`
|`io.debezium.data.VariableScaleDecimal`
|Contains a structure with two fields: `scale` of type `INT32` that contains the scale of the transferred value and `value` of type `BYTES` containing the original value in an unscaled form.
|`NUMBER(P, S > 0)`
|`BYTES`
|`org.apache.kafka.connect.data.Decimal`
|
|`NUMBER(P, S <= 0)`
|`INT8` / `INT16` / `INT32` / `INT64`
|n/a
|`NUMBER` columns with a scale of 0 represent integer numbers; a negative scale indicates rounding in Oracle, e.g. a scale of -2 will cause rounding to hundreds. +
Depending on the precision and scale, a matching Kafka Connect integer type will be chosen: `INT8` if P - S < 3, `INT16` if P - S < 5, `INT32` if P - S < 10 and `INT64` if P - S < 19. +
If P - S >= 19, the column will be mapped to `BYTES` (`org.apache.kafka.connect{zwsp}.data.Decimal`).
|`SMALLINT`
|`BYTES`
|`org.apache.kafka.connect.data.Decimal`
|`SMALLINT` is mapped in Oracle to NUMBER(38,0) and hence can hold values larger than any of the `INT` types could store
|`INTEGER`, `INT`
|`BYTES`
|`org.apache.kafka.connect.data.Decimal`
|`INTEGER` is mapped in Oracle to NUMBER(38,0) and hence can hold values larger than any of the `INT` types could store
|`NUMERIC[(P, S)]`
|`BYTES` / `INT8` / `INT16` / `INT32` / `INT64`
|`org.apache.kafka.connect.data.Decimal` if using `BYTES`
|Handled equivalently to `NUMBER` (note that S defaults to 0 for `NUMERIC`).
|`DECIMAL[(P, S)]`
|`BYTES` / `INT8` / `INT16` / `INT32` / `INT64`
|`org.apache.kafka.connect.data.Decimal` if using `BYTES`
|Handled equivalently to `NUMBER` (note that S defaults to 0 for `DECIMAL`).
|`BINARY_FLOAT`
|`FLOAT32`
|n/a
|
|`BINARY_DOUBLE`
|`FLOAT64`
|n/a
|
|`FLOAT[(P)]`
|`STRUCT`
|`io.debezium.data.VariableScaleDecimal`
|Contains a structure with two fields: `scale` of type `INT32` that contains the scale of the transferred value and `value` of type `BYTES` containing the original value in an unscaled form.
|`DOUBLE PRECISION`
|`STRUCT`
|`io.debezium.data.VariableScaleDecimal`
|Contains a structure with two fields: `scale` of type `INT32` that contains the scale of the transferred value and `value` of type `BYTES` containing the original value in an unscaled form.
|`REAL`
|`STRUCT`
|`io.debezium.data.VariableScaleDecimal`
|Contains a structure with two fields: `scale` of type `INT32` that contains the scale of the transferred value and `value` of type `BYTES` containing the original value in an unscaled form.
|===
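To illustrate these rules, the following hypothetical table pairs a few `NUMBER` column definitions with the Kafka Connect types the connector would be expected to choose for them (a sketch derived from the mapping table above; the table and column names are illustrative):
[source,sql,indent=0]
----
CREATE TABLE inventory.numeric_examples (
  tiny_val  NUMBER(2, 0),  -- P - S = 2  -> INT8
  small_val NUMBER(4, 0),  -- P - S = 4  -> INT16
  int_val   NUMBER(9, 0),  -- P - S = 9  -> INT32
  long_val  NUMBER(18, 0), -- P - S = 18 -> INT64
  dec_val   NUMBER(10, 2), -- S > 0      -> BYTES (org.apache.kafka.connect.data.Decimal)
  any_val   NUMBER         -- no fixed scale -> STRUCT (io.debezium.data.VariableScaleDecimal)
);
----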
[[oracle-decimal-values]]
==== Decimal Values
When the `decimal.handling.mode` configuration property is set to `precise`, the connector uses the predefined Kafka Connect `org.apache.kafka.connect.data.Decimal` or `io.debezium.data.VariableScaleDecimal` logical types for numeric columns as described above.
This is the default mode.
However, when the `decimal.handling.mode` configuration property is set to `double`, the connector represents the values as Java double values with schema type `FLOAT64`.
The last option for the `decimal.handling.mode` configuration property is `string`. In this case the connector represents the values as their formatted string representation with schema type `STRING`.
[[oracle-temporal-values]]
==== Temporal Values
[cols="20%a,15%a,30%a,35%a"]
|===
|Oracle Data Type
|Literal type (schema type)
|Semantic type (schema name)
|Notes
|`DATE`
|`INT64`
|`io.debezium.time.Timestamp`
| Represents the number of milliseconds past epoch, and does not include timezone information.
|`TIMESTAMP(0 - 3)`
|`INT64`
|`io.debezium.time.Timestamp`
| Represents the number of milliseconds past epoch, and does not include timezone information.
|`TIMESTAMP, TIMESTAMP(4 - 6)`
|`INT64`
|`io.debezium.time.MicroTimestamp`
| Represents the number of microseconds past epoch, and does not include timezone information.
|`TIMESTAMP(7 - 9)`
|`INT64`
|`io.debezium.time.NanoTimestamp`
| Represents the number of nanoseconds past epoch, and does not include timezone information.
|`TIMESTAMP WITH TIME ZONE`
|`STRING`
|`io.debezium.time.ZonedTimestamp`
| A string representation of a timestamp with timezone information
|`INTERVAL`
|`FLOAT64`
|`io.debezium.time.MicroDuration`
|The number of microseconds for a time interval, using the `365.25 / 12.0` formula for the average number of days per month
|===
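As an illustration, the following hypothetical table pairs a few temporal column definitions with the semantic types from the table above (a sketch; the table and column names are illustrative):
[source,sql,indent=0]
----
CREATE TABLE inventory.temporal_examples (
  created_on DATE,                     -- io.debezium.time.Timestamp (milliseconds)
  updated_at TIMESTAMP(6),             -- io.debezium.time.MicroTimestamp (microseconds)
  logged_at  TIMESTAMP(9),             -- io.debezium.time.NanoTimestamp (nanoseconds)
  ordered_at TIMESTAMP WITH TIME ZONE, -- io.debezium.time.ZonedTimestamp (string)
  validity   INTERVAL YEAR TO MONTH    -- io.debezium.time.MicroDuration (FLOAT64)
);
----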
[[oracle-deploying-a-connector]]
== Deploying a Connector
Due to licensing requirements, the {prodname} Oracle Connector does not ship with the Oracle JDBC driver and the XStream API JAR.
You can obtain them for free by downloading the http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html[Oracle Instant Client].
Extract the archive into a directory, e.g. _/path/to/instant_client/_.
Copy the files _ojdbc8.jar_ and _xstreams.jar_ from the Instant Client into Kafka's _libs_ directory.
Create the environment variable `LD_LIBRARY_PATH`, pointing to the Instant Client directory:
[source,bash,indent=0]
----
export LD_LIBRARY_PATH=/path/to/instant_client/
----
[[oracle-example-configuration]]
=== Example Configuration
The following shows an example JSON request for registering an instance of the {prodname} Oracle connector:
[source,json,indent=0]
----
{
"name": "inventory-connector",
"config": {
"connector.class" : "io.debezium.connector.oracle.OracleConnector",
"tasks.max" : "1",
"database.server.name" : "server1",
"database.hostname" : "<oracle ip>",
"database.port" : "1521",
"database.user" : "c##xstrm",
"database.password" : "xs",
"database.dbname" : "ORCLCDB",
"database.pdb.name" : "ORCLPDB1",
"database.out.server.name" : "dbzxout",
"database.history.kafka.bootstrap.servers" : "kafka:9092",
"database.history.kafka.topic": "schema-changes.inventory"
}
}
----
[[selecting-the-adapter]]
== Selecting the adapter
{prodname} provides multiple ways to ingest change events from Oracle.
By default {prodname} uses the XStream API but this isn't always applicable for every installation.
The following example configuration illustrates how, by adding the `database.connection.adapter` property, the connector can be toggled to use the LogMiner implementation.
[source,json,indent=0]
----
{
"name": "inventory-connector",
"config": {
"connector.class" : "io.debezium.connector.oracle.OracleConnector",
"tasks.max" : "1",
"database.server.name" : "server1",
"database.hostname" : "<oracle ip>",
"database.port" : "1521",
"database.user" : "c##xstrm",
"database.password" : "xs",
"database.dbname" : "ORCLCDB",
"database.pdb.name" : "ORCLPDB1",
"database.out.server.name" : "dbzxout",
"database.history.kafka.bootstrap.servers" : "kafka:9092",
"database.history.kafka.topic": "schema-changes.inventory"
"database.connection.adapter": "logminer"
}
}
----
[NOTE]
====
We encourage you to use the LogMiner implementation for testing purposes and to provide us with feedback, but we do not yet recommend its use in production, as it is still under active development.
====
[[oracle-monitoring]]
=== Monitoring
The {prodname} Oracle connector has three metric types in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect have.
* <<oracle-snapshot-metrics, snapshot metrics>>; for monitoring the connector when performing snapshots
* <<oracle-streaming-metrics, streaming metrics>>; for monitoring the connector when processing change events
* <<oracle-schema-history-metrics, schema history metrics>>; for monitoring the status of the connector's schema history
Please refer to the {link-prefix}:{link-debezium-monitoring}#monitoring-debezium[monitoring documentation] for details of how to expose these metrics via JMX.
[[oracle-monitoring-snapshots]]
[[oracle-snapshot-metrics]]
==== Snapshot Metrics
The *MBean* is `debezium.oracle:type=connector-metrics,context=snapshot,server=_<database.server.name>_`.
include::{partialsdir}/modules/all-connectors/ref-connector-monitoring-snapshot-metrics.adoc[leveloffset=+1]
[[oracle-monitoring-streaming]]
[[oracle-streaming-metrics]]
==== Streaming Metrics
The *MBean* is `debezium.oracle:type=connector-metrics,context=streaming,server=_<database.server.name>_`.
include::{partialsdir}/modules/all-connectors/ref-connector-monitoring-streaming-metrics.adoc[leveloffset=+1]
[[oracle-monitoring-streaming-logminer]]
==== LogMiner Metrics
The *MBean* is `debezium.oracle:type=connector-metrics,context=log-miner,server=_<database.server.name>_`.
[cols="45%a,25%a,30%a"]
|===
|Attributes |Type |Description
|[[log-miner-metrics-currentscn]]<<log-miner-metrics-currentscn, `CurrentScn`>>
|`long`
|The most recent SCN that has been processed.
|[[log-miner-metrics-captureddmlcount]]<<log-miner-metrics-captureddmlcount, `CapturedDmlCount`>>
|`int`
|The number of DML operations observed.
|[[log-miner-metrics-currentlogfilename]]<<log-miner-metrics-currentlogfilename, `CurrentLogFileName`>>
|`string[]`
|The current redo log filename.
|[[log-miner-metrics-redologstatus]]<<log-miner-metrics-redologstatus, `RedoLogStatus`>>
|`string[]`
|The current status of the redo logs.
|[[log-miner-metrics-switchcounter]]<<log-miner-metrics-switchcounter, `SwitchCounter`>>
|`long`
|The number of times the redo logs have switched.
|[[log-miner-metrics-lastlogminerqueryduration]]<<log-miner-metrics-lastlogminerqueryduration, `LastLogMinerQueryDuration`>>
|`Duration`
|The duration it took for the last log mining query to prepare results for processing.
|[[log-miner-metrics-averagelogminerqueryduration]]<<log-miner-metrics-averagelogminerqueryduration, `AverageLogMinerQueryDuration`>>
|`Duration`
|The average duration it has taken for log mining queries to prepare results for processing.
|[[log-miner-metrics-logminerquerycount]]<<log-miner-metrics-logminerquerycount, `LogMinerQueryCount`>>
|`int`
|The number of log mining queries executed.
|[[log-miner-metrics-lastprocessedcapturedbatchduration]]<<log-miner-metrics-lastprocessedcapturedbatchduration, `LastProcessedCapturedBatchDuration`>>
|`Duration`
|The duration it took for the last log mining query results to be processed.
|[[log-miner-metrics-averageprocessedcapturedbatchduration]]<<log-miner-metrics-averageprocessedcapturedbatchduration, `AverageProcessedCapturedBatchDuration`>>
|`Duration`
|The average duration it has taken for the log mining query results to be processed.
|[[log-miner-metrics-processedcapturebatchcount]]<<log-miner-metrics-processedcapturebatchcount, `ProcesssedCaptureBatchCount`>>
|`int`
|The number of log mining query results processed.
|[[log-miner-metrics-batchsize]]<<log-miner-metrics-batchsize, `BatchSize`>>
|`int`
|The number of entries fetched by the log mining query per database round-trip.
|[[log-miner-metrics-millisecondtosleepbetweenminingquery]]<<log-miner-metrics-millisecondtosleepbetweenminingquery, `MillisecondToSleepBetweenMiningQuery`>>
|`int`
|The number of milliseconds the connector sleeps before fetching another batch of results from the log mining view.
|===
[[oracle-monitoring-schema-history]]
[[oracle-schema-history-metrics]]
==== Schema History Metrics
The *MBean* is `debezium.oracle:type=connector-metrics,context=schema-history,server=_<database.server.name>_`.
include::{partialsdir}/modules/all-connectors/ref-connector-monitoring-schema-history-metrics.adoc[leveloffset=+1]
[[oracle-connector-properties]]
=== Connector Properties
The following configuration properties are _required_ unless a default value is available.
[cols="30%a,25%a,45%a"]
|===
|Property
|Default
|Description
|[[oracle-property-name]]<<oracle-property-name, `name`>>
|
|Unique name for the connector. Attempting to register again with the same name will fail. (This property is required by all Kafka Connect connectors.)
|[[oracle-property-connector-class]]<<oracle-property-connector-class, `connector.class`>>
|
|The name of the Java class for the connector. Always use a value of `io.debezium{zwsp}.connector.oracle.OracleConnector` for the Oracle connector.
|[[oracle-property-tasks-max]]<<oracle-property-tasks-max, `tasks.max`>>
|`1`
|The maximum number of tasks that should be created for this connector. The Oracle connector always uses a single task and therefore does not use this value, so the default is always acceptable.
|[[oracle-property-database-hostname]]<<oracle-property-database-hostname, `database.hostname`>>
|
|IP address or hostname of the Oracle database server.
|[[oracle-property-database-port]]<<oracle-property-database-port, `database.port`>>
|
|Integer port number of the Oracle database server.
|[[oracle-property-database-user]]<<oracle-property-database-user, `database.user`>>
|
|Name of the user to use when connecting to the Oracle database server.
|[[oracle-property-database-password]]<<oracle-property-database-password, `database.password`>>
|
|Password to use when connecting to the Oracle database server.
|[[oracle-property-database-dbname]]<<oracle-property-database-dbname, `database.dbname`>>
|
|Name of the database to connect to. Must be the CDB name when working with the CDB + PDB model.
|[[oracle-property-database-pdb-name]]<<oracle-property-database-pdb-name, `database.pdb.name`>>
|
|Name of the PDB to connect to, when working with the CDB + PDB model.
|[[oracle-property-database-out-server-name]]<<oracle-property-database-out-server-name, `database.out.server.name`>>
|
|Name of the XStream outbound server configured in the database.
|[[oracle-property-database-server-name]]<<oracle-property-database-server-name, `database.server.name`>>
|
|Logical name that identifies and provides a namespace for the particular Oracle database server being monitored. The logical name should be unique across all other connectors, since it is used as a prefix for all Kafka topic names emanating from this connector.
Only alphanumeric characters and underscores should be used.
|[[oracle-property-database-connection-adapter]]<<oracle-property-database-connection-adapter, `database.connection.adapter`>>
|`xstream`
|The adapter implementation to use.
`xstream` uses the Oracle XStreams API.
`logminer` uses the native Oracle LogMiner API.
|[[oracle-property-rac-nodes]]<<oracle-property-rac-nodes, `rac.nodes`>>
|
|A comma-separated list of RAC node host names or addresses.
This field is required to enable Oracle RAC support.
|[[oracle-property-database-history-kafka-topic]]<<oracle-property-database-history-kafka-topic, `database.history.kafka.topic`>>
|
|The full name of the Kafka topic where the connector will store the database schema history.
|[[oracle-property-database-history-kafka-bootstrap-servers]]<<oracle-property-database-history-kafka-bootstrap-servers, `database.history{zwsp}.kafka.bootstrap.servers`>>
|
|A list of host/port pairs that the connector will use for establishing an initial connection to the Kafka cluster. This connection will be used for retrieving database schema history previously stored by the connector, and for writing each DDL statement read from the source database. This should point to the same Kafka cluster used by the Kafka Connect process.
|[[oracle-property-snapshot-mode]]<<oracle-property-snapshot-mode, `snapshot.mode`>>
|_initial_
|A mode for taking an initial snapshot of the structure and optionally data of captured tables. Supported values are _initial_ (will take a snapshot of structure and data of captured tables; useful if topics should be populated with a complete representation of the data from the captured tables) and _schema_only_ (will take a snapshot of the structure of captured tables only; useful if only changes happening from now onwards should be propagated to topics). Once the snapshot is complete, the connector will continue reading change events from the database's redo logs.
|[[oracle-property-table-whitelist]]
[[oracle-property-table-include-list]]<<oracle-property-table-include-list, `table.include.list`>>
|_empty string_
|An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be monitored; any table not included in the include list will be excluded from monitoring. Each identifier is of the form _schemaName_._tableName_. By default the connector will monitor every non-system table in each monitored database. May not be used with `table.exclude.list`.
|[[oracle-property-table-blacklist]]
[[oracle-property-table-exclude-list]]<<oracle-property-table-exclude-list, `table.exclude.list`>>
|_empty string_
|An optional comma-separated list of regular expressions that match fully-qualified table identifiers for tables to be excluded from monitoring; any table not included in the exclude list will be monitored. Each identifier is of the form _schemaName_._tableName_. May not be used with `table.include.list`.
|[[oracle-property-column-mask-hash]]<<oracle-property-column-mask-hash, `column.mask.hash._hashAlgorithm_.with.salt._salt_`>>
|_n/a_
|An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be pseudonymized in the change event message values, replaced by a field value consisting of the hashed value computed with the algorithm `_hashAlgorithm_` and salt `_salt_`.
Depending on the hash function used, referential integrity is maintained while data is pseudonymized. Supported hash functions are described in the {link-java7-standard-names}[MessageDigest section] of the Java Cryptography Architecture Standard Algorithm Name Documentation.
The hash is automatically shortened to the length of the column.
Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer or zero. Fully-qualified names for columns are of the form _pdbName_._schemaName_._tableName_._columnName_.
Example:
column.mask.hash.SHA-256.with.salt.CzQMA0cB5K = inventory.orders.customerName, inventory.shipment.customerName
where `CzQMA0cB5K` is a randomly selected salt.
Note: Depending on the `_hashAlgorithm_` used, the `_salt_` selected and the actual data set, the resulting masked data set may not be completely anonymized.
|[[oracle-property-decimal-handling-mode]]<<oracle-property-decimal-handling-mode, `decimal.handling.mode`>>
|`precise`
| Specifies how the connector should handle floating point values for `NUMBER`, `DECIMAL` and `NUMERIC` columns: `precise` (the default) represents them precisely using `java.math.BigDecimal` values represented in change events in a binary form; `double` represents them using `double` values, which may result in a loss of precision but is far easier to use. The `string` option encodes values as formatted strings, which is easy to consume, but semantic information about the real type is lost. See <<oracle-decimal-values>>.
|[[oracle-property-event-processing-failure-handling-mode]]<<oracle-property-event-processing-failure-handling-mode, `event.processing{zwsp}.failure.handling.mode`>>
|`fail`
| Specifies how the connector should react to exceptions during processing of events.
`fail` will propagate the exception (indicating the offset of the problematic event), causing the connector to stop. +
`warn` will cause the problematic event to be skipped and the offset of the problematic event to be logged. +
`skip` will cause the problematic event to be skipped.
|[[oracle-property-max-queue-size]]<<oracle-property-max-queue-size, `max.queue.size`>>
|`8192`
|Positive integer value that specifies the maximum size of the blocking queue into which change events read from the database log are placed before they are written to Kafka. This queue can provide backpressure to the redo log reader when, for example, writes to Kafka are slower or if Kafka is not available. Events that appear in the queue are not included in the offsets periodically recorded by this connector. Defaults to 8192, and should always be larger than the maximum batch size specified in the `max.batch.size` property.
|[[oracle-property-max-batch-size]]<<oracle-property-max-batch-size, `max.batch.size`>>
|`2048`
|Positive integer value that specifies the maximum size of each batch of events that should be processed during each iteration of this connector. Defaults to 2048.
|[[oracle-property-max-queue-size-in-bytes]]<<oracle-property-max-queue-size-in-bytes, `max.queue.size.in.bytes`>>
|`0`
|Long value for the maximum size in bytes of the blocking queue. The feature is disabled by default; it is activated by setting a positive long value.
|[[oracle-property-poll-interval-ms]]<<oracle-property-poll-interval-ms, `poll.interval.ms`>>
|`1000`
|Positive integer value that specifies the number of milliseconds the connector should wait during each iteration for new change events to appear. Defaults to 1000 milliseconds, or 1 second.
|[[oracle-property-tombstones-on-delete]]<<oracle-property-tombstones-on-delete, `tombstones.on.delete`>>
|`true`
| Controls whether a tombstone event should be generated after a delete event. +
When `true` the delete operations are represented by a delete event and a subsequent tombstone event. When `false` only a delete event is sent. +
Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record got deleted.
|[[oracle-property-message-key-columns]]<<oracle-property-message-key-columns, `message.key.columns`>>
|_empty string_
| A semicolon-separated list of regular expressions that match fully-qualified tables and columns to map a primary key. +
Each item (regular expression) must match the `<fully-qualified table>:<a comma-separated list of columns>` representing the custom key. +
Fully-qualified tables could be defined as _pdbName_._schemaName_._tableName_.
|[[oracle-property-column-truncate-to-length-chars]]<<oracle-property-column-truncate-to-length-chars, `column.truncate.to._length_.chars`>>
|_n/a_
|An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be truncated in the change event message values if the field values are longer than the specified number of characters. Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer. Fully-qualified names for columns are of the form _pdbName_._schemaName_._tableName_._columnName_.
|[[oracle-property-column-mask-with-length-chars]]<<oracle-property-column-mask-with-length-chars, `column.mask.with._length_.chars`>>
|_n/a_
|An optional comma-separated list of regular expressions that match the fully-qualified names of character-based columns whose values should be replaced in the change event message values with a field value consisting of the specified number of asterisk (`*`) characters. Multiple properties with different lengths can be used in a single configuration, although in each the length must be a positive integer or zero. Fully-qualified names for columns are of the form _pdbName_._schemaName_._tableName_._columnName_.
|[[oracle-property-column-propagate-source-type]]<<oracle-property-column-propagate-source-type, `column.propagate.source.type`>>
|_n/a_
|An optional comma-separated list of regular expressions that match the fully-qualified names of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages.
The schema parameters `pass:[_]pass:[_]debezium.source.column.type`, `pass:[_]pass:[_]debezium.source.column.length` and `pass:[_]pass:[_]debezium.source.column.scale` will be used to propagate the original type name and length (for variable-width types), respectively.
Useful to properly size corresponding columns in sink databases.
Fully-qualified names for columns are of the form _tableName_._columnName_, or _schemaName_._tableName_._columnName_.
|[[oracle-property-datatype-propagate-source-type]]<<oracle-property-datatype-propagate-source-type, `datatype.propagate.source.type`>>
|_n/a_
|An optional comma-separated list of regular expressions that match the database-specific data type name of columns whose original type and length should be added as a parameter to the corresponding field schemas in the emitted change messages.
The schema parameters `pass:[_]pass:[_]debezium.source.column.type`, `pass:[_]pass:[_]debezium.source.column.length` and `pass:[_]pass:[_]debezium.source.column.scale` will be used to propagate the original type name and length (for variable-width types), respectively.
Useful to properly size corresponding columns in sink databases.
Fully-qualified data type names are of the form _tableName_._typeName_, or _schemaName_._tableName_._typeName_.
See the {link-prefix}:{link-oracle-connector}#oracle-data-types[list of Oracle-specific data type names].
|[[oracle-property-heartbeat-interval-ms]]<<oracle-property-heartbeat-interval-ms, `heartbeat.interval.ms`>>
|`0`
|Controls how frequently heartbeat messages are sent. +
This property contains an interval in milliseconds that defines how frequently the connector sends messages into a heartbeat topic.
This can be used to monitor whether the connector is still receiving change events from the database.
You also should leverage heartbeat messages in cases where only records in non-captured tables are changed for a longer period of time.
In such situation the connector would proceed to read the log from the database but never emit any change messages into Kafka,
which in turn means that no offset updates will be committed to Kafka.
This will cause the redo log files to be retained by the database longer than needed
(as the connector actually has processed them already but never got a chance to flush the latest retrieved SCN to the database)
and also may result in more change events to be re-sent after a connector restart.
Set this parameter to `0` to not send heartbeat messages at all. +
Disabled by default.
|[[oracle-property-heartbeat-topics-prefix]]<<oracle-property-heartbeat-topics-prefix, `heartbeat.topics.prefix`>>
|`__debezium-heartbeat`
|Controls the naming of the topic to which heartbeat messages are sent. +
The topic is named according to the pattern `<heartbeat.topics.prefix>.<server.name>`.
|[[oracle-property-snapshot-delay-ms]]<<oracle-property-snapshot-delay-ms, `snapshot.delay.ms`>>
|
|An interval in milliseconds that the connector should wait before taking a snapshot after starting up; +
Can be used to avoid snapshot interruptions when starting multiple connectors in a cluster, which may cause re-balancing of connectors.
|[[oracle-property-snapshot-fetch-size]]<<oracle-property-snapshot-fetch-size, `snapshot.fetch.size`>>
|`2000`
|Specifies the maximum number of rows that should be read in one go from each table while taking a snapshot.
The connector will read the table contents in multiple batches of this size. Defaults to 2000.
|[[oracle-property-sanitize-field-names]]<<oracle-property-sanitize-field-names, `sanitize.field.names`>>
|`true` when connector configuration explicitly specifies the `key.converter` or `value.converter` parameters to use Avro, otherwise defaults to `false`.
|Whether field names will be sanitized to adhere to Avro naming requirements.
See {link-prefix}:{link-avro-serialization}#avro-naming[Avro naming] for more details.
|[[oracle-property-provide-transaction-metadata]]<<oracle-property-provide-transaction-metadata, `provide.transaction.metadata`>>
|`false`
|When set to `true` {prodname} generates events with transaction boundaries and enriches data events envelope with transaction metadata.
See {link-prefix}:{link-oracle-connector}#oracle-transaction-metadata[Transaction Metadata] for additional details.
|[[oracle-property-log-mining-history-recorder-class]]<<oracle-property-log-mining-history-recorder-class, `log.mining.history.recorder.class`>>
|
|The fully-qualified class name of an implementation of `HistoryReader` to be used during LogMiner streaming.
|[[oracle-property-database-history-retention-hours]]<<oracle-property-database-history-retention-hours, `database.history.retention.hours`>>
|`0`
|The number of hours to retain entries in the log mining history table.
When set to `0`, log mining history is disabled.
|===