DBZ-695 Table format changes

parent 7142e6124f
commit bf9cbac5ab
@@ -847,19 +847,15 @@ The following table describes how the connector maps each of the Db2 data types
 Here, the _literal type_ describes how the value is literally represented using Kafka Connect schema types, namely `INT8`, `INT16`, `INT32`, `INT64`, `FLOAT32`, `FLOAT64`, `BOOLEAN`, `STRING`, `BYTES`, `ARRAY`, `MAP`, and `STRUCT`.
 The _semantic type_ describes how the Kafka Connect schema captures the _meaning_ of the field using the name of the Kafka Connect schema for the field.
 
-[cols="20%a,15%a,30%a,35%a",width=100,options="header,footer",role="table table-bordered table-striped code-wordbreak-col3 code-wordbreak-col4"]
+[cols="20%a,15%a,30%a,35%a"]
 |=======================
-|Db2 Data Type
-|Literal type (schema type)
-|Semantic type (schema name)
-|Notes
+|Db2 Data Type |Literal type (schema type) |Semantic type (schema name) |Notes
 
 |`BOOLEAN`
 |`BOOLEAN`
 |n/a
 |
 
 |`BIGINT`
 |`INT64`
 |n/a
@@ -870,13 +866,11 @@ The _semantic type_ describes how the Kafka Connect schema captures the _meaning
 |n/a
 |
 
 |`BLOB`
 |`BYTES`
 |n/a
 |
 
 |`CHAR[(N)]`
 |`STRING`
 |n/a
@@ -892,19 +886,16 @@ The _semantic type_ describes how the Kafka Connect schema captures the _meaning
 |`io.debezium.time.Date`
 | A string representation of a timestamp without timezone information
 
 |`DECFLOAT`
 |`BYTES`
 |`org.apache.kafka.connect.data.Decimal`
 |
 
 |`DECIMAL`
 |`BYTES`
 |`org.apache.kafka.connect.data.Decimal`
 |
 
 |`DBCLOB`
 |`STRING`
 |n/a
@@ -916,20 +907,16 @@ The _semantic type_ describes how the Kafka Connect schema captures the _meaning
 |n/a
 |
 
 |`INTEGER`
 |`INT32`
 |n/a
 |
 
 |`REAL`
 |`FLOAT32`
 |n/a
 |
 
 |`SMALLINT`
 |`INT16`
 |n/a
@@ -940,13 +927,11 @@ The _semantic type_ describes how the Kafka Connect schema captures the _meaning
 |`io.debezium.time.Time`
 | A string representation of a time without timezone information
 
 |`TIMESTAMP`
 |`INT64`
 |`io.debezium.time.MicroTimestamp`
 | A string representation of a timestamp without timezone information
 
 |`VARBINARY`
 |`BYTES`
 |n/a
@@ -962,14 +947,10 @@ The _semantic type_ describes how the Kafka Connect schema captures the _meaning
 |n/a
 |
 
 |`XML`
 |`STRING`
 |`io.debezium.data.Xml`
 |Contains the string representation of an XML document
 
 |=======================
 
 Other data type mappings are described in the following sections.
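The table above pairs each Db2 type with a literal (Kafka Connect schema type) and an optional semantic type (schema name). A minimal consumer-side sketch of that lookup, using a few rows copied from the table (this is illustrative only, not Debezium code):

```python
# Illustrative sketch (not Debezium code): a consumer-side lookup mirroring a
# few rows of the mapping table above. The literal type is the Kafka Connect
# schema type; the semantic type is the schema name (None where the table
# says n/a).
DB2_TYPE_MAP = {
    "BOOLEAN":   ("BOOLEAN", None),
    "BIGINT":    ("INT64", None),
    "BLOB":      ("BYTES", None),
    "TIMESTAMP": ("INT64", "io.debezium.time.MicroTimestamp"),
    "XML":       ("STRING", "io.debezium.data.Xml"),
}

def describe_field(db2_type: str) -> str:
    """Return a human-readable description of how a Db2 column is serialized."""
    literal, semantic = DB2_TYPE_MAP[db2_type.upper()]
    if semantic is None:
        return f"{db2_type}: literal type {literal}, no semantic type"
    return f"{db2_type}: literal type {literal}, semantic type {semantic}"

print(describe_field("XML"))
```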
@@ -984,12 +965,9 @@ Passing the default value helps though with satisfying the compatibility rules w
 
 Other than Db2's `DATETIMEOFFSET` data type (which contains time zone information), the other temporal types depend on the value of the `time.precision.mode` configuration property. When the `time.precision.mode` configuration property is set to `adaptive` (the default), the connector determines the literal type and semantic type for the temporal types based on the column's data type definition, so that events _exactly_ represent the values in the database:
 
-[cols="20%a,15%a,30%a,35%a",width=150,options="header,footer",role="table table-bordered table-striped code-wordbreak-col3 code-wordbreak-col4"]
+[cols="20%a,15%a,30%a,35%a"]
 |=======================
-|Db2 Data Type
-|Literal type (schema type)
-|Semantic type (schema name)
-|Notes
+|Db2 Data Type |Literal type (schema type) |Semantic type (schema name) |Notes
 
 |`DATE`
 |`INT32`
@@ -1035,17 +1013,13 @@ Other than Db2's `DATETIMEOFFSET` data type (which contain time zone information
 |`INT64`
 |`io.debezium.time.NanoTimestamp`
 | Represents the number of nanoseconds past epoch, and does not include timezone information.
 
 |=======================
 
 When the `time.precision.mode` configuration property is set to `connect`, the connector uses the predefined Kafka Connect logical types. This may be useful when consumers only know about the built-in Kafka Connect logical types and are unable to handle variable-precision time values. On the other hand, since Db2 supports tenths-of-microsecond precision, the events generated by a connector with the `connect` time precision mode will *result in a loss of precision* when the database column has a _fractional second precision_ value greater than 3:
 
-[cols="20%a,15%a,30%a,35%a",width=150,options="header,footer",role="table table-bordered table-striped code-wordbreak-col3 code-wordbreak-col4"]
+[cols="20%a,15%a,30%a,35%a"]
 |=======================
-|Db2 Data Type
-|Literal type (schema type)
-|Semantic type (schema name)
-|Notes
+|Db2 Data Type |Literal type (schema type) |Semantic type (schema name) |Notes
 
 |`DATE`
 |`INT32`
@@ -1071,7 +1045,6 @@ When the `time.precision.mode` configuration property is set to `connect`, then
 |`INT64`
 |`org.apache.kafka.connect.data.Timestamp`
 | Represents the number of milliseconds since epoch, and does not include timezone information. Db2 allows `P` to be in the range 0-7 to store up to tenths-of-microsecond precision, though this mode results in a loss of precision when `P` > 3.
 
 |=======================
 
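The precision loss described above can be sketched numerically: `connect` mode stores milliseconds since epoch, so any fractional-second digits beyond the third are truncated. A hypothetical illustration (not Debezium code):

```python
from datetime import datetime, timezone

def to_connect_timestamp_millis(micros_since_epoch: int) -> int:
    # org.apache.kafka.connect.data.Timestamp carries millisecond precision,
    # so sub-millisecond digits are discarded by the integer division.
    return micros_since_epoch // 1000

# A value with microsecond precision: 2020-01-01 00:00:00.123456 UTC
base_seconds = int(datetime(2020, 1, 1, tzinfo=timezone.utc).timestamp())
micros = base_seconds * 1_000_000 + 123_456
millis = to_connect_timestamp_millis(micros)

# The trailing 456 microseconds are lost in `connect` mode.
assert micros - millis * 1000 == 456
```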
 [[timestamp-values]]
@@ -1085,12 +1058,9 @@ Note that the timezone of the JVM running Kafka Connect and Debezium does not af
 
 ==== Decimal values
 
-[cols="15%a,15%a,35%a,35%a",width=100,options="header,footer",role="table table-bordered table-striped code-wordbreak-col3 code-wordbreak-col4"]
+[cols="15%a,15%a,35%a,35%a"]
 |=======================
-|Db2 Data Type
-|Literal type (schema type)
-|Semantic type (schema name)
-|Notes
+|Db2 Data Type |Literal type (schema type) |Semantic type (schema name) |Notes
 
 |`NUMERIC[(P[,S])]`
 |`BYTES`
@@ -1115,7 +1085,6 @@ The `connect.decimal.precision` schema parameter contains an integer representin
 |`org.apache.kafka.connect.data.Decimal`
 |The `scale` schema parameter contains an integer representing how many digits the decimal point was shifted.
 The `connect.decimal.precision` schema parameter contains an integer representing the precision of the given decimal value.
 
 |=======================
 
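The encoding behind the `BYTES` literal type and the `scale` schema parameter described above can be sketched as follows: the unscaled value is serialized as big-endian two's-complement bytes, and the scale travels in the schema. This is an illustrative sketch, not Debezium code:

```python
from decimal import Decimal

# Kafka Connect's Decimal logical type serializes the unscaled integer value
# as big-endian two's-complement bytes; the `scale` parameter says how many
# digits the decimal point was shifted.
def encode_decimal(value: Decimal, scale: int) -> bytes:
    unscaled = int(value.scaleb(scale))  # shift the decimal point right by `scale`
    length = max(1, (unscaled.bit_length() + 8) // 8)  # leave room for the sign bit
    return unscaled.to_bytes(length, byteorder="big", signed=True)

def decode_decimal(raw: bytes, scale: int) -> Decimal:
    unscaled = int.from_bytes(raw, byteorder="big", signed=True)
    return Decimal(unscaled).scaleb(-scale)

raw = encode_decimal(Decimal("123.45"), scale=2)
assert decode_decimal(raw, scale=2) == Decimal("123.45")
```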
 [[deploying-a-connector]]
@@ -1193,11 +1162,9 @@ Kafka, Zookeeper, and Kafka Connect all have xref:operations/monitoring.adoc[bui
 
 ===== *MBean: debezium.db2:type=connector-metrics,context=snapshot,server=_<database.server.name>_*
 
-[cols="30%a,10%a,60%a",width=100,options="header,footer",role="table table-bordered table-striped"]
+[cols="30%a,10%a,60%a"]
 |=======================
-|Attribute Name
-|Type
-|Description
+|Attribute Name |Type |Description
 
 |`LastEvent`
 |`string`
|
||||
|`RowsScanned`
|
||||
|`Map<String, Long>`
|
||||
|Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table.
|
||||
|
||||
|=======================
|
||||
|
||||
|
||||
@@ -1264,11 +1230,9 @@ Kafka, Zookeeper, and Kafka Connect all have xref:operations/monitoring.adoc[bui
 
 ===== *MBean: debezium.db2:type=connector-metrics,context=streaming,server=_<database.server.name>_*
 
-[cols="30%a,10%a,60%a",width=100,options="header,footer",role="table table-bordered table-striped"]
+[cols="30%a,10%a,60%a"]
 |=======================
-|Attribute Name
-|Type
-|Description
+|Attribute Name |Type |Description
 
 |`LastEvent`
 |`string`
@@ -1317,7 +1281,6 @@ Kafka, Zookeeper, and Kafka Connect all have xref:operations/monitoring.adoc[bui
 |`LastTransactionId`
 |`string`
 |Transaction identifier of the last processed transaction.
 
 |=======================
 
 [[monitoring-schema-history]]
@@ -1326,11 +1289,9 @@ Kafka, Zookeeper, and Kafka Connect all have xref:operations/monitoring.adoc[bui
 
 ===== *MBean: debezium.db2:type=connector-metrics,context=schema-history,server=_<database.server.name>_*
 
-[cols="30%a,10%a,60%a",width=100,options="header,footer",role="table table-bordered table-striped"]
+[cols="30%a,10%a,60%a"]
 |=======================
-|Attribute Name
-|Type
-|Description
+|Attribute Name |Type |Description
 
 |`Status`
 |`string`
@@ -1363,7 +1324,6 @@ Kafka, Zookeeper, and Kafka Connect all have xref:operations/monitoring.adoc[bui
 |`LastAppliedChange`
 |`string`
 |The string representation of the last applied change.
 
 |=======================
 
@@ -1373,11 +1333,9 @@ Kafka, Zookeeper, and Kafka Connect all have xref:operations/monitoring.adoc[bui
 
 The following configuration properties are _required_ unless a default value is available.
 
-[cols="35%a,10%a,55%a",options="header,footer",role="table table-bordered table-striped code-wordbreak-col"]
+[cols="35%a,10%a,55%a"]
 |=======================
-|Property
-|Default
-|Description
+|Property |Default |Description
 
 |`name`
 |
@@ -1454,7 +1412,6 @@ The schema parameters `pass:[_]pass:[_]debezium.source.column.type`, `pass:[_]pa
 Useful to properly size corresponding columns in sink databases.
 Fully-qualified names for columns are of the form _schemaName_._tableName_._columnName_.
 
 |`message.key.columns`
 |_empty string_
 | A semicolon-separated list of regular expressions that match fully-qualified tables and columns to map a primary key. +
@@ -1464,11 +1421,9 @@ Fully-qualified tables could be defined as `DB_NAME.TABLE_NAME` or `SCHEMA_NAME.
 
 The following _advanced_ configuration properties have good defaults that will work in most situations and therefore rarely need to be specified in the connector's configuration.
 
-[cols="35%a,10%a,55%a",width=100,options="header,footer",role="table table-bordered table-striped code-wordbreak-col"]
+[cols="35%a,10%a,55%a"]
 |=======================
-|Property
-|Default
-|Description
+|Property |Default |Description
 
 |`snapshot.mode`
 |_initial_
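The `message.key.columns` matching described earlier (a semicolon-separated list of table patterns mapped to key columns) can be sketched as below. The `table:col1,col2` entry syntax and the helper names are assumptions for illustration; this is not the connector's implementation:

```python
import re

# Hypothetical sketch of `message.key.columns` matching. Assumed entry syntax:
# "<fully-qualified-table-regex>:<col>[,<col>...]" separated by semicolons.
def parse_key_mappings(value: str) -> list:
    mappings = []
    for entry in filter(None, value.split(";")):
        table_pattern, columns = entry.split(":")
        # Anchor the pattern so it must match the whole table name.
        mappings.append((re.compile(table_pattern + r"\Z"), columns.split(",")))
    return mappings

def key_columns_for(table: str, mappings) -> list:
    for pattern, columns in mappings:
        if pattern.match(table):
            return columns
    return []  # fall back to the table's actual primary key

mappings = parse_key_mappings("inventory.customers:id;(.*).purchaseorders:pk3,pk4")
assert key_columns_for("inventory.customers", mappings) == ["id"]
```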
@@ -1551,7 +1506,6 @@ See xref:configuration/avro.adoc#names[Avro naming] for more details.
 |When set to `true`, Debezium generates events with transaction boundaries and enriches the data event envelope with transaction metadata.
 
 See link:#transaction-metadata[Transaction Metadata] for additional details.
 
 |=======================
 
 The connector also supports _pass-through_ configuration properties that are used when creating the Kafka producer and consumer. Specifically, all connector configuration properties that begin with the `database.history.producer.` prefix are used (without the prefix) when creating the Kafka producer that writes to the database history, and all those that begin with the prefix `database.history.consumer.` are used (without the prefix) when creating the Kafka consumer that reads the database history upon connector startup.
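The pass-through behavior described above amounts to filtering the connector configuration by prefix and stripping that prefix. A minimal sketch (illustrative only, not Debezium code):

```python
# Illustrative sketch of the pass-through behavior described above: properties
# with the `database.history.producer.` prefix are forwarded, minus the
# prefix, to the Kafka producer configuration (and likewise for the consumer).
def passthrough(config: dict, prefix: str) -> dict:
    return {k[len(prefix):]: v for k, v in config.items() if k.startswith(prefix)}

connector_config = {
    "database.history.producer.security.protocol": "SSL",
    "database.history.consumer.security.protocol": "SSL",
    "snapshot.mode": "initial",
}
producer_config = passthrough(connector_config, "database.history.producer.")
assert producer_config == {"security.protocol": "SSL"}
```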