DBZ-2105 Updated AsciiDoc markup for tables to be only what is needed

TovaCohen 2020-05-26 13:17:56 -04:00 committed by Chris Cranford
parent 2a5557b61d
commit 6c13e8265e
15 changed files with 154 additions and 156 deletions

View File

@ -89,8 +89,8 @@ value.op == 'u' ? 'updates' : null
[[content-based-router-configuration-options]]
== Configuration options
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -111,4 +111,4 @@ value.op == 'u' ? 'updates' : null
|`keep`
|Prescribes how the transformation should handle `null` (tombstone) messages. The options are: `keep` (the default) to pass the message through, `drop` to remove the messages completely, or `evaluate` to run the message through the topic expression.
-|=======================
+|===
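
For orientation, here is a minimal sketch of how this transformation might be wired into a connector configuration, shown in properties format. Apart from the `keep`/`drop`/`evaluate` options and the routing expression shown above, the class and property names are assumptions and should be verified against the full option table:

[source,properties]
----
# Hypothetical registration of the content-based router SMT
# (class and property names are assumptions, not taken from this excerpt)
transforms=route
transforms.route.type=io.debezium.transforms.ContentBasedRouter
transforms.route.language=jsr223.groovy
# Route update events to an 'updates' topic; all other events keep their default topic
transforms.route.topic.expression=value.op == 'u' ? 'updates' : null
# Tombstone handling: keep (default), drop, or evaluate
transforms.route.null.handling.mode=keep
----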

View File

@ -93,8 +93,8 @@ value.op == 'u' && value.before.id == 2
[[filter-configuration-options]]
== Configuration options
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -115,4 +115,4 @@ value.op == 'u' && value.before.id == 2
|`keep`
|Prescribes how the transformation should handle `null` (tombstone) messages. The options are: `keep` (the default) to pass the message through, `drop` to remove the messages completely or `evaluate` to run the message through the condition expression.
-|=======================
+|===

View File

@ -83,8 +83,8 @@ This result was achieved with the {link-prefix}:{link-outbox-event-router}#outbo
[[outbox-event-router-configuration-options]]
=== Configuration options
-[cols="30%a,10%a,10%a,50%a",options="header"]
-|=======================
+[cols="30%a,10%a,10%a,50%a"]
+|===
|Property
|Default
|Group
@ -149,7 +149,7 @@ This result was achieved with the {link-prefix}:{link-outbox-event-router}#outbo
|`warn`
|{prodname}
|While {prodname} is monitoring the table, it does not expect to see 'update' row events. If one occurs, this transform can log it as a warning or an error, or stop processing. The options are `warn`, `error`, and `fatal`.
-|=======================
+|===
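
As a quick illustration of how this transformation is attached to a connector, a minimal sketch in properties format follows; the SMT class name is an assumption here and is not taken from this diff:

[source,properties]
----
# Hypothetical snippet enabling the outbox event router SMT
transforms=outbox
transforms.outbox.type=io.debezium.transforms.outbox.EventRouter
# With the defaults above, unexpected 'update' row events on the outbox table are logged as warnings
----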
=== Default table columns
@ -168,8 +168,8 @@ payload | jsonb |
After observing all those pieces we can see what the default configuration does:
-[cols="30%a,70%a",options="header"]
-|=======================
+[cols="30%a,70%a"]
+|===
|Table Column
|Effect
@ -184,7 +184,7 @@ After observing all those pieces we can see what the default configuration does:
|`payload`
|The JSON representation of the event itself, becomes either part of the message as `payload` or if other metadata including `eventType` are delivered as headers then the payload becomes the message itself without an encapsulation in an envelope
-|=======================
+|===
=== Basic configuration

View File

@ -633,8 +633,8 @@ As described above, the Cassandra connector represents the changes to rows with
The following table describes how the connector maps each of the Cassandra data types to a Kafka Connect data type.
-[cols="30%a, 30%a, 40%a",options="header"]
-|=======================
+[cols="30%a, 30%a, 40%a"]
+|===
|Cassandra Data Type
|Literal Type (Schema Type)
|Semantic Type (Schema Name)
@ -743,7 +743,7 @@ The following table describes how the connector maps each of the Cassandra data
|`int64`
|`io.debezium.time.NanoDuration` (an approximate representation of the duration value in nano-seconds)
-|=======================
+|===
**TODO**: add logical types
@ -824,8 +824,8 @@ Cassandra connector has built-in support for JMX metrics. The Cassandra driver a
[[cassandra-snapshot-metrics]]
==== Snapshot Metrics
-[cols="30%a,10%a,60%a",options="header"]
-|=======================
+[cols="30%a,10%a,60%a"]
+|===
|Attribute Name
|Type
|Description
@ -857,13 +857,13 @@ Cassandra connector has built-in support for JMX metrics. The Cassandra driver a
|`rows-scanned`
|`Map<String, Long>`
|Map containing the number of rows scanned for each table in the snapshot. Tables are incrementally added to the Map during processing. Updates every 10,000 rows scanned and upon completing a table.
-|=======================
+|===
[[cassandra-commitlog-metrics]]
==== Commitlog Metrics
-[cols="30%a,10%a,60%a",options="header"]
-|=======================
+[cols="30%a,10%a,60%a"]
+|===
|Attribute Name
|Type
|Description
@ -884,14 +884,14 @@ Cassandra connector has built-in support for JMX metrics. The Cassandra driver a
|`long`
|The number of unrecoverable errors while processing commit logs.
-|=======================
+|===
[[cassandra-connector-properties]]
=== Connector properties
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -1007,7 +1007,7 @@ refreshing the cached Cassandra table schemas.
|Whether field names will be sanitized to adhere to Avro naming requirements.
See {link-prefix}:{link-avro-serialization}#avro-naming[Avro naming] for more details.
-|=======================
+|===
The connector also supports pass-through configuration properties that are used when creating the Kafka producer. Specifically, all connector configuration properties that begin with the `kafka.producer.` prefix are used (without the prefix) when creating the Kafka producer that writes events to Kafka.
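
For example, a small illustrative fragment (the property values are placeholders; `bootstrap.servers`, `acks`, and `compression.type` are standard Kafka producer settings):

[source,properties]
----
# Each kafka.producer.* property is handed to the Kafka producer with the prefix removed,
# so these become bootstrap.servers, acks, and compression.type on the producer.
kafka.producer.bootstrap.servers=kafka:9092
kafka.producer.acks=all
kafka.producer.compression.type=lz4
----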

View File

@ -1000,7 +1000,7 @@ Here, the _literal type_ describes how the value is literally represented using
The _semantic type_ describes how the Kafka Connect schema captures the _meaning_ of the field using the name of the Kafka Connect schema for the field.
[cols="20%a,15%a,30%a,35%a"]
-|=======================
+|===
|Db2 Data Type |Literal type (schema type) |Semantic type (schema name) |Notes
|`BOOLEAN`
@ -1103,7 +1103,7 @@ The _semantic type_ describes how the Kafka Connect schema captures the _meaning
|`STRING`
|`io.debezium.data.Xml`
|Contains the string representation of a XML document
-|=======================
+|===
Other data type mappings are described in the following sections.
@ -1118,7 +1118,7 @@ Passing the default value helps though with satisfying the compatibility rules w
Other than Db2's `DATETIMEOFFSET` data type (which contains time zone information), the other temporal types depend on the value of the `time.precision.mode` configuration property. When the `time.precision.mode` configuration property is set to `adaptive` (the default), the connector determines the literal type and semantic type for the temporal types based on the column's data type definition so that events _exactly_ represent the values in the database:
[cols="20%a,15%a,30%a,35%a"]
-|=======================
+|===
|Db2 Data Type |Literal type (schema type) |Semantic type (schema name) |Notes
|`DATE`
@ -1165,12 +1165,12 @@ Other than Db2's `DATETIMEOFFSET` data type (which contain time zone information
|`INT64`
|`io.debezium.time.NanoTimestamp`
| Represents the number of nanoseconds past epoch, and does not include timezone information.
-|=======================
+|===
When the `time.precision.mode` configuration property is set to `connect`, then the connector will use the predefined Kafka Connect logical types. This may be useful when consumers only know about the built-in Kafka Connect logical types and are unable to handle variable-precision time values. On the other hand, since Db2 supports tenth of microsecond precision, the events generated by a connector with the `connect` time precision mode will *result in a loss of precision* when the database column has a _fractional second precision_ value greater than 3:
[cols="20%a,15%a,30%a,35%a"]
-|=======================
+|===
|Db2 Data Type |Literal type (schema type) |Semantic type (schema name) |Notes
|`DATE`
@ -1197,7 +1197,7 @@ When the `time.precision.mode` configuration property is set to `connect`, then
|`INT64`
|`org.apache.kafka.connect.data.Timestamp`
| Represents the number of milliseconds since epoch, and does not include timezone information. Db2 allows `P` to be in the range 0-7 to store up to tenth of microsecond precision, though this mode results in a loss of precision when `P` > 3.
-|=======================
+|===
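
As a quick illustration, a hedged configuration fragment that switches between the two modes described above; only the `time.precision.mode` property and its values are taken from this text, the surrounding form is assumed:

[source,properties]
----
# Default: temporal columns are emitted with the precision defined in the database
time.precision.mode=adaptive
# Alternative: use only the predefined Kafka Connect logical types,
# losing precision when the column's fractional second precision is greater than 3
#time.precision.mode=connect
----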
[[db2-timestamp-values]]
===== Timestamp values
@ -1211,7 +1211,7 @@ Note that the timezone of the JVM running Kafka Connect and {prodname} does not
==== Decimal values
[cols="15%a,15%a,35%a,35%a"]
-|=======================
+|===
|Db2 Data Type |Literal type (schema type) |Semantic type (schema name) |Notes
|`NUMERIC[(P[,S])]`
@ -1237,7 +1237,7 @@ The `connect.decimal.precision` schema parameter contains an integer representin
|`org.apache.kafka.connect.data.Decimal`
|The `scale` schema parameter contains an integer representing how many digits the decimal point was shifted.
The `connect.decimal.precision` schema parameter contains an integer representing the precision of the given decimal value.
-|=======================
+|===
[[db2-deploying-a-connector]]
== Deploying a connector
@ -1344,7 +1344,7 @@ include::{partialsdir}/modules/cdc-all-connectors/r_connector-monitoring-schema-
The following configuration properties are _required_ unless a default value is available.
[cols="35%a,10%a,55%a"]
-|=======================
+|===
|Property |Default |Description
|[[db2-property-name]]<<db2-property-name, `name`>>
@ -1460,12 +1460,12 @@ See {link-prefix}:{link-db2-connector}#db2-data-types[Db2 data types] for the li
| A semicolon-separated list of regular expressions that match fully-qualified tables and columns to map a primary key. +
Each item (regular expression) must match the fully-qualified `<fully-qualified table>:<a comma-separated list of columns>` representing the custom key. +
Fully-qualified tables could be defined as _schemaName_._tableName_.
-|=======================
+|===
The following _advanced_ configuration properties have good defaults that will work in most situations and therefore rarely need to be specified in the connector's configuration.
[cols="35%a,10%a,55%a"]
-|=======================
+|===
|Property |Default |Description
|[[db2-property-snapshot-mode]]<<db2-property-snapshot-mode, `snapshot.mode`>>
@ -1556,7 +1556,7 @@ See {link-prefix}:{link-avro-serialization}#avro-naming[Avro naming] for more de
|When set to `true` {prodname} generates events with transaction boundaries and enriches data events envelope with transaction metadata.
See {link-prefix}:{link-db2-connector}#db2-transaction-metadata[Transaction Metadata] for additional details.
-|=======================
+|===
The connector also supports _pass-through_ configuration properties that are used when creating the Kafka producer and consumer. Specifically, all connector configuration properties that begin with the `database.history.producer.` prefix are used (without the prefix) when creating the Kafka producer that writes to the database history, and all those that begin with the prefix `database.history.consumer.` are used (without the prefix) when creating the Kafka consumer that reads the database history upon connector startup.
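
For example, a hedged fragment showing the pass-through mechanism; `security.protocol` is a standard Kafka client property used here purely as an illustration:

[source,properties]
----
# Properties with the database.history.producer. or database.history.consumer. prefix
# are passed (with the prefix stripped) to the Kafka clients for the database history topic.
database.history.producer.security.protocol=SSL
database.history.consumer.security.protocol=SSL
----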

View File

@ -263,7 +263,7 @@ This example used a document with an integer identifier, but any valid MongoDB d
different types will get encoded as the event key's payload:
[options="header",role="code-wordbreak-col2 code-wordbreak-col3"]
-|==========================================
+|===
|Type |MongoDB `_id` Value|Key's payload
|Integer |1234|`{ "id" : "1234" }`
|Float |12.34|`{ "id" : "12.34" }`
@ -271,7 +271,7 @@ different types will get encoded as the event key's payload:
|Document|{ "hi" : "kafka", "nums" : [10.0, 100.0, 1000.0] }|`{ "id" : "{\"hi\" : \"kafka\", \"nums\" : [10.0, 100.0, 1000.0]}" }`
|ObjectId|ObjectId("596e275826f08b2730779e1f")|`{ "id" : "{\"$oid\" : \"596e275826f08b2730779e1f\"}" }`
|Binary |BinData("a2Fma2E=",0)|`{ "id" : "{\"$binary\" : \"a2Fma2E=\", \"$type\" : \"00\"}" }`
-|==========================================
+|===
ifdef::community[]
[WARNING]
@ -780,7 +780,7 @@ The {prodname} MongoDB connector also provides the following custom streaming me
The following configuration properties are _required_ unless a default value is available.
[cols="35%a,10%a,55%a"]
-|=======================
+|===
|Property |Default |Description
|[[mongodb-property-name]]<<mongodb-property-name, `name`>>
@ -874,13 +874,13 @@ Can be used to avoid snapshot interruptions when starting multiple connectors in
The connector will read the collection contents in multiple batches of this size. +
Defaults to 0, which indicates that the server chooses an appropriate fetch size.
-|=======================
+|===
The following _advanced_ configuration properties have good defaults that will work in most situations and therefore rarely need to be specified in the connector's configuration.
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -953,7 +953,7 @@ endif::community[]
| comma-separated list of oplog operations that will be skipped during streaming.
The operations include: `i` for inserts, `u` for updates, and `d` for deletes.
By default, no operations are skipped.
-|=======================
+|===
[[mongodb-fault-tolerance]]
[[mongodb-when-things-go-wrong]]
@ -981,8 +981,8 @@ The attempts to reconnect are controlled by three properties:
Each delay is double that of the prior delay, up to the maximum delay. Given the default values, the following table shows the delay for each failed connection attempt and the total accumulated time before failure.
-[cols="30%a,30%a,40%a",options="header"]
-|=======================
+[cols="30%a,30%a,40%a"]
+|===
|Reconnection attempt number
|Delay before attempt, in seconds
|Total delay before attempt, in minutes and seconds
@ -1003,7 +1003,7 @@ Each delay is double that of the prior delay, up to the maximum delay. Given the
|14 |120|16:07
|15 |120|18:07
|16 |120|20:07
-|=======================
+|===
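
A sketch of the three reconnection properties referred to above, with values that reproduce the default schedule in this table; the property names themselves are assumptions and should be checked against the connector's property reference:

[source,properties]
----
# Assumed property names; the values match the defaults behind the table above
# (1 s initial delay, doubling up to a 120 s maximum, 16 attempts in total).
connect.backoff.initial.delay.ms=1000
connect.backoff.max.delay.ms=120000
connect.max.attempts=16
----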
=== Kafka Connect process stops gracefully

View File

@ -850,8 +850,8 @@ Please file a {jira-url}/browse/DBZ[JIRA issue] for any specific types you are m
[[oracle-character-values]]
==== Character Values
-[cols="20%a,15%a,30%a,35%a",options="header"]
-|=======================
+[cols="20%a,15%a,30%a,35%a"]
+|===
|Oracle Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -882,13 +882,13 @@ Please file a {jira-url}/browse/DBZ[JIRA issue] for any specific types you are m
|n/a
|
-|=======================
+|===
[[oracle-numeric-values]]
==== Numeric Values
-[cols="20%a,15%a,30%a,35%a",options="header"]
-|=======================
+[cols="20%a,15%a,30%a,35%a"]
+|===
|Oracle Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -956,7 +956,7 @@ If P - S >= 19, the column will be mapped to `BYTES` (`org.apache.kafka.connect{
|`io.debezium.data.VariableScaleDecimal`
|Contains a structure with two fields: `scale` of type `INT32` that contains the scale of the transferred value and `value` of type `BYTES` containing the original value in an unscaled form.
-|=======================
+|===
[[oracle-decimal-values]]
==== Decimal Values
@ -969,8 +969,8 @@ The last option for `decimal.handling.mode` configuration property is `string`.
[[oracle-temporal-values]]
==== Temporal Values
-[cols="20%a,15%a,30%a,35%a",options="header"]
-|=======================
+[cols="20%a,15%a,30%a,35%a"]
+|===
|Oracle Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1006,7 +1006,7 @@ The last option for `decimal.handling.mode` configuration property is `string`.
|`io.debezium.time.MicroDuration`
|The number of microseconds for a time interval, calculated using `365.25 / 12.0` as the average number of days per month
-|=======================
+|===
[[oracle-deploying-a-connector]]
== Deploying a Connector
@ -1090,8 +1090,8 @@ include::{partialsdir}/modules/cdc-all-connectors/r_connector-monitoring-schema-
The following configuration properties are _required_ unless a default value is available.
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -1275,4 +1275,4 @@ See {link-prefix}:{link-avro-serialization}#avro-naming[Avro naming] for more de
See {link-prefix}:{link-oracle-connector}#oracle-transaction-metadata[Transaction Metadata] for additional details.
-|=======================
+|===

View File

@ -1035,8 +1035,8 @@ Here, the _literal type_ describes how the value is literally represented using
The _semantic type_ describes how the Kafka Connect schema captures the _meaning_ of the field using the name of the Kafka Connect schema for the field.
-[cols="20%a,15%a,30%a,35%a",options="header"]
-|=======================
+[cols="20%a,15%a,30%a,35%a"]
+|===
|PostgreSQL Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1199,7 +1199,7 @@ The _semantic type_ describes how the Kafka Connect schema captures the _meaning
|`io.debezium.data.Enum`
|Contains the string representation of the PostgreSQL ENUM value. The set of allowed values are maintained in the schema parameter named `allowed`.
-|=======================
+|===
Other data type mappings are described in the following sections.
@ -1208,8 +1208,8 @@ Other data type mappings are described in the following sections.
Other than PostgreSQL's `TIMESTAMPTZ` and `TIMETZ` data types (which contain time zone information), the other temporal types depend on the value of the `time.precision.mode` configuration property. When the `time.precision.mode` configuration property is set to `adaptive` (the default), then the connector will determine the literal type and semantic type for the temporal types based on the column's data type definition so that events _exactly_ represent the values in the database:
-[cols="20%a,15%a,30%a,35%a",options="header"]
-|=======================
+[cols="20%a,15%a,30%a,35%a"]
+|===
|PostgreSQL Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1240,12 +1240,12 @@ Other than PostgreSQL's `TIMESTAMPTZ` and `TIMETZ` data types (which contain tim
|`io.debezium.time.MicroTimestamp`
| Represents the number of microseconds past epoch, and does not include timezone information.
-|=======================
+|===
When the `time.precision.mode` configuration property is set to `adaptive_time_microseconds`, then the connector will determine the literal type and semantic type for the temporal types based on the column's data type definition so that events _exactly_ represent the values in the database, except that all TIME fields will be captured as microseconds:
-[cols="20%a,15%a,30%a,35%a",options="header"]
-|=======================
+[cols="20%a,15%a,30%a,35%a"]
+|===
|PostgreSQL Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1271,12 +1271,12 @@ When the `time.precision.mode` configuration property is set to `adaptive_time_m
|`io.debezium.time.MicroTimestamp`
| Represents the number of microseconds past epoch, and does not include timezone information.
-|=======================
+|===
When the `time.precision.mode` configuration property is set to `connect`, then the connector will use the predefined Kafka Connect logical types. This may be useful when consumers only know about the built-in Kafka Connect logical types and are unable to handle variable-precision time values. On the other hand, since PostgreSQL supports microsecond precision, the events generated by a connector with the `connect` time precision mode will *result in a loss of precision* when the database column has a _fractional second precision_ value greater than 3:
-[cols="20%a,15%a,30%a,35%a",options="header"]
-|=======================
+[cols="20%a,15%a,30%a,35%a"]
+|===
|PostgreSQL Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1297,7 +1297,7 @@ When the `time.precision.mode` configuration property is set to `connect`, then
|`org.apache.kafka.connect.data.Timestamp`
| Represents the number of milliseconds since epoch, and does not include timezone information. PostgreSQL allows `P` to be in the range 0-6 to store up to microsecond precision, though this mode results in a loss of precision when `P` > 3.
-|=======================
+|===
[[postgresql-timestamp-values]]
===== `TIMESTAMP` values
@ -1314,8 +1314,8 @@ Note that the timezone of the JVM running Kafka Connect and {prodname} does not
When `decimal.handling.mode` configuration property is set to `precise`, then the connector will use the predefined Kafka Connect `org.apache.kafka.connect.data.Decimal` logical type for all `DECIMAL` and `NUMERIC` columns. This is the default mode.
-[cols="15%a,15%a,35%a,35%a",options="header"]
-|=======================
+[cols="15%a,15%a,35%a,35%a"]
+|===
|PostgreSQL Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1331,14 +1331,14 @@ When `decimal.handling.mode` configuration property is set to `precise`, then th
|`org.apache.kafka.connect.data.Decimal`
|The `scale` schema parameter contains an integer representing how many digits the decimal point was shifted.
-|=======================
+|===
There is an exception to this rule.
When the `NUMERIC` or `DECIMAL` types are used without any scale constraints then it means that the values coming from the database have a different (variable) scale for each value.
In this case a type `io.debezium.data.VariableScaleDecimal` is used and it contains both value and scale of the transferred value.
-[cols="15%a,15%a,35%a,35%a",options="header"]
-|=======================
+[cols="15%a,15%a,35%a,35%a"]
+|===
|PostgreSQL Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1354,12 +1354,12 @@ In this case a type `io.debezium.data.VariableScaleDecimal` is used and it conta
|`io.debezium.data.VariableScaleDecimal`
|Contains a structure with two fields: `scale` of type `INT32` that contains the scale of the transferred value and `value` of type `BYTES` containing the original value in an unscaled form.
-|=======================
+|===
However, when `decimal.handling.mode` configuration property is set to `double`, then the connector will represent all `DECIMAL` and `NUMERIC` values as Java double values and encodes them as follows:
-[cols="15%a,15%a,35%a,35%a",options="header"]
-|=======================
+[cols="15%a,15%a,35%a,35%a"]
+|===
|PostgreSQL Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1375,12 +1375,12 @@ However, when `decimal.handling.mode` configuration property is set to `double`,
|
|
-|=======================
+|===
The last option for `decimal.handling.mode` configuration property is `string`. In this case the connector will represent all `DECIMAL` and `NUMERIC` values as their formatted string representation and encodes them as follows:
-[cols="15%a,15%a,35%a,35%a",options="header"]
-|=======================
+[cols="15%a,15%a,35%a,35%a"]
+|===
|PostgreSQL Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1396,7 +1396,7 @@ The last option for `decimal.handling.mode` configuration property is `string`.
|
|
-|=======================
+|===
PostgreSQL supports `NaN` (not a number) special value to be stored in the `DECIMAL`/`NUMERIC` values. Only `string` and `double` modes are able to handle such values encoding them as either `Double.NaN` or string constant `NAN`.
@ -1405,8 +1405,8 @@ PostgreSQL supports `NaN` (not a number) special value to be stored in the `DECI
When `hstore.handling.mode` configuration property is set to `json` (the default), the connector will represent all `HSTORE` values as string-ified JSON values and encode them as follows:
-[cols="15%a,15%a,35%a,35%a",options="header"]
-|=======================
+[cols="15%a,15%a,35%a,35%a"]
+|===
|PostgreSQL Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1417,12 +1417,12 @@ When `hstore.handling.mode` configuration property is set to `json` (the default
|`io.debezium.data.Json`
| Example: output representation using the JSON converter is `{\"key\" : \"val\"}`
-|=======================
+|===
When `hstore.handling.mode` configuration property is set to `map`, then the connector will use the `MAP` schema type for all `HSTORE` columns.
-[cols="15%a,15%a,35%a,35%a",options="header"]
-|=======================
+[cols="15%a,15%a,35%a,35%a"]
+|===
|PostgreSQL Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1433,7 +1433,7 @@ When `hstore.handling.mode` configuration property is set to `map`, then the con
|
| Example: output representation using the JSON converter is `{"key" : "val"}`
-|=======================
+|===
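
Pulling the handling modes discussed above together, a hedged configuration fragment; the property names and values are the ones named in this text, while the fragment as a whole is only illustrative:

[source,properties]
----
# decimal.handling.mode: precise (default), double, or string
decimal.handling.mode=string
# hstore.handling.mode: json (default) or map
hstore.handling.mode=json
# time.precision.mode: adaptive (default), adaptive_time_microseconds, or connect
time.precision.mode=adaptive_time_microseconds
----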
[[postgresql-domain-types]]
==== PostgreSQL Domain Types
@ -1457,8 +1457,8 @@ When a column is defined using a domain type that extends another domain type th
PostgreSQL also has data types that can store IPv4, IPv6, and MAC addresses. It is better to use these instead of plain text types to store network addresses, because these types offer input error checking and specialized operators and functions.
-[cols="15%a,15%a,35%a,35%a",options="header"]
-|=======================
+[cols="15%a,15%a,35%a,35%a"]
+|===
|PostgreSQL Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1484,13 +1484,13 @@ PostgreSQL also have data types that can store IPv4, IPv6, and MAC addresses. It
|
|MAC addresses in EUI-64 format
-|=======================
+|===
===== PostGIS Types
The PostgreSQL connector also has full support for all of the http://postgis.net[PostGIS data types]
-[cols="20%a,15%a,30%a,35%a",options="header"]
-|=======================
+[cols="20%a,15%a,30%a,35%a"]
+|===
|PostGIS Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1515,7 +1515,7 @@ Please see http://www.opengeospatial.org/standards/sfa[Open Geospatial Consortiu
* `wkb (BYTES)` - a binary representation of the geometry object encoded in the Well-Known-Binary format.
Please see http://www.opengeospatial.org/standards/sfa[Open Geospatial Consortium Simple Features Access specification] for the format details.
-|=======================
+|===
[[postgresql-toasted-values]]
===== Toasted values
@ -1706,8 +1706,8 @@ include::{partialsdir}/modules/cdc-all-connectors/r_connector-monitoring-streami
The following configuration properties are _required_ unless a default value is available.
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -1898,13 +1898,13 @@ See the {link-prefix}:{link-postgresql-connector}#postgresql-data-types[list of
Each item (regular expression) must match the fully-qualified `<fully-qualified table>:<a comma-separated list of columns>` representing the custom key. +
Fully-qualified tables could be defined as _schemaName_._tableName_.
-|=======================
+|===
The following _advanced_ configuration properties have good defaults that will work in most situations and therefore rarely need to be specified in the connector's configuration.
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -2052,7 +2052,7 @@ See {link-prefix}:{link-postgresql-connector}#postgresql-toasted-values[Toasted
See {link-prefix}:{link-postgresql-connector}#postgresql-transaction-metadata[Transaction Metadata] for additional details.
-|=======================
+|===
The connector also supports _pass-through_ configuration properties that are used when creating the Kafka producer and consumer.

View File

@ -72,9 +72,9 @@ GO
Then enable _CDC_ for each table that you plan to monitor
[source,sql]
----
--- =========
+-- ====
-- Enable a Table Specifying Filegroup Option Template
--- =========
+-- ====
USE MyDB
GO
@ -90,9 +90,9 @@ GO
Verify that the user has access to the _CDC_ table.
[source, sql]
----
--- =========
+-- ====
-- Verify that the connector user has access; this query should not return an empty result
--- =========
+-- ====
EXEC sys.sp_cdc_help_change_data_capture
GO
@ -947,8 +947,8 @@ The following table describes how the connector maps each of the SQL Server data
Here, the _literal type_ describes how the value is literally represented using Kafka Connect schema types, namely `INT8`, `INT16`, `INT32`, `INT64`, `FLOAT32`, `FLOAT64`, `BOOLEAN`, `STRING`, `BYTES`, `ARRAY`, `MAP`, and `STRUCT`.
The _semantic type_ describes how the Kafka Connect schema captures the _meaning_ of the field using the name of the Kafka Connect schema for the field.
-[cols="20%a,15%a,30%a,35%a",options="header"]
-|=======================
+[cols="20%a,15%a,30%a,35%a"]
+|===
|SQL Server Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1029,7 +1029,7 @@ The _semantic type_ describes how the Kafka Connect schema captures the _meaning
|`io.debezium.time.ZonedTimestamp`
| A string representation of a timestamp with timezone information, where the timezone is GMT
-|=======================
+|===
Other data type mappings are described in the following sections.
@ -1045,8 +1045,8 @@ endif::community[]
Other than SQL Server's `DATETIMEOFFSET` data type (which contains time zone information), the other temporal types depend on the value of the `time.precision.mode` configuration property. When the `time.precision.mode` configuration property is set to `adaptive` (the default), the connector determines the literal type and semantic type for the temporal types based on the column's data type definition so that events _exactly_ represent the values in the database:
-[cols="20%a,15%a,30%a,35%a",options="header"]
-|=======================
+[cols="20%a,15%a,30%a,35%a"]
+|===
|SQL Server Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1097,12 +1097,12 @@ Other than SQL Server's `DATETIMEOFFSET` data type (which contain time zone info
|`io.debezium.time.NanoTimestamp`
| Represents the number of nanoseconds past epoch, and does not include timezone information.
-|=======================
+|===
When the `time.precision.mode` configuration property is set to `connect`, then the connector will use the predefined Kafka Connect logical types. This may be useful when consumers only know about the built-in Kafka Connect logical types and are unable to handle variable-precision time values. On the other hand, since SQL Server supports tenth of microsecond precision, the events generated by a connector with the `connect` time precision mode will *result in a loss of precision* when the database column has a _fractional second precision_ value greater than 3:
-[cols="20%a,15%a,30%a,35%a",options="header"]
-|=======================
+[cols="20%a,15%a,30%a,35%a"]
+|===
|SQL Server Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1133,7 +1133,7 @@ When the `time.precision.mode` configuration property is set to `connect`, then
|`org.apache.kafka.connect.data.Timestamp`
| Represents the number of milliseconds since epoch, and does not include timezone information. SQL Server allows `P` to be in the range 0-7 to store up to tenth of microsecond precision, though this mode results in a loss of precision when `P` > 3.
-|=======================
+|===
[[sqlserver-timestamp-values]]
===== Timestamp values
@ -1146,8 +1146,8 @@ Note that the timezone of the JVM running Kafka Connect and {prodname} does not
==== Decimal values
-[cols="15%a,15%a,35%a,35%a",options="header"]
-|=======================
+[cols="15%a,15%a,35%a,35%a"]
+|===
|SQL Server Data Type
|Literal type (schema type)
|Semantic type (schema name)
@ -1177,7 +1177,7 @@ The `connect.decimal.precision` schema parameter contains an integer representin
|The `scale` schema parameter contains an integer representing how many digits the decimal point was shifted.
The `connect.decimal.precision` schema parameter contains an integer representing the precision of the given decimal value.
-|=======================
+|===
[[sqlserver-deploying-a-connector]]
== Deploying the SQL Server connector
@ -1353,8 +1353,8 @@ include::{partialsdir}/modules/cdc-all-connectors/r_connector-monitoring-schema-
The following configuration properties are _required_ unless a default value is available.
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -1474,12 +1474,12 @@ See {link-prefix}:{link-sqlserver-connector}#sqlserver-data-types[] for the list
| A semicolon-separated list of regular expressions that match fully-qualified tables and columns to map a primary key. +
Each item (regular expression) must match the fully-qualified `<fully-qualified table>:<a comma-separated list of columns>` representing the custom key. +
Fully-qualified tables could be defined as _databaseName_._schemaName_._tableName_.
-|=======================
+|===
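
For example, a hedged sketch of the custom-key mapping described above; the property name `message.key.columns` and the table and column names are assumptions, only the `<fully-qualified table>:<comma-separated columns>` pattern comes from the text:

[source,properties]
----
# Assumed property name; each semicolon-separated entry follows the documented pattern
# databaseName.schemaName.tableName:comma-separated columns
message.key.columns=testDB.dbo.orders:order_id;testDB.dbo.customers:customer_id,email
----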
The following _advanced_ configuration properties have good defaults that will work in most situations and therefore rarely need to be specified in the connector's configuration.
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -1604,7 +1604,7 @@ Possible values include "Z", "UTC", offset values like "+02:00", short zone ids
See {link-prefix}:{link-sqlserver-connector}#sqlserver-transaction-metadata[Transaction Metadata] for additional details.
-|=======================
+|===
The connector also supports _pass-through_ configuration properties that are used when creating the Kafka producer and consumer. Specifically, all connector configuration properties that begin with the `database.history.producer.` prefix are used (without the prefix) when creating the Kafka producer that writes to the database history, and all those that begin with the prefix `database.history.consumer.` are used (without the prefix) when creating the Kafka consumer that reads the database history upon connector startup.

View File

@ -357,8 +357,8 @@ DebeziumEngine<RecordChangeEvent<SourceRecord>> engine = DebeziumEngine.create(C
The following configuration properties are _required_ unless a default value is available (for the sake of text formatting the package names of Java classes are replaced with `<...>`).
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -417,7 +417,7 @@ The default is a periodic commit policy based upon time intervals.
|`internal.value.converter`
|`<...>.JsonConverter`
|The Converter class that should be used to serialize and deserialize value data for offsets. The default is JSON converter.
-|=======================
+|===
== Handling Failures

View File

@ -119,8 +119,8 @@ Finally, it's also possible to use Avro for the entire envelope as well as the `
The following configuration options exist when usi
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -145,7 +145,7 @@ can be `json` or `avro`, if `serializer.type` is `json` or `avro`.
|`avro. \...`
|N/A
|Any configuration options to be passed through to the underlying converter when using Avro (the "avro." prefix will be removed)
-|=======================
+|===
The following shows an example configuration for using JSON as envelope format
(the default, so `value.converter.serializer.type` could also be omitted) and Avro as data content type:

View File

@ -126,8 +126,8 @@ The extension works out-of-the-box with a default configuration, but this config
=== Build time configuration options
-[cols="65%a,>12%a,>23%",options="header"]
-|=======================
+[cols="65%a,>12%a,>23%"]
+|===
|Configuration property
|Type
|Default
@ -235,7 +235,7 @@ e.g. `com.company.TheAttributeConverter`
|string
|
-|=======================
+|===
[NOTE]
====
@ -245,8 +245,8 @@ When not using the default values, be sure that the SMT configuration matches.
=== Runtime configuration options
-[cols="65%a,>15%a,>20%",options="header"]
-|=======================
+[cols="65%a,>15%a,>20%"]
+|===
|Configuration property
|Type
|Default
@ -259,4 +259,4 @@ This is used as a way to keep the table's underlying storage from growing over t
|boolean
|true
-|=======================
+|===

View File

@ -94,8 +94,8 @@ The deserializer behaviour is driven by the `from.field` configuration option an
[[serdes-configuration_options]]
=== Configuration options
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -109,4 +109,4 @@ The deserializer behaviour is driven by the `from.field` configuration option an
|`unknown.properties.ignored`
|`false`
|Determines when an unknown property is encountered whether it should be silently ignored or if a runtime exception should be thrown.
-|=======================
+|===

View File

@ -178,8 +178,8 @@ __ ____ __ _____ ___ __ ____ ______
The source configuration uses the same configuration properties that are described on the specific connector documentation pages (just with the `debezium.source` prefix), together with a few more specific ones that are necessary for running outside of Kafka Connect:
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -196,7 +196,7 @@ The source configuration uses the same configuration properties that are describ
|
|Defines how frequently the offsets are flushed into the file.
-|=======================
+|===
[id="debezium-format-configuration-options"]
=== Format configuration
@ -204,8 +204,8 @@ The source configuration uses the same configuration properties that are describ
The message output format can be configured for both key and value separately.
By default the output is in JSON format but an arbitrary implementation of Kafka Connect's `Converter` can be used.
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -226,7 +226,7 @@ By default the output is in JSON format but an arbitrary implementation of Kafka
|
|Configuration properties passed to the value converter.
-|=======================
+|===
[id="debezium-transformations-configuration-options"]
=== Transformation configuration
@ -235,8 +235,8 @@ Before the messages are delivered to the sink, they can run through a sequence o
The server supports https://cwiki.apache.org/confluence/display/KAFKA/KIP-66%3A+Single+Message+Transforms+for+Kafka+Connect[single message transformations] defined by Kafka Connect.
The configuration needs to contain the list of transformations, the implementation class for each transformation, and the configuration options for each transformation.
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -256,7 +256,7 @@ The configuration will need to contain the list of transformations, implementati
|
|Configuration properties passed to the transformation with name `<name>`.
-|=======================
+|===
=== Sink configuration
@ -272,8 +272,8 @@ The sink is selected by configuration property `debezium.sink.type`.
Amazon Kinesis is an implementation of a data streaming system with support for stream sharding and other techniques for high scalability.
Kinesis exposes a set of REST APIs and provides a Java SDK (among others) that is used to implement the sink.
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Property
|Default
|Description
@ -294,7 +294,7 @@ Kinesis exposes a set of REST APIs and provides a (not-only) Java SDK that is us
|`default`
|Kinesis does not support the notion of messages without a key, so this string is used as the message key for messages from tables without a primary key.
-|=======================
+|===
==== Injection points
@ -302,8 +302,8 @@ Kinesis exposes a set of REST APIs and provides a (not-only) Java SDK that is us
The Kinesis sink behaviour can be modified by custom logic that provides alternative implementations of specific functionalities.
When an alternative implementation is not available, the default one is used.
-[cols="35%a,10%a,55%a",options="header"]
-|=======================
+[cols="35%a,10%a,55%a"]
+|===
|Interface
|CDI classifier
|Description
@ -316,7 +316,7 @@ When the alternative implementations are not available then the default ones are
|
|Custom implementation maps the planned destination (topic) name into a physical Kinesis stream name. By default the same name is used.
-|=======================
+|===
== Extensions

View File

@ -18,7 +18,7 @@ Typically, you configure the {prodname} MySQL connector in a `.json` file using
TIP: For a complete list of configuration properties, see xref:mysql-connector-configuration-properties_{context}[MySQL connector configuration properties].
.MySQL connector example configuration
-=========
[source,json]
----
{
@ -38,7 +38,6 @@ TIP: For a complete list of configuration properties, see xref:mysql-connector-c
}
}
----
-=========
== Example configuration properties explained
@ -72,7 +71,7 @@ Typically, you configure the {prodname} MySQL connector in a `.yaml` file using
TIP: For a complete list of configuration properties, see xref:mysql-connector-configuration-properties_{context}[MySQL connector configuration properties].
.MySQL connector example configuration
-=========
[source,yaml,options="nowrap"]
----
apiVersion: kafka.strimzi.io/v1alpha1
@ -112,6 +111,5 @@ This name will be used as the prefix for all Kafka topics.
<7> The connector will store the history of the database schemas in Kafka using this broker (the same broker to which you are sending events) and topic name.
Upon restart, the connector will recover the schemas of the database that existed at the point in time in the `binlog` when the connector should begin reading.
----
-=========
endif::product[]