DBZ-2105 Replaced "xref:" cross references with {link-prefix}:{link-whatever}

TovaCohen 2020-05-22 17:52:29 -04:00 committed by Chris Cranford
parent 747245146a
commit 046ab999dc
13 changed files with 38 additions and 32 deletions

View File

@ -23,6 +23,7 @@ asciidoc:
link-mongodb-event-flattening: 'configuration/mongodb-event-flattening.adoc'
link-outbox-event-router: 'configuration/outbox-event-router.adoc'
link-topic-routing: 'configuration/topic-routing.adoc'
link-connectors: 'connectors/index.adoc'
link-mysql-connector: 'connectors/mysql.adoc'
link-mongodb-connector: 'connectors/mongodb.adoc'
link-postgresql-connector: 'connectors/postgresql.adoc'
@ -32,7 +33,10 @@ asciidoc:
link-cassandra-connector: 'connectors/cassandra.adoc'
link-debezium-monitoring: 'operations/monitoring.adoc'
link-custom-converters: 'development/converters.adoc'
link-engine: 'development/engine.adoc'
link-cloud-events: 'integrations/cloudevents.adoc'
link-postgresql-plugins: 'postgres-plugins.adoc'
link-tutorial: 'tutorial.adoc'
link-mysql-plugin-snapshot: 'https://oss.sonatype.org/service/local/artifact/maven/redirect?r=snapshots&g=io.debezium&a=debezium-connector-mysql&v=LATEST&c=plugin&e=tar.gz'
link-postgres-plugin-snapshot: 'https://oss.sonatype.org/service/local/artifact/maven/redirect?r=snapshots&g=io.debezium&a=debezium-connector-postgres&v=LATEST&c=plugin&e=tar.gz'
link-mongodb-plugin-snapshot: 'https://oss.sonatype.org/service/local/artifact/maven/redirect?r=snapshots&g=io.debezium&a=debezium-connector-mongodb&v=LATEST&c=plugin&e=tar.gz'

View File

@ -17,14 +17,15 @@ For this purpose, the two connectors establish a connection to the two source da
using a client library to access the binlog in the case of MySQL and reading from a logical replication stream in the case of Postgres.
By default, the changes from one capture table are written to a corresponding Kafka topic.
If needed, the topic name can be adjusted with help of Debezium's xref:configuration/topic-routing.adoc[topic routing SMT],
If needed, the topic name can be adjusted with help of Debezium's {link-prefix}:{link-topic-routing}[topic routing SMT],
e.g. to use topic names that deviate from the captured table names or to stream changes from multiple tables into a single topic.
Once the change events are in Apache Kafka, different connectors from the Kafka Connect eco-system can be used
to stream the changes to other systems and databases such as Elasticsearch, data warehouses and analytics systems or caches such as Infinispan.
Depending on the chosen sink connector, it may be necessary to apply Debezium's xref:configuration/event-flattening.adoc[new record state extraction] SMT,
Depending on the chosen sink connector, it may be necessary to apply Debezium's {link-prefix}:{link-event-flattening}[new record state extraction] SMT,
which will only propagate the "after" structure from Debezium's event envelope to the sink connector.
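
A minimal sketch of what combining the two SMTs in a connector configuration might look like (the SMT class names are the Debezium-provided ones; the routing pattern and topic names are illustrative only):

[source,properties]
----
# Re-route change events from all shard topics into a single topic (illustrative pattern)
transforms=route,unwrap
transforms.route.type=io.debezium.transforms.ByLogicalTableRouter
transforms.route.topic.regex=(.*)customers_shard(.*)
transforms.route.topic.replacement=$1customers_all_shards
# Flatten the Debezium envelope so that only the "after" state reaches the sink connector
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
----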
ifdef::community[]
== Embedded Engine
An alternative way of using the Debezium connectors is the xref:operations/embedded.adoc[embedded engine].
@ -33,3 +34,4 @@ This can be useful for either consuming change events within your application it
without the need to deploy complete Kafka and Kafka Connect clusters,
or for streaming changes to alternative messaging brokers such as Amazon Kinesis.
You can find https://github.com/debezium/debezium-examples/tree/master/kinesis[an example] for the latter in the examples repository.
endif::community[]

View File

@ -18,7 +18,7 @@ in which case the same converters are used for all connectors deployed to that w
Alternatively, they can be specified for an individual connector.
Kafka Connect comes with a _JSON converter_ that serializes the message keys and values into JSON documents.
The JSON converter can be configured to include or exclude the message schema using the `key.converter.schemas.enable` and `value.converter.schemas.enable` properties.
Our xref:tutorial.adoc[tutorial] shows what the messages look like when both payload and schemas are included, but the schemas make the messages very verbose.
Our {link-prefix}:{link-tutorial}[tutorial] shows what the messages look like when both payload and schemas are included, but the schemas make the messages very verbose.
If you want your messages serialized with JSON, consider setting these properties to `false` to exclude the verbose schema information.
Alternatively, you can serialize the message keys and values using https://avro.apache.org/[Apache Avro].
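
For the JSON converter case above, a minimal sketch of the corresponding Kafka Connect worker settings (standard Kafka Connect property names, shown here with the verbose schema information disabled) might look like this:

[source,properties]
----
# Use the JSON converter for message keys and values
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Omit the schema from each message to keep the payload compact
key.converter.schemas.enable=false
value.converter.schemas.enable=false
----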

View File

@ -14,7 +14,7 @@ toc::[]
[NOTE]
====
This single message transformation (SMT) is supported for only the SQL database connectors. For the MongoDB connector, see the xref:configuration/mongodb-event-flattening.adoc[documentation for the MongoDB equivalent to this SMT].
This single message transformation (SMT) is supported for only the SQL database connectors. For the MongoDB connector, see the {link-prefix}:{link-mongodb-event-flattening}[documentation for the MongoDB equivalent to this SMT].
====
endif::community[]

View File

@ -17,7 +17,7 @@ Please see below for a description of known limitations of this transformation.
[NOTE]
====
This SMT is supported only for the MongoDB connector.
See xref:configuration/event-flattening.adoc[here] for the relational database equivalent to this SMT.
See {link-prefix}:{link-event-flattening}[Extracting source record `after` state from {prodname} change events] for the relational database equivalent to this SMT.
====
The Debezium MongoDB connector generates the data in the form of a complex message structure.
@ -38,7 +38,7 @@ E.g. the general message structure for an insert event looks like this:
}
----
More details about the message structure are provided in xref:connectors/mongodb.adoc[the documentation] of the MongoDB connector.
More details about the message structure are provided in {link-prefix}:{link-mongodb-connector}[the documentation] of the MongoDB connector.
While this structure is a good fit to represent changes to MongoDB's schemaless collections,
it is not understood by existing sink connectors such as the Confluent JDBC sink connector.
@ -217,13 +217,13 @@ transforms.unwrap.type=io.debezium.connector.mongodb.transforms.ExtractNewDocume
transforms.unwrap.operation.header=true
----
The possible values are the ones from the `op` field of xref:connectors/mongodb.adoc#mongodb-change-events-value[MongoDB connector change events].
The possible values are the ones from the `op` field of {link-prefix}:{link-mongodb-connector}#mongodb-change-events-value[MongoDB connector change events].
=== Adding source metadata fields
The SMT can optionally add metadata fields from the original change event's `source` structure to the final flattened record (prefixed with "__").
This functionality can be used to add things like the collection from the change event, or connector-specific fields like the replica set name.
For more information on what's available in the source structure see xref:connectors/mongodb.adoc[the documentation] for the MongoDB connector.
For more information on what's available in the source structure see {link-prefix}:{link-mongodb-connector}[the documentation] for the MongoDB connector.
For example, the configuration
@ -265,7 +265,7 @@ For `DELETE` events, this option is only supported when the `delete.handling.mod
ifdef::community[]
|[[mongodb-extract-new-record-state-operation-header]]<<mongodb-extract-new-record-state-operation-header, `operation.header`>>
|`false`
|The SMT adds the xref:connectors/mongodb.adoc#mongodb-change-events-value[event operation] as a message header. +
|The SMT adds the {link-prefix}:{link-mongodb-connector}#mongodb-change-events-value[event operation] as a message header. +
This is deprecated as of Debezium 1.2; please use <<mongodb-extract-new-record-state-add-headers, `add.headers`>> and <<mongodb-extract-new-record-state-add-fields, `add.fields`>> instead.
endif::community[]

View File

@ -1109,7 +1109,7 @@ Other data type mappings are described in the following sections.
If present, a column's default value will be propagated to the corresponding field's Kafka Connect schema.
Change messages will contain the field's default value
(unless an explicit column value had been given), so there should rarely be the need to obtain the default value from the schema.
Passing the default value does, however, help to satisfy the compatibility rules when xref:configuration/avro.adoc[using Avro] as the serialization format together with the Confluent schema registry.
Passing the default value does, however, help to satisfy the compatibility rules when {link-prefix}:{link-avro-serialization}[using Avro] as the serialization format together with the Confluent schema registry.
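
Purely as an illustration of that Avro setup (these are the Confluent Avro converter properties; the registry URL is a placeholder):

[source,properties]
----
# Serialize keys and values with Avro and register schemas with the Confluent schema registry
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://schema-registry:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://schema-registry:8081
----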
[[db2-temporal-values]]
==== Temporal values
@ -1292,7 +1292,7 @@ Using the Db2 connector is straightforward. Here is an example of the configurat
<5> The name of the Db2 user
<6> The password for the Db2 user
<7> The name of the database to capture changes from
<8> The logical name of the Db2 instance/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the xref:configuration/avro.adoc[Avro Connector] is used.
<8> The logical name of the Db2 instance/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the {link-prefix}:{link-avro-serialization}[Avro Connector] is used.
<9> A list of all tables whose changes Debezium should capture
<10> The list of Kafka brokers that this connector will use to write and recover DDL statements to the database history topic.
<11> The name of the database history topic where the connector will write and recover DDL statements. This topic is for internal use only and should not be used by consumers.
@ -1311,7 +1311,7 @@ The {prodname} Db2 connector has three metric types in addition to the built-in
* <<streaming-metrics, streaming metrics>>; for monitoring the connector when reading CDC table data
* <<schema-history-metrics, schema history metrics>>; for monitoring the status of the connector's schema history
Please refer to the xref:operations/monitoring.adoc[monitoring documentation] for details of how to expose these metrics via JMX.
Please refer to the {link-prefix}:{link-debezium-monitoring}[monitoring documentation] for details of how to expose these metrics via JMX.
[[db2-monitoring-snapshots]]
[[db2-snapshot-metrics]]
@ -1452,7 +1452,7 @@ Fully-qualified names for columns are of the form _schemaName_._tableName_._colu
The schema parameters `pass:[_]pass:[_]debezium.source.column.type`, `pass:[_]pass:[_]debezium.source.column.length` and `pass:[_]pass:[_]debezium.source.column.scale` will be used to propagate the original type name, length and scale (for variable-width types), respectively.
Useful to properly size corresponding columns in sink databases.
Fully-qualified data type names are of the form _schemaName_._tableName_._typeName_.
See xref:data-types[] for the list of Db2-specific data type names.
See {link-prefix}:{link-db2-connector}#db2-data-types[Db2 data types] for the list of Db2-specific data type names.
|[[db2-property-message-key-columns]]<<db2-property-message-key-columns, `message.key.columns`>>
|_empty string_

View File

@ -1058,7 +1058,7 @@ The {prodname} Oracle connector has three metric types in addition to the built-
* <<streaming-metrics, streaming metrics>>; for monitoring the connector when processing change events
* <<schema-history-metrics, schema history metrics>>; for monitoring the status of the connector's schema history
Please refer to the xref:operations/monitoring.adoc[monitoring documentation] for details of how to expose these metrics via JMX.
Please refer to the {link-prefix}:{link-debezium-monitoring}#monitoring-debezium[monitoring documentation] for details of how to expose these metrics via JMX.
[[oracle-monitoring-snapshots]]
[[oracle-snapshot-metrics]]
@ -1232,7 +1232,7 @@ Fully-qualified names for columns are of the form _databaseName_._tableName_._co
The schema parameters `pass:[_]pass:[_]debezium.source.column.type`, `pass:[_]pass:[_]debezium.source.column.length` and `pass:[_]pass:[_]debezium.source.column.scale` will be used to propagate the original type name, length and scale (for variable-width types), respectively.
Useful to properly size corresponding columns in sink databases.
Fully-qualified data type names are of the form _databaseName_._tableName_._typeName_, or _databaseName_._schemaName_._tableName_._typeName_.
See xref:data-types[] for the list of Oracle-specific data type names.
See the {link-prefix}:{link-oracle-connector}#oracle-data-types[list of Oracle-specific data type names].
|[[oracle-property-heartbeat-interval-ms]]<<oracle-property-heartbeat-interval-ms, `heartbeat.interval.ms`>>
|`0`

View File

@ -151,7 +151,7 @@ As of January 2019, the following Postgres versions on RDS come with an up-to-da
[TIP]
====
Also see xref:postgres-plugins.adoc[Logical Decoding Output Plug-in Installation for PostgreSQL] for more detailed instructions of setting up and testing logical decoding plug-ins.
Also see {link-prefix}:{link-postgresql-plugins}[Logical Decoding Output Plug-in Installation for PostgreSQL] for more detailed instructions of setting up and testing logical decoding plug-ins.
====
[NOTE]
@ -1685,7 +1685,7 @@ The {prodname} PostgreSQL connector has two metric types in addition to the buil
* <<snapshot-metrics, snapshot metrics>>; for monitoring the connector when performing snapshots
* <<streaming-metrics, streaming metrics>>; for monitoring the connector when processing change events via logical decoding
Please refer to the xref:operations/monitoring.adoc[monitoring documentation] for details of how to expose these metrics via JMX.
Please refer to the {link-prefix}:{link-debezium-monitoring}#monitoring-debezium[monitoring documentation] for details of how to expose these metrics via JMX.
[[postgresql-snapshot-metrics]]
==== Snapshot Metrics
@ -1890,7 +1890,7 @@ Fully-qualified names for columns are of the form _databaseName_._tableName_._co
The schema parameters `pass:[_]pass:[_]debezium.source.column.type`, `pass:[_]pass:[_]debezium.source.column.length` and `pass:[_]pass:[_]debezium.source.column.scale` will be used to propagate the original type name, length and scale (for variable-width types), respectively.
Useful to properly size corresponding columns in sink databases.
Fully-qualified data type names are of the form _databaseName_._tableName_._typeName_, or _databaseName_._schemaName_._tableName_._typeName_.
See xref:data-types[] for the list of PostgreSQL-specific data type names.
See the {link-prefix}:{link-postgresql-connector}#postgresql-data-types[list of PostgreSQL-specific data type names].
|[[postgresql-property-message-key-columns]]<<postgresql-property-message-key-columns, `message.key.columns`>>
|_empty string_
@ -1987,7 +1987,7 @@ The topic is named according to the pattern `<heartbeat.topics.prefix>.<server.n
|
|If specified, this query will be executed upon every heartbeat against the source database.
This can be used to overcome the situation described in xref:wal-disk-space[],
This can be used to overcome the situation described in {link-prefix}:{link-postgresql-connector}#postgresql-wal-disk-space[WAL Disk Space Consumption],
where capturing changes from a low-traffic database on the same host as a high-traffic database prevents Debezium from processing any WAL records and thus acknowledging WAL positions with the database.
Inserting records into some heartbeat table (which must have been created upfront) will allow the connector to receive changes from the low-traffic database and acknowledge their LSNs,

View File

@ -1036,7 +1036,7 @@ If present, a column's default value is propagated to the corresponding field's
Change messages will contain the field's default value
(unless an explicit column value had been given), so there should rarely be the need to obtain the default value from the schema.
ifdef::community[]
Passing the default value does, however, help to satisfy the compatibility rules when xref:configuration/avro.adoc[using Avro] as the serialization format together with the Confluent schema registry.
Passing the default value does, however, help to satisfy the compatibility rules when {link-prefix}:{link-avro-serialization}[using Avro] as the serialization format together with the Confluent schema registry.
endif::community[]
[[sqlserver-temporal-values]]
@ -1324,7 +1324,7 @@ The {prodname} SQL Server connector has three metric types in addition to the bu
* <<streaming-metrics, streaming metrics>>; for monitoring the connector when reading CDC table data
* <<schema-history-metrics, schema history metrics>>; for monitoring the status of the connector's schema history
Please refer to the xref:operations/monitoring.adoc[monitoring documentation] for details of how to expose these metrics via JMX.
Please refer to the {link-prefix}:{link-debezium-monitoring}[monitoring documentation] for details of how to expose these metrics via JMX.
[[sqlserver-snapshot-metrics]]
==== Snapshot Metrics
@ -1466,7 +1466,7 @@ Fully-qualified names for columns are of the form _schemaName_._tableName_._colu
The schema parameters `pass:[_]pass:[_]debezium.source.column.type`, `pass:[_]pass:[_]debezium.source.column.length` and `pass:[_]pass:[_]debezium.source.column.scale` will be used to propagate the original type name, length and scale (for variable-width types), respectively.
Useful to properly size corresponding columns in sink databases.
Fully-qualified data type names are of the form _schemaName_._tableName_._typeName_.
See xref:sqlserver-data-types[] for the list of SQL Server-specific data type names.
See {link-prefix}:{link-sqlserver-connector}#sqlserver-data-types[SQL Server data types] for the list of SQL Server-specific data type names.
|[[sqlserver-property-message-key-columns]]<<sqlserver-property-message-key-columns, `message.key.columns`>>
|_empty string_

View File

@ -75,11 +75,11 @@ You create the `DebeziumEngine` instance using its builder API,
providing the following things:
* The format in which you want to receive the message, e.g. JSON, Avro, or Kafka Connect `SourceRecord`
(see xref:output-message-formats[output message formats])
(see {link-prefix}:{link-engine}#engine-output-message-formats[output message formats])
* Configuration properties (perhaps loaded from a properties file) that define the environment for both the engine and the connector
* A method that will be called for every data change event produced by the connector
Here's an example of code that configures and runs an embedded xref:connectors/mysql.adoc[MySQL connector]:
Here's an example of code that configures and runs an embedded {link-prefix}:{link-mysql-connector}[MySQL connector]:
[source,java,indent=0]
----
@ -261,7 +261,7 @@ Allowed values are:
* `Connect.class` - the output value is change event wrapping Kafka Connect's `SourceRecord`
* `Json.class` - the output value is a pair of key and value encoded as `JSON` strings
* `Avro.class` - the output value is a pair of key and value encoded as Avro serialized records
* `CloudEvents.class` - the output value is a pair of key and value encoded as xref:integrations/cloudevents.adoc[Cloud Events] messages
* `CloudEvents.class` - the output value is a pair of key and value encoded as {link-prefix}:{link-cloud-events}[Cloud Events] messages
Internally, the engine uses the appropriate Kafka Connect converter implementation, to which the conversion is delegated.
The converter can be parametrized using engine properties to modify its behaviour.
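
A minimal sketch of how one of these classes is passed to the `DebeziumEngine` builder (the properties and the event handler body are placeholders) might look like:

[source,java,indent=0]
----
import java.util.Properties;

import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;

Properties props = new Properties();
// ... connector and engine configuration would be loaded or set here ...

// Json.class selects JSON-encoded keys and values; Avro.class or CloudEvents.class work analogously
DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine.create(Json.class)
        .using(props)
        .notifying(record -> System.out.println(record.value()))
        .build();
----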

View File

@ -25,8 +25,8 @@ different modes exist for snapshotting, refer to the docs of the specific connec
* *Masking:* the values from specific columns can be masked, e.g. for sensitive data
* *Monitoring:* most connectors can be monitored using JMX
* Different ready-to-use *message transformations*:
e.g. for xref:configuration/topic-routing.adoc[message routing],
extraction of new record state (xref:configuration/event-flattening.adoc[relational connectors], xref:configuration/mongodb-event-flattening.adoc[MongoDB])
and xref:configuration/outbox-event-router.adoc[routing of events] from a transactional outbox table
e.g. for {link-prefix}:{link-topic-routing}[message routing],
extraction of new record state ({link-prefix}:{link-event-flattening}[relational connectors], {link-prefix}:{link-mongodb-event-flattening}[MongoDB])
and {link-prefix}:{link-outbox-event-router}[routing of events] from a transactional outbox table
Refer to the xref:connectors/index.adoc[connector documentation] for a list of all supported databases and detailed information about the features and configuration options of each connector.
Refer to the {link-prefix}:{link-connectors}[connector documentation] for a list of all supported databases and detailed information about the features and configuration options of each connector.

View File

@ -15,7 +15,7 @@ This feature is currently in incubating state, i.e. exact semantics, configurati
== Overview
This extension is inspired by the xref::configuration/outbox-event-router.adoc[Outbox Event Router] single message transformation (SMT).
This extension is inspired by the {link-prefix}:{link-outbox-event-router}[Outbox Event Router] single message transformation (SMT).
As discussed in the blog post link:/blog/2019/02/19/reliable-microservices-data-exchange-with-the-outbox-pattern/[Reliable Microservices Data Exchange with the Outbox Pattern], microservices often need to exchange information with one another, and an excellent way to do so is to use the Outbox pattern combined with Debezium's Outbox Event Router SMT.
The following image shows the overall architecture of this pattern:
@ -258,4 +258,4 @@ This is used as a way to keep the table's underlying storage from growing over t
|boolean
|true
|=======================
|=======================

View File

@ -88,7 +88,7 @@ The deserializer behaviour is driven by the `from.field` configuration option an
* if the value is deserialized and contains the Debezium event envelope then:
** if `from.field` is not set, then deserialize the complete envelope into the target type
** otherwise deserialize and map only the content of the configured field into the target type, thus effectively flattening the message (see the sketch after this list)
* if the deserialized value already contains a flattened message (i.e. when using the SMT for xref:configuration/event-flattening.adoc[Event Flattening]), then map the flattened record into the target logical type
* if the deserialized value already contains a flattened message (i.e. when using the SMT for {link-prefix}:{link-event-flattening}[Event Flattening]), then map the flattened record into the target logical type
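
A rough sketch of how `from.field` drives this behaviour, assuming a hypothetical `Customer` POJO as the target type:

[source,java,indent=0]
----
import java.util.Collections;

import org.apache.kafka.common.serialization.Serde;

import io.debezium.serde.DebeziumSerdes;

// Customer is a hypothetical POJO matching the "after" state of the change events
Serde<Customer> serde = DebeziumSerdes.payloadJson(Customer.class);
// With from.field=after only the "after" part of the envelope is mapped into Customer;
// leave the option unset to deserialize the complete envelope instead
serde.configure(Collections.singletonMap("from.field", "after"), false);
----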
[[serdes-configuration_options]]
=== Configuration options