Merge pull request #1395 from ahus1/DBZ-1935

DBZ-1935 fixed broken links and anchors
This commit is contained in:
Gunnar Morling 2020-04-06 22:42:42 +02:00 committed by GitHub
commit a03c64bfd0
10 changed files with 22 additions and 18 deletions

View File

@ -22,7 +22,7 @@ e.g. to use topic names that deviate from the captured tables names or to stream
Once the change events are in Apache Kafka, different connectors from the Kafka Connect eco-system can be used
to stream the changes to other systems and databases such as Elasticsearch, data warehouses and analytics systems or caches such as Infinispan.
-Depending on the chosen sink connector, it may be needed to apply Debezium's xref:configuration/event-flattening.adoc[new record state extration] SMT,
+Depending on the chosen sink connector, it may be needed to apply Debezium's xref:configuration/event-flattening.adoc[new record state extraction] SMT,
which will only propagate the "after" structure from Debezium's event envelope to the sink connector.
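For example, a sink pipeline might apply this SMT in the sink connector's configuration like so (a minimal sketch; `unwrap` is an arbitrary transform alias, and `io.debezium.transforms.ExtractNewRecordState` is the SMT class used with the relational connectors):

----
transforms=unwrap
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
----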
== Embedded Engine

View File

@ -14,7 +14,7 @@ When you watched the connector start up,
you saw that events were written to the following topics with the `dbserver1` prefix (the name of the connector):
`dbserver1`::
-The xref:connectors/mysql.adoc#schema-change-topic[schema change topic] to which all of the DDL statements are written.
+The xref:assemblies/cdc-mysql-connector/as_overview-of-how-the-mysql-connector-works.adoc#how-the-mysql-connector-handles-schema-change-topics_{context}[schema change topic] to which all of the DDL statements are written.
`dbserver1.inventory.products`::
Captures change events for the `products` table in the `inventory` database.

View File

@ -217,7 +217,7 @@ transforms.unwrap.type=io.debezium.connector.mongodb.transforms.ExtractNewDocume
transforms.unwrap.operation.header=true
----
-The possible values are the ones from the `op` field of xref:connectors/mongodb.adoc#change-events-value[MongoDB connector change events].
+The possible values are the ones from the `op` field of xref:connectors/mongodb.adoc#mongodb-change-events-value[MongoDB connector change events].
=== Adding source metadata fields
@ -266,7 +266,7 @@ For `DELETE` events, this option is only supported when the `delete.handling.mod
|`operation.header`
|`false`
-|The SMT adds the xref:connectors/mongodb.adoc#change-events-value[event operation] as a message header.
+|The SMT adds the xref:connectors/mongodb.adoc#mongodb-change-events-value[event operation] as a message header.
|`drop.tombstones`
|`true`

View File

@ -228,6 +228,7 @@ VALUES ASNCDC.ASNCDCSERVICES('reinit','asncdc');
== How the Db2 connector works
+[[snapshots]]
=== Snapshots
Db2 ASN is not designed to store the complete history of database changes.
@ -264,6 +265,7 @@ After a restart, the connector will resume from the offset (commit and change LS
The connector is able to detect whether CDC is enabled or disabled for whitelisted source tables at runtime and modify its behaviour accordingly.
+[[topic-names]]
=== Topic names
The Db2 connector writes events for all insert, update, and delete operations on a single table to a single Kafka topic. The names of the Kafka topics always take the form _databaseName_._schemaName_._tableName_, where _databaseName_ is the logical name of the connector as specified with the `database.server.name` configuration property, _schemaName_ is the name of the schema where the operation occurred, and _tableName_ is the name of the database table on which the operation occurred.
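For example, if `database.server.name` were set to the hypothetical value `mydatabase`, changes to a `CUSTOMERS` table in the `MYSCHEMA` schema would be written to the topic:

----
mydatabase.MYSCHEMA.CUSTOMERS
----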
@ -959,7 +961,7 @@ Other data type mappings are described in the following sections.
If present, a column's default value will be propagated to the corresponding field's Kafka Connect schema.
Change messages will contain the field's default value
(unless an explicit column value had been given), so there should rarely be the need to obtain the default value from the schema.
-Passing the default value helps though with satisfying the compatibility rules when xref:configuration/avro.advoc[using Avro] as serialization format together with the Confluent schema registry.
+Passing the default value helps though with satisfying the compatibility rules when xref:configuration/avro.adoc[using Avro] as serialization format together with the Confluent schema registry.
[[temporal-values]]
==== Temporal values

View File

@ -180,7 +180,7 @@ The bottom line is that the MongoDB connector will continue running under most s
=== Topic names
The MongoDB connector writes events for all insert, update, and delete operations on documents in each collection to a single Kafka topic.
-The name of the Kafka topics always takes the form _logicalName_._databaseName_._collectionName_, where _logicalName_ is the link:logical-name[logical name] of the connector as specified with the `mongodb.name` configuration property, _databaseName_ is the name of the database where the operation occurred, and _collectionName_ is the name of the MongoDB collection in which the affected document existed.
+The name of the Kafka topics always takes the form _logicalName_._databaseName_._collectionName_, where _logicalName_ is the link:#logical-name[logical name] of the connector as specified with the `mongodb.name` configuration property, _databaseName_ is the name of the database where the operation occurred, and _collectionName_ is the name of the MongoDB collection in which the affected document existed.
For example, consider a MongoDB replica set with an `inventory` database that contains four collections: `products`, `products_on_hand`, `customers`, and `orders`.
If the connector monitoring this database were given a logical name of `fulfillment`, then the connector would produce events on these four Kafka topics:
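Following that naming scheme, those four topics would be:

----
fulfillment.inventory.products
fulfillment.inventory.products_on_hand
fulfillment.inventory.customers
fulfillment.inventory.orders
----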

View File

@ -124,6 +124,7 @@ endif::cdc-product[]
[[how-the-connector-works]]
== How the SQL Server connector works
+[[snapshots]]
=== Snapshots
SQL Server CDC is not designed to store the complete history of database changes.
@ -160,6 +161,7 @@ After a restart, the connector will resume from the offset (commit and change LS
The connector is able to detect whether CDC is enabled or disabled for whitelisted source tables at runtime and modify its behaviour accordingly.
+[[topic-names]]
=== Topic names
The SQL Server connector writes events for all insert, update, and delete operations on a single table to a single Kafka topic. The names of the Kafka topics always take the form _serverName_._schemaName_._tableName_, where _serverName_ is the logical name of the connector as specified with the `database.server.name` configuration property, _schemaName_ is the name of the schema where the operation occurred, and _tableName_ is the name of the database table on which the operation occurred.
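For example, if `database.server.name` were set to the hypothetical value `fulfillment`, changes to a `customers` table in the `dbo` schema would be written to the topic:

----
fulfillment.dbo.customers
----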
@ -890,7 +892,7 @@ If present, a column's default value is propagated to the corresponding field's
Change messages will contain the field's default value
(unless an explicit column value had been given), so there should rarely be the need to obtain the default value from the schema.
ifndef::cdc-product[]
-Passing the default value helps though with satisfying the compatibility rules when xref:configuration/avro.advoc[using Avro] as serialization format together with the Confluent schema registry.
+Passing the default value helps though with satisfying the compatibility rules when xref:configuration/avro.adoc[using Avro] as serialization format together with the Confluent schema registry.
endif::cdc-product[]
[[sqlserver-temporal-values]]
@ -1123,7 +1125,7 @@ Here is an example of the configuration for a connector instance that monitors a
<10> The list of Kafka brokers that this connector will use to write and recover DDL statements to the database history topic.
<11> The name of the database history topic where the connector will write and recover DDL statements. This topic is for internal use only and should not be used by consumers.
-See the link:#connector-properties[complete list of connector properties] that can be specified in these configurations.
+See the link:#sqlserver-connector-properties[complete list of connector properties] that can be specified in these configurations.
This configuration can be sent via POST to a running Kafka Connect service, which will then record the configuration and start up the one connector task that will connect to the SQL Server database, read the transaction log, and record events to Kafka topics.
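For instance, using Kafka Connect's standard REST interface (a sketch assuming the Connect service listens on its default port 8083 and the configuration was saved to a hypothetical file named `register-sqlserver.json`):

----
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" \
    http://localhost:8083/connectors/ -d @register-sqlserver.json
----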
@ -1388,7 +1390,7 @@ Note that primary key columns are always included in the event's key, also if bl
|`time.precision.mode`
|`adaptive`
-| Time, date, and timestamps can be represented with different kinds of precision, including: `adaptive` (the default) captures the time and timestamp values exactly as in the database using either millisecond, microsecond, or nanosecond precision values based on the database column's type; or `connect` always represents time and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which uses millisecond precision regardless of the database columns' precision. See link:#temporal-values[temporal values].
+| Time, date, and timestamps can be represented with different kinds of precision, including: `adaptive` (the default) captures the time and timestamp values exactly as in the database using either millisecond, microsecond, or nanosecond precision values based on the database column's type; or `connect` always represents time and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which uses millisecond precision regardless of the database columns' precision. See link:#sqlserver-temporal-values[temporal values].
|`tombstones.on.delete`
|`true`
@ -1409,7 +1411,7 @@ Fully-qualified names for columns are of the form _schemaName_._tableName_._colu
The schema parameters `pass:[_]pass:[_]debezium.source.column.type`, `pass:[_]pass:[_]debezium.source.column.length` and `pass:[_]pass:[_]debezium.source.column.scale` will be used to propagate the original type name and length (for variable-width types), respectively.
Useful to properly size corresponding columns in sink databases.
Fully-qualified data type names are of the form _schemaName_._tableName_._typeName_.
-See xref:data-types[] for the list of SQL Server-specific data type names.
+See xref:sqlserver-data-types[] for the list of SQL Server-specific data type names.
|`message.key.columns`
|_empty string_

View File

@ -70,10 +70,10 @@ endif::[]
== Using a Debezium Connector
To use a connector to produce change events for a particular source server/cluster, simply create a configuration file for the
-xref:connectors/mysql.adoc#deploying-a-connector[MySQL Connector],
+xref:assemblies/cdc-mysql-connector/as_deploy-the-mysql-connector.adoc[MySQL Connector],
xref:connectors/postgresql.adoc#deploying-a-connector[Postgres Connector],
-xref:connectors/mongodb.adoc#deploying-a-connector[MongoDB Connector],
-xref:connectors/sqlserver.adoc#deploying-a-connector[SQL Server Connector],
+xref:connectors/mongodb.adoc#mongodb-deploying-a-connector[MongoDB Connector],
+xref:connectors/sqlserver.adoc#sqlserver-deploying-a-connector[SQL Server Connector],
xref:connectors/oracle.adoc#deploying-a-connector[Oracle Connector],
xref:connectors/db2.adoc#deploying-a-connector[Db2 Connector]
or xref:connectors/cassandra.adoc#deploying-a-connector[Cassandra Connector]
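As an illustration, a minimal configuration file for the MySQL connector might look like the following sketch; all connection values here are placeholders to adapt to your environment:

----
name=inventory-connector
connector.class=io.debezium.connector.mysql.MySqlConnector
database.hostname=mysql
database.port=3306
database.user=debezium
database.password=dbz
database.server.id=184054
database.server.name=dbserver1
database.history.kafka.bootstrap.servers=kafka:9092
database.history.kafka.topic=schema-changes.inventory
----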

View File

@ -57,7 +57,7 @@ a| Records the completed snapshot in the connector offsets.
If the connector fails, stops, or is rebalanced while making the _initial snapshot_, the connector creates a new snapshot once restarted. Once that _initial snapshot_ is completed, the {prodname} MySQL connector restarts from the same position in the binlog so it does not miss any updates.
-NOTE: If the connector stops for long enough, MySQL could purge old binlog files and the connector's position would be lost. If the position is lost, the connector reverts to the _initial snapshot_ for its starting position. For more tips on troubleshooting the {prodname} MySQL connector, see <<connector-common-issues>>.
+NOTE: If the connector stops for long enough, MySQL could purge old binlog files and the connector's position would be lost. If the position is lost, the connector reverts to the _initial snapshot_ for its starting position. For more tips on troubleshooting the {prodname} MySQL connector, see xref:assemblies/cdc-mysql-connector/as_connector-common-issues.adoc#connector-common-issues[MySQL connector common issues].
== What if Global Read Locks are not allowed?
[[no-global-read-lock-mysql-connect_{context}]]

View File

@ -29,7 +29,7 @@ mysql> GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIE
TIP: See xref:permissions-explained-mysql-connector[permissions explained] for notes on each permission.
-IMPORTANT: If using a hosted option such as Amazon RDS or Amazon Aurora that do not allow a *global read lock*, table-level locks are used to create the _consistent snapshot_. In this case, you need to also grant `LOCK_TABLES` permissions to the user that you create. See <<overview-of-how-the-mysql-connector-works>> for more details.
+IMPORTANT: If using a hosted option such as Amazon RDS or Amazon Aurora that do not allow a *global read lock*, table-level locks are used to create the _consistent snapshot_. In this case, you need to also grant `LOCK_TABLES` permissions to the user that you create. See xref:assemblies/cdc-mysql-connector/as_overview-of-how-the-mysql-connector-works.adoc[Overview of how the MySQL connector works] for more details.
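In that case, the grant statement shown above might be extended along these lines (a sketch; user name and host are placeholders):

----
mysql> GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT, LOCK TABLES ON *.* TO 'user'@'localhost';
----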
[start=3]
. Finalize the user's permissions:

View File

@ -98,7 +98,7 @@ Fully-qualified names for columns are of the form _databaseName_._tableName_._co
The schema parameters `pass:[_]pass:[_]debezium.source.column.type`, `pass:[_]pass:[_]debezium.source.column.length` and `pass:[_]pass:[_]debezium.source.column.scale` will be used to propagate the original type name and length (for variable-width types), respectively.
Useful to properly size corresponding columns in sink databases.
Fully-qualified data type names are of the form _databaseName_._tableName_._typeName_, or _databaseName_._schemaName_._tableName_._typeName_.
-See xref:data-types[] for the list of MySQL-specific data type names.
+See xref:assemblies/cdc-mysql-connector/as_overview-of-how-the-mysql-connector-works.adoc#how-the-mysql-connector-maps-data-types_{context}[] for the list of MySQL-specific data type names.
|`time.precision.mode`
|`adaptive_time{zwsp}_microseconds`
@ -187,8 +187,8 @@ Fully-qualified tables could be defined as `DB_NAME.TABLE_NAME` or `SCHEMA_NAME.
|===
-== Advanced MySQL connector properties
[[advanced-mysql-connector-properties]]
+== Advanced MySQL connector properties
[cols="3,2,5"]
|===
@ -302,7 +302,7 @@ The connector will read the table contents in multiple batches of this size.
|`snapshot.lock.timeout.ms`
|`10000`
|Positive integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot.
-If table locks cannot be acquired in this time interval, the snapshot will fail. See link:#snapshots[snapshots]
+If table locks cannot be acquired in this time interval, the snapshot will fail. See xref:assemblies/cdc-mysql-connector/as_overview-of-how-the-mysql-connector-works.adoc#how-the-mysql-connector-performs-database-snapshots_{context}[How the MySQL connector performs database snapshots].
|`enable.time.adjuster`
|