DBZ-5043 Update documentation to use topic.prefix instead of database.server.name

Vojtech Juranek 2022-08-30 21:59:04 +02:00 committed by Jiri Pechanec
parent eb6e2648f4
commit 853f43c5de
17 changed files with 104 additions and 153 deletions

View File

@ -284,7 +284,7 @@ spec:
database.user: debezium
database.password: dbz
database.server.id: 184054
database.server.name: dbserver1
topic.prefix: dbserver1
database.include.list: inventory
schema.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
schema.history.kafka.topic: schema-changes.inventory
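With `topic.prefix: dbserver1`, change events for a table such as `inventory.customers` are written to the topic `dbserver1.inventory.customers`. A quick way to inspect those events after the connector is deployed (a sketch that assumes a standard Kafka installation and the broker address from the example above):

----
bin/kafka-console-consumer.sh \
    --bootstrap-server my-cluster-kafka-bootstrap:9092 \
    --topic dbserver1.inventory.customers \
    --from-beginning
----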

View File

@ -179,11 +179,11 @@ While the connector is running and emitting change events, if you remove a table
By default, the Db2 connector writes change events for all of the `INSERT`, `UPDATE`, and `DELETE` operations that occur in a table to a single Apache Kafka topic that is specific to that table.
The connector uses the following convention to name change event topics:
_databaseName_._schemaName_._tableName_
_topicPrefix_._schemaName_._tableName_
The following list provides definitions for the components of the default name:
_databaseName_:: The logical name of the connector as specified by the xref:db2-property-database-server-name[`database.server.name`] connector configuration property.
_topicPrefix_:: The topic prefix as specified by the xref:db2-property-topic-prefix[`topic.prefix`] connector configuration property.
_schemaName_:: The name of the schema in which the operation occurred.
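For example, if the topic prefix is `fullfillment` and a table named `CUSTOMERS` resides in the `MYSCHEMA` schema (the values used in the registration example later in this file), the connector writes that table's change events to the following topic:

----
fullfillment.MYSCHEMA.CUSTOMERS
----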
@ -215,7 +215,7 @@ You can configure a {prodname} Db2 connector to produce schema change events tha
* A table is removed from capture mode.
* During a xref:{link-db2-connector}#db2-schema-evolution[database schema update], there is a change in the schema for a table that is in capture mode.
The connector writes schema change events to a Kafka schema change topic that has the name `_<serverName>_` where `_<serverName>_` is the logical server name that is specified in the xref:db2-property-database-server-name[`database.server.name`] connector configuration property.
The connector writes schema change events to a Kafka schema change topic that has the name `_<topicPrefix>_` where `_<topicPrefix>_` is the topic prefix that is specified in the xref:db2-property-topic-prefix[`topic.prefix`] connector configuration property.
Messages that the connector sends to the schema change topic contain a payload that includes the following elements:
`databaseName`:: The name of the database to which the statements are applied.
@ -492,7 +492,7 @@ If the data source does not provide {prodname} with the event time, then the fie
----
Unless overridden via the xref:db2-property-topic-transaction[`topic.transaction`] option,
the connector emits transaction events to the xref:db2-property-database-server-name[`_<database.server.name>_`]`.transaction` topic.
the connector emits transaction events to the xref:db2-property-topic-prefix[`_<topic.prefix>_`]`.transaction` topic.
.Data change event enrichment
@ -1829,7 +1829,7 @@ apiVersion: {KafkaConnectApiVersion}
database.user: db2inst1 // <7>
database.password: Password! // <8>
database.dbname: mydatabase // <9>
database.server.name: fullfillment // <10>
topic.prefix: fullfillment // <10>
database.include.list: public.inventory // <11>
----
+
@ -1907,7 +1907,7 @@ Optionally, you can ignore, mask, or truncate columns that contain sensitive dat
"database.user": "db2inst1", // <5>
"database.password": "Password!", // <6>
"database.dbname": "mydatabase", // <7>
"database.server.name": "fullfillment", // <8>
"topic.prefix": "fullfillment", // <8>
"table.include.list": "MYSCHEMA.CUSTOMERS", // <9>
"schema.history.kafka.bootstrap.servers": "kafka:9092", // <10>
"schema.history.kafka.topic": "dbhistory.fullfillment" // <11>
@ -2027,11 +2027,11 @@ The following configuration properties are _required_ unless a default value is
|No default
|The name of the Db2 database from which to stream the changes
|[[db2-property-database-server-name]]<<db2-property-database-server-name, `+database.server.name+`>>
|[[db2-property-topic-prefix]]<<db2-property-topic-prefix, `+topic.prefix+`>>
|No default
|Logical name that identifies and provides a namespace for the particular Db2 database server that hosts the database for which {prodname} is capturing changes.
Only alphanumeric characters, hyphens, dots and underscores must be used in the database server logical name.
The logical name should be unique across all other connectors, since it is used as a topic name prefix for all Kafka topics that receive records from this connector. +
|Topic prefix that provides a namespace for the particular Db2 database server that hosts the database for which {prodname} is capturing changes.
Only alphanumeric characters, hyphens, dots, and underscores can be used in the topic prefix.
The topic prefix should be unique across all other connectors, since it is used as the prefix for all Kafka topics that receive records from this connector. +
+
[WARNING]
====
@ -2357,14 +2357,6 @@ Adjust the chunk size to a value that provides the best performance in your envi
|`.`
|Specify the delimiter for the topic name; defaults to `.`.
|[[db2-property-topic-prefix]]<<db2-property-topic-prefix, `topic.prefix`>>
|`${database.server.name}`
|The name of the prefix to be used for all topics, defaults to xref:db2-property-database-server-name[`${database.server.name}`].
[NOTE]
====
Once specify the prefix value to this property, the xref:db2-property-database-server-name[`${database.server.name}`] will not play the prefix role of all topics.
====
|[[db2-property-topic-cache-size]]<<db2-property-topic-cache-size, `topic.cache.size`>>
|`10000`
|The size used for holding the topic names in a bounded concurrent hash map. This cache helps to determine the topic name that corresponds to a given data collection.
@ -2375,7 +2367,7 @@ Once specify the prefix value to this property, the xref:db2-property-database-s
+
_topic.heartbeat.prefix_._topic.prefix_ +
+
For example, if the database server name or topic prefix is `fulfillment`, the default topic name is `__debezium-heartbeat.fulfillment`.
For example, if the topic prefix is `fulfillment`, the default topic name is `__debezium-heartbeat.fulfillment`.
|[[db2-property-topic-transaction]]<<db2-property-topic-transaction, `topic.transaction`>>
|`transaction`
@ -2383,7 +2375,7 @@ For example, if the database server name or topic prefix is `fulfillment`, the d
+
_topic.prefix_._topic.transaction_ +
+
For example, if the database server name or topic prefix is `fulfillment`, the default topic name is `fulfillment.transaction`.
For example, if the topic prefix is `fulfillment`, the default topic name is `fulfillment.transaction`.
|===
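Taken together, the `topic.prefix` and `topic.transaction` properties determine where non-table events land. A minimal sketch of a configuration fragment that overrides the transaction suffix (`fulfillment` and `tx` are illustrative values):

[source,json]
----
{
  "topic.prefix": "fulfillment",
  "topic.transaction": "tx"
}
----

With these settings, the connector emits transaction metadata to `fulfillment.tx` rather than the default `fulfillment.transaction`; heartbeat messages still go to `__debezium-heartbeat.fulfillment`.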

View File

@ -1670,7 +1670,7 @@ Once specify the prefix value to this property, the xref:mongodb-property-mongod
+
_topic.heartbeat.prefix_._topic.prefix_ +
+
For example, if the database server name or topic prefix is `fulfillment`, the default topic name is `__debezium-heartbeat.fulfillment`.
For example, if the topic prefix is `fulfillment`, the default topic name is `__debezium-heartbeat.fulfillment`.
|[[mongodb-property-topic-transaction]]<<mongodb-property-topic-transaction, `topic.transaction`>>
|`transaction`
@ -1678,7 +1678,7 @@ For example, if the database server name or topic prefix is `fulfillment`, the d
+
_topic.prefix_._topic.transaction_ +
+
For example, if the database server name or topic prefix is `fulfillment`, the default topic name is `fulfillment.transaction`.
For example, if the topic prefix is `fulfillment`, the default topic name is `fulfillment.transaction`.
|===

View File

@ -136,7 +136,7 @@ See xref:{link-mysql-connector}#mysql-topic-names[default names for topics] that
=== Schema change topic
You can configure a {prodname} MySQL connector to produce schema change events that describe schema changes that are applied to captured tables in the database.
The connector writes schema change events to a Kafka topic named `_<serverName>_`, where `_serverName_` is the logical server name that is specified in the xref:mysql-property-database-server-name[`database.server.name`] connector configuration property.
The connector writes schema change events to a Kafka topic named `_<topicPrefix>_`, where `_topicPrefix_` is the namespace specified in the xref:mysql-property-topic-prefix[`topic.prefix`] connector configuration property.
Messages that the connector sends to the schema change topic contain a payload, and, optionally, also contain the schema of the change event message.
The payload of a schema change event message includes the following elements:
@ -492,7 +492,7 @@ you must set one of the following options:
When the MySQL connection is read-only, the xref:{link-signalling}[signaling table] mechanism can also run a snapshot by sending a message to the Kafka topic that is specified in
the xref:{link-mysql-connector}#mysql-property-signal-kafka-topic[signal.kafka.topic] property.
The key of the Kafka message must match the value of the `database.server.name` connector configuration option.
The key of the Kafka message must match the value of the `topic.prefix` connector configuration option.
The value is a JSON object with `type` and `data` fields.
@ -565,7 +565,7 @@ Value = `{"type":"execute-snapshot","data": {"data-collections": ["schema1.produ
When the MySQL connection is read-only, the xref:{link-signalling}[signaling table] mechanism can also stop a snapshot by sending a message to the Kafka topic that is specified in
the xref:{link-mysql-connector}#mysql-property-signal-kafka-topic[signal.kafka.topic] property.
The key of the Kafka message must match the value of the `database.server.name` connector configuration option.
The key of the Kafka message must match the value of the `topic.prefix` connector configuration option.
The value is a JSON object with `type` and `data` fields.
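For example, if `topic.prefix` is set to `dbserver1`, a stop-snapshot signal message could look like the following (the table name is illustrative; the format mirrors the execute-snapshot example shown earlier):

Key = `dbserver1`

Value = `{"type":"stop-snapshot","data": {"data-collections": ["inventory.customers"], "type": "INCREMENTAL"}}`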
@ -622,9 +622,9 @@ By default, the MySQL connector writes change events for all of the `INSERT`, `U
The connector uses the following convention to name change event topics:
_serverName.databaseName.tableName_
_topicPrefix.databaseName.tableName_
Suppose that `fulfillment` is the server name, `inventory` is the database name, and the database contains tables named `orders`, `customers`, and `products`.
Suppose that `fulfillment` is the topic prefix, `inventory` is the database name, and the database contains tables named `orders`, `customers`, and `products`.
The {prodname} MySQL connector emits events to three Kafka topics, one for each table in the database:
----
@ -635,7 +635,7 @@ fulfillment.inventory.products
The following list provides definitions for the components of the default name:
_serverName_:: The logical name of the server as specified by the xref:mysql-property-database-server-name[`database.server.name`] connector configuration property.
_topicPrefix_:: The topic prefix as specified by the xref:mysql-property-topic-prefix[`topic.prefix`] connector configuration property.
_databaseName_:: The name of the database in which the operation occurred.
@ -700,7 +700,7 @@ If the data source does not provide {prodname} with the event time, then the fie
----
Unless overridden via the xref:mysql-property-topic-transaction[`topic.transaction`] option,
the connector emits transaction events to the xref:mysql-property-database-server-name[`_<database.server.name>_`]`.transaction` topic.
the connector emits transaction events to the xref:mysql-property-topic-prefix[`_<topic.prefix>_`]`.transaction` topic.
.Change data event enrichment
@ -2230,7 +2230,7 @@ and captures changes to the `inventory` database.
database.user: debezium
database.password: dbz
database.server.id: 184054 // <5>
database.server.name: dbserver1 // <6>
topic.prefix: dbserver1 // <6>
database.include.list: inventory // <7>
schema.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092 // <8>
schema.history.kafka.topic: schema-changes.inventory // <9>
@ -2263,7 +2263,7 @@ those tasks will be redistributed to running services.
|Unique ID of the connector.
|6
|Logical name of the MySQL server or cluster.
|Topic prefix for the MySQL server or cluster.
This name is used as the prefix for all Kafka topics that receive change event records.
|7
@ -2310,7 +2310,7 @@ Optionally, you can ignore, mask, or truncate columns that contain sensitive dat
"database.user": "debezium-user", // <5>
"database.password": "debezium-user-pw", // <6>
"database.server.id": "184054", <7>
"database.server.name": "fullfillment", // <8>
"topic.prefix": "fullfillment", // <8>
"database.include.list": "inventory", // <9>
"schema.history.kafka.bootstrap.servers": "kafka:9092", // <10>
"schema.history.kafka.topic": "dbhistory.fullfillment", // <11>
@ -2325,7 +2325,7 @@ Optionally, you can ignore, mask, or truncate columns that contain sensitive dat
<5> MySQL user with the appropriate privileges.
<6> MySQL user's password.
<7> Unique ID of the connector.
<8> Logical name of the MySQL server or cluster.
<8> Topic prefix for the MySQL server or cluster.
<9> List of databases hosted by the specified server.
<10> List of Kafka brokers that the connector uses to write and recover DDL statements to the database schema history topic.
<11> Name of the database schema history topic. This topic is for internal use only and should not be used by consumers.
@ -2425,9 +2425,9 @@ The following configuration properties are _required_ unless a default value is
|No default
|Password to use when connecting to the MySQL database server.
|[[mysql-property-database-server-name]]<<mysql-property-database-server-name, `+database.server.name+`>>
|[[mysql-property-topic-prefix]]<<mysql-property-topic-prefix, `+topic.prefix+`>>
|No default
|Logical name that identifies and provides a namespace for the particular MySQL database server/cluster in which {prodname} is capturing changes. The logical name should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector.
|Topic prefix that provides a namespace for the particular MySQL database server/cluster in which {prodname} is capturing changes. The topic prefix should be unique across all other connectors, since it is used as a prefix for all Kafka topic names that receive events emitted by this connector.
Only alphanumeric characters, hyphens, dots, and underscores can be used in the topic prefix name. +
+
[WARNING]
@ -2931,14 +2931,6 @@ endif::community[]
|`.`
|Specify the delimiter for the topic name; defaults to `.`.
|[[mysql-property-topic-prefix]]<<mysql-property-topic-prefix, `topic.prefix`>>
|`${database.server.name}`
|The name of the prefix to be used for all topics, defaults to xref:mysql-property-database-server-name[`${database.server.name}`].
[NOTE]
====
Once specify the prefix value to this property, the xref:mysql-property-database-server-name[`${database.server.name}`] will not play the prefix role of all topics.
====
|[[mysql-property-topic-cache-size]]<<mysql-property-topic-cache-size, `topic.cache.size`>>
|`10000`
|The size used for holding the topic names in a bounded concurrent hash map. This cache helps to determine the topic name that corresponds to a given data collection.
@ -2949,7 +2941,7 @@ Once specify the prefix value to this property, the xref:mysql-property-database
+
_topic.heartbeat.prefix_._topic.prefix_ +
+
For example, if the database server name or topic prefix is `fulfillment`, the default topic name is `__debezium-heartbeat.fulfillment`.
For example, if the topic prefix is `fulfillment`, the default topic name is `__debezium-heartbeat.fulfillment`.
|[[mysql-property-topic-transaction]]<<mysql-property-topic-transaction, `topic.transaction`>>
|`transaction`
@ -2957,7 +2949,7 @@ For example, if the database server name or topic prefix is `fulfillment`, the d
+
_topic.prefix_._topic.transaction_ +
+
For example, if the database server name or topic prefix is `fulfillment`, the default topic name is `fulfillment.transaction`.
For example, if the topic prefix is `fulfillment`, the default topic name is `fulfillment.transaction`.
|===

View File

@ -150,11 +150,11 @@ The {prodname} connector for Oracle does not support schema changes while an inc
By default, the Oracle connector writes change events for all `INSERT`, `UPDATE`, and `DELETE` operations that occur in a table to a single Apache Kafka topic that is specific to that table.
The connector uses the following convention to name change event topics:
_serverName.schemaName.tableName_
_topicPrefix.schemaName.tableName_
The following list provides definitions for the components of the default name:
_serverName_:: The logical name of the server as specified by the xref:oracle-property-database-server-name[`database.server.name`] connector configuration property.
_topicPrefix_:: The topic prefix as specified by the xref:oracle-property-topic-prefix[`topic.prefix`] connector configuration property.
_schemaName_:: The name of the schema in which the operation occurred.
@ -182,7 +182,7 @@ For more information about using the logical topic routing SMT to customize topi
=== Schema change topic
You can configure a {prodname} Oracle connector to produce schema change events that describe structural changes that are applied to captured tables in the database.
The connector writes schema change events to a Kafka topic named `_<serverName>_`, where `_serverName_` is the logical server name that is specified in the xref:oracle-property-database-server-name[`database.server.name`] configuration property.
The connector writes schema change events to a Kafka topic named `_<topicPrefix>_`, where `_topicPrefix_` is the namespace that is specified in the xref:oracle-property-topic-prefix[`topic.prefix`] configuration property.
{prodname} emits a new message to this topic whenever it streams data from a new table.
@ -480,7 +480,7 @@ The following example shows a typical transaction boundary message:
----
Unless overridden via the xref:oracle-property-topic-transaction[`topic.transaction`] option,
the connector emits transaction events to the xref:oracle-property-database-server-name[`_<database.server.name>_`]`.transaction` topic.
the connector emits transaction events to the xref:oracle-property-topic-prefix[`_<topic.prefix>_`]`.transaction` topic.
// Type: concept
// ModuleID: how-the-debezium-oracle-connector-enriches-change-event-messages-with-transaction-metadata
@ -718,7 +718,7 @@ CREATE TABLE customers (
);
----
If the value of the xref:oracle-property-database-server-name[`_<database.server.name>_`]`.transaction` configuration property is set to `server1`,
If the value of the xref:oracle-property-topic-prefix[`topic.prefix`] configuration property is set to `server1`,
the JSON representation for every change event that occurs in the `customers` table in the database features the following key structure:
[source,json,indent=0,sub="attributes"]
@ -2204,7 +2204,7 @@ spec:
database.password: dbz // <6>
database.dbname: ORCLCDB // <7>
database.pdb.name: ORCLPDB1 // <8>
database.server.name: server1 // <9>
topic.prefix: server1 // <9>
schema.history.kafka.bootstrap.servers: kafka:9092 // <10>
schema.history.kafka.topic: schema-changes.inventory // <11>
----
@ -2239,7 +2239,7 @@ spec:
|The name of the Oracle pluggable database that the connector captures changes from. Used in container database (CDB) installations only.
|9
|Logical name that identifies and provides a namespace for the Oracle database server from which the connector captures changes.
|The topic prefix, which identifies and provides a namespace for the Oracle database server from which the connector captures changes.
|10
|The list of Kafka brokers that this connector uses to write and recover DDL statements to the database schema history topic.
@ -2282,7 +2282,7 @@ Optionally, you can ignore, mask, or truncate columns that contain sensitive dat
"database.user" : "c##dbzuser", // <5>
"database.password" : "dbz", // <6>
"database.dbname" : "ORCLCDB", // <7>
"database.server.name" : "server1", // <8>
"topic.prefix" : "server1", // <8>
"tasks.max" : "1", // <9>
"database.pdb.name" : "ORCLPDB1", // <10>
"schema.history.kafka.bootstrap.servers" : "kafka:9092", // <11>
@ -2297,7 +2297,7 @@ Optionally, you can ignore, mask, or truncate columns that contain sensitive dat
<5> The name of the Oracle user, as specified in xref:{link-oracle-connector}#creating-users-for-the-connector[Creating users for the connector].
<6> The password for the Oracle user, as specified in xref:{link-oracle-connector}#creating-users-for-the-connector[Creating users for the connector].
<7> The name of the database to capture changes from.
<8> Logical name that identifies and provides a namespace for the Oracle database server from which the connector captures changes.
<8> Topic prefix that identifies and provides a namespace for the Oracle database server from which the connector captures changes.
<9> The maximum number of tasks to create for this connector.
<10> The name of the Oracle pluggable database that the connector captures changes from. Used in container database (CDB) installations only.
<11> The list of Kafka brokers that this connector uses to write and recover DDL statements to the database schema history topic.
@ -2316,7 +2316,7 @@ The following JSON example shows the same configuration as in the preceding exam
"config": {
"connector.class" : "io.debezium.connector.oracle.OracleConnector",
"tasks.max" : "1",
"database.server.name" : "server1",
"topic.prefix" : "server1",
"database.user" : "c##dbzuser",
"database.password" : "dbz",
"database.url": "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=OFF)(FAILOVER=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=<oracle ip 1>)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=<oracle ip 2>)(PORT=1521)))(CONNECT_DATA=SERVICE_NAME=)(SERVER=DEDICATED)))",
@ -2388,7 +2388,7 @@ ifdef::community[]
"config": {
"connector.class" : "io.debezium.connector.oracle.OracleConnector",
"tasks.max" : "1",
"database.server.name" : "server1",
"topic.prefix" : "server1",
"database.hostname" : "<oracle ip>",
"database.port" : "1521",
"database.user" : "c##dbzuser",
@ -2414,7 +2414,7 @@ For non-CDB installation, do *not* specify the `database.pdb.name` property.
"config": {
"connector.class" : "io.debezium.connector.oracle.OracleConnector",
"tasks.max" : "1",
"database.server.name" : "server1",
"topic.prefix" : "server1",
"database.hostname" : "<oracle ip>",
"database.port" : "1521",
"database.user" : "c##dbzuser",
@ -2528,11 +2528,11 @@ endif::product[]
|No default
|Name of the Oracle pluggable database to connect to. Use this property with container database (CDB) installations only.
|[[oracle-property-database-server-name]]<<oracle-property-database-server-name, `+database.server.name+`>>
|[[oracle-property-topic-prefix]]<<oracle-property-topic-prefix, `+topic.prefix+`>>
|No default
|Logical name that identifies and provides a namespace for the Oracle database server from which the connector captures changes.
|Topic prefix that provides a namespace for the Oracle database server from which the connector captures changes.
The value that you set is used as a prefix for all Kafka topic names that the connector emits.
Specify a logical name that is unique among all connectors in your {prodname} environment.
Specify a topic prefix that is unique among all connectors in your {prodname} environment.
The following characters are valid: alphanumeric characters, hyphens, dots, and underscores. +
+
[WARNING]
@ -3127,14 +3127,6 @@ Adjust the chunk size to a value that provides the best performance in your envi
|`.`
|Specify the delimiter for the topic name; defaults to `.`.
|[[oracle-property-topic-prefix]]<<oracle-property-topic-prefix, `topic.prefix`>>
|`${database.server.name}`
|The name of the prefix to be used for all topics, defaults to xref:oracle-property-database-server-name[`${database.server.name}`].
[NOTE]
====
Once specify the prefix value to this property, the xref:oracle-property-database-server-name[`${database.server.name}`] will not play the prefix role of all topics.
====
|[[oracle-property-topic-cache-size]]<<oracle-property-topic-cache-size, `topic.cache.size`>>
|`10000`
|The size used for holding the topic names in a bounded concurrent hash map. This cache helps to determine the topic name that corresponds to a given data collection.
@ -3145,7 +3137,7 @@ Once specify the prefix value to this property, the xref:oracle-property-databas
+
_topic.heartbeat.prefix_._topic.prefix_ +
+
For example, if the database server name or topic prefix is `fulfillment`, the default topic name is `__debezium-heartbeat.fulfillment`.
For example, if the topic prefix is `fulfillment`, the default topic name is `__debezium-heartbeat.fulfillment`.
|[[oracle-property-topic-transaction]]<<oracle-property-topic-transaction, `topic.transaction`>>
|`transaction`
@ -3153,7 +3145,7 @@ For example, if the database server name or topic prefix is `fulfillment`, the d
+
_topic.prefix_._topic.transaction_ +
+
For example, if the database server name or topic prefix is `fulfillment`, the default topic name is `fulfillment.transaction`.
For example, if the topic prefix is `fulfillment`, the default topic name is `fulfillment.transaction`.
|===
@ -3787,7 +3779,7 @@ The following configuration example adds the properties `database.connection.ada
"config": {
"connector.class" : "io.debezium.connector.oracle.OracleConnector",
"tasks.max" : "1",
"database.server.name" : "server1",
"topic.prefix" : "server1",
"database.hostname" : "<oracle ip>",
"database.port" : "1521",
"database.user" : "c##dbzuser",

View File

@ -333,11 +333,11 @@ See xref:{link-postgresql-connector}#setting-up-postgresql[Setting up PostgreSQL
By default, the PostgreSQL connector writes change events for all `INSERT`, `UPDATE`, and `DELETE` operations that occur in a table to a single Apache Kafka topic that is specific to that table.
The connector uses the following convention to name change event topics:
_serverName.schemaName.tableName_
_topicPrefix.schemaName.tableName_
The following list provides definitions for the components of the default name:
_serverName_:: The logical name of the connector, as specified by the xref:postgresql-property-database-server-name[`database.server.name`] configuration property.
_topicPrefix_:: The topic prefix as specified by the xref:postgresql-property-topic-prefix[`topic.prefix`] configuration property.
_schemaName_:: The name of the database schema in which the change event occurred.
_tableName_:: The name of the database table in which the change event occurred.
@ -382,7 +382,7 @@ In addition to a xref:{link-postgresql-connector}#postgresql-events[_database ch
"kafkaPartition": null
----
* `sourcePartition` always defaults to the setting of the `database.server.name` connector configuration property.
* `sourcePartition` always defaults to the setting of the `topic.prefix` connector configuration property.
* `sourceOffset` contains information about the location of the server where the event occurred:
@ -446,7 +446,7 @@ If the data source does not provide {prodname} with the event time, then the fie
----
Unless overridden via the xref:postgresql-property-topic-transaction[`topic.transaction`] option,
transaction events are written to the topic named xref:postgresql-property-database-server-name[`_<database.server.name>_`]`.transaction`.
transaction events are written to the topic named xref:postgresql-property-topic-prefix[`_<topic.prefix>_`]`.transaction`.
.Change data event enrichment
@ -581,7 +581,7 @@ CREATE TABLE customers (
----
.Example change event key
If the `database.server.name` connector configuration property has the value `PostgreSQL_server`, every change event for the `customers` table while it has this definition has the same key structure, which in JSON looks like this:
If the `topic.prefix` connector configuration property has the value `PostgreSQL_server`, every change event for the `customers` table while it has this definition has the same key structure, which in JSON looks like this:
[source,json,indent=0]
----
@ -2448,7 +2448,7 @@ apiVersion: {KafkaConnectorApiVersion}
database.user: debezium
database.password: dbz
database.dbname: sampledb
database.server.name: fulfillment // <5>
topic.prefix: fulfillment // <5>
schema.include.list: public // <6>
plugin.name: pgoutput // <7>
----
@ -2462,7 +2462,7 @@ If any of the services stop or crash,
those tasks will be redistributed to running services.
<3> The connector's configuration.
<4> The name of the database host that is running the PostgreSQL server. In this example, the database host name is `192.168.99.100`.
<5> A unique server name.
<5> A unique topic prefix.
The topic prefix serves as the logical identifier for the PostgreSQL server or cluster of servers.
This name is used as the prefix for all Kafka topics that receive change event records.
<6> The connector captures changes in only the `public` schema. It is possible to configure the connector to capture changes in only the tables that you choose. See xref:{link-postgresql-connector}#postgresql-property-table-include-list[`table.include.list` connector configuration property].
@ -2504,7 +2504,7 @@ Optionally, you can ignore, mask, or truncate columns that contain sensitive dat
"database.user": "postgres", // <5>
"database.password": "postgres", // <6>
"database.dbname" : "postgres", // <7>
"database.server.name": "fulfillment", // <8>
"topic.prefix": "fulfillment", // <8>
"table.include.list": "public.inventory" // <9>
}
@ -2517,7 +2517,7 @@ Optionally, you can ignore, mask, or truncate columns that contain sensitive dat
<5> The name of the PostgreSQL user that has the xref:{link-postgresql-connector}#postgresql-permissions[required privileges].
<6> The password for the PostgreSQL user that has the xref:{link-postgresql-connector}#postgresql-permissions[required privileges].
<7> The name of the PostgreSQL database to connect to.
<8> The logical name of the PostgreSQL server/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used.
<8> The topic prefix for the PostgreSQL server/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used.
<9> A list of all tables hosted by this server that this connector will monitor. This is optional, and there are other properties for listing the schemas and tables to include or exclude from monitoring.
See the xref:{link-postgresql-connector}#postgresql-connector-properties[complete list of PostgreSQL connector properties] that can be specified in these configurations.
@ -2650,10 +2650,10 @@ If the publication already exists, either for all tables or configured with a su
|No default
|The name of the PostgreSQL database from which to stream the changes.
|[[postgresql-property-database-server-name]]<<postgresql-property-database-server-name, `+database.server.name+`>>
|[[postgresql-property-topic-prefix]]<<postgresql-property-topic-prefix, `+topic.prefix+`>>
|No default
|Logical name that identifies and provides a namespace for the particular PostgreSQL database server or cluster in which {prodname} is capturing changes.
The logical name should be unique across all other connectors, since it is used as a topic name prefix for all Kafka topics that receive records from this connector.
|Topic prefix that provides a namespace for the particular PostgreSQL database server or cluster in which {prodname} is capturing changes.
The prefix should be unique across all other connectors, since it is used as a topic name prefix for all Kafka topics that receive records from this connector.
Only alphanumeric characters, hyphens, dots, and underscores can be used in the topic prefix name. +
+
[WARNING]
@ -3155,14 +3155,6 @@ The default value of `0` disables tracking XMIN tracking.
|`.`
|Specify the delimiter for the topic name; defaults to `.`.
|[[postgresql-property-topic-prefix]]<<postgresql-property-topic-prefix, `topic.prefix`>>
|`${database.server.name}`
|The name of the prefix to be used for all topics, defaults to xref:postgresql-property-database-server-name[`${database.server.name}`].
[NOTE]
====
Once specify the prefix value to this property, the xref:postgresql-property-database-server-name[`${database.server.name}`] will not play the prefix role of all topics.
====
|[[postgresql-property-topic-cache-size]]<<postgresql-property-topic-cache-size, `topic.cache.size`>>
|`10000`
|The size used for holding the topic names in a bounded concurrent hash map. This cache helps to determine the topic name that corresponds to a given data collection.
@ -3173,7 +3165,7 @@ Once specify the prefix value to this property, the xref:postgresql-property-dat
+
_topic.heartbeat.prefix_._topic.prefix_ +
+
For example, if the database server name or topic prefix is `fulfillment`, the default topic name is `__debezium-heartbeat.fulfillment`.
For example, if the topic prefix is `fulfillment`, the default topic name is `__debezium-heartbeat.fulfillment`.
|[[postgresql-property-topic-transaction]]<<postgresql-property-topic-transaction, `topic.transaction`>>
|`transaction`
@ -3181,7 +3173,7 @@ For example, if the database server name or topic prefix is `fulfillment`, the d
+
_topic.prefix_._topic.transaction_ +
+
For example, if the database server name or topic prefix is `fulfillment`, the default topic name is `fulfillment.transaction`.
For example, if the topic prefix is `fulfillment`, the default topic name is `fulfillment.transaction`.
|===

View File

@ -219,11 +219,11 @@ As consequence, capturing changes from indexed views (aka. materialized views) i
By default, the SQL Server connector writes events for all `INSERT`, `UPDATE`, and `DELETE` operations that occur in a table to a single Apache Kafka topic that is specific to that table.
The connector uses the following convention to name change event topics:
`_<serverName>_._<schemaName>_._<tableName>_`
`_<topicPrefix>_._<schemaName>_._<tableName>_`
The following list provides definitions for the components of the default name:
_serverName_:: The logical name of the server, as specified by the xref:sqlserver-property-database-server-name[`database.server.name`] configuration property.
_topicPrefix_:: The topic prefix, as specified by the xref:sqlserver-property-topic-prefix[`topic.prefix`] configuration property.
_schemaName_:: The name of the database schema in which the change event occurred.
_tableName_:: The name of the database table in which the change event occurred.
@ -249,7 +249,7 @@ For more information about using the logical topic routing SMT to customize topi
=== Schema change topic
For each table for which CDC is enabled, the {prodname} SQL Server connector stores a history of the schema change events that are applied to captured tables in the database.
The connector writes schema change events to a Kafka topic named `_<serverName>_`, where `_serverName_` is the logical server name that is specified in the xref:sqlserver-property-database-server-name[`database.server.name`] configuration property.
The connector writes schema change events to a Kafka topic named `_<topicPrefix>_`, where `_topicPrefix_` is the topic prefix that is specified in the xref:sqlserver-property-topic-prefix[`topic.prefix`] configuration property.
Messages that the connector sends to the schema change topic contain a payload, and, optionally, also contain the schema of the change event message.
The payload of a schema change event message includes the following elements:
@ -1141,7 +1141,7 @@ The following example shows a typical transaction boundary message:
----
Unless overridden via the xref:sqlserver-property-topic-transaction[`topic.transaction`] option,
transaction events are written to the topic named xref:sqlserver-property-database-server-name[`_<database.server.name>_`]`.transaction`.
transaction events are written to the topic named xref:sqlserver-property-topic-prefix[`_<topic.prefix>_`]`.transaction`.
//Type: concept
//ModuleID: change-data-event-enrichment
@ -1906,7 +1906,7 @@ spec:
database.user: debezium // <5>
database.password: dbz // <6>
database.names: testDB1,testDB2 // <7>
database.server.name: fullfullment // <8>
topic.prefix: fullfillment // <8>
database.include.list: dbo.customers // <9>
schema.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092 // <10>
schema.history.kafka.topic: dbhistory.fullfillment // <11>
@ -1941,7 +1941,7 @@ spec:
|The name of the database to capture changes from.
|8
|The logical name of the SQL Server instance/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the xref:{link-avro-serialization}#avro-serialization[Avro converter] is used.
|The topic prefix for the SQL Server instance/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the xref:{link-avro-serialization}#avro-serialization[Avro converter] is used.
|9
|A list of all tables whose changes {prodname} should capture.
@ -2000,7 +2000,7 @@ Optionally, you can ignore, mask, or truncate columns that contain sensitive dat
"database.user": "sa", // <5>
"database.password": "Password!", // <6>
"database.names": "testDB1,testDB2", // <7>
"database.server.name": "fullfillment", // <8>
"topic.prefix": "fullfillment", // <8>
"table.include.list": "dbo.customers", // <9>
"schema.history.kafka.bootstrap.servers": "kafka:9092", // <10>
"schema.history.kafka.topic": "dbhistory.fullfillment" // <11>
@ -2016,7 +2016,7 @@ Optionally, you can ignore, mask, or truncate columns that contain sensitive dat
<5> The name of the SQL Server user
<6> The password for the SQL Server user
<7> The name of the database to capture changes from.
<8> The logical name of the SQL Server instance/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the xref:{link-avro-serialization}#avro-serialization[Avro converter] is used.
<8> The topic prefix for the SQL Server instance/cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the xref:{link-avro-serialization}#avro-serialization[Avro converter] is used.
<9> A list of all tables whose changes {prodname} should capture.
<10> The list of Kafka brokers that this connector will use to write and recover DDL statements to the database schema history topic.
<11> The name of the database schema history topic where the connector will write and recover DDL statements. This topic is for internal use only and should not be used by consumers.
@ -2124,10 +2124,10 @@ ifdef::community[]
|No default
|The comma-separated list of the SQL Server database names from which to stream the changes.
endif::community[]
|[[sqlserver-property-database-server-name]]<<sqlserver-property-database-server-name, `+database.server.name+`>>
|[[sqlserver-property-topic-prefix]]<<sqlserver-property-topic-prefix, `+topic.prefix+`>>
|No default
|Logical name that identifies and provides a namespace for the SQL Server database server that you want {prodname} to capture.
The logical name should be unique across all other connectors, since it is used as the prefix for all Kafka topic names that receive records from this connector.
|Topic prefix that provides a namespace for the SQL Server database server that you want {prodname} to capture.
The prefix should be unique across all other connectors, since it is used as the prefix for all Kafka topic names that receive records from this connector.
Only alphanumeric characters, hyphens, dots, and underscores can be used in the topic prefix name. +
+
[WARNING]
@ -2137,7 +2137,6 @@ If you change the name value, after a restart, instead of continuing to emit eve
The connector is also unable to recover its database schema history topic.
====
|[[sqlserver-property-schema-include-list]]<<sqlserver-property-schema-include-list, `+schema.include.list+`>>
|No default
|An optional, comma-separated list of regular expressions that match names of schemas for which you *want* to capture changes. Any schema name not included in `schema.include.list` is excluded from having its changes captured. By default, all non-system schemas have their changes captured. Do not also set the `schema.exclude.list` property.
@ -2526,14 +2525,6 @@ When set to a value greater than zero, the connector uses the n-th LSN specified
|`.`
|Specify the delimiter for the topic name; defaults to `.`.
|[[sqlserver-property-topic-prefix]]<<sqlserver-property-topic-prefix, `topic.prefix`>>
|`${database.server.name}`
|The name of the prefix to be used for all topics, defaults to xref:sqlserver-property-database-server-name[`${database.server.name}`].
[NOTE]
====
Once specify the prefix value to this property, the xref:sqlserver-property-database-server-name[`${database.server.name}`] will not play the prefix role of all topics.
====
|[[sqlserver-property-topic-cache-size]]<<sqlserver-property-topic-cache-size, `topic.cache.size`>>
|`10000`
|The size used for holding the topic names in a bounded concurrent hash map. This cache helps to determine the topic name that corresponds to a given data collection.
@ -2544,7 +2535,7 @@ Once specify the prefix value to this property, the xref:sqlserver-property-data
+
_topic.heartbeat.prefix_._topic.prefix_ +
+
For example, if the database server name or topic prefix is `fulfillment`, the default topic name is `__debezium-heartbeat.fulfillment`.
For example, if the topic prefix is `fulfillment`, the default topic name is `__debezium-heartbeat.fulfillment`.
|[[sqlserver-property-topic-transaction]]<<sqlserver-property-topic-transaction, `topic.transaction`>>
|`transaction`
@ -2552,7 +2543,7 @@ For example, if the database server name or topic prefix is `fulfillment`, the d
+
_topic.prefix_._topic.transaction_ +
+
For example, if the database server name or topic prefix is `fulfillment`, the default topic name is `fulfillment.transaction`.
For example, if the topic prefix is `fulfillment`, the default topic name is `fulfillment.transaction`.
For more information, see xref:sqlserver-transaction-metadata[Transaction Metadata].

View File

@ -67,9 +67,9 @@ When Kafka Connect gracefully shuts down, it stops the connectors, flushes all e
[[vitess-topic-names]]
=== Topic names
The Vitess connector writes events for all insert, update, and delete operations on a single table to a single Kafka topic. By default, the Kafka topic name is _serverName_._keyspaceName_._tableName_ where:
The Vitess connector writes events for all insert, update, and delete operations on a single table to a single Kafka topic. By default, the Kafka topic name is _topicPrefix_._keyspaceName_._tableName_ where:
* _serverName_ is the logical name of the connector as specified with the `database.server.name` connector configuration property.
* _topicPrefix_ is the topic prefix as specified by the `topic.prefix` connector configuration property.
* _keyspaceName_ is the name of the keyspace (a.k.a. database) where the operation occurred.
* _tableName_ is the name of the database table in which the operation occurred.
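For example, if the topic prefix is `fullfillment` (as in the registration example later in this file) and the keyspace is named `commerce`, change events for a `customers` table are written to the following topic (the keyspace and table names are illustrative):

----
fullfillment.commerce.customers
----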
@ -129,7 +129,7 @@ If the data source does not provide {prodname} with the event time, then the fie
----
Unless overridden via the xref:vitess-property-topic-transaction[`topic.transaction`] option,
the connector emits transaction events to the xref:vitess-property-database-server-name[`_<database.server.name>_`]`.transaction` topic.
the connector emits transaction events to the xref:vitess-property-topic-prefix[`_<topic.prefix>_`]`.transaction` topic.
.Change data event enrichment
@ -261,7 +261,7 @@ CREATE TABLE customers (
----
.Example change event key
If the `database.server.name` connector configuration property has the value `Vitess_server`, every change event for the `customers` table while it has this definition has the same key structure, which in JSON looks like this:
If the `topic.prefix` connector configuration property has the value `Vitess_server`, every change event for the `customers` table while it has this definition has the same key structure, which in JSON looks like this:
[source,json,indent=0]
----
@ -964,7 +964,7 @@ You can choose to produce events for a subset of the schemas and tables. Optiona
"vitess.vtctld.port": "15999", // <10>
"vitess.vtctld.user": "vitess", // <11>
"vitess.vtctld.password": "vitess_password", // <12>
"database.server.name": "fullfillment", // <13>
"topic.prefix": "fullfillment", // <13>
"tasks.max": 1 // <14>
}
}
@ -981,7 +981,7 @@ You can choose to produce events for a subset of the schemas and tables. Optiona
<10> The port of the VTCtld server.
<11> The username of the VTCtld server (VTCtld gRPC).
<12> The password of the VTCtld database server (VTCtld gRPC).
<13> The logical name of the Vitess cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used.
<13> The topic prefix for the Vitess cluster, which forms a namespace and is used in all the names of the Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro converter is used.
<14> Only one task should operate at any one time.
See the xref:vitess-connector-properties[complete list of Vitess connector properties] that can be specified in these configurations.
@ -1217,11 +1217,11 @@ If set to `true`, you should also consider setting `vitess.gtid` in the configur
+
`RDONLY` represents streaming from the read-only slave MySQL instance.
|[[vitess-property-database-server-name]]<<vitess-property-database-server-name, `+database.server.name+`>>
|[[vitess-property-topic-prefix]]<<vitess-property-topic-prefix, `+topic.prefix+`>>
|
|Logical name that identifies and provides a namespace for the particular Vitess database server or cluster in which {prodname} is capturing changes.
|Topic prefix that provides a namespace for the particular Vitess database server or cluster in which {prodname} is capturing changes.
Only alphanumeric characters, hyphens, dots, and underscores can be used in the topic prefix.
The logical name should be unique across all other connectors, since it is used as a topic name prefix for all Kafka topics that receive records from this connector.
The prefix should be unique across all other connectors, since it is used as a topic name prefix for all Kafka topics that receive records from this connector.
+
[WARNING]
====
@ -1429,14 +1429,6 @@ See xref:vitess-data-types[how Vitess connectors map data types] for the list of
|`.`
|Specify the delimiter for the topic name; defaults to `.`.
|[[vitess-property-topic-prefix]]<<vitess-property-topic-prefix, `topic.prefix`>>
|`${database.server.name}`
|The name of the prefix to be used for all topics, defaults to xref:vitess-property-database-server-name[`${database.server.name}`].
[NOTE]
====
Once specify the prefix value to this property, the xref:vitess-property-database-server-name[`${database.server.name}`] will not play the prefix role of all topics.
====
|[[vitess-property-topic-cache-size]]<<vitess-property-topic-cache-size, `topic.cache.size`>>
|`10000`
|The size used for holding the topic names in a bounded concurrent hash map. This cache helps to determine the topic name that corresponds to a given data collection.
@ -1447,7 +1439,7 @@ Once specify the prefix value to this property, the xref:vitess-property-databas
+
_topic.prefix_._topic.transaction_ +
+
For example, if the database server name or topic prefix is `fulfillment`, the default topic name is `fulfillment.transaction`.
For example, if the topic prefix is `fulfillment`, the default topic name is `fulfillment.transaction`.
|===

View File

@ -96,7 +96,7 @@ Here's an example of code that configures and runs an embedded xref:{link-mysql-
props.setProperty("database.user", "mysqluser");
props.setProperty("database.password", "mysqlpw");
props.setProperty("database.server.id", "85744");
props.setProperty("database.server.name", "my-app-connector");
props.setProperty("topic.prefix", "my-app-connector");
props.setProperty("schema.history",
"io.debezium.storage.file.history.FileDatabaseHistory");
props.setProperty("schema.history.file.filename",
@ -151,7 +151,7 @@ The next few lines define the fields that are specific to the connector (documen
props.setProperty("database.user", "mysqluser")
props.setProperty("database.password", "mysqlpw")
props.setProperty("database.server.id", "85744")
props.setProperty("database.server.name", "my-app-connector")
props.setProperty("topic.prefix", "my-app-connector")
props.setProperty("schema.history",
"io.debezium.storage.file.history.FileDatabaseHistory")
props.setProperty("schema.history.file.filename",

View File

@ -96,7 +96,7 @@ The following example shows what a CloudEvents change event record emitted by a
}
----
<1> Unique ID that the connector generates for the change event based on the change event's content.
<2> The source of the event, which is the logical name of the database as specified by the `database.server.name` property in the connector's configuration.
<2> The source of the event, which is the topic prefix for the database as specified by the `topic.prefix` property in the connector's configuration.
<3> The CloudEvents specification version.
<4> Connector type that generated the change event. The format of this field is `io.debezium._CONNECTOR_TYPE_.datachangeevent`. The value of `_CONNECTOR_TYPE_` is `mongodb`, `mysql`, `postgresql`, or `sqlserver`.
<5> Time of the change in the source database.

View File

@ -127,7 +127,7 @@ public void canRegisterPostgreSqlConnector() throws Exception {
ConnectorConfiguration connector = ConnectorConfiguration
.forJdbcContainer(postgresContainer)
.with("database.server.name", "dbserver1");
.with("topic.prefix", "dbserver1");
debeziumContainer.registerConnector("my-connector",
connector); // <2>

View File

@ -84,7 +84,7 @@ debezium.source.database.port=5432
debezium.source.database.user=postgres
debezium.source.database.password=postgres
debezium.source.database.dbname=postgres
debezium.source.database.server.name=tutorial
debezium.source.topic.prefix=tutorial
debezium.source.schema.include.list=inventory
----
@ -171,7 +171,7 @@ __ ____ __ _____ ___ __ ____ ______
2020-05-15 11:33:12,838 INFO [io.deb.con.com.BaseSourceTask] (pool-3-thread-1) database.hostname = localhost
2020-05-15 11:33:12,838 INFO [io.deb.con.com.BaseSourceTask] (pool-3-thread-1) database.password = ********
2020-05-15 11:33:12,839 INFO [io.deb.con.com.BaseSourceTask] (pool-3-thread-1) name = kinesis
2020-05-15 11:33:12,839 INFO [io.deb.con.com.BaseSourceTask] (pool-3-thread-1) database.server.name = tutorial
2020-05-15 11:33:12,839 INFO [io.deb.con.com.BaseSourceTask] (pool-3-thread-1) topic.prefix = tutorial
2020-05-15 11:33:12,839 INFO [io.deb.con.com.BaseSourceTask] (pool-3-thread-1) database.port = 5432
2020-05-15 11:33:12,839 INFO [io.deb.con.com.BaseSourceTask] (pool-3-thread-1) schema.include.list = inventory
2020-05-15 11:33:12,908 INFO [io.quarkus] (main) debezium-server 1.2.0-SNAPSHOT (powered by Quarkus 1.4.1.Final) started in 1.198s. Listening on: http://0.0.0.0:8080

View File

@ -84,7 +84,7 @@ Configuration config = Configuration.create()
.with("database.user", "mysqluser")
.with("database.password", "mysqlpw")
.with("database.server.id", 85744)
.with("database.server.name", "my-app-connector")
.with("topic.prefix", "my-app-connector")
.with("schema.history",
"io.debezium.storage.file.history.FileDatabaseHistory")
.with("schema.history.file.filename",
@ -136,7 +136,7 @@ The next few lines define the fields that are specific to the connector, which i
.with("database.user", "mysqluser")
.with("database.password", "mysqlpw")
.with("database.server.id", 85744)
.with("database.server.name", "products")
.with("topic.prefix", "products")
.with("schema.history",
"io.debezium.storage.file.history.FileDatabaseHistory")
.with("schema.history.file.filename",

View File

@ -374,7 +374,7 @@ spec:
database.user: ${secrets:debezium-example/debezium-secret:username}
database.password: ${secrets:debezium-example/debezium-secret:password}
database.server.id: 184054
database.server.name: mysql
topic.prefix: mysql
database.include.list: inventory
schema.history.kafka.bootstrap.servers: debezium-cluster-kafka-bootstrap:9092
schema.history.kafka.topic: schema-changes.inventory

View File

@ -352,7 +352,7 @@ spec:
database.user: ${secrets:debezium-example/debezium-secret:username}
database.password: ${secrets:debezium-example/debezium-secret:password}
database.server.id: 184054
database.server.name: mysql
topic.prefix: mysql
database.include.list: inventory
schema.history.kafka.bootstrap.servers: debezium-cluster-kafka-bootstrap:9092
schema.history.kafka.topic: schema-changes.inventory

View File

@ -71,7 +71,7 @@ spec:
database.user: debezium // <7>
database.password: dbz // <8>
database.dbname: mydatabase // <9>
database.server.name: inventory_connector_{context} // <10>
topic.prefix: inventory_connector_{context} // <10>
database.include.list: public.inventory // <11>
----
@ -110,10 +110,10 @@ spec:
|The name of the database to capture changes from.
|10
|The logical name of the database instance or cluster. +
|The topic prefix for the database instance or cluster. +
The specified name must be formed only from alphanumeric characters or underscores. +
Because the logical name is used as the prefix for any Kafka topics that receive change events from this connector, the name must be unique among the connectors in the cluster. +
The namespace is also used in the names of related Kafka Connect schemas, and the namespaces of a corresponding Avro schema if you integrate the connector with the xref:{link-avro-serialization}#avro-serialization[Avro connector].
Because the topic prefix is used as the prefix for any Kafka topics that receive change events from this connector, the prefix must be unique among the connectors in the cluster. +
This namespace is also used in the names of related Kafka Connect schemas, and the namespaces of a corresponding Avro schema if you integrate the connector with the xref:{link-avro-serialization}#avro-serialization[Avro connector].
|11
|The list of tables from which the connector captures change events.

View File

@ -42,7 +42,7 @@ you will register the following connector:
"database.user": "debezium",
"database.password": "dbz",
"database.server.id": "184054", // <5>
"database.server.name": "dbserver1", // <5>
"topic.prefix": "dbserver1", // <5>
"database.include.list": "inventory", // <6>
"schema.history.kafka.bootstrap.servers": "kafka:9092", // <7>
"schema.history.kafka.topic": "schema-changes.inventory" // <7>
@ -61,7 +61,7 @@ If any of the services stop or crash, those tasks will be redistributed to runni
which is the name of the Docker container running the MySQL server (`mysql`).
Docker manipulates the network stack within the containers so that each linked container can be resolved with `/etc/hosts` using the container name for the host name.
If MySQL were running on a normal network, you would specify the IP address or resolvable host name for this value.
<5> A unique server ID and name. The server name is the logical identifier for the MySQL server or cluster of servers.
<5> A unique server ID and topic prefix. The topic prefix is the logical identifier for the MySQL server or cluster of servers.
This prefix will be used for all Kafka topic names.
<6> Only changes in the `inventory` database will be detected.
<7> The connector will store the history of the database schemas in Kafka using this broker (the same broker to which you are sending events) and topic name.
@ -89,7 +89,7 @@ replace `localhost` with the IP address of of your Docker host.
[source,shell,options="nowrap"]
----
$ curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '{ "name": "inventory-connector", "config": { "connector.class": "io.debezium.connector.mysql.MySqlConnector", "tasks.max": "1", "database.hostname": "mysql", "database.port": "3306", "database.user": "debezium", "database.password": "dbz", "database.server.id": "184054", "database.server.name": "dbserver1", "database.include.list": "inventory", "schema.history.kafka.bootstrap.servers": "kafka:9092", "schema.history.kafka.topic": "dbhistory.inventory" } }'
$ curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '{ "name": "inventory-connector", "config": { "connector.class": "io.debezium.connector.mysql.MySqlConnector", "tasks.max": "1", "database.hostname": "mysql", "database.port": "3306", "database.user": "debezium", "database.password": "dbz", "database.server.id": "184054", "topic.prefix": "dbserver1", "database.include.list": "inventory", "schema.history.kafka.bootstrap.servers": "kafka:9092", "schema.history.kafka.topic": "dbhistory.inventory" } }'
----
ifdef::windows[]
@ -100,7 +100,7 @@ For example:
[source,shell,options="nowrap"]
----
$ curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '{ \"name\": \"inventory-connector\", \"config\": { \"connector.class\": \"io.debezium.connector.mysql.MySqlConnector\", \"tasks.max\": \"1\", \"database.hostname\": \"mysql\", \"database.port\": \"3306\", \"database.user\": \"debezium\", \"database.password\": \"dbz\", \"database.server.id\": \"184054\", \"database.server.name\": \"dbserver1\", \"database.include.list\": \"inventory\", \"schema.history.kafka.bootstrap.servers\": \"kafka:9092\", \"schema.history.kafka.topic\": \"dbhistory.inventory\" } }'
$ curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '{ \"name\": \"inventory-connector\", \"config\": { \"connector.class\": \"io.debezium.connector.mysql.MySqlConnector\", \"tasks.max\": \"1\", \"database.hostname\": \"mysql\", \"database.port\": \"3306\", \"database.user\": \"debezium\", \"database.password\": \"dbz\", \"database.server.id\": \"184054\", \"topic.prefix\": \"dbserver1\", \"database.include.list\": \"inventory\", \"schema.history.kafka.bootstrap.servers\": \"kafka:9092\", \"schema.history.kafka.topic\": \"dbhistory.inventory\" } }'
----
Otherwise, you might see an error like the following:
@ -119,7 +119,7 @@ ifdef::community[]
If you use Podman, run the following command:
[source,shell,options="nowrap",subs="+attributes"]
----
$ curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '{ "name": "inventory-connector", "config": { "connector.class": "io.debezium.connector.mysql.MySqlConnector", "tasks.max": "1", "database.hostname": "0.0.0.0", "database.port": "3306", "database.user": "debezium", "database.password": "dbz", "database.server.id": "184054", "database.server.name": "dbserver1", "database.include.list": "inventory", "schema.history.kafka.bootstrap.servers": "0.0.0.0:9092", "schema.history.kafka.topic": "dbhistory.inventory" } }'
$ curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d '{ "name": "inventory-connector", "config": { "connector.class": "io.debezium.connector.mysql.MySqlConnector", "tasks.max": "1", "database.hostname": "0.0.0.0", "database.port": "3306", "database.user": "debezium", "database.password": "dbz", "database.server.id": "184054", "topic.prefix": "dbserver1", "database.include.list": "inventory", "schema.history.kafka.bootstrap.servers": "0.0.0.0:9092", "schema.history.kafka.topic": "dbhistory.inventory" } }'
----
====
endif::community[]