DBZ-2011 Complete .yaml config examples for connectors

parent e0f37a3a0c
commit 27b103de96
@@ -623,11 +623,11 @@ Typically, you configure the {prodname} MongoDB connector in a `.yaml` file usin
 
 [source,yaml,options="nowrap"]
 ----
-apiVersion:
-kind: MongoDbConnector
+apiVersion: kafka.strimzi.io/v1beta1
+kind: KafkaConnector
 metadata:
   name: inventory-connector // <1>
-  labels:
+  labels: strimzi.io/cluster: my-connect-cluster
 spec:
   class: io.debezium.connector.mongodb.MongoDbConnector // <2>
   config:
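Assembled from the `+` lines above, the completed MongoDB connector resource would read as follows. This is a sketch: the `config` body is elided, the indentation under `metadata` and `spec` is the conventional Strimzi layout rather than something this hunk shows, and note that the committed one-line form `labels: strimzi.io/cluster: my-connect-cluster` is not a valid YAML mapping, so the label is nested here.

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnector
metadata:
  name: inventory-connector
  labels:
    strimzi.io/cluster: my-connect-cluster  # binds the connector to a Kafka Connect cluster
spec:
  class: io.debezium.connector.mongodb.MongoDbConnector
  config:
    # connector-specific properties go here
```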
@@ -1641,11 +1641,11 @@ Typically, you configure the {prodname} PostgreSQL connector in a `.yaml` file u
 
 [source,yaml,options="nowrap"]
 ----
-apiVersion:
-kind: PostgresConnector
+apiVersion: kafka.strimzi.io/v1beta1
+kind: KafkaConnector
 metadata:
   name: inventory-connector // <1>
-  labels:
+  labels: strimzi.io/cluster: my-connect-cluster
 spec:
   class: io.debezium.connector.postgresql.PostgresConnector
   tasksMax: 1 // <2>
@@ -124,7 +124,7 @@ endif::cdc-product[]
 [[how-the-connector-works]]
 == How the SQL Server connector works
 
-[[snapshots]]
+[[snapshots-sqlserver]]
 === Snapshots
 
 SQL Server CDC is not designed to store the complete history of database changes.
@@ -161,7 +161,7 @@ After a restart, the connector will resume from the offset (commit and change LS
 
 The connector is able to detect at runtime whether CDC is enabled or disabled for a whitelisted source table, and to modify its behaviour accordingly.
 
-[[topic-names]]
+[[topic-names-sqlserver]]
 === Topic names
 
 The SQL Server connector writes events for all insert, update, and delete operations on a single table to a single Kafka topic. The name of the Kafka topics always takes the form _serverName_._schemaName_._tableName_, where _serverName_ is the logical name of the connector as specified with the `database.server.name` configuration property, _schemaName_ is the name of the schema where the operation occurred, and _tableName_ is the name of the database table on which the operation occurred.
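As a concrete illustration of the naming scheme above (the server name `fulfillment` and the table `dbo.customers` are hypothetical, not values taken from this commit):

```yaml
# Hypothetical connector config fragment
database.server.name: fulfillment
# A row changed in schema "dbo", table "customers" is then emitted to the
# Kafka topic "fulfillment.dbo.customers" (serverName.schemaName.tableName).
```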
@@ -321,7 +321,7 @@ The schema change messages use as the key the name of the database to which the
 
 === Events
 
-All data change events produced by the SQL Server connector have a key and a value, although the structure of the key and value depend on the table from which the change events originated (see link:#topic-names[Topic names]).
+All data change events produced by the SQL Server connector have a key and a value, although the structure of the key and value depend on the table from which the change events originated (see link:#topic-names-sqlserver[Topic names]).
 
 [WARNING]
 ====
@@ -1277,11 +1277,11 @@ Typically, you configure the {prodname} SQL Server connector in a `.yaml` file u
 
 [source,yaml,options="nowrap"]
 ----
-apiVersion:
-kind: SqlServerConnector
+apiVersion: kafka.strimzi.io/v1beta1
+kind: KafkaConnector
 metadata:
   name: inventory-connector // <1>
-  labels:
+  labels: strimzi.io/cluster: my-connect-cluster
 spec:
   class: io.debezium.connector.sqlserver.SqlServerConnector // <2>
   config:
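The `config:` stanza that closes this hunk carries the connector properties. A minimal sketch of what typically goes there for the SQL Server connector, using standard Debezium property names; the host, credentials, database, and topic values are placeholders, not values from this commit:

```yaml
config:
  database.hostname: sqlserver.example.com   # placeholder host
  database.port: 1433
  database.user: debezium                    # placeholder credentials
  database.password: dbz-secret
  database.dbname: testDB                    # placeholder database
  database.server.name: fulfillment          # logical name; prefixes all topic names
  table.whitelist: dbo.customers             # tables to capture
  database.history.kafka.bootstrap.servers: kafka:9092
  database.history.kafka.topic: schema-changes.inventory
```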
@@ -1315,7 +1315,7 @@ See the link:#sqlserver-connector-properties[complete list of connector properti
 This configuration can be sent via POST to a running Kafka Connect service, which will then record the configuration and start up the one connector task that will connect to the SQL Server database, read the transaction log, and record events to Kafka topics.
 
 
-[[monitoring]]
+[[monitoring-sqlserver]]
 === Monitoring
 
 The {prodname} SQL Server connector has three metric types in addition to the built-in support for JMX metrics that Zookeeper, Kafka, and Kafka Connect have.
@@ -1327,7 +1327,7 @@ The {prodname} SQL Server connector has three metric types in addition to the bu
 Please refer to the xref:operations/monitoring.adoc[monitoring documentation] for details of how to expose these metrics via JMX.
 
 [[monitoring-snapshots]]
-[[snapshot-metrics]]
+[[snapshot-metrics-sqlserver]]
 ==== Snapshot Metrics
 
 The *MBean* is `debezium.sql_server:type=connector-metrics,context=snapshot,server=_<database.server.name>_`.
@@ -1335,7 +1335,7 @@ The *MBean* is `debezium.sql_server:type=connector-metrics,context=snapshot,serv
 include::{partialsdir}/modules/cdc-all-connectors/r_connector-monitoring-snapshot-metrics.adoc[leveloffset=+1]
 
 [[monitoring-streaming]]
-[[streaming-metrics]]
+[[streaming-metrics-sqlserver]]
 ==== Streaming Metrics
 
 The *MBean* is `debezium.sql_server:type=connector-metrics,context=streaming,server=_<database.server.name>_`.
@@ -1560,7 +1560,7 @@ The connector will read the table contents in multiple batches of this size. Def
 
 |[[connector-property-snapshot-lock-timeout-ms]]<<connector-property-snapshot-lock-timeout-ms, `snapshot.lock.timeout.ms`>>
 |`10000`
-|An integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If table locks cannot be acquired in this time interval, the snapshot will fail (also see link:#snapshots[snapshots]). +
+|An integer value that specifies the maximum amount of time (in milliseconds) to wait to obtain table locks when performing a snapshot. If table locks cannot be acquired in this time interval, the snapshot will fail (also see link:#snapshots-sqlserver[snapshots]). +
 When set to `0` the connector will fail immediately when it cannot obtain the lock. Value `-1` indicates infinite waiting.
 
 |[[connector-property-snapshot-select-statement-overrides]]<<connector-property-snapshot-select-statement-overrides, `snapshot.select.statement.overrides`>>
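For instance, to fail fast when table locks are contended during a snapshot, the property described above could be set in the connector's `config` section. A sketch; only `snapshot.lock.timeout.ms` and its documented values (`0`, `-1`, default `10000`) come from the table above:

```yaml
config:
  snapshot.lock.timeout.ms: 0   # fail immediately if table locks cannot be acquired
  # ...or -1 to wait indefinitely, or 10000 (the default) for a 10-second timeout
```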