DBZ-3991 Update based on initial review; replace strings w/ attributes

This commit is contained in:
Bob Roldan 2021-12-13 23:45:56 -05:00 committed by Jakub Cechacek
parent 5f8fdf132c
commit 3fb6cd5902
9 changed files with 183 additions and 106 deletions

View File

@ -8,6 +8,7 @@
:mbean-name: {context}
:connector-file: {context}
:connector-class: Db2Connector
:connector-name: Db2
ifdef::community[]
:toc:
@ -1623,13 +1624,20 @@ endif::community[]
ifdef::product[]
You can use either of the following methods to deploy a {prodname} connector:
* xref:debezium-{context}-using-streams-to-deploy-a-connector[Use {StreamsName} to automatically create a container image that includes the connector plug-in]
* xref:openshift-streams-db2-connector-deployment[Use {StreamsName} to automatically create an image that includes the connector plug-in].
+
This is the preferred method.
* xref:deploying-debezium-{context}-connectors[Build a custom Kafka Connect container image from a Dockerfile].
* xref:deploying-debezium-db2-connectors[Build a custom Kafka Connect container image from a Dockerfile].
include::{partialsdir}/modules/all-connectors/con-connector-streams-deployment.adoc[leveloffset=+1]
include::{partialsdir}/modules/all-connectors/proc-connector-streams-deployment.adoc[leveloffset=+1]
// Type: concept
[id="openshift-streams-db2-connector-deployment"]
=== Db2 connector deployment using {StreamsName}
include::{partialsdir}/modules/all-connectors/con-connector-streams-deployment.adoc[]
// Type: procedure
[id="using-streams-to-deploy-debezium-db2-connectors"]
=== Using {StreamsName} to deploy a {prodname} Db2 connector
include::{partialsdir}/modules/all-connectors/proc-using-streams-to-deploy-a-debezium-connector.adoc[]
.Additional resources
@ -1831,10 +1839,6 @@ oc apply -f inventory-connector.yaml
+
The preceding command registers `inventory-connector` and the connector starts to run against the `mydatabase` database as defined in the `KafkaConnector` CR.
[id="verifying-that-the-debezium-{context}-connector-is-running"]
== Verifying that the connector is running
include::{partialsdir}/modules/all-connectors/proc-verifying-that-the-debezium-{context}-connector-is-running.adoc[]
endif::product[]
ifdef::community[]
@ -1912,6 +1916,14 @@ endif::community[]
When the connector starts, it {link-prefix}:{link-db2-connector}#db2-snapshots[performs a consistent snapshot] of the Db2 database tables that the connector is configured to capture changes for.
The connector then starts generating data change events for row-level operations and streaming change event records to Kafka topics.
ifdef::product[]
// Type: procedure
[id="verifying-that-the-debezium-db2-connector-is-running"]
=== Verifying that the {prodname} Db2 connector is running
include::{partialsdir}/modules/all-connectors/proc-verifying-the-debezium-connector-deployment.adoc[]
endif::product[]
// Type: reference
// Title: Description of {prodname} Db2 connector configuration properties
// ModuleID: descriptions-of-debezium-db2-connector-configuration-properties
@ -2253,12 +2265,12 @@ endif::product[]
[id="debezium-{context}-connector-database-history-configuration-properties"]
==== {prodname} connector database history configuration properties
include::{partialsdir}/modules/all-connectors/ref-connector-configuration-database-history-properties.adoc[leveloffset=+1]
include::{partialsdir}/modules/all-connectors/ref-connector-configuration-database-history-properties.adoc[]
[id="debezium-{context}-connector-pass-through-database-driver-configuration-properties"]
==== {prodname} connector pass-through database driver configuration properties
include::{partialsdir}/modules/all-connectors/ref-connector-pass-through-database-driver-configuration-properties.adoc[leveloffset=+1]
include::{partialsdir}/modules/all-connectors/ref-connector-pass-through-database-driver-configuration-properties.adoc[]
// Type: assembly
// ModuleID: monitoring-debezium-db2-connector-performance
@ -2280,9 +2292,9 @@ The {prodname} Db2 connector provides three types of metrics that are in additio
[[db2-snapshot-metrics]]
=== Snapshot metrics
include::{partialsdir}/modules/all-connectors/ref-connector-monitoring-snapshot-metrics.adoc[leveloffset=+1]
include::{partialsdir}/modules/all-connectors/ref-connector-monitoring-snapshot-metrics.adoc[]
include::{partialsdir}/modules/all-connectors/ref-connector-monitoring-incremental-snapshot-metrics.adoc[leveloffset=+1]
include::{partialsdir}/modules/all-connectors/ref-connector-monitoring-incremental-snapshot-metrics.adoc[]
// Type: reference
// ModuleID: monitoring-debezium-db2-connector-record-streaming
@ -2290,7 +2302,7 @@ include::{partialsdir}/modules/all-connectors/ref-connector-monitoring-increment
[[db2-streaming-metrics]]
=== Streaming metrics
include::{partialsdir}/modules/all-connectors/ref-connector-monitoring-streaming-metrics.adoc[leveloffset=+1]
include::{partialsdir}/modules/all-connectors/ref-connector-monitoring-streaming-metrics.adoc[]
// Type: reference
// ModuleID: monitoring-debezium-db2-connector-schema-history
@ -2298,7 +2310,7 @@ include::{partialsdir}/modules/all-connectors/ref-connector-monitoring-streaming
[[db2-schema-history-metrics]]
=== Schema history metrics
include::{partialsdir}/modules/all-connectors/ref-connector-monitoring-schema-history-metrics.adoc[leveloffset=+1]
include::{partialsdir}/modules/all-connectors/ref-connector-monitoring-schema-history-metrics.adoc[]
// Type: reference
// ModuleID: managing-debezium-db2-connectors

View File

@ -8,6 +8,7 @@
:mbean-name: {context}
:connector-file: {context}
:connector-class: MongoDb
:connector-name: MongoDB
ifdef::community[]
:toc:

View File

@ -8,6 +8,7 @@
:mbean-name: {context}
:connector-file: {context}
:connector-class: MySql
:connector-name: MySQL
ifdef::community[]
:toc:
:toc-placement: macro
@ -2007,16 +2008,33 @@ You can also xref:operations/openshift.adoc[run {prodname} on Kubernetes and Ope
endif::community[]
ifdef::product[]
To deploy a {prodname} MySQL connector, you add the connector files to Kafka Connect, create a custom container to run the connector, and then add the connector configuration to your container.
For details about deploying the {prodname} MySQL connector, see the following topics:
You can use either of the following methods to deploy a {prodname} MySQL connector:
* xref:using-streams-to-deploy-a-debezium-mysql-connector[Use {StreamsName} to automatically create an image that includes the connector plug-in].
+
This is the preferred method.
* xref:deploying-debezium-mysql-connectors[Build a custom Kafka Connect container image from a Dockerfile].
.Additional resources
* xref:deploying-debezium-mysql-connectors[]
* xref:descriptions-of-debezium-mysql-connector-configuration-properties[]
// Type: concept
[id="openshift-streams-mysql-connector-deployment"]
=== MySQL connector deployment using {StreamsName}
include::{partialsdir}/modules/all-connectors/con-connector-streams-deployment.adoc[]
//Type: procedure
[id="using-streams-to-deploy-debezium-mysql-connectors"]
=== Using {StreamsName} to deploy a {prodname} MySQL connector
include::{partialsdir}/modules/all-connectors/proc-using-streams-to-deploy-a-debezium-connector.adoc[]
// Type: procedure
// ModuleID: deploying-debezium-mysql-connectors
=== Deploying {prodname} MySQL connectors
//To deploy a {prodname} MySQL connector, you add the connector files to Kafka Connect, create a custom container to run the connector, and then add the connector configuration to your container.
To deploy a {prodname} MySQL connector, you must build a custom Kafka Connect container image that contains the {prodname} connector archive, and then push this container image to a container registry.
You then need to create the following custom resources (CRs):
@ -2210,9 +2228,6 @@ oc apply -f inventory-connector.yaml
+
The preceding command registers `inventory-connector` and the connector starts to run against the `inventory` database as defined in the `KafkaConnector` CR.
[id="verifying-that-the-debezium-{context}-connector-is-running"]
== Verifying that the connector is running
include::{partialsdir}/modules/all-connectors/proc-verifying-that-the-debezium-{context}-connector-is-running.adoc[]
endif::product[]
ifdef::community[]
@ -2290,6 +2305,15 @@ endif::community[]
When the connector starts, it {link-prefix}:{link-mysql-connector}#mysql-snapshots[performs a consistent snapshot] of the MySQL databases that the connector is configured for.
The connector then starts generating data change events for row-level operations and streaming change event records to Kafka topics.
ifdef::product[]
// Type: procedure
[id="verifying-that-the-debezium-mysql-connector-is-running"]
=== Verifying that the {prodname} MySQL connector is running
include::{partialsdir}/modules/all-connectors/proc-verifying-the-debezium-connector-deployment.adoc[]
endif::product[]
// Type: reference
// Title: Description of {prodname} MySQL connector configuration properties
// ModuleID: descriptions-of-debezium-mysql-connector-configuration-properties

View File

@ -7,6 +7,7 @@
:data-collection: table
:mbean-name: {context}
:connector-file: {context}
:connector-name: Oracle
ifdef::community[]
:toc:
:toc-placement: macro

View File

@ -8,6 +8,7 @@
:mbean-name: postgres
:connector-file: postgres
:connector-class: Postgres
:connector-name: PostgreSQL
ifdef::community[]
:toc:

View File

@ -9,6 +9,7 @@
:mbean-name: sql_server
:connector-file: {context}
:connector-class: SqlServer
:connector-name: SQL Server
ifdef::community[]
:toc:

View File

@ -1,31 +1,43 @@
////
//Include in the deployment section that is conditionalized for [product]
// Add under a concept heading in the connector file.
// You can use either of the following methods to deploy a {prodname} connector:
//* Use {StreamsName} to automatically create a container image that includes the connector plug-in. This is the preferred method. [Links to Overview topic, which follows]
//* Build a custom Kafka Connect container image from a Dockerfile.
Beginning with {prodname} 1.6, the preferred method for deploying a {prodname} connector is to create a `KafkaConnect` custom resource (CR) that includes the connector configuration,
and then to use {StreamsName} to build a Kafka Connect container image that includes the connector plug-in automatically.
You then apply a `KafkaConnector` CR to deploy the connector.
//.Additional resources
//[Links to connector configuration properties]
To specify the connector to incorporate into the Kafka Connect image, you add it to the `build.plugins` configuration of the `KafkaConnect` custom resource,
alongside the standard Kafka Connect configuration properties, such as the number of replicas and the name of the Kafka Connect server.
// Set the type to concept
// Type: concept
// Set the following ID
//[id="overview-of-using-streams-to-deploy-a-debezium-<context>-connector"]
// The ID should explicitly specify the connector type instead of using the {context} variable.
// Add the following heading in the connector file.
//=== Overview of using {StreamsName} to deploy a {prodname} {connector-name} connector
// Follow the title with the following INCLUDE statement
//include::{partialsdir}/modules/all-connectors/con-connector-streams-deployment.adoc[leveloffset=+1]
You also add a `spec.build.output` parameter in the CR to specify where to store the resulting Kafka Connect container image.
Container images can be stored in a Docker repository, or in an OpenShift ImageStream.
To store images in an ImageStream, you must create the ImageStream manually before you deploy Kafka Connect.
////
//For {StreamsName} to create the new image automatically, the build configuration requires `output` properties that specify a container registry to store the newly built container image, and `plugins` properties that list the connector plug-ins and their artifacts to add to the image.
//You specify the connectors to include by adding the list of connector plug-ins and their artifacts to the `.spec.build.plugins` section of the `KafkaConnect` custom resource.
Beginning with {prodname} 1.6, the preferred method for deploying a {prodname} connector is to use {StreamsName} to build a Kafka Connect container image that includes the connector plug-in.
When you build the Kafka Connect image, {kafka-streams} downloads the connector plug-in artifacts that you specify, and incorporates them into the `KafkaConnect` image.
Optionally, for each connector plug-in, you can include other components that you want to use with the connector.
For example, you can integrate service registry artifacts or the Debezium scripting component with a connector.
During the deployment process, you create and use the following custom resources (CRs):
After the Kafka Connect pod that contains your connector starts, you can start the connector by creating a `KafkaConnector` resource.
The `KafkaConnector` CR specifies the connector configuration, which includes the following connection and deployment details (illustrated in the sketch that follows the list):
* A `KafkaConnect` CR that defines your Kafka Connect instance and includes information about the connector artifacts that the build needs to include in the image.
* A `KafkaConnector` CR that provides details, including the information that the connector uses to access the source database.
After {kafka-streams} starts the Kafka Connect pod, you start the connector by applying the `KafkaConnector` CR.
In the build specification for the Kafka Connect image, you can specify the connectors that are available to deploy.
For each connector plug-in, you can also specify other components that you want to make available for deployment.
For example, you can add {registry} artifacts, or the {prodname} scripting component.
When {kafka-streams} builds the Kafka Connect image, it downloads the specified artifacts, and incorporates them into the image.
The `spec.build.output` parameter in the `KafkaConnect` CR specifies where to store the resulting Kafka Connect container image.
Container images can be stored in a Docker registry, or in an OpenShift ImageStream.
To store images in an ImageStream, you must create the ImageStream before you deploy Kafka Connect.
ImageStreams are not created automatically.
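For orientation, the following minimal sketch shows how these settings fit together in the `spec.build` stanza of a `KafkaConnect` CR. The output image name is a hypothetical placeholder, and the artifact URL follows the pattern used in the deployment procedure; see that procedure for the full, authoritative example.

[source,yaml,options="nowrap",subs="+attributes"]
----
spec:
  build:
    output:
      type: imagestream                     # or 'docker' to push to a container registry
      image: debezium-kafka-connect:latest  # hypothetical ImageStream name and tag
    plugins:
      - name: debezium-connector-{connector-file}
        artifacts:
          - type: zip  # Debezium connector archives are provided in .zip format
            url: {red-hat-maven-repository}/debezium/debezium-connector-{connector-file}/{debezium-version}-redhat-_<build_number>_/debezium-connector-{connector-file}-{debezium-version}.zip
----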
* The name of the cluster where you want to deploy the connector.
* The database server name.
* The host address and port for connecting to the database.
* The database account and password that the connector uses to connect to the database.
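A minimal sketch of such a `KafkaConnector` CR follows. The name, host, and credential values are hypothetical placeholders that mirror the MySQL example in the verification procedure; adjust them for your connector and environment.

[source,yaml,options="nowrap"]
----
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector                             # hypothetical connector name
  labels:
    strimzi.io/cluster: debezium-kafka-connect-cluster  # Kafka Connect cluster to deploy to
spec:
  class: io.debezium.connector.mysql.MySqlConnector     # connector class; varies by connector
  tasksMax: 1
  config:
    database.hostname: mysql.debezium-mysql.svc.cluster.local  # host address of the database
    database.port: 3306                                        # port for the database connection
    database.user: debezium                                    # database account
    database.password: dbz                                     # hypothetical password
    database.server.name: inventory_connector_mysql            # logical database server name
----

Applying this CR with `oc apply` starts the connector on the named Kafka Connect cluster.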
NOTE: If you use a `KafkaConnect` resource to create a cluster, afterwards you cannot use the Kafka Connect REST API to create or update connectors.
You can still use the REST API to retrieve information.

View File

@ -1,28 +1,32 @@
////
Add this content to the deployment section that is conditionalized for [product]
Add the content in this file to the part of the Deployment section in each connector file that is conditionalized for [product]
{prodname} provides the following methods for deploying a {prodname} connector:
You can use either of the following methods to deploy a {prodname} connector:
Confirm whether it's necessary to obtain JDBC drivers for Db2 and Oracle.
The current downstream doc doesn't explicitly mention obtaining the Db2 driver. Is it part of the RH package?
* xref:debezium-{context}-using-streams-to-deploy-a-connector[Use {StreamsName} to automatically create a container image that includes the connector plug-in]
+
This is the preferred method.
* xref:deploying-debezium-{context}-connectors[Build a custom Kafka Connect container image from a Dockerfile].
include::{partialsdir}/modules/all-connectors/con-connector-streams-deployment.adoc[leveloffset=+1]
include::{partialsdir}/modules/all-connectors/proc-connector-streams-deployment.adoc[leveloffset=+1]
For Db2 and Oracle, the customer must obtain JDBC drivers.
Where should that content be inserted?
In the current downstream Db2 doc, there's no mention of obtaining the Db2 driver, although the information is part of the upstream doc.
// Insert the following anchor ID and title immediately after the bulleted list and add the connector name in place of <database>
[id="using-streams-to-deploy-a-debezium-{context}-connector"]
=== Using {StreamsName} to deploy a {prodname} <database> connector
=== Using {StreamsName} to deploy a {prodname} {connector-name} connector
// Follow the title with the following INCLUDE statement
include::{partialsdir}/modules/all-connectors/proc-{context}-debezium-using-streams-to-deploy-a-connector.adoc
////
When deploying {prodname} connectors on OpenShift, it's no longer necessary to first build your own Kafka Connect image.
Rather, the preferred method for deploying connectors on OpenShift is to use a build configuration in {kafka-streams} to automatically build a Kafka Connect container image that includes the {prodname} connector plug-ins that you want to use.
Instead, the preferred method for deploying connectors on OpenShift is to use a build configuration in {kafka-streams} to automatically build a Kafka Connect container image that includes the {prodname} connector plug-ins that you want to use.
During the build process, the {kafka-streams} Operator transforms input parameters in a `KafkaConnect` custom resource, including {prodname} connector definitions, into a Kafka Connect container image.
The build downloads the necessary artifacts from the Red Hat Maven repository or another configured HTTP server.
@ -30,30 +34,27 @@ The newly created container is pushed to the container repository that is specif
After {StreamsName} builds the Kafka Connect image, you create `KafkaConnector` custom resources to start the connectors that are included in the build.
.Prerequisites
* You have administrator (?) access to an OpenShift cluster on which the cluster Operator is installed.
* The {StreamsName} Operator is deployed and {StreamsName} is running.
* A Kafka cluster is deployed as documented in link:{LinkDeployStreamsOpenShift}#kafka-cluster-str[{NameDeployStreamsOpenShift}].
* You have access to an OpenShift cluster on which the cluster Operator is installed.
* The {StreamsName} Operator is running.
* An Apache Kafka cluster is deployed as documented in link:{LinkDeployStreamsOpenShift}#kafka-cluster-str[{NameDeployStreamsOpenShift}].
* You have a {prodnamefull} license.
* The OpenShift `oc` CLI client is installed.
* You have access to the OpenShift Container Platform web console.
* A source database is running on a server that is available to the OpenShift cluster.
* You have one of the following:
To store the build image for the connector as a container in a container registry such as `quay.io` or `docker.io`::
* The OpenShift `oc` CLI client is installed, or you have access to the OpenShift Container Platform web console.
* Depending on how you intend to store the Kafka Connect build image, you need registry permissions or you must create an ImageStream resource:
To store the build image in an image registry, such as quay.io or Docker Hub::
+
** An account and permissions to create and manage containers in the container registry.
To store the build image for the connector as a native OpenShift ImageStream::
** An account and permissions to create and manage images in the registry.
To store the build image as a native OpenShift ImageStream::
+
** An link:https://docs.openshift.com/container-platform/latest/openshift_images/images-understand.html#images-imagestream-use_images-understand[ImageStream] resource is deployed to the cluster.
You must explicitly create an ImageStream for the cluster.
ImageStreams are not created by default.
ImageStreams are not available by default.
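For reference, a minimal ImageStream manifest might look like the following sketch; the resource name and namespace are hypothetical placeholders, and the name must match the image name that you set in `spec.build.output`. Create the resource with `oc create -f` before you apply the `KafkaConnect` CR.
+
[source,yaml,options="nowrap"]
----
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: debezium-kafka-connect  # hypothetical; must match the image name in spec.build.output
  namespace: debezium           # hypothetical namespace where Kafka Connect runs
----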
.Procedure
. Log in to the OpenShift cluster.
. Create a new {prodname} `KafkaConnect` custom resource (CR) for the connector.
For example, create a `KafkaConnect` CR that specifies the `metadata.annotations` and `spec.build` properties as shown in xref:debezium-connector-deployment-kafka-connect-custom-resource[Example 1, window="_blank" rel="noopener noreferrer"].
Save the file with the name `dbz-connect.yaml`.
Save the file with a name such as `dbz-connect.yaml`.
+
[id="debezium-connector-deployment-kafka-connect-custom-resource"]
.A `dbz-inventory-connector.yaml` file that defines a `KafkaConnect` custom resource that includes a {prodname} connector
@ -76,9 +77,9 @@ spec:
      - name: debezium-connector-{connector-file}
        artifacts:
          - type: zip // <6>
            url: {red-hat-maven-repository}/debezium/debezium-connector-{connector-file}/{debezium-version}-redhat-_<build_number>_/{debezium-version}-.zip // <7>
            url: {red-hat-maven-repository}/debezium/debezium-connector-{connector-file}/{debezium-version}-redhat-_<build_number>_/debezium-connector-{connector-file}-{debezium-version}.zip // <7>
          - type: zip
            url: {red-hat-maven-repository}/apicurio/apicurio-registry-distro-connect-converter/{registry-version}/apicurio-registry-distro-connect-converter-{registry-version}-converter.zip
            url: {red-hat-maven-repository}/apicurio/apicurio-registry-distro-connect-converter/{registry-version}-redhat-_<build-number>_/apicurio-registry-distro-connect-converter-{registry-version}-redhat-_<build-number>_.zip
          - type: zip
            url: {red-hat-maven-repository}/debezium/debezium-scripting/{debezium-version}/debezium-scripting-{debezium-version}.zip
@ -93,28 +94,28 @@ spec:
| Sets the `strimzi.io/use-connector-resources` annotation to `"true"` to enable the Cluster Operator to use `KafkaConnector` resources to configure connectors in this Kafka Connect cluster.
|2
|The `spec.build` configuration specifies where to output the container image and which plug-ins to include. The build configuration specifies the output location for storing the Kafka Connect image after it is built, and lists the plug-ins to include.
|The `spec.build` configuration specifies where to store the build image and lists the plug-ins to include in the image, along with the location of the plug-in artifacts.
|3
|The `build.output` specifies the registry in which the newly built image is stored.
|4
|Specifies the name and image name for the image output.
Valid values for `output.type` are `docker` to push into a container registry like Docker Hub or Quay, or `ImageStream` to push the image to an internal OpenShift registry.
An ImageStream resource must be deployed to the cluster if you want to store the build image as a native OpenShift ImageStream rather than storing the image in a docker container.
Valid values for `output.type` are `docker` to push into a container registry like Docker Hub or Quay, or `imagestream` to push the image to an internal OpenShift ImageStream.
To use an ImageStream, an ImageStream resource must be deployed to the cluster.
For more information about specifying the `build.output` in the KafkaConnect configuration, see the link:{LinkStreamsOpenShift}#type-Build-reference[{StreamsName} Build schema reference documentation].
|5
|The `plugins` configuration lists all of the connectors that you want to include in the Kafka Connect image.
For each entry in the list you specify a plug-in `name`, and provide type and location information for the `artifacts` that are required to build the connector.
Optionally, for each connector plug-in, you can include other components that you want to use with the connector.
For example, you can add service registry artifacts, or the Debezium scripting component if you want to use these with a connector.
For each entry in the list, specify a plug-in `name`, and information about the artifacts that are required to build the connector.
Optionally, for each connector plug-in, you can include other components that you want to be available for use with the connector.
For example, you can add Service Registry artifacts, or the {prodname} scripting component.
|6
|The value of `artifacts.type` specifies the file type of the artifact specified in the `artifacts.url`.
Valid types are `zip`, `tgz`, or `jar`.
{prodname} connector archives are provided in `zip` file format.
JDBC driver files are in JAR format.
{prodname} connector archives are provided in `.zip` file format.
JDBC driver files are in `.jar` format.
The `type` value must match the type of the file that is referenced in the `url` field.
|7
@ -132,12 +133,11 @@ oc create -f dbz-connect.yaml
----
+
Based on the configuration specified in the custom resource, the {StreamsName} Operator prepares a Kafka Connect image to deploy. +
After the build completes, it pushes the image to the specified container registry or ImageStream.
A Kafka Connect pod is started.
The pod includes the connectors that you listed in the configuration.
After the build completes, the Operator pushes the image to the specified registry or ImageStream, and starts the Kafka Connect cluster.
The connector artifacts that you listed in the configuration are available in the cluster.
. After the Kafka Connect pod starts, create a `KafkaConnector` resource to start the connector. +
For example, create the following `KafkaConnector` CR and save it as `{context}-inventory-connector.yaml`
. Create a `KafkaConnector` resource to define an instance of each connector that you want to deploy. +
For example, create the following `KafkaConnector` CR, and save it as `{context}-inventory-connector.yaml`.
+
[id="debezium-connector-deployment-kafkaconnector-custom-resource"]
.A `{context}-inventory-connector.yaml` file that defines the `KafkaConnector` custom resource for a {prodname} connector
@ -210,6 +210,22 @@ The namespace is also used in the names of related Kafka Connect schemas, and th
|The list of tables from which the connector captures change events.
|===
. Create the connector resource by running the following command:
+
[source,shell,options="nowrap"]
----
oc create -n _<namespace>_ -f _<kafkaConnector>_.yaml
----
+
For example,
+
[source,shell,options="nowrap"]
----
oc create -n debezium-docs -f {context}-inventory-connector.yaml
----
+
The connector is registered to the Kafka Connect cluster and starts to run against the database that is specified by `spec.config.database.dbname` in the `KafkaConnector` CR.
After the connector pod is ready, {prodname} is running.
For information about verifying the connector, see xref:[Verifying the connector deployment].

View File

@ -1,3 +1,10 @@
////
Insert this content after the legacy deployment instructions, in place of the verification steps.
// Add the following ID and title
// [id="verifying-that-the-debezium-<connector-name>-connector-is-running"]
// == Verifying that the {prodname} {connector-name} connector is running
//include::{partialsdir}/modules/all-connectors/proc-verifying-the-debezium-connector-deployment.adoc
////
If the connector starts correctly without errors, it creates a topic for each table that the connector is configured to capture.
Downstream applications can subscribe to these topics to retrieve information about events that occur in the source database.
@ -17,7 +24,7 @@ To verify that the connector is running, perform the following operations from t
* From the OpenShift Container Platform web console:
.. Navigate to *Home -> Search*.
.. On the *Search* page, click *Resources* to open the *Select Resource* box, and then type `*KafkaConnector*`.
.. From the *KafkaConnectors* list, click the name of the connector that you want to check, for example *inventory-connector-mysql*.
.. From the *KafkaConnectors* list, click the name of the connector that you want to check, for example *inventory-connector-{context}*.
.. In the *Conditions* section, verify that the values in the *Type* and *Status* columns are set to *Ready* and *True*.
+
* From a terminal window:
@ -30,17 +37,18 @@ oc describe KafkaConnector _<connector-name>_ -n _<project>_
+
For example,
+
[source,shell,options="nowrap"]
[source,shell,options="nowrap",subs="+attributes,quotes"]
----
oc describe KafkaConnector inventory-connector-mysql -n debezium
oc describe KafkaConnector inventory-connector-{context} -n debezium
----
+
The command returns status information that is similar to the following output:
+
.`KafkaConnector` resource status
======================================
[source,shell,options="nowrap",subs="+attributes,quotes"]
----
Name: inventory-connector-mysql
Name: inventory-connector-{context}
Namespace: debezium
Labels: strimzi.io/cluster=debezium-kafka-connect-cluster
Annotations: <none>
@ -60,17 +68,17 @@ Metadata:
Operation: Update
Time: 2021-12-08T17:41:34Z
Resource Version: 996714
Self Link: /apis/kafka.strimzi.io/v1beta2/namespaces/debezium/kafkaconnectors/inventory-connector-mysql
Self Link: /apis/kafka.strimzi.io/v1beta2/namespaces/debezium/kafkaconnectors/inventory-connector-{context}
UID: 53390480-9e04-4415-8999-43bc9d072d54
Spec:
Class: io.debezium.connector.mysql.MySqlConnector
Class: io.debezium.connector.{context}.{connector-class}Connector
Config:
database.history.kafka.bootstrap.servers: debezium-kafka-cluster-kafka-bootstrap.debezium.svc.cluster.local:9092
database.history.kafka.topic: schema-changes.inventory
database.hostname: mysql.debezium-mysql.svc.cluster.local
database.hostname: {context}.debezium-{context}.svc.cluster.local
database.password: xxx
database.port: 3306
database.server.name: inventory_connector_mysql
database.server.name: inventory_connector_{context}
database.user: debezium
Tasks Max: 1
Status:
@ -82,7 +90,7 @@ Status:
Connector:
State: RUNNING
worker_id: 10.131.1.124:8083
Name: inventory-connector-mysql
Name: inventory-connector-{context}
Tasks:
Id: 0
State: RUNNING
@ -91,13 +99,13 @@ Status:
Observed Generation: 1
Tasks Max: 1
Topics:
inventory_connector_mysql
inventory_connector_mysql.inventory.addresses
inventory_connector_mysql.inventory.customers
inventory_connector_mysql.inventory.geom
inventory_connector_mysql.inventory.orders
inventory_connector_mysql.inventory.products
inventory_connector_mysql.inventory.products_on_hand
inventory_connector_{context}
inventory_connector_{context}.inventory.addresses
inventory_connector_{context}.inventory.customers
inventory_connector_{context}.inventory.geom
inventory_connector_{context}.inventory.orders
inventory_connector_{context}.inventory.products
inventory_connector_{context}.inventory.products_on_hand
Events: <none>
----
======================================
@ -106,7 +114,7 @@ Events: <none>
* From the OpenShift Container Platform web console.
.. Navigate to *Home -> Search*.
.. On the *Search* page, click *Resources* to open the *Select Resource* box, and then type `*KafkaTopic*`.
.. From the *KafkaTopics* list, click the name of the topic that you want to check, for example, *inventory-connector-mysql.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d*.
.. From the *KafkaTopics* list, click the name of the topic that you want to check, for example, *inventory-connector-{context}.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d*.
.. In the *Conditions* section, verify that the values in the *Type* and *Status* columns are set to *Ready* and *True*.
* From a terminal window:
.. Enter the following command:
@ -120,20 +128,20 @@ The command returns status information that is similar to the following output:
+
.`KafkaTopic` resource status
======================================
[source,options="nowrap"]
[source,options="nowrap",subs="+attributes"]
----
NAME CLUSTER PARTITIONS REPLICATION FACTOR READY
connect-cluster-configs debezium-kafka-cluster 1 1 True
connect-cluster-offsets debezium-kafka-cluster 25 1 True
connect-cluster-status debezium-kafka-cluster 5 1 True
consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a debezium-kafka-cluster 50 1 True
inventory-connector-mysql---a96f69b23d6118ff415f772679da623fbbb99421 debezium-kafka-cluster 1 1 True
inventory-connector-mysql.inventory.addresses---1b6beaf7b2eb57d177d92be90ca2b210c9a56480 debezium-kafka-cluster 1 1 True
inventory-connector-mysql.inventory.customers---9931e04ec92ecc0924f4406af3fdace7545c483b debezium-kafka-cluster 1 1 True
inventory-connector-mysql.inventory.geom---9f7e136091f071bf49ca59bf99e86c713ee58dd5 debezium-kafka-cluster 1 1 True
inventory-connector-mysql.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d debezium-kafka-cluster 1 1 True
inventory-connector-mysql.inventory.products---df0746db116844cee2297fab611c21b56f82dcef debezium-kafka-cluster 1 1 True
inventory-connector-mysql.inventory.products-on-hand---8649e0f17ffcc9212e266e31a7aeea4585e5c6b5 debezium-kafka-cluster 1 1 True
inventory-connector-{context}---a96f69b23d6118ff415f772679da623fbbb99421 debezium-kafka-cluster 1 1 True
inventory-connector-{context}.inventory.addresses---1b6beaf7b2eb57d177d92be90ca2b210c9a56480 debezium-kafka-cluster 1 1 True
inventory-connector-{context}.inventory.customers---9931e04ec92ecc0924f4406af3fdace7545c483b debezium-kafka-cluster 1 1 True
inventory-connector-{context}.inventory.geom---9f7e136091f071bf49ca59bf99e86c713ee58dd5 debezium-kafka-cluster 1 1 True
inventory-connector-{context}.inventory.orders---ac5e98ac6a5d91e04d8ec0dc9078a1ece439081d debezium-kafka-cluster 1 1 True
inventory-connector-{context}.inventory.products---df0746db116844cee2297fab611c21b56f82dcef debezium-kafka-cluster 1 1 True
inventory-connector-{context}.inventory.products-on-hand---8649e0f17ffcc9212e266e31a7aeea4585e5c6b5 debezium-kafka-cluster 1 1 True
schema-changes.inventory debezium-kafka-cluster 1 1 True
strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55 debezium-kafka-cluster 1 1 True
strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b debezium-kafka-cluster 1 1 True
@ -155,23 +163,24 @@ oc exec -n _<project>_ -it _<kafka-cluster>_ -- /opt/kafka/bin/kafka-console-consumer.sh
+
For example,
+
[source,shell,options="nowrap",subs="+attributes,quotes"]
----
oc exec -n debezium -it debezium-kafka-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh \
> --bootstrap-server localhost:9092 \
> --from-beginning \
> --property print.key=true \
> --topic=inventory_connector_mysql.inventory.products_on_hand
> --topic=inventory_connector_{context}.inventory.products_on_hand
----
+
The format for specifying the topic name is the same as the `oc describe` command returns in Step 1, for example, `inventory_connector_mysql.inventory.addresses`.
The format for specifying the topic name is the same as the `oc describe` command returns in Step 1, for example, `inventory_connector_{context}.inventory.addresses`.
+
For each event in the topic, the command returns information that is similar to the following output:
+
.Content of a {prodname} change event
======================================
[source,subs="+quotes"]
[source,subs="+attributes,quotes"]
----
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"}],"optional":false,"name":"inventory_connector_mysql.inventory.products_on_hand.Key"},"payload":{"product_id":101}} {"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory_connector_mysql.inventory.products_on_hand.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory_connector_mysql.inventory.products_on_hand.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":true,"field":"sequence"},{"type":"string","optional":true,"field":"table"},{"type":"int64","optional":false,"field":"server_id"},{"type":"string","optional":true,"field":"gtid"},{"type":"string","optional":false,"field":"file"},{"type":"int64","optional":false,"field":"pos"},{"type":"int32","optional":false,"field":"row"},{"type":"int64","optional":true,"field":"thread"},{"type":"string","optional":true,"field":"query"}],"optional":false,"name":"io.debezium.connector.mysql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"},{"type":"int64","optional":false,"field":"total_order"},{"type":"int64","optional":false,"field":"data_collection_order"}],"optional":true,"field":"transaction"}],"optional":false,"name":**"inventory_connector_mysql.inventory.products_on_hand.Envelope"**},*"payload"*:{*"before"*:**null**,*"after"*:{*"product_id":101,"quantity":3*},"source":{"version":"1.6.4.Final-redhat-00001","connector":"mysql","name":"inventory_connector_mysql","ts_ms":1638985247805,"snapshot":"true","db":"inventory","sequence":null,"table":"products_on_hand","server_id":0,"gtid":null,"file":"mysql-bin.000003","pos":156,"row":0,"thread":null,"query":null},*"op"*:**"r"**,"ts_ms":1638985247805,"transaction":null}}
{"schema":{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"}],"optional":false,"name":"inventory_connector_{context}.inventory.products_on_hand.Key"},"payload":{"product_id":101}} {"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory_connector_{context}.inventory.products_on_hand.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"field":"product_id"},{"type":"int32","optional":false,"field":"quantity"}],"optional":true,"name":"inventory_connector_{context}.inventory.products_on_hand.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":true,"field":"sequence"},{"type":"string","optional":true,"field":"table"},{"type":"int64","optional":false,"field":"server_id"},{"type":"string","optional":true,"field":"gtid"},{"type":"string","optional":false,"field":"file"},{"type":"int64","optional":false,"field":"pos"},{"type":"int32","optional":false,"field":"row"},{"type":"int64","optional":true,"field":"thread"},{"type":"string","optional":true,"field":"query"}],"optional":false,"name":"io.debezium.connector.{context}.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"},{"type":"int64","optional":false,"field":"total_order"},{"type":"int64","optional":false,"field":"data_collection_order"}],"optional":true,"field":"transaction"}],"optional":false,"name":**"inventory_connector_{context}.inventory.products_on_hand.Envelope"**},*"payload"*:{*"before"*:**null**,*"after"*:{*"product_id":101,"quantity":3*},"source":{"version":"{debezium-version}-redhat-00001","connector":"{context}","name":"inventory_connector_{context}","ts_ms":1638985247805,"snapshot":"true","db":"inventory","sequence":null,"table":"products_on_hand","server_id":0,"gtid":null,"file":"{context}-bin.000003","pos":156,"row":0,"thread":null,"query":null},*"op"*:**"r"**,"ts_ms":1638985247805,"transaction":null}}
----
======================================
+