DBZ-2096 Wording fixes
parent c0e83b808e · commit c512d0bab4
@@ -1,7 +1,6 @@
[id="debezium-architecture"]
= {prodname} Architecture

Most commonly, {prodname} is deployed via Apache {link-kafka-docs}/#connect[Kafka Connect].
Kafka Connect is a framework and runtime for implementing and operating

@@ -27,21 +26,21 @@ Depending on the chosen sink connector, it may be needed to apply {prodname}'s {
which will only propagate the "after" structure from {prodname}'s event envelope to the sink connector.

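As an illustration, the event flattening transformation could be wired into a sink connector's configuration along these lines (a sketch, not part of this commit; the alias `unwrap` is an arbitrary choice, and the available options should be checked against the {prodname} documentation):

[source,properties]
----
# Sink connector configuration (excerpt): apply Debezium's event flattening
# SMT under the alias "unwrap", so only the "after" state reaches the sink.
transforms=unwrap
transforms.unwrap.type=io.debezium.transforms.ExtractNewRecordState
----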
ifdef::community[]
== {prodname} Server

Another way to deploy {prodname} is using the xref:operations/debezium-server.adoc[{prodname} server].
The {prodname} server is a configurable, ready-to-use application that streams change events from a source database to a variety of messaging infrastructures.

The following image shows the architecture of a CDC pipeline using the {prodname} server:

image::debezium-server-architecture.png[{prodname} Architecture]

The {prodname} server is configured to use one of the {prodname} source connectors to capture changes from the source database.
Change events can be serialized to different formats like JSON or Apache Avro and are then sent to one of a variety of messaging infrastructures like Amazon Kinesis, Google Cloud Pub/Sub, or Apache Pulsar.
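For instance, a minimal `application.properties` for the {prodname} server might look as follows (an illustrative sketch with placeholder values; the exact property names for each source and sink are listed in the {prodname} server documentation):

[source,properties]
----
# Sink: where change events are sent (here Apache Pulsar, as an example)
debezium.sink.type=pulsar
debezium.sink.pulsar.client.serviceUrl=pulsar://localhost:6650
# Serialization format for event values
debezium.format.value=json
# Source: which Debezium connector captures the changes
debezium.source.connector.class=io.debezium.connector.postgresql.PostgresConnector
debezium.source.offset.storage.file.filename=data/offsets.dat
debezium.source.database.hostname=localhost
debezium.source.database.port=5432
debezium.source.database.user=postgres
debezium.source.database.password=postgres
debezium.source.database.dbname=inventory
----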

== Embedded Engine

Yet another way to use the {prodname} connectors is the xref:operations/embedded.adoc[embedded engine].
In this case, {prodname} is not run via Kafka Connect, but as a library embedded into your custom Java applications.
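A minimal sketch of such an embedded usage, assuming `debezium-api` and a connector (here the Postgres connector, as an example) on the classpath; the connection properties are placeholders:

[source,java]
----
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;

public class ChangeEventConsumerSketch {
    public static void main(String[] args) {
        // Engine configuration; connector choice and connection details are placeholders
        Properties props = new Properties();
        props.setProperty("name", "embedded-engine");
        props.setProperty("connector.class", "io.debezium.connector.postgresql.PostgresConnector");
        props.setProperty("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore");
        props.setProperty("offset.storage.file.filename", "/tmp/offsets.dat");
        props.setProperty("database.hostname", "localhost");
        // ... further database.* properties as required by the chosen connector

        // The engine pushes each change event to the callback; Kafka is not involved
        DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine.create(Json.class)
                .using(props)
                .notifying(record -> System.out.println(record.value()))
                .build();

        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(engine);
        // On shutdown: engine.close() and executor.shutdown()
    }
}
----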
This can be useful for either consuming change events within your application itself,
without the need to deploy complete Kafka and Kafka Connect clusters,