DBZ-8090 Edits & refactoring; addtl shared file entries; fix build errors

parent 6c93c64784
commit d4431e17e4
@@ -65,7 +65,7 @@ For example, `signal.data.collection = inventory.debezium_signals`. +
 The format for the fully-qualified name of the signaling collection depends on the connector. +
 The following example shows the naming formats to use for each connector:
 
-.Fully qualified {data-collection} names
+.Fully qualified table names
 [id="format-for-specifying-fully-qualified-names-for-data-collections"]
 Db2:: `_<schemaName>_._<tableName>_`
 MongoDB:: `_<databaseName>_._<collectionName>_`
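The per-connector naming formats documented in this hunk amount to a simple lookup; the following is a minimal illustrative sketch (the helper name and dict are hypothetical, not part of Debezium):

```python
# Hypothetical helper illustrating the per-connector fully-qualified
# name formats documented above; not part of Debezium itself.
NAME_FORMATS = {
    "db2": "{schemaName}.{tableName}",
    "mongodb": "{databaseName}.{collectionName}",
}

def fully_qualified_name(connector, **parts):
    """Build a fully-qualified signaling collection name for a connector."""
    return NAME_FORMATS[connector].format(**parts)
```

For example, `fully_qualified_name("mongodb", databaseName="inventory", collectionName="debezium_signals")` yields the `inventory.debezium_signals` form used in the `signal.data.collection` example above.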
@@ -210,10 +210,10 @@ The xref:format-for-specifying-fully-qualified-names-for-data-collections[naming
 Each additional condition is an object that specifies the criteria for filtering the data that an ad hoc snapshot captures.
 You can set the following properties for each additional condition:
 
-`data-collection`:: The fully-qualified name of the +{data-collection}+ that the filter applies to.
-You can apply different filters to each +{data-collection}+.
+`data-collection`:: The fully-qualified name of the data collection that the filter applies to.
+You can apply different filters to each data collection.
 `filter`:: Specifies column values that must be present in a database record for the snapshot to include it, for example, `"color='blue'"`. +
-The snapshot process evaluates records in the +{data-collection}+ against the `filter` value and captures only records that contain matching values. +
+The snapshot process evaluates records in the data collection against the `filter` value and captures only records that contain matching values. +
 +
 The specific values that you assign to the `filter` property depend on the type of ad hoc snapshot:
 
@@ -263,8 +263,7 @@ Currently {prodname} supports the `incremental` and `blocking` types.
 
 |`data-collections`
 |_N/A_
-| An array of comma-separated regular expressions that match the fully-qualified names of the tables to include in the snapshot. +
-Specify the names by using the same format as is required for the xref:{context}-property-signal-data-collection[signal.data.collection] configuration option.
+| An array of comma-separated regular expressions that match the xref:format-for-specifying-fully-qualified-names-for-data-collections[fully-qualified names of the tables] to include in the snapshot. +
 
 |`additional-conditions`
 |_N/A_
@@ -272,11 +271,11 @@ Specify the names by using the same format as is required for the xref:{context}
 Each additional condition is an object that specifies the criteria for filtering the data that an ad hoc snapshot captures.
 You can set the following properties for each additional condition:
 
-`data-collection`:: The fully-qualified name of the +{data-collection}+ that the filter applies to.
-You can apply different filters to each +{data-collection}+.
+`data-collection`:: The fully-qualified name of the data collection that the filter applies to.
+You can apply different filters to each data collection.
 
 `filter`:: Specifies column values that must be present in a database record for the snapshot to include it, for example, `"color='blue'"`. +
-The snapshot process evaluates records in the +{data-collection}+ against the `filter` value and captures only records that contain matching values. +
+The snapshot process evaluates records in the data collection against the `filter` value and captures only records that contain matching values. +
 +
 The specific values that you assign to the `filter` property depend on the type of ad hoc snapshot:
 
@@ -11,6 +11,8 @@
 :connector-class: InformixConnector
 :connector-name: Informix
 :include-list-example: public.inventory
+:collection-container: database.schema
+
 ifdef::community[]
 
 :toc:
@@ -9,6 +9,7 @@
 :linkattrs:
 :icons: font
 :source-highlighter: highlight.js
+:connector-name: Spanner
 
 toc::[]
 
@@ -9,6 +9,7 @@
 :linkattrs:
 :icons: font
 :source-highlighter: highlight.js
+:connector-name: Vitess
 
 toc::[]
 
@@ -259,13 +260,13 @@ The following example shows a data change event with ordered transaction metadat
 [[vitess-efficient-transaction-metadata]]
 === Efficient Transaction Metadata
 
 If you enable the connector to provide transaction metadata, it generates significantly more data.
 Not only does the connector send additional messages to the transaction topics, but messages that it sends to the data change topics are larger, because they include a transaction metadata block.
 The added volume is due to the following factors:
 
 * The VGTID is stored twice, once as `source.vgtid`, and then again as `transaction.id`.
 In keyspaces that include many shards, these VGTIDs can be quite large
 * In a sharded environment, the VGTID typically contains the VGTID for every shard.
 In keyspaces with many shards, the amount of data in the VGTID field can be quite large.
 * The connector sends transaction topic messages for every transaction boundary event.
 Typically, keyspaces that include many shards tend to generate a high number of transaction boundary events.
@@ -1774,7 +1775,7 @@ See xref:vitess-data-types[how Vitess connectors map data types] for the list of
 |[[vitess-property-override-data-change-topic-prefix]]<<vitess-property-override-data-change-topic-prefix, `override.data.change.topic.prefix`>>
 |No default
 |Specifies the prefix that the connector uses to create the names of data change topics when xref:vitess-property-topic-naming-strategy[topic.naming.strategy] is set to `io.debezium.connector.vitess.TableTopicNamingStrategy`.
 The connector uses the value of this property instead of the specified xref:vitess-property-topic-prefix[`topic.prefix`].
 
 
 |[[vitess-property-topic-delimiter]]<<vitess-property-topic-delimiter, `topic.delimiter`>>
@@ -40,8 +40,8 @@ Currently, you can request `incremental` or `blocking` snapshots.
 
 |`data-collections`
 |_N/A_
-| An array that contains regular expressions matching the fully-qualified names of the {data-collection} to be snapshotted. +
-The format of the names is the same as for the `signal.data.collection` configuration option.
+| An array that contains regular expressions matching the fully-qualified names of the {data-collection}s to include in the snapshot. +
+For the {connector-name} connector, use the following format to specify the fully qualified name of a {data-collection}: `{collection-container}.{data-collection}`.
 
 ifeval::['{context}' != 'mongodb']
 |`additional-conditions`
@@ -67,7 +67,7 @@ endif::[]
 
 .Triggering an ad hoc incremental snapshot
 
-You initiate an ad hoc incremental snapshot by adding an entry with the `execute-snapshot` signal type to the signaling {data-collection}.
+You initiate an ad hoc incremental snapshot by adding an entry with the `execute-snapshot` signal type to the signaling {data-collection}, or by xref:{context}-triggering-an-incremental-snapshot-kafka[sending a signal message to a Kafka signaling topic].
 After the connector processes the message, it begins the snapshot operation.
 The snapshot process reads the first and last primary key values and uses those values as the start and end point for each {data-collection}.
 Based on the number of entries in the {data-collection}, and the configured chunk size, {prodname} divides the {data-collection} into chunks, and proceeds to snapshot each chunk, in succession, one at a time.
@@ -76,7 +76,7 @@ For more information, see xref:debezium-{context}-incremental-snapshots[Incremen
 
 .Triggering an ad hoc blocking snapshot
 
-You initiate an ad hoc blocking snapshot by adding an entry with the `execute-snapshot` signal type to the signaling {data-collection}.
+You initiate an ad hoc blocking snapshot by adding an entry with the `execute-snapshot` signal type to the signaling {data-collection} or signaling topic.
 After the connector processes the message, it begins the snapshot operation.
 The connector temporarily stops streaming, and then initiates a snapshot of the specified {data-collection}, following the same process that it uses during an initial snapshot.
 After the snapshot completes, the connector resumes streaming.
@@ -38,7 +38,7 @@ Only after collisions between the snapshot events and the streamed events are re
 
 .Snapshot window
 To assist in resolving collisions between late-arriving `READ` events and streamed events that modify the same {data-collection} row, {prodname} employs a so-called _snapshot window_.
-The snapshot windows demarcates the interval during which an incremental snapshot captures data for a specified {data-collection} chunk.
+The snapshot window demarcates the interval during which an incremental snapshot captures data for a specified {data-collection} chunk.
 Before the snapshot window for a chunk opens, {prodname} follows its usual behavior and emits events from the transaction log directly downstream to the target Kafka topic.
 But from the moment that the snapshot for a particular chunk opens, until it closes, {prodname} performs a de-duplication step to resolve collisions between events that have the same primary key..
 
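The snapshot-window behavior this hunk describes — buffer a chunk's `READ` events, let streamed events supersede buffered `READ`s that carry the same primary key, then flush the survivors when the window closes — can be modeled roughly as follows. This is an illustrative sketch of the de-duplication idea, not Debezium's actual implementation:

```python
# Illustrative model of the incremental-snapshot window described above:
# while a chunk's window is open, streamed change events win over buffered
# READ events with the same primary key. Not Debezium's actual code.
def resolve_chunk(read_events, streamed_events):
    """Return events to emit for one chunk: streamed events first,
    then the buffered READ events that were not superseded."""
    buffer = {e["pk"]: e for e in read_events}    # chunk READs keyed by primary key
    emitted = []
    for event in streamed_events:                 # events arriving while the window is open
        buffer.pop(event["pk"], None)             # a streamed event supersedes the READ
        emitted.append(event)
    emitted.extend(buffer.values())               # flush surviving READs at window close
    return emitted
```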
@@ -55,7 +55,7 @@ After the snapshot window for the chunk closes, the buffer contains only `READ` 
 
 The connector repeats the process for each snapshot chunk.
 
-Currently, you can use one of the following methods to initiate an incremental snapshot:
+Currently, you can use either of the following methods to initiate an incremental snapshot:
 
 * xref:{context}-triggering-an-incremental-snapshot[Send an ad hoc snapshot signal to the signaling {data-collection} on the source database].
 * xref:{context}-triggering-an-incremental-snapshot-kafka[Send a message to a configured Kafka signaling topic].
@@ -28,22 +28,17 @@ For example, suppose you have a `products` {data-collection} that contains the f
 
 If you want an incremental snapshot of the `products` {data-collection} to include only the data items where `color=blue`, you can use the following SQL statement to trigger the snapshot:
 
-[source,sql,indent=0,subs="+attributes"]
-----
-INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "schema1.products", "filter": "color=blue"}]}');
-----
+include::{partialsdir}/modules/snippets/{context}-frag-signaling-fq-table-formats.adoc[leveloffset=+1,tags=snapshot-additional-conditions-example]
 
 The `additional-conditions` parameter also enables you to pass conditions that are based on more than one column.
 For example, using the `products` {data-collection} from the previous example, you can submit a query that triggers an incremental snapshot that includes the data of only those items for which `color=blue` and `quantity>10`:
 
-[source,sql,indent=0,subs="+attributes"]
-----
-INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "schema1.products", "filter": "color=blue AND quantity>10"}]}');
-----
+include::{partialsdir}/modules/snippets/{context}-frag-signaling-fq-table-formats.adoc[leveloffset=+1,tags=snapshot-multiple-additional-conditions-example]
 
 The following example, shows the JSON for an incremental snapshot event that is captured by a connector.
 
-.Example: Incremental snapshot event message
+.Incremental snapshot event message
+====
 [source,json,index=0]
 ----
 {
@@ -63,6 +58,8 @@ The following example, shows the JSON for an incremental snapshot event that is 
 "transaction":null
 }
 ----
+====
+.Description of fields in an incremental snapshot event message
 [cols="1,1,4",options="header"]
 |===
 |Item |Field name |Description
|
@ -20,14 +20,9 @@ See the next section for more details.
|
|||||||
|`data-collections`
|
|`data-collections`
|
||||||
|_N/A_
|
|_N/A_
|
||||||
| An optional array of comma-separated regular expressions that match the fully-qualified names of the tables an array of {data-collection} names or regular expressions to match {data-collection} names to remove from the snapshot. +
|
| An optional array of comma-separated regular expressions that match the fully-qualified names of the tables an array of {data-collection} names or regular expressions to match {data-collection} names to remove from the snapshot. +
|
||||||
Specify {data-collection} names by using the format `{container}.{data-collection}`.
|
Specify {data-collection} names by using the format `{collection-container}.{data-collection}`.
|
||||||
|
|
||||||
|===
|
|===
|
||||||
|
|
||||||
The following example shows a typical `stop-snapshot` Kafka message:
|
The following example shows a typical `stop-snapshot` Kafka message:
|
||||||
|
include::{partialsdir}/modules/snippets/{context}-frag-signaling-fq-table-formats.adoc[leveloffset=+1,tags=stopping-incremental-snapshot-kafka-example]
|
||||||
----
|
|
||||||
Key = `test_connector`
|
|
||||||
|
|
||||||
Value = `{"type":"stop-snapshot","data": {"data-collections": ["schema1.table1", "schema1.table2"], "type": "INCREMENTAL"}}`
|
|
||||||
----
|
|
||||||
|
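The `stop-snapshot` Kafka message shown in this hunk pairs the connector name as the record key with a JSON value. A hedged sketch of assembling that record follows; the helper name is hypothetical, and actually producing the record to the signaling topic (for example with a Kafka client library) is omitted:

```python
import json

# Sketch of building the stop-snapshot signal shown above for a Kafka
# signaling topic. Connector and table names are placeholders.
def stop_snapshot_signal(connector_name, data_collections):
    key = connector_name                          # record key identifies the connector
    value = json.dumps({
        "type": "stop-snapshot",
        "data": {"data-collections": data_collections, "type": "INCREMENTAL"},
    })
    return key, value
```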
@@ -1,8 +1,14 @@
-You can also stop an incremental snapshot by sending a signal to the {data-collection} on the source database.
-You submit a stop snapshot signal by inserting a document into the to the signaling {data-collection}.
+In some situations, it might be necessary to stop an incremental snapshot.
+For example, you might realize that snapshot was not configured correctly, or maybe you want to ensure that resources are available for other database operations.
+You can stop a snapshot that is already running by sending a signal to the {data-collection} on the source database.
+
+You submit a stop snapshot signal to the signaling {data-collection} by inserting a stop snapshot signal document into it.
+The stop snapshot signal that you submit specifies the `type` of the snapshot operation as `incremental`, and, optionally specifies the {data-collection}s that you want to omit from the currently running snapshot.
 After {prodname} detects the change in the signaling {data-collection}, it reads the signal, and stops the incremental snapshot operation if it's in progress.
 
-The query that you submit specifies the snapshot operation of `incremental`, and, optionally, the {data-collection}s of the current running snapshot to be removed.
+.Additional resources
+You can also stop an incremental snapshot by sending a JSON message to the xref:{context}-stopping-an-incremental-snapshot-kafka[Kafka signaling topic].
 
 .Prerequisites
 
@@ -1,9 +1,13 @@
-You can also stop an incremental snapshot by sending a signal to the {data-collection} on the source database.
-You submit a stop snapshot signal to the {data-collection} by sending a SQL `INSERT` query.
+In some situations, it might be necessary to stop an incremental snapshot.
+For example, you might realize that snapshot was not configured correctly, or maybe you want to ensure that resources are available for other database operations.
+You can stop a snapshot that is already running by sending a signal to the signaling {data-collection} on the source database.
+
+You submit a stop snapshot signal to the signaling {data-collection} by sending it in a SQL `INSERT` query.
+The stop-snapshot signal specifies the `type` of the snapshot operation as `incremental`, and optionally specifies the {data-collection}s that you want to omit from the currently running snapshot.
 After {prodname} detects the change in the signaling {data-collection}, it reads the signal, and stops the incremental snapshot operation if it's in progress.
 
-The query that you submit specifies the snapshot operation of `incremental`, and, optionally, the {data-collection}s of the current running snapshot to be removed.
+.Additional resources
+You can also stop an incremental snapshot by sending a JSON message to the xref:{context}-stopping-an-incremental-snapshot-kafka[Kafka signaling topic].
 
 .Prerequisites
 
@@ -22,14 +26,7 @@ INSERT INTO _<signalTable>_ (id, type, data) values (_'<id>'_, 'stop-snapshot', 
 +
 For example,
 +
-[source,sql,indent=0,subs="+attributes"]
-----
-INSERT INTO myschema.debezium_signal (id, type, data) // <1>
-values ('ad-hoc-1', // <2>
-'stop-snapshot', // <3>
-'{"data-collections": ["schema1.table1", "schema2.table2"], // <4>
-"type":"incremental"}'); // <5>
-----
+include::{partialsdir}/modules/snippets/{context}-frag-signaling-fq-table-formats.adoc[leveloffset=+1,tags=stopping-incremental-snapshot-example]
 +
 The values of the `id`, `type`, and `data` parameters in the signal command correspond to the {link-prefix}:{link-signalling}#debezium-signaling-description-of-required-structure-of-a-signaling-data-collection[fields of the signaling {data-collection}].
 +
@@ -41,7 +38,7 @@ The following table describes the parameters in the example:
 |Item|Value |Description
 
 |1
-|`{container}.debezium_signal`
+|`{collection-container}.debezium_signal`
 |Specifies the fully-qualified name of the signaling {data-collection} on the source database.
 
 |2
@@ -57,7 +54,7 @@ Use this string to identify logging messages to entries in the signaling {data-c
 |4
 |`data-collections`
 |An optional component of the `data` field of a signal that specifies an array of {data-collection} names or regular expressions to match {data-collection} names to remove from the snapshot. +
-The array lists regular expressions which match {data-collection}s by their fully-qualified names in the format `{container}.table`
+The array lists regular expressions which match {data-collection}s by their fully-qualified names in the format `{collection-container}.table`
 
 If you omit this component from the `data` field, the signal stops the entire incremental snapshot that is in progress.
 
|
@ -34,13 +34,14 @@ You can apply different filters to each {data-collection}.
|
|||||||
The values that you assign to the `filter` parameter are the same types of values that you might specify in the `WHERE` clause of `SELECT` statements when you set the `snapshot.select.statement.overrides` property for a blocking snapshot.
|
The values that you assign to the `filter` parameter are the same types of values that you might specify in the `WHERE` clause of `SELECT` statements when you set the `snapshot.select.statement.overrides` property for a blocking snapshot.
|
||||||
|===
|
|===
|
||||||
|
|
||||||
An example of the execute-snapshot Kafka message:
|
.An `execute-snapshot` Kafka message
|
||||||
|
====
|
||||||
----
|
----
|
||||||
Key = `test_connector`
|
Key = `test_connector`
|
||||||
|
|
||||||
Value = `{"type":"execute-snapshot","data": {"data-collections": ["schema1.table1", "schema1.table2"], "type": "INCREMENTAL"}}`
|
Value = `{"type":"execute-snapshot","data": {"data-collections": ["{collection-container}.table1", "{collection-container}.table2"], "type": "INCREMENTAL"}}`
|
||||||
----
|
----
|
||||||
|
====
|
||||||
|
|
||||||
.Ad hoc incremental snapshots with additional-conditions
|
.Ad hoc incremental snapshots with additional-conditions
|
||||||
|
|
||||||
@@ -50,22 +51,15 @@ Typically, when {prodname} runs a snapshot, it runs a SQL query such as:
 
 `SELECT * FROM _<tableName>_ ....`
 
 When the snapshot request includes an `additional-conditions` property, the `data-collection` and `filter` parameters of the property are appended to the SQL query, for example:
 
 `SELECT * FROM _<data-collection>_ WHERE _<filter>_ ....`
 
 For example, given a `products` {data-collection} with the columns `id` (primary key), `color`, and `brand`, if you want a snapshot to include only content for which `color='blue'`, when you request the snapshot, you could add the `additional-conditions` property to filter the content:
-----
-Key = `test_connector`
-
-Value = `{"type":"execute-snapshot","data": {"data-collections": ["schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "schema1.products" ,"filter":"color='blue'"}]}}`
-----
+include::{partialsdir}/modules/snippets/{context}-frag-signaling-fq-table-formats.adoc[leveloffset=+1,tags=triggering-incremental-snapshot-kafka-addtl-cond-example]
 
-You can use the `additional-conditions` property to pass conditions based on multiple columns.
+You can also use the `additional-conditions` property to pass conditions based on multiple columns.
 For example, using the same `products` {data-collection} as in the previous example, if you want a snapshot to include only the content from the `products` {data-collection} for which `color='blue'`, and `brand='MyBrand'`, you could send the following request:
 
-----
-Key = `test_connector`
-
-Value = `{"type":"execute-snapshot","data": {"data-collections": ["schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "schema1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
-----
+include::{partialsdir}/modules/snippets/{context}-frag-signaling-fq-table-formats.adoc[leveloffset=+1,tags=triggering-incremental-snapshot-kafka-multi-addtl-cond-example]
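The `execute-snapshot` values with `additional-conditions` discussed in this hunk are plain JSON; a sketch of building one follows, using the `schema1.products` names from the text (the function name is hypothetical, not a Debezium API):

```python
import json

# Sketch of an execute-snapshot Kafka signal value with additional-conditions,
# mirroring the examples in the documentation above. Illustrative only.
def execute_snapshot_value(collections, conditions):
    return json.dumps({
        "type": "execute-snapshot",
        "data": {
            "data-collections": collections,
            "type": "INCREMENTAL",
            "additional-conditions": conditions,
        },
    })

value = execute_snapshot_value(
    ["schema1.products"],
    [{"data-collection": "schema1.products",
      "filter": "color='blue' AND brand='MyBrand'"}],
)
```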
@@ -1,4 +1,4 @@
-Currently, the only way to initiate an incremental snapshot is to send an {link-prefix}:{link-signalling}#debezium-signaling-ad-hoc-snapshots[ad hoc snapshot signal] to the signaling {data-collection} on the source database.
+To initiate an incremental snapshot, you can send an {link-prefix}:{link-signalling}#debezium-signaling-ad-hoc-snapshots[ad hoc snapshot signal] to the signaling {data-collection} on the source database.
 
 You submit a signal to the signaling {data-collection} by using the MongoDB `insert()` method.
 
@@ -13,12 +13,7 @@ To specify the {data-collection}s to include in the snapshot, provide a `data-co
 The `data-collections` array for an incremental snapshot signal has no default value.
 If the `data-collections` array is empty, {prodname} interprets the empty array to mean that no action is required, and it does not perform a snapshot.
 
-[NOTE]
-====
-If the name of a {data-collection} that you want to include in a snapshot contains a dot (`.`) in the name of the database, schema, or table, to add the {data-collection} to the `data-collections` array, you must escape each part of the name in double quotes. +
-+
-For example, to include a table that exists in the `*public*` schema and that has the name `*My.Table*`, use the following format: `*"public"."My.Table"*`.
-====
+include::{partialsdir}/modules/snippets/{context}-frag-signaling-fq-table-formats.adoc[leveloffset=+1,tags=fq-table-name-format-note]
 
 .Prerequisites
 
@ -37,15 +32,7 @@ INSERT INTO _<signalTable>_ (id, type, data) VALUES (_'<id>'_, _'<snapshotType>'
+
For example,
+
include::{partialsdir}/modules/snippets/{context}-frag-signaling-fq-table-formats.adoc[leveloffset=+1,tags=snapshot-signal-example]
+
The values of the `id`, `type`, and `data` parameters in the command correspond to the {link-prefix}:{link-signalling}#debezium-signaling-description-of-required-structure-of-a-signaling-data-collection[fields of the signaling {data-collection}].
+
@ -57,7 +44,7 @@ The following table describes the parameters in the example:
|Item |Value |Description

|1
|`{collection-container}.debezium_signal`
|Specifies the fully-qualified name of the signaling {data-collection} on the source database.

|2
@ -74,14 +61,14 @@ Rather, during the snapshot, {prodname} generates its own `id` string as a water
|4
|`data-collections`
|A required component of the `data` field of a signal that specifies an array of {data-collection} names or regular expressions to match {data-collection} names to include in the snapshot. +
The array lists regular expressions that use the format `{collection-container}.table` to match the fully-qualified names of the {data-collection}s.
This format is the same as the one that you use to specify the name of the connector's {link-prefix}:{link-signalling}#format-for-specifying-fully-qualified-names-for-data-collections[signaling {data-collection}].

|5
|`incremental`
|An optional `type` component of the `data` field of a signal that specifies the type of snapshot operation to run. +
Valid values are `incremental` and `blocking`. +
If you do not specify a value, the connector defaults to performing an incremental snapshot.

|6
|`additional-conditions`
@ -1,21 +1,43 @@
= Shared snippets for Db2 and PG incremental snapshots

== Triggering an incremental snapshot (SQL)

=== `data-collections` note

tag::fq-table-name-format-note[]
[NOTE]
====
If the name of a table that you want to include in a snapshot contains a dot (`.`), a space, or some other non-alphanumeric character, you must escape the table name in double quotes. +
For example, to include a table that exists in the `*public*` schema and that has the name `*My.Table*`, use the following format: `*"public.\"My.Table\""*`.
====
end::fq-table-name-format-note[]

=== Using a source signaling channel to trigger an incremental snapshot

// Example in Step 1 of procedure

tag::snapshot-signal-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO myschema.debezium_signal (id, type, data) // <1>
values ('ad-hoc-1', // <2>
'execute-snapshot', // <3>
'{"data-collections": ["schema1.table1", "schema1.table2"], // <4>
"type":"incremental", // <5>
"additional-conditions":[{"data-collection": "schema1.table1" ,"filter":"color=\'blue\'"}]}'); // <6>
----
end::snapshot-signal-example[]

=== Running an ad hoc snapshot with additional conditions

tag::snapshot-additional-conditions-example[]
[source,sql,indent=0,subs="+attributes"]
----
@ -24,6 +46,12 @@ INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execut
end::snapshot-additional-conditions-example[]

=== Running an ad hoc snapshot with multiple additional conditions

tag::snapshot-multiple-additional-conditions-example[]
[source,sql,indent=0,subs="+attributes"]
----
@ -32,16 +60,29 @@ INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execut
end::snapshot-multiple-additional-conditions-example[]

=== Kafka snapshot with additional conditions example

tag::triggering-incremental-snapshot-kafka-addtl-cond-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"execute-snapshot","data": {"data-collections": ["schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "schema1.products" ,"filter":"color='blue'"}]}}`
----
end::triggering-incremental-snapshot-kafka-addtl-cond-example[]

=== Kafka snapshot with multiple additional conditions

tag::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]
[source,json]
----
Key = `test_connector`
@ -49,13 +90,32 @@ Value = `{"type":"execute-snapshot","data": {"data-collections": ["schema1.produ
----
end::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]

=== Stopping an incremental snapshot

tag::stopping-incremental-snapshot-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO myschema.debezium_signal (id, type, data) // <1>
values ('ad-hoc-1', // <2>
'stop-snapshot', // <3>
'{"data-collections": ["schema1.table1", "schema1.table2"], // <4>
"type":"incremental"}'); // <5>
----
end::stopping-incremental-snapshot-example[]

=== Stopping an incremental snapshot using the Kafka signaling channel

tag::stopping-incremental-snapshot-kafka-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"stop-snapshot","data": {"data-collections": ["schema1.table1", "schema1.table2"], "type": "INCREMENTAL"}}`
----
end::stopping-incremental-snapshot-kafka-example[]
@ -1 +1,123 @@
// include::{partialsdir}/modules/snippets/oracle-frag-signaling-fq-table-formats.adoc[]

= Shared snippets for Oracle and SQL Server incremental snapshots

== Triggering an incremental snapshot (SQL)

=== `data-collections` note

tag::fq-table-name-format-note[]
[NOTE]
====
If the name of a table that you want to include in a snapshot contains a dot (`.`), a space, or some other non-alphanumeric character, you must escape the table name in double quotes. +
For example, to include a table that exists in the `*public*` schema in the `*db1*` database, and that has the name `*My.Table*`, use the following format: `*"db1.public.\"My.Table\""*`.
====
end::fq-table-name-format-note[]

=== Using a source signaling channel to trigger an incremental snapshot

// Example in Step 1 of procedure

tag::snapshot-signal-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO db1.myschema.debezium_signal (id, type, data) // <1>
values ('ad-hoc-1', // <2>
'execute-snapshot', // <3>
'{"data-collections": ["db1.schema1.table1", "db1.schema1.table2"], // <4>
"type":"incremental", // <5>
"additional-conditions":[{"data-collection": "db1.schema1.table1" ,"filter":"color=\'blue\'"}]}'); // <6>
----
end::snapshot-signal-example[]

=== Running an ad hoc snapshot with additional conditions

tag::snapshot-additional-conditions-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO db1.myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["db1.schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "db1.schema1.products", "filter": "color=blue"}]}');
----
end::snapshot-additional-conditions-example[]

=== Running an ad hoc snapshot with multiple additional conditions

tag::snapshot-multiple-additional-conditions-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO db1.myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["db1.schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "db1.schema1.products", "filter": "color=blue AND quantity>10"}]}');
----
end::snapshot-multiple-additional-conditions-example[]

=== Kafka snapshot with additional conditions example

tag::triggering-incremental-snapshot-kafka-addtl-cond-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.schema1.products" ,"filter":"color='blue'"}]}}`
----
end::triggering-incremental-snapshot-kafka-addtl-cond-example[]

=== Kafka snapshot with multiple additional conditions

tag::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.schema1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
----
end::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]

=== Stopping an incremental snapshot

tag::stopping-incremental-snapshot-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO db1.myschema.debezium_signal (id, type, data) // <1>
values ('ad-hoc-1', // <2>
'stop-snapshot', // <3>
'{"data-collections": ["db1.schema1.table1", "db1.schema1.table2"], // <4>
"type":"incremental"}'); // <5>
----
end::stopping-incremental-snapshot-example[]

=== Stopping an incremental snapshot using the Kafka signaling channel

tag::stopping-incremental-snapshot-kafka-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"stop-snapshot","data": {"data-collections": ["db1.schema1.table1", "db1.schema1.table2"], "type": "INCREMENTAL"}}`
----
end::stopping-incremental-snapshot-kafka-example[]
@ -1 +1,121 @@
// include::{partialsdir}/modules/snippets/mysql-frag-signaling-fq-table-formats.adoc[]

= Shared snippets for MariaDB and MySQL incremental snapshots

== Triggering an incremental snapshot (SQL)

=== `data-collections` note

tag::fq-table-name-format-note[]
[NOTE]
====
If the name of a table that you want to include in a snapshot contains a dot (`.`), a space, or some other non-alphanumeric character, you must escape the table name in double quotes. +
For example, to include a table that exists in the `*db1*` database, and that has the name `*My.Table*`, use the following format: `*"db1.\"My.Table\""*`.
====
end::fq-table-name-format-note[]

=== Using a source signaling channel to trigger an incremental snapshot

// Example in Step 1 of procedure

tag::snapshot-signal-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO db1.debezium_signal (id, type, data) // <1>
values ('ad-hoc-1', // <2>
'execute-snapshot', // <3>
'{"data-collections": ["db1.table1", "db1.table2"], // <4>
"type":"incremental", // <5>
"additional-conditions":[{"data-collection": "db1.table1" ,"filter":"color=\'blue\'"}]}'); // <6>
----
end::snapshot-signal-example[]

=== Running an ad hoc snapshot with additional conditions

tag::snapshot-additional-conditions-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO db1.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["db1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "db1.products", "filter": "color=blue"}]}');
----
end::snapshot-additional-conditions-example[]

=== Running an ad hoc snapshot with multiple additional conditions

tag::snapshot-multiple-additional-conditions-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO db1.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["db1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "db1.products", "filter": "color=blue AND quantity>10"}]}');
----
end::snapshot-multiple-additional-conditions-example[]

=== Kafka snapshot with additional conditions example

tag::triggering-incremental-snapshot-kafka-addtl-cond-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.products" ,"filter":"color='blue'"}]}}`
----
end::triggering-incremental-snapshot-kafka-addtl-cond-example[]

=== Kafka snapshot with multiple additional conditions

tag::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
----
end::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]

=== Stopping an incremental snapshot

tag::stopping-incremental-snapshot-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO db1.debezium_signal (id, type, data) // <1>
values ('ad-hoc-1', // <2>
'stop-snapshot', // <3>
'{"data-collections": ["db1.table1", "db1.table2"], // <4>
"type":"incremental"}'); // <5>
----
end::stopping-incremental-snapshot-example[]

=== Stopping an incremental snapshot using the Kafka signaling channel

tag::stopping-incremental-snapshot-kafka-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"stop-snapshot","data": {"data-collections": ["db1.table1", "db1.table2"], "type": "INCREMENTAL"}}`
----
end::stopping-incremental-snapshot-kafka-example[]
@ -0,0 +1,46 @@
= Shared snippets for MongoDB incremental snapshots

=== Kafka snapshot with additional conditions example

tag::triggering-incremental-snapshot-kafka-addtl-cond-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.products" ,"filter":"color='blue'"}]}}`
----
end::triggering-incremental-snapshot-kafka-addtl-cond-example[]

=== Kafka snapshot with multiple additional conditions

tag::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
----
end::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]

=== Stopping an incremental snapshot using the Kafka signaling channel

tag::stopping-incremental-snapshot-kafka-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"stop-snapshot","data": {"data-collections": ["db1.table1", "db1.table2"], "type": "INCREMENTAL"}}`
----
end::stopping-incremental-snapshot-kafka-example[]
@ -1,6 +1,24 @@
= Shared snippets for MariaDB and MySQL incremental snapshots

== Triggering an incremental snapshot (SQL)

=== `data-collections` note

tag::fq-table-name-format-note[]
[NOTE]
====
If the name of a table that you want to include in a snapshot contains a dot (`.`), a space, or some other non-alphanumeric character, you must escape the table name in double quotes. +
For example, to include a table that exists in the `*db1*` database, and that has the name `*My.Table*`, use the following format: `*"db1.\"My.Table\""*`.
====
end::fq-table-name-format-note[]

=== Using a source signaling channel to trigger an incremental snapshot

// Example in Step 1 of procedure

tag::snapshot-signal-example[]
[source,sql,indent=0,subs="+attributes"]
@ -14,6 +32,12 @@ values ('ad-hoc-1', // <2>
----
end::snapshot-signal-example[]

=== Running an ad hoc snapshot with additional conditions

tag::snapshot-additional-conditions-example[]
[source,sql,indent=0,subs="+attributes"]
----
@ -22,18 +46,75 @@ INSERT INTO db1.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-sna
end::snapshot-additional-conditions-example[]

=== Running an ad hoc snapshot with multiple additional conditions

tag::snapshot-multiple-additional-conditions-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO db1.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["db1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "db1.products", "filter": "color=blue AND quantity>10"}]}');
----
end::snapshot-multiple-additional-conditions-example[]

=== Kafka snapshot with additional conditions example

tag::triggering-incremental-snapshot-kafka-addtl-cond-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.products" ,"filter":"color='blue'"}]}}`
----
end::triggering-incremental-snapshot-kafka-addtl-cond-example[]

=== Kafka snapshot with multiple additional conditions

tag::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
----
end::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]

=== Stopping an incremental snapshot

tag::stopping-incremental-snapshot-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO db1.debezium_signal (id, type, data) // <1>
values ('ad-hoc-1', // <2>
'stop-snapshot', // <3>
'{"data-collections": ["db1.table1", "db1.table2"], // <4>
"type":"incremental"}'); // <5>
----
end::stopping-incremental-snapshot-example[]

=== Stopping an incremental snapshot using the Kafka signaling channel

tag::stopping-incremental-snapshot-kafka-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"stop-snapshot","data": {"data-collections": ["db1.table1", "db1.table2"], "type": "INCREMENTAL"}}`
----
end::stopping-incremental-snapshot-kafka-example[]
@ -1,8 +1,24 @@
= Shared snippets for Oracle and SQL Server incremental snapshots

== Triggering an incremental snapshot (SQL)

=== `data-collections` note

tag::fq-table-name-format-note[]
[NOTE]
====
If the name of a table that you want to include in a snapshot contains a dot (`.`), a space, or some other non-alphanumeric character, you must escape the table name in double quotes. +
For example, to include a table that exists in the `*public*` schema in the `*db1*` database, and that has the name `*My.Table*`, use the following format: `*"db1.public.\"My.Table\""*`.
====
end::fq-table-name-format-note[]

=== Using a source signaling channel to trigger an incremental snapshot

// Example in Step 1 of procedure

tag::snapshot-signal-example[]
[source,sql,indent=0,subs="+attributes"]
@ -10,7 +26,7 @@ tag::snapshot-signal-example[]
INSERT INTO db1.myschema.debezium_signal (id, type, data) // <1>
values ('ad-hoc-1', // <2>
'execute-snapshot', // <3>
'{"data-collections": ["db1.schema1.table1", "db1.schema1.table2"], // <4>
"type":"incremental", // <5>
"additional-conditions":[{"data-collection": "db1.schema1.table1" ,"filter":"color=\'blue\'"}]}'); // <6>
----
@ -18,6 +34,10 @@ end::snapshot-signal-example[]
=== Running an ad hoc snapshot with additional conditions

tag::snapshot-additional-conditions-example[]
[source,sql,indent=0,subs="+attributes"]
----
@ -26,6 +46,12 @@ INSERT INTO db1.myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'ex
end::snapshot-additional-conditions-example[]

=== Running an ad hoc snapshot with multiple additional conditions

tag::snapshot-multiple-additional-conditions-example[]
[source,sql,indent=0,subs="+attributes"]
----
@ -35,7 +61,14 @@ end::snapshot-multiple-additional-conditions-example[]
|
|||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
=== Kafka snapshot with additional conditions example
|
||||||
|
|
||||||
|
|
||||||
tag::triggering-incremental-snapshot-kafka-addtl-cond-example[]
|
tag::triggering-incremental-snapshot-kafka-addtl-cond-example[]
|
||||||
|
[source,json]
|
||||||
----
|
----
|
||||||
Key = `test_connector`
|
Key = `test_connector`
|
||||||
|
|
||||||
@ -44,10 +77,46 @@ Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.schema1.p
|
|||||||
end::triggering-incremental-snapshot-kafka-addtl-cond-example[]
|
end::triggering-incremental-snapshot-kafka-addtl-cond-example[]
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
=== Kafka snapshot with multiple additional conditions
|
||||||
|
|
||||||
tag::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]
|
tag::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]
|
||||||
|
[source,json]
|
||||||
----
|
----
|
||||||
Key = `test_connector`
|
Key = `test_connector`
|
||||||
|
|
||||||
Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.schema1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
|
Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.schema1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
|
||||||
----
|
----
|
||||||
end::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]
|
end::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
=== Stopping an incremental snapshot
|
||||||
|
|
||||||
|
tag::stopping-incremental-snapshot-example[]
|
||||||
|
[source,sql,indent=0,subs="+attributes"]
|
||||||
|
----
|
||||||
|
INSERT INTO db1.myschema.debezium_signal (id, type, data) // <1>
|
||||||
|
values ('ad-hoc-1', // <2>
|
||||||
|
'stop-snapshot', // <3>
|
||||||
|
'{"data-collections": ["db1.schema1.table1", "db1.schema1.table2"], // <4>
|
||||||
|
"type":"incremental"}'); // <5>
|
||||||
|
----
|
||||||
|
end::stopping-incremental-snapshot-example[]
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
=== Stopping an incremental snapshot using the Kafka signaling channel
|
||||||
|
|
||||||
|
tag::stopping-incremental-snapshot-kafka-example[]
|
||||||
|
[source,json]
|
||||||
|
----
|
||||||
|
Key = `test_connector`
|
||||||
|
|
||||||
|
Value = `{"type":"stop-snapshot","data": {"data-collections": ["db1.schema1.table1", "db1.schema1.table2"], "type": "INCREMENTAL"}}`
|
||||||
|
----
|
||||||
|
end::stopping-incremental-snapshot-kafka-example[]
|
||||||
|

//include::{partialsdir}/modules/snippets/db2-frag-signaling-fq-table-formats.adoc[]
= Shared snippets for Db2 and PG incremental snapshots

== Triggering an incremental snapshot (SQL)

=== `data-collections` note

tag::fq-table-name-format-note[]
[NOTE]
====
If the name of a table that you want to include in a snapshot contains a dot (`.`), a space, or some other non-alphanumeric character, you must escape the table name in double quotes. +
For example, to include a table that exists in the `*public*` schema and that has the name `*My.Table*`, use the following format: `*"public.\"My.Table\""*`.
====
end::fq-table-name-format-note[]
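
// Untagged illustration only; not pulled in by any include.
A minimal sketch of the escaping described in the note above, using hypothetical schema and table names: inside the JSON payload of an `execute-snapshot` signal, the dotted table name is wrapped in escaped double quotes.

[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO myschema.debezium_signal (id, type, data)
values ('ad-hoc-2', 'execute-snapshot', '{"data-collections": ["schema1.\"My.Table\""], "type":"incremental"}');
----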

=== Using a source signaling channel to trigger an incremental snapshot

// Example in Step 1 of procedure

tag::snapshot-signal-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO myschema.debezium_signal (id, type, data) // <1>
values ('ad-hoc-1', // <2>
'execute-snapshot', // <3>
'{"data-collections": ["schema1.table1", "schema1.table2"], // <4>
"type":"incremental", // <5>
"additional-conditions":[{"data-collection": "schema1.table1" ,"filter":"color=\'blue\'"}]}'); // <6>
----
end::snapshot-signal-example[]

=== Running an ad hoc snapshot with additional conditions

tag::snapshot-additional-conditions-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "schema1.products", "filter": "color=blue"}]}');
----
end::snapshot-additional-conditions-example[]

=== Running an ad hoc snapshot with multiple additional conditions

tag::snapshot-multiple-additional-conditions-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "schema1.products", "filter": "color=blue AND quantity>10"}]}');
----
end::snapshot-multiple-additional-conditions-example[]

=== Kafka snapshot with additional conditions example

tag::triggering-incremental-snapshot-kafka-addtl-cond-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"execute-snapshot","data": {"data-collections": ["schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "schema1.products" ,"filter":"color='blue'"}]}}`
----
end::triggering-incremental-snapshot-kafka-addtl-cond-example[]

=== Kafka snapshot with multiple additional conditions

tag::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"execute-snapshot","data": {"data-collections": ["schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "schema1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
----
end::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]

=== Stopping an incremental snapshot

tag::stopping-incremental-snapshot-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO myschema.debezium_signal (id, type, data) // <1>
values ('ad-hoc-1', // <2>
'stop-snapshot', // <3>
'{"data-collections": ["schema1.table1", "schema1.table2"], // <4>
"type":"incremental"}'); // <5>
----
end::stopping-incremental-snapshot-example[]

=== Stopping an incremental snapshot using the Kafka signaling channel

tag::stopping-incremental-snapshot-kafka-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"stop-snapshot","data": {"data-collections": ["schema1.table1", "schema1.table2"], "type": "INCREMENTAL"}}`
----
end::stopping-incremental-snapshot-kafka-example[]

// include::{partialsdir}/modules/snippets/oracle-frag-signaling-fq-table-formats.adoc[]
= Shared snippets for Oracle and SQL Server incremental snapshots

== Triggering an incremental snapshot (SQL)

=== `data-collections` note

tag::fq-table-name-format-note[]
[NOTE]
====
If the name of a table that you want to include in a snapshot contains a dot (`.`), a space, or some other non-alphanumeric character, you must escape the table name in double quotes. +
For example, to include a table that exists in the `*public*` schema in the `*db1*` database, and that has the name `*My.Table*`, use the following format: `*"db1.public.\"My.Table\""*`.
====
end::fq-table-name-format-note[]
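
// Untagged illustration only; not pulled in by any include.
A minimal sketch of the escaping described in the note above, using hypothetical database, schema, and table names: in the three-part naming format, only the dotted table name is wrapped in escaped double quotes.

[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO db1.myschema.debezium_signal (id, type, data)
values ('ad-hoc-2', 'execute-snapshot', '{"data-collections": ["db1.schema1.\"My.Table\""], "type":"incremental"}');
----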

=== Using a source signaling channel to trigger an incremental snapshot

// Example in Step 1 of procedure

tag::snapshot-signal-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO db1.myschema.debezium_signal (id, type, data) // <1>
values ('ad-hoc-1', // <2>
'execute-snapshot', // <3>
'{"data-collections": ["db1.schema1.table1", "db1.schema1.table2"], // <4>
"type":"incremental", // <5>
"additional-conditions":[{"data-collection": "db1.schema1.table1" ,"filter":"color=\'blue\'"}]}'); // <6>
----
end::snapshot-signal-example[]

=== Running an ad hoc snapshot with additional conditions

tag::snapshot-additional-conditions-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO db1.myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["db1.schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "db1.schema1.products", "filter": "color=blue"}]}');
----
end::snapshot-additional-conditions-example[]

=== Running an ad hoc snapshot with multiple additional conditions

tag::snapshot-multiple-additional-conditions-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO db1.myschema.debezium_signal (id, type, data) VALUES('ad-hoc-1', 'execute-snapshot', '{"data-collections": ["db1.schema1.products"],"type":"incremental", "additional-conditions":[{"data-collection": "db1.schema1.products", "filter": "color=blue AND quantity>10"}]}');
----
end::snapshot-multiple-additional-conditions-example[]

=== Kafka snapshot with additional conditions example

tag::triggering-incremental-snapshot-kafka-addtl-cond-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.schema1.products" ,"filter":"color='blue'"}]}}`
----
end::triggering-incremental-snapshot-kafka-addtl-cond-example[]

=== Kafka snapshot with multiple additional conditions

tag::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"execute-snapshot","data": {"data-collections": ["db1.schema1.products"], "type": "INCREMENTAL", "additional-conditions": [{"data-collection": "db1.schema1.products" ,"filter":"color='blue' AND brand='MyBrand'"}]}}`
----
end::triggering-incremental-snapshot-kafka-multi-addtl-cond-example[]

=== Stopping an incremental snapshot

tag::stopping-incremental-snapshot-example[]
[source,sql,indent=0,subs="+attributes"]
----
INSERT INTO db1.myschema.debezium_signal (id, type, data) // <1>
values ('ad-hoc-1', // <2>
'stop-snapshot', // <3>
'{"data-collections": ["db1.schema1.table1", "db1.schema1.table2"], // <4>
"type":"incremental"}'); // <5>
----
end::stopping-incremental-snapshot-example[]

=== Stopping an incremental snapshot using the Kafka signaling channel

tag::stopping-incremental-snapshot-kafka-example[]
[source,json]
----
Key = `test_connector`

Value = `{"type":"stop-snapshot","data": {"data-collections": ["db1.schema1.table1", "db1.schema1.table2"], "type": "INCREMENTAL"}}`
----
end::stopping-incremental-snapshot-kafka-example[]