The {prodname} signaling mechanism provides a way to modify the behavior of a connector, or to trigger a one-time action, such as initiating an xref:debezium-signaling-ad-hoc-incremental-snapshots[ad hoc snapshot] of a table.
When {prodname} detects that a new xref:debezium-signaling-example-of-a-logging-record[logging record] or xref:debezium-signaling-example-of-an-ad-hoc-blocking-snapshot-signal-record[ad hoc snapshot record] is added to the channel, it reads the signal, and initiates the requested operation.
You can specify which channels are enabled by setting the `signal.enabled.channels` configuration property to a list of channel names. By default, {prodname} provides the following channels: `source` and `kafka`.
The `source` channel is enabled by default, because it is required for incremental snapshot signals.
. On the source database, create a signaling data collection table for sending signals to the connector.
For information about the required structure of the signaling data collection, see xref:debezium-signaling-data-collection-structure[Structure of a signaling data collection].
. For source databases such as Db2 or SQL Server that implement a native change data capture (CDC) mechanism, enable CDC for the signaling table.
. Add the name of the signaling data collection to the {prodname} connector configuration. +
In the connector configuration, add the property `signal.data.collection`, and set its value to the fully-qualified name of the signaling data collection that you created in Step 1. +
+
For example, `signal.data.collection = inventory.debezium_signals`. +
+
The format for the fully-qualified name of the signaling collection depends on the connector. +
The following example shows the naming formats to use for each connector:
* Fields are arranged in a specific order, as shown in xref:debezium-signaling-description-of-required-structure-of-a-signaling-data-collection[Table 1].
You can use some signal types with any connector for which signaling is available, while other signal types are available for specific connectors only.
* Submit a SQL query to the source database to create a table that is consistent with the xref:debezium-signaling-description-of-required-structure-of-a-signaling-data-collection[required structure], as shown in the following example: +
`CREATE TABLE _<tableName>_ (id VARCHAR(_<varcharValue>_) PRIMARY KEY, type VARCHAR(_<varcharValue>_) NOT NULL, data VARCHAR(_<varcharValue>_) NULL);` +
[NOTE]
====
The amount of space that you allocate to the `VARCHAR` parameter of the `id` variable must be sufficient to accommodate the size of the ID strings of signals sent to the signaling table. +
If the size of an ID exceeds the available space, the connector cannot process the signal.
====
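For example, the following statement creates a signaling table that matches the required structure. The schema, table name, and column sizes are illustrative; choose `VARCHAR` sizes that fit your signal IDs and payloads.

[source,sql]
----
CREATE TABLE inventory.debezium_signals (
  id VARCHAR(64) PRIMARY KEY,    -- unique identifier for each signal instance
  type VARCHAR(32) NOT NULL,     -- signal type, for example, execute-snapshot
  data VARCHAR(2048) NULL        -- JSON payload with signal parameters
);
----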
You can enable the Kafka signaling channel by adding it to the `signal.enabled.channels` configuration property, and then adding the name of the topic that receives signals to the `signal.kafka.topic` property.
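For example, the relevant entries in the connector configuration might look like the following (the topic name and broker address are illustrative):

[source,properties]
----
signal.enabled.channels=source,kafka
signal.kafka.topic=debezium-signals
signal.kafka.bootstrap.servers=kafka:9092
----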
.Additional configuration available for the consumer
* {link-prefix}:{link-db2-connector}#debezium-db2-connector-kafka-signals-configuration-properties[Db2 connector Kafka signal configuration properties]
ifdef::community[]
* {link-prefix}:{link-mongodb-connector}#debezium-mongodb-connector-kafka-signals-configuration-properties[MongoDB connector Kafka signal configuration properties]
endif::community[]
* {link-prefix}:{link-mysql-connector}#debezium-mysql-connector-kafka-signals-configuration-properties[MySQL connector Kafka signal configuration properties]
* {link-prefix}:{link-oracle-connector}#debezium-oracle-connector-kafka-signals-configuration-properties[Oracle connector Kafka signal configuration properties]
* {link-prefix}:{link-postgresql-connector}#debezium-postgresql-connector-kafka-signals-configuration-properties[PostgreSQL connector Kafka signal configuration properties]
* {link-prefix}:{link-sqlserver-connector}#debezium-sqlserver-connector-kafka-signals-configuration-properties[SQL Server connector Kafka signal configuration properties]
To use Kafka signaling to trigger ad hoc incremental snapshots for a connector, you must first xref:debezium-signaling-enabling-source-signaling-channel[enable a `source` signaling channel] in the connector configuration.
The source channel implements a watermarking mechanism to deduplicate events that might be captured by an incremental snapshot and then captured again after streaming resumes.
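A signal message on the Kafka topic is a JSON document; the message key must match the value of the connector's `topic.prefix` configuration property. For example, the value of an ad hoc incremental snapshot signal might look like the following (the table name is illustrative):

[source,json]
----
{
  "type": "execute-snapshot",
  "data": {
    "data-collections": ["inventory.customers"],
    "type": "INCREMENTAL"
  }
}
----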
| An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot. +
* For incremental snapshots, you specify a search condition fragment, such as `"color='blue'"`, that the snapshot appends to the condition clause of a query.
* For blocking snapshots, you specify a full `SELECT` statement, such as the one that you might set in the `snapshot.select.statement.overrides` property.
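For example, the `data` payload of an incremental snapshot signal that uses `additional-conditions` might look like the following (table and column names are illustrative):

[source,json]
----
{
  "data-collections": ["inventory.products"],
  "type": "incremental",
  "additional-conditions": [
    {
      "data-collection": "inventory.products",
      "filter": "color='blue'"
    }
  ]
}
----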
You can enable the JMX signaling channel by adding `jmx` to the `signal.enabled.channels` property in the connector configuration, and then {link-prefix}:{link-debezium-monitoring}#monitoring-debezium[enabling the JMX MBean Server] to expose the signaling bean.
| An array of comma-separated regular expressions that match the fully-qualified names of the tables to include in the snapshot. +
Specify the names by using the same format as is required for the xref:{context}-property-signal-data-collection[signal.data.collection] configuration option.
| An optional array that specifies a set of additional conditions that the connector evaluates to determine the subset of records to include in a snapshot. +
* For incremental snapshots, you specify a search condition fragment, such as `"color='blue'"`, that the snapshot appends to the condition clause of a query.
* For blocking snapshots, you specify a full `SELECT` statement, such as the one that you might set in the `snapshot.select.statement.overrides` property.
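For a blocking snapshot, the `additional-conditions` entry carries a full `SELECT` statement rather than a condition fragment. For example (table and column names are illustrative):

[source,json]
----
{
  "data-collections": ["inventory.orders"],
  "type": "blocking",
  "additional-conditions": [
    {
      "data-collection": "inventory.orders",
      "filter": "SELECT * FROM inventory.orders WHERE total > 100"
    }
  ]
}
----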
You can implement channels as needed to send signals to {prodname} in a manner that works best in your environment.
Adding a signaling channel involves several steps:
. xref:debezium-signaling-enabling-custom-signaling-channel[Create a Java project for the channel] to implement the channel, and xref:debezium-signaling-core-module-dependencies[add `{prodname} Core` as a dependency].
. xref:deploying-a-debezium-custom-signaling-channel[Deploy the custom signaling channel].
. xref:configuring-connectors-to-use-a-custom-signaling-channel[Enable connectors to use the custom signaling channel by modifying the connector configuration].
Custom signaling channels are Java classes that implement the `io.debezium.pipeline.signal.channels.SignalChannelReader` service provider interface (SPI).
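The following sketch outlines the general shape of such a class. The channel name and polling behavior are illustrative; verify the exact method signatures against the `SignalChannelReader` interface in your {prodname} version.

[source,java]
----
import java.util.List;

import io.debezium.config.CommonConnectorConfig;
import io.debezium.pipeline.signal.SignalRecord;
import io.debezium.pipeline.signal.channels.SignalChannelReader;

// Illustrative custom channel that polls an external source for signals.
public class InMemorySignalChannel implements SignalChannelReader {

    @Override
    public String name() {
        // The name to list in the signal.enabled.channels property
        return "in-memory";
    }

    @Override
    public void init(CommonConnectorConfig connectorConfig) {
        // Initialize resources and read channel-specific configuration
    }

    @Override
    public List<SignalRecord> read() {
        // Called repeatedly by Debezium; return any pending signals
        return List.of();
    }

    @Override
    public void close() {
        // Release resources
    }
}
----

Because the implementation is discovered through the Java `ServiceLoader`, the JAR must also contain a `META-INF/services/io.debezium.pipeline.signal.channels.SignalChannelReader` file that lists the fully-qualified name of the implementation class.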
* To use a custom signaling channel with a {prodname} connector, export the Java project to a JAR file, and copy the file to the directory that contains the JAR file for each {prodname} connector that you want to use it with. +
For example, in a typical deployment, the {prodname} connector files are stored in subdirectories of a Kafka Connect directory (`/kafka/connect`), with each connector JAR in its own subdirectory (`/kafka/connect/debezium-connector-db2`, `/kafka/connect/debezium-connector-mysql`, and so forth).
NOTE: To use a custom signaling channel with multiple connectors, you must place a copy of the custom signaling channel JAR file in the subdirectory for each connector.
Unlike the initial snapshot that a connector runs after it first starts, an ad hoc snapshot occurs during runtime, after the connector has already begun to stream change events from a database.
If you want the incremental snapshot to proceed, but you want to exclude specific collections from the snapshot, provide a comma-separated list of the names of the collections or regular expressions to exclude.
By capturing the initial state of the specified tables in chunks rather than in a single monolithic operation, incremental snapshots provide the following advantages over the initial snapshot process:
* While the connector captures the baseline state of the specified tables, streaming of near real-time events from the transaction log continues uninterrupted.
* If the incremental snapshot process is interrupted, it can be resumed from the point at which it stopped.
* You can initiate an incremental snapshot at any time.
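For example, when the `source` channel is enabled, inserting the following row into the signaling table triggers an ad hoc incremental snapshot (the signaling table and collection names are illustrative):

[source,sql]
----
INSERT INTO inventory.debezium_signals (id, type, data)
VALUES (
  'ad-hoc-1',
  'execute-snapshot',
  '{"data-collections": ["inventory.customers"], "type": "incremental"}'
);
----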
You can request a connector to pause an in-progress incremental snapshot by creating a signal table entry with the `pause-snapshot` signal type.
After processing the signal, the connector pauses the current in-progress snapshot operation.
Because the snapshot is paused at whatever position it has reached when the connector processes the signal, you cannot specify a data collection for this signal type.
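For example, through the source channel, a pause signal is a simple row in the signaling table (the table name is illustrative):

[source,sql]
----
INSERT INTO inventory.debezium_signals (id, type)
VALUES ('pause-1', 'pause-snapshot');
----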
You can pause incremental snapshots for the following {prodname} connectors:
You can request a connector to initiate an ad hoc blocking snapshot by creating a signal with the `execute-snapshot` signal type and setting the `data.type` field to `blocking`.
After processing the signal, the connector runs the requested snapshot operation.
Unlike the initial snapshot that a connector runs after it first starts, an ad hoc blocking snapshot occurs during runtime. The connector stops streaming change events from the database while the snapshot runs, and resumes streaming after the snapshot completes.
You can initiate ad hoc blocking snapshots at any time.
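For example, through the source channel, the following row triggers a blocking snapshot of a single table (the signaling table and collection names are illustrative):

[source,sql]
----
INSERT INTO inventory.debezium_signals (id, type, data)
VALUES (
  'ad-hoc-blocking-1',
  'execute-snapshot',
  '{"data-collections": ["inventory.orders"], "type": "blocking"}'
);
----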
Blocking snapshots are available for the following {prodname} connectors:
The `io.debezium.pipeline.signal.actions.SignalAction` interface exposes a single method with one parameter, which represents the message payload sent through the signaling channel.
After you define a custom signaling action, use the following SPI interface to make the custom action available to the signaling mechanism: `io.debezium.pipeline.signal.actions.SignalActionProvider`.
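A minimal sketch of a custom action follows. The class name and logging behavior are illustrative, and the generic parameters are simplified; verify the exact signatures of `SignalAction` and `SignalActionProvider` against the interfaces in your {prodname} version.

[source,java]
----
import io.debezium.pipeline.signal.SignalPayload;
import io.debezium.pipeline.signal.actions.SignalAction;
import io.debezium.pipeline.spi.Partition;

// Illustrative action that logs the payload of a custom signal type.
public class LogPayloadAction<P extends Partition> implements SignalAction<P> {

    @Override
    public boolean arrived(SignalPayload<P> payload) throws InterruptedException {
        // The payload carries the signal id, type, and data fields
        System.out.println("Received signal: " + payload);
        return true; // indicate that the signal was handled
    }
}
----

The provider implementation, registered through a `META-INF/services/io.debezium.pipeline.signal.actions.SignalActionProvider` file, returns a map from signal type to action instance so that the signaling mechanism can dispatch signals to your action.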
* To use a custom action with a {prodname} connector, export the Java project to a JAR file, and copy the file to the directory that contains the JAR file for each {prodname} connector that you want to use it with. +
+
For example, in a typical deployment, the {prodname} connector files are stored in subdirectories of a Kafka Connect directory (`/kafka/connect`), with each connector JAR in its own subdirectory (`/kafka/connect/debezium-connector-db2`, `/kafka/connect/debezium-connector-mysql`, and so forth).
NOTE: To use a custom action with multiple connectors, you must place a copy of the custom action JAR file in the subdirectory for each connector.