link:https://cloudevents.io/[CloudEvents] is a specification for describing event data in a common way. Its aim is to provide interoperability across services, platforms, and systems. {prodname} enables you to configure a MongoDB, MySQL, PostgreSQL, or SQL Server connector to emit change event records that conform to the CloudEvents specification.
Support for CloudEvents is in an incubating state. This means that exact semantics, configuration options, and other details may change in future revisions based on feedback.
Please let us know your specific requirements or if you encounter any problems while using this feature.
Emitting change event records in CloudEvents format is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about support scope, see link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview Features Support Scope].
To configure a {prodname} connector to emit change event records that conform to the CloudEvents specification, {prodname} provides the `io.debezium.converters.CloudEventsConverter`, which is a Kafka Connect message converter.
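For example, the following connector configuration fragment enables the converter for record values. This is a sketch; the `serializer.type` and `data.serializer.type` options that it sets are described later in this section.

[source,json]
----
...
"value.converter": "io.debezium.converters.CloudEventsConverter",
"value.converter.serializer.type": "json",
"value.converter.data.serializer.type": "json",
...
----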
Currently, only structured mapping mode is supported. The CloudEvents change event envelope can be JSON or Avro, and each envelope type supports JSON or Avro as the `data` format. It is expected that a future {prodname} release will support binary mapping mode.
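The following sketch shows what a CloudEvents change event record emitted by a PostgreSQL connector might look like when the connector is configured to use JSON for both the CloudEvents envelope and the `data` format. The server name, LSN, transaction identifiers, and table contents are illustrative. The numbered callouts correspond to the items in the table that follows.

[source,json,options="nowrap"]
----
{
  "id" : "name:test_server;lsn:29274832;txId:565", <1>
  "source" : "/debezium/postgresql/test_server", <2>
  "specversion" : "1.0", <3>
  "type" : "io.debezium.postgresql.datachangeevent", <4>
  "time" : "2020-01-13T13:55:39.738Z", <5>
  "datacontenttype" : "application/json", <6>
  "iodebeziumop" : "r", <7>
  "iodebeziumversion" : "2.3.4.Final", <8>
  "iodebeziumconnector" : "postgresql",
  "iodebeziumname" : "test_server",
  "iodebeziumtsms" : "1578923739738",
  "iodebeziumsnapshot" : "true",
  "iodebeziumdb" : "postgres",
  "iodebeziumschema" : "s1",
  "iodebeziumtable" : "a",
  "iodebeziumlsn" : "29274832",
  "iodebeziumtxid" : "565", <9>
  "iodebeziumtxtotalorder" : "1",
  "iodebeziumtxdatacollectionorder" : "1",
  "data" : { <10>
    "before" : null,
    "after" : {
      "pk" : 1,
      "name" : "Bob"
    }
  }
}
----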
.Descriptions of fields in a CloudEvents change event record
[cols="1,7",options="header",subs="+attributes"]
|===
|Item |Description
|1
|Unique ID that the connector generates for the change event based on the change event's content.
|2
|The source of the event, which is the logical name of the database as specified by the `topic.prefix` property in the connector's configuration.
|3
|The CloudEvents specification version.
|4
|Connector type that generated the change event.
The format of this field is `io.debezium._CONNECTOR_TYPE_.datachangeevent`.
The value of `_CONNECTOR_TYPE_` is `mongodb`, `mysql`, `postgresql`, or `sqlserver`.
|5
|Time of the change in the source database.
|6
|Describes the content type of the `data` attribute.
Possible values are `json`, as in this example, or `avro`.
|7
|An operation identifier.
Possible values are `r` for read, `c` for create, `u` for update, or `d` for delete.
|8
|All `source` attributes that are known from {prodname} change events are mapped to CloudEvents extension attributes by using the `iodebezium` prefix for the attribute name.
|9
|When enabled in the connector, each `transaction` attribute that is known from {prodname} change events is mapped to a CloudEvents extension attribute by using the `iodebeziumtx` prefix for the attribute name.
|10
|The actual data change.
Depending on the operation and the connector, the data might contain `before`, `after`, or `patch` fields.
|===
The following example also shows what a CloudEvents change event record emitted by a PostgreSQL connector looks like. In this example, the PostgreSQL connector is again configured to use JSON for the CloudEvents envelope, but this time the connector is configured to use Avro for the `data` format.
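In the following sketch, the `iodebezium` extension attributes are unchanged from the previous example and are omitted for brevity. The `data` value is an illustrative Base64 placeholder, not real Avro-serialized bytes.

[source,json,options="nowrap"]
----
{
  "id" : "name:test_server;lsn:33227720;txId:578",
  "source" : "/debezium/postgresql/test_server",
  "specversion" : "1.0",
  "type" : "io.debezium.postgresql.datachangeevent",
  "time" : "2020-01-13T14:04:18.597Z",
  "datacontenttype" : "application/avro",
  "iodebeziumop" : "r",
  "data" : "AAAAAAEAAgICAg=="
}
----

A connector configuration fragment that produces this combination might look like the following sketch; the registry URL is a placeholder:

[source,json]
----
...
"value.converter": "io.debezium.converters.CloudEventsConverter",
"value.converter.serializer.type": "json",
"value.converter.data.serializer.type": "avro",
"value.converter.avro.schema.registry.url": "http://my-registry/api",
...
----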
The CloudEvents converter converts Kafka record values. To also convert record keys, specify a `key.converter` in the same connector configuration, for example:
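[source,json]
----
...
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"key.converter.schemas.enable": "false",
...
----

This sketch uses the standard Kafka Connect `JsonConverter`; any converter that is available to the Connect runtime can serve as the key converter.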
In some cases, however, a record is processed before it reaches the converter in such a way that metadata is no longer present in its value, for example, after the record is processed by the Outbox Event Router SMT.
To preserve the required metadata, you can use the following approach to pass the metadata in the record headers.
1. Implement a mechanism for recording the metadata in the record's headers before the record reaches the converter, for example, by using the `HeaderFrom` SMT, as in the sketch that follows.
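The following sketch uses the standard Kafka Connect `HeaderFrom` SMT to copy the `source` and `op` fields of the record value into headers of the same names. The transform alias `moveMetadataToHeaders` and the header names are arbitrary choices for this example.

[source,json]
----
...
"transforms": "moveMetadataToHeaders",
"transforms.moveMetadataToHeaders.type": "org.apache.kafka.connect.transforms.HeaderFrom$Value",
"transforms.moveMetadataToHeaders.fields": "source,op",
"transforms.moveMetadataToHeaders.headers": "source,op",
"transforms.moveMetadataToHeaders.operation": "copy",
...
----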
By default, the CloudEvents converter automatically generates values for the `id` and `type` fields of a CloudEvent, and generates the schema name for its `data` field.
You can customize the way that the converter populates these fields by changing the defaults in the `metadata.source` property and specifying the fields' values in the appropriate headers.
Even if you omit parts of the property's value, such as the `id` and `type` sources, the converter generates values for the omitted parts.
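For example, the following sketch configures the converter to read both the `id` and `type` values from headers; the value follows the `metadata.source` format that is described at the end of this section:

[source,json]
----
"value.converter.metadata.source": "value,id:header,type:header"
----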
With the preceding configuration in effect, you could configure upstream functions to add `id` and `type` headers with the values that you want to pass to the CloudEvents converter.
If you want to provide a value only for the `id` header, use:
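[source,json]
----
"value.converter.metadata.source": "value,id:header,type:generate"
----

In this sketch, `generate` is assumed to restore the default behavior, in which the converter generates the `type` value automatically.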
To enable the converter to retrieve the data schema name from a header field, you must set xref:cloud-events-converter-schema-data-name-source-header-enable[`schema.data.name.source.header.enable`] to `true`.
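The following sketch combines the two settings; it assumes that `dataSchemaName:header` is the corresponding entry in the `metadata.source` property:

[source,json]
----
...
"value.converter.metadata.source": "value,id:generate,type:generate,dataSchemaName:header",
"value.converter.schema.data.name.source.header.enable": "true",
...
----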
.Descriptions of CloudEvents converter configuration options
[cols="3,7",options="header",subs="+attributes"]
|===
|Option |Description

|`avro.*`
|Any configuration options to be passed through to the underlying converter when using Avro. The `avro.` prefix is removed. For example, for Avro `data`, you would specify the `avro.schema.registry.url` option.

|`schema.cloudevents.name`
|Specifies the CloudEvents schema name under which the schema is registered in a Schema Registry. The setting is ignored when `serializer.type` is `json`.
The schema name is obtained from the `dataSchemaName` parameter that is specified in the xref:cloud-events-converter-metadata-source[`metadata.source`] property.

|[[cloud-events-converter-metadata-source]]`metadata.source`
|A comma-separated list that specifies the sources from which the converter retrieves metadata values (source, operation, transaction) for the CloudEvent `id` and `type` fields, and for the `dataSchemaName` parameter, which specifies the name under which the `data` schema is registered in a Schema Registry.
For configuration examples, see xref:configuration-of-sources-of-metadata-and-some-cloudevents-fields[Configuration of sources of metadata and some CloudEvents fields].
|===