
= MongoDB New Document State Extraction
include::../_attributes.adoc[]
:linkattrs:
:icons: font
:source-highlighter: highlight.js
[NOTE]
====
This single message transformation (SMT) is under active development right now, so the emitted message structure or other details may still change as development progresses.
Please see below for a description of the known limitations of this transformation.
====
[NOTE]
====
This SMT is supported only for the MongoDB connector.
See xref:configuration/event-flattening.adoc[here] for the relational database equivalent to this SMT.
====
[WARNING]
====
Breaking changes in 0.9.0

Breaking changes were introduced together with the delete handling features: the previous default behavior was to keep delete messages, whereas the new default behavior is to remove them. To change this behavior, please refer to the link:#configuration_options[`delete.handling.mode`] and link:#configuration_options[`drop.tombstones`] options.
====
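If you need to restore the previous behavior of keeping delete messages in the stream, a configuration along these lines should do it (a sketch only, using the options listed in link:#configuration_options[Configuration options]):
[source]
----
transforms=unwrap,...
transforms.unwrap.type=io.debezium.connector.mongodb.transforms.ExtractNewDocumentState
transforms.unwrap.drop.tombstones=false
transforms.unwrap.delete.handling.mode=none
----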
The Debezium MongoDB connector generates data in the form of a complex message structure.
The message consists of two parts:

* operation and metadata
* for inserts, the whole document after the insert has been executed; for updates, a patch element describing the altered fields

The `after` and `patch` elements are strings containing JSON representations of the inserted/altered data.
E.g. the general message structure for an insert event looks like this:
[source,json,indent=0]
----
{
    "op": "r",
    "after": "{\"field1\":\"newvalue1\",\"field2\":\"newvalue2\"}",
    "source": { ... }
}
----
More details about the message structure are provided in xref:connectors/mongodb.adoc[the documentation] of the MongoDB connector.
While this structure is a good fit to represent changes to MongoDB's schemaless collections,
it is not understood by existing sink connectors such as, for instance, the Confluent JDBC sink connector.
Therefore Debezium provides https://kafka.apache.org/documentation/#connect_transforms[a single message transformation] (SMT)
which converts the `after`/`patch` information from the MongoDB CDC events into a structure suitable for consumption by existing sink connectors.
To do so, the SMT parses the JSON strings and reconstructs properly typed Kafka Connect records from them
(comprising the correct message payload and schema),
which can then be consumed by connectors such as the JDBC sink connector.
Using JSON to visualize the emitted record structure, the event from above would look like this:
[source,json,indent=0]
----
{
    "field1" : "newvalue1",
    "field2" : "newvalue2"
}
----
The SMT should be applied on a sink connector.
== Configuration
The configuration is part of the sink connector configuration and is expressed as a set of properties:
[source]
----
transforms=unwrap,...
transforms.unwrap.type=io.debezium.connector.mongodb.transforms.ExtractNewDocumentState
transforms.unwrap.drop.tombstones=false
transforms.unwrap.delete.handling.mode=drop
transforms.unwrap.operation.header=true
----
=== Array encoding
The SMT converts MongoDB arrays into arrays as defined by the Kafka Connect (or Apache Avro) schema.
The problem is that such arrays must contain elements of the same type.
MongoDB allows the user to store elements of heterogeneous types in the same array.
To bypass this impedance mismatch, it is possible to encode the array in two different ways, using the `array.encoding` configuration option.
[source]
----
transforms=unwrap,...
transforms.unwrap.type=io.debezium.connector.mongodb.transforms.ExtractNewDocumentState
transforms.unwrap.array.encoding=<array|document>
----
Value `array` (the default) will encode arrays as the array datatype.
It is the user's responsibility to ensure that all elements of a given array instance are of the same type.
This option is more restrictive but allows arrays to be processed easily by downstream clients.
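For comparison, with the default `array` encoding, a document whose array elements are all of the same type passes through as a regular Connect array. A hypothetical source document such as
[source,json,indent=0]
----
{
    "_id": 1,
    "a1": [
        {
            "a": 1,
            "b": "none"
        },
        {
            "a": 2,
            "b": "some"
        }
    ]
}
----
would be emitted with `a1` as an array field whose elements all share one schema.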
Value `document` will convert the array into a *struct* of *structs*, in a similar way as done by http://bsonspec.org/[BSON serialization].
The main *struct* contains fields named `_0`, `_1`, `_2` etc., where the name represents the index of the element in the array.
Every element is then passed as the value of the given field.
Let's suppose an example source MongoDB document with an array containing elements of heterogeneous types:
[source,json,indent=0]
----
{
    "_id": 1,
    "a1": [
        {
            "a": 1,
            "b": "none"
        },
        {
            "a": "c",
            "d": "something"
        }
    ]
}
----
This document will be encoded as
[source,json,indent=0]
----
{
    "_id": 1,
    "a1": {
        "_0": {
            "a": 1,
            "b": "none"
        },
        "_1": {
            "a": "c",
            "d": "something"
        }
    }
}
----
This option allows you to process arbitrary arrays, but the consumer needs to know how to properly handle them.
_Note: The underscore in the index names is present because Avro encoding requires field names not to start with a digit._
=== Nested structure flattening
When a MongoDB document contains a nested document (structure), it is faithfully encoded as a nested structure field.
If the sink connector supports only flat message structures, it is possible to flatten the internal structure into a flat one with consistent field naming.
To enable this feature, the option `flatten.struct` must be set to `true`.
[source]
----
transforms=unwrap,...
transforms.unwrap.type=io.debezium.connector.mongodb.transforms.ExtractNewDocumentState
transforms.unwrap.flatten.struct=<true|false>
transforms.unwrap.flatten.struct.delimiter=<string>
----
The resulting flat document will consist of fields whose names are created by joining the name of the parent field and the names of the fields in the nested document.
Those elements are separated with the string defined by the `flatten.struct.delimiter` option, which defaults to the _underscore_.
Let's suppose an example source MongoDB document with a field containing a nested document:
[source,json,indent=0]
----
{
    "_id": 1,
    "a": {
        "b": 1,
        "c": "none"
    },
    "d": 100
}
----
Such a document will be encoded as
[source,json,indent=0]
----
{
    "_id": 1,
    "a_b": 1,
    "a_c": "none",
    "d": 100
}
----
This option allows you to convert a hierarchical document into a flat structure suitable for table-like storage.
=== MongoDB `$unset` handling
MongoDB supports `$unset` operations, which remove a certain field from a document. Since collections are schemaless, it is hard to tell consumers/sinks that a field is now missing; the approach Debezium uses is to set the removed field to a null value.
Given the operation
[source,json,indent=0]
----
{
    "after": null,
    "patch": "{\"$unset\" : {\"a\" : true}}"
}
----
The final encoding will look like
[source,json,indent=0]
----
{
    "id": 1,
    "a": null
}
----
Note that other MongoDB operations might cause an `$unset` internally; `$rename` is one example.
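For illustration only (the exact patch content depends on the MongoDB version and how the operation is recorded in the oplog), renaming field `a` to `b` might surface as a combined `$set`/`$unset` patch such as
[source,json,indent=0]
----
{
    "after": null,
    "patch": "{\"$set\" : {\"b\" : 1}, \"$unset\" : {\"a\" : true}}"
}
----
which would then be flattened with the removed field set to null and the new field carrying the value:
[source,json,indent=0]
----
{
    "id": 1,
    "b": 1,
    "a": null
}
----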
=== Determine original operation
When a message is flattened, the final result does not show whether it was an insert, an update or a first read (deletions can be detected via tombstones or rewrites, see link:#configuration_options[Configuration options]).
To solve this problem, Debezium offers an option to propagate the original operation via a header added to the message.
To enable this feature, the option `operation.header` must be set to `true`.
[source]
----
transforms=unwrap,...
transforms.unwrap.type=io.debezium.connector.mongodb.transforms.ExtractNewDocumentState
transforms.unwrap.operation.header=true
----
The possible values are the ones from the `op` field of xref:connectors/mongodb.adoc#change-events-value[MongoDB connector change events].
=== Adding source metadata fields
The SMT can optionally add metadata fields from the original change event's `source` structure to the final flattened record (prefixed with "__").
This functionality can be used to add things like the collection from the change event, or connector-specific fields like the replica set name.
For more information on what's available in the source structure see xref:connectors/mongodb.adoc[the documentation] for the MongoDB connector.
For example, the configuration
[source]
----
transforms=unwrap,...
transforms.unwrap.type=io.debezium.connector.mongodb.transforms.ExtractNewDocumentState
transforms.unwrap.add.source.fields=rs,collection
----
will add
[source,json]
----
{ "__rs" : "rs0", "__collection" : "my-collection", ... }
----
to the final flattened record.
For `DELETE` events, this option is only supported when the `delete.handling.mode` option is set to "rewrite".
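For example, to have the source metadata fields added to delete events as well, the rewrite mode can be enabled together with this option (a sketch combining the options described above):
[source]
----
transforms=unwrap,...
transforms.unwrap.type=io.debezium.connector.mongodb.transforms.ExtractNewDocumentState
transforms.unwrap.add.source.fields=rs,collection
transforms.unwrap.delete.handling.mode=rewrite
----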
[[configuration_options]]
== Configuration options
[cols="35%a,10%a,55%a",width=100,options="header,footer",role="table table-bordered table-striped"]
|=======================
|Property
|Default
|Description
|`array.encoding`
|`array`
|The SMT converts MongoDB arrays into arrays as defined by the Kafka Connect (or Apache Avro) schema. Accepted values are `array` (the default) and `document`.
|`flatten.struct`
|`false`
|The SMT flattens structs by concatenating the fields into plain properties, using a configurable delimiter.
|`flatten.struct.delimiter`
|`_`
|The delimiter used to join field names from the input record when generating field names for the output record. Only applies when `flatten.struct` is set to `true`.
|`operation.header`
|`false`
|The SMT adds the xref:connectors/mongodb.adoc#change-events-value[event operation] as a message header.
|`drop.tombstones`
|`true`
|The SMT removes the tombstone generated by Debezium from the stream.
|`delete.handling.mode`
|`drop`
|The SMT can `drop` delete records, `rewrite` them or pass them on unchanged (`none`). The `rewrite` mode will add a `__deleted` field set to `true` or `false` depending on the represented operation (see the sketch after this table).
|`add.source.fields`
|
|Fields from the change event's `source` structure to add as metadata (prefixed with "__") to the flattened record
|=======================
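As a sketch of the `rewrite` mode mentioned above (the exact key fields retained in the record depend on the event, and the marker is shown here as a string), a delete event could be emitted as a flattened record that carries the `__deleted` marker:
[source,json,indent=0]
----
{
    "id": 1,
    "__deleted": "true"
}
----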
== Known limitations
* Feeding data changes from a schemaless store such as MongoDB to strictly schema-based datastores such as a relational database can by definition only work within certain limits.
Specifically, all fields of documents within one collection that have the same name must be of the same type. Otherwise, no consistent column definition can be derived in the target database.
* Arrays will be restored in the emitted Kafka Connect record correctly, but they are not supported by sink connectors that expect a "flat" message structure.