
Change log

All notable changes are documented in this file. Release numbers follow Semantic Versioning.

0.3.0

July 26, 2016 - Detailed release notes

New features

  • New MongoDB connector supports capturing changes from a MongoDB replica set or a MongoDB sharded cluster. See the documentation for details. DBZ-2
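
For illustration, a registration sketch for the new connector, assuming the mongodb.hosts and mongodb.name property names shown in the MongoDB connector documentation; the host list and names are placeholders:

```java
import java.util.Properties;

public class MongoDbConnectorConfigSketch {
    public static void main(String[] args) {
        // Minimal connector configuration, expressed as the properties you would hand
        // to a standalone Kafka Connect 0.10.0.0 worker. Property names are assumed
        // from the MongoDB connector documentation; verify them for your release.
        Properties config = new Properties();
        config.setProperty("name", "mongo-inventory-connector");
        config.setProperty("connector.class", "io.debezium.connector.mongodb.MongoDbConnector");
        config.setProperty("mongodb.hosts", "rs0/mongodb1:27017,mongodb2:27017"); // replica set members
        config.setProperty("mongodb.name", "fulfillment");                        // logical name, used as the topic prefix
        config.setProperty("tasks.max", "1");
        config.forEach((key, value) -> System.out.println(key + "=" + value));
    }
}
```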

Backwards-incompatible changes since 0.2.0

  • Upgraded to Kafka 0.10.0.0, which means that the Debezium connectors can only be used with Kafka Connect 0.10.0.0. Check the Kafka documentation for compatibility with other versions of Kafka brokers. DBZ-62, DBZ-80
  • Removed several methods in the GtidSet class inside the MySQL connector. The class was introduced in 0.2. This change will only affect applications explicitly using the class (by reusing the MySQL connector JAR), and will not affect how the MySQL connector works. Changed in 0.2.2. DBZ-79
  • The source field within each MySQL change event now contains the binlog position of that event (rather than the position of the next event). Events persisted by earlier versions of the connector are unaffected. This change may adversely affect clients that directly use the position within the source field. Changed in 0.2.2. DBZ-76
  • Corrected the names of the Avro-compliant Kafka Connect schemas generated by the MySQL connector for the before and after fields in its data change events. Consumers that require knowledge (by name) of the particular schemas used in 0.2 events may have trouble consuming events produced by the 0.2.1 (or later) connector. Fixed in 0.2.1. DBZ-72

Fixes since 0.2.0

  • The Kafka Connect schema names used in the MySQL connector's change events are now always Avro-compatible schema names. DBZ-86
  • Corrected parsing errors when MySQL DDL statements are generated by Liquibase. Fixed in 0.2.3. DBZ-83
  • Corrected support of MySQL TINYINT and SMALLINT types. Fixed in 0.2.3. DBZ-84, DBZ-87
  • Corrected support of MySQL temporal types, including DATE, TIME, and TIMESTAMP. Fixed in 0.2.3. DBZ-85
  • Corrected call to MySQL SHOW MASTER STATUS so that it works on pre-5.7 versions of MySQL. Fixed in 0.2.3. DBZ-82
  • Corrected how the MySQL connector records offsets for multi-row MySQL events so that, even if the connector experiences a non-graceful shutdown (i.e., a crash) after committing the offsets of some of the rows in such an event, upon restart it will resume with the remaining rows of that event. Previously, the connector might incorrectly restart at the next event. Fixed in 0.2.2. DBZ-73
  • Shutting down the MySQL connector immediately after a snapshot completes (before another change event is recorded) now properly marks the snapshot as complete. Fixed in 0.2.2. DBZ-77
  • The MySQL connector's plugin archive now contains the MySQL JDBC driver JAR file required by the connector. Fixed in 0.2.1. DBZ-71

0.2.3

July 26, 2016 - Detailed release notes

Backwards-incompatible changes since 0.2.2

None

Fixes since 0.2.2

  • Corrected parsing errors when MySQL DDL statements are generated by Liquibase. DBZ-83
  • Corrected support of MySQL TINYINT and SMALLINT types. DBZ-84, DBZ-87
  • Corrected support of MySQL temporal types, including DATE, TIME, and TIMESTAMP. DBZ-85
  • Corrected call to MySQL SHOW MASTER STATUS so that it works on pre-5.7 versions of MySQL. DBZ-82

0.2.2

June 22, 2016 - Detailed release notes

Backwards-incompatible changes since 0.2.1

  • Removed several methods in the GtidSet class inside the MySQL connector. The class was introduced in 0.2. This change will only affect applications explicitly using the class (by reusing the MySQL connector JAR), and will not affect how the MySQL connector works. DBZ-79
  • The source field within each MySQL change event now contains the binlog position of that event (rather than the position of the next event). Events persisted by earlier versions of the connector are unaffected. This change may adversely affect clients that directly use the position within the source field. DBZ-76
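
Since the recorded position now refers to the event itself, code that reads it should treat it as the coordinates of the current event. A minimal sketch of pulling the binlog coordinates out of a change event's value (in its Kafka Connect Struct form, e.g. as delivered by the embedded engine); the file and pos field names inside the source block are assumptions to check against the MySQL connector documentation:

```java
import org.apache.kafka.connect.data.Struct;

public class BinlogPositionSketch {
    // Reads the binlog coordinates carried in a MySQL change event's "source" block.
    // As of 0.2.2 these coordinates identify the event itself, not the next event.
    // The "file" and "pos" field names are assumptions based on the documentation.
    public static String describePosition(Struct envelope) {
        Struct source = envelope.getStruct("source");
        String binlogFile = source.getString("file");
        long binlogPosition = source.getInt64("pos");
        return binlogFile + ":" + binlogPosition;
    }
}
```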

Fixes since 0.2.1

  • Corrected how the MySQL connector records offsets for multi-row MySQL events so that, even if the connector experiences a non-graceful shutdown (i.e., a crash) after committing the offsets of some of the rows in such an event, upon restart it will resume with the remaining rows of that event. Previously, the connector might incorrectly restart at the next event. DBZ-73
  • Shutting down the MySQL connector immediately after a snapshot completes (before another change event is recorded) now properly marks the snapshot as complete. DBZ-77

0.2.1

June 10, 2016 - Detailed release notes

Backwards-incompatible changes since 0.2.0

  • Corrected the names of the Avro-compliant Kafka Connect schemas generated by the MySQL connector for the before and after fields in its data change events. Consumers that require knowledge (by name) of the particular schemas used in 0.2 events may have trouble consuming events produced by the 0.2.1 (or later) connector. (DBZ-72)

Fixes since 0.2.0

  • The MySQL connector's plugin archive now contains the MySQL JDBC driver JAR file required by the connector. (DBZ-71)

0.2.0

June 8, 2016 - Detailed release notes

New features

  • MySQL connector supports high availability MySQL cluster topologies. See the documentation for details. (DBZ-37)
  • MySQL connector now by default starts by performing a consistent snapshot of the schema and contents of the upstream MySQL databases in their current state. See the documentation for details about how this works and how it impacts other database clients. (DBZ-31)
  • MySQL connector can be configured to exclude, truncate, or mask specific columns in events. (DBZ-29)
  • MySQL connector events can be serialized using the Confluent Avro converter or the JSON converter. Previously, only the JSON converter could be used. (DBZ-29, DBZ-63, DBZ-64)
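
For illustration, a configuration sketch combining the connection settings and the new column handling above, expressed as Java properties; the column.blacklist, column.truncate.to.N.chars, and column.mask.with.N.chars property names are assumptions taken from the MySQL connector documentation, and all connection values are placeholders:

```java
import java.util.Properties;

public class MySqlConnectorConfigSketch {
    public static void main(String[] args) {
        Properties config = new Properties();
        // Standard Kafka Connect connector settings.
        config.setProperty("name", "inventory-connector");
        config.setProperty("connector.class", "io.debezium.connector.mysql.MySqlConnector");
        // Connection settings (placeholders).
        config.setProperty("database.hostname", "mysql.example.com");
        config.setProperty("database.port", "3306");
        config.setProperty("database.user", "debezium");
        config.setProperty("database.password", "secret");
        config.setProperty("database.server.id", "184054");
        config.setProperty("database.server.name", "fulfillment");
        // Column handling introduced by DBZ-29: exclude a column entirely,
        // truncate long values, and mask sensitive values (property names assumed).
        config.setProperty("column.blacklist", "inventory.customers.ssn");
        config.setProperty("column.truncate.to.20.chars", "inventory.products.description");
        config.setProperty("column.mask.with.10.chars", "inventory.customers.email");
        config.forEach((key, value) -> System.out.println(key + "=" + value));
    }
}
```

The choice between the JSON converter and the Confluent Avro converter is made through the Kafka Connect worker's key.converter and value.converter settings rather than in the connector configuration.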

Backwards-incompatible changes since 0.1

  • Completely redesigned the structure of event messages produced by MySQL connector and stored in Kafka topics. Events now contain an envelope structure with information about the source event, the kind of operation (create/insert, update, delete, read), the time that Debezium processed the event, and the state of the row before and/or after the event. The messages written to each topic have a distinct Avro-compliant Kafka Connect schema that reflects the structure of the source table, which may vary over time independently from the schemas of all other topics. See the documentation for details. This envelope structure will likely be used by future connectors. (DBZ-50, DBZ-52, DBZ-45, DBZ-60)
  • MySQL connector handles deletion of a row by recording a delete event message whose value contains the state of the removed row (and other metadata), followed by a tombstone event message with a null value to signal Kafka's log compaction that all prior messages with the same key can be garbage collected. See the documentation for details. (DBZ-44)
  • Changed the format of events that the MySQL connector writes to its schema change topic, through which consumers can access events with the DDL statements applied to the database(s). The format change makes it possible for consumers to correlate these events with the data change events. (DBZ-43, DBZ-55)
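
The envelope and tombstone behaviour described above is what a downstream consumer sees on each topic. A minimal consumer-side sketch using Kafka Connect's Struct API, as the embedded engine or a transformation would see each record; the op, before, and after field names follow the envelope description above and should be verified against the documentation for your version:

```java
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.source.SourceRecord;

public class EnvelopeSketch {
    // Inspects one change event in the 0.2.0 envelope format. Field names
    // ("op", "before", "after") follow the envelope described in these release
    // notes and are assumptions to verify against the documentation.
    public static void handle(SourceRecord record) {
        Object value = record.value();
        if (value == null) {
            // Tombstone: a null value follows each delete so that log compaction
            // can eventually discard earlier messages with the same key.
            System.out.println("tombstone for key " + record.key());
            return;
        }
        Struct envelope = (Struct) value;
        String op = envelope.getString("op");          // create/insert, update, delete, or read
        Struct before = envelope.getStruct("before");  // row state before the change, if any
        Struct after = envelope.getStruct("after");    // row state after the change, if any
        System.out.println("op=" + op + ", before=" + before + ", after=" + after);
    }
}
```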

Changes since 0.1

  • DDL parsing framework identifies the tables affected by statements via a new listener callback. (DBZ-38)
  • The database.binlog configuration property was required in version 0.1 of the MySQL connector, but in 0.2 it is no longer used because of the new snapshot feature. If provided, it will be quietly ignored. (DBZ-31)

Bug fixes since 0.1

  • MySQL connector now properly parses COMMIT statements, the REFERENCES clauses of CREATE TABLE statements, and statements with CHARSET shorthand of CHARACTER SET. (DBZ-48, DBZ-49, DBZ-57)
  • MySQL connector properly handles binary values that are hexadecimal strings. (DBZ-61)

0.1

March 17, 2016 - Detailed release notes

New features

  • MySQL connector for ingesting change events from MySQL databases. (DBZ-1)
  • Kafka Connect plugin archive for MySQL connector. (DBZ-17)
  • Simple DDL parsing framework that can be extended and used by various connectors. (DBZ-1)
  • Framework for embedding a single Kafka Connect connector inside an application. (DBZ-8)
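
A rough sketch of how the embedding framework is typically used; the EmbeddedEngine and Configuration class names and their builder methods are assumptions to verify against the 0.1 embedded-engine documentation, and all connector properties are placeholders:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.debezium.config.Configuration;
import io.debezium.embedded.EmbeddedEngine;

public class EmbeddedEngineSketch {
    public static void main(String[] args) {
        // Engine plus connector configuration. Offsets go to a local file instead of a
        // Kafka topic because no Kafka Connect cluster is involved when embedding.
        Configuration config = Configuration.create()
                .with("name", "embedded-mysql-connector")
                .with("connector.class", "io.debezium.connector.mysql.MySqlConnector")
                .with("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore")
                .with("offset.storage.file.filename", "/tmp/offsets.dat")
                .with("offset.flush.interval.ms", "60000")
                .with("database.hostname", "localhost")
                .with("database.port", "3306")
                .with("database.user", "debezium")
                .with("database.password", "secret")
                .with("database.server.id", "85744")
                .with("database.server.name", "embedded-example")
                .build();

        // The engine is a Runnable that hands every change event to the callback.
        EmbeddedEngine engine = EmbeddedEngine.create()
                .using(config)
                .notifying(record -> System.out.println(record))
                .build();

        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(engine);
    }
}
```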