Corrects how the MySQL connector reloads database history to take into account the included and excluded GTID sources. This only affects a connector configured to capture changes from _multiple_ MySQL database servers when GTID sources are explicitly excluded or included.
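For illustration, the filtering amounts to keeping only those UUID ranges in a GTID set whose source UUID passes the include/exclude filter. A minimal sketch, assuming a hypothetical `filterGtidSet` helper rather than the connector's actual types:

```java
import java.util.Set;
import java.util.function.Predicate;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class GtidSourceFilterSketch {
    /**
     * Keep only the UUID ranges in a GTID set whose source UUID satisfies the filter.
     * A GTID set is a comma-separated list of "uuid:interval[:interval...]" entries.
     */
    public static String filterGtidSet(String gtidSet, Predicate<String> sourceFilter) {
        return Stream.of(gtidSet.split(","))
                     .map(String::trim)
                     .filter(range -> sourceFilter.test(range.substring(0, range.indexOf(':'))))
                     .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        String gtids = "036d85a9-64e5-11e6-9b48-42010af0000c:1-2,"
                     + "7c1de3f2-3fd2-11e6-9cdc-42010af000bc:1-41";
        Set<String> included = Set.of("036d85a9-64e5-11e6-9b48-42010af0000c");
        System.out.println(filterGtidSet(gtids, included::contains));
        // -> 036d85a9-64e5-11e6-9b48-42010af0000c:1-2
    }
}
```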
Some of our test cases verify (de)serialization using the Avro Converter, which is included in the Confluent Platform. This commit upgrades the Confluent Platform to version 3.1.2, which matches Kafka 0.10.1.1.
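The round trip these tests exercise looks roughly like the following sketch, using Confluent's `AvroConverter`; the schema registry URL is a placeholder, and a running registry would be needed for this to actually execute:

```java
import java.util.Map;

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

import io.confluent.connect.avro.AvroConverter;

public class AvroRoundTripSketch {
    public static void main(String[] args) {
        AvroConverter converter = new AvroConverter();
        converter.configure(Map.of("schema.registry.url", "http://localhost:8081"), false);

        Schema schema = SchemaBuilder.struct().name("example")
                                     .field("id", Schema.INT32_SCHEMA).build();
        Struct value = new Struct(schema).put("id", 1);

        byte[] bytes = converter.fromConnectData("topic", schema, value);  // serialize to Avro
        SchemaAndValue restored = converter.toConnectData("topic", bytes); // deserialize back
        System.out.println(restored.value());
    }
}
```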
Improved the MySQL connector's logic to better handle Amazon RDS, which does not allow users to be granted the `SUPER` privilege. As before, the connector starts a transaction and attempts to get a global read lock via `FLUSH TABLES WITH READ LOCK` to prevent writes to the database, so that the binlog position can be accurately read _and_ the table schemas can be read without interference from other clients. Once that is done, the connector releases the global read lock and continues in the same transaction to read all table rows. This means our snapshot is consistent, yet the global read lock is held for only a very short period of time.
Amazon's RDS and Aurora are hosted MySQL instances that do not allow users to have the `SUPER` privilege, which means the user cannot get a global read lock. In this case, the connector detects this error, continues to read the database and table names (without any lock), and _then_ uses `FLUSH TABLES <tableName> WITH READ LOCK` on each table that satisfies the filters to prevent changes from other clients. The connector then reads the table schemas, reads _all_ table rows, commits the transaction, and _finally_ releases the table locks.
Therefore, there are two very different behaviors/requirements when the user cannot obtain a global read lock for lack of privilege, as on RDS:
# The user account that the connector uses must also have the `LOCK TABLES` privilege; without it, the connector will fail during the snapshot.
# The connector must hold the table read locks _until it has completed reading all of the tables_, since releasing the table locks using `UNLOCK TABLES` would prematurely commit our transaction and prevent us from getting a consistent snapshot. From the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/flush.html):
> `UNLOCK TABLES` implicitly commits any active transaction only if any tables currently have been locked with `LOCK TABLES`. The commit does not occur for `UNLOCK TABLES` following `FLUSH TABLES WITH READ LOCK` because the latter statement does not acquire table locks.
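A rough JDBC sketch of the two flows described above (not the connector's actual code; the statements and error handling are simplified assumptions):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

public class SnapshotLockSketch {
    void snapshot(Connection conn, List<String> tables) throws SQLException {
        conn.setAutoCommit(false); // snapshot runs inside a single transaction
        boolean globalLock = true;
        try (Statement stmt = conn.createStatement()) {
            try {
                stmt.execute("FLUSH TABLES WITH READ LOCK");
            } catch (SQLException e) {
                // No SUPER privilege (e.g., RDS/Aurora): fall back to table-level locks.
                globalLock = false;
            }
            // ... read the binlog position and the database/table names here ...
            if (!globalLock) {
                for (String table : tables) {
                    // Requires the LOCK TABLES privilege.
                    stmt.execute("FLUSH TABLES " + table + " WITH READ LOCK");
                }
            }
            // ... read the table schemas here ...
            if (globalLock) {
                // Safe to release early: per the docs quoted above, this does not
                // commit, and the reads stay consistent within the transaction.
                stmt.execute("UNLOCK TABLES");
            }
            // ... read all table rows here ...
            conn.commit();
            if (!globalLock) {
                stmt.execute("UNLOCK TABLES"); // only after the transaction is committed
            }
        }
    }
}
```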
Corrected the MongoDB connector to restart an initial sync upon startup if the previously recorded offset signals that the initial sync did not complete during the prior run.
Also changed the connector’s replicator to buffer the last record during an initial sync so that, upon completion of the initial sync, that record can be rewritten with an offset reflecting that the initial sync completed. This way, even if the initial sync completes and there are no other events in the oplog, the connector will still consider the initial sync as completed.
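A minimal sketch of that buffering idea, with hypothetical names (`OffsetMarker`, `withInitialSyncCompleted`) standing in for the replicator's actual types:

```java
import java.util.Iterator;
import java.util.function.Consumer;

public class BufferedInitialSyncSketch<R> {
    interface OffsetMarker<R> {
        R withInitialSyncCompleted(R record);
    }

    void copy(Iterator<R> records, Consumer<R> emit, OffsetMarker<R> marker) {
        R buffered = null;
        while (records.hasNext()) {
            R next = records.next();
            if (buffered != null) {
                emit.accept(buffered); // emit all but the last record as-is
            }
            buffered = next;
        }
        if (buffered != null) {
            // The final record carries an offset that marks the initial sync as
            // completed, so a restart won't redo the sync even when the oplog
            // contains no further events.
            emit.accept(marker.withInitialSyncCompleted(buffered));
        }
    }
}
```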
The MySQL DDL parser was not correctly handling `DEFINER` clauses within `CREATE TRIGGER` or `CREATE EVENT` statements. Support for `DEFINER` clauses was recently added for the various forms of `CREATE PROCEDURE`, `CREATE FUNCTION`, and `CREATE VIEW` statements. These are the only kinds of statements that have the `DEFINER` attribute, per the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/stored-programs-security.html).
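For illustration, these are the kinds of statements the parser must now accept; `DdlParser` below is a stand-in interface, not Debezium's actual API:

```java
interface DdlParser {
    void parse(String ddl);
}

class DefinerClauseExamples {
    static void exercise(DdlParser parser) {
        // CREATE TRIGGER with an explicit DEFINER clause
        parser.parse("CREATE DEFINER=`admin`@`localhost` TRIGGER upd_check "
                + "BEFORE UPDATE ON account FOR EACH ROW SET NEW.amount = NEW.amount * 2");
        // CREATE EVENT with a DEFINER clause
        parser.parse("CREATE DEFINER=CURRENT_USER EVENT purge_log "
                + "ON SCHEDULE EVERY 1 DAY DO DELETE FROM log WHERE ts < NOW() - INTERVAL 7 DAY");
    }
}
```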
Changed the events’ `source` structure to optionally contain the identifier of the MySQL thread, where appropriate. The thread identifier is included on each `BEGIN` binlog event, so it is captured there and added to all of the change events produced for that transaction.
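A sketch of that propagation, with assumed names rather than the connector's actual classes:

```java
public class SourceInfoSketch {
    private long threadId = -1;

    // Called when a BEGIN binlog event is seen; its event data carries the thread id.
    void onBegin(long threadIdFromBeginEvent) {
        this.threadId = threadIdFromBeginEvent;
    }

    // Included in the `source` struct of every change event for the transaction;
    // optional, so it is absent when no thread id is known.
    Long threadForEvent() {
        return threadId >= 0 ? threadId : null;
    }

    void onCommit() {
        this.threadId = -1; // reset between transactions
    }
}
```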
The KafkaDatabaseHistory class was not behaving well in tests using my local development environment. When restoring from the persisted Kafka topic, the class would set up a Kafka consumer and see repeated messages. It is unclear whether the repeats were due to our test environment and its very short poll timeouts. Regardless, the restore logic was refactored to track offsets so that each message is processed only once.
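A minimal sketch of the offset-tracking approach, assuming a plain `KafkaConsumer` already subscribed to the history topic (the real class handles partitions, timeouts, and end offsets more carefully):

```java
import java.time.Duration;
import java.util.function.Consumer;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class HistoryRestoreSketch {
    void restore(KafkaConsumer<String, String> consumer, Consumer<String> recover) {
        long lastProcessedOffset = -1L;
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            if (records.isEmpty()) {
                break; // assume we've caught up (real code would also check end offsets)
            }
            for (ConsumerRecord<String, String> record : records) {
                if (record.offset() <= lastProcessedOffset) {
                    continue; // a repeat of a message we've already processed
                }
                recover.accept(record.value());
                lastProcessedOffset = record.offset();
            }
        }
    }
}
```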
A recent MySQL change added quoted identifiers to DDL statements (e.g., the output of `SHOW CREATE TABLE <quotedIdentifier>`), so the expected results were updated to reflect this. Also, the `pos` field is quite brittle and changes with many MySQL version upgrades in the Docker images, so those fields are now ignored during the integration tests.
The project requires that JavaDoc exists for all public methods and is valid (e.g., has `@param`, `@return`, and `@throws` tags that match the signature). However, the generated Java source for Protobuf contains numerous JavaDoc errors relative to these settings. This causes lots of errors inside Eclipse (and probably other IDEs), while ignoring/disabling the JavaDoc errors leads to improper JavaDoc (fixed in the next commit). By moving the generated Protobuf source code to a separate directory (e.g., `generated-sources`), IDEs will automatically discover the directory, and the user can ignore any compiler and JavaDoc errors/warnings for those files while keeping the stricter JavaDoc checking enabled for the rest of the code.
The PostgreSQL connector was not able to build locally, since the Maven build would wait forever for PostgreSQL's TCP port before starting the integration tests. Even after I corrected the `wait` specification to use localhost (rather than the direct container address), the build connected to Postgres when the container first started, but the container then shut down to adjust its configuration, so the tests failed against a stopped server. The build now looks for a specific log message that the container outputs only after its second startup, and this seems to work great (at least locally).
The version of the DB server required for this to work is at least 9.4. To be able to stream logical changes, the code relies on enhancements to the JDBC driver that are not yet public; therefore, the current codebase includes the sources for the JDBC driver.
The commit also updates the general DBZ build system for:
* custom Checkstyle package exclusions, required by the Postgres driver and the protobuf code for now
* support for debugging Surefire and Failsafe