Commit Graph

198 Commits

Author SHA1 Message Date
Randall Hauch
b48ccce4b5 DBZ-200 Corrected MySQL DDL parser to better handle column definitions
Apparently not all reserved words must be quoted when using them as column names, so refactored MySQL’s DDL parser to better handle a variety of unquoted column names that are reserved words.
2017-03-08 12:12:27 -06:00
Josh Stanfield
c794416684 update to allow savepoint in mysql replication stream 2017-03-08 09:28:10 -07:00
dleibovic
9fd2afc8b4 Increase the default mysql init timeout to 60 seconds for slower computers. Also parameterize it so that users can pass a custom value via 'mvn clean install -Dmysql.init.timeout=80000', for example 2017-02-23 13:34:14 -05:00
rich
9aa49736c8 DBZ-140 when locking individual tables, use a single statement with all the table names instead of issuing a statement per table which causes a MySQL error 2017-02-16 15:45:29 -05:00
Randall Hauch
043a2d2d92 DBZ-194 Improved MySQL connector’s built-in table filtering
The MySQL connector’s built-in table filter now just filters out all tables within the known built-in databases, and does not check the names of the tables. Thus, the connector should no longer filter out tables in other databases that happen to have the same names as the tables in the built-in databases.
2017-02-14 09:23:39 -06:00
Randall Hauch
af94fa8759 DBZ-193 MySQL DDL parser handles FULLTEXT index
Corrected the MySQL DDL parser to correctly handle `FULLTEXT` indexes within a `CREATE TABLE` statement. The parser was incorrectly using `canConsume(…)` with a list of options instead of `canConsumeAnyOf(…)`.
2017-02-10 15:49:20 -06:00
Randall Hauch
9a4a177004 DBZ-188 Corrected JavaDoc 2017-02-10 15:39:22 -06:00
dleibovic
aa50bfe71a DBZ-188: Allow a debezium mysql connector to filter production of DML events into kafka by the mysql UUID of the event
With GTIDs enabled, each transaction in the binlog contains a GTID event, which gives us access to the GTID of the transaction. The GTID has the following format: source_id:transaction_id, where source_id is the UUID of the MySQL server the transaction was written to.

I propose to allow a Debezium instance to be configured with a UUID pattern to check against before producing DML events into Kafka. Debezium would produce a DML event into Kafka if and only if the UUID in the event's GTID matches the pattern with which Debezium was configured.

The configuration for the UUID patterns will make use of the existing gtid.source.includes and gtid.source.excludes options. The DML event filtering will only be performed if the new option gtid.source.filter.dml.events is true.
2017-02-10 14:14:10 -05:00
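The commit above names the relevant connector options. As a rough illustration only, a configuration enabling GTID-based DML filtering might look like the following Java sketch; the property names come from the commit message, while the connector class is the standard Debezium MySQL connector and the server UUID is invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative configuration only: property names follow the commit message above,
// the server UUID is a made-up placeholder.
public class GtidDmlFilterConfigExample {
    public static Map<String, String> config() {
        Map<String, String> props = new HashMap<>();
        props.put("connector.class", "io.debezium.connector.mysql.MySqlConnector");
        // Only keep events originating from this MySQL server UUID (hypothetical value).
        props.put("gtid.source.includes", "3e11fa47-71ca-11e1-9e33-c80aa9429562");
        // Also apply the include/exclude filters to DML events, not just GTID events.
        props.put("gtid.source.filter.dml.events", "true");
        return props;
    }
}
```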
Randall Hauch
d2986710a5 DBZ-188 More efficient GTID source filters for MySQL Connector
Changed the GTID source filters in the MySQL connector to be far more efficient when the filters specify literal UUIDs rather than regex patterns. In these cases, the predicate just checks whether a supplied value is in a hash set, and no regular expression patterns are used.

The GTID source filters can still be a combination of UUID literals and regular expressions, and the predicate will use the best implementation for each. For example, if the filters include all UUID literals, then regular expressions will never be used.
2017-02-10 11:34:24 -06:00
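As a hedged sketch of the idea described above (not the connector's actual code; class and method names are hypothetical), choosing between a hash-set lookup for literal UUIDs and a regex fallback could look roughly like this:

```java
import java.util.Set;
import java.util.function.Predicate;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Hypothetical illustration: when every filter is a literal UUID, the predicate is a
// simple set-membership test; otherwise each filter is compiled as a regex pattern.
public class GtidSourcePredicates {
    private static final Pattern UUID_LITERAL =
            Pattern.compile("[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}");

    public static Predicate<String> includes(Set<String> filters) {
        boolean allLiterals = filters.stream()
                .allMatch(f -> UUID_LITERAL.matcher(f).matches());
        if (allLiterals) {
            // Fast path: no regular expressions at all, just a hash-set lookup.
            Set<String> literals = filters.stream()
                    .map(String::toLowerCase)
                    .collect(Collectors.toSet());
            return uuid -> literals.contains(uuid.toLowerCase());
        }
        // Mixed case: fall back to matching any of the compiled patterns.
        Set<Pattern> patterns = filters.stream()
                .map(f -> Pattern.compile(f, Pattern.CASE_INSENSITIVE))
                .collect(Collectors.toSet());
        return uuid -> patterns.stream().anyMatch(p -> p.matcher(uuid).matches());
    }
}
```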
Randall Hauch
8c60c29883 [maven-release-plugin] prepare for next development iteration 2017-02-07 14:22:12 -06:00
Randall Hauch
20134286e9 [maven-release-plugin] prepare release v0.4.0 2017-02-07 14:22:11 -06:00
Randall Hauch
403fee1375 DBZ-185 MySQL’s database history now filters GTID sources
Corrects how the MySQL connector reloads database history to take into account the included and excluded GTID sources. This only affects a connector configured to capture changes from _multiple_ MySQL database servers when GTID sources are explicitly excluded or included.
2017-02-07 11:21:22 -06:00
Randall Hauch
bb0800ca3a DBZ-140 Improved locking logic to support RDS
Improved the MySQL connector's logic to better handle Amazon RDS, which does not allow granting users the `SUPER` privilege. As before, the connector starts a transaction and attempts to get a global read lock via `FLUSH TABLES WITH READ LOCK` to prevent writes to the database so that the binlog position can be accurately read _and_ the table schemas can be read without interference from other clients. Once that is done, the connector releases the global read lock and continues in the same transaction to read all table rows. This means that our snapshot is consistent, but we maintain the global read lock for only a very short period of time.

Amazon's RDS and Aurora are hosted MySQL instances that do not allow users to have the `SUPER` privilege, which means the user cannot get a global read lock. In this case, the connector detects this error, continues to read the database and table names (without any lock), and _then_ uses `FLUSH TABLES <tableName> WITH READ LOCK` on each table that satisfies the filters to prevent changes from other clients. The connector then reads the table schemas, reads _all_ table rows, commits the transaction, and _finally_ releases the table locks.

Therefore, there are two very different behaviors/requirements when the user can't obtain a global read lock because of lack of privilege, like on RDS:

# The RDS user that the connector makes use of must also have the `LOCK TABLES` privilege; without it the connector will fail during the snapshot.
# The connector must hold the table read locks _until it has completed reading all of the tables_, since releasing the table locks using `UNLOCK TABLES` would prematurely commit our transaction and prevent us from getting a consistent snapshot. From the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/flush.html):
> `UNLOCK TABLES` implicitly commits any active transaction only if any tables currently have been locked with `LOCK TABLES`. The commit does not occur for `UNLOCK TABLES` following `FLUSH TABLES WITH READ LOCK` because the latter statement does not acquire table locks.
2017-02-06 13:56:55 -06:00
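To make the two-step locking strategy above concrete, here is a simplified JDBC sketch using plain `java.sql` APIs. The class and method names are illustrative and this is not the connector's actual implementation; in particular, the real connector releases the global read lock earlier in the non-RDS path, which is omitted here for brevity.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

// Simplified illustration: try a global read lock first; if that fails (e.g. on RDS,
// where the required privilege is not granted), lock each captured table individually,
// read everything, commit, and only then release the table locks.
public class RdsSnapshotLockingSketch {
    static void snapshot(Connection conn, List<String> tables) throws SQLException {
        conn.setAutoCommit(false);
        try (Statement stmt = conn.createStatement()) {
            try {
                stmt.execute("FLUSH TABLES WITH READ LOCK"); // fails without sufficient privileges
            }
            catch (SQLException e) {
                // Fallback: per-table read locks, which require the LOCK TABLES privilege.
                for (String table : tables) {
                    stmt.execute("FLUSH TABLES " + table + " WITH READ LOCK");
                }
            }
            // ... read binlog position, table schemas, and all table rows here ...
            conn.commit();
            // Release the table locks only after all rows have been read, because
            // UNLOCK TABLES after LOCK TABLES would implicitly commit the transaction.
            stmt.execute("UNLOCK TABLES");
        }
    }
}
```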
Randall Hauch
5490842449 Merge pull request #175 from rhauch/dbz-176
DBZ-176 Corrected MySQL DDL parser to support creating triggers with definers
2017-02-02 13:59:01 -06:00
Randall Hauch
74e5ba6448 DBZ-176 Corrected MySQL DDL parser to support creating triggers with definers
The MySQL DDL parser was not correctly handling `DEFINER` clauses within `CREATE TRIGGER` or `CREATE EVENT` statements. Support for `DEFINER` clauses was recently added for the various forms of `CREATE PROCEDURE`, `CREATE FUNCTION`, and `CREATE VIEW` statements. These are the only kinds of statements that have the definer attribute, per the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/stored-programs-security.html).
2017-02-02 12:44:28 -06:00
Randall Hauch
32a88fdc6f DBZ-184 Added database and table name to change event metadata 2017-02-02 12:09:53 -06:00
Randall Hauch
6230cab90e Merge pull request #173 from rhauch/dbz-113
DBZ-113 Added MySQL threads to the event’s source metadata
2017-02-02 12:00:19 -06:00
Randall Hauch
fe17b246af DBZ-113 Added MySQL threads to the event’s source metadata
Changed the events’ `source` structure to optionally contain the identifier of the MySQL thread where appropriate. The thread is included on each `BEGIN` binlog event, so these are captured and added to all of the associated change events produced for that transaction.
2017-02-02 11:53:32 -06:00
Randall Hauch
f2a65d03df DBZ-174 Added support for new binlog events
MySQL recently added additional binlog events, and this commit adds support to handle these new events by ignoring them.
2017-02-01 15:26:28 -06:00
Horia Chiorean
031c4a1552 DBZ-183 Fixes the BinlogReader's handling of TIMESTAMP columns to correctly account for timezones 2017-01-25 16:39:36 +02:00
Randall Hauch
a73f85a80f Merge pull request #162 from rareddy/DBZ-177
DBZ-177: Providing an alternative way to create JDBC connection based …
2017-01-13 13:37:38 -06:00
Ramesh Reddy
a9aace3480 DBZ-177: Providing an alternative way to create JDBC connections based on the configured JDBC driver class name and supplied classloader. Loading/creating JDBC connections is not reliable when the driver libraries are in a different classloader than the DriverManager. 2017-01-13 12:58:14 -06:00
Horia Chiorean
a300d3e1cf DBZ-3 Changes the configuration of the Docker Maven plugin to only use alias naming when necessary and moves the PG connector ahead of the Mongo connector in the build 2016-12-27 14:44:33 +02:00
Horia Chiorean
23e3f59fa1 DBZ-3 Implements a connector for streaming changes from a Postgres database
The version of the DB server required for this to work is at least 9.4
The commit also updates the general DBZ build system for:
* custom checkstyle package exclusions - required by the Postgres driver and the protobuf code, for now
* adds support for debugging Surefire and Failsafe
2016-12-27 14:44:32 +02:00
Randall Hauch
e60839e76b DBZ-164 Improved MySQL snapshot reader logic
Added more logic to the snapshot reader to better handle errors when reading the list of table names in each database. Now, any errors with a single database (e.g., some of the not-quite-a-database names described in the JIRA issue) will cause the snapshot reader to simply skip that database name and continue on (with proper logging).

This change also quotes all of the database and table names when used in SQL statements.
2016-12-20 22:03:46 -06:00
Randall Hauch
fd7e152852 Merge pull request #142 from rhauch/dbz-151
DBZ-151 Added new integration test framework
2016-12-20 17:53:16 -06:00
Randall Hauch
ab1140ef70 Merge pull request #155 from rhauch/dbz-169
DBZ-169 MySQL connector support for ON UPDATE clauses
2016-12-20 17:48:06 -06:00
Randall Hauch
fe44380d4c Merge pull request #154 from rhauch/dbz-168
DBZ-168 MySQL connector ignores XA binlog events
2016-12-20 17:47:57 -06:00
Randall Hauch
a9a84cb6aa DBZ-152 Enabled MySQL connector to skip table count checks during snapshot
Changed the MySQL connector’s `min.row.count.to.stream.results` configuration property to accept a value of 0, which signifies that all `SELECT COUNT(*) FROM tableA` queries should be skipped and instead all results should be streamed.
2016-12-20 17:40:57 -06:00
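For illustration, disabling the count checks described above might look like the following sketch; the property name comes from the commit message, and the other values are placeholders.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative snippet: a value of 0 skips the SELECT COUNT(*) queries entirely and
// streams all snapshot results row by row.
public class StreamAllResultsConfigExample {
    public static Map<String, String> config() {
        Map<String, String> props = new HashMap<>();
        props.put("connector.class", "io.debezium.connector.mysql.MySqlConnector");
        props.put("min.row.count.to.stream.results", "0");
        return props;
    }
}
```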
Randall Hauch
046702d959 DBZ-169 MySQL connector support for ON UPDATE clauses
Corrected the MySQL DDL parser to support `ON UPDATE NOW()` clauses in addition to `ON UPDATE CURRENT_TIMESTAMP`.
2016-12-20 16:19:18 -06:00
Randall Hauch
09f87cf190 DBZ-168 MySQL connector ignores XA binlog events
MySQL 5.7.7 introduced new behavior for handling XA events in the binlog. See the [MySQL documentation](http://dev.mysql.com/doc/refman/5.7/en/xa-restrictions.html) for details. This PR changes the binlog reader so that `XA …` statements appearing in the binlog are ignored altogether.
2016-12-20 15:32:44 -06:00
Randall Hauch
5dceb05f69 DBZ-151 Additional changes to improve test framework and MySQL integration tests 2016-12-20 10:58:56 -06:00
Randall Hauch
08e32a4a8b DBZ-151 Added multiple integration test modules to test various MySQL versions and configurations.
These new modules run during the '-Passembly' profile and use the new integration test framework that compares all
output produced by a connector to expected results that were previously recorded and verified. These integration test modules
can be run manually with a simple build of those modules or their parent; only the top-level 'integration-tests' module is run
when the assembly profile is used during builds of the entire codebase.
2016-12-20 09:18:10 -06:00
Randall Hauch
0851d8280c DBZ-166 Corrected shutdown logic of MySQL connector
The MySQL connector uses several threads, so previously upon connector shutdown these threads were simply cancelled. This is fine for the binlog reader (which can stop at any moment), but is a poor approach for the snapshot reader, as we didn’t always properly release the database resources and also didn’t complete the writing of the DDL history.

With this change, the snapshot reader stops in a very controlled manner, basically by having the 10-step snapshot procedure frequently check whether the reader should continue working, avoiding thread interruption altogether. And, the snapshot procedure will always clean up its database resources (locks, transactions, etc.), even if the procedure is stopped before completion.

This change also refactors how the snapshot and binlog readers are managed. This is no longer done in the MySqlConnectorTask class (which is busy enough); rather, the logic has been encapsulated in a new `ChainedReader` that makes use of a new `Reader` interface. This makes testing of `ChainedReader` easier, and ensures that `ChainedReader` relies only upon the primary methods of `Reader` rather than upon `AbstractReader`. `ChainedReader` handles multiple readers generically, and ensures that when stopped the readers are all handled correctly and completely process all records, yet avoids accidentally starting subsequent readers when stopping the previous reader.
2016-12-15 10:55:18 -06:00
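As a purely hypothetical sketch of the abstraction described above (the real `Reader` interface in the connector may well have a different shape), the essential contract for the snapshot reader, the binlog reader, and the `ChainedReader` that runs them in sequence could look like:

```java
import java.util.List;
import org.apache.kafka.connect.source.SourceRecord;

// Hypothetical shape only: a reader can be started, asked to stop in a controlled way,
// and polled for records; a ChainedReader would start the next reader only after the
// previous one has completely finished producing records.
public interface Reader {
    void start();                                            // begin producing records
    void stop();                                             // request a controlled shutdown
    List<SourceRecord> poll() throws InterruptedException;   // drain available records
}
```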
Randall Hauch
e3e66bf960 DBZ-161 Corrected MySQL connector logic when no GTIDs are used
Corrected the logic of the MySQL connector when getting the server’s GTID set. Previously, this logic failed if GTIDs were not used.
2016-12-08 08:09:52 -06:00
Dennis Persson
acd7bd8fa5 DBZ-142 Handle national character set columns in DDL parser 2016-12-07 07:38:30 +01:00
Randall Hauch
c762a221b7 DBZ-162 Corrected DDL parsing of MySQL functions
The MySQL DDL parser was not properly consuming function declarations. For functions, the parser consumes the entire statement without handling the various expressions within the function declaration, but the parser was not properly finding the end of the statement and instead continued trying to consume values beyond the end of the statement.

Specifically, when the parser consumes a `BEGIN`, it looks for a corresponding `END`. However, if it encountered an `END IF`, the `IF` plus any remaining tokens were left on the token stream and unprocessed. This confused the parser, which kept looking for statements and ultimately ended with a `No more content` error.

This case was replicated in integration tests, and the code was fixed to properly find the end of the statements.
2016-12-06 17:34:52 -06:00
Randall Hauch
c72242eeb0 Merge pull request #145 from sherafpm/bugfix/DBZ-160
DBZ-160 - Issue while parsing create table script with ENUM type and default value 'b'
2016-12-06 14:21:23 -06:00
Randall Hauch
eedc4fba00 DBZ-163 Corrected assembly profile in build
The Travis-CI builds run the Maven build using the `assembly` profile, and this has been failing quite a bit lately.

The first problem appears to be that the Travis-CI environment recently changed to have port 3306 taken, which means that our build fails to start any Docker containers for MySQL that attempt to use this port. A simple fix is to use different ports for the assembly build.

However, trying to change the port numbers for some of the profiles caused a lot of problems, and to correct these required refactoring how the properties are set. The Docker Maven plugin is now configured with separate properties that are set once (depending upon the profile) to determine the port assignments of the various Docker containers. The Failsafe plugin executions then use these Maven properties when setting the system variables (e.g., `database.host`) needed in the integration tests. This appears to have worked, but it still is a bit fragile. For example, the assembly profile defines several Failsafe executions, and during this profile these should be the only executions run; however, if not all the properties are set properly, the build seems to also run the default Failsafe execution in addition to the other `assembly` profile executions. (I think properties can’t only be defined in the execution, but need to also be defined in the Failsafe configuration.)

The “alternative” MySQL Docker images were removed, since they basically should not provide any different behavior than the `mysql/mysql-server` images we normally used. The extra containers required a lot more resources to run and dramatically increased the complexity of the build.

A few other trivial changes were made.
2016-12-05 16:37:59 -06:00
Randall Hauch
2b2bf693d7 DBZ-163 Changed Travis-CI build to skip the install dependencies step 2016-12-02 15:43:57 -06:00
Sherafudheen PM
ee52219736 DBZ-160 - Issue while parsing create table script with ENUM type and default value 'b' 2016-12-02 17:42:44 +05:30
Randall Hauch
0bf3b4c9f3 DBZ-157 Upgraded Docker Maven plugin
Upgraded the Docker Maven plugin to 0.18.1, which required changing our use of the `docker.image` to `docker.filter` (per the [changes in 0.17.1](https://github.com/fabric8io/docker-maven-plugin/blob/master/doc/changelog.md)).
2016-11-22 09:23:07 -06:00
Randall Hauch
a82ae5691b Reduce the log verbosity of the MySQL tests 2016-11-14 13:41:10 -06:00
Randall Hauch
d80bc1bfd7 DBZ-153 MySQL connector supports enum and set values with parentheses
Changed the MySQL connector to support ENUM and SET literals with parentheses.
2016-11-14 12:22:08 -06:00
Randall Hauch
8a52cda0dc DBZ-150 Changed the order of events when a row's key is changed. 2016-11-09 14:42:43 -06:00
Randall Hauch
b0ded5f383 DBZ-147 Added ability to treat MySQL DECIMAL as double
By default the MySQL connector handles `DECIMAL` and `NUMERIC` columns using `java.math.BigDecimal` values and describing them using the `org.apache.kafka.connect.data.Decimal` schema type, which serializes the values to a binary form.

This change adds a configuration option that keeps the default behavior by default, but allows handling `DECIMAL` and `NUMERIC` values as Java `double` with a schema type of `FLOAT64`.
2016-11-09 11:27:09 -06:00
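For illustration, enabling the `double` handling described above might look like the following sketch. The property name shown (`decimal.handling.mode`) is the option used by later Debezium releases and is assumed here; the exact name introduced by this change may differ.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative configuration; the property name is an assumption based on later
// Debezium releases, not confirmed for this version.
public class DecimalAsDoubleConfigExample {
    public static Map<String, String> config() {
        Map<String, String> props = new HashMap<>();
        props.put("connector.class", "io.debezium.connector.mysql.MySqlConnector");
        // Emit DECIMAL/NUMERIC columns as FLOAT64 (Java double) instead of the
        // binary-encoded Kafka Connect Decimal logical type.
        props.put("decimal.handling.mode", "double");
        return props;
    }
}
```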
Randall Hauch
ea5f7983c7 DBZ-144 Corrected MySQL connector restart
Added tests to verify whether the connector properly restarts in the binlog when the connector previously failed or stopped in the middle of a transaction. The tests showed that the connector was not able to restart properly whether or not GTIDs were used, since restarting from an arbitrary binlog event causes problems: the TABLE_MAP events for the affected tables are skipped.

The logic was changed significantly to record in the offsets the binlog coordinates at the start of the transaction, which should work whether or not GTIDs are used. Upon restart, the connector may have to re-read the events that were previously processed, but now the offset also includes the number of events that were previously processed so that these can be skipped upon restart.

This has an unfortunate side effect: the offsets record that a transaction was completed only when the connector generates a source record for the subsequent transaction. This is because the connector generates source records (with their offsets) for the binlog events in the transaction before the transaction's commit is seen. And, since no additional source records are produced for the transaction commit, the recorded offsets will show that the prior transaction is complete and that all of the events in the subsequent transaction are to be skipped. Thus, upon restart the connector has to re-read (but ignore) all of the binlog events associated with the completed transaction. This shouldn’t be a problem, and will only slow restarts for very large transactions.
2016-11-09 08:11:41 -06:00
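As a hypothetical illustration of the restart bookkeeping described above, the recorded offset can be thought of as the binlog coordinates at the start of the transaction plus a count of events already processed, so a restart can replay and skip them. The field names below are made up for the example and are not necessarily the connector's actual offset keys.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of an offset that points at the start of a transaction and
// records how many of its events were already processed before the restart.
public class TransactionOffsetSketch {
    public static Map<String, Object> offsetFor(String binlogFile, long txStartPosition,
                                                String gtidSet, int eventsAlreadyProcessed) {
        Map<String, Object> offset = new HashMap<>();
        offset.put("file", binlogFile);               // binlog file containing the transaction start
        offset.put("pos", txStartPosition);           // position of the transaction's first event
        offset.put("gtids", gtidSet);                 // GTID set, when GTIDs are enabled
        offset.put("events", eventsAlreadyProcessed); // events to skip after a restart
        return offset;
    }
}
```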
Randall Hauch
0d2acfd0a6 DBZ-149 Corrected type of BINARY column
The MySQL connector (or rather the DDL parser used in the connector) improperly assumed a `CHAR` JDBC type (and Avro schema `STRING` type) for MySQL columns of type `BINARY`. This corrects the error.
2016-11-08 17:41:01 -06:00
Randall Hauch
7656dce985 DBZ-148 Corrected timestamp check in test case to account for DST 2016-11-08 15:37:03 -06:00
Randall Hauch
43d2bf14cf DBZ-143 Minor improvements and correction of JavaDoc. 2016-11-04 09:02:44 -05:00