Commit Graph

23 Commits

Author SHA1 Message Date
Randall Hauch
b5945a24ec DBZ-32 Corrected assembly dependencies 2016-03-17 15:58:27 -05:00
Randall Hauch
0867bd7961 DBZ-32 Changed Maven build to support releasing to Maven Central via the Sonatype OSSRH. 2016-03-17 15:16:31 -05:00
Randall Hauch
91d200df51 DBZ-15 Removed some of the unnecessary JARs from the MySQL connector plugin kit 2016-03-17 11:03:27 -05:00
Randall Hauch
046fc83850 DBZ-23 Simplified PostgreSQL Connector's use of Docker plugin 2016-02-25 10:24:52 -06:00
Randall Hauch
42e531dbe9 DBZ-23 Simplified MySQL Connector's use of Docker plugin 2016-02-25 10:24:39 -06:00
Randall Hauch
7d4a996406 DBZ-23 Docker image created by the module is no longer tagged 2016-02-25 09:43:11 -06:00
Randall Hauch
73da199a4d DBZ-22 Adapted to the Docker Maven Plugin's move to Fabric8 community 2016-02-25 08:59:24 -06:00
Randall Hauch
92949d31c0 DBZ-21 Upgraded to Kafka 0.9.0.1 2016-02-23 15:26:02 -06:00
Randall Hauch
50e28d72a6 DBZ-17 Added plugin distribution ZIP that can be used for other Kafka Connector plugin modules 2016-02-23 13:23:36 -06:00
Randall Hauch
1d46e59048 DBZ-17 Minor changes to the POMs 2016-02-18 13:58:29 -06:00
Randall Hauch
0102f620a9 DBZ-13 Changed Maven build to attach JavaDoc JARs to each module
Modified the 'docs' profile to build and attach JavaDoc JARs for each module's source and test source artifacts. The profile will be automatically used when releasing.
2016-02-17 11:14:50 -06:00
Randall Hauch
dab0440612 DBZ-14 Corrected the 'alt-mysql' Maven profile so that it can be used with any of the other Maven commands. 2016-02-16 16:37:30 -06:00
Christian Posta
c730685a01 Add option to run without integration tests 2016-02-15 16:26:32 -07:00
Randall Hauch
73f3c9836b DBZ-1 Completed integration testing and debugging of the MySQL connector 2016-02-15 14:46:12 -06:00
Randall Hauch
1a59f9b07c DBZ-11 Build can skip long-running unit and integration tests 2016-02-04 15:35:27 -06:00
Randall Hauch
54b822bb72 DBZ-10 Added small utility so unit tests can run an embedded Kafka cluster within the same process.
This utility is only suitable for unit tests and therefore is defined in the test JAR of the `debezium-core` module. It should never be used for production purposes. A usage sketch follows below.
2016-02-04 15:18:27 -06:00
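A minimal sketch of how such an embedded-cluster utility might be used from a unit test, assuming a `KafkaCluster` helper with `addBrokers`, `startup`, and `shutdown` methods; the class location and method names are assumptions based on the description above, not the verified API:

```java
import io.debezium.kafka.KafkaCluster; // assumed location in debezium-core's test JAR

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class EmbeddedKafkaClusterTest {

    private KafkaCluster kafka;

    @Before
    public void beforeEach() throws Exception {
        // Start a small in-process cluster; wiping data before startup keeps
        // each test run independent (assumed behavior of the utility).
        kafka = new KafkaCluster()
                .deleteDataPriorToStartup(true)
                .addBrokers(1)
                .startup();
    }

    @After
    public void afterEach() {
        // Always stop the embedded cluster, even when the test fails.
        kafka.shutdown();
    }

    @Test
    public void shouldCreateTopic() throws Exception {
        kafka.createTopics("test-topic");
        // ... produce to and consume from "test-topic" within this process ...
    }
}
```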
Randall Hauch
37d6a5e7da DBZ-1 Expanded documentation and improved EmbeddedConnector framework
Changed the EmbeddedConnector framework to initialize all major components via configuration properties rather than through the public builder. This increases the size of the configurations, but it simplifies what embedding applications must do to obtain an EmbeddedConnector instance.

The DatabaseHistory framework was also changed to be configurable in ways similar to the OffsetBackingStore. Essentially, connectors that want to use it (like the MySqlConnector) describe it as part of the connector's configuration, allowing more flexibility in which DatabaseHistory implementation is used and how it is configured, whether running in Kafka Connect or as part of the EmbeddedConnector (see the configuration sketch below).

Added a README.md to `debezium-embedded` to provide documentation and sample code showing how to use the EmbeddedConnector.
2016-02-03 14:11:53 -06:00
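To illustrate the configuration-driven approach described above, here is a hedged sketch of what an embedding application might supply; the property names and the builder-style API shown in the comment are placeholders for illustration, not the exact `EmbeddedConnector` API:

```java
import java.util.Properties;

public class EmbeddedConnectorConfigExample {

    public static void main(String[] args) {
        // All major components are named via configuration properties instead
        // of being wired through builder methods. Property names here are
        // hypothetical placeholders.
        Properties config = new Properties();
        config.setProperty("name", "my-mysql-connector");
        config.setProperty("connector.class", "io.debezium.connector.mysql.MySqlConnector");
        // The offset store and the DatabaseHistory implementation are chosen
        // the same way: by class name plus implementation-specific properties.
        config.setProperty("offset.storage",
                "org.apache.kafka.connect.storage.FileOffsetBackingStore");
        config.setProperty("offset.storage.file.filename", "/tmp/offsets.dat");
        config.setProperty("database.history",
                "io.debezium.relational.history.FileDatabaseHistory");
        config.setProperty("database.history.file.filename", "/tmp/dbhistory.dat");

        // Hypothetical usage: the embedding application hands over only the
        // configuration and a callback for each change event, e.g.:
        //   EmbeddedConnector connector = EmbeddedConnector.create()
        //           .using(config)
        //           .notifying(record -> System.out.println(record))
        //           .build();
        //   new Thread(connector).start();
        System.out.println("Configured connector: " + config.getProperty("name"));
    }
}
```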
Randall Hauch
0e58dba9d6 DBZ-1 Renamed the connector modules and packages 2016-02-02 16:58:48 -06:00
Randall Hauch
2da5b37f76 DBZ-1 Added support for recording and recovering database schema
Adds a small framework for recording DDL operations as they are read from the log and applied to the schema state (e.g., Tables), and for recovering the accumulated schema state when the connector task restarts. Where and how the DDL operations are recorded is an abstraction called `DatabaseHistory`, with three options: in-memory (primarily for testing purposes), file-based (for embedded cases and perhaps standalone Kafka Connect uses), and Kafka (for normal Kafka Connect deployments).

The `DatabaseHistory` interface methods take several parameters that are used to construct a `SourceRecord`. The `SourceRecord` type was not used, however, since that would result in this interface (and potential extension mechanism) having a dependency on and exposing the Kafka API. Instead, the more general parameters are used to keep the API simple.

The `FileDatabaseHistory` and `MemoryDatabaseHistory` implementations are both fairly simple, but `FileDatabaseHistory` relies upon representing each recorded change as a JSON document, which is simple, easily written to files, and allows data to be recovered from the raw file. Although this was done initially using Jackson, the code to read and write the JSON documents required a lot of boilerplate. Instead, the `Document` framework developed during Debezium's very early prototype stages was brought back. It provides a very usable API for working with documents, including the ability to compare documents semantically (e.g., numeric values are compared by value rather than by representation) and with or without regard to field order.

The `KafkaDatabaseHistory` is a bit more complicated, since it uses a Kafka broker to record all database schema changes on a single topic with a single partition, and upon restart reads that dedicated topic to recover the history. This implementation also records the changes as JSON documents, keeping it simple and independent of the Kafka Connect converters. A sketch of the abstraction follows below.
2016-02-02 14:27:14 -06:00
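A rough sketch of the shape such an abstraction could take, with simplified method signatures that are assumptions for illustration rather than the actual interface:

```java
import java.util.Map;
import java.util.function.Consumer;

/**
 * Illustrative sketch of a DatabaseHistory-style abstraction: record each DDL
 * statement together with its source position, and replay the statements on
 * restart so the connector can rebuild its in-memory schema state. In-memory,
 * file-based, and Kafka-backed implementations would all satisfy this contract.
 */
public interface DatabaseHistorySketch {

    /**
     * Record one DDL statement. The source and position maps stand in for the
     * fields of a SourceRecord, so the interface need not depend on or expose
     * the Kafka API.
     */
    void record(Map<String, ?> source, Map<String, ?> position, String databaseName, String ddl);

    /**
     * Replay every previously recorded DDL statement, in order, so the caller
     * can reapply them to its schema model (e.g., a Tables object).
     */
    void recover(Consumer<String> ddlConsumer);
}
```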
Randall Hauch
4ddd4b33be Changed Docker usage on Travis-CI 2016-01-25 16:12:07 -06:00
Randall Hauch
8e6c615644 Added utilities for managing a relational schema's table definitions, with support for updating those by reading DDL 2016-01-20 08:53:29 -06:00
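As a toy illustration of the idea (deliberately not Debezium's actual API): keep a registry of table definitions and update it by applying DDL statements as they are read. The class below is self-contained and handles only trivial CREATE/DROP statements:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TableDefinitions {

    // Table name -> ordered list of column names.
    private final Map<String, List<String>> columnsByTable = new LinkedHashMap<>();

    private static final Pattern CREATE =
            Pattern.compile("CREATE TABLE (\\w+)\\s*\\(([^)]*)\\)", Pattern.CASE_INSENSITIVE);
    private static final Pattern DROP =
            Pattern.compile("DROP TABLE (\\w+)", Pattern.CASE_INSENSITIVE);

    /** Apply one DDL statement, updating the registry of table definitions. */
    public void applyDdl(String ddl) {
        Matcher create = CREATE.matcher(ddl);
        if (create.find()) {
            List<String> columns = new ArrayList<>();
            for (String col : create.group(2).split(",")) {
                columns.add(col.trim().split("\\s+")[0]); // first token is the column name
            }
            columnsByTable.put(create.group(1), columns);
            return;
        }
        Matcher drop = DROP.matcher(ddl);
        if (drop.find()) {
            columnsByTable.remove(drop.group(1));
        }
    }

    public static void main(String[] args) {
        TableDefinitions defs = new TableDefinitions();
        defs.applyDdl("CREATE TABLE customers (id INT, name VARCHAR(255))");
        defs.applyDdl("CREATE TABLE orders (id INT, customer_id INT)");
        defs.applyDdl("DROP TABLE orders");
        System.out.println(defs.columnsByTable); // {customers=[id, name]}
    }
}
```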
Randall Hauch
dffdfd8049 Added debezium-core and MySQL binary log reading tests. 2015-11-24 15:54:37 -06:00
Randall Hauch
0a99ed67cd Initial project skeleton
This initial commit defines several modules for ingesting from JDBC, and specifically from PostgreSQL and MySQL. The latter two modules define separate unit tests and integration tests; prior to running the integration tests, the build creates a Docker image with the respective database and starts a Docker container. Any *.sql or *.sh files are run against the database, allowing the modules to easily create and populate the databases used in the tests. The integration tests are then run (using the Maven Failsafe plugin), and regardless of whether there are failures the Docker container is always shut down (at least when running `mvn install`). See the modules' README files for details. A minimal integration-test sketch follows below.
2015-11-18 14:23:29 -06:00
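As a hedged illustration of the integration-test convention: by default the Failsafe plugin runs classes whose names end in `IT` during the integration-test phase, after the Docker container is up. The JDBC URL and credentials below are placeholders, not the values the real modules use:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.Test;

import static org.junit.Assert.assertTrue;

public class ConnectionIT {

    @Test
    public void shouldConnectToDockerizedDatabase() throws Exception {
        // Placeholder connection details; the real modules derive these from
        // the build's configuration for the started container.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            assertTrue(rs.next());
        }
    }
}
```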