We load the schemas of all captured tables when the connector starts.
If we process a record from a table whose schema is not available, it
means there is a bug in the initial schema loading.
Don't fail in such a case, but log a warning about it.
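
A minimal sketch of that behaviour, assuming a hypothetical schema lookup
(the class, field, and message below are illustrative, not Debezium's actual
code): the missing schema is logged as a warning and the event is skipped
instead of failing the connector.

```java
import java.util.HashMap;
import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: warn and skip when the schema for a table was not
// pre-loaded at connector start, instead of throwing an exception.
public class SchemaLookupExample {

    private static final Logger LOGGER = LoggerFactory.getLogger(SchemaLookupExample.class);

    private final Map<String, Object> schemasByTable = new HashMap<>();

    Object schemaFor(String tableId) {
        Object schema = schemasByTable.get(tableId);
        if (schema == null) {
            // Likely a bug in the initial schema loading; warn rather than fail
            LOGGER.warn("No schema found for table '{}', skipping the change event", tableId);
        }
        return schema;
    }
}
```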
When taking a snapshot, the Oracle connector converted the TIMESTAMP
WITH TIME ZONE value to GMT, while per the documentation the value
should be emitted in the time zone of the data.
The value emitted in GMT during the snapshot is temporally accurate, so
there is no data inconsistency, but the emitted format itself was
inconsistent between how the column data was emitted during a snapshot
and in a streaming event.
DBZ-5648 introduced a regression where transaction start, commit, and rollback
events were only being read from within the scope of the configured PDB that
the connector was capturing changes from, instead of from the entire Oracle
database. This can lead to situations where the offsets may not advance as
quickly in a low-traffic PDB environment, potentially causing stale offsets.
* Add an annotation and a JUnit rule for skipping a test when it is run
  against the Apicurio registry (sketched below).
* Skip the `AbstractOracleDatatypesTest#intTypes` test, which fails with
  Apicurio and is partially covered by the tests in `OracleNumberNegativeScaleIT`.
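
A hedged sketch of how such an annotation and JUnit 4 rule could be wired,
assuming a `use.apicurio` system property acts as the switch; all names here
are illustrative, not the actual Debezium test utilities.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.junit.Assume;
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

// Marker annotation for tests that cannot run against the Apicurio registry.
@Retention(RetentionPolicy.RUNTIME)
@Target({ ElementType.METHOD, ElementType.TYPE })
@interface SkipWhenApicurio {
}

// Rule that skips (rather than fails) annotated tests when Apicurio is in use.
class SkipWhenApicurioRule implements TestRule {

    @Override
    public Statement apply(Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                boolean apicurio = Boolean.getBoolean("use.apicurio"); // assumed switch
                boolean annotated = description.getAnnotation(SkipWhenApicurio.class) != null
                        || (description.getTestClass() != null
                                && description.getTestClass().isAnnotationPresent(SkipWhenApicurio.class));
                Assume.assumeFalse("Skipped when running against Apicurio registry",
                        apicurio && annotated);
                base.evaluate();
            }
        };
    }
}
```

Using `Assume` marks the test as skipped rather than failed, so the build
stays green while the known incompatibility remains documented in the test
itself.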
Oracle allows the scale to be negative for the `NUMBER` data type.
Conversion of such a number to Avro would fail, as Avro doesn't allow
negative scales. Provide a converter which converts the number to a
zero-scale number. For completeness, the converter also provides
conversion to other supported types - string and double.
N.B.: whether the conversion to Avro actually fails depends on the
implementation; e.g. the Kafka schema registry also allows negative scales.
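
The rescaling itself is plain `BigDecimal` arithmetic; this standalone
example (not the converter implementation) shows a negative-scale value
being normalised to scale 0 without losing precision, which is what the
Avro decimal logical type requires.

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class NegativeScaleExample {

    public static void main(String[] args) {
        // Unscaled value 123 with scale -2 represents 12300.
        BigDecimal negativeScale = new BigDecimal(BigInteger.valueOf(123), -2);
        System.out.println(negativeScale.scale());          // -2
        System.out.println(negativeScale.toPlainString());  // 12300

        // Rescale to 0; exact, no rounding needed when the scale increases.
        BigDecimal zeroScale = negativeScale.setScale(0);
        System.out.println(zeroScale.scale());              // 0
        System.out.println(zeroScale.toPlainString());      // 12300
    }
}
```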
Always run the tests in the same order to make failures easier to
debug. If needed, the order can be changed (e.g. to `random`) by
overriding the `runOrder` property.
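
Assuming the build passes a `runOrder` property through to the Surefire
configuration (an assumption about the POM, not verified here), the override
would look something like:

```
mvn test -DrunOrder=random
```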