The {prodname} PostgreSQL connector can capture data changes from TimescaleDB.
The standard link:/documentation/reference/connectors/postgresql[PostgreSQL connector] reads the raw data from the database.
You can then use the `io.debezium.connector.postgresql.transforms.timescaledb.TimescaleDb` transformation to process the raw data, perform logical routing, and add relevant metadata.
. Install TimescaleDB as described in the link:https://docs.timescale.com/[TimescaleDB documentation].
. Install the {prodname} PostgreSQL connector according to the instructions in the link:/documentation/reference/install[{prodname} installation guide].
. Configure TimescaleDB, and deploy the connector.
Because the SMT cannot access the database configuration at the connector level, you must explicitly define configuration metadata for the transformation.
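For example, the transformation configuration might resemble the following sketch, in which the connection values (hostname, port, credentials, and database name) are placeholders that must be replaced with values that match your environment:

[source,properties]
----
transforms=timescaledb
transforms.timescaledb.type=io.debezium.connector.postgresql.transforms.timescaledb.TimescaleDb
transforms.timescaledb.database.hostname=timescaledb
transforms.timescaledb.database.port=5432
transforms.timescaledb.database.user=postgres
transforms.timescaledb.database.password=postgres
transforms.timescaledb.database.dbname=postgres
----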
* The transformation has access to TimescaleDB metadata to obtain chunk/hypertable mapping.
* The transformation reroutes the captured events from their chunk-specific topics to a single logical topic that is named according to the following pattern: `_<prefix>_._<hypertable-schema-name>_._<hypertable-name>_`
* The transformation adds the following headers to the event:
`__debezium_timescaledb_chunk_table`:: The name of the physical table that stores the event data.
`__debezium_timescaledb_chunk_schema`:: The name of the schema that the physical table belongs to.
The aggregates can be recalculated either automatically or manually.
After an aggregate is recalculated, the new values are stored in the hypertable, from which they can be captured and streamed.
Data from the aggregates is streamed to different topics, based on the chunk in which it is stored.
The TimescaleDB transformation reassembles data that was streamed to different topics and routes it to a single topic.
* The transformation has access to TimescaleDB metadata to obtain mappings between chunks and hypertables, and between hypertables and aggregates.
* The transformation reroutes the captured events from their chunk-specific topics to a single logical topic that is named according to the following pattern: `_<prefix>_._<aggregate-schema-name>_._<aggregate-name>_`
* The transformation adds the following headers to the event:
`__debezium_timescaledb_hypertable_table`:: The name of the hypertable that stores the continuous aggregate.
`__debezium_timescaledb_hypertable_schema`:: The name of the schema that the hypertable belongs to.
`__debezium_timescaledb_chunk_table`:: The name of the physical table that stores the continuous aggregate.
`__debezium_timescaledb_chunk_schema`:: The name of the schema that the physical table belongs to.
.Example: Streaming data from a continuous aggregate
The following example shows a SQL command for creating a continuous aggregate `conditions_summary` in the `public` schema.
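A minimal sketch of such a command, assuming a hypothetical source hypertable `conditions` with `time` and `temperature` columns, might look like this:

[source,sql]
----
CREATE MATERIALIZED VIEW public.conditions_summary
WITH (timescaledb.continuous) AS
SELECT time_bucket(INTERVAL '1 hour', time) AS bucket,
       AVG(temperature) AS avg_temperature
FROM conditions
GROUP BY bucket;
----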
{prodname} uses replication slots to capture changes from TimescaleDB and PostgreSQL.
Replication slots store data in multiple message formats.
Typically, it is best to configure {prodname} to use the link:/reference/connectors/postgresql.html#postgresql-pgoutput[pgoutput] decoder, the default decoder for TimescaleDB instances, to read from the slot.
The following example shows the configuration for setting up a PostgreSQL connector to connect to a TimescaleDB server with the logical name `dbserver1` on port 5432 at 192.168.99.100.
<8> The topic prefix for the TimescaleDB server or cluster.
This prefix forms a namespace, and is used in the names of all Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schemas, when the Avro converter is used.
<9> Indicates use of the `pgoutput` logical decoding plug-in.
<10> A list of all schemas that contain TimescaleDB physical tables.
<11> Enables the SMT to process raw TimescaleDB events.
<12> Enables the SMT to process raw TimescaleDB events.
<13> Provides TimescaleDB connection information for the SMT.
The values must match the values of items `3` to `7`.