Loved by data and engineering teams

Fully managed schema evolution

Automatic column detection, mapping and creation

Option to skip or drop deleted columns

Choose between soft or hard row deletes, or skip deletes entirely
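For illustration, here is a minimal sketch (not the product's actual code) of how a destination writer might combine automatic column creation with the three delete modes. It assumes a Postgres destination reached via psycopg2, a Debezium-style change event with "before"/"after" images, an "id" primary key, and a hypothetical __deleted marker column for soft deletes.

```python
import psycopg2

DELETE_MODE = "soft"  # "soft", "hard", or "skip" -- illustrative setting only

def apply_event(conn, table, event, known_columns):
    # For deletes the row image lives in "before"; otherwise in "after"
    # (assumed Debezium-style event shape).
    row = event["after"] or event["before"]
    with conn.cursor() as cur:
        # Schema evolution: create any columns the destination is missing.
        for col in row:
            if col not in known_columns:
                cur.execute(f"ALTER TABLE {table} ADD COLUMN {col} TEXT")
                known_columns.add(col)

        if event["op"] == "d":  # the source row was deleted
            if DELETE_MODE == "hard":
                cur.execute(f"DELETE FROM {table} WHERE id = %s", (row["id"],))
            elif DELETE_MODE == "soft":
                # Mark the row as deleted instead of removing it
                # (__deleted is a hypothetical marker column).
                cur.execute(
                    f"UPDATE {table} SET __deleted = TRUE WHERE id = %s",
                    (row["id"],),
                )
            # DELETE_MODE == "skip": ignore deletes entirely
        else:  # insert or update: idempotent upsert keyed on id
            cols = ", ".join(row)
            placeholders = ", ".join(["%s"] * len(row))
            updates = ", ".join(f"{c} = EXCLUDED.{c}" for c in row)
            cur.execute(
                f"INSERT INTO {table} ({cols}) VALUES ({placeholders}) "
                f"ON CONFLICT (id) DO UPDATE SET {updates}",
                list(row.values()),
            )
    conn.commit()
```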

Worry-free, reliable connectors

No more missing data from pipelines that silently fail

We hard-fail and continuously retry on every data ingestion error to guarantee data consistency

We leverage Kafka for ordering and use Kafka and SQL transactions to ensure idempotency and atomicity
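As a sketch of that pattern (placeholder topic, group, and table names, not the product's configuration): offsets are only committed after the SQL transaction commits, so a failure anywhere means the event is redelivered, and the idempotent upsert makes the retry safe.

```python
import json
import psycopg2
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "cdc-writer",
    "enable.auto.commit": False,   # offsets are committed only by us, below
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["cdc.public.orders"])
conn = psycopg2.connect("dbname=analytics")

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    try:
        with conn:  # one SQL transaction per event (commits or rolls back)
            with conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO events_raw (id, payload) VALUES (%s, %s) "
                    "ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload",
                    (event["id"], json.dumps(event)),
                )
        # Only after the SQL transaction commits do we commit the Kafka offset.
        consumer.commit(message=msg, asynchronous=False)
    except Exception:
        # Hard fail: the offset is never committed, so the event is
        # redelivered and retried; the upsert keeps the retry idempotent.
        raise
```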

Low impact on source databases

Log-based CDC replication is the least intrusive and most efficient way to replicate data from databases

Historical backfills do not require table locking and can be performed against replicas

Minimal operation log growth, as changes are continuously streamed to Kafka clusters, reducing the risk of replication slot overflow

Kafka serves as our external buffer (instead of your operation log) to absorb backpressure from data pipeline errors
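For context on "replication slot overflow": on a Postgres source, each logical replication slot pins write-ahead log (WAL) until its changes are consumed. A minimal sketch of checking how much WAL a slot is retaining (connection string is a placeholder; the query uses standard Postgres catalog functions):

```python
import psycopg2

conn = psycopg2.connect("dbname=source_db")
with conn.cursor() as cur:
    # How far behind each replication slot is, i.e. how much WAL it retains.
    cur.execute(
        """
        SELECT slot_name,
               pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn))
                 AS retained_wal
        FROM pg_replication_slots
        """
    )
    for slot_name, retained_wal in cur.fetchall():
        # With changes streamed to Kafka continuously, this number stays small;
        # steady growth signals a stalled consumer and risk of slot overflow.
        print(f"{slot_name}: {retained_wal} of WAL retained")
```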

Steps to move data

STEP 1

Extract changes from source systems

STEP 2

Publish data to Kafka

STEP 3

Consume from Kafka and stream data to a destination staging table

STEP 4

Align the staging and destination table schemas and merge changes

STEP 5

Delete staging table and commit offset in Kafka
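A minimal sketch of steps 3 through 5, assuming a Postgres 15+ destination (or any warehouse with MERGE support), placeholder table and column names, and the confluent_kafka consumer pattern shown earlier. The staging table is cloned from the destination so their schemas already match; the merge, staging-table cleanup, and offset commit mirror the steps above.

```python
import psycopg2

def flush_batch(conn, consumer, rows):
    with conn:  # one transaction covering steps 3-5a
        with conn.cursor() as cur:
            # Step 3: stream the consumed micro-batch into a staging table.
            cur.execute(
                "CREATE TEMP TABLE orders_staging (LIKE orders INCLUDING ALL)"
            )
            cur.executemany(
                "INSERT INTO orders_staging (id, status, amount) "
                "VALUES (%s, %s, %s)",
                rows,
            )
            # Step 4: schemas are aligned (staging is cloned from the
            # destination); merge the changes in.
            cur.execute(
                """
                MERGE INTO orders AS d
                USING orders_staging AS s ON d.id = s.id
                WHEN MATCHED THEN
                  UPDATE SET status = s.status, amount = s.amount
                WHEN NOT MATCHED THEN
                  INSERT (id, status, amount) VALUES (s.id, s.status, s.amount)
                """
            )
            # Step 5a: delete the staging table.
            cur.execute("DROP TABLE orders_staging")
    # Step 5b: only after the merge commits, acknowledge the batch in Kafka.
    consumer.commit(asynchronous=False)
```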

Start your 14-day free trial

Say goodbye to data latency