This document introduces the architecture and the deployment of the cluster version of TiDB Binlog.
TiDB Binlog is a tool used to collect binlog data from TiDB and provide real-time backup and replication to downstream platforms.
TiDB Binlog has the following features:
Data replication: replicates the data in the TiDB cluster to downstream platforms.
Real-time backup and recovery: backs up the data in the TiDB cluster, and recovers it in case of cluster outage.
The TiDB Binlog architecture is as follows:
The TiDB Binlog cluster is composed of Pump and Drainer.
Pump is used to record the binlogs generated in TiDB, sort the binlogs based on the commit time of the transaction, and send binlogs to Drainer for consumption.
Drainer collects and merges binlogs from each Pump, converts the binlog to SQL or data of a specific format, and replicates the data to a specific downstream platform.
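The division of labor above can be sketched as a minimal local deployment. The binary paths, config file names, and backgrounding with `&` are illustrative assumptions, not a production procedure:

```shell
# Hypothetical sketch of starting one Pump and one Drainer locally.
# Paths and config file names are assumptions for illustration.

# Start Pump first; it registers itself in PD and begins accepting
# binlogs generated by TiDB, sorted by transaction commit time.
./bin/pump -config pump.toml &

# Start Drainer after Pump is up; it pulls binlogs from every Pump,
# merges them, and replicates to the configured downstream.
./bin/drainer -config drainer.toml &
```

In a real deployment, Pump and Drainer would typically run on separate machines and be managed by a process supervisor or a deployment tool rather than started by hand.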
binlogctl is an operations tool for TiDB Binlog with the following features:
Obtaining the current tso of the TiDB cluster
Checking the Pump/Drainer state
Pausing or closing Pump/Drainer
Handling the abnormal states of Pump/Drainer
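For illustration, the features above map onto binlogctl subcommands roughly as follows. The PD address and data directory are placeholder assumptions:

```shell
# Hypothetical binlogctl invocations; the PD URL and -data-dir path
# are placeholders, not defaults.

# Obtain the current tso of the TiDB cluster
# (writes a savepoint meta file under -data-dir):
./bin/binlogctl -pd-urls=http://127.0.0.1:2379 -cmd generate_meta -data-dir ./binlog_position

# Check the registered Pump and Drainer instances and their states:
./bin/binlogctl -pd-urls=http://127.0.0.1:2379 -cmd pumps
./bin/binlogctl -pd-urls=http://127.0.0.1:2379 -cmd drainers
```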
You need to use TiDB v2.0.8-binlog, v2.1.0-rc.5, or a later version. Earlier versions of the TiDB cluster are not compatible with the cluster version of TiDB Binlog.
Drainer supports replicating binlogs to MySQL, TiDB, Kafka, or local files. If you need to replicate binlogs to destinations that Drainer does not support, you can set Drainer to replicate the binlog to Kafka and read the data in Kafka for customized processing according to the binlog slave protocol. See Binlog Slave Client User Guide.
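A Kafka downstream is selected in the Drainer configuration. The fragment below is a sketch; the broker address, Kafka version, and topic name are placeholders, not defaults:

```toml
# Hypothetical drainer.toml fragment for replicating binlogs to Kafka.
# Broker address, version, and topic name are placeholder assumptions.
[syncer]
db-type = "kafka"

[syncer.to]
kafka-addrs = "127.0.0.1:9092"
kafka-version = "0.8.2.0"
topic-name = "tidb-binlog-topic"
```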
To use TiDB Binlog for recovering incremental data, set the config db-type to file (local files in the proto buffer format). Drainer converts the binlog to data in the specified proto buffer format and writes the data to local files. In this way, you can use Reparo to recover data incrementally.
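A minimal sketch of the corresponding Drainer configuration, assuming a placeholder output directory:

```toml
# Hypothetical drainer.toml fragment for incremental backup to local files.
# dir-path is a placeholder directory, not a default.
[syncer]
db-type = "file"

[syncer.to]
# Directory where Drainer writes binlog data in the proto buffer format;
# Reparo later reads the files in this directory to replay the data.
dir-path = "/data/tidb-binlog"
```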
Pay attention to the value of gc in the Pump configuration: binlogs that have been purged by GC can no longer be consumed by Drainer, so the retention period must cover any expected Drainer downtime.
If the downstream is MySQL, MariaDB, or another TiDB cluster, you can use sync-diff-inspector to verify the data after data replication.
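For illustration, a sync-diff-inspector run is driven by a config file along these lines. The hosts, credentials, and table list are placeholders, and the key names follow the classic (v1) config layout, which may differ in newer releases:

```toml
# Hypothetical sync-diff-inspector config sketch; all connection details
# and the table list are placeholder assumptions.
[source-db]
host = "127.0.0.1"
port = 3306
user = "root"
password = ""

[target-db]
host = "127.0.0.1"
port = 4000
user = "root"
password = ""

[[check-tables]]
schema = "test"
tables = ["orders"]
```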
Once you grasp the basics from the above, you can refer to the following documents to use TiDB Binlog: