After you have successfully deployed TiDB Binlog using Ansible, you can go to the Grafana web interface (default address: http://grafana_ip:3000, default account: admin, password: admin) to check the state of Pump and Drainer.
TiDB Binlog consists of two components: Pump and Drainer. This section shows the monitoring metrics of Pump and Drainer.
To understand the Pump monitoring metrics, check the following table:
| Pump monitoring metrics | Description |
|---|---|
| Storage Size | Records the total disk space (`capacity`) and the available disk space (`available`) |
| Metadata | Records the biggest TSO (`gc_tso`) of the binlog that each Pump node can delete, and the biggest commit TSO (`max_commit_tso`) of the saved binlog |
| Write Binlog QPS by Instance | Shows the QPS of write-binlog requests received by each Pump node |
| Write Binlog Latency | Records the latency of each Pump node writing binlog |
| Storage Write Binlog Size | Shows the size of the binlog data written by Pump |
| Storage Write Binlog Latency | Records the latency of the Pump storage module writing binlog |
| Pump Storage Error By Type | Records the number of errors encountered by Pump, counted by error type |
| Query TiKV | The number of times that Pump queries the transaction status through TiKV |
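For ad-hoc checks outside Grafana, the queries below sketch how two of these panels can be reproduced directly in Prometheus. The metric `binlog_pump_storage_storage_size_bytes` is taken from the alert rules later in this section; the `type="capacity"` label value and the `binlog_pump_rpc_duration_seconds_count` series (derived from the histogram used in the alert rules, following the usual Prometheus histogram naming) are assumptions and might differ in your deployment. Each expression is run as a separate query.

```promql
# Available and total Pump disk space, in GiB
# (the "capacity" label value is an assumption)
binlog_pump_storage_storage_size_bytes{type="available"} / 1024 / 1024 / 1024
binlog_pump_storage_storage_size_bytes{type="capacity"} / 1024 / 1024 / 1024

# Write-binlog request QPS per Pump instance over the last minute
# (assumes the _count series of the WriteBinlog RPC duration histogram)
rate(binlog_pump_rpc_duration_seconds_count{method="WriteBinlog"}[1m])
```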
To understand the Drainer monitoring metrics, check the following table:
| Drainer monitoring metrics | Description |
|---|---|
| Checkpoint TSO | Shows the biggest TSO of the binlog that Drainer has already replicated to the downstream. You can calculate the replication lag by subtracting this binlog timestamp from the current time. Note that the timestamp is allocated by the PD of the master cluster and is determined by PD's clock. |
| Pump Handle TSO | Records the biggest TSO among the binlog that Drainer obtains from each Pump node |
| Pull Binlog QPS by Pump NodeID | Shows the QPS at which Drainer obtains binlog from each Pump node |
| 95% Binlog Reach Duration By Pump | Records the delay from the time when binlog is written to Pump to the time when it is obtained by Drainer |
| Error By Type | Shows the number of errors encountered by Drainer, counted by error type |
| SQL Query Time | Records the time it takes Drainer to execute a SQL statement in the downstream |
| Drainer Event | Shows the number of events of each type, including `ddl`, `insert`, `delete`, `update`, `flush`, and `savepoint` |
| Execute Time | Records the time it takes to write binlog to the downstream syncing module |
| 95% Binlog Size | Shows the size of the binlog data that Drainer obtains from each Pump node |
| DDL Job Count | Records the number of DDL statements handled by Drainer |
| Queue Size | Records the size of the work queue in Drainer |
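If you want to compute the replication lag shown by the Checkpoint TSO panel yourself, a minimal Prometheus query is sketched below. It reuses the expression from the alert rules later in this section and assumes that `binlog_drainer_checkpoint_tso` exposes the physical part of the TSO in milliseconds.

```promql
# Approximate Drainer replication lag in seconds
# (assumes binlog_drainer_checkpoint_tso is a physical timestamp in milliseconds)
time() - binlog_drainer_checkpoint_tso / 1000
```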
Currently, TiDB Binlog alert rules are divided into the following three types based on the level of importance: emergency-level, critical-level, and warning-level.
Emergency-level alerts:

- Alert rule: `changes(binlog_pump_storage_error_count[1m]) > 0`

  Description: Pump encounters an error when writing binlog data to its local storage.

  Solution: Check whether an error is reported in the `pump_storage_error` monitoring panel, and check the Pump log to find the cause.

Critical-level alerts:

- Alert rule: `(time() - binlog_drainer_checkpoint_tso / 1000) > 3600`

  Description: Drainer replication lags behind the upstream by more than one hour.

  Solutions:
  - Check whether it is too slow to obtain data from Pump: check the `handle tso` of each Pump node to get the time of the latest message it has handled, check whether a high latency exists, and make sure the corresponding Pump node is running normally.
  - Check whether it is too slow to replicate data to the downstream, based on Drainer `event` and Drainer `execute latency` (a query sketch follows this list):
    - If the Drainer `execute time` is too large, check the network bandwidth and latency between the machine where Drainer is deployed and the machine where the target database is deployed, as well as the state of the target database.
    - If the Drainer `execute time` is not too large but the Drainer `event` is too small, increase `work count` and `batch` and retry.
  - If the two solutions above do not work, contact support@pingcap.com.
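To narrow down which side is slow, you can query the related Drainer metrics directly in Prometheus. The sketch below is illustrative only: `binlog_drainer_execute_duration_time_bucket` is taken from the alert rules in this section, while the `binlog_drainer_event` counter name is an assumption and may differ in your version.

```promql
# Events Drainer applies to the downstream per second, broken down by type
# (the metric name binlog_drainer_event is an assumption)
rate(binlog_drainer_event[1m])

# 90th-percentile time Drainer spends executing in the downstream
histogram_quantile(0.9, rate(binlog_drainer_execute_duration_time_bucket[1m]))
```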
Warning-level alerts:

- Alert rule: `histogram_quantile(0.9, rate(binlog_pump_rpc_duration_seconds_bucket{method="WriteBinlog"}[5m])) > 1`

  Description: The 90th-percentile latency of Pump handling write-binlog requests exceeds 1 second.

  Solutions:

  - Verify the disk performance pressure of the Pump node by checking the disk performance panels in the node exporter monitoring.
  - If both the `disk latency` and the `util` are low, contact support@pingcap.com.

- Alert rule: `histogram_quantile(0.9, rate(binlog_pump_storage_write_binlog_duration_time_bucket{type="batch"}[5m])) > 1`

  Description: The 90th-percentile latency of the Pump storage module writing binlog to the local disk exceeds 1 second.

  Solution: Check the state of the local disk of the Pump node and whether writes are functioning normally.

- Alert rule: `binlog_pump_storage_storage_size_bytes{type="available"} < 20 * 1024 * 1024 * 1024`

  Description: The available disk space of Pump is less than 20 GB.

  Solution: Check whether the `gc_tso` of the Pump node is normal. If not, adjust the GC time configuration of Pump or take the corresponding Pump node offline. See also the disk-space projection query after this list.

- Alert rule: `changes(binlog_drainer_checkpoint_tso[1m]) < 1`

  Description: The Drainer `checkpoint` has not been updated for one minute.

  Solution: Check whether all the Pump nodes that are not offline are running normally.

- Alert rule: `histogram_quantile(0.9, rate(binlog_drainer_execute_duration_time_bucket[1m])) > 10`

  Description: It takes Drainer more than 10 seconds (at the 90th percentile) to execute a transaction in the downstream.

  Solutions:

  - Check whether the SQL statements are executed too slowly in the downstream database.
  - Check whether the network between the machines is normal.
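For the available-disk-space alert above, it can also help to estimate how soon Pump will run out of space rather than only reacting at the 20 GB threshold. The query below is a sketch using the standard PromQL `predict_linear` function and the metric from that alert rule; the 1-hour window and 6-hour horizon are arbitrary choices, not values from this document.

```promql
# Projected available Pump disk space 6 hours from now,
# extrapolated linearly from the last hour of data
predict_linear(binlog_pump_storage_storage_size_bytes{type="available"}[1h], 6 * 3600)
```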