Both `tikv-importer` and `tidb-lightning` support metrics collection via Prometheus. This document introduces the monitor configuration and monitoring metrics of TiDB Lightning.
If TiDB Lightning is installed using TiDB Ansible, add the servers to the `[monitored_servers]` section in the `inventory.ini` file. Then the Prometheus server can collect their metrics.
`tikv-importer` v2.1 uses Pushgateway to deliver metrics. Configure `tikv-importer.toml` to recognize the Pushgateway with the following settings:
```toml
[metric]
# The Prometheus client push job name.
job = "tikv-importer"
# The Prometheus client push interval.
interval = "15s"
# The Prometheus Pushgateway address.
address = ""
```
The metrics of `tidb-lightning` can be gathered directly by Prometheus as long as it is discovered. You can set the metrics port in `tidb-lightning.toml`:
```toml
[lightning]
# HTTP port for debugging and Prometheus metrics pulling (0 to disable)
pprof-port = 8289
...
```
You need to configure Prometheus to make it discover the `tidb-lightning` server. For instance, you can directly add the server address to the `scrape_configs` section:
```yaml
...
scrape_configs:
  - job_name: 'tidb-lightning'
    static_configs:
      - targets: ['192.168.20.10:8289']
```
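Once Prometheus is scraping this endpoint, the same data can also be read directly from `/metrics` on the `pprof-port` configured above. The sketch below parses the Prometheus text exposition format that such an endpoint serves; the metric name and label values in the hard-coded sample payload are illustrative, not captured from a real run:

```python
# Minimal parser for the Prometheus text exposition format, as served on
# tidb-lightning's pprof-port (e.g. http://192.168.20.10:8289/metrics).
# The sample payload below is illustrative, not from a real import run.

def parse_metrics(text):
    """Return {metric_name_with_labels: float_value}, skipping comments."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comment lines and blank lines
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

sample = """\
# HELP lightning_tables Number of tables processed
# TYPE lightning_tables counter
lightning_tables{state="completed",result="success"} 3
lightning_tables{state="completed",result="failure"} 0
"""

parsed = parse_metrics(sample)
print(parsed['lightning_tables{state="completed",result="success"}'])  # 3.0
```

In a live setup, the `sample` string would instead come from an HTTP GET against the metrics endpoint.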
Grafana is a web interface to visualize Prometheus metrics as dashboards.
If TiDB Lightning is installed using TiDB Ansible, its dashboard is already installed. Otherwise, the dashboard JSON can be imported from https://raw.githubusercontent.com/pingcap/tidb-ansible/master/scripts/lightning.json.
| Panel | Series | Description |
|:------|:-------|:------------|
| Import speed | write from lightning | Speed of sending KVs from TiDB Lightning to TiKV Importer, which depends on each table's complexity |
| Import speed | upload to tikv | Total upload speed from TiKV Importer to all TiKV replicas |
| Chunk process duration | | Average time needed to completely encode one single data file |
Sometimes the import speed will drop to zero, allowing other parts to catch up. This is normal.
| Panel | Description |
|:------|:------------|
| Import progress | Percentage of data files encoded so far |
| Checksum progress | Percentage of tables verified to have been imported successfully |
| Failures | Number of failed tables and their point of failure, normally empty |
| Panel | Description |
|:------|:------------|
| Memory usage | Amount of memory occupied by each service |
| Number of Lightning Goroutines | Number of running goroutines used by TiDB Lightning |
| CPU% | Number of logical CPU cores utilized by each service |
| Panel | Series | Description |
|:------|:-------|:------------|
| Idle workers | io | Number of unused `io-concurrency` workers |
| Idle workers | closed-engine | Number of engines which have been closed but not yet cleaned up, normally close to index + table-concurrency (default 8). Close to 0 means TiDB Lightning is faster than TiKV Importer, which will cause TiDB Lightning to stall |
| Idle workers | table | Number of unused `table-concurrency` workers |
| Idle workers | index | Number of unused `index-concurrency` workers |
| Idle workers | region | Number of unused `region-concurrency` workers |
| External resources | KV Encoder | Counts active KV encoders, normally the same as the `region-concurrency` setting |
| External resources | Importer Engines | Counts opened engine files, should never exceed the `max-open-engines` setting |
| Panel | Series | Description |
|:------|:-------|:------------|
| Chunk parser read block duration | read block | Time taken to read one block of bytes to prepare for parsing |
| Chunk parser read block duration | apply worker | Time elapsed waiting for an idle `io-concurrency` worker |
| SQL process duration | row encode | Time taken to parse and encode a single row |
| SQL process duration | block deliver | Time taken to send a block of KV pairs to TiKV Importer |
If any of these durations is too high, it indicates that the disk used by TiDB Lightning is too slow or busy with I/O.
| Panel | Series | Description |
|:------|:-------|:------------|
| SQL process rate | data deliver rate | Speed of delivering data KV pairs to TiKV Importer |
| SQL process rate | index deliver rate | Speed of delivering index KV pairs to TiKV Importer |
| SQL process rate | total deliver rate | The sum of the two rates above |
| Total bytes | parser read size | Number of bytes being read by TiDB Lightning |
| Total bytes | data deliver size | Number of bytes of data KV pairs already delivered to TiKV Importer |
| Total bytes | index deliver size | Number of bytes of index KV pairs already delivered to TiKV Importer |
| Total bytes | storage_size / 3 | Total size occupied by the TiKV cluster, divided by 3 (the default number of replicas) |
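The `storage_size / 3` figure is simple arithmetic: since TiKV keeps 3 replicas of every region by default, the logical data size is the physical storage usage divided by the replica count. A minimal sketch (substitute the cluster's actual `max-replicas` value if it differs from the default):

```python
def logical_size(storage_size_bytes, replicas=3):
    """Estimate the logical data size from total TiKV storage usage.

    `replicas` must match the cluster's replica count (3 by default).
    """
    return storage_size_bytes / replicas

# 600 GiB of physical storage across the cluster -> 200 GiB of logical data
print(logical_size(600 * 2**30) / 2**30)  # 200.0
```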
| Panel | Series | Description |
|:------|:-------|:------------|
| Delivery duration | Range delivery | Time taken to upload a range of KV pairs to the TiKV cluster |
| Delivery duration | SST delivery | Time taken to upload an SST file to the TiKV cluster |
| SST process duration | Split SST | Time taken to split the stream of KV pairs into SST files |
| SST process duration | SST upload | Time taken to upload an SST file |
| SST process duration | SST ingest | Time taken to ingest an uploaded SST file |
| SST process duration | SST size | File size of an SST file |
This section explains the monitoring metrics of `tikv-importer` and `tidb-lightning`, in case you need to monitor metrics not covered by the default Grafana dashboard.
Metrics provided by `tikv-importer` are listed under the namespace `tikv_import_*`.
Bucketed histogram for the duration of an RPC action. Labels:
Bucketed histogram for the uncompressed size of a block of KV pairs received from Lightning.
Bucketed histogram for the time needed to receive a block of KV pairs from Lightning.
Bucketed histogram for the compressed size of a chunk of SST file uploaded to TiKV.
Bucketed histogram for the time needed to upload a chunk of SST file to TiKV.
Bucketed histogram for the time needed to deliver a range of KV pairs into a `dispatch-job`.
Bucketed histogram for the time needed to split off a range from the engine file into a single SST file.
Bucketed histogram for the time needed to deliver an SST file from a `dispatch-job` to an `ImportSSTJob`.
Bucketed histogram for the time needed to receive an SST file from a `dispatch-job` in an `ImportSSTJob`.
Bucketed histogram for the time needed to upload an SST file from an `ImportSSTJob` to a TiKV node.
Bucketed histogram for the compressed size of the SST file uploaded to a TiKV node.
Bucketed histogram for the time needed to ingest an SST file into TiKV.
Indicates the running phase. Possible values are 1, meaning running inside the phase, and 0, meaning outside the phase. Labels:
Counts the number of times a TiKV node is found to have insufficient space when uploading SST files. Labels:
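All of the bucketed histograms above are exported as Prometheus histograms, so the average over a run can be recovered from the `_sum` and `_count` series that every histogram provides. A minimal sketch (the metric name in the comment is an assumed example following the namespace convention, not a confirmed series name):

```python
def histogram_average(sum_value, count_value):
    """Average observation derived from a Prometheus histogram's
    <name>_sum and <name>_count series; returns 0.0 when nothing
    has been observed yet (to avoid dividing by zero)."""
    return sum_value / count_value if count_value else 0.0

# e.g. a hypothetical tikv_import_..._duration_sum of 12.5 s
#      over a ..._duration_count of 50 observations
print(histogram_average(12.5, 50))  # 0.25 (seconds per observation)
```

In PromQL the equivalent over a time window is `rate(<name>_sum[5m]) / rate(<name>_count[5m])`.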
Metrics provided by `tidb-lightning` are listed under the namespace `lightning_*`.
Counts open and closed engine files. Labels:
Counts idle workers. Values should be less than the `*-concurrency` settings and are typically zero. Labels:
Counts open and closed KV encoders. KV encoders are in-memory TiDB instances that convert SQL `INSERT` statements into KV pairs. The net values need to be bounded in a healthy situation. Labels:
Counts the number of tables processed and their status. Labels:
Counts the number of engine files processed and their status. Labels:
Counts the number of chunks processed and their status. Labels:
Bucketed histogram for the time needed to import a table.
Bucketed histogram for the size of a single SQL row.
Bucketed histogram for the time needed to encode a single SQL row into KV pairs.
Bucketed histogram for the time needed to deliver a set of KV pairs corresponding to one single SQL row.
Bucketed histogram for the time needed to deliver a block of KV pairs to Importer.
Bucketed histogram for the uncompressed size of a block of KV pairs delivered to Importer.
Bucketed histogram for the time needed by the data file parser to read a block.
Bucketed histogram for the time needed to compute the checksum of a table.
Bucketed histogram for the time needed to acquire an idle worker. Labels:
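Beyond averages, a latency quantile can be estimated from the cumulative `_bucket` series that these bucketed histograms export. The sketch below reproduces the linear interpolation that PromQL's `histogram_quantile()` performs, using made-up bucket data (it assumes observations are nonnegative, so the first bucket's lower bound is 0):

```python
def histogram_quantile(q, buckets):
    """Estimate the q-quantile from cumulative Prometheus histogram
    buckets: a list of (upper_bound, cumulative_count) pairs sorted by
    bound, mirroring PromQL's histogram_quantile() with linear
    interpolation inside the matching bucket."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            if count == prev_count:
                return bound  # empty bucket, no interpolation possible
            # interpolate between the bucket's lower and upper bound
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# Made-up data: 70 observations <= 0.1 s, 90 <= 0.5 s, 100 <= 1.0 s
buckets = [(0.1, 70), (0.5, 90), (1.0, 100)]
print(histogram_quantile(0.5, buckets))  # roughly 0.071 s (interpolated median)
print(histogram_quantile(0.9, buckets))  # 0.5 s
```

In PromQL the same estimate over a time window is `histogram_quantile(0.9, rate(<name>_bucket[5m]))`.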