As an open source distributed NewSQL database with high performance, TiDB can be deployed and runs well on Intel architecture servers, ARM architecture servers, and major virtualization environments. TiDB supports most major hardware networks and Linux operating systems.
| Linux OS Platform | Version |
| :-- | :-- |
| Red Hat Enterprise Linux | 7.3 or later |
| CentOS | 7.3 or later |
| Oracle Enterprise Linux | 7.3 or later |
| Ubuntu LTS | 16.04 or later |
- For Oracle Enterprise Linux, TiDB supports the Red Hat Compatible Kernel (RHCK) and does not support the Unbreakable Enterprise Kernel (UEK) provided by Oracle Enterprise Linux.
- A large number of TiDB tests have been run on the CentOS 7.3 system, and many best practices in our community deploy TiDB on this operating system. Therefore, it is recommended to deploy TiDB on CentOS 7.3 or later.
- The Linux operating systems above are supported for deployment and operation both on physical servers and in major virtualized environments such as VMware, KVM, and XEN.
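Because Oracle Enterprise Linux ships both kernels, it can help to confirm which one is running before deploying. A minimal sketch (the `is_uek` helper and the sample release strings are illustrative, not part of TiDB):

```shell
#!/bin/sh
# is_uek "<kernel release>": succeeds when the release string looks like an
# Oracle Unbreakable Enterprise Kernel (UEK). RHCK releases look like
# 3.10.0-1160.el7.x86_64, while UEK releases carry "uek" in the string,
# e.g. 4.14.35-2047.500.9.el7uek.x86_64.
is_uek() {
  case "$1" in
    *uek*) return 0 ;;
    *) return 1 ;;
  esac
}

# Check the running kernel; on Oracle Enterprise Linux, TiDB supports only RHCK.
if is_uek "$(uname -r)"; then
  echo "UEK kernel detected: not supported by TiDB"
else
  echo "non-UEK kernel: OK for TiDB"
fi
```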
| Software | Version |
| :-- | :-- |
| sshpass | 1.06 or later |
| TiUP | 0.6.2 or later |
TiUP must be deployed on the control machine to operate and manage TiDB clusters.
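As a sketch of the one-time setup, TiUP can be installed on the control machine with the install script published by PingCAP (verify the mirror URL against the current TiDB documentation before running):

```shell
# Download and install TiUP on the control machine.
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

# Reload the shell profile so that tiup is on PATH, then confirm the
# installed version meets the 0.6.2 minimum from the table above.
source ~/.bash_profile
tiup --version
```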
| Software | Version |
| :-- | :-- |
| sshpass | 1.06 or later |
| numa | 2.0.12 or later |
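Before deployment it is worth confirming that the installed tools meet these version minimums. A hedged sketch that compares versions with `sort -V` (the `version_ge` helper is illustrative; the commented sshpass check assumes its usual `-V` output format):

```shell
#!/bin/sh
# version_ge A B: succeeds when version A >= version B, comparing with sort -V.
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Example checks against the minimums in the tables above:
version_ge "1.06" "1.06" && echo "sshpass 1.06 meets the 1.06 minimum"
version_ge "2.0.12" "2.0.12" && echo "numa 2.0.12 meets the 2.0.12 minimum"

# Against a live system you might compare the installed version, e.g.:
#   version_ge "$(sshpass -V | head -n 1 | awk '{print $2}')" "1.06"
```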
You can deploy and run TiDB on 64-bit generic hardware server platforms with the Intel x86-64 architecture or the ARM architecture. The requirements and recommendations for server hardware configuration (excluding the resources occupied by the operating system itself) in development, test, and production environments are as follows:
| Component | CPU | Memory | Local Storage | Network | Instance Number (Minimum Requirement) |
| :-- | :-- | :-- | :-- | :-- | :-- |
| TiDB | 8 core+ | 16 GB+ | No special requirements | Gigabit network card | 1 (can be deployed on the same machine with PD) |
| PD | 4 core+ | 8 GB+ | SAS, 200 GB+ | Gigabit network card | 1 (can be deployed on the same machine with TiDB) |
| TiKV | 8 core+ | 32 GB+ | SAS, 200 GB+ | Gigabit network card | 3 |
| Total Server Number | | | | | 4 |
- In the test environment, the TiDB and PD instances can be deployed on the same server.
- For performance-related tests, do not use low-performance storage and network hardware, to guarantee the correctness of the test results.
- For the TiKV server, it is recommended to use NVMe SSDs to ensure faster reads and writes.
- If you only want to test and verify the features, follow the Quick Start Guide for TiDB to deploy TiDB on a single machine.
- The TiDB server uses the disk to store server logs, so there are no special requirements for the disk type and capacity in the test environment.
| Component | CPU | Memory | Hard Disk Type | Network | Instance Number (Minimum Requirement) |
| :-- | :-- | :-- | :-- | :-- | :-- |
| TiDB | 16 core+ | 32 GB+ | SAS | 10 Gigabit network card (2 preferred) | 2 |
| PD | 4 core+ | 8 GB+ | SSD | 10 Gigabit network card (2 preferred) | 3 |
| TiKV | 16 core+ | 32 GB+ | SSD | 10 Gigabit network card (2 preferred) | 3 |
| Monitor | 8 core+ | 16 GB+ | SAS | Gigabit network card | 1 |
| Total Server Number | | | | | 9 |
- In the production environment, the TiDB and PD instances can be deployed on the same server. If you have a higher requirement for performance and reliability, try to deploy them separately.
- It is strongly recommended to use a higher hardware configuration in the production environment.
- It is recommended to keep the size of TiKV hard disk within 2 TB if you are using PCIe SSDs or within 1.5 TB if you are using regular SSDs.
TiDB requires the following network port configuration to run. Based on how TiDB is deployed in your actual environment, the administrator can open the relevant ports on the network side and the host side.
| Component | Default Port | Description |
| :-- | :-- | :-- |
| TiDB | 4000 | the communication port for the application and DBA tools |
| TiDB | 10080 | the communication port to report TiDB status |
| TiKV | 20160 | the TiKV communication port |
| PD | 2379 | the communication port between TiDB and PD |
| PD | 2380 | the inter-node communication port within the PD cluster |
| TiFlash | 9000 | the TiFlash TCP service port |
| TiFlash | 8123 | the TiFlash HTTP service port |
| TiFlash | 3930 | the TiFlash RAFT and Coprocessor service port |
| TiFlash | 20170 | the TiFlash Proxy service port |
| TiFlash | 20292 | the port for Prometheus to pull TiFlash Proxy metrics |
| TiFlash | 8234 | the port for Prometheus to pull TiFlash metrics |
| Pump | 8250 | the Pump communication port |
| Drainer | 8249 | the Drainer communication port |
| TiCDC | 8300 | the TiCDC communication port |
| Prometheus | 9090 | the communication port for the Prometheus service |
| Node_exporter | 9100 | the communication port to report the system information of every TiDB cluster node |
| Blackbox_exporter | 9115 | the Blackbox_exporter communication port, used to monitor the ports in the TiDB cluster |
| Grafana | 3000 | the port for the external Web monitoring service and client (browser) access |
| Alertmanager | 9093 | the port for the alert web service |
| Alertmanager | 9094 | the alert communication port |
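On hosts running firewalld, the ports above can be opened per machine. The sketch below only prints the `firewall-cmd` invocations so they can be reviewed before execution; the port list is an assumed minimal TiDB + PD + TiKV topology, so adjust it to the components you actually deploy:

```shell
#!/bin/sh
# print_port_rules PORT...: echo a firewall-cmd invocation for each TCP port,
# plus the final reload, without executing anything.
print_port_rules() {
  for p in "$@"; do
    echo "firewall-cmd --permanent --zone=public --add-port=${p}/tcp"
  done
  echo "firewall-cmd --reload"
}

# Ports from the table above for TiDB (4000, 10080), TiKV (20160),
# and PD (2379, 2380):
print_port_rules 4000 10080 20160 2379 2380
```

Once the list is confirmed, the printed commands can be run as root on each host.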