This document is targeted for users who want to upgrade from TiDB 2.0, 2.1, or 3.0 to TiDB 3.1. TiDB 3.1 is compatible with TiDB Binlog of the cluster version.
Check the ongoing DDL operations in the cluster (for example, by running the ADMIN SHOW DDL statement), especially time-consuming operations such as Add Index. If there are any, wait for the DDL operations to finish before you upgrade. Parallel DDL is enabled in TiDB 2.0.1 and later versions. Therefore, for clusters with a TiDB version earlier than 2.0.1, rolling update to TiDB 3.1 is not supported. To upgrade, you can choose either of the following two options:
- Stop the cluster and upgrade to 3.1 directly.
- Roll update to 2.0.1 or a later 2.0 version first, and then roll update to 3.1.
Note:
Do not execute any DDL statements during the upgrading process; otherwise, an undefined behavior error might occur.
Note:
If you have installed Ansible and its dependencies, you can skip this step.
TiDB Ansible release-3.1 depends on Ansible 2.4.2 ~ 2.7.11 (2.4.2 ≦ ansible ≦ 2.7.11, Ansible 2.7.11 recommended) and the Python modules jinja2 ≧ 2.9.6 and jmespath ≧ 0.9.0.
To make it easy to manage dependencies, use pip to install Ansible and its dependencies. For details, see Install Ansible and its dependencies on the Control Machine. For offline environments, see Install Ansible and its dependencies offline on the Control Machine.
After the installation is finished, you can view the version information using the following commands:

$ ansible --version
ansible 2.7.11
$ pip show jinja2
Name: Jinja2
Version: 2.10
$ pip show jmespath
Name: jmespath
Version: 0.9.0
Note:
- You must install Ansible and its dependencies following the above procedures.
- Make sure that the Jinja2 version is correct, otherwise an error occurs when you start Grafana.
- Make sure that the jmespath version is correct, otherwise an error occurs when you perform a rolling update to TiKV.
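The version constraints above (2.4.2 ≦ ansible ≦ 2.7.11, jinja2 ≧ 2.9.6, jmespath ≧ 0.9.0) can be sanity-checked mechanically. The sketch below is illustrative only: the helper names are hypothetical and a naive dotted-number comparison is assumed.

```python
# Naive version-range check for the dependency constraints above.
# parse() and in_range() are hypothetical helper names.

def parse(version):
    """Turn a dotted version string like '2.7.11' into a tuple (2, 7, 11)."""
    return tuple(int(part) for part in version.split("."))

def in_range(version, low, high=None):
    """Check that low <= version <= high (upper bound optional)."""
    v = parse(version)
    if v < parse(low):
        return False
    return high is None or v <= parse(high)

print(in_range("2.7.11", "2.4.2", "2.7.11"))  # ansible  -> True
print(in_range("2.10", "2.9.6"))              # jinja2   -> True
print(in_range("0.9.0", "0.9.0"))             # jmespath -> True
```

Running the installed versions shown by ansible --version and pip show through such a check catches the Grafana and TiKV rolling-update failures mentioned in the note before they happen.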
Log in to the Control Machine using the tidb user account and enter the /home/tidb directory.
Back up the tidb-ansible folder of your TiDB 2.0, 2.1, or 3.0 version using the following command:
$ mv tidb-ansible tidb-ansible-bak
Download tidb-ansible with the tag corresponding to TiDB 3.1. For more details, see Download TiDB Ansible to the Control Machine. The default folder name is tidb-ansible.
$ git clone -b $tag https://github.com/pingcap/tidb-ansible.git
Edit the inventory.ini file and the configuration file

Log in to the Control Machine using the tidb user account and enter the /home/tidb/tidb-ansible directory.

Edit the inventory.ini file

Edit the inventory.ini file. For IP information, see the /home/tidb/tidb-ansible-bak/inventory.ini backup file.
Note:
Pay special attention to the following variable configurations. For the meaning of each variable, see Description of other variables.
Make sure that ansible_user is a normal user account. For unified privilege management, remote installation using the root user is no longer supported. The default configuration uses the tidb user as the SSH remote user and the program running user.
## Connection
# ssh via normal user
ansible_user = tidb
You can refer to How to configure SSH mutual trust and sudo rules on the Control Machine to automatically configure the mutual trust among hosts.
Keep the process_supervision variable consistent with that in the previous version. It is recommended to use systemd by default.
# process supervision, [systemd, supervise]
process_supervision = systemd
If you need to modify this variable, see How to modify the supervision method of a process from supervise to systemd. Before you upgrade, first use the /home/tidb/tidb-ansible-bak/ backup branch to modify the supervision method of the process.
If you have previously customized the configuration file of TiDB cluster components, refer to the backup file to modify the corresponding configuration file in the /home/tidb/tidb-ansible/conf directory.
Note the following parameter changes:
In the TiKV configuration, end-point-concurrency is changed to three parameters: high-concurrency, normal-concurrency, and low-concurrency.
readpool:
  coprocessor:
    # Notice: if CPU_NUM > 8, default thread pool size for coprocessors
    # will be set to CPU_NUM * 0.8.
    # high-concurrency: 8
    # normal-concurrency: 8
    # low-concurrency: 8
Note:
For the cluster topology of multiple TiKV instances (processes) on a single machine, you need to modify the three parameters above.
Recommended configuration: the number of TiKV instances * the parameter value = the number of CPU cores * 0.8.
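The recommendation above reduces to: parameter value = number of CPU cores * 0.8 / number of TiKV instances. A minimal sketch of that arithmetic (the helper name is hypothetical):

```python
# Sizing rule from the note above:
# number of TiKV instances * parameter value = CPU cores * 0.8,
# so each of high/normal/low-concurrency gets cores * 0.8 / instances.
# coprocessor_concurrency is a hypothetical name for illustration.

def coprocessor_concurrency(cpu_cores, tikv_instances):
    return max(1, int(cpu_cores * 0.8 / tikv_instances))

# Example: a 16-core machine running 2 TiKV instances.
print(coprocessor_concurrency(16, 2))  # -> 6
```

With that result, each of the two instances would set high-concurrency, normal-concurrency, and low-concurrency to 6.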
In the TiKV configuration, the block-cache-size parameter of different CFs is changed to block-cache.
storage:
  block-cache:
    capacity: "1GB"
Note:
For the cluster topology of multiple TiKV instances (processes) on a single machine, you need to modify the capacity parameter.
Recommended configuration: capacity = MEM_TOTAL * 0.5 / the number of TiKV instances.
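The capacity rule above (half of total memory, split across the instances on the machine) can be sketched as follows; the helper name is hypothetical:

```python
# block-cache capacity per TiKV instance, per the note above:
# capacity = MEM_TOTAL * 0.5 / number of TiKV instances.
# block_cache_capacity_gb is a hypothetical name for illustration.

def block_cache_capacity_gb(mem_total_gb, tikv_instances):
    return mem_total_gb * 0.5 / tikv_instances

# Example: a machine with 64 GB of memory running 2 TiKV instances.
print(block_cache_capacity_gb(64, 2))  # -> 16.0, i.e. capacity: "16GB"
```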
In the TiKV configuration, you need to configure the tikv_status_port port for the multiple-instances-on-a-single-machine scenario. Before you configure it, check whether a port conflict exists.
[tikv_servers]
TiKV1-1 ansible_host=172.16.10.4 deploy_dir=/data1/deploy tikv_port=20171 tikv_status_port=20181 labels="host=tikv1"
TiKV1-2 ansible_host=172.16.10.4 deploy_dir=/data2/deploy tikv_port=20172 tikv_status_port=20182 labels="host=tikv1"
TiKV2-1 ansible_host=172.16.10.5 deploy_dir=/data1/deploy tikv_port=20171 tikv_status_port=20181 labels="host=tikv2"
TiKV2-2 ansible_host=172.16.10.5 deploy_dir=/data2/deploy tikv_port=20172 tikv_status_port=20182 labels="host=tikv2"
TiKV3-1 ansible_host=172.16.10.6 deploy_dir=/data1/deploy tikv_port=20171 tikv_status_port=20181 labels="host=tikv3"
TiKV3-2 ansible_host=172.16.10.6 deploy_dir=/data2/deploy tikv_port=20172 tikv_status_port=20182 labels="host=tikv3"
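The per-host port assignments above can be checked for conflicts mechanically. The sketch below parses inventory-style lines with deliberately simplified logic (illustrative only; the second sample line reuses a status port on purpose to show a detection):

```python
# Flag duplicate (host, port) pairs in [tikv_servers]-style lines.
# The parsing here is simplified for illustration.
from collections import defaultdict

entries = [
    "TiKV1-1 ansible_host=172.16.10.4 tikv_port=20171 tikv_status_port=20181",
    "TiKV1-2 ansible_host=172.16.10.4 tikv_port=20172 tikv_status_port=20181",
]

used = defaultdict(list)
for line in entries:
    name, *pairs = line.split()
    fields = dict(p.split("=", 1) for p in pairs)
    for key in ("tikv_port", "tikv_status_port"):
        used[(fields["ansible_host"], fields[key])].append(name)

conflicts = {k: v for k, v in used.items() if len(v) > 1}
print(conflicts)  # -> {('172.16.10.4', '20181'): ['TiKV1-1', 'TiKV1-2']}
```

An empty conflicts dict means every instance on each host has its own tikv_port and tikv_status_port, as in the example inventory above.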
Make sure that tidb_version = v3.1.x in the tidb-ansible/inventory.ini file, and then run the following command to download the TiDB 3.1 binary to the Control Machine:
ansible-playbook local_prepare.yml
If the process_supervision variable uses the default systemd parameter, perform a rolling update to the TiDB cluster by running the command that corresponds to your current TiDB cluster version.
When the TiDB cluster version < 3.0.0, use excessive_rolling_update.yml.
ansible-playbook excessive_rolling_update.yml
When the TiDB cluster version ≧ 3.0.0, use rolling_update.yml for both rolling updates and daily rolling restarts.
ansible-playbook rolling_update.yml
If the process_supervision variable uses the supervise parameter, perform a rolling update to the TiDB cluster using rolling_update.yml, no matter what version the current TiDB cluster is.
ansible-playbook rolling_update.yml
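The playbook choice described in the steps above can be summarized as a small decision function (illustrative only; the function name is hypothetical):

```python
# Which playbook to run, per the rules above:
# - supervise           -> rolling_update.yml (any version)
# - systemd and < 3.0.0 -> excessive_rolling_update.yml
# - systemd and >= 3.0.0 -> rolling_update.yml
# pick_playbook is a hypothetical name for illustration.

def pick_playbook(process_supervision, cluster_version):
    if process_supervision == "supervise":
        return "rolling_update.yml"
    parts = tuple(int(x) for x in cluster_version.lstrip("v").split("."))
    if parts < (3, 0, 0):
        return "excessive_rolling_update.yml"
    return "rolling_update.yml"

print(pick_playbook("systemd", "v2.1.8"))    # -> excessive_rolling_update.yml
print(pick_playbook("systemd", "v3.0.0"))    # -> rolling_update.yml
print(pick_playbook("supervise", "v2.0.4"))  # -> rolling_update.yml
```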
Perform a rolling update to the TiDB monitoring components using the following command:
ansible-playbook rolling_update_monitor.yml