Scale the TiDB Cluster Using TiDB Ansible

The capacity of a TiDB cluster can be increased or decreased without affecting online services.

Warning:

When decreasing the capacity, do not perform the following procedures if other services are deployed on the same machines as the nodes to be removed. The following examples assume that the removed nodes run no other services.

Assume that the topology is as follows:

Name Host IP Services
node1 172.16.10.1 PD1
node2 172.16.10.2 PD2
node3 172.16.10.3 PD3, Monitor
node4 172.16.10.4 TiDB1
node5 172.16.10.5 TiDB2
node6 172.16.10.6 TiKV1
node7 172.16.10.7 TiKV2
node8 172.16.10.8 TiKV3
node9 172.16.10.9 TiKV4

Increase the capacity of a TiDB/TiKV node

For example, if you want to add two TiDB nodes (node101, node102) with the IP addresses 172.16.10.101 and 172.16.10.102, take the following steps:

  1. Edit the inventory.ini file and append the node information:

    [tidb_servers]
    172.16.10.4
    172.16.10.5
    172.16.10.101
    172.16.10.102
    
    [pd_servers]
    172.16.10.1
    172.16.10.2
    172.16.10.3
    
    [tikv_servers]
    172.16.10.6
    172.16.10.7
    172.16.10.8
    172.16.10.9
    
    [monitored_servers]
    172.16.10.1
    172.16.10.2
    172.16.10.3
    172.16.10.4
    172.16.10.5
    172.16.10.6
    172.16.10.7
    172.16.10.8
    172.16.10.9
    172.16.10.101
    172.16.10.102
    
    [monitoring_servers]
    172.16.10.3
    
    [grafana_servers]
    172.16.10.3

    Now the topology is as follows:

    Name Host IP Services
    node1 172.16.10.1 PD1
    node2 172.16.10.2 PD2
    node3 172.16.10.3 PD3, Monitor
    node4 172.16.10.4 TiDB1
    node5 172.16.10.5 TiDB2
    node101 172.16.10.101 TiDB3
    node102 172.16.10.102 TiDB4
    node6 172.16.10.6 TiKV1
    node7 172.16.10.7 TiKV2
    node8 172.16.10.8 TiKV3
    node9 172.16.10.9 TiKV4
  2. Initialize the newly added nodes:

    ansible-playbook bootstrap.yml -l 172.16.10.101,172.16.10.102

    Note:

    If an alias is configured in the inventory.ini file, for example, node101 ansible_host=172.16.10.101, use -l to specify the alias when executing ansible-playbook. For example, ansible-playbook bootstrap.yml -l node101,node102. This also applies to the following steps.

  3. Deploy the newly added nodes:

    ansible-playbook deploy.yml -l 172.16.10.101,172.16.10.102
  4. Start the newly added nodes:

    ansible-playbook start.yml -l 172.16.10.101,172.16.10.102
  5. Update the Prometheus configuration and restart the cluster:

    ansible-playbook rolling_update_monitor.yml --tags=prometheus
  6. Monitor the status of the entire cluster and the newly added nodes by opening a browser to access the monitoring platform: http://172.16.10.3:3000.
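
The inventory.ini edit in step 1 can also be scripted. The following is a minimal sketch that inserts new hosts under a group header; the sample inventory and the `add_hosts` helper are created here only for illustration, and host order within a group generally does not matter to Ansible:

```shell
# Sample inventory created only for illustration
cat > inventory.ini <<'EOF'
[tidb_servers]
172.16.10.4
172.16.10.5

[pd_servers]
172.16.10.1
EOF

add_hosts() {
  group=$1; file=$2; shift 2
  for host in "$@"; do
    # GNU sed: append the host on a new line right after the [group] header
    sed -i "/^\[$group\]/a $host" "$file"
  done
}

add_hosts tidb_servers inventory.ini 172.16.10.101 172.16.10.102
```

In a real deployment, run the helper once per group the new hosts belong to (for example, tidb_servers and monitored_servers) before executing the bootstrap playbook.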

You can use the same procedure to add a TiKV node. To add a PD node, however, you need to manually update some configuration files, as described in the next section.

Increase the capacity of a PD node

For example, if you want to add a PD node (node103) with the IP address 172.16.10.103, take the following steps:

  1. Edit the inventory.ini file and append the node information to the end of the [pd_servers] group:

    [tidb_servers]
    172.16.10.4
    172.16.10.5
    
    [pd_servers]
    172.16.10.1
    172.16.10.2
    172.16.10.3
    172.16.10.103
    
    [tikv_servers]
    172.16.10.6
    172.16.10.7
    172.16.10.8
    172.16.10.9
    
    [monitored_servers]
    172.16.10.4
    172.16.10.5
    172.16.10.1
    172.16.10.2
    172.16.10.3
    172.16.10.103
    172.16.10.6
    172.16.10.7
    172.16.10.8
    172.16.10.9
    
    [monitoring_servers]
    172.16.10.3
    
    [grafana_servers]
    172.16.10.3

    Now the topology is as follows:

    Name Host IP Services
    node1 172.16.10.1 PD1
    node2 172.16.10.2 PD2
    node3 172.16.10.3 PD3, Monitor
    node103 172.16.10.103 PD4
    node4 172.16.10.4 TiDB1
    node5 172.16.10.5 TiDB2
    node6 172.16.10.6 TiKV1
    node7 172.16.10.7 TiKV2
    node8 172.16.10.8 TiKV3
    node9 172.16.10.9 TiKV4
  2. Initialize the newly added node:

    ansible-playbook bootstrap.yml -l 172.16.10.103
  3. Deploy the newly added node:

    ansible-playbook deploy.yml -l 172.16.10.103
  4. Log in to the newly added PD node and edit the start script:

    {deploy_dir}/scripts/run_pd.sh
    1. Remove the --initial-cluster="xxxx" \ configuration.

      Note:

      You must delete this line entirely. You cannot just add the # character at the beginning of the line to comment it out; otherwise, the configuration that follows does not take effect.

    2. Add --join="http://172.16.10.1:2379" \. The IP address (172.16.10.1) can be the IP address of any existing PD node in the cluster.

    3. Manually start the PD service in the newly added PD node:

      {deploy_dir}/scripts/start_pd.sh
    4. Use pd-ctl to check whether the new node is added successfully:

      ./pd-ctl -u "http://172.16.10.1:2379"

      Note:

      pd-ctl is a command-line tool for managing the PD cluster. Here it is used to check whether the new PD node has been added to the member list.

  5. Apply a rolling update to the entire cluster:

    ansible-playbook rolling_update.yml
  6. Start the monitor service:

    ansible-playbook start.yml -l 172.16.10.103
  7. Update the Prometheus configuration and restart the cluster:

    ansible-playbook rolling_update_monitor.yml --tags=prometheus
  8. Monitor the status of the entire cluster and the newly added node by opening a browser to access the monitoring platform: http://172.16.10.3:3000.
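
The manual edit of run_pd.sh in step 4 can be sketched with sed, assuming the script lists one flag per line. The run_pd.sh excerpt below is hypothetical and created only for illustration:

```shell
# Hypothetical excerpt of {deploy_dir}/scripts/run_pd.sh
cat > run_pd.sh <<'EOF'
exec bin/pd-server \
    --name="pd4" \
    --initial-cluster="pd1=http://172.16.10.1:2380,pd2=http://172.16.10.2:2380" \
    --data-dir="data.pd" \
    --log-file="log/pd.log"
EOF

# Delete the --initial-cluster line entirely (commenting it out with # does
# not work), then add --join after the --name line. 172.16.10.1 can be any
# existing PD node in the cluster.
sed -i \
  -e '/--initial-cluster=/d' \
  -e '/--name=/a\    --join="http://172.16.10.1:2379" \\' run_pd.sh
```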

Decrease the capacity of a TiDB node

For example, if you want to remove a TiDB node (node5) with the IP address 172.16.10.5, take the following steps:

  1. Stop all services on node5:

    ansible-playbook stop.yml -l 172.16.10.5
  2. Edit the inventory.ini file and remove the node information:

    [tidb_servers]
    172.16.10.4
    #172.16.10.5  # the removed node
    
    [pd_servers]
    172.16.10.1
    172.16.10.2
    172.16.10.3
    
    [tikv_servers]
    172.16.10.6
    172.16.10.7
    172.16.10.8
    172.16.10.9
    
    [monitored_servers]
    172.16.10.4
    #172.16.10.5  # the removed node
    172.16.10.1
    172.16.10.2
    172.16.10.3
    172.16.10.6
    172.16.10.7
    172.16.10.8
    172.16.10.9
    
    [monitoring_servers]
    172.16.10.3
    
    [grafana_servers]
    172.16.10.3

    Now the topology is as follows:

    Name Host IP Services
    node1 172.16.10.1 PD1
    node2 172.16.10.2 PD2
    node3 172.16.10.3 PD3, Monitor
    node4 172.16.10.4 TiDB1
    node5 172.16.10.5 TiDB2 removed
    node6 172.16.10.6 TiKV1
    node7 172.16.10.7 TiKV2
    node8 172.16.10.8 TiKV3
    node9 172.16.10.9 TiKV4
  3. Update the Prometheus configuration and restart the cluster:

    ansible-playbook rolling_update_monitor.yml --tags=prometheus
  4. Monitor the status of the entire cluster by opening a browser to access the monitoring platform: http://172.16.10.3:3000.
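
The inventory.ini edit in step 2 can be scripted as well. A minimal sketch, using a sample inventory created only for illustration, that comments out the removed host everywhere it appears:

```shell
# Sample inventory created only for illustration
cat > inventory.ini <<'EOF'
[tidb_servers]
172.16.10.4
172.16.10.5

[monitored_servers]
172.16.10.4
172.16.10.5
EOF

# Prefix every line that is exactly the removed IP with '#', matching the
# commenting style used in the examples above
sed -i 's|^172\.16\.10\.5$|#&  # the removed node|' inventory.ini
```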

Decrease the capacity of a TiKV node

For example, if you want to remove a TiKV node (node9) with the IP address 172.16.10.9, take the following steps:

  1. Remove the node from the cluster using pd-ctl:

    1. View the store ID of node9:

      ./pd-ctl -u "http://172.16.10.1:2379" -d store
    2. Remove node9 from the cluster, assuming that the store ID is 10:

      ./pd-ctl -u "http://172.16.10.1:2379" -d store delete 10
  2. Use pd-ctl to check whether the node is successfully removed:

    ./pd-ctl -u "http://172.16.10.1:2379" -d store 10

    Note:

    It takes some time to remove the node. When the status of the removed node becomes Tombstone, the node has been successfully removed.

  3. After the node is successfully removed, stop the services on node9:

    ansible-playbook stop.yml -l 172.16.10.9
  4. Edit the inventory.ini file and remove the node information:

    [tidb_servers]
    172.16.10.4
    172.16.10.5
    
    [pd_servers]
    172.16.10.1
    172.16.10.2
    172.16.10.3
    
    [tikv_servers]
    172.16.10.6
    172.16.10.7
    172.16.10.8
    #172.16.10.9  # the removed node
    
    [monitored_servers]
    172.16.10.4
    172.16.10.5
    172.16.10.1
    172.16.10.2
    172.16.10.3
    172.16.10.6
    172.16.10.7
    172.16.10.8
    #172.16.10.9  # the removed node
    
    [monitoring_servers]
    172.16.10.3
    
    [grafana_servers]
    172.16.10.3

    Now the topology is as follows:

    Name Host IP Services
    node1 172.16.10.1 PD1
    node2 172.16.10.2 PD2
    node3 172.16.10.3 PD3, Monitor
    node4 172.16.10.4 TiDB1
    node5 172.16.10.5 TiDB2
    node6 172.16.10.6 TiKV1
    node7 172.16.10.7 TiKV2
    node8 172.16.10.8 TiKV3
    node9 172.16.10.9 TiKV4 removed
  5. Update the Prometheus configuration and restart the cluster:

    ansible-playbook rolling_update_monitor.yml --tags=prometheus
  6. Monitor the status of the entire cluster by opening a browser to access the monitoring platform: http://172.16.10.3:3000.
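
The check in step 2 can be automated by polling until the store state becomes Tombstone. In the sketch below, the pd-ctl output is simulated with sample JSON for illustration; in a real cluster, replace the body of `store_state` with the actual pd-ctl command shown in the comment:

```shell
store_state() {
  # Real command: ./pd-ctl -u "http://172.16.10.1:2379" -d store 10
  echo '{ "store": { "id": 10, "state_name": "Tombstone" } }'
}

until store_state | grep -q '"state_name": "Tombstone"'; do
  echo "store 10 is not yet Tombstone, waiting..."
  sleep 10
done
echo "store 10 is Tombstone; safe to stop TiKV on node9"
```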

Decrease the capacity of a PD node

For example, if you want to remove a PD node (node2) with the IP address 172.16.10.2, take the following steps:

  1. Remove the node from the cluster using pd-ctl:

    1. View the name of node2:

      ./pd-ctl -u "http://172.16.10.1:2379" -d member
    2. Remove node2 from the cluster, assuming that the name is pd2:

      ./pd-ctl -u "http://172.16.10.1:2379" -d member delete name pd2
  2. Use Grafana or pd-ctl to check whether the node is successfully removed:

    ./pd-ctl -u "http://172.16.10.1:2379" -d member
  3. After the node is successfully removed, stop the services on node2:

    ansible-playbook stop.yml -l 172.16.10.2
  4. Edit the inventory.ini file and remove the node information:

    [tidb_servers]
    172.16.10.4
    172.16.10.5
    
    [pd_servers]
    172.16.10.1
    #172.16.10.2  # the removed node
    172.16.10.3
    
    [tikv_servers]
    172.16.10.6
    172.16.10.7
    172.16.10.8
    172.16.10.9
    
    [monitored_servers]
    172.16.10.4
    172.16.10.5
    172.16.10.1
    #172.16.10.2  # the removed node
    172.16.10.3
    172.16.10.6
    172.16.10.7
    172.16.10.8
    172.16.10.9
    
    [monitoring_servers]
    172.16.10.3
    
    [grafana_servers]
    172.16.10.3

    Now the topology is as follows:

    Name Host IP Services
    node1 172.16.10.1 PD1
    node2 172.16.10.2 PD2 removed
    node3 172.16.10.3 PD3, Monitor
    node4 172.16.10.4 TiDB1
    node5 172.16.10.5 TiDB2
    node6 172.16.10.6 TiKV1
    node7 172.16.10.7 TiKV2
    node8 172.16.10.8 TiKV3
    node9 172.16.10.9 TiKV4
  5. Perform a rolling update to the entire TiDB cluster:

    ansible-playbook rolling_update.yml
  6. Update the Prometheus configuration and restart the cluster:

    ansible-playbook rolling_update_monitor.yml --tags=prometheus
  7. To monitor the status of the entire cluster, open a browser to access the monitoring platform: http://172.16.10.3:3000.
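
Similarly, the check in step 2 can be scripted by confirming that pd2 no longer appears in the member list. The pd-ctl output is simulated here with sample JSON for illustration:

```shell
members() {
  # Real command: ./pd-ctl -u "http://172.16.10.1:2379" -d member
  echo '{ "members": [ { "name": "pd1" }, { "name": "pd3" } ] }'
}

if members | grep -q '"name": "pd2"'; then
  echo "pd2 is still a member; wait and retry"
else
  echo "pd2 removed successfully"
fi
```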

"Scale the TiDB Cluster Using TiDB Ansible" was last updated Aug 27, 2019.