A key aspect of Database-as-a-Service (DBaaS) solutions is ensuring zero downtime, especially during system upgrades. Uninterrupted service is paramount for mission-critical applications running on the database.

TiDB Dedicated is PingCAP’s fully managed cloud DBaaS platform on AWS and GCP. It uses managed Kubernetes services such as Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE), together with TiDB Operator, to provide enhanced scalability, security, and reliability. However, these managed services also introduce operational challenges, such as the necessity for regular Kubernetes upgrades due to each Kubernetes version’s limited support period.

In this blog post, we’ll unravel how TiDB Dedicated achieves zero downtime for TiDB clusters during Kubernetes node upgrades (especially with EKS), ensuring uninterrupted service.

Challenges with Managed Kubernetes Services and Zero Downtime

TiDB Dedicated serves as a foundational infrastructure where operational impacts must be minimized. For instance, latency should not fluctuate significantly, existing connections must not be dropped, and QPS must remain steady. Such requirements present significant challenges in managing TiDB on managed Kubernetes.

While managed Kubernetes services like Amazon EKS or Google GKE simplify management of the Kubernetes control plane, they also introduce challenges, particularly around the version support policy. EKS and GKE only support each Kubernetes minor version for about 14 months after release. After the end-of-support date, the vendor initiates a forced upgrade that involves both the control plane and the nodes. While control plane upgrades usually have little impact on TiDB clusters, node upgrades can be disruptive because they require node replacement.

How TiDB Dedicated Approaches Zero Downtime

Ensuring zero downtime during Kubernetes node upgrades is a multifaceted process. Here are the strategies TiDB Dedicated uses to tackle managed Kubernetes services challenges:

Add an Extra TiDB Server Pod

TiDB Dedicated uses the Advanced StatefulSet provided by TiDB Operator, the automatic operation system for TiDB clusters on Kubernetes, to manage TiDB server pods. This setup performs rolling updates from the largest ordinal to the smallest, updating one pod at a time. However, this means the number of available TiDB server pods is always less than the expected replica count during a rolling restart, so availability and performance are affected. Consider the worst-case scenario: if there is only one TiDB server pod (tidb-0), restarting tidb-0 causes downtime because no other pod can handle queries.

To mitigate this, TiDB Dedicated scales out an extra TiDB server pod before the upgrade to guarantee availability and performance, and scales it back in after the upgrade. This ensures that at least as many TiDB server pods as the desired number of replicas are always serving traffic. The process is transparent to users, and TiDB Dedicated covers the cost.

Figure 1. Scale out an extra TiDB server pod
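To make this concrete, below is a minimal sketch of how such a temporary scale-out could be scripted with the Kubernetes Python client against the TidbCluster custom resource managed by TiDB Operator. The namespace, cluster name, and kubeconfig handling are placeholder assumptions for illustration, not TiDB Dedicated’s actual implementation.

```python
# Sketch: temporarily add one TiDB server pod to a TidbCluster custom
# resource before a node upgrade, assuming the standard TiDB Operator CRD
# (group pingcap.com, version v1alpha1, plural tidbclusters).
from kubernetes import client, config

GROUP, VERSION, PLURAL = "pingcap.com", "v1alpha1", "tidbclusters"
NAMESPACE, CLUSTER = "tidb-cluster", "basic"   # placeholder names

def scale_tidb(delta: int) -> int:
    """Adjust spec.tidb.replicas by `delta` and return the new count."""
    config.load_kube_config()
    api = client.CustomObjectsApi()
    tc = api.get_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL, CLUSTER)
    replicas = tc["spec"]["tidb"]["replicas"] + delta
    patch = {"spec": {"tidb": {"replicas": replicas}}}
    api.patch_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL, CLUSTER, patch)
    return replicas

if __name__ == "__main__":
    scale_tidb(+1)   # scale out before the node upgrade
    # ... perform the node upgrade, verify cluster health, then scale back in:
    # scale_tidb(-1)
```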

Use Blue-Green Upgrades

Surge and blue-green upgrades are two prevalent methods for upgrading Kubernetes nodes, each suited to different circumstances. Surge upgrades are the default strategy for EKS and GKE. They use a rolling method in which nodes are upgraded individually in an undefined order.

This works well enough for stateless applications. However, stateful applications, such as a distributed database like TiDB, usually require extra operational steps before a process shuts down. During planned maintenance of TiDB, if we instruct PD and the TiKV Raft groups to evict their leaders proactively, rather than waiting for the Raft election timeout, the cluster can keep serving client requests steadily while servers restart. TiDB Operator simplifies this by automating the rolling restart of the cluster and handling the complex orchestration seamlessly. However, TiDB Operator performs the rolling restart sequentially, so the random pod and node terminations of an EKS surge upgrade can be a serious problem. Furthermore, the inability to roll back to the original version when issues arise on the new nodes adds to the challenge.
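To illustrate the leader-eviction step described above, here is a rough sketch that asks PD to move Region leaders off a TiKV store before that store restarts, using PD’s HTTP scheduler API. TiDB Operator performs this orchestration automatically during rolling restarts; the PD address, store ID, and exact endpoint paths below are assumptions for illustration only.

```python
# Sketch: proactively move Region leaders off a TiKV store before restarting
# it, by adding PD's evict-leader scheduler over HTTP. TiDB Operator does
# this for you during rolling restarts; the address, store ID, and endpoint
# paths are placeholders for illustration.
import requests

PD_ADDR = "http://pd:2379"   # placeholder PD endpoint
STORE_ID = 1                 # placeholder TiKV store about to restart

def evict_leaders(store_id: int) -> None:
    # Ask PD to schedule all Region leaders away from this store.
    resp = requests.post(
        f"{PD_ADDR}/pd/api/v1/schedulers",
        json={"name": "evict-leader-scheduler", "store_id": store_id},
        timeout=10,
    )
    resp.raise_for_status()

def stop_evicting(store_id: int) -> None:
    # Remove the scheduler after the restarted store is healthy again.
    resp = requests.delete(
        f"{PD_ADDR}/pd/api/v1/schedulers/evict-leader-scheduler-{store_id}",
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    evict_leaders(STORE_ID)
    # ... restart the TiKV pod and wait for it to become healthy ...
    stop_evicting(STORE_ID)
```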

Given the intricacies associated with surge upgrades, we chose blue-green upgrades for TiDB Dedicated. This strategy entails the following steps:

  1. Create a new set of desired nodes (“green” nodes).
  2. Cordon the original nodes (“blue” nodes).
  3. Trigger a rolling restart of the TiDB cluster with TiDB Operator, gracefully migrating pods to the “green” nodes.
  4. Verify the TiDB cluster’s health after the “blue” nodes are drained.
  5. Remove the “blue” nodes.

This approach allows us to stop the upgrade and revert to the old nodes if any issues arise. It is also more resource-intensive, as it doubles the number of nodes needed during the upgrade. Again, TiDB Dedicated absorbs this cost, so customers don’t have to worry about extra fees.
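As an illustration of step 2 in the flow above, the following sketch cordons the “blue” nodes with the Kubernetes Python client so that the operator-driven rolling restart lands pods on the “green” nodes. The node group label used for selection is a placeholder assumption, not the label TiDB Dedicated actually uses.

```python
# Sketch: cordon the "blue" nodes (step 2 above) so that TiDB Operator's
# rolling restart reschedules pods onto the new "green" nodes. The node
# group label used here is a placeholder assumption.
from kubernetes import client, config

BLUE_GROUP_LABEL = "nodegroup=tidb-blue"   # placeholder label selector

def cordon_blue_nodes() -> list[str]:
    config.load_kube_config()
    v1 = client.CoreV1Api()
    cordoned = []
    for node in v1.list_node(label_selector=BLUE_GROUP_LABEL).items:
        # Mark the node unschedulable; existing pods keep running until the
        # operator-driven rolling restart moves them one at a time.
        v1.patch_node(node.metadata.name, {"spec": {"unschedulable": True}})
        cordoned.append(node.metadata.name)
    return cordoned

if __name__ == "__main__":
    print("Cordoned blue nodes:", cordon_blue_nodes())
```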

Migrate to Self-Managed Node Groups

We originally used managed node groups for TiDB clusters on AWS, which simplified many aspects of node maintenance. However, they did not align well with TiDB’s operational requirements. For example, EKS-managed node groups provide a one-click node upgrade feature, but it performs surge upgrades and does not support blue-green upgrades. Furthermore, to comply with AWS’s policy, we had to upgrade nodes annually, because a managed node group’s version must either match the control plane’s version or lag behind it by at most one minor version.

On the other hand, self-managed node groups allow node versions to be up to two minor versions older than the control plane. Given this flexibility, we migrated from managed node groups to self-managed ones during blue-green upgrades. 

Following this change, TiDB Dedicated can now upgrade the Kubernetes control plane version twice consecutively without the need to upgrade nodes. This significantly reduces the maintenance frequency for TiDB clusters caused by EKS upgrades and minimizes the impact on the customer side.

Figure 2. Self-managed node groups
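For illustration, the sketch below checks that every node’s kubelet stays within the two-minor-version skew that self-managed node groups allow, using the Kubernetes Python client. The helper names are hypothetical and not part of TiDB Dedicated.

```python
# Sketch: check that every node's kubelet minor version is no more than two
# minor versions behind the control plane, the skew that self-managed node
# groups allow. Helper names are illustrative only.
from kubernetes import client, config

def minor(version: str) -> int:
    # "v1.27.8-eks-..." -> 27
    return int(version.lstrip("v").split(".")[1])

def check_node_skew(max_skew: int = 2) -> None:
    config.load_kube_config()
    control_plane_minor = minor(client.VersionApi().get_code().git_version)
    for node in client.CoreV1Api().list_node().items:
        node_minor = minor(node.status.node_info.kubelet_version)
        skew = control_plane_minor - node_minor
        status = "OK" if skew <= max_skew else "TOO OLD"
        print(f"{node.metadata.name}: kubelet v1.{node_minor} "
              f"(skew {skew}) -> {status}")

if __name__ == "__main__":
    check_node_skew()
```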

Benefits of TiDB Dedicated’s Zero-Downtime Upgrade Strategy

Through rigorous testing under a range of load conditions, from high to low, we evaluated the impact of Kubernetes node upgrades on TiDB’s performance and availability. The findings are encouraging:

  • Stability and performance: During the upgrade process, the system maintained stable performance, with only minor fluctuations in key metrics. Queries per second (QPS) experienced a negligible drop, and query duration saw a slight increase, both within a margin of just 5%. The figure below illustrates the stability in QPS and query duration throughout the upgrade process.

Figure 3. Stability and performance gains

  • Uninterrupted service: The results underscore that our meticulous upgrade methodology yields virtually no observable impact on TiDB service performance or availability from the user’s perspective.
  • Control over latency and throughput: Even during the upgrades of the underlying Kubernetes infrastructure, we can maintain stringent control over latency and throughput, ensuring a consistent and reliable service for our users.

Conclusion

In conclusion, TiDB Dedicated’s strategic approach to Kubernetes node upgrades significantly mitigates the operational challenges commonly associated with managed Kubernetes services. The combination of an extra TiDB server pod, blue-green upgrades, and migration to self-managed node groups (for EKS) provides a robust framework that ensures zero downtime and consistent performance during upgrades.

As a customer, you can rely on the performance and reliability of TiDB Dedicated and focus on your applications without worrying about disruptive Kubernetes upgrades or extra costs.

