This document collects frequently asked questions (FAQs) about the TiDB cluster in Kubernetes.
The default time zone setting for each component container of a TiDB cluster in Kubernetes is UTC. To modify this setting, take the steps below based on your cluster status:
If it is the first time you deploy the cluster:
In the values.yaml file of the TiDB cluster, modify the timezone setting. For example, you can set it to timezone: Asia/Shanghai before you deploy the TiDB cluster.
If the cluster is running:
In the values.yaml file of the TiDB cluster, modify the timezone setting. For example, you can set it to timezone: Asia/Shanghai and then upgrade the TiDB cluster.
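The steps above amount to a one-line change in values.yaml. An illustrative excerpt, not a complete file:

```yaml
# Partial values.yaml for the TiDB cluster (illustrative excerpt).
# Each component container of the cluster uses this time zone.
timezone: Asia/Shanghai
```

After modifying the value, deploy the cluster for a first-time installation, or upgrade a running cluster.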
Currently, the TiDB cluster does not support HPA (Horizontal Pod Autoscaling) or VPA (Vertical Pod Autoscaling), because it is difficult to achieve autoscaling for stateful applications such as databases. Autoscaling cannot be achieved merely from CPU and memory monitoring data.
Besides operations on the Kubernetes cluster itself, the following two scenarios might require manual intervention when using TiDB Operator:
To achieve high availability and data safety, it is recommended that you deploy the TiDB cluster in at least three availability zones in a production environment.
In terms of the deployment topology relationship between the TiDB cluster and TiDB services, TiDB Operator supports the following three deployment modes. Each mode has its own advantages and disadvantages, so choose based on your actual application needs.
TiDB Operator does not yet support automatically orchestrating TiSpark.
If you want to add the TiSpark component to TiDB in Kubernetes, you must maintain Spark on your own in the same Kubernetes cluster. You must ensure that Spark can access the IPs and ports of the PD and TiKV instances, and install the TiSpark plugin for Spark. The TiSpark documentation offers a detailed guide for installing the plugin.
To maintain Spark in Kubernetes, refer to Spark on Kubernetes.
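Once Spark is running, the TiSpark configuration mainly points Spark at the cluster's PD endpoints. A minimal spark-defaults.conf sketch; the addresses are placeholders and must be replaced with your cluster's actual PD endpoints:

```
# Comma-separated PD endpoints of the TiDB cluster (placeholders).
spark.tispark.pd.addresses  <pd-address-1>:2379,<pd-address-2>:2379
# Enable the TiSpark SQL extensions (TiSpark 2.x and later).
spark.sql.extensions        org.apache.spark.sql.TiExtensions
```

The TiSpark jar must also be on Spark's classpath; see the TiSpark documentation for the matching jar version.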
To check the configuration of the PD, TiKV, and TiDB components of the current cluster, run the following commands:
Check the PD configuration file:
kubectl exec -it <pd-pod-name> -n <namespace> -- cat /etc/pd/pd.toml
Check the TiKV configuration file:
kubectl exec -it <tikv-pod-name> -n <namespace> -- cat /etc/tikv/tikv.toml
Check the TiDB configuration file:
kubectl exec -it <tidb-pod-name> -c tidb -n <namespace> -- cat /etc/tidb/tidb.toml
Three possible reasons:
Insufficient resources or the HA scheduling policy causes the Pod to be stuck in the Pending state. Refer to Troubleshoot TiDB in Kubernetes for more details.
A taint is applied to some nodes, which prevents the Pod from being scheduled to these nodes unless the Pod has a matching toleration. Refer to taint & toleration for more details.
A scheduling conflict causes the Pod to be stuck in the ContainerCreating state. In such cases, check whether more than one TiDB Operator is deployed in the Kubernetes cluster. Conflicts occur when the custom schedulers of multiple TiDB Operators schedule the same Pod in different phases.
You can execute the following command to verify whether there is more than one TiDB Operator deployed. If more than one record is returned, delete the extra TiDB Operator to resolve the scheduling conflict.
kubectl get deployment --all-namespaces | grep tidb-scheduler
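For the taint case above, the Pod spec needs a toleration that matches the node's taint. A minimal Kubernetes toleration sketch; the key, value, and effect below are illustrative and must match the taint actually applied to your nodes:

```yaml
tolerations:
- key: "dedicated"        # must match the taint key on the node
  operator: "Equal"
  value: "tidb"           # must match the taint value
  effect: "NoSchedule"    # must match the taint effect
```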