---
title: Upgrade TiDB Using TiUP
summary: Learn how to upgrade TiDB using TiUP.
---
This document is targeted for the following upgrade paths:
- Upgrade from TiDB 4.0 versions to TiDB 6.4.
- Upgrade from TiDB 5.0-5.4 versions to TiDB 6.4.
- Upgrade from TiDB 6.0 to TiDB 6.4.
- Upgrade from TiDB 6.1 to TiDB 6.4.
- Upgrade from TiDB 6.2 to TiDB 6.4.
- Upgrade from TiDB 6.3 to TiDB 6.4.
Warning:
- You cannot upgrade TiFlash online from versions earlier than 5.3 to 5.3 or later. Instead, you must first stop all the TiFlash instances of the early version, and then upgrade the cluster offline. If other components (such as TiDB and TiKV) do not support an online upgrade, follow the instructions in warnings in Online upgrade.
- DO NOT upgrade a TiDB cluster when a DDL statement is being executed in the cluster (usually for time-consuming DDL statements such as `ADD INDEX` and column type changes).
- Before the upgrade, it is recommended to use the `ADMIN SHOW DDL` command to check whether the TiDB cluster has an ongoing DDL job. If the cluster has a DDL job, wait until the DDL execution is finished, or use the `ADMIN CANCEL DDL` command to cancel the DDL job before you upgrade the cluster (see the sketch after this warning).
- In addition, during the cluster upgrade, DO NOT execute any DDL statement. Otherwise, undefined behavior might occur.
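For example, a minimal sketch of checking for and canceling ongoing DDL jobs before the upgrade. The connection parameters (`127.0.0.1`, port `4000`, user `root`, no password) are assumptions; adjust them to your environment, and replace `<job-id>` with the job ID reported by the first statement:

```shell
# Check whether the TiDB cluster has an ongoing DDL job (host, port, and user are assumptions).
mysql -h 127.0.0.1 -P 4000 -u root -e "ADMIN SHOW DDL;"

# If an unfinished DDL job blocks the upgrade and you decide to cancel it instead of waiting:
mysql -h 127.0.0.1 -P 4000 -u root -e "ADMIN CANCEL DDL JOBS <job-id>;"
```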
Note:
If your cluster to be upgraded is v3.1 or an earlier version (v3.0 or v2.1), the direct upgrade to v6.4.0 is not supported. You need to upgrade your cluster first to v4.0 and then to v6.4.0.
- TiDB currently does not support version downgrade or rolling back to an earlier version after the upgrade.
- For the v4.0 cluster managed using TiDB Ansible, you need to import the cluster to TiUP (`tiup cluster`) for new management according to Upgrade TiDB Using TiUP (v4.0). Then you can upgrade the cluster to v6.4.0 according to this document.
- To update versions earlier than v3.0 to v6.4.0:
  1. Update this version to 3.0 using TiDB Ansible.
  2. Use TiUP (`tiup cluster`) to import the TiDB Ansible configuration.
  3. Update the 3.0 version to 4.0 according to Upgrade TiDB Using TiUP (v4.0).
  4. Upgrade the cluster to v6.4.0 according to this document.
- Upgrading the versions of TiDB Binlog, TiCDC, TiFlash, and other components is supported.
- For detailed compatibility changes of different versions, see the Release Notes of each version. Modify your cluster configuration according to the "Compatibility Changes" section of the corresponding release notes.
- For clusters that upgrade from versions earlier than v5.3 to v5.3 or later versions, the default deployed Prometheus will upgrade from v2.8.1 to v2.27.1. Prometheus v2.27.1 provides more features and fixes a security issue. Compared with v2.8.1, alert time representation in v2.27.1 is changed. For more details, see the Prometheus commit.
This section introduces the preparations needed before upgrading your TiDB cluster, including upgrading TiUP and the TiUP Cluster component.
Before upgrading your TiDB cluster, you first need to upgrade TiUP or the TiUP offline mirror.
Note:
If the control machine of the cluster to upgrade cannot access `https://tiup-mirrors.pingcap.com`, skip this section and see Upgrade TiUP offline mirror.
1. Upgrade the TiUP version. It is recommended that the TiUP version is `1.11.0` or later.

    {{< copyable "shell-regular" >}}

    ```shell
    tiup update --self
    tiup --version
    ```
2. Upgrade the TiUP Cluster version. It is recommended that the TiUP Cluster version is `1.11.0` or later.

    {{< copyable "shell-regular" >}}

    ```shell
    tiup update cluster
    tiup cluster --version
    ```
Note:
If the cluster to upgrade was not deployed using the offline method, skip this step.
Refer to Deploy a TiDB Cluster Using TiUP - Deploy TiUP offline to download the TiUP mirror of the new version and upload it to the control machine. After executing `local_install.sh`, TiUP will complete the overwrite upgrade.
{{< copyable "shell-regular" >}}
```shell
tar xzvf tidb-community-server-${version}-linux-amd64.tar.gz
sh tidb-community-server-${version}-linux-amd64/local_install.sh
source /home/tidb/.bash_profile
```
After the overwrite upgrade, run the following command to merge the server and toolkit offline mirrors to the server directory:
{{< copyable "shell-regular" >}}
```shell
tar xf tidb-community-toolkit-${version}-linux-amd64.tar.gz
ls -ld tidb-community-server-${version}-linux-amd64 tidb-community-toolkit-${version}-linux-amd64
cd tidb-community-server-${version}-linux-amd64/
cp -rp keys ~/.tiup/
tiup mirror merge ../tidb-community-toolkit-${version}-linux-amd64
```
After merging the mirrors, run the following command to upgrade the TiUP Cluster component:
{{< copyable "shell-regular" >}}
```shell
tiup update cluster
```
Now, the offline mirror has been upgraded successfully. If an error occurs during TiUP operation after the overwriting, it might be that the `manifest` is not updated. You can try `rm -rf ~/.tiup/manifests/*` before running TiUP again.
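If this happens, a minimal recovery sketch is as follows; it only combines the cleanup command above with a retry of the failed TiUP command, using `tiup update cluster` as an example:

```shell
# Clear the locally cached manifests.
rm -rf ~/.tiup/manifests/*

# Re-run the TiUP command that failed, for example:
tiup update cluster
```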
Note:
Skip this step if one of the following situations applies:
- You have not modified the configuration parameters of the original cluster, or you have modified the configuration parameters using `tiup cluster` but no more modification is needed.
- After the upgrade, you want to use v6.4.0's default parameter values for the unmodified configuration items.
1. Enter the `vi` editing mode to edit the topology file:

    {{< copyable "shell-regular" >}}

    ```shell
    tiup cluster edit-config <cluster-name>
    ```

2. Refer to the format of the topology configuration template and fill in the parameters you want to modify in the `server_configs` section of the topology file.

3. After the modification, enter `:` + `w` + `q` to save the change and exit the editing mode. Enter `Y` to confirm the change.
Note:
Before you upgrade the cluster to v6.4.0, make sure that the parameters you have modified in v4.0 are compatible in v6.4.0. For details, see TiKV Configuration File.
The following three TiKV parameters are obsolete in TiDB v5.0. If the following parameters have been configured in your original cluster, you need to delete these parameters through `edit-config` (a sketch follows this note):

- `pessimistic-txn.enabled`
- `server.request-batch-enable-cross-command`
- `server.request-batch-wait-duration`
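For reference, a hedged sketch of removing these entries with `edit-config`. The parameter values shown in the comments are hypothetical; whether the entries exist and what values they carry depends on your original configuration:

```shell
# Open the topology file of the cluster in the editor.
tiup cluster edit-config <cluster-name>

# In the tikv part of the server_configs section, delete entries such as the
# following if they are present (the values here are hypothetical examples):
#   pessimistic-txn.enabled: true
#   server.request-batch-enable-cross-command: true
#   server.request-batch-wait-duration: "1ms"
# Save the file and confirm the change to apply the edit.
```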
To avoid undefined behaviors or other issues during the upgrade, it is recommended to check the health status of Regions of the current cluster before the upgrade. To do that, you can use the `check` sub-command.
{{< copyable "shell-regular" >}}
```shell
tiup cluster check <cluster-name> --cluster
```
After the command is executed, the "Region status" check result will be output.
- If the result is "All Regions are healthy", all Regions in the current cluster are healthy and you can continue the upgrade.
- If the result is "Regions are not fully healthy: m miss-peer, n pending-peer" with the "Please fix unhealthy regions before other operations." prompt, some Regions in the current cluster are abnormal. You need to troubleshoot the anomalies until the check result becomes "All Regions are healthy". Then you can continue the upgrade.
This section describes how to upgrade the TiDB cluster and verify the version after the upgrade.
You can upgrade your cluster in one of the two ways: online upgrade and offline upgrade.
By default, TiUP Cluster upgrades the TiDB cluster using the online method, which means that the TiDB cluster can still provide services during the upgrade process. With the online method, the leaders are migrated one by one on each node before the upgrade and restart. Therefore, for a large-scale cluster, it takes a long time to complete the entire upgrade operation.
If your application has a maintenance window for the database to be stopped for maintenance, you can use the offline upgrade method to quickly perform the upgrade operation.
{{< copyable "shell-regular" >}}
```shell
tiup cluster upgrade <cluster-name> <version>
```
For example, if you want to upgrade the cluster to v6.4.0:
{{< copyable "shell-regular" >}}
```shell
tiup cluster upgrade <cluster-name> v6.4.0
```
Note:
- An online upgrade upgrades all components one by one. During the upgrade of TiKV, all leaders in a TiKV instance are evicted before stopping the instance. The default timeout is 5 minutes (300 seconds). The instance is stopped directly after this timeout.
- You can use the `--force` parameter to upgrade the cluster immediately without evicting the leaders. However, the errors that occur during the upgrade will be ignored, which means that you are not notified of any upgrade failure. Therefore, use the `--force` parameter with caution.
- To keep a stable performance, make sure that all leaders in a TiKV instance are evicted before stopping the instance. You can set `--transfer-timeout` to a larger value, for example, `--transfer-timeout 3600` (unit: second), as shown in the first sketch after this note.
- To upgrade TiFlash from versions earlier than 5.3 to 5.3 or later, you should stop TiFlash and then upgrade it. The following steps help you upgrade TiFlash without interrupting other components (written out in full in the second sketch after this note):
    1. Stop the TiFlash instance: `tiup cluster stop <cluster-name> -R tiflash`
    2. Upgrade the TiDB cluster without restarting it (only updating the files): `tiup cluster upgrade <cluster-name> <version> --offline`
    3. Reload the TiDB cluster: `tiup cluster reload <cluster-name>`. After the reload, the TiFlash instance is started and you do not need to manually start it.
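For example, a hedged sketch of an online upgrade that allows more time for leader eviction, reusing the `--transfer-timeout` value mentioned above (the cluster name and target version are placeholders):

```shell
# Allow up to one hour per TiKV instance for leader eviction before the instance is restarted.
tiup cluster upgrade <cluster-name> v6.4.0 --transfer-timeout 3600
```

And the TiFlash-specific procedure above, written out as one sequence (a sketch that only chains the commands listed in the note):

```shell
# 1. Stop the TiFlash instances first.
tiup cluster stop <cluster-name> -R tiflash

# 2. Update the cluster files without restarting the running components.
tiup cluster upgrade <cluster-name> <version> --offline

# 3. Reload the cluster; TiFlash is started automatically after the reload.
tiup cluster reload <cluster-name>
```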
1. Before the offline upgrade, you first need to stop the entire cluster.

    {{< copyable "shell-regular" >}}

    ```shell
    tiup cluster stop <cluster-name>
    ```

2. Use the `upgrade` command with the `--offline` option to perform the offline upgrade.

    {{< copyable "shell-regular" >}}

    ```shell
    tiup cluster upgrade <cluster-name> <version> --offline
    ```

3. After the upgrade, the cluster will not be automatically restarted. You need to use the `start` command to restart it.

    {{< copyable "shell-regular" >}}

    ```shell
    tiup cluster start <cluster-name>
    ```
Execute the `display` command to view the latest cluster version `TiDB Version`:

{{< copyable "shell-regular" >}}

```shell
tiup cluster display <cluster-name>
```

```
Cluster type: tidb
Cluster name: <cluster-name>
Cluster version: v6.4.0
```
Note:
By default, TiUP and TiDB share usage details with PingCAP to help understand how to improve the product. For details about what is shared and how to disable the sharing, see Telemetry.
This section describes common problems encountered when upgrading the TiDB cluster using TiUP.
If an error occurs and the upgrade is interrupted, how to resume the upgrade after fixing this error?
Re-execute the `tiup cluster upgrade` command to resume the upgrade. The upgrade operation restarts the nodes that have been previously upgraded. If you do not want the upgraded nodes to be restarted, use the `replay` sub-command to retry the operation:
1. Execute `tiup cluster audit` to see the operation records:

    {{< copyable "shell-regular" >}}

    ```shell
    tiup cluster audit
    ```

    Find the failed upgrade operation record and keep the ID of this operation record. The ID is the `<audit-id>` value in the next step.

2. Execute `tiup cluster replay <audit-id>` to retry the corresponding operation:

    {{< copyable "shell-regular" >}}

    ```shell
    tiup cluster replay <audit-id>
    ```
You can specify `--force`. Then the processes of transferring PD leader and evicting TiKV leader are skipped during the upgrade. The cluster is directly restarted to update the version, which has a great impact on the cluster that runs online. Here is the command:
{{< copyable "shell-regular" >}}
```shell
tiup cluster upgrade <cluster-name> <version> --force
```
You can upgrade the tool version by using TiUP to install the `ctl` component of the corresponding version:
{{< copyable "shell-regular" >}}
```shell
tiup install ctl:v6.4.0
```
- See TiDB 6.4.0 Release Notes for the compatibility changes.
- Try to avoid creating a new clustered index table when you apply rolling updates to the clusters using TiDB Binlog.