From 1af9e6f99319300efd94a44d9818790294d87658 Mon Sep 17 00:00:00 2001 From: Liuxiaozhen12 Date: Mon, 13 Sep 2021 10:54:56 +0800 Subject: [PATCH] Squashed commit of the following: MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit commit c05d1be5bdcd2a42a87448cef58deeb6d778f23f Author: Daniël van Eeden Date: Fri Sep 10 09:32:40 2021 +0200 encryption-at-rest: Update (#6152) commit f42e8fde873f7dbf6d629b2a021ff1a8aa7c7a0f Author: Morgan Tocker Date: Fri Sep 10 00:24:39 2021 -0600 Update system variables for correctness (#6224) commit 380d0dfac44ca9f6d19d6c9519cb0c08b158c7bd Author: shichun-0415 <89768198+shichun-0415@users.noreply.github.com> Date: Fri Sep 10 14:14:39 2021 +0800 release note: add a check item for feedback-probability (#6405) commit b2e70ce04f9891332d0f74af0e15374d69a25932 Author: Morgan Tocker Date: Thu Sep 9 08:14:38 2021 -0600 basic features: add feature matrix (#6130) commit 055dbf21e95f67c08f59640e78998cb6e865c372 Author: Liuxiaozhen12 <82579298+Liuxiaozhen12@users.noreply.github.com> Date: Thu Sep 9 21:00:39 2021 +0800 release-notes: add 5.2.1 release notes (#6438) commit bea12de7ab04095745092b80d6d86d628096ce24 Author: Fendy <40378371+septemberfd@users.noreply.github.com> Date: Thu Sep 9 16:04:39 2021 +0800 Enhance TiDB login descriptions - EN (#6427) commit 815ed6ff14231fd6fd2f552f67f4988e2b9e8782 Author: Yini Xu <34967660+YiniXu9506@users.noreply.github.com> Date: Thu Sep 9 15:06:39 2021 +0800 chore: update ci scripts (#6429) commit c666f0586160ffc83c66706b70ce35e7fde6712f Author: Kolbe Kegel Date: Thu Sep 9 00:04:40 2021 -0700 performance_schema -> information_schema (#6418) commit b8d3b32725371e74f2e79df0f83885e83d9ef778 Author: Yini Xu <34967660+YiniXu9506@users.noreply.github.com> Date: Thu Sep 9 14:50:39 2021 +0800 chore: fix byte encode (#6428) commit 7db1f254764e8e1b0d4c049aa0eeadcb578552bd Author: Fendy <40378371+septemberfd@users.noreply.github.com> Date: Thu Sep 9 12:06:38 2021 +0800 add doc links to overview.md (#6422) commit efd371ef0ed47be9ada1eb4a2e624c9850454943 Author: Morgan Tocker Date: Wed Sep 8 19:56:38 2021 -0600 system-variables: improve noop functions warning (#6374) commit 5a56bcc562ddb80b15913881c9341748c133fc98 Author: Enwei Date: Wed Sep 8 10:50:58 2021 +0200 Configuration Options: remove two TiDB's command options (#6370) commit 9e53b404119adc2f024d7a6c47d6f3e24afded70 Author: Enwei Date: Wed Sep 8 10:48:58 2021 +0200 BR FAQ: add a warning about multi br importing (#6263) commit 391e4bb135e3824ac8bb2a038a944bdc243dc485 Author: you06 Date: Wed Sep 8 16:46:58 2021 +0800 update transaction doc (#6158) commit 7c6c1def395c66c2244c54c98d68632cb81ba4f3 Author: Enwei Date: Wed Sep 8 10:42:59 2021 +0200 TiKV config: fix wrong description about `compaction-readahead-size` (#6371) commit 5f16115a9b734dbb6dbcba952e4c90b3a4c247f4 Author: Liuxiaozhen12 <82579298+Liuxiaozhen12@users.noreply.github.com> Date: Wed Sep 8 15:34:58 2021 +0800 Add description for TiKV Ready handled panel (#6375) --- TOC.md | 1 + basic-features.md | 219 ++++++++++++------- br/backup-and-restore-faq.md | 8 + br/use-br-command-line-tool.md | 2 + command-line-flags-for-tidb-configuration.md | 11 - dashboard/dashboard-access.md | 2 +- download-ecosystem-tools.md | 8 +- encryption-at-rest.md | 82 +++++-- grafana-tikv-dashboard.md | 9 +- optimistic-transaction.md | 2 +- overview.md | 4 +- pd-control.md | 4 +- pd-recover.md | 2 +- post-installation-check.md | 4 +- production-deployment-using-tiup.md | 6 +- quick-start-with-tidb.md | 10 +- 
releases/release-5.0.0.md | 1 + releases/release-5.1.0.md | 1 + releases/release-5.2.0.md | 4 + releases/release-5.2.1.md | 19 ++ releases/release-notes.md | 1 + scale-tidb-using-tiup.md | 2 +- scripts/check-conflicts.py | 6 +- scripts/check-control-char.py | 2 +- scripts/check-manual-line-breaks.py | 2 +- scripts/check-tags.py | 4 +- statement-summary-tables.md | 4 +- system-variables.md | 54 ++--- ticdc/manage-ticdc.md | 4 +- tikv-configuration-file.md | 2 +- tiup/tiup-component-cluster-deploy.md | 2 +- tiup/tiup-component-management.md | 12 +- tiup/tiup-mirror.md | 6 +- tiup/tiup-playground.md | 4 +- transaction-isolation-levels.md | 4 +- upgrade-tidb-using-tiup.md | 8 +- 36 files changed, 329 insertions(+), 187 deletions(-) create mode 100644 releases/release-5.2.1.md diff --git a/TOC.md b/TOC.md index 84f8e335e8a8b..9640c57529fc5 100644 --- a/TOC.md +++ b/TOC.md @@ -556,6 +556,7 @@ + Release Notes + [All Releases](/releases/release-notes.md) + v5.2 + + [5.2.1](/releases/release-5.2.1.md) + [5.2.0](/releases/release-5.2.0.md) + v5.1 + [5.1.1](/releases/release-5.1.1.md) diff --git a/basic-features.md b/basic-features.md index 74260e3ff2b49..7eac8b8dbf16f 100644 --- a/basic-features.md +++ b/basic-features.md @@ -1,80 +1,149 @@ --- -title: TiDB Basic Features +title: TiDB Features summary: Learn about the basic features of TiDB. aliases: ['/docs/dev/basic-features/'] --- -# TiDB Basic Features - -This document introduces the basic features of TiDB. - -## Data types - -- Numeric types: `BIT`, `BOOL|BOOLEAN`, `SMALLINT`, `MEDIUMINT`, `INT|INTEGER`, `BIGINT`, `FLOAT`, `DOUBLE`, `DECIMAL`. - -- Date and time types: `DATE`, `TIME`, `DATETIME`, `TIMESTAMP`, `YEAR`. - -- String types: `CHAR`, `VARCHAR`, `TEXT`, `TINYTEXT`, `MEDIUMTEXT`, `LONGTEXT`, `BINARY`, `VARBINARY`, `BLOB`, `TINYBLOB`, `MEDIUMBLOB`, `LONGBLOB`, `ENUM`, `SET`. - -- The `JSON` type. - -## Operators - -- Arithmetic operators, bit operators, comparison operators, logical operators, date and time operators, and so on. - -## Character sets and collations - -- Character sets: `UTF8`, `UTF8MB4`, `BINARY`, `ASCII`, `LATIN1`. - -- Collations: `UTF8MB4_GENERAL_CI`, `UTF8MB4_UNICODE_CI`, `UTF8MB4_GENERAL_BIN`, `UTF8_GENERAL_CI`, `UTF8_UNICODE_CI`, `UTF8_GENERAL_BIN`, `BINARY`. - -## Functions - -- Control flow functions, string functions, date and time functions, bit functions, data type conversion functions, data encryption and decryption functions, compression and decompression functions, information functions, JSON functions, aggregation functions, window functions, and so on. - -## SQL statements - -- Fully supports standard Data Definition Language (DDL) statements, such as `CREATE`, `DROP`, `ALTER`, `RENAME`, `TRUNCATE`, and so on. - -- Fully supports standard Data Manipulation Language (DML) statements, such as `INSERT`, `REPLACE`, `SELECT`, subqueries, `UPDATE`, `LOAD DATA`, and so on. - -- Fully supports standard transactional and locking statements, such as `START TRANSACTION`, `COMMIT`, `ROLLBACK`, `SET TRANSACTION`, and so on. - -- Fully supports standard database administration statements, such as `SHOW`, `SET`, and so on. - -- Fully supports standard utility statements, such as `DESCRIBE`, `EXPLAIN`, `USE`, and so on. - -- Fully supports the `GROUP BY` and `ORDER BY` clauses. - -- Fully supports the standard `LEFT OUTER JOIN` and `RIGHT OUTER JOIN` SQL statements. - -- Fully supports the standard SQL table and column aliases. 
- -## Partitioning - -- Supports Range partitioning -- Supports Hash partitioning - -## Views - -- Supports general views - -## Constraints - -- Supports non-empty constraints -- Supports primary key constraints -- Supports unique constraints - -## Security - -- Supports privilege management based on RBAC (role-based access control) -- Supports password management -- Supports communication and data encryption -- Supports IP allowlist -- Supports audit - -## Tools - -- Supports fast backup -- Supports data migration from MySQL to TiDB using tools -- Supports deploying and maintaining TiDB using tools +# TiDB Features + +The following table provides an overview of the feature development history of TiDB. Note that features under active development might change before the final release. + +| Data types, functions, and operators | 5.2 | 5.1 | 5.0 | 4.0 | +|----------------------------------------------------------------------------------------------------------|:------------:|:------------:|:------------:|:------------:| +| [Numeric types](/data-type-numeric.md) | Y | Y | Y | Y | +| [Date and time types](/data-type-date-and-time.md) | Y | Y | Y | Y | +| [String types](/data-type-string.md) | Y | Y | Y | Y | +| [JSON type](/data-type-json.md) | Experimental | Experimental | Experimental | Experimental | +| [Control flow functions](/functions-and-operators/control-flow-functions.md) | Y | Y | Y | Y | +| [String functions](/functions-and-operators/string-functions.md) | Y | Y | Y | Y | +| [Numeric functions and operators](/functions-and-operators/numeric-functions-and-operators.md) | Y | Y | Y | Y | +| [Date and time functions](/functions-and-operators/date-and-time-functions.md) | Y | Y | Y | Y | +| [Bit functions and operators](/functions-and-operators/bit-functions-and-operators.md) | Y | Y | Y | Y | +| [Cast functions and operators](/functions-and-operators/cast-functions-and-operators.md) | Y | Y | Y | Y | +| [Encryption and compression functions](/functions-and-operators/encryption-and-compression-functions.md) | Y | Y | Y | Y | +| [Information functions](/functions-and-operators/information-functions.md) | Y | Y | Y | Y | +| [JSON functions](/functions-and-operators/json-functions.md) | Experimental | Experimental | Experimental | Experimental | +| [Aggregation functions](/functions-and-operators/aggregate-group-by-functions.md) | Y | Y | Y | Y | +| [Window functions](/functions-and-operators/window-functions.md) | Y | Y | Y | Y | +| [Miscellaneous functions](/functions-and-operators/miscellaneous-functions.md) | Y | Y | Y | Y | +| [Operators](/functions-and-operators/operators.md) | Y | Y | Y | Y | +| [**Character sets**](/character-set-and-collation.md) | **5.2** | **5.1** | **5.0** | **4.0** | +| `utf8` | Y | Y | Y | Y | +| `utf8mb4` | Y | Y | Y | Y | +| `ascii` [^1] | Y | Y | Y | Y | +| `latin1` | Y | Y | Y | Y | +| `binary` | Y | Y | Y | Y | +| [**Collations**](/character-set-and-collation.md) | **5.2** | **5.1** | **5.0** | **4.0** | +| `utf8_bin` | Y | Y | Y | Y | +| `utf8_general_ci` | Y | Y | Y | Y | +| `utf8_unicode_ci` | Y | Y | Y | Y | +| `utf8mb4_bin` | Y | Y | Y | Y | +| `utf8mb4_general_ci` | Y | Y | Y | Y | +| `utf8mb4_unicode_ci` | Y | Y | Y | Y | +| `ascii_bin` | Y | Y | Y | Y | +| `latin1_bin` | Y | Y | Y | Y | +| `binary` | Y | Y | Y | Y | +| **Indexing and constraints** | **5.2** | **5.1** | **5.0** | **4.0** | +| [Expression indexes](/sql-statements/sql-statement-create-index.md#expression-index) | Experimental | Experimental | Experimental | Experimental | +| 
[Columnar storage (TiFlash)](/tiflash/tiflash-overview.md) | Y | Y | Y | Y | +| [RocksDB engine](/storage-engine/rocksdb-overview.md) | Y | Y | Y | Y | +| [Titan plugin](/storage-engine/titan-overview.md) | Y | Y | Y | Y | +| [Invisible indexes](/sql-statements/sql-statement-add-index.md) | Y | Y | Y | N | +| [Composite `PRIMARY KEY`](/constraints.md) | Y | Y | Y | Y | +| [Unique indexes](/constraints.md) | Y | Y | Y | Y | +| [Clustered index on integer `PRIMARY KEY`](/constraints.md) | Y | Y | Y | Y | +| [Clustered index on composite or non-integer key](/constraints.md) | Y | Y | Y | N | +| **SQL statements** [^2] | **5.2** | **5.1** | **5.0** | **4.0** | +| Basic `SELECT`, `INSERT`, `UPDATE`, `DELETE`, `REPLACE` | Y | Y | Y | Y | +| `INSERT ON DUPLICATE KEY UPDATE` | Y | Y | Y | Y | +| `LOAD DATA INFILE` | Y | Y | Y | Y | +| `SELECT INTO OUTFILE` | Y | Y | Y | Y | +| `INNER JOIN`, `LEFT\|RIGHT [OUTER] JOIN` | Y | Y | Y | Y | +| `UNION`, `UNION ALL` | Y | Y | Y | Y | +| [`EXCEPT` and `INTERSECT` operators](/functions-and-operators/set-operators.md) | Y | Y | Y | N | +| `GROUP BY`, `ORDER BY` | Y | Y | Y | Y | +| [Window Functions](/functions-and-operators/window-functions.md) | Y | Y | Y | Y | +| [Common Table Expressions (CTE)](/sql-statements/sql-statement-with.md) | Y | Y | N | N | +| `START TRANSACTION`, `COMMIT`, `ROLLBACK` | Y | Y | Y | Y | +| [`EXPLAIN`](/sql-statements/sql-statement-explain.md) | Y | Y | Y | Y | +| [`EXPLAIN ANALYZE`](/sql-statements/sql-statement-explain-analyze.md) | Y | Y | Y | Y | +| [User-defined variables](/user-defined-variables.md) | Experimental | Experimental | Experimental | Experimental | +| **Advanced SQL Features** | **5.2** | **5.1** | **5.0** | **4.0** | +| [Prepared statement cache](/sql-prepare-plan-cache.md) | Experimental | Experimental | Experimental | Experimental | +| [SQL plan management (SPM)](/sql-plan-management.md) | Y | Y | Y | Y | +| [Coprocessor cache](/coprocessor-cache.md) | Y | Y | Y | Experimental | +| [Stale Read](/stale-read.md) | Y | Y | N | N | +| [Follower reads](/follower-read.md) | Y | Y | Y | Y | +| [Read historical data (tidb_snapshot)](/read-historical-data.md) | Y | Y | Y | Y | +| [Optimizer hints](/optimizer-hints.md) | Y | Y | Y | Y | +| [MPP Execution Engine](/explain-mpp.md) | Y | Y | Y | N | +| [Index Merge Join](/explain-index-merge.md) | Experimental | Experimental | Experimental | Experimental | +| **Data definition language (DDL)** | **5.2** | **5.1** | **5.0** | **4.0** | +| Basic `CREATE`, `DROP`, `ALTER`, `RENAME`, `TRUNCATE` | Y | Y | Y | Y | +| [Generated columns](/generated-columns.md) | Experimental | Experimental | Experimental | Experimental | +| [Views](/views.md) | Y | Y | Y | Y | +| [Sequences](/sql-statements/sql-statement-create-sequence.md) | Y | Y | Y | Y | +| [Auto increment](/auto-increment.md) | Y | Y | Y | Y | +| [Auto random](/auto-random.md) | Y | Y | Y | Y | +| [DDL algorithm assertions](/sql-statements/sql-statement-alter-table.md) | Y | Y | Y | Y | +| Multi schema change: add column(s) | Y | Y | Y | N | +| [Change column type](/sql-statements/sql-statement-modify-column.md) | Y | Y | N | N | +| **Transactions** | **5.2** | **5.1** | **5.0** | **4.0** | +| [Async commit](/system-variables.md#tidb_enable_async_commit-new-in-v50) | Y | Y | Y | N | +| [1PC](/system-variables.md#tidb_enable_1pc-new-in-v50) | Y | Y | Y | N | +| [Large transactions (10GB)](/transaction-overview.md#transaction-size-limit) | Y | Y | Y | Y | +| [Pessimistic transactions](/pessimistic-transaction.md) | Y | Y | Y |
Y | +| [Optimistic transactions](/optimistic-transaction.md) | Y | Y | Y | Y | +| [Repeatable-read isolation (snapshot isolation)](/transaction-isolation-levels.md) | Y | Y | Y | Y | +| [Read-committed isolation](/transaction-isolation-levels.md) | Y | Y | Y | Y | +| **Partitioning** | **5.2** | **5.1** | **5.0** | **4.0** | +| [Range partitioning](/partitioned-table.md) | Y | Y | Y | Y | +| [Hash partitioning](/partitioned-table.md) | Y | Y | Y | Y | +| [List partitioning](/partitioned-table.md) | Experimental | Experimental | Experimental | N | +| [List COLUMNS partitioning](/partitioned-table.md) | Experimental | Experimental | Experimental | N | +| [`EXCHANGE PARTITION`](/partitioned-table.md) | Experimental | Experimental | Experimental | N | +| [Dynamic Pruning](/partitioned-table.md#dynamic-pruning-mode) | Experimental | Experimental | N | N | +| **Statistics** | **5.2** | **5.1** | **5.0** | **4.0** | +| [CMSketch](/statistics.md) | Deprecated | Deprecated | Deprecated | Y | +| [Histograms](/statistics.md) | Y | Y | Y | Y | +| [Extended statistics (multiple columns)](/statistics.md) | Experimental | Experimental | Experimental | N | +| [Statistics Feedback](/statistics.md#automatic-update) | Experimental | Experimental | Experimental | Experimental | +| [Fast Analyze](/system-variables.md#tidb_enable_fast_analyze) | Experimental | Experimental | Experimental | Experimental | +| **Security** | **5.2** | **5.1** | **5.0** | **4.0** | +| [Transport layer security (TLS)](/enable-tls-between-clients-and-servers.md) | Y | Y | Y | Y | +| [Encryption at rest (TDE)](/encryption-at-rest.md) | Y | Y | Y | Y | +| [Role-based authentication (RBAC)](/role-based-access-control.md) | Y | Y | Y | Y | +| [Certificate-based authentication](/certificate-authentication.md) | Y | Y | Y | Y | +| `caching_sha2_password` authentication | Y | N | N | N | +| [MySQL compatible `GRANT` system](/privilege-management.md) | Y | Y | Y | Y | +| [Dynamic Privileges](/privilege-management.md#dynamic-privileges) | Y | Y | N | N | +| [Security Enhanced Mode](/system-variables.md#tidb_enable_enhanced_security) | Y | Y | N | N | +| [Redacted Log Files](/log-redaction.md) | Y | Y | Y | N | +| **Data import and export** | **5.2** | **5.1** | **5.0** | **4.0** | +| [Fast Importer (TiDB Lightning)](/tidb-lightning/tidb-lightning-overview.md) | Y | Y | Y | Y | +| mydumper logical dumper | Deprecated | Deprecated | Deprecated | Deprecated | +| [Dumpling logical dumper](/dumpling-overview.md) | Y | Y | Y | Y | +| [Transactional `LOAD DATA`](/sql-statements/sql-statement-load-data.md) | Y | Y | Y | N | +| [Database migration toolkit (DM)](/migration-overview.md) | Y | Y | Y | Y | +| [TiDB Binlog](/tidb-binlog/tidb-binlog-overview.md) | Deprecated | Deprecated | Deprecated | Deprecated | +| [Change data capture (CDC)](/ticdc/ticdc-overview.md) | Y | Y | Y | Y | +| **Management, observability and tools** | **5.2** | **5.1** | **5.0** | **4.0** | +| [TiDB Dashboard](/dashboard/dashboard-intro.md) | Y | Y | Y | Y | +| [SQL diagnostics](/information-schema/information-schema-sql-diagnostics.md) | Experimental | Experimental | Experimental | Experimental | +| [Information schema](/information-schema/information-schema.md) | Y | Y | Y | Y | +| [Metrics schema](/metrics-schema.md) | Y | Y | Y | Y | +| [Statements summary tables](/statement-summary-tables.md) | Y | Y | Y | Y | +| [Slow query log](/identify-slow-queries.md) | Y | Y | Y | Y | +| [TiUP deployment](/tiup/tiup-overview.md) | Y | Y | Y | Y | +| Ansible deployment | N | N | N
| Deprecated | +| [Kubernetes operator](https://docs.pingcap.com/tidb-in-kubernetes/) | Y | Y | Y | Y | +| [Built-in physical backup](/br/backup-and-restore-use-cases.md) | Y | Y | Y | Y | +| Top SQL | Y | N | N | N | +| [Global Kill](/sql-statements/sql-statement-kill.md) | Experimental | Experimental | Experimental | Experimental | +| [Lock View](/information-schema/information-schema-data-lock-waits.md) | Y | Experimental | Experimental | Experimental | +| [`SHOW CONFIG`](/sql-statements/sql-statement-show-config.md) | Experimental | Experimental | Experimental | Experimental | +| [`SET CONFIG`](/dynamic-config.md) | Experimental | Experimental | Experimental | Experimental | + +[^1]: TiDB incorrectly treats latin1 as a subset of utf8. See [TiDB #18955](https://github.com/pingcap/tidb/issues/18955) for more details. + +[^2]: See [Statement Reference](/sql-statements/sql-statement-select.md) for a full list of SQL statements supported. diff --git a/br/backup-and-restore-faq.md b/br/backup-and-restore-faq.md index d9abf4c2aaeb4..c69435fad3bcd 100644 --- a/br/backup-and-restore-faq.md +++ b/br/backup-and-restore-faq.md @@ -162,3 +162,11 @@ BR does not back up statistics (except in v4.0.9). Therefore, after restoring th In v4.0.9, BR backs up statistics by default, which consumes too much memory. To ensure that the backup process goes well, the backup for statistics is disabled by default starting from v4.0.10. If you do not execute `ANALYZE` on the table, TiDB will fail to select the optimized execution plan due to inaccurate statistics. If query performance is not a key concern, you can ignore `ANALYZE`. + +## Can I use multiple BR processes at the same time to restore the data of a single cluster? + +**It is strongly not recommended** to use multiple BR processes at the same time to restore the data of a single cluster for the following reasons: + ++ When BR restores data, it modifies some global configurations of PD. Therefore, if you use multiple BR processes for data restore at the same time, these configurations might be mistakenly overwritten and cause abnormal cluster status. ++ BR consumes a lot of cluster resources to restore data, so in fact, running BR processes in parallel improves the restore speed only to a limited extent. ++ There has been no test for running multiple BR processes in parallel for data restore, so it is not guaranteed to succeed. \ No newline at end of file diff --git a/br/use-br-command-line-tool.md b/br/use-br-command-line-tool.md index f9f5ed28495a7..27a559572d1a6 100644 --- a/br/use-br-command-line-tool.md +++ b/br/use-br-command-line-tool.md @@ -307,6 +307,8 @@ To restore the cluster data, use the `br restore` command. You can add the `full > - Where each peer is scattered to during restore is random. We don't know in advance which node will read which file. > > These can be avoided using shared storage, for example mounting an NFS on the local path, or using S3. With network storage, every node can automatically read every SST file, so these caveats no longer apply. +> +> Also, note that you can only run one restore operation for a single cluster at the same time. Otherwise, unexpected behaviors might occur. For details, see [FAQ](/br/backup-and-restore-faq.md#can-i-use-multiple-br-processes-at-the-same-time-to-restore-the-data-of-a-single-cluster). 
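As a quick illustration of the note above, a restore is issued as a single BR task against one cluster and allowed to finish before any other BR process is started. The following is only a sketch: the PD address, backup location, and log file name are placeholders rather than values taken from this patch.

```shell
# Run a single full restore against one cluster; do not start another BR process
# until this one finishes. The PD address and backup path are illustrative only.
br restore full \
    --pd "10.0.1.4:2379" \
    --storage "local:///data/backup/2021-09-09" \
    --log-file restore-full.log
```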
### Restore all the backup data diff --git a/command-line-flags-for-tidb-configuration.md b/command-line-flags-for-tidb-configuration.md index a6a7e05f50b5d..13b409dac8a2e 100644 --- a/command-line-flags-for-tidb-configuration.md +++ b/command-line-flags-for-tidb-configuration.md @@ -14,12 +14,6 @@ When you start the TiDB cluster, you can use command-line options or environment - Default: "" - This address must be accessible by the rest of the TiDB cluster and the user. -## `--binlog-socket` - -- The TiDB services use the unix socket file for internal connections, such as the Pump service -- Default: "" -- You can use "/tmp/pump.sock" to accept the communication of Pump unix socket file. - ## `--config` - The configuration file @@ -103,11 +97,6 @@ When you start the TiDB cluster, you can use command-line options or environment - Default: "/tmp/tidb" - You can use `tidb-server --store=unistore --path=""` to enable a pure in-memory TiDB. -## `--tmp-storage-path` - -+ TiDB's temporary storage path -+ Default: `/tidb/tmp-storage` - ## `--proxy-protocol-networks` - The list of proxy server's IP addresses allowed to connect to TiDB using the [PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt). diff --git a/dashboard/dashboard-access.md b/dashboard/dashboard-access.md index 7bc968f02e23b..0b9061da2f93a 100644 --- a/dashboard/dashboard-access.md +++ b/dashboard/dashboard-access.md @@ -28,7 +28,7 @@ You can use TiDB Dashboard in the following common desktop browsers of a relativ ## Sign in -For the first-time access, TiDB Dashboard displays the user sign in interface, as shown in the image below. You can sign in using the TiDB `root` account. +For the first-time access, TiDB Dashboard displays the user sign in interface, as shown in the image below. You can sign in using the TiDB `root` account. By default, the `root` password is empty. ![Login interface](/media/dashboard/dashboard-access-login.png) diff --git a/download-ecosystem-tools.md b/download-ecosystem-tools.md index 7b54fd01c63ba..f347a4b7ed20a 100644 --- a/download-ecosystem-tools.md +++ b/download-ecosystem-tools.md @@ -18,7 +18,7 @@ If you want to download the latest version of [TiDB Binlog](/tidb-binlog/tidb-bi > **Note:** > -> `{version}` in the above download link indicates the version number of TiDB. For example, the download link for `v5.2.0` is `https://download.pingcap.org/tidb-v5.2.0-linux-amd64.tar.gz`. +> `{version}` in the above download link indicates the version number of TiDB. For example, the download link for `v5.2.1` is `https://download.pingcap.org/tidb-v5.2.1-linux-amd64.tar.gz`. ## TiDB Lightning @@ -30,7 +30,7 @@ Download [TiDB Lightning](/tidb-lightning/tidb-lightning-overview.md) by using t > **Note:** > -> `{version}` in the above download link indicates the version number of TiDB Lightning. For example, the download link for `v5.2.0` is `https://download.pingcap.org/tidb-toolkit-v5.2.0-linux-amd64.tar.gz`. +> `{version}` in the above download link indicates the version number of TiDB Lightning. For example, the download link for `v5.2.1` is `https://download.pingcap.org/tidb-toolkit-v5.2.1-linux-amd64.tar.gz`. ## BR (backup and restore) @@ -42,7 +42,7 @@ Download [BR](/br/backup-and-restore-tool.md) by using the download link in the > **Note:** > -> `{version}` in the above download link indicates the version number of BR. For example, the download link for `v5.0.0-beta` is `http://download.pingcap.org/tidb-toolkit-v5.0.0-beta-linux-amd64.tar.gz`. 
+> `{version}` in the above download link indicates the version number of BR. For example, the download link for `v5.2.1` is `https://download.pingcap.org/tidb-toolkit-v5.2.1-linux-amd64.tar.gz`. ## TiDB DM (Data Migration) Download [Dumpling](/dumpling-overview.md) from the links below: > **Note:** > -> The `{version}` in the download link is the version number of Dumpling. For example, the link for downloading the `v5.2.0` version of Dumpling is `https://download.pingcap.org/tidb-toolkit-v5.2.0-linux-amd64.tar.gz`. You can view the currently released versions in [Dumpling Releases](https://github.com/pingcap/dumpling/releases). +> The `{version}` in the download link is the version number of Dumpling. For example, the link for downloading the `v5.2.1` version of Dumpling is `https://download.pingcap.org/tidb-toolkit-v5.2.1-linux-amd64.tar.gz`. You can view the currently released versions in [Dumpling Releases](https://github.com/pingcap/dumpling/releases). > > Dumpling supports arm64 linux. You can replace `amd64` in the download link with `arm64`, which means the `arm64` version of Dumpling. diff --git a/encryption-at-rest.md b/encryption-at-rest.md index 554100b9f3a12..923d4edcaa99b 100644 --- a/encryption-at-rest.md +++ b/encryption-at-rest.md @@ -4,23 +4,47 @@ summary: Learn how to enable encryption at rest to protect sensitive data. aliases: ['/docs/dev/encryption at rest/'] --- -# Encryption at Rest New in v4.0.0 +# Encryption at Rest + +> **Note:** +> +> If your cluster is deployed on AWS and uses EBS storage, it is recommended to use EBS encryption. See [AWS documentation - EBS Encryption](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html). If you are using non-EBS storage on AWS, such as the local NVMe storage, it is recommended to use encryption at rest as introduced in this document. Encryption at rest means that data is encrypted when it is stored. For databases, this feature is also referred to as TDE (transparent data encryption). This is opposed to encryption in flight (TLS) or encryption in use (rarely used). Different things could be doing encryption at rest (SSD drive, file system, cloud vendor, etc), but by having TiKV do the encryption before storage this helps ensure that attackers must authenticate with the database to gain access to data. For example, when an attacker gains access to the physical machine, data cannot be accessed by copying files on disk. -TiKV supports encryption at rest starting from v4.0.0. The feature allows TiKV to transparently encrypt data files using [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) in [CTR](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation) mode. To enable encryption at rest, an encryption key must be provided by user and this key is called master key. The master key can be provided via AWS KMS (recommended), or specifying a key stored as plaintext in a file. TiKV automatically rotates data keys that it used to encrypt actual data files. Manually rotating the master key can be done occasionally. Note that encryption at rest only encrypts data at rest (namely, on disk) and not while data is transferred over network. It is advised to use TLS together with encryption at rest. +## Encryption support in different TiDB components + +In a TiDB cluster, different components use different encryption methods. This section introduces the encryption support in different TiDB components such as TiKV, TiFlash, PD, and Backup & Restore (BR).
+ +When a TiDB cluster is deployed, the majority of user data is stored on TiKV and TiFlash nodes. Some metadata is stored on PD nodes (for example, secondary index keys used as TiKV Region boundaries). To get the full benefits of encryption at rest, you need to enable encryption for all components. Backups, log files, and data transmitted over the network should also be considered when you implement encryption. + +### TiKV + +TiKV supports encryption at rest. This feature allows TiKV to transparently encrypt data files using [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) in [CTR](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation) mode. To enable encryption at rest, an encryption key must be provided by the user and this key is called master key. TiKV automatically rotates data keys that it used to encrypt actual data files. Manually rotating the master key can be done occasionally. Note that encryption at rest only encrypts data at rest (namely, on disk) and not while data is transferred over network. It is advised to use TLS together with encryption at rest. + +Optionally, you can use AWS KMS for both cloud and on-premises deployments. You can also supply the plaintext master key in a file. + +TiKV currently does not exclude encryption keys and user data from core dumps. It is advised to disable core dumps for the TiKV process when using encryption at rest. This is not currently handled by TiKV itself. + +TiKV tracks encrypted data files using the absolute path of the files. As a result, once encryption is turned on for a TiKV node, the user should not change data file paths configuration such as `storage.data-dir`, `raftstore.raftdb-path`, `rocksdb.wal-dir` and `raftdb.wal-dir`. + +### TiFlash + +TiFlash supports encryption at rest. Data keys are generated by TiFlash. All files (including data files, schema files, and temporary files) written into TiFlash (including TiFlash Proxy) are encrypted using the current data key. The encryption algorithms, the encryption configuration (in the `tiflash-learner.toml` file) supported by TiFlash, and the meanings of monitoring metrics are consistent with those of TiKV. + +If you have deployed TiFlash with Grafana, you can check the **TiFlash-Proxy-Details** -> **Encryption** panel. + +### PD -Also from v4.0.0, BR supports S3 server-side encryption (SSE) when backing up to S3. A customer owned AWS KMS key can also be used together with S3 server-side encryption. +Encryption-at-rest for PD is an experimental feature, which is configured in the same way as in TiKV. -## Warnings +### Backups with BR -The current version of TiKV encryption has the following drawbacks. Be aware of these drawbacks before you get started: +BR supports S3 server-side encryption (SSE) when backing up data to S3. A customer-owned AWS KMS key can also be used together with S3 server-side encryption. See [BR S3 server-side encryption](/encryption-at-rest.md#br-s3-server-side-encryption) for details. -* When a TiDB cluster is deployed, the majority of user data is stored in TiKV nodes, and that data will be encrypted when encryption is enabled. However, a small amount of user data is stored in PD nodes as metadata (for example, secondary index keys used as TiKV region boundaries). As of v4.0.0, PD doesn't support encryption at rest. It is recommended to use storage-level encryption (for example, file system encryption) to help protect sensitive data stored in PD. -* TiFlash supports encryption at rest since v4.0.5. 
For details, refer to [Encryption at Rest for TiFlash](#encryption-at-rest-for-tiflash-new-in-v405). When deploying TiKV with TiFlash earlier than v4.0.5, data stored in TiFlash is not encrypted. -* TiKV currently does not exclude encryption keys and user data from core dumps. It is advised to disable core dumps for the TiKV process when using encryption at rest. This is not currently handled by TiKV itself. -* TiKV tracks encrypted data files using the absolute path of the files. As a result, once encryption is turned on for a TiKV node, the user should not change data file paths configuration such as `storage.data-dir`, `raftstore.raftdb-path`, `rocksdb.wal-dir` and `raftdb.wal-dir`. -* TiKV, TiDB, and PD info logs might contain user data for debugging purposes. The info log and this data in it are not encrypted. It is recommended to enable [log redaction](/log-redaction.md). +### Logging + +TiKV, TiDB, and PD info logs might contain user data for debugging purposes. The info log and this data in it are not encrypted. It is recommended to enable [log redaction](/log-redaction.md). ## TiKV encryption at rest @@ -29,24 +53,42 @@ The current version of TiKV encryption has the following drawbacks. Be aware of TiKV currently supports encrypting data using AES128, AES192 or AES256, in CTR mode. TiKV uses envelope encryption. As a result, two types of keys are used in TiKV when encryption is enabled. * Master key. The master key is provided by user and is used to encrypt the data keys TiKV generates. Management of master key is external to TiKV. -* Data key. The data key is generated by TiKV and is the key actually used to encrypt data. The data key is automatically rotated by TiKV. +* Data key. The data key is generated by TiKV and is the key actually used to encrypt data. The same master key can be shared by multiple instances of TiKV. The recommended way to provide a master key in production is via AWS KMS. Create a customer master key (CMK) through AWS KMS, and then provide the CMK key ID to TiKV in the configuration file. The TiKV process needs access to the KMS CMK while it is running, which can be done by using an [IAM role](https://aws.amazon.com/iam/). If TiKV fails to get access to the KMS CMK, it will fail to start or restart. Refer to AWS documentation for [KMS](https://docs.aws.amazon.com/kms/index.html) and [IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) usage. Alternatively, if using custom key is desired, supplying the master key via file is also supported. The file must contain a 256 bits (or 32 bytes) key encoded as hex string, end with a newline (namely, `\n`), and contain nothing else. Persisting the key on disk, however, leaks the key, so the key file is only suitable to be stored on the `tempfs` in RAM. -Data keys are generated by TiKV and passed to the underlying storage engine (namely, RocksDB). All files written by RocksDB, including SST files, WAL files, and the MANIFEST file, are encrypted by the current data key. Other temporary files used by TiKV that may include user data are also encrypted using the same data key. Data keys are automatically rotated by TiKV every week by default, but the period is configurable. On key rotation, TiKV does not rewrite all existing files to replace the key, but RocksDB compaction are expected to rewrite old data into new data files, with the most recent data key, if the cluster gets constant write workload. 
TiKV keeps track of the key and encryption method used to encrypt each of the files and use the information to decrypt the content on reads. +Data keys are passed to the underlying storage engine (namely, RocksDB). All files written by RocksDB, including SST files, WAL files, and the MANIFEST file, are encrypted by the current data key. Other temporary files used by TiKV that may include user data are also encrypted using the same data key. Data keys are automatically rotated by TiKV every week by default, but the period is configurable. On key rotation, TiKV does not rewrite all existing files to replace the key, but RocksDB compaction are expected to rewrite old data into new data files, with the most recent data key, if the cluster gets constant write workload. TiKV keeps track of the key and encryption method used to encrypt each of the files and use the information to decrypt the content on reads. Regardless of data encryption method, data keys are encrypted using AES256 in GCM mode for additional authentication. This required the master key to be 256 bits (32 bytes), when passing from file instead of KMS. +### Key creation + +To create a key on AWS, follow these steps: + +1. Go to the [AWS KMS](https://console.aws.amazon.com/kms) on the AWS console. +2. Make sure that you have selected the correct region on the top right corner of your console. +3. Click **Create key** and select **Symmetric** as the key type. +4. Set an alias for the key. + +You can also perform the operations using the AWS CLI: + +```shell +aws --region us-west-2 kms create-key +aws --region us-west-2 kms create-alias --alias-name "alias/tidb-tde" --target-key-id 0987dcba-09fe-87dc-65ba-ab0987654321 +``` + +The `--target-key-id` to enter in the second command is in the output of the first command. + ### Configure encryption -To enable encryption, you can add the encryption section in TiKV's configuration file: +To enable encryption, you can add the encryption section in the configuration files of TiKV and PD: ``` [security.encryption] -data-encryption-method = aes128-ctr -data-key-rotation-period = 7d +data-encryption-method = "aes128-ctr" +data-key-rotation-period = "168h" # 7 days ``` Possible values for `data-encryption-method` are "aes128-ctr", "aes192-ctr", "aes256-ctr" and "plaintext". The default value is "plaintext", which means encryption is not turned on. `data-key-rotation-period` defines how often TiKV rotates the data key. Encryption can be turned on for a fresh TiKV cluster, or an existing TiKV cluster, though only data written after encryption is enabled is guaranteed to be encrypted. To disable encryption, remove `data-encryption-method` in the configuration file, or reset it to "plaintext", and restart TiKV. To change encryption method, update `data-encryption-method` in the configuration file and restart TiKV. @@ -61,7 +103,9 @@ region = "us-west-2" endpoint = "https://kms.us-west-2.amazonaws.com" ``` -The `key-id` specifies the key id for the KMS CMK. The `region` is the AWS region name for the KMS CMK. The `endpoint` is optional and doesn't need to be specified normally, unless you are using a AWS KMS compatible service from a non-AWS vendor. +The `key-id` specifies the key ID for the KMS CMK. The `region` is the AWS region name for the KMS CMK. 
The `endpoint` is optional and you do not need to specify it normally unless you are using an AWS KMS-compatible service from a non-AWS vendor or need to use a [VPC endpoint for KMS](https://docs.aws.amazon.com/kms/latest/developerguide/kms-vpc-endpoint.html). + +You can also use [multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in AWS. For this, you need to set up a primary key in a specific region and add replica keys in the regions you require. To specify a master key that's stored in a file, the master key configuration would look like the following: @@ -141,9 +185,3 @@ When restoring the backup, both `--s3.sse` and `--s3.sse-kms-key-id` should NOT ``` ./br restore full --pd --storage "s3:/// --s3.region " ``` - -## Encryption at rest for TiFlash New in v4.0.5 - -TiFlash supports encryption at rest since v4.0.5. Data keys are generated by TiFlash. All files (including data files, schema files, and temporary files) written into TiFlash (including TiFlash Proxy) are encrypted using the current data key. The encryption algorithms, the encryption configuration (in the `tiflash-learner.toml` file) supported by TiFlash, and the meanings of monitoring metrics are consistent with those of TiKV. - -If you have deployed TiFlash with Grafana, you can check the **TiFlash-Proxy-Details** -> **Encryption** panel. diff --git a/grafana-tikv-dashboard.md b/grafana-tikv-dashboard.md index 03bca8a699a35..89d667111fd9e 100644 --- a/grafana-tikv-dashboard.md +++ b/grafana-tikv-dashboard.md @@ -100,7 +100,14 @@ This document provides a detailed description of these key metrics on the **TiKV ## Raft process -- Ready handled: The count of handled ready operations per second +- Ready handled: The number of handled ready operations per type per second + - count: The number of handled ready operations per second + - has_ready_region: The number of Regions that have ready per second + - pending_region: The operations per second of the Regions being checked for whether it has ready. This metric is deprecated since v3.0.0 + - message: The number of messages that the ready operations per second contain + - append: The number of Raft log entries that the ready operations per second contain + - commit: The number of committed Raft log entries that the ready operations per second contain + - snapshot: The number of snapshots that the ready operations per second contains - 0.99 Duration of Raft store events: The time consumed by Raftstore events (P99) - Process ready duration: The time consumed for processes to be ready in Raft - Process ready duration per server: The time consumed for peer processes to be ready in Raft per TiKV instance. It should be less than 2 seconds (P99.99). diff --git a/optimistic-transaction.md b/optimistic-transaction.md index 2403dbc608fa5..c1ffae47f36ba 100644 --- a/optimistic-transaction.md +++ b/optimistic-transaction.md @@ -65,7 +65,7 @@ However, TiDB transactions also have the following disadvantages: ## Transaction retries -In the optimistic transaction model, transactions might fail to be committed because of write–write conflict in heavy contention scenarios. TiDB uses optimistic concurrency control by default, whereas MySQL applies pessimistic concurrency control. This means that MySQL adds locks during SQL execution, and its Repeatable Read isolation level allows for non-repeatable reads, so commits generally do not encounter exceptions. 
To lower the difficulty of adapting applications, TiDB provides an internal retry mechanism. +In the optimistic transaction model, transactions might fail to be committed because of write–write conflict in heavy contention scenarios. TiDB uses optimistic concurrency control by default, whereas MySQL applies pessimistic concurrency control. This means that MySQL adds locks during the execution of write-type SQL statements, and its Repeatable Read isolation level allows for current reads, so commits generally do not encounter exceptions. To lower the difficulty of adapting applications, TiDB provides an internal retry mechanism. ### Automatic retry diff --git a/overview.md b/overview.md index 4c5a66ce0ca40..c4f69d1e6d40b 100644 --- a/overview.md +++ b/overview.md @@ -20,7 +20,7 @@ aliases: ['/docs/dev/key-features/','/tidb/dev/key-features','/docs/dev/overview - **Real-time HTAP** - TiDB provides two storage engines: [TiKV](https://tikv.org/), a row-based storage engine, and [TiFlash](/tiflash/tiflash-overview.md), a columnar storage engine. TiFlash uses the Multi-Raft Learner protocol to replicate data from TiKV in real time, ensuring that the data between the TiKV row-based storage engine and the TiFlash columnar storage engine are consistent. TiKV and TiFlash can be deployed on different machines as needed to solve the problem of HTAP resource isolation. + TiDB provides two storage engines: [TiKV](/tikv-overview.md), a row-based storage engine, and [TiFlash](/tiflash/tiflash-overview.md), a columnar storage engine. TiFlash uses the Multi-Raft Learner protocol to replicate data from TiKV in real time, ensuring that the data between the TiKV row-based storage engine and the TiFlash columnar storage engine are consistent. TiKV and TiFlash can be deployed on different machines as needed to solve the problem of HTAP resource isolation. - **Cloud-native distributed database** @@ -28,7 +28,7 @@ aliases: ['/docs/dev/key-features/','/tidb/dev/key-features','/docs/dev/overview - **Compatible with the MySQL 5.7 protocol and MySQL ecosystem** - TiDB is compatible with the MySQL 5.7 protocol, common features of MySQL, and the MySQL ecosystem. To migrate your applications to TiDB, you do not need to change a single line of code in many cases or only need to modify a small amount of code. In addition, TiDB provides a series of [data migration tools](/migration-overview.md) to help migrate application data easily into TiDB. + TiDB is compatible with the MySQL 5.7 protocol, common features of MySQL, and the MySQL ecosystem. To migrate your applications to TiDB, you do not need to change a single line of code in many cases or only need to modify a small amount of code. In addition, TiDB provides a series of [data migration tools](/ecosystem-tool-user-guide.md) to help easily migrate application data into TiDB. ## Use cases diff --git a/pd-control.md b/pd-control.md index e4bcda0cb7577..ea12d73800a1b 100644 --- a/pd-control.md +++ b/pd-control.md @@ -28,7 +28,7 @@ If you want to download the latest version of `pd-ctl`, directly download the Ti > **Note:** > -> `{version}` indicates the version number of TiDB. For example, if `{version}` is `v5.2.0`, the package download link is `https://download.pingcap.org/tidb-v5.2.0-linux-amd64.tar.gz`. +> `{version}` indicates the version number of TiDB. For example, if `{version}` is `v5.2.1`, the package download link is `https://download.pingcap.org/tidb-v5.2.1-linux-amd64.tar.gz`. 
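For reference, fetching `pd-ctl` from the TiDB package and pointing it at a cluster might look like the following sketch. It assumes the archive unpacks into a directory whose `bin/` subdirectory contains `pd-ctl`; the version and the PD endpoint are placeholders.

```shell
# Download the TiDB package that ships pd-ctl, unpack it, and query the stores.
# Replace the version and the PD address with the ones used by your cluster.
wget https://download.pingcap.org/tidb-v5.2.1-linux-amd64.tar.gz
tar -xzf tidb-v5.2.1-linux-amd64.tar.gz
./tidb-v5.2.1-linux-amd64/bin/pd-ctl -u http://127.0.0.1:2379 store
```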
### Compile from source code @@ -179,7 +179,7 @@ Usage: } >> config show cluster-version // Display the current version of the cluster, which is the current minimum version of TiKV nodes in the cluster and does not correspond to the binary version. -"5.2.0" +"5.2.1" ``` - `max-snapshot-count` controls the maximum number of snapshots that a single store receives or sends out at the same time. The scheduler is restricted by this configuration to avoid taking up normal application resources. When you need to improve the speed of adding replicas or balancing, increase this value. diff --git a/pd-recover.md b/pd-recover.md index 7bbb1e71d230b..2630f4a8679ce 100644 --- a/pd-recover.md +++ b/pd-recover.md @@ -27,7 +27,7 @@ To download the latest version of PD Recover, directly download the TiDB package > **Note:** > -> `{version}` indicates the version number of TiDB. For example, if `{version}` is `v5.2.0`, the package download link is `https://download.pingcap.org/tidb-v5.2.0-linux-amd64.tar.gz`. +> `{version}` indicates the version number of TiDB. For example, if `{version}` is `v5.2.1`, the package download link is `https://download.pingcap.org/tidb-v5.2.1-linux-amd64.tar.gz`. ## Quick Start diff --git a/post-installation-check.md b/post-installation-check.md index a44ba63000319..b97f3b92fadfa 100644 --- a/post-installation-check.md +++ b/post-installation-check.md @@ -53,9 +53,11 @@ Log in to the database by running the following command: {{< copyable "shell-regular" >}} ```shell -mysql -u root -h 10.0.1.4 -P 4000 +mysql -u root -h ${tidb_server_host_IP_address} -P 4000 ``` +`${tidb_server_host_IP_address}` is one of the IP addresses set for `tidb_servers` when you [initialize the cluster topology file](/production-deployment-using-tiup.md#step-3-initialize-cluster-topology-file), such as `10.0.1.7`. + The following information indicates successful login: ```sql diff --git a/production-deployment-using-tiup.md b/production-deployment-using-tiup.md index 73556d6e50246..e4ed3589af37b 100644 --- a/production-deployment-using-tiup.md +++ b/production-deployment-using-tiup.md @@ -306,13 +306,13 @@ Then execute the `deploy` command to deploy the TiDB cluster: {{< copyable "shell-regular" >}} ```shell -tiup cluster deploy tidb-test v5.2.0 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa] +tiup cluster deploy tidb-test v5.2.1 ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa] ``` In the above command: - The name of the deployed TiDB cluster is `tidb-test`. -- You can see the latest supported versions by running `tiup list tidb`. This document takes `v5.2.0` as an example. +- You can see the latest supported versions by running `tiup list tidb`. This document takes `v5.2.1` as an example. - The initialization configuration file is `topology.yaml`. - `--user root`: Log in to the target machine through the `root` key to complete the cluster deployment, or you can use other users with `ssh` and `sudo` privileges to complete the deployment. - `[-i]` and `[-p]`: optional. If you have configured login to the target machine without password, these parameters are not required. If not, choose one of the two parameters. `[-i]` is the private key of the `root` user (or other users specified by `--user`) that has access to the target machine. `[-p]` is used to input the user password interactively. @@ -334,7 +334,7 @@ TiUP supports managing multiple TiDB clusters. 
The command above outputs informa Starting /home/tidb/.tiup/components/cluster/v1.5.0/cluster list Name User Version Path PrivateKey ---- ---- ------- ---- ---------- -tidb-test tidb v5.2.0 /home/tidb/.tiup/storage/cluster/clusters/tidb-test /home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa +tidb-test tidb v5.2.1 /home/tidb/.tiup/storage/cluster/clusters/tidb-test /home/tidb/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa ``` ## Step 6: Check the status of the deployed TiDB cluster diff --git a/quick-start-with-tidb.md b/quick-start-with-tidb.md index 11c340fe530d6..ab5d4c134b534 100644 --- a/quick-start-with-tidb.md +++ b/quick-start-with-tidb.md @@ -58,10 +58,10 @@ As a distributed system, a basic TiDB test cluster usually consists of 2 TiDB in {{< copyable "shell-regular" >}} ```shell - tiup playground v5.2.0 --db 2 --pd 3 --kv 3 --monitor + tiup playground v5.2.1 --db 2 --pd 3 --kv 3 --monitor ``` - The command downloads a version cluster to the local machine and starts it, such as v5.2.0. `--monitor` means that the monitoring component is also deployed. + The command downloads a version cluster to the local machine and starts it, such as v5.2.1. `--monitor` means that the monitoring component is also deployed. To view the latest version, run `tiup list tidb`. @@ -77,7 +77,7 @@ As a distributed system, a basic TiDB test cluster usually consists of 2 TiDB in > **Note:** > - > + Since v5.2.0, TiDB supports running `tiup playground` on the machine that uses the Apple M1 chip. + > + Since v5.2.1, TiDB supports running `tiup playground` on the machine that uses the Apple M1 chip. > + For the playground operated in this way, after the test deployment is finished, TiUP will clean up the original cluster data. You will get a new cluster after re-running the command. > + If you want the data to be persisted on storage,run `tiup --tag playground ...`. For details, refer to [TiUP Reference Guide](/tiup/tiup-reference.md#-t---tag). @@ -164,10 +164,10 @@ As a distributed system, a basic TiDB test cluster usually consists of 2 TiDB in {{< copyable "shell-regular" >}} ```shell - tiup playground v5.2.0 --db 2 --pd 3 --kv 3 --monitor + tiup playground v5.2.1 --db 2 --pd 3 --kv 3 --monitor ``` - The command downloads a version cluster to the local machine and starts it, such as v5.2.0. `--monitor` means that the monitoring component is also deployed. + The command downloads a version cluster to the local machine and starts it, such as v5.2.1. `--monitor` means that the monitoring component is also deployed. To view the latest version, run `tiup list tidb`. diff --git a/releases/release-5.0.0.md b/releases/release-5.0.0.md index 1c4575493084b..9cd384e80999a 100644 --- a/releases/release-5.0.0.md +++ b/releases/release-5.0.0.md @@ -68,6 +68,7 @@ In v5.0, the key new features or improvements are as follows: ### Others ++ Before the upgrade, check the value of the TiDB configuration [`feedback-probability`](/tidb-configuration-file.md#feedback-probability). If the value is not 0, the "panic in the recoverable goroutine" error will occur after the upgrade, but this error does not affect the upgrade. + Forbid conversion between `VARCHAR` type and `CHAR` type during the column type change to avoid data correctness issues. 
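For the `feedback-probability` check mentioned in the list above, one way to inspect the effective value before upgrading is to query the cluster configuration from a MySQL client. This is only a sketch: the host, port, and the name filter are assumptions rather than values taken from this patch.

```shell
# Inspect the effective TiDB configuration before the upgrade; if the value is
# not 0, set it to 0 first. The host and port below are placeholders.
mysql -h 10.0.1.4 -P 4000 -u root -e \
  "SHOW CONFIG WHERE type = 'tidb' AND name LIKE '%feedback-probability%';"
```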
## New features diff --git a/releases/release-5.1.0.md b/releases/release-5.1.0.md index 25d1e23efb0dd..b4d53dd2dff28 100644 --- a/releases/release-5.1.0.md +++ b/releases/release-5.1.0.md @@ -58,6 +58,7 @@ In v5.1, the key new features or improvements are as follows: ### Others +- Before the upgrade, check the value of the TiDB configuration [`feedback-probability`](/tidb-configuration-file.md#feedback-probability). If the value is not 0, the "panic in the recoverable goroutine" error will occur after the upgrade, but this error does not affect the upgrade. - Upgrade the Go compiler version of TiDB from go1.13.7 to go1.16.4, which improves the TiDB performance. If you are a TiDB developer, upgrade your Go compiler version to ensure a smooth compilation. - Avoid creating tables with clustered indexes in the cluster that uses TiDB Binlog during the TiDB rolling upgrade. - Avoid executing statements like `alter table ... modify column` or `alter table ... change column` during the TiDB rolling upgrade. diff --git a/releases/release-5.2.0.md b/releases/release-5.2.0.md index e00734404cea4..2dd60e6f69bd3 100644 --- a/releases/release-5.2.0.md +++ b/releases/release-5.2.0.md @@ -8,6 +8,10 @@ Release date: August 27, 2021 TiDB version: 5.2.0 +> **Warning:** +> +> Some known issues are found in this version, and these issues are fixed in new versions. It is recommended that you use the latest 5.2.x version. + In v5.2, the key new features and improvements are as follows: - Support using several functions in expression indexes to greatly improve query performance diff --git a/releases/release-5.2.1.md b/releases/release-5.2.1.md new file mode 100644 index 0000000000000..55e7ee9b1c904 --- /dev/null +++ b/releases/release-5.2.1.md @@ -0,0 +1,19 @@ +--- +title: TiDB 5.2.1 Release Notes +--- + +# TiDB 5.2.1 Release Notes + +Release date: September 9, 2021 + +TiDB version: 5.2.1 + +## Bug fixes + ++ TiDB + + - Fix an error that occurs during execution caused by the wrong execution plan. The wrong execution plan is caused by the shallow copy of schema columns when pushing down the aggregation operators on partitioned tables. [#27797](https://github.com/pingcap/tidb/issues/27797) [#26554](https://github.com/pingcap/tidb/issues/26554) + ++ TiKV + + - Fix the issue of unavailable TiKV caused by Raftstore deadlock when migrating Regions. The workaround is to disable the scheduling and restart the unavailable TiKV. 
[#10909](https://github.com/tikv/tikv/issues/10909) diff --git a/releases/release-notes.md b/releases/release-notes.md index f8adb3ae8ddf8..7082bf502c881 100644 --- a/releases/release-notes.md +++ b/releases/release-notes.md @@ -7,6 +7,7 @@ aliases: ['/docs/dev/releases/release-notes/','/docs/dev/releases/rn/'] ## 5.2 +- [5.2.1](/releases/release-5.2.1.md) - [5.2.0](/releases/release-5.2.0.md) ## 5.1 diff --git a/scale-tidb-using-tiup.md b/scale-tidb-using-tiup.md index f54411d01a795..d0de352bdf7a9 100644 --- a/scale-tidb-using-tiup.md +++ b/scale-tidb-using-tiup.md @@ -262,7 +262,7 @@ If you want to remove a TiKV node from the `10.0.1.5` host, take the following s ``` Starting /root/.tiup/components/cluster/v1.5.0/cluster display TiDB Cluster: - TiDB Version: v5.2.0 + TiDB Version: v5.2.1 ID Role Host Ports Status Data Dir Deploy Dir -- ---- ---- ----- ------ -------- ---------- 10.0.1.3:8300 cdc 10.0.1.3 8300 Up - deploy/cdc-8300 diff --git a/scripts/check-conflicts.py b/scripts/check-conflicts.py index 7d940ad4d9f0b..0f003e5f96597 100644 --- a/scripts/check-conflicts.py +++ b/scripts/check-conflicts.py @@ -42,7 +42,7 @@ single = [] lineNum = 0 if os.path.isfile(filename): - with open(filename,'r') as file: + with open(filename,'r', encoding='utf-8') as file: for line in file: lineNum += 1 if re.match(r'<{7}.*\n', line): @@ -57,7 +57,7 @@ flag = 0 else: continue - + if len(pos): mark = 1 @@ -65,7 +65,7 @@ for conflict in pos: if len(conflict) == 2: print("CONFLICTS: line " + str(conflict[0]) + " to line " + str(conflict[1]) + "\n") - + pos = [] if mark: diff --git a/scripts/check-control-char.py b/scripts/check-control-char.py index e17a721d8c74e..3bf7784c3b718 100644 --- a/scripts/check-control-char.py +++ b/scripts/check-control-char.py @@ -37,7 +37,7 @@ def check_control_char(filename): pos = [] flag = 0 - with open(filename,'r') as file: + with open(filename,'r', encoding='utf-8') as file: for line in file: lineNum += 1 diff --git a/scripts/check-manual-line-breaks.py b/scripts/check-manual-line-breaks.py index 7102581ff37e2..771e1658f852b 100644 --- a/scripts/check-manual-line-breaks.py +++ b/scripts/check-manual-line-breaks.py @@ -40,7 +40,7 @@ def check_manual_break(filename): lineNum = 0 mark = 0 - with open(filename,'r') as file: + with open(filename,'r', encoding='utf-8') as file: for line in file: lineNum += 1 diff --git a/scripts/check-tags.py b/scripts/check-tags.py index 51eb14ff6920d..1b1f8f84fd7e2 100644 --- a/scripts/check-tags.py +++ b/scripts/check-tags.py @@ -107,7 +107,7 @@ def filter_frontmatter(content): if len(collect) >= 2: filter_point = collect[1] content = content[filter_point:] - + return content def filter_backticks(content, filename): @@ -140,7 +140,7 @@ def filter_backticks(content, filename): for filename in sys.argv[1:]: # print("Checking " + filename + "......\n") if os.path.isfile(filename): - file = open(filename, "r" ) + file = open(filename, "r", encoding='utf-8') content = file.read() file.close() diff --git a/statement-summary-tables.md b/statement-summary-tables.md index fbc448e7a7e26..a67732c0f8a9b 100644 --- a/statement-summary-tables.md +++ b/statement-summary-tables.md @@ -8,7 +8,7 @@ aliases: ['/docs/dev/statement-summary-tables/','/docs/dev/reference/performance To better handle SQL performance issues, MySQL has provided [statement summary tables](https://dev.mysql.com/doc/refman/5.6/en/statement-summary-tables.html) in `performance_schema` to monitor SQL with statistics. 
Among these tables, `events_statements_summary_by_digest` is very useful in locating SQL problems with its abundant fields such as latency, execution times, rows scanned, and full table scans. -Therefore, starting from v4.0.0-rc.1, TiDB provides system tables in `information_schema`. These system tables are similar to `events_statements_summary_by_digest` in terms of features. +Therefore, starting from v4.0.0-rc.1, TiDB provides system tables in `information_schema` (_not_ `performance_schema`) that are similar to `events_statements_summary_by_digest` in terms of features. - [`statements_summary`](#statements_summary) - [`statements_summary_history`](#statements_summary_history) @@ -20,7 +20,7 @@ This document details these tables and introduces how to use them to troubleshoo ## `statements_summary` -`statements_summary` is a system table in `performance_schema`. `statements_summary` groups the SQL statements by the SQL digest and the plan digest, and provides statistics for each SQL category. +`statements_summary` is a system table in `information_schema`. `statements_summary` groups the SQL statements by the SQL digest and the plan digest, and provides statistics for each SQL category. The "SQL digest" here means the same as used in slow logs, which is a unique identifier calculated through normalized SQL statements. The normalization process ignores constants and blank characters, and is case insensitive. Therefore, statements with consistent syntaxes have the same digest. For example: diff --git a/system-variables.md b/system-variables.md index b9fdbabe39cb4..c442f18ce77e4 100644 --- a/system-variables.md +++ b/system-variables.md @@ -128,19 +128,20 @@ mysql> SELECT * FROM t1; - This variable indicates the location where data is stored. This location can be a local path or point to a PD server if the data is stored on TiKV. - A value in the format of `ip_address:port` indicates the PD server that TiDB connects to on startup. +### ddl_slow_threshold + +- Scope: INSTANCE +- Default value: `300` +- DDL operations whose execution time exceeds the threshold value are output to the log. The unit is milliseconds. + ### default_authentication_plugin - Scope: GLOBAL - Default value: `mysql_native_password` +- Possible values: `mysql_native_password`, `caching_sha2_password` - This variable sets the authentication method that the server advertises when the server-client connection is being established. Possible values for this variable are documented in [Authentication plugin status](/security-compatibility-with-mysql.md#authentication-plugin-status). - Value options: `mysql_native_password` and `caching_sha2_password`. For more details, see [Authentication plugin status](/security-compatibility-with-mysql.md#authentication-plugin-status). -### ddl_slow_threshold - -- Scope: INSTANCE -- Default value: `300` -- DDL operations whose execution time exceeds the threshold value are output to the log. The unit is millisecond. - ### foreign_key_checks - Scope: SESSION | GLOBAL @@ -189,7 +190,7 @@ mysql> SELECT * FROM t1; ### license - Scope: NONE -- Default value: Apache License 2.0 +- Default value: `Apache License 2.0` - This variable indicates the license of your TiDB server installation. ### max_execution_time @@ -277,8 +278,8 @@ MPP is a distributed computing framework provided by the TiFlash engine, which a ### tidb_analyze_version New in v5.1.0 - Scope: SESSION | GLOBAL -- Value options: `1` and `2` - Default value: `2` +- Range: `[1, 2]` - Controls how TiDB collects statistics.
- In versions before v5.1.0, the default value of this variable is `1`. In v5.1.0, the default value of this variable is `2`, which serves as an experimental feature. For detailed introduction, see [Introduction to Statistics](/statistics.md). @@ -578,7 +579,7 @@ Constraint checking is always performed in place for pessimistic transactions (d > > Currently, List partition and List COLUMNS partition are experimental features. It is not recommended that you use it in the production environment. -- Scope: SESSION +- Scope: SESSION | GLOBAL - Default value: `OFF` - This variable is used to set whether to enable the `LIST (COLUMNS) TABLE PARTITION` feature. @@ -596,9 +597,9 @@ Constraint checking is always performed in place for pessimistic transactions (d * `START TRANSACTION READ ONLY` and `SET TRANSACTION READ ONLY` syntax * The `tx_read_only`, `transaction_read_only`, `offline_mode`, `super_read_only` and `read_only` system variables -> **Note:** +> **Warning:** > -> Only the default value of `OFF` can be considered safe. Setting `tidb_enable_noop_functions=1` might lead to unexpected behaviors in your application, because it permits TiDB to ignore certain syntax without providing an error. +> Only the default value of `OFF` can be considered safe. Setting `tidb_enable_noop_functions=1` might lead to unexpected behaviors in your application, because it permits TiDB to ignore certain syntax without providing an error. For example, the syntax `START TRANSACTION READ ONLY` is permitted, but the transaction remains in read-write mode. ### tidb_enable_parallel_apply New in v5.0 @@ -680,7 +681,8 @@ Query OK, 0 rows affected (0.09 sec) ### tidb_enforce_mpp New in v5.1 - Scope: SESSION -- Default value: `OFF`. To change this default value, modify the [`performance.enforce-mpp`](/tidb-configuration-file.md#enforce-mpp) configuration value. +- Default value: `OFF` +- To change this default value, modify the [`performance.enforce-mpp`](/tidb-configuration-file.md#enforce-mpp) configuration value. - Controls whether to ignore the optimizer's cost estimation and to forcibly use TiFlash's MPP mode for query execution. The value options are as follows: - `0` or `OFF`, which means that the MPP mode is not forcibly used (by default). - `1` or `ON`, which means that the cost estimation is ignored and the MPP mode is forcibly used. Note that this setting only takes effect when `tidb_allow_mpp=true`. @@ -1015,20 +1017,6 @@ For a system upgraded to v5.0 from an earlier version, if you have not modified - This variable is used to set whether the optimizer executes the optimization operation of pushing down the aggregate function to the position before Join, Projection, and UnionAll. - When the aggregate operation is slow in query, you can set the variable value to ON. -### tidb_opt_limit_push_down_threshold - -- Scope: SESSION | GLOBAL -- Default value: `100` -- Range: `[0, 2147483647]` -- This variable is used to set the threshold that determines whether to push the Limit or TopN operator down to TiKV. -- If the value of the Limit or TopN operator is smaller than or equal to this threshold, these operators are forcibly pushed down to TiKV. This variable resolves the issue that the Limit or TopN operator cannot be pushed down to TiKV partly due to wrong estimation. 
- -### tidb_opt_enable_correlation_adjustment - -- Scope: SESSION | GLOBAL -- Default value: `ON` -- This variable is used to control whether the optimizer estimates the number of rows based on column order correlation - ### tidb_opt_correlation_exp_factor - Scope: SESSION | GLOBAL @@ -1081,6 +1069,12 @@ mysql> desc select count(distinct a) from test.t; 4 rows in set (0.00 sec) ``` +### tidb_opt_enable_correlation_adjustment + +- Scope: SESSION | GLOBAL +- Default value: `ON` +- This variable is used to control whether the optimizer estimates the number of rows based on column order correlation. + ### tidb_opt_insubq_to_join_and_agg - Scope: SESSION | GLOBAL @@ -1104,6 +1098,14 @@ mysql> desc select count(distinct a) from test.t; select * from t, t1 where t.a=t1.a ``` +### tidb_opt_limit_push_down_threshold + +- Scope: SESSION | GLOBAL +- Default value: `100` +- Range: `[0, 2147483647]` +- This variable is used to set the threshold that determines whether to push the Limit or TopN operator down to TiKV. +- If the value of the Limit or TopN operator is smaller than or equal to this threshold, these operators are forcibly pushed down to TiKV. This variable resolves the issue that the Limit or TopN operator cannot be pushed down to TiKV partly due to wrong estimation. + ### tidb_opt_prefer_range_scan New in v5.0 - Scope: SESSION | GLOBAL diff --git a/ticdc/manage-ticdc.md b/ticdc/manage-ticdc.md index 573f211a079cb..cbb807fc157f8 100644 --- a/ticdc/manage-ticdc.md +++ b/ticdc/manage-ticdc.md @@ -12,14 +12,14 @@ You can also use the HTTP interface (the TiCDC OpenAPI feature) to manage the Ti ## Upgrade TiCDC using TiUP -This section introduces how to upgrade the TiCDC cluster using TiUP. In the following example, assume that you need to upgrade TiCDC and the entire TiDB cluster to v5.2.0. +This section introduces how to upgrade the TiCDC cluster using TiUP. In the following example, assume that you need to upgrade TiCDC and the entire TiDB cluster to v5.2.1. {{< copyable "shell-regular" >}} ```shell tiup update --self && \ tiup update --all && \ -tiup cluster upgrade v5.2.0 +tiup cluster upgrade v5.2.1 ``` ### Notes for upgrade diff --git a/tikv-configuration-file.md b/tikv-configuration-file.md index 838be06d0bbcf..63e34bdc764e5 100644 --- a/tikv-configuration-file.md +++ b/tikv-configuration-file.md @@ -798,7 +798,7 @@ Configuration items related to RocksDB ### `compaction-readahead-size` -+ The size of `readahead` when compaction is being performed ++ Enables the readahead feature during RocksDB compaction and specifies the size of readahead data. If you are using mechanical disks, it is recommended to set the value to at least 2MB. + Default value: `0` + Minimum value: `0` + Unit: B|KB|MB|GB diff --git a/tiup/tiup-component-cluster-deploy.md b/tiup/tiup-component-cluster-deploy.md index 32b67a54bfe43..92acb198c48c3 100644 --- a/tiup/tiup-component-cluster-deploy.md +++ b/tiup/tiup-component-cluster-deploy.md @@ -13,7 +13,7 @@ tiup cluster deploy <cluster-name> <version> <topology.yaml> [flags] ``` - `<cluster-name>`: the name of the new cluster, which cannot be the same as the existing cluster names. -- `<version>`: the version number of the TiDB cluster to deploy, such as `v5.2.0`. +- `<version>`: the version number of the TiDB cluster to deploy, such as `v5.2.1`. - `<topology.yaml>`: the prepared [topology file](/tiup/tiup-cluster-topology-reference.md).
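For illustration, a deployment command with these placeholders filled in might look like the following sketch. The cluster name `tidb-test`, the topology path `./topology.yaml`, and the SSH options `--user root -p` are assumed values for this example only, not values taken from the diff above.

```shell
# Hypothetical example: deploy a TiDB v5.2.1 cluster named "tidb-test"
# from a prepared topology file, connecting to the target hosts as the
# assumed user "root" and prompting for the SSH password (-p).
tiup cluster deploy tidb-test v5.2.1 ./topology.yaml --user root -p
```

Once the deployment finishes, `tiup cluster start tidb-test` would start the newly deployed cluster (again, the cluster name is only an assumed example).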
## Options diff --git a/tiup/tiup-component-management.md b/tiup/tiup-component-management.md index dd459dfb09b67..0aa46aca25faa 100644 --- a/tiup/tiup-component-management.md +++ b/tiup/tiup-component-management.md @@ -70,12 +70,12 @@ Example 2: Use TiUP to install the nightly version of TiDB. tiup install tidb:nightly ``` -Example 3: Use TiUP to install TiKV v5.2.0. +Example 3: Use TiUP to install TiKV v5.2.1. {{< copyable "shell-regular" >}} ```shell -tiup install tikv:v5.2.0 +tiup install tikv:v5.2.1 ``` ## Upgrade components @@ -128,12 +128,12 @@ Before the component is started, TiUP creates a directory for it, and then puts If you want to start the same component multiple times and reuse the previous working directory, you can use `--tag` to specify the same name when the component is started. After the tag is specified, the working directory will *not be automatically deleted* when the instance is terminated, which makes it convenient to reuse the working directory. -Example 1: Operate TiDB v5.2.0. +Example 1: Operate TiDB v5.2.1. {{< copyable "shell-regular" >}} ```shell -tiup tidb:v5.2.0 +tiup tidb:v5.2.1 ``` Example 2: Specify the tag with which TiKV operates. @@ -219,12 +219,12 @@ The following flags are supported in this command: - If the version is ignored, adding `--all` means to uninstall all versions of this component. - If the version and the component are both ignored, adding `--all` means to uninstall all components of all versions. -Example 1: Uninstall TiDB v5.0.0. +Example 1: Uninstall TiDB v5.2.1. {{< copyable "shell-regular" >}} ```shell -tiup uninstall tidb:v5.0.0 +tiup uninstall tidb:v5.2.1 ``` Example 2: Uninstall TiKV of all versions. diff --git a/tiup/tiup-mirror.md b/tiup/tiup-mirror.md index 3f17a9aaf4a6d..a3c626211b98f 100644 --- a/tiup/tiup-mirror.md +++ b/tiup/tiup-mirror.md @@ -77,9 +77,9 @@ The `tiup mirror clone` command provides many optional flags (might provide more If you want to clone only one version (not all versions) of a component, use `--<component>=<version>` to specify this version. For example: - - Execute the `tiup mirror clone --tidb v5.2.0` command to clone the v5.2.0 version of the TiDB component. - - Execute the `tiup mirror clone --tidb v5.2.0 --tikv all` command to clone the v5.2.0 version of the TiDB component and all versions of the TiKV component. - - Execute the `tiup mirror clone v5.2.0` command to clone the v5.2.0 version of all components in a cluster. + - Execute the `tiup mirror clone --tidb v5.2.1` command to clone the v5.2.1 version of the TiDB component. + - Execute the `tiup mirror clone --tidb v5.2.1 --tikv all` command to clone the v5.2.1 version of the TiDB component and all versions of the TiKV component. + - Execute the `tiup mirror clone v5.2.1` command to clone the v5.2.1 version of all components in a cluster. ## Usage examples diff --git a/tiup/tiup-playground.md b/tiup/tiup-playground.md index 0611b9dba62dc..582bbd080827c 100644 --- a/tiup/tiup-playground.md +++ b/tiup/tiup-playground.md @@ -22,7 +22,7 @@ This command actually performs the following operations: - Because this command does not specify the version of the playground component, TiUP first checks the latest version of the installed playground component. Assume that the latest version is v1.5.0, then this command works the same as `tiup playground:v1.5.0`. - If you have not used TiUP playground to install the TiDB, TiKV, and PD components, the playground component installs the latest stable version of these components, and then starts these instances.
-- Because this command does not specify the version of the TiDB, PD, and TiKV components, TiUP playground uses the latest version of each component by default. Assume that the latest version is v5.2.1, then this command works the same as `tiup playground:v1.5.0 v5.2.1`. - Because this command does not specify the number of each component, TiUP playground, by default, starts the smallest cluster that consists of one TiDB instance, one TiKV instance, and one PD instance. - After starting each TiDB component, TiUP playground reminds you that the cluster is successfully started and provides you with some useful information, such as how to connect to the TiDB cluster through the MySQL client and how to access the [TiDB Dashboard](/dashboard/dashboard-intro.md). @@ -64,7 +64,7 @@ Flags: tiup playground nightly ``` -In the command above, `nightly` is the version number of the cluster. Similarly, you can replace `nightly` with `v5.2.0`, and the command is `tiup playground v5.2.0`. +In the command above, `nightly` is the version number of the cluster. Similarly, you can replace `nightly` with `v5.2.1`, and the command is `tiup playground v5.2.1`. ### Start a cluster with monitor diff --git a/transaction-isolation-levels.md b/transaction-isolation-levels.md index 3b2ec9ca8f8d3..f6dc9540010ae 100644 --- a/transaction-isolation-levels.md +++ b/transaction-isolation-levels.md @@ -50,9 +50,7 @@ The Repeatable Read isolation level in TiDB differs from ANSI Repeatable Read is ### Difference between TiDB and MySQL Repeatable Read -The Repeatable Read isolation level in TiDB differs from that in MySQL. The MySQL Repeatable Read isolation level does not check whether the current version is visible when updating, which means it can continue to update even if the row has been updated after the transaction starts. In contrast, if the row has been updated after the transaction starts, the TiDB transaction is rolled back and retried. Transaction Retries in TiDB might fail, leading to a final failure of the transaction, while in MySQL the updating transaction can be successful. - -The MySQL Repeatable Read isolation level is not the snapshot isolation level. The consistency of MySQL Repeatable Read isolation level is weaker than both the snapshot isolation level and TiDB Repeatable Read isolation level. +The Repeatable Read isolation level in TiDB differs from that in MySQL. The MySQL Repeatable Read isolation level does not check whether the current version is visible when updating, which means it can continue to update even if the row has been updated after the transaction starts. In contrast, if the row has been updated after the transaction starts, the TiDB optimistic transaction is rolled back and retried. Transaction retries in TiDB's optimistic concurrency control might fail, leading to a final failure of the transaction, while in TiDB's pessimistic concurrency control and MySQL, the updating transaction can be successful.
## Read Committed isolation level diff --git a/upgrade-tidb-using-tiup.md b/upgrade-tidb-using-tiup.md index 9eec9957f5576..a491cf3b8d5f4 100644 --- a/upgrade-tidb-using-tiup.md +++ b/upgrade-tidb-using-tiup.md @@ -156,12 +156,12 @@ If your application has a maintenance window for the database to be stopped for tiup cluster upgrade <cluster-name> <version> ``` -For example, if you want to upgrade the cluster to v5.2.0: +For example, if you want to upgrade the cluster to v5.2.1: {{< copyable "shell-regular" >}} ```shell -tiup cluster upgrade <cluster-name> v5.2.0 +tiup cluster upgrade <cluster-name> v5.2.1 ``` > **Note:** @@ -211,7 +211,7 @@ tiup cluster display <cluster-name> ``` Cluster type: tidb Cluster name: <cluster-name> -Cluster version: v5.2.0 +Cluster version: v5.2.1 ``` > **Note:** @@ -261,7 +261,7 @@ You can upgrade the tool version by using TiUP to install the `ctl` component of {{< copyable "shell-regular" >}} ```shell -tiup install ctl:v5.2.0 +tiup install ctl:v5.2.1 ``` ## TiDB 5.2 compatibility changes
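For illustration of the `ctl` installation step above, once the matching `ctl` component is installed, it might be invoked through TiUP roughly as in the following sketch; the PD endpoint `http://127.0.0.1:2379` is an assumed address for this example.

```shell
# Hypothetical check: run the v5.2.1 pd-ctl through TiUP against an
# assumed local PD endpoint to list the TiKV stores after the upgrade.
tiup ctl:v5.2.1 pd -u http://127.0.0.1:2379 store
```

Any other pd-ctl or tikv-ctl subcommand could be substituted here; the point is only that the installed `ctl` component version should match the upgraded cluster version.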