From 85e27f34c4529d75fe477e64cd5b04fee1936436 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dani=C3=ABl=20van=20Eeden?= Date: Fri, 15 Nov 2024 03:24:47 +0100 Subject: [PATCH] glossary: Add more abbreviations (#19213) --- br/br-snapshot-guide.md | 6 +- br/br-snapshot-manual.md | 2 +- dm/dm-glossary.md | 2 + glossary.md | 164 ++++++++++++++++++++-- latency-breakdown.md | 2 +- scripts/check-glossary.py | 34 +++++ ticdc/ticdc-glossary.md | 2 +- tidb-lightning/tidb-lightning-glossary.md | 2 + tiflash/tiflash-mintso-scheduler.md | 2 +- tiflash/use-tiflash-mpp-mode.md | 2 +- 10 files changed, 195 insertions(+), 23 deletions(-) create mode 100755 scripts/check-glossary.py diff --git a/br/br-snapshot-guide.md b/br/br-snapshot-guide.md index d4b0ec67ad49b..4ef4a957630bc 100644 --- a/br/br-snapshot-guide.md +++ b/br/br-snapshot-guide.md @@ -33,7 +33,7 @@ tiup br backup full --pd "${PD_IP}:2379" \ In the preceding command: -- `--backupts`: The time point of the snapshot. The format can be [TSO](/glossary.md#tso) or timestamp, such as `400036290571534337` or `2018-05-11 01:42:23 +08:00`. If the data of this snapshot is garbage collected, the `tiup br backup` command returns an error and `br` exits. When backing up using a timestamp, it is recommended to specify the time zone as well. Otherwise, `br` uses the local time zone to construct the timestamp by default, which might lead to an incorrect backup time point. If you leave this parameter unspecified, `br` picks the snapshot corresponding to the backup start time. +- `--backupts`: The time point of the snapshot. The format can be [TSO](/tso.md) or timestamp, such as `400036290571534337` or `2018-05-11 01:42:23 +08:00`. If the data of this snapshot is garbage collected, the `tiup br backup` command returns an error and `br` exits. When backing up using a timestamp, it is recommended to specify the time zone as well. 
Otherwise, `br` uses the local time zone to construct the timestamp by default, which might lead to an incorrect backup time point. If you leave this parameter unspecified, `br` picks the snapshot corresponding to the backup start time. - `--storage`: The storage address of the backup data. Snapshot backup supports Amazon S3, Google Cloud Storage, and Azure Blob Storage as backup storage. The preceding command uses Amazon S3 as an example. For more details, see [URI Formats of External Storage Services](/external-storage-uri.md). - `--ratelimit`: The maximum speed **per TiKV** performing backup tasks. The unit is in MiB/s. @@ -129,8 +129,8 @@ tiup br restore full \ ### Restore tables in the `mysql` schema -- Starting from BR v5.1.0, when you back up snapshots, BR automatically backs up the **system tables** in the `mysql` schema, but does not restore these system tables by default. -- Starting from v6.2.0, BR lets you specify `--with-sys-table` to restore **data in some system tables**. +- Starting from BR v5.1.0, when you back up snapshots, BR automatically backs up the **system tables** in the `mysql` schema, but does not restore these system tables by default. +- Starting from v6.2.0, BR lets you specify `--with-sys-table` to restore **data in some system tables**. - Starting from v7.6.0, BR enables `--with-sys-table` by default, which means that BR restores **data in some system tables** by default. **BR can restore data in the following system tables:** diff --git a/br/br-snapshot-manual.md b/br/br-snapshot-manual.md index e0c52559887a5..a59c7fca90675 100644 --- a/br/br-snapshot-manual.md +++ b/br/br-snapshot-manual.md @@ -42,7 +42,7 @@ tiup br backup full \ In the preceding command: -- `--backupts`: The time point of the snapshot. The format can be [TSO](/glossary.md#tso) or timestamp, such as `400036290571534337` or `2024-06-28 13:30:00 +08:00`. If the data of this snapshot is garbage collected, the `tiup br backup` command returns an error and 'br' exits. 
If you leave this parameter unspecified, `br` picks the snapshot corresponding to the backup start time. +- `--backupts`: The time point of the snapshot. The format can be [TSO](/tso.md) or timestamp, such as `400036290571534337` or `2024-06-28 13:30:00 +08:00`. If the data of this snapshot is garbage collected, the `tiup br backup` command returns an error and `br` exits. If you leave this parameter unspecified, `br` picks the snapshot corresponding to the backup start time. - `--ratelimit`: The maximum speed **per TiKV** performing backup tasks. The unit is in MiB/s. - `--log-file`: The target file where `br` log is written. diff --git a/dm/dm-glossary.md b/dm/dm-glossary.md index a92a834e43a4d..b301baf00f797 100644 --- a/dm/dm-glossary.md +++ b/dm/dm-glossary.md @@ -8,6 +8,8 @@ aliases: ['/docs/tidb-data-migration/dev/glossary/'] This document lists the terms used in the logs, monitoring, configurations, and documentation of TiDB Data Migration (DM). +For TiDB-related terms and definitions, see [TiDB glossary](/glossary.md). + ## B ### Binlog diff --git a/glossary.md b/glossary.md index 4dd8c9f95a045..51e612e6fb759 100644 --- a/glossary.md +++ b/glossary.md @@ -6,6 +6,14 @@ aliases: ['/docs/dev/glossary/'] # Glossary +This glossary provides definitions for key terms related to the TiDB platform. + +Other available glossaries: + +- [TiDB Data Migration Glossary](/dm/dm-glossary.md) +- [TiCDC Glossary](/ticdc/ticdc-glossary.md) +- [TiDB Lightning Glossary](/tidb-lightning/tidb-lightning-glossary.md) + ## A ### ACID @@ -22,14 +30,20 @@ ACID refers to the four key properties of a transaction: atomicity, consistency, ## B -### Batch Create Table +### Backup & Restore (BR) + +BR is the backup and restore tool for TiDB. For more information, see [BR Overview](/br/backup-and-restore-overview.md). -Batch Create Table is a feature introduced in TiDB v6.0.0. This feature is enabled default. 
When restoring data with a large number of tables (nearly 50000) using BR (Backup & Restore), the feature can greatly speed up the restore process by creating tables in batches. For details, see [Batch Create Table](/br/br-batch-create-table.md). +`br` is the [br command line tool](/br/use-br-command-line-tool.md) used for backups or restores in TiDB. ### Baseline Capturing Baseline Capturing captures queries that meet capturing conditions and create bindings for them. It is used for [preventing regression of execution plans during an upgrade](/sql-plan-management.md#prevent-regression-of-execution-plans-during-an-upgrade). +### Batch Create Table + +Batch Create Table is a feature introduced in TiDB v6.0.0. This feature is enabled by default. When restoring data with a large number of tables (nearly 50000) using BR (Backup & Restore), the feature can greatly speed up the restore process by creating tables in batches. For details, see [Batch Create Table](/br/br-batch-create-table.md). + ### Bucket A [Region](#regionpeerraft-group) is logically divided into several small ranges called bucket. TiKV collects query statistics by buckets and reports the bucket status to PD. For details, see the [Bucket design doc](https://github.com/tikv/rfcs/blob/master/text/0082-dynamic-size-region.md#bucket). @@ -44,35 +58,105 @@ With the cached table feature, TiDB loads the data of an entire table into the m Coalesce Partition is a way of decreasing the number of partitions in a Hash or Key partitioned table. For more information, see [Manage Hash and Key partitions](/partitioned-table.md#manage-hash-and-key-partitions). +### Column Family (CF) + +In RocksDB and TiKV, a Column Family (CF) represents a logical grouping of key-value pairs within a database. 
+ +### Common Table Expression (CTE) + +A Common Table Expression (CTE) enables you to define a temporary result set that can be referred to multiple times within a SQL statement using the [`WITH`](/sql-statements/sql-statement-with.md) clause. For more information, see [Common Table Expression](/develop/dev-guide-use-common-table-expression.md). + ### Continuous Profiling Introduced in TiDB 5.3.0, Continuous Profiling is a way to observe resource overhead at the system call level. With the support of Continuous Profiling, TiDB provides performance insight as clear as directly looking into the database source code, and helps R&D and operation and maintenance personnel to locate the root cause of performance problems using a flame graph. For details, see [TiDB Dashboard Instance Profiling - Continuous Profiling](/dashboard/continuous-profiling.md). ## D +### Data Definition Language (DDL) + +Data Definition Language (DDL) is a part of the SQL standard that deals with creating, modifying, and dropping tables and other objects. For more information, see [DDL Introduction](/ddl-introduction.md). + +### Data Migration (DM) + +Data Migration (DM) is a tool for migrating data from MySQL-compatible databases into TiDB. DM reads data from a MySQL-compatible database instance and applies it to a TiDB target instance. For more information, see [DM Overview](/dm/dm-overview.md). + +### Data Modification Language (DML) + +Data Modification Language (DML) is a part of the SQL standard that deals with inserting, updating, and deleting rows in tables. + +### Development Milestone Release (DMR) + +Development Milestone Releases (DMR) are TiDB releases that introduce the latest features but do not offer long-term support. For more information, see [TiDB Versioning](/releases/versioning.md). + +### Disaster Recovery (DR) + +Disaster Recovery (DR) includes solutions that can be used to recover data and services after a disaster. 
TiDB offers various Disaster Recovery solutions such as backups and replication to standby clusters. For more information, see [Overview of TiDB Disaster Recovery Solutions](/dr-solution-introduction.md). + +### Distributed eXecution Framework (DXF) + +Distributed eXecution Framework (DXF) is the framework used by TiDB to centrally schedule certain tasks (such as creating indexes or importing data) and execute them in a distributed manner. DXF is designed to efficiently use cluster resources while controlling resource usage and reducing the impact on core business transactions. For more information, see [DXF Introduction](/tidb-distributed-execution-framework.md). + ### Dynamic Pruning Dynamic pruning mode is one of the modes that TiDB accesses partitioned tables. In dynamic pruning mode, each operator supports direct access to multiple partitions. Therefore, TiDB no longer uses Union. Omitting the Union operation can improve the execution efficiency and avoid the problem of Union concurrent execution. +## G + +### Garbage Collection (GC) + +Garbage Collection (GC) is a process that clears obsolete data to free up resources. For information on the TiKV GC process, see [GC Overview](/garbage-collection-overview.md). + +### General Availability (GA) + +General Availability (GA) of a feature means the feature is fully tested and is Generally Available for use in production environments. TiDB features can be released as GA in both [DMR](#development-milestone-release-dmr) and [LTS](#long-term-support-lts) releases. However, as TiDB does not provide patch releases for DMRs, it is generally recommended to use an LTS release for production use. + +### Global Transaction Identifiers (GTIDs) + +Global Transaction Identifiers (GTIDs) are unique transaction IDs used in MySQL binary logs to track which transactions have been replicated. [Data Migration (DM)](/dm/dm-overview.md) uses these IDs to ensure consistent replication. 
+ +## H + +### Hybrid Transactional and Analytical Processing (HTAP) + +Hybrid Transactional and Analytical Processing (HTAP) is a database feature that enables both OLTP (Online Transactional Processing) and OLAP (Online Analytical Processing) workloads within the same database. For TiDB, the HTAP feature is provided by using TiKV for row storage and TiFlash for columnar storage. For more information, see [the definition of HTAP on the Gartner website](https://www.gartner.com/en/information-technology/glossary/htap-enabling-memory-computing-technologies). + ## I +### In-Memory Pessimistic Lock + +The in-memory pessimistic lock is a new feature introduced in TiDB v6.0.0. When this feature is enabled, pessimistic locks are usually stored in the memory of the Region leader only, and are not persisted to disk or replicated through Raft to other replicas. This feature can greatly reduce the overhead of acquiring pessimistic locks and improve the throughput of pessimistic transactions. + ### Index Merge Index Merge is a method introduced in TiDB v4.0 to access tables. Using this method, the TiDB optimizer can use multiple indexes per table and merge the results returned by each index. In some scenarios, this method makes the query more efficient by avoiding full table scans. Since v5.4, Index Merge has become a GA feature. -### In-Memory Pessimistic Lock +## K -The in-memory pessimistic lock is a new feature introduced in TiDB v6.0.0. When this feature is enabled, pessimistic locks are usually stored in the memory of the Region leader only, and are not persisted to disk or replicated through Raft to other replicas. This feature can greatly reduce the overhead of acquiring pessimistic locks and improve the throughput of pessimistic transactions. +### Key Management Service (KMS) + +Key Management Service (KMS) enables the storage and retrieval of secret keys in a secure way. Examples include AWS KMS, Google Cloud KMS, and HashiCorp Vault. 
Various TiDB components can use KMS to manage keys for storage encryption and related services. + +### Key-Value (KV) + +Key-Value (KV) is a way of storing information by associating values with unique keys, allowing quick data retrieval. TiDB uses TiKV to map tables and indexes into key-value pairs, enabling efficient data storage and access across the database. ## L -### leader/follower/learner +### Leader/Follower/Learner Leader/Follower/Learner each corresponds to a role in a Raft group of [peers](#regionpeerraft-group). The leader services all client requests and replicates data to the followers. If the group leader fails, one of the followers will be elected as the new leader. Learners are non-voting followers that only serves in the process of replica addition. +### Lightweight Directory Access Protocol (LDAP) + +Lightweight Directory Access Protocol (LDAP) is a standardized way of accessing a directory with information. It is commonly used for account and user data management. TiDB supports LDAP via [LDAP authentication plugins](/security-compatibility-with-mysql.md#authentication-plugin-status). + +### Long Term Support (LTS) + +Long Term Support (LTS) refers to software versions that are extensively tested and maintained for extended periods. For more information, see [TiDB Versioning](/releases/versioning.md). + ## M -### MPP +### Massively Parallel Processing (MPP) Starting from v5.0, TiDB introduces Massively Parallel Processing (MPP) architecture through TiFlash nodes, which shares the execution workloads of large join queries among TiFlash nodes. When the MPP mode is enabled, TiDB, based on cost, determines whether to use the MPP framework to perform the calculation. In the MPP mode, the join keys are redistributed through the Exchange operation while being calculated, which distributes the calculation pressure to each TiFlash node and speeds up the calculation. For more information, see [Use TiFlash MPP Mode](/tiflash/use-tiflash-mpp-mode.md). 
@@ -86,6 +170,18 @@ Starting from v5.0, TiDB introduces Massively Parallel Processing (MPP) architec The "original value" in the incremental change log output by TiCDC. You can specify whether the incremental change log output by TiCDC contains the "original value". +### Online Analytical Processing (OLAP) + +Online Analytical Processing (OLAP) refers to database workloads focused on analytical tasks, such as data reporting and complex queries. OLAP is characterized by read-heavy queries that process large volumes of data across many rows. + +### Online Transaction Processing (OLTP) + +Online Transaction Processing (OLTP) refers to database workloads focused on transactional tasks, such as selecting, inserting, updating, and deleting small sets of records. + +### Out of Memory (OOM) + +Out of Memory (OOM) is a situation where a system fails due to insufficient memory. For more information, see [Troubleshoot TiDB OOM Issues](/troubleshoot-tidb-oom.md). + ### Operator An operator is a collection of actions that applies to a Region for scheduling purposes. Operators perform scheduling tasks such as "migrate the leader of Region 2 to Store 5" and "migrate replicas of Region 2 to Store 1, 4, 5". @@ -111,20 +207,32 @@ Currently, available steps generated by PD include: [Partitioning](/partitioned-table.md) refers to physically dividing a table into smaller table partitions, which can be done by partition methods such as RANGE, LIST, HASH, and KEY partitioning. -### pending/down +### Pending/Down "Pending" and "down" are two special states of a peer. Pending indicates that the Raft log of followers or learners is vastly different from that of leader. Followers in pending cannot be elected as leader. "Down" refers to a state that a peer ceases to respond to leader for a long time, which usually means the corresponding node is down or isolated from the network. 
+### Placement Driver (PD) + +Placement Driver (PD) is a core component in the [TiDB Architecture](/tidb-architecture.md#placement-driver-pd-server) responsible for storing metadata, assigning [Timestamp Oracle (TSO)](/tso.md) for transaction timestamps, orchestrating data placement on TiKV, and running [TiDB Dashboard](/dashboard/dashboard-overview.md). For more information, see [TiDB Scheduling](/tidb-scheduling.md). + ### Point Get Point get means reading a single row of data by a unique index or primary index, the returned resultset is up to one row. +### Point in Time Recovery (PITR) + +Point in Time Recovery (PITR) enables you to restore data to a specific point in time (for example, just before an unintended `DELETE` statement). For more information, see [TiDB Log Backup and PITR Architecture](/br/br-log-architecture.md). + ### Predicate columns In most cases, when executing SQL statements, the optimizer only uses statistics of some columns (such as columns in the `WHERE`, `JOIN`, `ORDER BY`, and `GROUP BY` statements). These used columns are called predicate columns. For details, see [Collect statistics on some columns](/statistics.md#collect-statistics-on-some-columns). ## Q +### Queries Per Second (QPS) + +Queries Per Second (QPS) is the number of queries a database service handles per second, serving as a key performance metric for database throughput. + ### Quota Limiter Quota Limiter is an experimental feature introduced in TiDB v6.0.0. If the machine on which TiKV is deployed has limited resources, for example, with only 4v CPU and 16 G memory, and the foreground of TiKV processes too many read and write requests, the CPU resources used by the background are occupied to help process such requests, which affects the performance stability of TiKV. To avoid this situation, the [quota-related configuration items](/tikv-configuration-file.md#quota) can be set to limit the CPU resources to be used by the foreground. 
@@ -135,23 +243,31 @@ Quota Limiter is an experimental feature introduced in TiDB v6.0.0. If the machi ## R ### Raft Engine Raft Engine is an embedded persistent storage engine with a log-structured design. It is built for TiKV to store multi-Raft logs. Since v5.4, TiDB supports using Raft Engine as the log storage engine. For details, see [Raft Engine](/tikv-configuration-file.md#raft-engine). -### Region/peer/Raft group +### Region Split + +A region in a TiKV cluster is not divided at the beginning but is gradually split as data is written to it. The process is called Region split. + +The mechanism of Region split is to use one initial Region to cover the entire key space, and generate new Regions through splitting existing ones every time the size of the Region or the number of keys has reached a threshold. + +### Region/Peer/Raft Group Region is the minimal piece of data storage in TiKV, each representing a range of data (256 MiB by default). Each Region has three replicas by default. A replica of a Region is called a peer. Multiple peers of the same Region replicate data via the Raft consensus algorithm, so peers are also members of a Raft instance. TiKV uses Multi-Raft to manage data. That is, for each Region, there is a corresponding, isolated Raft group. -### Region split +### Remote Procedure Call (RPC) -Regions are generated as data writes increase. The process of splitting is called Region split. +Remote Procedure Call (RPC) is a way for software components to communicate with each other. In a TiDB cluster, the gRPC standard is used for communication between different components such as TiDB, TiKV, and TiFlash. -The mechanism of Region split is to use one initial Region to cover the entire key space, and generate new Regions through splitting existing ones every time the size of the Region or the number of keys has reached a threshold. +### Request Unit (RU) + +Request Unit (RU) is a unified abstraction unit for resource usage in TiDB. 
It is used with [Resource Control](/tidb-resource-control.md) to manage resource usage. -### restore +### Restore Restore is the reverse of the backup operation. It is the process of bringing back the system to an earlier state by retrieving data from a prepared backup. ## S -### scheduler +### Scheduler Schedulers are components in PD that generate scheduling tasks. Each scheduler in PD runs independently and serves different purposes. The commonly used schedulers are: @@ -160,16 +276,34 @@ Schedulers are components in PD that generate scheduling tasks. Each scheduler i - `hot-region-scheduler`: Balances the distribution of hot Regions - `evict-leader-{store-id}`: Evicts all leaders of a node (often used for rolling upgrades) +### Static Sorted Table / Sorted String Table (SST) + +Static Sorted Table or Sorted String Table is a file storage format used in RocksDB (a storage engine used by [TiKV](/storage-engine/rocksdb-overview.md)). + ### Store A store refers to the storage node in the TiKV cluster (an instance of `tikv-server`). Each store has a corresponding TiKV instance. ## T +### Timestamp Oracle (TSO) + +Because TiKV is a distributed storage system, it requires a global timing service, Timestamp Oracle (TSO), to assign a monotonically increasing timestamp. In TiKV, such a feature is provided by PD, and in Google [Spanner](http://static.googleusercontent.com/media/research.google.com/en//archive/spanner-osdi2012.pdf), this feature is provided by multiple atomic clocks and GPS. For details, see [TSO](/tso.md). + ### Top SQL Top SQL helps locate SQL queries that contribute to a high load of a TiDB or TiKV node in a specified time range. For details, see [Top SQL user document](/dashboard/top-sql.md). -### TSO +### Transactions Per Second (TPS) -Because TiKV is a distributed storage system, it requires a global timing service, Timestamp Oracle (TSO), to assign a monotonically increasing timestamp. 
In TiKV, such a feature is provided by PD, and in Google [Spanner](http://static.googleusercontent.com/media/research.google.com/en//archive/spanner-osdi2012.pdf), this feature is provided by multiple atomic clocks and GPS. For details, see [TSO](/tso.md). +Transactions Per Second (TPS) is the number of transactions a database processes per second, serving as a key metric for measuring database performance and throughput. + +## U + +### Uniform Resource Identifier (URI) + +Uniform Resource Identifier (URI) is a standardized format for identifying a resource. For more information, see [Uniform Resource Identifier](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier) on Wikipedia. + +### Universally Unique Identifier (UUID) + +Universally Unique Identifier (UUID) is a 128-bit (16-byte) generated ID used to uniquely identify records in a database. For more information, see [UUID](/best-practices/uuid.md). diff --git a/latency-breakdown.md b/latency-breakdown.md index bd037016f8e54..56fb881e7faf1 100644 --- a/latency-breakdown.md +++ b/latency-breakdown.md @@ -104,7 +104,7 @@ tidb_session_execute_duration_seconds{type="general"} = read value duration ``` -`pd_client_cmd_handle_cmds_duration_seconds{type="wait"}` records the duration of fetching [TSO (Timestamp Oracle)](/glossary.md#tso) from PD. When reading in an auto-commit transaction mode with a clustered primary index or from a snapshot, the value will be zero. +`pd_client_cmd_handle_cmds_duration_seconds{type="wait"}` records the duration of fetching [TSO (Timestamp Oracle)](/tso.md) from PD. When reading in an auto-commit transaction mode with a clustered primary index or from a snapshot, the value will be zero. 
The `read handle duration` and `read value duration` are calculated as: diff --git a/scripts/check-glossary.py b/scripts/check-glossary.py new file mode 100755 index 0000000000000..7a0824769a214 --- /dev/null +++ b/scripts/check-glossary.py @@ -0,0 +1,34 @@ +#!/usr/bin/env python3 +import sys +from difflib import unified_diff + +print("Checking alphabetic sorting of glossary.md") + +with open("glossary.md") as fh: + # Extract the lines that start with ### into itemsA (unsorted) + itemsA = "" + for line in fh.readlines(): + if line.startswith("###"): + itemsA += line + fh.seek(0) + + # Extract the lines that start with ### into itemsB (sorted) + itemsB = "" + for line in sorted(fh.readlines(), key=str.casefold): + if line.startswith("###"): + itemsB += line + + if itemsA == itemsB: + print("result: OK") + sys.exit(0) + + print("result: differences found, see diff for details") + # diff itemsA and itemsB + diff = unified_diff( + itemsA.splitlines(keepends=True), + itemsB.splitlines(keepends=True), + fromfile="before", + tofile="after", + ) + sys.stdout.writelines(diff) + sys.exit(1) diff --git a/ticdc/ticdc-glossary.md b/ticdc/ticdc-glossary.md index b4debae89e9b9..b054da51db7db 100644 --- a/ticdc/ticdc-glossary.md +++ b/ticdc/ticdc-glossary.md @@ -7,7 +7,7 @@ summary: Learn the terms about TiCDC and their definitions. This glossary provides TiCDC-related terms and definitions. These terms appears in TiCDC logs, monitoring metrics, configurations, and documents. -For TiDB-related terms and definitions, refer to [TiDB glossary](/glossary.md). +For TiDB-related terms and definitions, see [TiDB glossary](/glossary.md). 
## C diff --git a/tidb-lightning/tidb-lightning-glossary.md b/tidb-lightning/tidb-lightning-glossary.md index 3645dc3997970..5bef9f13ccfce 100644 --- a/tidb-lightning/tidb-lightning-glossary.md +++ b/tidb-lightning/tidb-lightning-glossary.md @@ -8,6 +8,8 @@ aliases: ['/docs/dev/tidb-lightning/tidb-lightning-glossary/','/docs/dev/referen This page explains the special terms used in TiDB Lightning's logs, monitoring, configurations, and documentation. +For TiDB-related terms and definitions, see [TiDB glossary](/glossary.md). + ## A diff --git a/tiflash/tiflash-mintso-scheduler.md b/tiflash/tiflash-mintso-scheduler.md index 6cb5eda77e866..4cb237cd66b27 100644 --- a/tiflash/tiflash-mintso-scheduler.md +++ b/tiflash/tiflash-mintso-scheduler.md @@ -5,7 +5,7 @@ summary: Learn the implementation principles of the TiFlash MinTSO Scheduler. # TiFlash MinTSO Scheduler -The TiFlash MinTSO scheduler is a distributed scheduler for [MPP](/glossary.md#mpp) tasks in TiFlash. This document describes the implementation principles of the TiFlash MinTSO scheduler. +The TiFlash MinTSO scheduler is a distributed scheduler for [MPP](/glossary.md#massively-parallel-processing-mpp) tasks in TiFlash. This document describes the implementation principles of the TiFlash MinTSO scheduler. ## Background diff --git a/tiflash/use-tiflash-mpp-mode.md b/tiflash/use-tiflash-mpp-mode.md index 7f09ca3be0807..ac87a49adc83f 100644 --- a/tiflash/use-tiflash-mpp-mode.md +++ b/tiflash/use-tiflash-mpp-mode.md @@ -7,7 +7,7 @@ summary: Learn the MPP mode of TiFlash and how to use it. -This document introduces the [Massively Parallel Processing (MPP)](/glossary.md#mpp) mode of TiFlash and how to use it. +This document introduces the [Massively Parallel Processing (MPP)](/glossary.md#massively-parallel-processing-mpp) mode of TiFlash and how to use it.
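The check added in `scripts/check-glossary.py` relies on one property worth noting: because the glossary is alphabetical across all of its letter sections, sorting every line of the file case-insensitively and then keeping only the `###` headings yields the expected global heading order, which can be compared against the headings in file order. The sketch below (an illustration of that logic, not part of the patch) applies the same comparison to an inline sample instead of reading `glossary.md`:

```python
from difflib import unified_diff

# Inline sample standing in for glossary.md; "Cached Table" and
# "Bucket" are deliberately out of alphabetical order.
sample = """## A

### ACID

...

### Cached Table

...

### Bucket

...
"""

lines = sample.splitlines(keepends=True)

# Headings in the order they appear in the file.
items_a = [line for line in lines if line.startswith("###")]

# Headings after a case-insensitive sort of all lines, mirroring
# sorted(fh.readlines(), key=str.casefold) in the script.
items_b = [line for line in sorted(lines, key=str.casefold) if line.startswith("###")]

if items_a == items_b:
    print("result: OK")
else:
    print("result: differences found, see diff for details")
    print("".join(unified_diff(items_a, items_b, fromfile="before", tofile="after")), end="")
```

For this sample the two heading lists differ, so a unified diff between file order and sorted order is printed, mirroring the failure output of the CI script.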