From f45c49c1702984ce841a5dbce49e8d54e4309581 Mon Sep 17 00:00:00 2001 From: qiancai Date: Tue, 21 Dec 2021 11:43:24 +0800 Subject: [PATCH 01/11] add the prefix dm- to the filenames for 5 DM docs --- en/TOC.md | 10 +++++----- en/_index.md | 4 ++-- en/deploy-a-dm-cluster-using-tiup.md | 2 +- en/{alert-rules.md => dm-alert-rules.md} | 1 + en/{daily-check.md => dm-daily-check.md} | 2 +- ...ates.md => dm-generate-self-signed-certificates.md} | 1 + en/{glossary.md => dm-glossary.md} | 2 +- ...nts.md => dm-hardware-and-software-requirements.md} | 2 +- en/enable-tls.md | 2 +- en/faq.md | 2 +- en/migrate-from-mysql-aurora.md | 4 ++-- zh/TOC.md | 10 +++++----- zh/_index.md | 6 +++--- zh/deploy-a-dm-cluster-using-tiup.md | 2 +- zh/{alert-rules.md => dm-alert-rules.md} | 2 +- zh/{daily-check.md => dm-daily-check.md} | 2 +- ...ates.md => dm-generate-self-signed-certificates.md} | 1 + zh/{glossary.md => dm-glossary.md} | 2 +- ...nts.md => dm-hardware-and-software-requirements.md} | 2 +- zh/enable-tls.md | 2 +- zh/faq.md | 2 +- zh/migrate-from-mysql-aurora.md | 4 ++-- 22 files changed, 35 insertions(+), 32 deletions(-) rename en/{alert-rules.md => dm-alert-rules.md} (89%) rename en/{daily-check.md => dm-daily-check.md} (92%) rename en/{generate-self-signed-certificates.md => dm-generate-self-signed-certificates.md} (98%) rename en/{glossary.md => dm-glossary.md} (99%) rename en/{hardware-and-software-requirements.md => dm-hardware-and-software-requirements.md} (97%) rename zh/{alert-rules.md => dm-alert-rules.md} (82%) rename zh/{daily-check.md => dm-daily-check.md} (90%) rename zh/{generate-self-signed-certificates.md => dm-generate-self-signed-certificates.md} (98%) rename zh/{glossary.md => dm-glossary.md} (98%) rename zh/{hardware-and-software-requirements.md => dm-hardware-and-software-requirements.md} (97%) diff --git a/en/TOC.md b/en/TOC.md index d939ba5bf..f33d01a29 100644 --- a/en/TOC.md +++ b/en/TOC.md @@ -28,7 +28,7 @@ - [Migrate Incremental Data to 
TiDB](usage-scenario-incremental-migration.md) - [Migrate Tables when There Are More Columns Downstream](usage-scenario-downstream-more-columns.md) - Deploy - - [Software and Hardware Requirements](hardware-and-software-requirements.md) + - [Software and Hardware Requirements](dm-hardware-and-software-requirements.md) - Deploy a DM Cluster - [Use TiUP (Recommended)](deploy-a-dm-cluster-using-tiup.md) - [Use TiUP Offline](deploy-a-dm-cluster-using-tiup-offline.md) @@ -57,7 +57,7 @@ - [Manually Handle Sharding DDL Lock](manually-handling-sharding-ddl-locks.md) - [Manage Schemas of Tables to be Migrated](manage-schema.md) - [Handle Alerts](handle-alerts.md) - - [Daily Check](daily-check.md) + - [Daily Check](dm-daily-check.md) - Usage Scenarios - [Migrate from Aurora to TiDB](migrate-from-mysql-aurora.md) - [Migrate when TiDB Tables Have More Columns](usage-scenario-downstream-more-columns.md) @@ -80,12 +80,12 @@ - [Data Migration Task Configuration](task-configuration-guide.md) - Secure - [Enable TLS for DM Connections](enable-tls.md) - - [Generate Self-signed Certificates](generate-self-signed-certificates.md) + - [Generate Self-signed Certificates](dm-generate-self-signed-certificates.md) - [Monitoring Metrics](monitor-a-dm-cluster.md) - - [Alert Rules](alert-rules.md) + - [Alert Rules](dm-alert-rules.md) - [Error Codes](error-handling.md#handle-common-errors) - [FAQ](faq.md) -- [Glossary](glossary.md) +- [Glossary](dm-glossary.md) - Release Notes - v5.3 - [5.3.0](releases/5.3.0.md) diff --git a/en/_index.md b/en/_index.md index 310c966d1..917a4cd16 100644 --- a/en/_index.md +++ b/en/_index.md @@ -36,7 +36,7 @@ aliases: ['/docs/tidb-data-migration/dev/'] Deploy and Use -- [Software and Hardware Requirements](hardware-and-software-requirements.md) +- [Software and Hardware Requirements](dm-hardware-and-software-requirements.md) - [Deploy DM Using TiUP (Recommended)](deploy-a-dm-cluster-using-tiup.md) - [Deploy DM Using TiUP 
Offline](deploy-a-dm-cluster-using-tiup-offline.md) - [Deploy DM Using Binary](deploy-a-dm-cluster-using-binary.md) @@ -53,7 +53,7 @@ aliases: ['/docs/tidb-data-migration/dev/'] - [Upgrade DM](manually-upgrade-dm-1.0-to-2.0.md) - [Manually Handle Sharding DDL Locks](manually-handling-sharding-ddl-locks.md) - [Handle Alerts](handle-alerts.md) -- [Daily Check](daily-check.md) +- [Daily Check](dm-daily-check.md) diff --git a/en/deploy-a-dm-cluster-using-tiup.md b/en/deploy-a-dm-cluster-using-tiup.md index 86d1ada59..8fdf7336c 100644 --- a/en/deploy-a-dm-cluster-using-tiup.md +++ b/en/deploy-a-dm-cluster-using-tiup.md @@ -18,7 +18,7 @@ TiUP supports deploying DM v2.0 or later DM versions. This document introduces h When DM performs a full data replication task, the DM-worker is bound with only one upstream database. The DM-worker first exports the full amount of data locally, and then imports the data into the downstream database. Therefore, the worker's host needs sufficient storage space (The storage path is specified later when you create the task). -In addition, you need to meet the [hardware and software requirements](hardware-and-software-requirements.md) when deploying a DM cluster. +In addition, you need to meet the [hardware and software requirements](dm-hardware-and-software-requirements.md) when deploying a DM cluster. ## Step 1: Install TiUP on the control machine diff --git a/en/alert-rules.md b/en/dm-alert-rules.md similarity index 89% rename from en/alert-rules.md rename to en/dm-alert-rules.md index 4623467dc..1006d63a0 100644 --- a/en/alert-rules.md +++ b/en/dm-alert-rules.md @@ -1,6 +1,7 @@ --- title: DM Alert Information summary: Introduce the alert information of DM. 
+aliases: ['/tidb-data-migration/dev/alert-rules/'] --- # DM Alert Information diff --git a/en/daily-check.md b/en/dm-daily-check.md similarity index 92% rename from en/daily-check.md rename to en/dm-daily-check.md index 7588680ce..5da9af8d5 100644 --- a/en/daily-check.md +++ b/en/dm-daily-check.md @@ -1,7 +1,7 @@ --- title: Daily Check summary: Learn about the daily check of TiDB Data Migration (DM). -aliases: ['/docs/tidb-data-migration/dev/daily-check/'] +aliases: ['/docs/tidb-data-migration/dev/daily-check/','/tidb-data-migration/dev/daily-check/'] --- # Daily Check diff --git a/en/generate-self-signed-certificates.md b/en/dm-generate-self-signed-certificates.md similarity index 98% rename from en/generate-self-signed-certificates.md rename to en/dm-generate-self-signed-certificates.md index 381759239..48f5694fc 100644 --- a/en/generate-self-signed-certificates.md +++ b/en/dm-generate-self-signed-certificates.md @@ -1,6 +1,7 @@ --- title: Generate Self-signed Certificates summary: Use `openssl` to generate self-signed certificates. +aliases: ['/tidb-data-migration/dev/generate-self-signed-certificates/'] --- # Generate Self-signed Certificates diff --git a/en/glossary.md b/en/dm-glossary.md similarity index 99% rename from en/glossary.md rename to en/dm-glossary.md index 61a53aad6..c6dfb7bd3 100644 --- a/en/glossary.md +++ b/en/dm-glossary.md @@ -1,7 +1,7 @@ --- title: TiDB Data Migration Glossary summary: Learn the terms used in TiDB Data Migration. 
-aliases: ['/docs/tidb-data-migration/dev/glossary/'] +aliases: ['/docs/tidb-data-migration/dev/glossary/','/tidb-data-migration/dev/glossary/'] --- # TiDB Data Migration Glossary diff --git a/en/hardware-and-software-requirements.md b/en/dm-hardware-and-software-requirements.md similarity index 97% rename from en/hardware-and-software-requirements.md rename to en/dm-hardware-and-software-requirements.md index 8e989a679..d039e50a4 100644 --- a/en/hardware-and-software-requirements.md +++ b/en/dm-hardware-and-software-requirements.md @@ -1,7 +1,7 @@ --- title: Software and Hardware Requirements summary: Learn the software and hardware requirements for DM cluster. -aliases: ['/docs/tidb-data-migration/dev/hardware-and-software-requirements/'] +aliases: ['/docs/tidb-data-migration/dev/hardware-and-software-requirements/','/tidb-data-migration/dev/hardware-and-software-requirements/'] --- # Software and Hardware Requirements diff --git a/en/enable-tls.md b/en/enable-tls.md index 1bfb0fda9..4137e4247 100644 --- a/en/enable-tls.md +++ b/en/enable-tls.md @@ -19,7 +19,7 @@ This section introduces how to enable encrypted data transmission between DM-mas To generate self-signed certificates, you can use `openssl`, `cfssl` and other tools based on `openssl`, such as `easy-rsa`. - If you choose `openssl`, you can refer to [generating self-signed certificates](generate-self-signed-certificates.md). + If you choose `openssl`, you can refer to [generating self-signed certificates](dm-generate-self-signed-certificates.md). 2. Configure certificates. diff --git a/en/faq.md b/en/faq.md index 9eff85960..40f5d8f26 100644 --- a/en/faq.md +++ b/en/faq.md @@ -194,7 +194,7 @@ Since DM v2.0, `handle-error` replaces `sql-skip`. You can use `handle-error` in ## Why do `REPLACE` statements keep appearing in the downstream when DM is replicating? -You need to check whether the [safe mode](glossary.md#safe-mode) is automatically enabled for the task. 
If the task is automatically resumed after an error, or if there is high availability scheduling, then the safe mode is enabled because it is within 1 minutes after the task is started or resumed. +You need to check whether the [safe mode](dm-glossary.md#safe-mode) is automatically enabled for the task. If the task is automatically resumed after an error, or if there is high availability scheduling, then the safe mode is enabled because it is within 1 minute after the task is started or resumed. You can check the DM-worker log file and search for a line containing `change count`. If the `new count` in the line is not zero, the safe mode is enabled. To find out why it is enabled, check when it happens and if any errors are reported before. diff --git a/en/migrate-from-mysql-aurora.md b/en/migrate-from-mysql-aurora.md index 0f2db17c5..c87752cfa 100644 --- a/en/migrate-from-mysql-aurora.md +++ b/en/migrate-from-mysql-aurora.md @@ -57,7 +57,7 @@ To ensure a successful migration, you need to do prechecks before starting the m ### DM nodes deployment -As the hub of data migration, DM needs to connect to the upstream Aurora cluster and the downstream TiDB cluster. Therefore, you need to use the MySQL client to check whether the nodes in which DM is to be deployed can connect to the upstream and downstream. In addition, for details of DM requirements on hardware, software, and the node number, see [DM Cluster Software and Hardware Recommendations](hardware-and-software-requirements.md). +As the hub of data migration, DM needs to connect to the upstream Aurora cluster and the downstream TiDB cluster. Therefore, you need to use the MySQL client to check whether the nodes in which DM is to be deployed can connect to the upstream and downstream. In addition, for details of DM requirements on hardware, software, and the node number, see [DM Cluster Software and Hardware Recommendations](dm-hardware-and-software-requirements.md).
### Aurora @@ -187,7 +187,7 @@ When the data sources are successfully added, the return information of each dat > **Note:** > -> Because Aurora does not support FTWRL, write operations have to be paused when you only perform the full data migration to export data. See [AWS documentation](https://aws.amazon.com/premiumsupport/knowledge-center/mysqldump-error-rds-mysql-mariadb/?nc1=h_ls) for details. In this example, both full data migration and incremental replication are performed, and DM automatically enables the [`safe mode`](glossary.md#safe-mode) to solve this pause issue. To ensure data consistency in other combinations of task mode, see [AWS documentation](https://aws.amazon.com/premiumsupport/knowledge-center/mysqldump-error-rds-mysql-mariadb/?nc1=h_ls). +> Because Aurora does not support FTWRL, write operations have to be paused when you only perform the full data migration to export data. See [AWS documentation](https://aws.amazon.com/premiumsupport/knowledge-center/mysqldump-error-rds-mysql-mariadb/?nc1=h_ls) for details. In this example, both full data migration and incremental replication are performed, and DM automatically enables the [`safe mode`](dm-glossary.md#safe-mode) to solve this pause issue. To ensure data consistency in other combinations of task mode, see [AWS documentation](https://aws.amazon.com/premiumsupport/knowledge-center/mysqldump-error-rds-mysql-mariadb/?nc1=h_ls). This example migrates the existing data in Aurora and replicates incremental data to TiDB in real time, which is the **full data migration plus incremental replication** mode. 
According to the TiDB cluster information above, the added `source-id`, and the table to be migrated, save the following task configuration file `task.yaml`: diff --git a/zh/TOC.md b/zh/TOC.md index 5a4f10ed6..dd52a7c06 100644 --- a/zh/TOC.md +++ b/zh/TOC.md @@ -28,7 +28,7 @@ - [增量迁移数据到 TiDB](usage-scenario-incremental-migration.md) - [下游 TiDB 表结构存在更多列的数据迁移](usage-scenario-downstream-more-columns.md) - 部署使用 - - [软硬件要求](hardware-and-software-requirements.md) + - [软硬件要求](dm-hardware-and-software-requirements.md) - 部署 DM 集群 - [使用 TiUP(推荐)](deploy-a-dm-cluster-using-tiup.md) - [使用 TiUP 离线镜像](deploy-a-dm-cluster-using-tiup-offline.md) @@ -57,7 +57,7 @@ - [手动处理 Sharding DDL Lock](manually-handling-sharding-ddl-locks.md) - [管理迁移表的表结构](manage-schema.md) - [处理告警](handle-alerts.md) - - [日常巡检](daily-check.md) + - [日常巡检](dm-daily-check.md) - 使用场景 - [从 Aurora 迁移数据到 TiDB](migrate-from-mysql-aurora.md) - [TiDB 表结构存在更多列的迁移场景](usage-scenario-downstream-more-columns.md) @@ -80,12 +80,12 @@ - [数据迁移任务配置向导](task-configuration-guide.md) - 安全 - [为 DM 的连接开启加密传输](enable-tls.md) - - [生成自签名证书](generate-self-signed-certificates.md) + - [生成自签名证书](dm-generate-self-signed-certificates.md) - [监控指标](monitor-a-dm-cluster.md) - - [告警信息](alert-rules.md) + - [告警信息](dm-alert-rules.md) - [错误码](error-handling.md#常见故障处理方法) - [常见问题](faq.md) -- [术语表](glossary.md) +- [术语表](dm-glossary.md) - 版本发布历史 - v5.3 - [5.3.0](releases/5.3.0.md) diff --git a/zh/_index.md b/zh/_index.md index 06012ba30..ad6399046 100644 --- a/zh/_index.md +++ b/zh/_index.md @@ -36,7 +36,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/'] 部署使用 -- [软硬件要求](hardware-and-software-requirements.md) +- [软硬件要求](dm-hardware-and-software-requirements.md) - [使用 TiUP 部署集群(推荐)](deploy-a-dm-cluster-using-tiup.md) - [使用 TiUP 离线镜像部署集群](deploy-a-dm-cluster-using-tiup-offline.md) - [使用 Binary 部署集群](deploy-a-dm-cluster-using-binary.md) @@ -53,7 +53,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/'] - [升级版本](manually-upgrade-dm-1.0-to-2.0.md) - [手动处理 
Sharding DDL Lock](manually-handling-sharding-ddl-locks.md) - [处理告警](handle-alerts.md) -- [日常巡检](daily-check.md) +- [日常巡检](dm-daily-check.md) @@ -72,7 +72,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/'] - [DM 命令行参数](command-line-flags.md) - [配置概述](config-overview.md) - [监控指标](monitor-a-dm-cluster.md) -- [告警信息](alert-rules.md) +- [告警信息](dm-alert-rules.md) - [错误码](error-handling.md#常见故障处理方法) diff --git a/zh/deploy-a-dm-cluster-using-tiup.md b/zh/deploy-a-dm-cluster-using-tiup.md index 44e8c8635..3ba9b1a77 100644 --- a/zh/deploy-a-dm-cluster-using-tiup.md +++ b/zh/deploy-a-dm-cluster-using-tiup.md @@ -18,7 +18,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/deploy-a-dm-cluster-using-ansible/', 当 DM 执行全量数据复制任务时,每个 DM-worker 只绑定一个上游数据库。DM-worker 首先在上游导出全部数据,然后将数据导入下游数据库。因此,DM-worker 的主机需要有足够的存储空间,具体存储路径在后续创建迁移任务时指定。 -另外,部署 DM 集群需参照 [DM 集群软硬件环境需求](hardware-and-software-requirements.md),满足相应要求。 +另外,部署 DM 集群需参照 [DM 集群软硬件环境需求](dm-hardware-and-software-requirements.md),满足相应要求。 ## 第 1 步:在中控机上安装 TiUP 组件 diff --git a/zh/alert-rules.md b/zh/dm-alert-rules.md similarity index 82% rename from zh/alert-rules.md rename to zh/dm-alert-rules.md index ecb02fbe0..02032d695 100644 --- a/zh/alert-rules.md +++ b/zh/dm-alert-rules.md @@ -1,7 +1,7 @@ --- title: DM 告警信息 summary: 介绍 DM 的告警信息。 -aliases: ['/docs-cn/tidb-data-migration/dev/alert-rules/'] +aliases: ['/docs-cn/tidb-data-migration/dev/alert-rules/','/zh/tidb-data-migration/dev/alert-rules/'] --- # DM 告警信息 diff --git a/zh/daily-check.md b/zh/dm-daily-check.md similarity index 90% rename from zh/daily-check.md rename to zh/dm-daily-check.md index d4ea1cb87..b120f1321 100644 --- a/zh/daily-check.md +++ b/zh/dm-daily-check.md @@ -1,7 +1,7 @@ --- title: TiDB Data Migration 日常巡检 summary: 了解 DM 工具的日常巡检。 -aliases: ['/docs-cn/tidb-data-migration/dev/daily-check/'] +aliases: ['/docs-cn/tidb-data-migration/dev/daily-check/',‘/zh/tidb-data-migration/dev/daily-check/’] --- # TiDB Data Migration 日常巡检 diff --git 
a/zh/generate-self-signed-certificates.md b/zh/dm-generate-self-signed-certificates.md similarity index 98% rename from zh/generate-self-signed-certificates.md rename to zh/dm-generate-self-signed-certificates.md index a5955f619..b70492a43 100644 --- a/zh/generate-self-signed-certificates.md +++ b/zh/dm-generate-self-signed-certificates.md @@ -1,6 +1,7 @@ --- title: 生成自签名证书 summary: 了解如何生成自签名证书。 +aliases: ['/zh/tidb-data-migration/dev/generate-self-signed-certificates/'] --- # 生成自签名证书 diff --git a/zh/glossary.md b/zh/dm-glossary.md similarity index 98% rename from zh/glossary.md rename to zh/dm-glossary.md index 64a9a9182..996c53c9b 100644 --- a/zh/glossary.md +++ b/zh/dm-glossary.md @@ -1,7 +1,7 @@ --- title: TiDB Data Migration 术语表 summary: 学习 TiDB Data Migration 相关术语 -aliases: ['/docs-cn/tidb-data-migration/dev/glossary/'] +aliases: ['/docs-cn/tidb-data-migration/dev/glossary/','/zh/tidb-data-migration/dev/glossary'] --- # TiDB Data Migration 术语表 diff --git a/zh/hardware-and-software-requirements.md b/zh/dm-hardware-and-software-requirements.md similarity index 97% rename from zh/hardware-and-software-requirements.md rename to zh/dm-hardware-and-software-requirements.md index 45e6f62ee..78f173217 100644 --- a/zh/hardware-and-software-requirements.md +++ b/zh/dm-hardware-and-software-requirements.md @@ -1,7 +1,7 @@ --- title: DM 集群软硬件环境需求 summary: 了解部署 DM 集群的软件和硬件要求。 -aliases: ['/docs-cn/tidb-data-migration/dev/hardware-and-software-requirements/'] +aliases: ['/docs-cn/tidb-data-migration/dev/hardware-and-software-requirements/','/zh/tidb-data-migration/dev/hardware-and-software-requirements/'] --- # DM 集群软硬件环境需求 diff --git a/zh/enable-tls.md b/zh/enable-tls.md index 79e4e3ab8..b29dfcd1e 100644 --- a/zh/enable-tls.md +++ b/zh/enable-tls.md @@ -19,7 +19,7 @@ summary: 了解如何为 DM 的连接开启加密传输。 有多种工具可以生成自签名证书,如 `openssl`,`cfssl` 及 `easy-rsa` 等基于 `openssl` 的工具。 - 这里提供一个使用 `openssl` 生成证书的示例:[生成自签名证书](generate-self-signed-certificates.md)。 + 这里提供一个使用 `openssl` 
生成证书的示例:[生成自签名证书](dm-generate-self-signed-certificates.md)。 2. 配置证书。 diff --git a/zh/faq.md b/zh/faq.md index 492f6285a..366bafab4 100644 --- a/zh/faq.md +++ b/zh/faq.md @@ -182,7 +182,7 @@ if the DDL is not needed, you can use a filter rule with \"*\" schema-pattern to ## DM 同步时下游长时间出现 REPLACE 语句 -请检查是否符合 [safe mode 触发条件](glossary.md#safe-mode)。如果任务发生错误并自动恢复,或者发生高可用调度,会满足“启动或恢复任务的前 1 分钟”这一条件,因此启用 safe mode。 +请检查是否符合 [safe mode 触发条件](dm-glossary.md#safe-mode)。如果任务发生错误并自动恢复,或者发生高可用调度,会满足“启动或恢复任务的前 1 分钟”这一条件,因此启用 safe mode。 可以检查 DM-worker 日志,在其中搜索包含 `change count` 的行,该行的 `new count` 非零时会启用 safe mode。检查 safe mode 启用时间以及启用前是否有报错,以定位启用原因。 diff --git a/zh/migrate-from-mysql-aurora.md b/zh/migrate-from-mysql-aurora.md index cf51ab07f..b6023f052 100644 --- a/zh/migrate-from-mysql-aurora.md +++ b/zh/migrate-from-mysql-aurora.md @@ -57,7 +57,7 @@ Aurora 集群数据与迁移计划如下: ### DM 部署节点 -DM 作为数据迁移的核心,需要正常连接上游 Aurora 集群与下游 TiDB 集群,因此通过 MySQL client 等方式检查部署 DM 的节点是否能连通上下游。除此以外,关于 DM 节点数目、软硬件等要求,参见 [DM 集群软硬件环境需求](hardware-and-software-requirements.md)。 +DM 作为数据迁移的核心,需要正常连接上游 Aurora 集群与下游 TiDB 集群,因此通过 MySQL client 等方式检查部署 DM 的节点是否能连通上下游。除此以外,关于 DM 节点数目、软硬件等要求,参见 [DM 集群软硬件环境需求](dm-hardware-and-software-requirements.md)。 ### Aurora @@ -187,7 +187,7 @@ tiup dmctl --master-addr 127.0.0.1:8261 operate-source create dm-test/source2.ya > **注意:** > -> 由于 Aurora 不支持 FTWRL,仅使用全量模式导出数据时需要暂停写入,参见 [AWS 官网说明](https://aws.amazon.com/cn/premiumsupport/knowledge-center/mysqldump-error-rds-mysql-mariadb/)。在示例的全量+增量模式下,DM 将自动启用 [`safe mode`](glossary.md#safe-mode) 解决这一问题。在其他模式下如需保证数据一致,参见 [AWS 官网说明](https://aws.amazon.com/cn/premiumsupport/knowledge-center/mysqldump-error-rds-mysql-mariadb/)操作。 +> 由于 Aurora 不支持 FTWRL,仅使用全量模式导出数据时需要暂停写入,参见 [AWS 官网说明](https://aws.amazon.com/cn/premiumsupport/knowledge-center/mysqldump-error-rds-mysql-mariadb/)。在示例的全量+增量模式下,DM 将自动启用 [`safe mode`](dm-glossary.md#safe-mode) 解决这一问题。在其他模式下如需保证数据一致,参见 [AWS 
官网说明](https://aws.amazon.com/cn/premiumsupport/knowledge-center/mysqldump-error-rds-mysql-mariadb/)操作。 本示例选择迁移 Aurora 已有数据并将新增数据实时迁移给 TiDB,即**全量+增量**模式。根据上文的 TiDB 集群信息、已添加的 `source-id`、要迁移的表,保存如下任务配置文件 `task.yaml`: From 4d49ace3e0927b5bddb69d03a6e47c423becc36e Mon Sep 17 00:00:00 2001 From: Grace Cai Date: Tue, 21 Dec 2021 19:14:36 +0800 Subject: [PATCH 02/11] Update zh/dm-daily-check.md Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com> --- zh/dm-daily-check.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/zh/dm-daily-check.md b/zh/dm-daily-check.md index b120f1321..c330b41ba 100644 --- a/zh/dm-daily-check.md +++ b/zh/dm-daily-check.md @@ -1,7 +1,7 @@ --- title: TiDB Data Migration 日常巡检 summary: 了解 DM 工具的日常巡检。 -aliases: ['/docs-cn/tidb-data-migration/dev/daily-check/',‘/zh/tidb-data-migration/dev/daily-check/’] +aliases: ['/docs-cn/tidb-data-migration/dev/daily-check/','/zh/tidb-data-migration/dev/daily-check/'] --- # TiDB Data Migration 日常巡检 From a030378a14c87afbc61ce4961f87b0ff14df0862 Mon Sep 17 00:00:00 2001 From: qiancai Date: Tue, 21 Dec 2021 19:15:46 +0800 Subject: [PATCH 03/11] rename_files --- en/TOC.md | 58 +-- en/_index.md | 10 +- en/benchmark-v1.0-ga.md | 6 +- en/benchmark-v2.0-ga.md | 6 +- en/dm-alert-rules.md | 27 +- ...hmark-v5.3.0.md => dm-benchmark-v5.3.0.md} | 7 +- ...line-flags.md => dm-command-line-flags.md} | 1 + ...nfig-overview.md => dm-config-overview.md} | 10 +- en/{create-task.md => dm-create-task.md} | 3 +- en/dm-daily-check.md | 2 +- en/{enable-tls.md => dm-enable-tls.md} | 1 + ...error-handling.md => dm-error-handling.md} | 12 +- ...t-config.md => dm-export-import-config.md} | 1 + en/{faq.md => dm-faq.md} | 4 +- en/dm-glossary.md | 10 +- en/{handle-alerts.md => dm-handle-alerts.md} | 384 +++++++++--------- ...ues.md => dm-handle-performance-issues.md} | 3 +- en/dm-hardware-and-software-requirements.md | 2 +- en/{key-features.md => dm-key-features.md} | 6 +- en/{manage-schema.md => 
dm-manage-schema.md} | 1 + en/{manage-source.md => dm-manage-source.md} | 5 +- en/{open-api.md => dm-open-api.md} | 1 + en/{overview.md => dm-overview.md} | 12 +- en/{pause-task.md => dm-pause-task.md} | 1 + ...ormance-test.md => dm-performance-test.md} | 9 +- en/{precheck.md => dm-precheck.md} | 2 +- en/{query-status.md => dm-query-status.md} | 2 +- en/{resume-task.md => dm-resume-task.md} | 1 + ...ile.md => dm-source-configuration-file.md} | 4 +- en/{stop-task.md => dm-stop-task.md} | 3 +- ...uide.md => dm-task-configuration-guide.md} | 13 +- ...figuration.md => dm-tune-configuration.md} | 1 + en/feature-expression-filter.md | 4 +- en/feature-shard-merge-pessimistic.md | 4 +- en/handle-failed-ddl-statements.md | 2 +- en/maintain-dm-using-tiup.md | 2 +- en/manually-upgrade-dm-1.0-to-2.0.md | 10 +- en/migrate-data-using-dm.md | 6 +- en/migrate-from-mysql-aurora.md | 4 +- en/quick-create-migration-task.md | 2 +- en/quick-start-create-source.md | 4 +- en/relay-log.md | 4 +- en/task-configuration-file-full.md | 10 +- en/task-configuration-file.md | 6 +- en/usage-scenario-downstream-more-columns.md | 8 +- en/usage-scenario-shard-merge.md | 10 +- en/usage-scenario-simple-migration.md | 12 +- zh/TOC.md | 56 +-- zh/_index.md | 14 +- zh/benchmark-v1.0-ga.md | 6 +- zh/benchmark-v2.0-ga.md | 6 +- zh/dm-alert-rules.md | 2 +- ...hmark-v5.3.0.md => dm-benchmark-v5.3.0.md} | 7 +- ...line-flags.md => dm-command-line-flags.md} | 2 +- ...nfig-overview.md => dm-config-overview.md} | 10 +- zh/{create-task.md => dm-create-task.md} | 4 +- zh/dm-daily-check.md | 2 +- zh/{enable-tls.md => dm-enable-tls.md} | 1 + ...error-handling.md => dm-error-handling.md} | 12 +- ...t-config.md => dm-export-import-config.md} | 1 + zh/{faq.md => dm-faq.md} | 4 +- zh/dm-glossary.md | 10 +- zh/{handle-alerts.md => dm-handle-alerts.md} | 26 +- ...ues.md => dm-handle-performance-issues.md} | 4 +- zh/dm-hardware-and-software-requirements.md | 2 +- zh/{key-features.md => dm-key-features.md} | 6 +- 
zh/{manage-schema.md => dm-manage-schema.md} | 1 + zh/{manage-source.md => dm-manage-source.md} | 6 +- zh/{open-api.md => dm-open-api.md} | 1 + zh/{overview.md => dm-overview.md} | 12 +- zh/{pause-task.md => dm-pause-task.md} | 2 +- ...ormance-test.md => dm-performance-test.md} | 10 +- zh/{precheck.md => dm-precheck.md} | 2 +- zh/{query-status.md => dm-query-status.md} | 2 +- zh/{resume-task.md => dm-resume-task.md} | 2 +- ...ile.md => dm-source-configuration-file.md} | 4 +- zh/{stop-task.md => dm-stop-task.md} | 4 +- ...uide.md => dm-task-configuration-guide.md} | 13 +- ...figuration.md => dm-tune-configuration.md} | 2 +- zh/feature-expression-filter.md | 4 +- zh/feature-shard-merge-pessimistic.md | 4 +- zh/handle-failed-ddl-statements.md | 2 +- zh/maintain-dm-using-tiup.md | 2 +- zh/manually-upgrade-dm-1.0-to-2.0.md | 10 +- zh/migrate-data-using-dm.md | 6 +- zh/migrate-from-mysql-aurora.md | 4 +- zh/quick-create-migration-task.md | 2 +- zh/quick-start-create-source.md | 4 +- zh/relay-log.md | 6 +- zh/shard-merge-best-practices.md | 2 +- zh/task-configuration-file-full.md | 10 +- zh/task-configuration-file.md | 6 +- zh/usage-scenario-downstream-more-columns.md | 8 +- zh/usage-scenario-shard-merge.md | 10 +- zh/usage-scenario-simple-migration.md | 12 +- 95 files changed, 526 insertions(+), 498 deletions(-) rename en/{benchmark-v5.3.0.md => dm-benchmark-v5.3.0.md} (97%) rename en/{command-line-flags.md => dm-command-line-flags.md} (98%) rename en/{config-overview.md => dm-config-overview.md} (80%) rename en/{create-task.md => dm-create-task.md} (93%) rename en/{enable-tls.md => dm-enable-tls.md} (98%) rename en/{error-handling.md => dm-error-handling.md} (97%) rename en/{export-import-config.md => dm-export-import-config.md} (97%) rename en/{faq.md => dm-faq.md} (98%) rename en/{handle-alerts.md => dm-handle-alerts.md} (84%) rename en/{handle-performance-issues.md => dm-handle-performance-issues.md} (97%) rename en/{key-features.md => dm-key-features.md} (98%) 
rename en/{manage-schema.md => dm-manage-schema.md} (99%) rename en/{manage-source.md => dm-manage-source.md} (96%) rename en/{open-api.md => dm-open-api.md} (99%) rename en/{overview.md => dm-overview.md} (80%) rename en/{pause-task.md => dm-pause-task.md} (97%) rename en/{performance-test.md => dm-performance-test.md} (94%) rename en/{precheck.md => dm-precheck.md} (97%) rename en/{query-status.md => dm-query-status.md} (99%) rename en/{resume-task.md => dm-resume-task.md} (96%) rename en/{source-configuration-file.md => dm-source-configuration-file.md} (97%) rename en/{stop-task.md => dm-stop-task.md} (92%) rename en/{task-configuration-guide.md => dm-task-configuration-guide.md} (97%) rename en/{tune-configuration.md => dm-tune-configuration.md} (98%) rename zh/{benchmark-v5.3.0.md => dm-benchmark-v5.3.0.md} (93%) rename zh/{command-line-flags.md => dm-command-line-flags.md} (98%) rename zh/{config-overview.md => dm-config-overview.md} (74%) rename zh/{create-task.md => dm-create-task.md} (91%) rename zh/{enable-tls.md => dm-enable-tls.md} (98%) rename zh/{error-handling.md => dm-error-handling.md} (97%) rename zh/{export-import-config.md => dm-export-import-config.md} (96%) rename zh/{faq.md => dm-faq.md} (98%) rename zh/{handle-alerts.md => dm-handle-alerts.md} (75%) rename zh/{handle-performance-issues.md => dm-handle-performance-issues.md} (97%) rename zh/{key-features.md => dm-key-features.md} (98%) rename zh/{manage-schema.md => dm-manage-schema.md} (99%) rename zh/{manage-source.md => dm-manage-source.md} (95%) rename zh/{open-api.md => dm-open-api.md} (99%) rename zh/{overview.md => dm-overview.md} (82%) rename zh/{pause-task.md => dm-pause-task.md} (95%) rename zh/{performance-test.md => dm-performance-test.md} (93%) rename zh/{precheck.md => dm-precheck.md} (97%) rename zh/{query-status.md => dm-query-status.md} (99%) rename zh/{resume-task.md => dm-resume-task.md} (92%) rename zh/{source-configuration-file.md => dm-source-configuration-file.md} (97%) 
rename zh/{stop-task.md => dm-stop-task.md} (89%) rename zh/{task-configuration-guide.md => dm-task-configuration-guide.md} (95%) rename zh/{tune-configuration.md => dm-tune-configuration.md} (98%) diff --git a/en/TOC.md b/en/TOC.md index f33d01a29..8f65b68eb 100644 --- a/en/TOC.md +++ b/en/TOC.md @@ -2,12 +2,12 @@ - About DM - - [DM Overview](overview.md) + - [DM Overview](dm-overview.md) - [DM 5.3 Release Notes](releases/5.3.0.md) - Basic Features - - [Table Routing](key-features.md#table-routing) - - [Block and Allow Lists](key-features.md#block-and-allow-table-lists) - - [Binlog Event Filter](key-features.md#binlog-event-filter) + - [Table Routing](dm-key-features.md#table-routing) + - [Block and Allow Lists](dm-key-features.md#block-and-allow-table-lists) + - [Binlog Event Filter](dm-key-features.md#binlog-event-filter) - Advanced Features - Merge and Migrate Data from Sharded Tables - [Overview](feature-shard-merge.md) @@ -16,7 +16,7 @@ - [Migrate from MySQL Databases that Use GH-ost/PT-osc](feature-online-ddl.md) - [Filter Certain Row Changes Using SQL Expressions](feature-expression-filter.md) - [DM Architecture](dm-arch.md) - - [Benchmarks](benchmark-v5.3.0.md) + - [Benchmarks](dm-benchmark-v5.3.0.md) - Quick Start - [Quick Start](quick-start-with-dm.md) - [Deploy a DM cluster Using TiUP](deploy-a-dm-cluster-using-tiup.md) @@ -35,56 +35,56 @@ - [Use Binary](deploy-a-dm-cluster-using-binary.md) - [Use Kubernetes](https://docs.pingcap.com/tidb-in-kubernetes/dev/deploy-tidb-dm) - [Migrate Data Using DM](migrate-data-using-dm.md) - - [Test DM Performance](performance-test.md) + - [Test DM Performance](dm-performance-test.md) - Maintain - Tools - [Maintain DM Clusters Using TiUP (Recommended)](maintain-dm-using-tiup.md) - [Maintain DM Clusters Using dmctl](dmctl-introduction.md) - - [Maintain DM Clusters Using OpenAPI](open-api.md) + - [Maintain DM Clusters Using OpenAPI](dm-open-api.md) - Cluster Upgrade - [Manually Upgrade from v1.0.x to 
v2.0+](manually-upgrade-dm-1.0-to-2.0.md) - - [Manage Data Source](manage-source.md) + - [Manage Data Source](dm-manage-source.md) - Manage a Data Migration Task - - [Task Configuration Guide](task-configuration-guide.md) - - [Precheck a Task](precheck.md) - - [Create a Task](create-task.md) - - [Query Status](query-status.md) - - [Pause a Task](pause-task.md) - - [Resume a Task](resume-task.md) - - [Stop a Task](stop-task.md) - - [Export and Import Data Sources and Task Configuration of Clusters](export-import-config.md) + - [Task Configuration Guide](dm-task-configuration-guide.md) + - [Precheck a Task](dm-precheck.md) + - [Create a Task](dm-create-task.md) + - [Query Status](dm-query-status.md) + - [Pause a Task](dm-pause-task.md) + - [Resume a Task](dm-resume-task.md) + - [Stop a Task](dm-stop-task.md) + - [Export and Import Data Sources and Task Configuration of Clusters](dm-export-import-config.md) - [Handle Failed DDL Statements](handle-failed-ddl-statements.md) - [Manually Handle Sharding DDL Lock](manually-handling-sharding-ddl-locks.md) - - [Manage Schemas of Tables to be Migrated](manage-schema.md) - - [Handle Alerts](handle-alerts.md) + - [Manage Schemas of Tables to be Migrated](dm-manage-schema.md) + - [Handle Alerts](dm-handle-alerts.md) - [Daily Check](dm-daily-check.md) - Usage Scenarios - [Migrate from Aurora to TiDB](migrate-from-mysql-aurora.md) - [Migrate when TiDB Tables Have More Columns](usage-scenario-downstream-more-columns.md) - [Switch the MySQL Instance to Be Migrated](usage-scenario-master-slave-switch.md) - Troubleshoot - - [Handle Errors](error-handling.md) - - [Handle Performance Issues](handle-performance-issues.md) + - [Handle Errors](dm-error-handling.md) + - [Handle Performance Issues](dm-handle-performance-issues.md) - Performance Tuning - - [Optimize Configuration](tune-configuration.md) + - [Optimize Configuration](dm-tune-configuration.md) - Reference - Architecture - - [DM Architecture Overview](overview.md) + - [DM 
Architecture Overview](dm-overview.md) - [DM-worker](dm-worker-intro.md) - - [Command-line Flags](command-line-flags.md) + - [Command-line Flags](dm-command-line-flags.md) - Configuration - - [Overview](config-overview.md) + - [Overview](dm-config-overview.md) - [DM-master Configuration](dm-master-configuration-file.md) - [DM-worker Configuration](dm-worker-configuration-file.md) - - [Upstream Database Configuration](source-configuration-file.md) - - [Data Migration Task Configuration](task-configuration-guide.md) + - [Upstream Database Configuration](dm-source-configuration-file.md) + - [Data Migration Task Configuration](dm-task-configuration-guide.md) - Secure - - [Enable TLS for DM Connections](enable-tls.md) + - [Enable TLS for DM Connections](dm-enable-tls.md) - [Generate Self-signed Certificates](dm-generate-self-signed-certificates.md) - [Monitoring Metrics](monitor-a-dm-cluster.md) - [Alert Rules](dm-alert-rules.md) - - [Error Codes](error-handling.md#handle-common-errors) -- [FAQ](faq.md) + - [Error Codes](dm-error-handling.md#handle-common-errors) +- [FAQ](dm-faq.md) - [Glossary](dm-glossary.md) - Release Notes - v5.3 diff --git a/en/_index.md b/en/_index.md index 917a4cd16..759fff170 100644 --- a/en/_index.md +++ b/en/_index.md @@ -17,7 +17,7 @@ aliases: ['/docs/tidb-data-migration/dev/'] About TiDB Data Migration -- [What is DM?](overview.md) +- [What is DM?](dm-overview.md) - [DM Architecture](dm-arch.md) - [Performance](benchmark-v2.0-ga.md) @@ -41,7 +41,7 @@ aliases: ['/docs/tidb-data-migration/dev/'] - [Deploy DM Using TiUP Offline](deploy-a-dm-cluster-using-tiup-offline.md) - [Deploy DM Using Binary](deploy-a-dm-cluster-using-binary.md) - [Use DM to Migrate Data](migrate-data-using-dm.md) -- [DM Performance Test](performance-test.md) +- [DM Performance Test](dm-performance-test.md) @@ -52,7 +52,7 @@ aliases: ['/docs/tidb-data-migration/dev/'] - [Maintain DM Clusters Using dmctl](dmctl-introduction.md) - [Upgrade 
DM](manually-upgrade-dm-1.0-to-2.0.md) - [Manually Handle Sharding DDL Locks](manually-handling-sharding-ddl-locks.md) -- [Handle Alerts](handle-alerts.md) +- [Handle Alerts](dm-handle-alerts.md) - [Daily Check](dm-daily-check.md) @@ -69,9 +69,9 @@ aliases: ['/docs/tidb-data-migration/dev/'] Reference - [DM Architecture](dm-arch.md) -- [Configuration File Overview](config-overview.md) +- [Configuration File Overview](dm-config-overview.md) - [Monitoring Metrics and Alerts](monitor-a-dm-cluster.md) -- [Error Handling](error-handling.md) +- [Error Handling](dm-error-handling.md) diff --git a/en/benchmark-v1.0-ga.md b/en/benchmark-v1.0-ga.md index 100fbfff5..c002b1cea 100644 --- a/en/benchmark-v1.0-ga.md +++ b/en/benchmark-v1.0-ga.md @@ -60,11 +60,11 @@ Others: ## Test scenario -You can use a simple data migration flow, that is, MySQL1 (172.16.4.40) -> DM-worker -> TiDB (172.16.4.41), to do the test. For detailed test scenario description, see [performance test](performance-test.md). +You can use a simple data migration flow, that is, MySQL1 (172.16.4.40) -> DM-worker -> TiDB (172.16.4.41), to do the test. For detailed test scenario description, see [performance test](dm-performance-test.md). ### Full import benchmark case -For details, see [Full Import Benchmark Case](performance-test.md#full-import-benchmark-case). +For details, see [Full Import Benchmark Case](dm-performance-test.md#full-import-benchmark-case). #### Full import benchmark result @@ -105,7 +105,7 @@ Full import data size in this benchmark case is 3.78 GB, load unit pool size use ### Incremental replication benchmark case -For details about the test method, see [Incremental Replication Benchmark Case](performance-test.md#incremental-replication-benchmark-case). +For details about the test method, see [Incremental Replication Benchmark Case](dm-performance-test.md#incremental-replication-benchmark-case). 
#### Benchmark result for incremental replication diff --git a/en/benchmark-v2.0-ga.md b/en/benchmark-v2.0-ga.md index 3d163f404..7cca6a9cc 100644 --- a/en/benchmark-v2.0-ga.md +++ b/en/benchmark-v2.0-ga.md @@ -59,11 +59,11 @@ Others: ## Test scenario -You can use a simple data migration flow, that is, MySQL1 (172.16.5.33) -> DM-worker(172.16.5.32) -> TiDB (172.16.5.34), to do the test. For detailed test scenario description, see [performance test](performance-test.md). +You can use a simple data migration flow, that is, MySQL1 (172.16.5.33) -> DM-worker(172.16.5.32) -> TiDB (172.16.5.34), to do the test. For detailed test scenario description, see [performance test](dm-performance-test.md). ### Full import benchmark case -For detailed full import test method, see [Full Import Benchmark Case](performance-test.md#full-import-benchmark-case). +For detailed full import test method, see [Full Import Benchmark Case](dm-performance-test.md#full-import-benchmark-case). #### Full import benchmark results @@ -105,7 +105,7 @@ In this test, the full amount of imported data is 3.78 GB and the `pool-size` of ### Incremental replication benchmark case -For detailed incremental replication test method, see [Incremental Replication Benchmark Case](performance-test.md#incremental-replication-benchmark-case). +For detailed incremental replication test method, see [Incremental Replication Benchmark Case](dm-performance-test.md#incremental-replication-benchmark-case). #### Incremental replication benchmark result diff --git a/en/dm-alert-rules.md b/en/dm-alert-rules.md index 1006d63a0..ad0f3b905 100644 --- a/en/dm-alert-rules.md +++ b/en/dm-alert-rules.md @@ -1,13 +1,14 @@ ---- -title: DM Alert Information -summary: Introduce the alert information of DM. -aliases: ['/tidb-data-migration/dev/alert-rules/'] ---- - -# DM Alert Information - -The [alert system](migrate-data-using-dm.md#step-8-monitor-the-task-and-check-logs) is deployed by default when you deploy a DM cluster using TiUP. 
- -For more information about DM alert rules and the solutions, refer to [handle alerts](handle-alerts.md). - -Both DM alert information and monitoring metrics are based on Prometheus. For more information about their relationship, refer to [DM monitoring metrics](monitor-a-dm-cluster.md). +--- +title: DM Alert Information +summary: Introduce the alert information of DM. +aliases: ['/tidb-data-migration/dev/alert-rules/'] +--- + +# DM Alert Information + +The [alert system](migrate-data-using-dm.md#step-8-monitor-the-task-and-check-logs) is deployed by default when you deploy a DM cluster using TiUP. + +For more information about DM alert rules and the solutions, refer to [handle alerts](dm-handle-alerts.md). + +Both DM alert information and monitoring metrics are based on Prometheus. For more information about their relationship, refer to [DM monitoring metrics](monitor-a-dm-cluster.md). diff --git a/en/benchmark-v5.3.0.md b/en/dm-benchmark-v5.3.0.md similarity index 97% rename from en/benchmark-v5.3.0.md rename to en/dm-benchmark-v5.3.0.md index f5e098d48..9db15b8ac 100644 --- a/en/benchmark-v5.3.0.md +++ b/en/dm-benchmark-v5.3.0.md @@ -1,6 +1,7 @@ --- title: DM 5.3.0 Benchmark Report summary: Learn about the performance of 5.3.0. +aliases: ['/tidb-data-migration/dev/benchmark-v5.3.0/'] --- # DM 5.3.0 Benchmark Report @@ -53,11 +54,11 @@ Others: ## Test scenario -You can use a simple data migration flow, that is, MySQL1 (172.16.6.1) -> DM-worker(172.16.6.2) -> TiDB(load balance) (172.16.6.4), to do the test. For detailed test scenario description, see [performance test](performance-test.md). +You can use a simple data migration flow, that is, MySQL1 (172.16.6.1) -> DM-worker(172.16.6.2) -> TiDB(load balance) (172.16.6.4), to do the test. For detailed test scenario description, see [performance test](dm-performance-test.md).
### Full import benchmark case -For detailed full import test method, see [Full Import Benchmark Case](performance-test.md#full-import-benchmark-case). +For detailed full import test method, see [Full Import Benchmark Case](dm-performance-test.md#full-import-benchmark-case). #### Full import benchmark results @@ -99,7 +100,7 @@ In this test, the full amount of imported data is 3.78 GB and the `pool-size` of ### Incremental replication benchmark case -For detailed incremental replication test method, see [Incremental Replication Benchmark Case](performance-test.md#incremental-replication-benchmark-case). +For detailed incremental replication test method, see [Incremental Replication Benchmark Case](dm-performance-test.md#incremental-replication-benchmark-case). #### Incremental replication benchmark result diff --git a/en/command-line-flags.md b/en/dm-command-line-flags.md similarity index 98% rename from en/command-line-flags.md rename to en/dm-command-line-flags.md index 2bbd720b2..854db8e9b 100644 --- a/en/command-line-flags.md +++ b/en/dm-command-line-flags.md @@ -1,6 +1,7 @@ --- title: Command-line Flags summary: Learn about the command-line flags in DM. +aliases: ['/tidb-data-migration/dev/command-line-flags/'] --- # Command-line Flags diff --git a/en/config-overview.md b/en/dm-config-overview.md similarity index 80% rename from en/config-overview.md rename to en/dm-config-overview.md index d42d477c9..079d94f49 100644 --- a/en/config-overview.md +++ b/en/dm-config-overview.md @@ -1,7 +1,7 @@ --- title: Data Migration Configuration File Overview summary: This document gives an overview of Data Migration configuration files. -aliases: ['/docs/tidb-data-migration/dev/config-overview/'] +aliases: ['/docs/tidb-data-migration/dev/config-overview/','/tidb-data-migration/dev/config-overview/'] --- # Data Migration Configuration File Overview @@ -12,7 +12,7 @@ This document gives an overview of configuration files of DM (Data Migration).
- `dm-master.toml`: The configuration file of running the DM-master process, including the topology information and the logs of the DM-master. For more details, refer to [DM-master Configuration File](dm-master-configuration-file.md). - `dm-worker.toml`: The configuration file of running the DM-worker process, including the topology information and the logs of the DM-worker. For more details, refer to [DM-worker Configuration File](dm-worker-configuration-file.md). -- `source.yaml`: The configuration of the upstream database such as MySQL and MariaDB. For more details, refer to [Upstream Database Configuration File](source-configuration-file.md). +- `source.yaml`: The configuration of the upstream database such as MySQL and MariaDB. For more details, refer to [Upstream Database Configuration File](dm-source-configuration-file.md). ## DM migration task configuration @@ -20,9 +20,9 @@ This document gives an overview of configuration files of DM (Data Migration). You can take the following steps to create a data migration task: -1. [Load the data source configuration into the DM cluster using dmctl](manage-source.md#operate-data-source). -2. Refer to the description in the [Task Configuration Guide](task-configuration-guide.md) and create the configuration file `your_task.yaml`. -3. [Create the data migration task using dmctl](create-task.md). +1. [Load the data source configuration into the DM cluster using dmctl](dm-manage-source.md#operate-data-source). +2. Refer to the description in the [Task Configuration Guide](dm-task-configuration-guide.md) and create the configuration file `your_task.yaml`. +3. [Create the data migration task using dmctl](dm-create-task.md). 
### Important concepts diff --git a/en/create-task.md b/en/dm-create-task.md similarity index 93% rename from en/create-task.md rename to en/dm-create-task.md index f36990664..02dca4db7 100644 --- a/en/create-task.md +++ b/en/dm-create-task.md @@ -1,11 +1,12 @@ --- title: Create a Data Migration Task summary: Learn how to create a data migration task in TiDB Data Migration. +aliases: ['/tidb-data-migration/dev/create-task/'] --- # Create a Data Migration Task -You can use the `start-task` command to create a data migration task. When the data migration task is started, DM [prechecks privileges and configurations](precheck.md). +You can use the `start-task` command to create a data migration task. When the data migration task is started, DM [prechecks privileges and configurations](dm-precheck.md). {{< copyable "" >}} diff --git a/en/dm-daily-check.md b/en/dm-daily-check.md index 5da9af8d5..ef8954666 100644 --- a/en/dm-daily-check.md +++ b/en/dm-daily-check.md @@ -8,7 +8,7 @@ aliases: ['/docs/tidb-data-migration/dev/daily-check/','/tidb-data-migration/dev This document summarizes how to perform a daily check on TiDB Data Migration (DM). -+ Method 1: Execute the `query-status` command to check the running status of the task and the error output (if any). For details, see [Query Status](query-status.md). ++ Method 1: Execute the `query-status` command to check the running status of the task and the error output (if any). For details, see [Query Status](dm-query-status.md). + Method 2: If Prometheus and Grafana are correctly deployed when you deploy the DM cluster using TiUP, you can view DM monitoring metrics in Grafana. For example, suppose that the Grafana's address is `172.16.10.71`, go to , enter the Grafana dashboard, and select the DM Dashboard to check monitoring metrics of DM. For more information of these metrics, see [DM Monitoring Metrics](monitor-a-dm-cluster.md).
diff --git a/en/enable-tls.md b/en/dm-enable-tls.md similarity index 98% rename from en/enable-tls.md rename to en/dm-enable-tls.md index 4137e4247..e4cc52818 100644 --- a/en/enable-tls.md +++ b/en/dm-enable-tls.md @@ -1,6 +1,7 @@ --- title: Enable TLS for DM Connections summary: Learn how to enable TLS for DM connections. +aliases: ['/tidb-data-migration/dev/enable-tls/'] --- # Enable TLS for DM Connections diff --git a/en/error-handling.md b/en/dm-error-handling.md similarity index 97% rename from en/error-handling.md rename to en/dm-error-handling.md index 7af062fa8..1e3879170 100644 --- a/en/error-handling.md +++ b/en/dm-error-handling.md @@ -1,7 +1,7 @@ --- title: Handle Errors summary: Learn about the error system and how to handle common errors when you use DM. -aliases: ['/docs/tidb-data-migration/dev/error-handling/','/docs/tidb-data-migration/dev/troubleshoot-dm/','/docs/tidb-data-migration/dev/error-system/','/tidb-data-migration/dev/error-system/'] +aliases: ['/docs/tidb-data-migration/dev/error-handling/','/docs/tidb-data-migration/dev/troubleshoot-dm/','/docs/tidb-data-migration/dev/error-system/','/tidb-data-migration/dev/error-system/','/tidb-data-migration/dev/error-handling/'] --- # Handle Errors @@ -90,7 +90,7 @@ If you encounter an error while running DM, take the following steps to troubles resume-task ${task name} ``` -However, you need to reset the data migration task in some cases. For details, refer to [Reset the Data Migration Task](faq.md#how-to-reset-the-data-migration-task). +However, you need to reset the data migration task in some cases. For details, refer to [Reset the Data Migration Task](dm-faq.md#how-to-reset-the-data-migration-task). ## Handle common errors @@ -102,8 +102,8 @@ However, you need to reset the data migration task in some cases. For details, r | `code=10005` | Occurs when performing the `QUERY` type SQL statements.
| | | `code=10006` | Occurs when performing the `EXECUTE` type SQL statements, including DDL statements and DML statements of the `INSERT`, `UPDATE`or `DELETE` type. For more detailed error information, check the error message which usually includes the error code and error information returned for database operations. | | -| `code=11006` | Occurs when the built-in parser of DM parses the incompatible DDL statements. | Refer to [Data Migration - incompatible DDL statements](faq.md#how-to-handle-incompatible-ddl-statements) for solution. | -| `code=20010` | Occurs when decrypting the database password that is provided in task configuration. | Check whether the downstream database password provided in the configuration task is [correctly encrypted using dmctl](manage-source.md#encrypt-the-database-password). | +| `code=11006` | Occurs when the built-in parser of DM parses the incompatible DDL statements. | Refer to [Data Migration - incompatible DDL statements](dm-faq.md#how-to-handle-incompatible-ddl-statements) for solution. | +| `code=20010` | Occurs when decrypting the database password that is provided in task configuration. | Check whether the downstream database password provided in the configuration task is [correctly encrypted using dmctl](dm-manage-source.md#encrypt-the-database-password). | | `code=26002` | The task check fails to establish database connection. For more detailed error information, check the error message which usually includes the error code and error information returned for database operations. | Check whether the machine where DM-master is located has permission to access the upstream. | | `code=32001` | Abnormal dump processing unit | If the error message contains `mydumper: argument list too long.`, configure the table to be exported by manually adding the `--regex` regular expression in the Mydumper argument `extra-args` in the `task.yaml` file according to the block-allow list. 
For example, to export all tables named `hello`, add `--regex '.*\\.hello$'`; to export all tables, add `--regex '.*'`. | | `code=38008` | An error occurs in the gRPC communication among DM components. | Check `class`. Find out the error occurs in the interaction of which components. Determine the type of communication error. If the error occurs when establishing gRPC connection, check whether the communication server is working normally. | @@ -179,9 +179,9 @@ For binlog replication processing units, manually recover migration using the fo ### `Access denied for user 'root'@'172.31.43.27' (using password: YES)` shows when you query the task or check the log -For database related passwords in all the DM configuration files, it is recommended to use the passwords encrypted by `dmctl`. If a database password is empty, it is unnecessary to encrypt it. For how to encrypt the plaintext password, see [Encrypt the database password using dmctl](manage-source.md#encrypt-the-database-password). +For database related passwords in all the DM configuration files, it is recommended to use the passwords encrypted by `dmctl`. If a database password is empty, it is unnecessary to encrypt it. For how to encrypt the plaintext password, see [Encrypt the database password using dmctl](dm-manage-source.md#encrypt-the-database-password). -In addition, the user of the upstream and downstream databases must have the corresponding read and write privileges. Data Migration also [prechecks the corresponding privileges automatically](precheck.md) while starting the data migration task. +In addition, the user of the upstream and downstream databases must have the corresponding read and write privileges. Data Migration also [prechecks the corresponding privileges automatically](dm-precheck.md) while starting the data migration task. ### The `load` processing unit reports the error `packet for query is too large. 
Try adjusting the 'max_allowed_packet' variable` diff --git a/en/export-import-config.md b/en/dm-export-import-config.md similarity index 97% rename from en/export-import-config.md rename to en/dm-export-import-config.md index 17aa83607..0aecc7b80 100644 --- a/en/export-import-config.md +++ b/en/dm-export-import-config.md @@ -1,6 +1,7 @@ --- title: Export and Import Data Sources and Task Configuration of Clusters summary: Learn how to export and import data sources and task configuration of clusters when you use DM. +aliases: ['/tidb-data-migration/dev/export-import-config/'] --- # Export and Import Data Sources and Task Configuration of Clusters diff --git a/en/faq.md b/en/dm-faq.md similarity index 98% rename from en/faq.md rename to en/dm-faq.md index 40f5d8f26..0250a7031 100644 --- a/en/faq.md +++ b/en/dm-faq.md @@ -1,7 +1,7 @@ --- title: TiDB Data Migration FAQ summary: Learn about frequently asked questions (FAQs) about TiDB Data Migration (DM). -aliases: ['/docs/tidb-data-migration/dev/faq/'] +aliases: ['/docs/tidb-data-migration/dev/faq/','/tidb-data-migration/dev/faq/'] --- # TiDB Data Migration FAQ @@ -188,7 +188,7 @@ Sometimes, the error message contains the `parse statement` information, for exa if the DDL is not needed, you can use a filter rule with \"*\" schema-pattern to ignore it.\n\t : parse statement: line 1 column 11 near \"EVENT `event_del_big_table` \r\nDISABLE\" %!!(MISSING)(EXTRA string=ALTER EVENT `event_del_big_table` \r\nDISABLE ```
+The reason for this type of error is that the TiDB parser cannot parse DDL statements sent by the upstream, such as `ALTER EVENT`, so `sql-skip` does not take effect as expected. You can add [binlog event filters](dm-key-features.md#binlog-event-filter) in the configuration file to filter those statements and set `schema-pattern: "*"`. Starting from DM v2.0.1, DM pre-filters statements related to `EVENT`. Since DM v2.0, `handle-error` replaces `sql-skip`. You can use `handle-error` instead to avoid this issue. diff --git a/en/dm-glossary.md b/en/dm-glossary.md index c6dfb7bd3..67a0066a4 100644 --- a/en/dm-glossary.md +++ b/en/dm-glossary.md @@ -20,7 +20,7 @@ Binlog events are information about data modification made to a MySQL or MariaDB ### Binlog event filter -[Binlog event filter](key-features.md#binlog-event-filter) is a more fine-grained filtering feature than the block and allow lists filtering rule. Refer to [binlog event filter](overview.md#binlog-event-filtering) for details. +[Binlog event filter](dm-key-features.md#binlog-event-filter) is a more fine-grained filtering feature than the block and allow lists filtering rule. Refer to [binlog event filter](dm-overview.md#binlog-event-filtering) for details. ### Binlog position @@ -32,7 +32,7 @@ Binlog replication processing unit is the processing unit used in DM-worker to r ### Block & allow table list -Block & allow table list is the feature that filters or only migrates all operations of some databases or some tables. Refer to [block & allow table lists](overview.md#block-and-allow-lists-migration-at-the-schema-and-table-levels) for details. This feature is similar to [MySQL Replication Filtering](https://dev.mysql.com/doc/refman/5.6/en/replication-rules.html) and [MariaDB Replication Filters](https://mariadb.com/kb/en/replication-filters/). +Block & allow table list is the feature that filters or only migrates all operations of some databases or some tables. 
Refer to [block & allow table lists](dm-overview.md#block-and-allow-lists-migration-at-the-schema-and-table-levels) for details. This feature is similar to [MySQL Replication Filtering](https://dev.mysql.com/doc/refman/5.6/en/replication-rules.html) and [MariaDB Replication Filters](https://mariadb.com/kb/en/replication-filters/). ## C @@ -120,13 +120,13 @@ The subtask is a part of a data migration task that is running on each DM-worker ### Subtask status -The subtask status is the status of a data migration subtask. The current status options include `New`, `Running`, `Paused`, `Stopped`, and `Finished`. Refer to [subtask status](query-status.md#subtask-status) for more details about the status of a data migration task or subtask. +The subtask status is the status of a data migration subtask. The current status options include `New`, `Running`, `Paused`, `Stopped`, and `Finished`. Refer to [subtask status](dm-query-status.md#subtask-status) for more details about the status of a data migration task or subtask. ## T ### Table routing -The table routing feature enables DM to migrate a certain table of the upstream MySQL or MariaDB instance to the specified table in the downstream, which can be used to merge and migrate sharded tables. Refer to [table routing](key-features.md#table-routing) for details. +The table routing feature enables DM to migrate a certain table of the upstream MySQL or MariaDB instance to the specified table in the downstream, which can be used to merge and migrate sharded tables. Refer to [table routing](dm-key-features.md#table-routing) for details. ### Task @@ -134,4 +134,4 @@ The data migration task, which is started after you successfully execute a `star ### Task status -The task status refers to the status of a data migration task. The task status depends on the statuses of all its subtasks. Refer to [subtask status](query-status.md#subtask-status) for details. +The task status refers to the status of a data migration task. 
The task status depends on the statuses of all its subtasks. Refer to [subtask status](dm-query-status.md#subtask-status) for details. diff --git a/en/handle-alerts.md b/en/dm-handle-alerts.md similarity index 84% rename from en/handle-alerts.md rename to en/dm-handle-alerts.md index 2fca8ca7f..02cda1c59 100644 --- a/en/handle-alerts.md +++ b/en/dm-handle-alerts.md @@ -1,189 +1,195 @@ ---- -title: Handle Alerts -summary: Understand how to deal with the alert information in DM. ---- - -# Handle Alerts - -This document introduces how to deal with the alert information in DM. - -## Alerts related to high availability - -### `DM_master_all_down` - -- Description: - - If all DM-master nodes are offline, this alert is triggered. - -- Solution: - - You can take the following steps to handle the alert: - - 1. Check the environment of the cluster. - 2. Check the logs of all DM-master nodes for troubleshooting. - -### `DM_worker_offline` - -- Description: - - If a DM-worker node is offline for more than one hour, this alert is triggered. In a high-availability architecture, this alert might not directly interrupt the task but increases the risk of interruption. - -- Solution: - - You can take the following steps to handle the alert: - - 1. View the working status of the corresponding DM-worker node. - 2. Check whether the node is connected. - 3. Troubleshoot errors through logs. - -### `DM_DDL_error` - -- Description: - - This error occurs when DM is processing the sharding DDL operations. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -### `DM_pending_DDL` - -- Description: - - If a sharding DDL operation is pending for more than one hour, this alert is triggered. - -- Solution: - - In some scenarios, the pending sharding DDL operation might be what users expect. Otherwise, refer to [Handle Sharding DDL Locks Manually in DM](manually-handling-sharding-ddl-locks.md) for solution. 
- -## Alert rules related to task status - -### `DM_task_state` - -- Description: - - When a sub-task of DM-worker is in the `Paused` state for over 20 minutes, an alert is triggered. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -## Alert rules related to relay log - -### `DM_relay_process_exits_with_error` - -- Description: - - When the relay log processing unit encounters an error, this unit moves to `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -### `DM_remain_storage_of_relay_log` - -- Description: - - When the free space of the disk where the relay log is located is less than 10G, an alert is triggered. - -- Solutions: - - You can take the following methods to handle the alert: - - - Delete unwanted data manually to increase free disk space. - - Reconfigure the [automatic data purge strategy of the relay log](relay-log.md#automatic-data-purge) or [purge data manually](relay-log.md#manual-data-purge). - - Execute the command `pause-relay` to pause the relay log pulling process. After there is enough free disk space, resume the process by running the command `resume-relay`. Note that you must not purge upstream binlog files that have not been pulled after the relay log pulling process is paused. - -### `DM_relay_log_data_corruption` - -- Description: - - When the relay log processing unit validates the binlog event read from the upstream and detects abnormal checksum information, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -### `DM_fail_to_read_binlog_from_master` - -- Description: - - If an error occurs when the relay log processing unit tries to read the binlog event from the upstream, this unit moves to the `Paused` state, and an alert is triggered immediately. 
- -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -### `DM_fail_to_write_relay_log` - -- Description: - - If an error occurs when the relay log processing unit tries to write the binlog event into the relay log file, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -### `DM_binlog_file_gap_between_master_relay` - -- Description: - - When the number of the binlog files in the current upstream MySQL/MariaDB exceeds that of the latest binlog files pulled by the relay log processing unit by **more than** 1 for 10 minutes, and an alert is triggered. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -## Alert rules related to Dump/Load - -### `DM_dump_process_exists_with_error` - -- Description: - - When the Dump processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -### `DM_load_process_exists_with_error` - -- Description: - - When the Load processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -## Alert rules related to binlog replication - -### `DM_sync_process_exists_with_error` - -- Description: - - When the binlog replication processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -### `DM_binlog_file_gap_between_master_syncer` - -- Description: - - When the number of the binlog files in the current upstream MySQL/MariaDB exceeds that of the latest binlog files processed by the relay log processing unit by **more than** 1 for 10 minutes, an alert is triggered. 
- -- Solution: - - Refer to [Handle Performance Issues](handle-performance-issues.md). - -### `DM_binlog_file_gap_between_relay_syncer` - -- Description: - - When the number of the binlog files in the current relay log processing unit exceeds that of the latest binlog files processed by the binlog replication processing unit by **more than** 1 for 10 minutes, an alert is triggered. - -- Solution: - - Refer to [Handle Performance Issues](handle-performance-issues.md). +--- +title: Handle Alerts +summary: Understand how to deal with the alert information in DM. +aliases: ['/tidb-data-migration/dev/handle-alerts/'] +--- + +# Handle Alerts + +This document introduces how to deal with the alert information in DM. + +## Alerts related to high availability + +### `DM_master_all_down` + +- Description: + + If all DM-master nodes are offline, this alert is triggered. + +- Solution: + + You can take the following steps to handle the alert: + + 1. Check the environment of the cluster. + 2. Check the logs of all DM-master nodes for troubleshooting. + +### `DM_worker_offline` + +- Description: + + If a DM-worker node is offline for more than one hour, this alert is triggered. In a high-availability architecture, this alert might not directly interrupt the task but increases the risk of interruption. + +- Solution: + + You can take the following steps to handle the alert: + + 1. View the working status of the corresponding DM-worker node. + 2. Check whether the node is connected. + 3. Troubleshoot errors through logs. + +### `DM_DDL_error` + +- Description: + + This error occurs when DM is processing the sharding DDL operations. + +- Solution: + + Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). + +### `DM_pending_DDL` + +- Description: + + If a sharding DDL operation is pending for more than one hour, this alert is triggered. + +- Solution: + + In some scenarios, the pending sharding DDL operation might be what users expect.
Otherwise, refer to [Handle Sharding DDL Locks Manually in DM](manually-handling-sharding-ddl-locks.md) for a solution.
+
+## Alert rules related to task status
+
+### `DM_task_state`
+
+- Description:
+
+    When a sub-task of DM-worker is in the `Paused` state for over 20 minutes, an alert is triggered.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting).
+
+## Alert rules related to relay log
+
+### `DM_relay_process_exits_with_error`
+
+- Description:
+
+    When the relay log processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting).
+
+### `DM_remain_storage_of_relay_log`
+
+- Description:
+
+    When the free space of the disk where the relay log is located is less than 10 GB, an alert is triggered.
+
+- Solutions:
+
+    You can use the following methods to handle the alert:
+
+    - Delete unwanted data manually to increase free disk space.
+    - Reconfigure the [automatic data purge strategy of the relay log](relay-log.md#automatic-data-purge) or [purge data manually](relay-log.md#manual-data-purge).
+    - Execute the command `pause-relay` to pause the relay log pulling process. After there is enough free disk space, resume the process by running the command `resume-relay`. Note that you must not purge upstream binlog files that have not been pulled after the relay log pulling process is paused.
+
+### `DM_relay_log_data_corruption`
+
+- Description:
+
+    When the relay log processing unit validates the binlog event read from the upstream and detects abnormal checksum information, this unit moves to the `Paused` state, and an alert is triggered immediately.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting).
+
+### `DM_fail_to_read_binlog_from_master`
+
+- Description:
+
+    If an error occurs when the relay log processing unit tries to read the binlog event from the upstream, this unit moves to the `Paused` state, and an alert is triggered immediately.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting).
+
+### `DM_fail_to_write_relay_log`
+
+- Description:
+
+    If an error occurs when the relay log processing unit tries to write the binlog event into the relay log file, this unit moves to the `Paused` state, and an alert is triggered immediately.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting).
+
+### `DM_binlog_file_gap_between_master_relay`
+
+- Description:
+
+    When the number of the binlog files in the current upstream MySQL/MariaDB exceeds that of the latest binlog files pulled by the relay log processing unit by **more than** 1 for 10 minutes, an alert is triggered.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting).
+
+## Alert rules related to Dump/Load
+
+### `DM_dump_process_exists_with_error`
+
+- Description:
+
+    When the Dump processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting).
+
+### `DM_load_process_exists_with_error`
+
+- Description:
+
+    When the Load processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting).
+
+## Alert rules related to binlog replication
+
+### `DM_sync_process_exists_with_error`
+
+- Description:
+
+    When the binlog replication processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting).
+
+### `DM_binlog_file_gap_between_master_syncer`
+
+- Description:
+
+    When the number of the binlog files in the current upstream MySQL/MariaDB exceeds that of the latest binlog files processed by the relay log processing unit by **more than** 1 for 10 minutes, an alert is triggered.
+
+- Solution:
+
+    Refer to [Handle Performance Issues](dm-handle-performance-issues.md).
+
+### `DM_binlog_file_gap_between_relay_syncer`
+
+- Description:
+
+    When the number of the binlog files in the current relay log processing unit exceeds that of the latest binlog files processed by the binlog replication processing unit by **more than** 1 for 10 minutes, an alert is triggered.
+
+- Solution:
+
+    Refer to [Handle Performance Issues](dm-handle-performance-issues.md).
diff --git a/en/handle-performance-issues.md b/en/dm-handle-performance-issues.md
similarity index 97%
rename from en/handle-performance-issues.md
rename to en/dm-handle-performance-issues.md
index 6f47bb314..c6b188deb 100644
--- a/en/handle-performance-issues.md
+++ b/en/dm-handle-performance-issues.md
@@ -1,6 +1,7 @@
 ---
 title: Handle Performance Issues
 summary: Learn about common performance issues that might exist in DM and how to deal with them.
+aliases: ['/tidb-data-migration/dev/handle-performance-issues.md/']
 ---
 
 # Handle Performance Issues
@@ -72,7 +73,7 @@ The Binlog replication unit decides whether to read the binlog event from the up
 
 ### binlog event conversion
 
-The Binlog replication unit constructs DML, parses DDL, and performs [table router](key-features.md#table-routing) conversion from binlog event data. The related metric is `transform binlog event duration`.
+The Binlog replication unit constructs DML, parses DDL, and performs [table router](dm-key-features.md#table-routing) conversion from binlog event data. 
The related metric is `transform binlog event duration`. The duration is mainly affected by the write operations upstream. Take the `INSERT INTO` statement as an example, the time consumed to convert a single `VALUES` greatly differs from that to convert a lot of `VALUES`. The time consumed might range from tens of microseconds to hundreds of microseconds. However, usually this is not a bottleneck of the system. diff --git a/en/dm-hardware-and-software-requirements.md b/en/dm-hardware-and-software-requirements.md index d039e50a4..74b0b407a 100644 --- a/en/dm-hardware-and-software-requirements.md +++ b/en/dm-hardware-and-software-requirements.md @@ -46,4 +46,4 @@ DM can be deployed and run on a 64-bit generic hardware server platform (Intel x > **Note:** > > - In the production environment, it is not recommended to deploy and run DM-master and DM-worker on the same server, because when DM-worker writes data to disks, it might interfere with the use of disks by DM-master's high availability component. -> - If a performance issue occurs, you are recommended to modify the task configuration file according to the [Optimize Configuration of DM](tune-configuration.md) document. If the performance is not effectively optimized by tuning the configuration file, you can try to upgrade the hardware of your server. +> - If a performance issue occurs, you are recommended to modify the task configuration file according to the [Optimize Configuration of DM](dm-tune-configuration.md) document. If the performance is not effectively optimized by tuning the configuration file, you can try to upgrade the hardware of your server. diff --git a/en/key-features.md b/en/dm-key-features.md similarity index 98% rename from en/key-features.md rename to en/dm-key-features.md index 225b36315..f87454da8 100644 --- a/en/key-features.md +++ b/en/dm-key-features.md @@ -1,7 +1,7 @@ --- title: Key Features summary: Learn about the key features of DM and appropriate parameter configurations. 
-aliases: ['/docs/tidb-data-migration/dev/feature-overview/','/tidb-data-migration/dev/feature-overview']
+aliases: ['/docs/tidb-data-migration/dev/feature-overview/','/tidb-data-migration/dev/feature-overview','/tidb-data-migration/dev/key-features.md/']
 ---
 
 # Key Features
@@ -236,7 +236,7 @@ Binlog event filter is a more fine-grained filtering rule than the block and all
 > **Note:**
 >
 > - If the same table matches multiple rules, these rules are applied in order and the block list has priority over the allow list. This means if both the `Ignore` and `Do` rules are applied to a table, the `Ignore` rule takes effect.
-> - Starting from DM v2.0.2, you can configure binlog event filters in the source configuration file. For details, see [Upstream Database Configuration File](source-configuration-file.md).
+> - Starting from DM v2.0.2, you can configure binlog event filters in the source configuration file. For details, see [Upstream Database Configuration File](dm-source-configuration-file.md).
 
 ### Parameter configuration
 
@@ -376,7 +376,7 @@ In the MySQL ecosystem, tools such as gh-ost and pt-osc are widely used. DM prov
 
 ### Restrictions
 
 - DM only supports gh-ost and pt-osc.
-- When `online-ddl` is enabled, the checkpoint corresponding to incremental replication should not be in the process of online DDL execution. For example, if an upstream online DDL operation starts at `position-A` and ends at `position-B` of the binlog, the starting point of incremental replication should be earlier than `position-A` or later than `position-B`; otherwise, an error occurs. For details, refer to [FAQ](faq.md#how-to-handle-the-error-returned-by-the-ddl-operation-related-to-the-gh-ost-table-after-online-ddl-scheme-gh-ost-is-set).
+- When `online-ddl` is enabled, the checkpoint corresponding to incremental replication should not be in the process of online DDL execution. 
For example, if an upstream online DDL operation starts at `position-A` and ends at `position-B` of the binlog, the starting point of incremental replication should be earlier than `position-A` or later than `position-B`; otherwise, an error occurs. For details, refer to [FAQ](dm-faq.md#how-to-handle-the-error-returned-by-the-ddl-operation-related-to-the-gh-ost-table-after-online-ddl-scheme-gh-ost-is-set).
 
 ### Parameter configuration
 
diff --git a/en/manage-schema.md b/en/dm-manage-schema.md
similarity index 99%
rename from en/manage-schema.md
rename to en/dm-manage-schema.md
index a81f753ff..e7bdaa58d 100644
--- a/en/manage-schema.md
+++ b/en/dm-manage-schema.md
@@ -1,6 +1,7 @@
 ---
 title: Manage Table Schemas of Tables to be Migrated
 summary: Learn how to manage the schema of the table to be migrated in DM.
+aliases: ['/tidb-data-migration/dev/manage-schema.md/']
 ---
 
 # Manage Table Schemas of Tables to be Migrated
diff --git a/en/manage-source.md b/en/dm-manage-source.md
similarity index 96%
rename from en/manage-source.md
rename to en/dm-manage-source.md
index 96013e867..ac96de908 100644
--- a/en/manage-source.md
+++ b/en/dm-manage-source.md
@@ -1,6 +1,7 @@
 ---
 title: Manage Data Source Configurations
 summary: Learn how to manage upstream MySQL instances in TiDB Data Migration.
+aliases: ['/tidb-data-migration/dev/manage-source.md/']
 ---
 
 # Manage Data Source Configurations
@@ -69,7 +70,7 @@ Use the following `operate-source` command to create a source configuration file
 operate-source create ./source.yaml
 ```
 
-For the configuration of `source.yaml`, refer to [Upstream Database Configuration File Introduction](source-configuration-file.md).
+For the configuration of `source.yaml`, refer to [Upstream Database Configuration File Introduction](dm-source-configuration-file.md).
 
 The following is an example of the returned result:
 
@@ -168,7 +169,7 @@ Global Flags:
   -s, --source strings   MySQL Source ID. 
 ```
 
-Before transferring, DM checks whether the worker to be unbound still has running tasks. If the worker has any running tasks, you need to [pause the tasks](pause-task.md) first, change the binding, and then [resume the tasks](resume-task.md).
+Before transferring, DM checks whether the worker to be unbound still has running tasks. If the worker has any running tasks, you need to [pause the tasks](dm-pause-task.md) first, change the binding, and then [resume the tasks](dm-resume-task.md).
 
 ### Usage example
diff --git a/en/open-api.md b/en/dm-open-api.md
similarity index 99%
rename from en/open-api.md
rename to en/dm-open-api.md
index 237f56d50..7fc95f51a 100644
--- a/en/open-api.md
+++ b/en/dm-open-api.md
@@ -1,6 +1,7 @@
 ---
 title: Maintain DM Clusters Using OpenAPI
 summary: Learn about how to use OpenAPI interface to manage the cluster status and data replication.
+aliases: ['/tidb-data-migration/dev/open-api.md/']
 ---
 
 # Maintain DM Clusters Using OpenAPI
diff --git a/en/overview.md b/en/dm-overview.md
similarity index 80%
rename from en/overview.md
rename to en/dm-overview.md
index 3a10277a5..b454ce579 100644
--- a/en/overview.md
+++ b/en/dm-overview.md
@@ -1,7 +1,7 @@
 ---
 title: Data Migration Overview
 summary: Learn about the Data Migration tool, the architecture, the key components, and features.
-aliases: ['/docs/tidb-data-migration/dev/overview/']
+aliases: ['/docs/tidb-data-migration/dev/overview/','/tidb-data-migration/dev/overview.md/']
 ---
 
@@ -29,15 +29,15 @@ This section describes the basic data migration features provided by DM.
 
 ### Block and allow lists migration at the schema and table levels
 
-The [block and allow lists filtering rule](key-features.md#block-and-allow-table-lists) is similar to the `replication-rules-db`/`replication-rules-table` feature of MySQL, which can be used to filter or replicate all operations of some databases only or some tables only. 
+The [block and allow lists filtering rule](dm-key-features.md#block-and-allow-table-lists) is similar to the `replication-rules-db`/`replication-rules-table` feature of MySQL, which can be used to filter or replicate all operations of some databases only or some tables only. ### Binlog event filtering -The [binlog event filtering](key-features.md#binlog-event-filter) feature means that DM can filter certain types of SQL statements from certain tables in the source database. For example, you can filter all `INSERT` statements in the table `test`.`sbtest` or filter all `TRUNCATE TABLE` statements in the schema `test`. +The [binlog event filtering](dm-key-features.md#binlog-event-filter) feature means that DM can filter certain types of SQL statements from certain tables in the source database. For example, you can filter all `INSERT` statements in the table `test`.`sbtest` or filter all `TRUNCATE TABLE` statements in the schema `test`. ### Schema and table routing -The [schema and table routing](key-features.md#table-routing) feature means that DM can migrate a certain table of the source database to the specified table in the downstream. For example, you can migrate the table structure and data from the table `test`.`sbtest1` in the source database to the table `test`.`sbtest2` in TiDB. This is also a core feature for merging and migrating sharded databases and tables. +The [schema and table routing](dm-key-features.md#table-routing) feature means that DM can migrate a certain table of the source database to the specified table in the downstream. For example, you can migrate the table structure and data from the table `test`.`sbtest1` in the source database to the table `test`.`sbtest2` in TiDB. This is also a core feature for merging and migrating sharded databases and tables. 
 ## Advanced features
 
@@ -47,7 +47,7 @@ DM supports merging and migrating the original sharded instances and tables from
 
 ### Optimization for third-party online-schema-change tools in the migration process
 
-In the MySQL ecosystem, tools such as gh-ost and pt-osc are widely used. DM provides support for these tools to avoid migrating unnecessary intermediate data. For details, see [Online DDL Tools](key-features.md#online-ddl-tools)
+In the MySQL ecosystem, tools such as gh-ost and pt-osc are widely used. DM provides support for these tools to avoid migrating unnecessary intermediate data. For details, see [Online DDL Tools](dm-key-features.md#online-ddl-tools).
 
 ### Filter certain row changes using SQL expressions
 
@@ -77,7 +77,7 @@ Before using the DM tool, note the following restrictions:
 
 - Currently, TiDB is not compatible with all the DDL statements that MySQL supports. Because DM uses the TiDB parser to process DDL statements, it only supports the DDL syntax supported by the TiDB parser. For details, see [MySQL Compatibility](https://pingcap.com/docs/stable/reference/mysql-compatibility/#ddl).
 
-    - DM reports an error when it encounters an incompatible DDL statement. To solve this error, you need to manually handle it using dmctl, either skipping this DDL statement or replacing it with a specified DDL statement(s). For details, see [Skip or replace abnormal SQL statements](faq.md#how-to-handle-incompatible-ddl-statements).
+    - DM reports an error when it encounters an incompatible DDL statement. To solve this error, you need to manually handle it using dmctl, either skipping this DDL statement or replacing it with specified DDL statements. For details, see [Skip or replace abnormal SQL statements](dm-faq.md#how-to-handle-incompatible-ddl-statements). 
 + Sharding merge with conflicts
 
diff --git a/en/pause-task.md b/en/dm-pause-task.md
similarity index 97%
rename from en/pause-task.md
rename to en/dm-pause-task.md
index 60db3485b..0aeb04537 100644
--- a/en/pause-task.md
+++ b/en/dm-pause-task.md
@@ -1,6 +1,7 @@
 ---
 title: Pause a Data Migration Task
 summary: Learn how to pause a data migration task in TiDB Data Migration.
+aliases: ['/tidb-data-migration/dev/pause-task.md/']
 ---
 
 # Pause a Data Migration Task
diff --git a/en/performance-test.md b/en/dm-performance-test.md
similarity index 94%
rename from en/performance-test.md
rename to en/dm-performance-test.md
index 898d16549..58cf3c13f 100644
--- a/en/performance-test.md
+++ b/en/dm-performance-test.md
@@ -1,6 +1,7 @@
 ---
 title: DM Cluster Performance Test
 summary: Learn how to test the performance of DM clusters.
+aliases: ['/tidb-data-migration/dev/performance-test.md/']
 ---
 
 # DM Cluster Performance Test
@@ -50,7 +51,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330
 
 #### Create a data migration task
 
-1. Create an upstream MySQL source and set `source-id` to `source-1`. For details, see [Load the Data Source Configurations](manage-source.md#operate-data-source).
+1. Create an upstream MySQL source and set `source-id` to `source-1`. For details, see [Load the Data Source Configurations](dm-manage-source.md#operate-data-source).
 
 2. Create a migration task (in `full` mode). The following is a task configuration template:
 
@@ -84,7 +85,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330
        threads: 32
 ```
 
-For details about how to create a migration task, see [Create a Data Migration Task](create-task.md).
+For details about how to create a migration task, see [Create a Data Migration Task](dm-create-task.md).
 
 > **Note:**
 >
@@ -109,7 +110,7 @@ Use `sysbench` to create test tables in the upstream.
 
 #### Create a data migration task
 
-1. Create the source of the upstream MySQL. 
Set `source-id` to `source-1` (if the source has been created in the [full import benchmark case](#full-import-benchmark-case), you do not need to create it again). For details, see [Load the Data Source Configurations](manage-source.md#operate-data-source).
+1. Create the source of the upstream MySQL. Set `source-id` to `source-1` (if the source has been created in the [full import benchmark case](#full-import-benchmark-case), you do not need to create it again). For details, see [Load the Data Source Configurations](dm-manage-source.md#operate-data-source).
 
 2. Create a DM migration task (in `all` mode). The following is an example of the task configuration file:
 
@@ -142,7 +143,7 @@ Use `sysbench` to create test tables in the upstream.
        batch: 100
 ```
 
-For details about how to create a data migration task, see [Create a Data Migration Task](create-task.md).
+For details about how to create a data migration task, see [Create a Data Migration Task](dm-create-task.md).
 
 > **Note:**
 >
diff --git a/en/precheck.md b/en/dm-precheck.md
similarity index 97%
rename from en/precheck.md
rename to en/dm-precheck.md
index 91d0702f8..62f67dca9 100644
--- a/en/precheck.md
+++ b/en/dm-precheck.md
@@ -1,7 +1,7 @@
 ---
 title: Precheck the Upstream MySQL Instance Configurations
 summary: Learn how to use the precheck feature provided by DM to detect errors in the upstream MySQL instance configurations.
-aliases: ['/docs/tidb-data-migration/dev/precheck/']
+aliases: ['/docs/tidb-data-migration/dev/precheck/','/tidb-data-migration/dev/precheck.md/']
 ---
 
 # Precheck the Upstream MySQL Instance Configurations
diff --git a/en/query-status.md b/en/dm-query-status.md
similarity index 99%
rename from en/query-status.md
rename to en/dm-query-status.md
index 80bf47a54..54e33c4f8 100644
--- a/en/query-status.md
+++ b/en/dm-query-status.md
@@ -1,7 +1,7 @@
 ---
 title: Query Status
 summary: Learn how to query the status of a data replication task. 
-aliases: ['/docs/tidb-data-migration/dev/query-status/','/tidb-data-migration/dev/query-error/']
+aliases: ['/docs/tidb-data-migration/dev/query-status/','/tidb-data-migration/dev/query-error/','/tidb-data-migration/dev/query-status.md/']
 ---
 
 # Query Status
diff --git a/en/resume-task.md b/en/dm-resume-task.md
similarity index 96%
rename from en/resume-task.md
rename to en/dm-resume-task.md
index abf58434d..e5e0ebd57 100644
--- a/en/resume-task.md
+++ b/en/dm-resume-task.md
@@ -1,6 +1,7 @@
 ---
 title: Resume a Data Migration Task
 summary: Learn how to resume a data migration task.
+aliases: ['/tidb-data-migration/dev/resume-task.md/']
 ---
 
 # Resume a Data Migration Task
diff --git a/en/source-configuration-file.md b/en/dm-source-configuration-file.md
similarity index 97%
rename from en/source-configuration-file.md
rename to en/dm-source-configuration-file.md
index 426184931..a987e7810 100644
--- a/en/source-configuration-file.md
+++ b/en/dm-source-configuration-file.md
@@ -1,7 +1,7 @@
 ---
 title: Upstream Database Configuration File
 summary: Learn the configuration file of the upstream database
-aliases: ['/docs/tidb-data-migration/dev/source-configuration-file/']
+aliases: ['/docs/tidb-data-migration/dev/source-configuration-file/','/tidb-data-migration/dev/source-configuration-file.md/']
 ---
 
 # Upstream Database Configuration File
@@ -111,4 +111,4 @@ Starting from DM v2.0.2, you can configure binlog event filters in the source co
 | Parameter | Description |
 | :------------ | :--------------------------------------- |
 | `case-sensitive` | Determines whether the filtering rules are case-sensitive. The default value is `false`. |
-| `filters` | Sets binlog event filtering rules. For details, see [Binlog event filter parameter explanation](key-features.md#parameter-explanation-2). |
+| `filters` | Sets binlog event filtering rules. For details, see [Binlog event filter parameter explanation](dm-key-features.md#parameter-explanation-2). 
|
diff --git a/en/stop-task.md b/en/dm-stop-task.md
similarity index 92%
rename from en/stop-task.md
rename to en/dm-stop-task.md
index 540c007fe..470a7802a 100644
--- a/en/stop-task.md
+++ b/en/dm-stop-task.md
@@ -1,11 +1,12 @@
 ---
 title: Stop a Data Migration Task
 summary: Learn how to stop a data migration task.
+aliases: ['/tidb-data-migration/dev/stop-task.md/']
 ---
 
 # Stop a Data Migration Task
 
-You can use the `stop-task` command to stop a data migration task. For differences between `stop-task` and `pause-task`, refer to [Pause a Data Migration Task](pause-task.md).
+You can use the `stop-task` command to stop a data migration task. For differences between `stop-task` and `pause-task`, refer to [Pause a Data Migration Task](dm-pause-task.md).
 
 {{< copyable "" >}}
 
diff --git a/en/task-configuration-guide.md b/en/dm-task-configuration-guide.md
similarity index 97%
rename from en/task-configuration-guide.md
rename to en/dm-task-configuration-guide.md
index 9ce393bfd..8e31c8605 100644
--- a/en/task-configuration-guide.md
+++ b/en/dm-task-configuration-guide.md
@@ -1,6 +1,7 @@
 ---
 title: Data Migration Task Configuration Guide
 summary: Learn how to configure a data migration task in Data Migration (DM).
+aliases: ['/tidb-data-migration/dev/task-configuration-guide.md/']
 ---
 
 # Data Migration Task Configuration Guide
@@ -11,9 +12,9 @@ This document introduces how to configure a data migration task in Data Migratio
 
 Before configuring the data sources to be migrated for the task, you need to first make sure that DM has loaded the configuration files of the corresponding data sources. The following are some operation references:
 
-- To view the data source, you can refer to [Check the data source configuration](manage-source.md#check-data-source-configurations).
+- To view the data source, you can refer to [Check the data source configuration](dm-manage-source.md#check-data-source-configurations). 
- To create a data source, you can refer to [Create data source](migrate-data-using-dm.md#step-3-create-data-source). -- To generate a data source configuration file, you can refer to [Source configuration file introduction](source-configuration-file.md). +- To generate a data source configuration file, you can refer to [Source configuration file introduction](dm-source-configuration-file.md). The following example of `mysql-instances` shows how to configure data sources that need to be migrated for the data migration task: @@ -78,7 +79,7 @@ To configure the block and allow list of data source tables for the data migrati tbl-name: "log" ``` - For detailed configuration rules, see [Block and allow table lists](key-features.md#block-and-allow-table-lists). + For detailed configuration rules, see [Block and allow table lists](dm-key-features.md#block-and-allow-table-lists). 2. Reference the block and allow list rules in the data source configuration to filter tables to be migrated. @@ -113,7 +114,7 @@ To configure the filters of binlog events for the data migration task, perform t action: Do ``` - For detailed configuration rules, see [Binlog event filter](key-features.md#binlog-event-filter). + For detailed configuration rules, see [Binlog event filter](dm-key-features.md#binlog-event-filter). 2. Reference the binlog event filtering rules in the data source configuration to filter specified binlog events of specified tables or schemas in the data source. @@ -151,7 +152,7 @@ To configure the routing mapping rules for migrating data source tables to speci target-schema: "test" ``` - For detailed configuration rules, see [Table Routing](key-features.md#table-routing). + For detailed configuration rules, see [Table Routing](dm-key-features.md#table-routing). 2. Reference the routing mapping rules in the data source configuration to filter tables to be migrated. @@ -186,7 +187,7 @@ shard-mode: "pessimistic" # The shard merge mode. 
Optional modes are ""/"p
 
 ## Other configurations
 
-The following is an overall task configuration example of this document. The complete task configuration template can be found in [DM task configuration file full introduction](task-configuration-file-full.md). For the usage and configuration of other configuration items, refer to [Features of Data Migration](key-features.md).
+The following is an overall task configuration example of this document. The complete task configuration template can be found in [DM task configuration file full introduction](task-configuration-file-full.md). For the usage and configuration of other configuration items, refer to [Features of Data Migration](dm-key-features.md).
 
 ```yaml
 ---
diff --git a/en/tune-configuration.md b/en/dm-tune-configuration.md
similarity index 98%
rename from en/tune-configuration.md
rename to en/dm-tune-configuration.md
index 09b1bd2d1..57e3a0191 100644
--- a/en/tune-configuration.md
+++ b/en/dm-tune-configuration.md
@@ -1,6 +1,7 @@
 ---
 title: Optimize Configuration of DM
 summary: Learn how to optimize the configuration of the data migration task to improve the performance of data migration.
+aliases: ['/tidb-data-migration/dev/tune-configuration.md/']
 ---
 
 # Optimize Configuration of DM
diff --git a/en/feature-expression-filter.md b/en/feature-expression-filter.md
index d91b071bf..8003d1c5e 100644
--- a/en/feature-expression-filter.md
+++ b/en/feature-expression-filter.md
@@ -6,7 +6,7 @@ title: Filter Certain Row Changes Using SQL Expressions
 
 ## Overview
 
-In the process of data migration, DM provides the [Binlog Event Filter](key-features.md#binlog-event-filter) feature to filter certain types of binlog events. For example, for archiving or auditing purposes, `DELETE` event might be filtered when data is migrated to the downstream. However, Binlog Event Filter cannot judge with a greater granularity whether the `DELETE` event of a certain row should be filtered. 
+In the process of data migration, DM provides the [Binlog Event Filter](dm-key-features.md#binlog-event-filter) feature to filter certain types of binlog events. For example, for archiving or auditing purposes, `DELETE` event might be filtered when data is migrated to the downstream. However, Binlog Event Filter cannot judge with a greater granularity whether the `DELETE` event of a certain row should be filtered. To solve the above issue, DM supports filtering certain row changes using SQL expressions. The binlog in the `ROW` format supported by DM has the values of all columns in binlog events. You can configure SQL expressions according to these values. If the SQL expressions evaluate a row change as `TRUE`, DM will not migrate the row change downstream. @@ -16,7 +16,7 @@ To solve the above issue, DM supports filtering certain row changes using SQL ex ## Configuration example -Similar to [Binlog Event Filter](key-features.md#binlog-event-filter), you also need to configure the expression-filter feature in the configuration file of the data migration task, as shown below. For complete configuration and its descriptions, refer to [DM Advanced Task Configuration File](task-configuration-file-full.md#task-configuration-file-template-advanced): +Similar to [Binlog Event Filter](dm-key-features.md#binlog-event-filter), you also need to configure the expression-filter feature in the configuration file of the data migration task, as shown below. 
For complete configuration and its descriptions, refer to [DM Advanced Task Configuration File](task-configuration-file-full.md#task-configuration-file-template-advanced): ```yml name: test diff --git a/en/feature-shard-merge-pessimistic.md b/en/feature-shard-merge-pessimistic.md index 97393de62..8d0f6a6b3 100644 --- a/en/feature-shard-merge-pessimistic.md +++ b/en/feature-shard-merge-pessimistic.md @@ -25,7 +25,7 @@ DM has the following sharding DDL usage restrictions in the pessimistic mode: - A single `RENAME TABLE` statement can only involve a single `RENAME` operation. - The sharding group migration task requires each DDL statement to involve operations on only one table. - The table schema of each sharded table must be the same at the starting point of the incremental replication task, so as to make sure the DML statements of different sharded tables can be migrated into the downstream with a definite table schema, and the subsequent sharding DDL statements can be correctly matched and migrated. -- If you need to change the [table routing](key-features.md#table-routing) rule, you have to wait for the migration of all sharding DDL statements to complete. +- If you need to change the [table routing](dm-key-features.md#table-routing) rule, you have to wait for the migration of all sharding DDL statements to complete. - During the migration of sharding DDL statements, an error is reported if you use `dmctl` to change `router-rules`. - If you need to `CREATE` a new table to a sharding group where DDL statements are being executed, you have to make sure that the table schema is the same as the newly modified table schema. - For example, both the original `table_1` and `table_2` have two columns (a, b) initially, and have three columns (a, b, c) after the sharding DDL operation, so after the migration the newly created table should also have three columns (a, b, c). 
@@ -75,7 +75,7 @@ The characteristics of DM handling the sharding DDL migration among multiple DM- - After receiving the DDL statement from the binlog event, each DM-worker sends the DDL information to `DM-master`. - `DM-master` creates or updates the DDL lock based on the DDL information received from each DM-worker and the sharding group information. - If all members of the sharding group receive a same specific DDL statement, this indicates that all DML statements before the DDL execution on the upstream sharded tables have been completely migrated, and this DDL statement can be executed. Then DM can continue to migrate the subsequent DML statements. -- After being converted by the [table router](key-features.md#table-routing), the DDL statement of the upstream sharded tables must be consistent with the DDL statement to be executed in the downstream. Therefore, this DDL statement only needs to be executed once by the DDL owner and all other DM-workers can ignore this DDL statement. +- After being converted by the [table router](dm-key-features.md#table-routing), the DDL statement of the upstream sharded tables must be consistent with the DDL statement to be executed in the downstream. Therefore, this DDL statement only needs to be executed once by the DDL owner and all other DM-workers can ignore this DDL statement. In the above example, only one sharded table needs to be merged in the upstream MySQL instance corresponding to each DM-worker. But in actual scenarios, there might be multiple sharded tables in multiple sharded schemas to be merged in one MySQL instance. And when this happens, it becomes more complex to coordinate the sharding DDL migration. 
diff --git a/en/handle-failed-ddl-statements.md b/en/handle-failed-ddl-statements.md index 7eecbba54..ff6b31ceb 100644 --- a/en/handle-failed-ddl-statements.md +++ b/en/handle-failed-ddl-statements.md @@ -29,7 +29,7 @@ When you use dmctl to manually handle the failed DDL statements, the commonly us ### query-status -The `query-status` command is used to query the current status of items such as the subtask and the relay unit in each MySQL instance. For details, see [query status](query-status.md). +The `query-status` command is used to query the current status of items such as the subtask and the relay unit in each MySQL instance. For details, see [query status](dm-query-status.md). ### handle-error diff --git a/en/maintain-dm-using-tiup.md b/en/maintain-dm-using-tiup.md index ff3ecd310..bb1d94291 100644 --- a/en/maintain-dm-using-tiup.md +++ b/en/maintain-dm-using-tiup.md @@ -179,7 +179,7 @@ For example, to scale out a DM-worker node in the `prod-cluster` cluster, take t > **Note:** > -> Since v2.0.5, dmctl support [Export and Import Data Sources and Task Configuration of Clusters](export-import-config.md)。 +> Since v2.0.5, dmctl supports [Export and Import Data Sources and Task Configuration of Clusters](dm-export-import-config.md). > > Before upgrading, you can use `config export` to export the configuration files of clusters. After upgrading, if you need to downgrade to an earlier version, you can first redeploy the earlier cluster and then use `config import` to import the previous configuration files.
> diff --git a/en/manually-upgrade-dm-1.0-to-2.0.md b/en/manually-upgrade-dm-1.0-to-2.0.md index a2f989ffd..cc636f9c7 100644 --- a/en/manually-upgrade-dm-1.0-to-2.0.md +++ b/en/manually-upgrade-dm-1.0-to-2.0.md @@ -25,7 +25,7 @@ The prepared configuration files of v2.0+ include the configuration files of the ### Upstream database configuration file -In v2.0+, the [upstream database configuration file](source-configuration-file.md) is separated from the process configuration of the DM-worker, so you need to obtain the source configuration based on the [v1.0.x DM-worker configuration](https://docs.pingcap.com/tidb-data-migration/stable/dm-worker-configuration-file). +In v2.0+, the [upstream database configuration file](dm-source-configuration-file.md) is separated from the process configuration of the DM-worker, so you need to obtain the source configuration based on the [v1.0.x DM-worker configuration](https://docs.pingcap.com/tidb-data-migration/stable/dm-worker-configuration-file). > **Note:** > @@ -98,7 +98,7 @@ from: ### Data migration task configuration file -For [data migration task configuration guide](task-configuration-guide.md), v2.0+ is basically compatible with v1.0.x. You can directly copy the configuration of v1.0.x. +For [data migration task configuration guide](dm-task-configuration-guide.md), v2.0+ is basically compatible with v1.0.x. You can directly copy the configuration of v1.0.x. ## Step 2: Deploy the v2.0+ cluster @@ -116,7 +116,7 @@ If the original v1.0.x cluster is deployed by binary, you can stop the DM-worker ## Step 4: Upgrade data migration task -1. Use the [`operate-source`](manage-source.md#operate-data-source) command to load the upstream database source configuration from [step 1](#step-1-prepare-v20-configuration-file) into the v2.0+ cluster. +1. 
Use the [`operate-source`](dm-manage-source.md#operate-data-source) command to load the upstream database source configuration from [step 1](#step-1-prepare-v20-configuration-file) into the v2.0+ cluster. 2. In the downstream TiDB cluster, obtain the corresponding global checkpoint information from the incremental checkpoint table of the v1.0.x data migration task. @@ -158,8 +158,8 @@ If the original v1.0.x cluster is deployed by binary, you can stop the DM-worker > > If `enable-gtid` is enabled in the source configuration, currently you need to parse the binlog or relay log file to obtain the GTID sets corresponding to the binlog position, and set it to `binlog-gtid` in the `meta`. -4. Use the [`start-task`](create-task.md) command to start the upgraded data migration task through the v2.0+ data migration task configuration file. +4. Use the [`start-task`](dm-create-task.md) command to start the upgraded data migration task through the v2.0+ data migration task configuration file. -5. Use the [`query-status`](query-status.md) command to confirm whether the data migration task is running normally. +5. Use the [`query-status`](dm-query-status.md) command to confirm whether the data migration task is running normally. If the data migration task runs normally, it indicates that the DM upgrade to v2.0+ is successful. diff --git a/en/migrate-data-using-dm.md b/en/migrate-data-using-dm.md index 1fe38c618..5a98a3d83 100644 --- a/en/migrate-data-using-dm.md +++ b/en/migrate-data-using-dm.md @@ -14,7 +14,7 @@ It is recommended to [deploy the DM cluster using TiUP](deploy-a-dm-cluster-usin > **Note:** > -> - For database passwords in all the DM configuration files, it is recommended to use the passwords encrypted by `dmctl`. If a database password is empty, it is unnecessary to encrypt it. See [Encrypt the database password using dmctl](manage-source.md#encrypt-the-database-password). 
+> - For database passwords in all the DM configuration files, it is recommended to use the passwords encrypted by `dmctl`. If a database password is empty, it is unnecessary to encrypt it. See [Encrypt the database password using dmctl](dm-manage-source.md#encrypt-the-database-password). > - The user of the upstream and downstream databases must have the corresponding read and write privileges. ## Step 2: Check the cluster information @@ -37,7 +37,7 @@ After the DM cluster is deployed using TiUP, the configuration information is li | Upstream MySQL-2 | 172.16.10.82 | 3306 | root | VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU= | | Downstream TiDB | 172.16.10.83 | 4000 | root | | -The list of privileges needed on the MySQL host can be found in the [precheck](precheck.md) documentation. +The list of privileges needed on the MySQL host can be found in the [precheck](dm-precheck.md) documentation. ## Step 3: Create data source @@ -124,7 +124,7 @@ To detect possible errors of data migration configuration in advance, DM provide - DM automatically checks the corresponding privileges and configuration while starting the data migration task. - You can also use the `check-task` command to manually precheck whether the upstream MySQL instance configuration satisfies the DM requirements. -For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](precheck.md). +For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](dm-precheck.md). > **Note:** > diff --git a/en/migrate-from-mysql-aurora.md b/en/migrate-from-mysql-aurora.md index c87752cfa..7268bee07 100644 --- a/en/migrate-from-mysql-aurora.md +++ b/en/migrate-from-mysql-aurora.md @@ -68,7 +68,7 @@ If GTID is enabled in Aurora, you can migrate data based on GTID. For how to ena > **Note:** > > + GTID-based data migration requires MySQL 5.7 (Aurora 2.04) version or later. 
-> + In addition to the Aurora-specific configuration above, the upstream database must meet other requirements for migrating from MySQL, such as table schemas, character sets, and privileges. See [Checking Items](precheck.md#checking-items) for details. +> + In addition to the Aurora-specific configuration above, the upstream database must meet other requirements for migrating from MySQL, such as table schemas, character sets, and privileges. See [Checking Items](dm-precheck.md#checking-items) for details. ## Step 2: Deploy the DM cluster @@ -122,7 +122,7 @@ The number of `master`s and `worker`s in the returned result is consistent with > **Note:** > -> The configuration file used by DM supports database passwords in plaintext or ciphertext. It is recommended to use password encrypted using dmctl. To obtain the ciphertext password, see [Encrypt the database password using dmctl](manage-source.md#encrypt-the-database-password). +> The configuration file used by DM supports database passwords in plaintext or ciphertext. It is recommended to use a password encrypted using dmctl. To obtain the ciphertext password, see [Encrypt the database password using dmctl](dm-manage-source.md#encrypt-the-database-password). Save the following configuration files of data source according to the example, in which the value of `source-id` will be used in the task configuration in [step 4](#step-4-configure-the-task). diff --git a/en/quick-create-migration-task.md b/en/quick-create-migration-task.md index 138950815..1267bd05e 100644 --- a/en/quick-create-migration-task.md +++ b/en/quick-create-migration-task.md @@ -17,7 +17,7 @@ This document introduces how to configure a data migration task in different sce In addition to scenario-based documents, you can also refer to the following ones: - For a complete example of data migration task configuration, refer to [DM Advanced Task Configuration File](task-configuration-file-full.md).
-- For a data migration task configuration guide, refer to [Data Migration Task Configuration Guide](task-configuration-guide.md). +- For a data migration task configuration guide, refer to [Data Migration Task Configuration Guide](dm-task-configuration-guide.md). ## Migrate Data from Multiple Data Sources to TiDB diff --git a/en/quick-start-create-source.md b/en/quick-start-create-source.md index 4bf148015..51fc1c95e 100644 --- a/en/quick-start-create-source.md +++ b/en/quick-start-create-source.md @@ -11,7 +11,7 @@ summary: Learn how to create a data source for Data Migration (DM). The document describes how to create a data source for the data migration task of TiDB Data Migration (DM). -A data source contains the information for accessing the upstream migration task. Because a data migration task requires referring its corresponding data source to obtain the configuration information of access, you need to create the data source of a task before creating a data migration task. For specific data source management commands, refer to [Manage Data Source Configurations](manage-source.md). +A data source contains the information for accessing the upstream migration task. Because a data migration task requires referring its corresponding data source to obtain the configuration information of access, you need to create the data source of a task before creating a data migration task. For specific data source management commands, refer to [Manage Data Source Configurations](dm-manage-source.md). ## Step 1: Configure the data source @@ -57,7 +57,7 @@ You can use the following command to create a data source: tiup dmctl --master-addr operate-source create ./source-mysql-01.yaml ``` -For other configuration parameters, refer to [Upstream Database Configuration File](source-configuration-file.md). +For other configuration parameters, refer to [Upstream Database Configuration File](dm-source-configuration-file.md). 
The returned results are as follows: diff --git a/en/relay-log.md b/en/relay-log.md index 395050aaf..267542104 100644 --- a/en/relay-log.md +++ b/en/relay-log.md @@ -90,7 +90,7 @@ The starting position of the relay log migration is determined by the following > **Note:** > -> Since DM v2.0.2, the configuration item `enable-relay` in the source configuration file is no longer valid. If DM finds that `enable-relay` is set to `true` when [loading the data source configuration](manage-source.md#operate-data-source), it outputs the following message: +> Since DM v2.0.2, the configuration item `enable-relay` in the source configuration file is no longer valid. If DM finds that `enable-relay` is set to `true` when [loading the data source configuration](dm-manage-source.md#operate-data-source), it outputs the following message: > > ``` > Please use `start-relay` to specify which workers should pull relay log of relay-enabled sources. @@ -132,7 +132,7 @@ In the command `start-relay`, you can configure one or more DM-workers to migrat In DM versions earlier than v2.0.2 (not including v2.0.2), DM checks the configuration item `enable-relay` in the source configuration file when binding a DM-worker to an upstream data source. If `enable-relay` is set to `true`, DM enables the relay log feature for the data source. -See [Upstream Database Configuration File](source-configuration-file.md) for how to set the configuration item `enable-relay`. +See [Upstream Database Configuration File](dm-source-configuration-file.md) for how to set the configuration item `enable-relay`. 
diff --git a/en/task-configuration-file-full.md b/en/task-configuration-file-full.md index 5f1919071..00cc225cf 100644 --- a/en/task-configuration-file-full.md +++ b/en/task-configuration-file-full.md @@ -7,11 +7,11 @@ aliases: ['/docs/tidb-data-migration/dev/task-configuration-file-full/','/docs/t This document introduces the advanced task configuration file of Data Migration (DM), including [global configuration](#global-configuration) and [instance configuration](#instance-configuration). -For the feature and configuration of each configuration item, see [Data migration features](overview.md#basic-features). +For the feature and configuration of each configuration item, see [Data migration features](dm-overview.md#basic-features). ## Important concepts -For description of important concepts including `source-id` and the DM-worker ID, see [Important concepts](config-overview.md#important-concepts). +For description of important concepts including `source-id` and the DM-worker ID, see [Important concepts](dm-config-overview.md#important-concepts). ## Task configuration file template (advanced) @@ -178,9 +178,9 @@ Arguments in each feature configuration set are explained in the comments in the | Parameter | Description | | :------------ | :--------------------------------------- | -| `routes` | The routing mapping rule set between the upstream and downstream tables. If the names of the upstream and downstream schemas and tables are the same, this item does not need to be configured. See [Table Routing](key-features.md#table-routing) for usage scenarios and sample configurations. | -| `filters` | The binlog event filter rule set of the matched table of the upstream database instance. If binlog filtering is not required, this item does not need to be configured. See [Binlog Event Filter](key-features.md#binlog-event-filter) for usage scenarios and sample configurations. 
| -| `block-allow-list` | The filter rule set of the block allow list of the matched table of the upstream database instance. It is recommended to specify the schemas and tables that need to be migrated through this item, otherwise all schemas and tables are migrated. See [Binlog Event Filter](key-features.md#binlog-event-filter) and [Block & Allow Lists](key-features.md#block-and-allow-table-lists) for usage scenarios and sample configurations. | +| `routes` | The routing mapping rule set between the upstream and downstream tables. If the names of the upstream and downstream schemas and tables are the same, this item does not need to be configured. See [Table Routing](dm-key-features.md#table-routing) for usage scenarios and sample configurations. | +| `filters` | The binlog event filter rule set of the matched table of the upstream database instance. If binlog filtering is not required, this item does not need to be configured. See [Binlog Event Filter](dm-key-features.md#binlog-event-filter) for usage scenarios and sample configurations. | +| `block-allow-list` | The filter rule set of the block allow list of the matched table of the upstream database instance. It is recommended to specify the schemas and tables that need to be migrated through this item, otherwise all schemas and tables are migrated. See [Binlog Event Filter](dm-key-features.md#binlog-event-filter) and [Block & Allow Lists](dm-key-features.md#block-and-allow-table-lists) for usage scenarios and sample configurations. | | `mydumpers` | Configuration arguments of dump processing unit. If the default configuration is sufficient for your needs, this item does not need to be configured. Or you can configure `thread` only using `mydumper-thread`. | | `loaders` | Configuration arguments of load processing unit. If the default configuration is sufficient for your needs, this item does not need to be configured. Or you can configure `pool-size` only using `loader-thread`. 
| | `syncers` | Configuration arguments of sync processing unit. If the default configuration is sufficient for your needs, this item does not need to be configured. Or you can configure `worker-count` only using `syncer-thread`. | diff --git a/en/task-configuration-file.md b/en/task-configuration-file.md index ca10b1178..1ef591c75 100644 --- a/en/task-configuration-file.md +++ b/en/task-configuration-file.md @@ -10,11 +10,11 @@ This document introduces the basic task configuration file of Data Migration (DM DM also implements [an advanced task configuration file](task-configuration-file-full.md) which provides greater flexibility and more control over DM. -For the feature and configuration of each configuration item, see [Data migration features](key-features.md). +For the feature and configuration of each configuration item, see [Data migration features](dm-key-features.md). ## Important concepts -For description of important concepts including `source-id` and the DM-worker ID, see [Important concepts](config-overview.md#important-concepts). +For description of important concepts including `source-id` and the DM-worker ID, see [Important concepts](dm-config-overview.md#important-concepts). ## Task configuration file template (basic) @@ -80,7 +80,7 @@ Refer to the comments in the [template](#task-configuration-file-template-basic) ### Feature configuration set -For basic applications, you only need to modify the block and allow lists filtering rule. Refer to the comments about `block-allow-list` in the [template](#task-configuration-file-template-basic) or [Block & allow table lists](key-features.md#block-and-allow-table-lists) to see more details. +For basic applications, you only need to modify the block and allow lists filtering rule. Refer to the comments about `block-allow-list` in the [template](#task-configuration-file-template-basic) or [Block & allow table lists](dm-key-features.md#block-and-allow-table-lists) to see more details. 
## Instance configuration diff --git a/en/usage-scenario-downstream-more-columns.md b/en/usage-scenario-downstream-more-columns.md index 9bb5f7ee5..fae473e87 100644 --- a/en/usage-scenario-downstream-more-columns.md +++ b/en/usage-scenario-downstream-more-columns.md @@ -48,7 +48,7 @@ Otherwise, after creating the task, the following data migration errors occur wh The reason for the above errors is that when DM migrates the binlog event, if DM has not maintained internally the table schema corresponding to that table, DM tries to use the current table schema in the downstream to parse the binlog event and generate the corresponding DML statement. If the number of columns in the binlog event is inconsistent with the number of columns in the downstream table schema, the above error might occur. -In such cases, you can execute the [`operate-schema`](manage-schema.md) command to specify for the table a table schema that matches the binlog event. If you are migrating sharded tables, you need to configure the table schema in DM for parsing MySQL binlog for each sharded tables according to the following steps: +In such cases, you can execute the [`operate-schema`](dm-manage-schema.md) command to specify for the table a table schema that matches the binlog event. If you are migrating sharded tables, you need to configure the table schema in DM for parsing MySQL binlog for each sharded table according to the following steps: 1. Specify the table schema for the table `log.messages` to be migrated in the data source. The table schema needs to correspond to the data of the binlog event to be replicated by DM. Then save the `CREATE TABLE` table schema statement in a file. For example, save the following table schema in the `log.messages.sql` file: @@ -60,7 +60,7 @@ In such cases, you can execute the [`operate-schema`](manage-schema.md) command ) ``` -2. Execute the [`operate-schema`](manage-schema.md) command to set the table schema.
At this time, the task should be in the `Paused` state because of the above error. +2. Execute the [`operate-schema`](dm-manage-schema.md) command to set the table schema. At this time, the task should be in the `Paused` state because of the above error. {{< copyable "shell-regular" >}} @@ -68,6 +68,6 @@ In such cases, you can execute the [`operate-schema`](manage-schema.md) command tiup dmctl --master-addr operate-schema set -s mysql-01 task-test -d log -t message log.message.sql ``` -3. Execute the [`resume-task`](resume-task.md) command to resume the `Paused` task. +3. Execute the [`resume-task`](dm-resume-task.md) command to resume the `Paused` task. -4. Execute the [`query-status`](query-status.md) command to check whether the data migration task is running normally. +4. Execute the [`query-status`](dm-query-status.md) command to check whether the data migration task is running normally. diff --git a/en/usage-scenario-shard-merge.md b/en/usage-scenario-shard-merge.md index cf1151594..a50999462 100644 --- a/en/usage-scenario-shard-merge.md +++ b/en/usage-scenario-shard-merge.md @@ -81,7 +81,7 @@ In the above structure, `sid` is the shard key, which can ensure that the same ` ## Migration solution -- To satisfy the migration requirements #1, you do not need to configure the [table routing rule](key-features.md#table-routing). You need to manually create a table based on the requirements in the section [Remove the `PRIMARY KEY` attribute from the column](shard-merge-best-practices.md#remove-the-primary-key-attribute-from-the-column): +- To satisfy the migration requirements #1, you do not need to configure the [table routing rule](dm-key-features.md#table-routing). 
You need to manually create a table based on the requirements in the section [Remove the `PRIMARY KEY` attribute from the column](shard-merge-best-practices.md#remove-the-primary-key-attribute-from-the-column): {{< copyable "sql" >}} @@ -104,7 +104,7 @@ In the above structure, `sid` is the shard key, which can ensure that the same ` ignore-checking-items: ["auto_increment_ID"] ``` -- To satisfy the migration requirement #2, configure the [table routing rule](key-features.md#table-routing) as follows: +- To satisfy the migration requirement #2, configure the [table routing rule](dm-key-features.md#table-routing) as follows: {{< copyable "" >}} @@ -121,7 +121,7 @@ In the above structure, `sid` is the shard key, which can ensure that the same ` target-table: "sale" ``` -- To satisfy the migration requirements #3, configure the [Block and allow table lists](key-features.md#block-and-allow-table-lists) as follows: +- To satisfy the migration requirements #3, configure the [Block and allow table lists](dm-key-features.md#block-and-allow-table-lists) as follows: {{< copyable "" >}} @@ -134,7 +134,7 @@ In the above structure, `sid` is the shard key, which can ensure that the same ` tbl-name: "log_bak" ``` -- To satisfy the migration requirement #4, configure the [binlog event filter rule](key-features.md#binlog-event-filter) as follows: +- To satisfy the migration requirement #4, configure the [binlog event filter rule](dm-key-features.md#binlog-event-filter) as follows: {{< copyable "" >}} @@ -154,7 +154,7 @@ In the above structure, `sid` is the shard key, which can ensure that the same ` ## Migration task configuration -The complete configuration of the migration task is shown as follows. For more details, see [Data Migration Task Configuration Guide](task-configuration-guide.md). +The complete configuration of the migration task is shown as follows. For more details, see [Data Migration Task Configuration Guide](dm-task-configuration-guide.md). 
{{< copyable "" >}} diff --git a/en/usage-scenario-simple-migration.md b/en/usage-scenario-simple-migration.md index b183ded07..7772b050f 100644 --- a/en/usage-scenario-simple-migration.md +++ b/en/usage-scenario-simple-migration.md @@ -61,7 +61,7 @@ Assume that the schemas migrated to the downstream are as follows: ## Migration solution -- To satisfy migration Requirements #1-i, #1-ii and #1-iii, configure the [table routing rules](key-features.md#table-routing) as follows: +- To satisfy migration Requirements #1-i, #1-ii and #1-iii, configure the [table routing rules](dm-key-features.md#table-routing) as follows: ```yaml routes: @@ -77,7 +77,7 @@ Assume that the schemas migrated to the downstream are as follows: target-schema: "user_south" ``` -- To satisfy the migration Requirement #2-i, configure the [table routing rules](key-features.md#table-routing) as follows: +- To satisfy the migration Requirement #2-i, configure the [table routing rules](dm-key-features.md#table-routing) as follows: ```yaml routes: @@ -94,7 +94,7 @@ Assume that the schemas migrated to the downstream are as follows: target-table: "store_shenzhen" ``` -- To satisfy the migration Requirement #1-iv, configure the [binlog filtering rules](key-features.md#binlog-event-filter) as follows: +- To satisfy the migration Requirement #1-iv, configure the [binlog filtering rules](dm-key-features.md#binlog-event-filter) as follows: ```yaml filters: @@ -110,7 +110,7 @@ Assume that the schemas migrated to the downstream are as follows: action: Ignore ``` -- To satisfy the migration Requirement #2-ii, configure the [binlog filtering rule](key-features.md#binlog-event-filter) as follows: +- To satisfy the migration Requirement #2-ii, configure the [binlog filtering rule](dm-key-features.md#binlog-event-filter) as follows: ```yaml filters: @@ -125,7 +125,7 @@ Assume that the schemas migrated to the downstream are as follows: > > `store-filter-rule` is different from `log-filter-rule & user-filter-rule`. 
`store-filter-rule` is a rule for the whole `store` schema, while `log-filter-rule` and `user-filter-rule` are rules for the `log` table in the `user` schema. -- To satisfy the migration Requirement #3, configure the [block and allow lists](key-features.md#block-and-allow-table-lists) as follows: +- To satisfy the migration Requirement #3, configure the [block and allow lists](dm-key-features.md#block-and-allow-table-lists) as follows: ```yaml block-allow-list: # Use black-white-list if the DM version is earlier than or equal to v2.0.0-beta.2. @@ -135,7 +135,7 @@ Assume that the schemas migrated to the downstream are as follows: ## Migration task configuration -The complete migration task configuration is shown below. For more details, see [data migration task configuration guide](task-configuration-guide.md). +The complete migration task configuration is shown below. For more details, see [data migration task configuration guide](dm-task-configuration-guide.md). ```yaml name: "one-tidb-secondary" diff --git a/zh/TOC.md b/zh/TOC.md index dd52a7c06..315ffcc25 100644 --- a/zh/TOC.md +++ b/zh/TOC.md @@ -2,12 +2,12 @@ - 关于 DM - - [DM 简介](overview.md) + - [DM 简介](dm-overview.md) - [DM 5.3 Release Notes](releases/5.3.0.md) - 基本功能 - - [Table routing](key-features.md#table-routing) - - [Block & Allow Lists](key-features.md#block--allow-table-lists) - - [Binlog Event Filter](key-features.md#binlog-event-filter) + - [Table routing](dm-key-features.md#table-routing) + - [Block & Allow Lists](dm-key-features.md#block--allow-table-lists) + - [Binlog Event Filter](dm-key-features.md#binlog-event-filter) - 高级功能 - 分库分表合并迁移 - [概述](feature-shard-merge.md) @@ -16,7 +16,7 @@ - [迁移使用 GH-ost/PT-osc 的源数据库](feature-online-ddl.md) - [使用 SQL 表达式过滤某些行变更](feature-expression-filter.md) - [DM 架构](dm-arch.md) - - [性能数据](benchmark-v5.3.0.md) + - [性能数据](dm-benchmark-v5.3.0.md) - 快速上手 - [快速上手试用](quick-start-with-dm.md) - [使用 TiUP 部署 DM 集群](deploy-a-dm-cluster-using-tiup.md) @@ -35,56 +35,56 @@ - 
[使用 Binary](deploy-a-dm-cluster-using-binary.md) - [使用 Kubernetes](https://docs.pingcap.com/zh/tidb-in-kubernetes/dev/deploy-tidb-dm) - [使用 DM 迁移数据](migrate-data-using-dm.md) - - [测试 DM 性能](performance-test.md) + - [测试 DM 性能](dm-performance-test.md) - 运维操作 - 集群运维工具 - [使用 TiUP 运维集群(推荐)](maintain-dm-using-tiup.md) - [使用 dmctl 运维集群](dmctl-introduction.md) - - [使用 OpenAPI 运维集群](open-api.md) + - [使用 OpenAPI 运维集群](dm-open-api.md) - 升级版本 - [1.0.x 到 2.0+ 手动升级](manually-upgrade-dm-1.0-to-2.0.md) - - [管理数据源](manage-source.md) + - [管理数据源](dm-manage-source.md) - 管理迁移任务 - - [任务配置向导](task-configuration-guide.md) - - [任务前置检查](precheck.md) - - [创建任务](create-task.md) - - [查询状态](query-status.md) - - [暂停任务](pause-task.md) - - [恢复任务](resume-task.md) - - [停止任务](stop-task.md) - - [导出和导入集群的数据源和任务配置](export-import-config.md) + - [任务配置向导](dm-task-configuration-guide.md) + - [任务前置检查](dm-precheck.md) + - [创建任务](dm-create-task.md) + - [查询状态](dm-query-status.md) + - [暂停任务](dm-pause-task.md) + - [恢复任务](dm-resume-task.md) + - [停止任务](dm-stop-task.md) + - [导出和导入集群的数据源和任务配置](dm-export-import-config.md) - [处理出错的 DDL 语句](handle-failed-ddl-statements.md) - [手动处理 Sharding DDL Lock](manually-handling-sharding-ddl-locks.md) - - [管理迁移表的表结构](manage-schema.md) - - [处理告警](handle-alerts.md) + - [管理迁移表的表结构](dm-manage-schema.md) + - [处理告警](dm-handle-alerts.md) - [日常巡检](dm-daily-check.md) - 使用场景 - [从 Aurora 迁移数据到 TiDB](migrate-from-mysql-aurora.md) - [TiDB 表结构存在更多列的迁移场景](usage-scenario-downstream-more-columns.md) - [变更同步的 MySQL 实例](usage-scenario-master-slave-switch.md) - 故障处理 - - [故障及处理方法](error-handling.md) - - [性能问题及处理方法](handle-performance-issues.md) + - [故障及处理方法](dm-error-handling.md) + - [性能问题及处理方法](dm-handle-performance-issues.md) - 性能调优 - - [配置调优](tune-configuration.md) + - [配置调优](dm-tune-configuration.md) - 参考指南 - 架构 - [DM 架构简介](dm-arch.md) - [DM-worker 简介](dm-worker-intro.md) - - [DM 命令行参数](command-line-flags.md) + - [DM 命令行参数](dm-command-line-flags.md) - 配置 - - [概述](config-overview.md) + - 
[概述](dm-config-overview.md) - [DM-master 配置](dm-master-configuration-file.md) - [DM-worker 配置](dm-worker-configuration-file.md) - - [上游数据库配置](source-configuration-file.md) - - [数据迁移任务配置向导](task-configuration-guide.md) + - [上游数据库配置](dm-source-configuration-file.md) + - [数据迁移任务配置向导](dm-task-configuration-guide.md) - 安全 - - [为 DM 的连接开启加密传输](enable-tls.md) + - [为 DM 的连接开启加密传输](dm-enable-tls.md) - [生成自签名证书](dm-generate-self-signed-certificates.md) - [监控指标](monitor-a-dm-cluster.md) - [告警信息](dm-alert-rules.md) - - [错误码](error-handling.md#常见故障处理方法) -- [常见问题](faq.md) + - [错误码](dm-error-handling.md#常见故障处理方法) +- [常见问题](dm-faq.md) - [术语表](dm-glossary.md) - 版本发布历史 - v5.3 diff --git a/zh/_index.md b/zh/_index.md index ad6399046..22feb83b7 100644 --- a/zh/_index.md +++ b/zh/_index.md @@ -17,8 +17,8 @@ aliases: ['/docs-cn/tidb-data-migration/dev/'] 关于 TiDB Data Migration -- [什么是 DM?](overview.md) -- [DM 架构](overview.md) +- [什么是 DM?](dm-overview.md) +- [DM 架构](dm-overview.md) - [性能数据](benchmark-v2.0-ga.md) @@ -41,7 +41,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/'] - [使用 TiUP 离线镜像部署集群](deploy-a-dm-cluster-using-tiup-offline.md) - [使用 Binary 部署集群](deploy-a-dm-cluster-using-binary.md) - [使用 DM 迁移数据](migrate-data-using-dm.md) -- [测试 DM 性能](performance-test.md) +- [测试 DM 性能](dm-performance-test.md) @@ -52,7 +52,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/'] - [使用 dmctl 运维集群](dmctl-introduction.md) - [升级版本](manually-upgrade-dm-1.0-to-2.0.md) - [手动处理 Sharding DDL Lock](manually-handling-sharding-ddl-locks.md) -- [处理告警](handle-alerts.md) +- [处理告警](dm-handle-alerts.md) - [日常巡检](dm-daily-check.md) @@ -69,11 +69,11 @@ aliases: ['/docs-cn/tidb-data-migration/dev/'] 参考指南 - [DM 架构](dm-arch.md) -- [DM 命令行参数](command-line-flags.md) -- [配置概述](config-overview.md) +- [DM 命令行参数](dm-command-line-flags.md) +- [配置概述](dm-config-overview.md) - [监控指标](monitor-a-dm-cluster.md) - [告警信息](dm-alert-rules.md) -- [错误码](error-handling.md#常见故障处理方法) +- [错误码](dm-error-handling.md#常见故障处理方法) diff --git 
a/zh/benchmark-v1.0-ga.md b/zh/benchmark-v1.0-ga.md index 0af824c8f..d6b6451bd 100644 --- a/zh/benchmark-v1.0-ga.md +++ b/zh/benchmark-v1.0-ga.md @@ -59,11 +59,11 @@ aliases: ['/docs-cn/tidb-data-migration/dev/benchmark-v1.0-ga/'] ## 测试场景 -可以参考[性能测试](performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.4.40) -> DM-worker -> TiDB (172.16.4.41)。 +可以参考[性能测试](dm-performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.4.40) -> DM-worker -> TiDB (172.16.4.41)。 ### 全量导入性能测试 -可以参考[全量导入性能测试用例](performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 +可以参考[全量导入性能测试用例](dm-performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 #### 全量导入性能测试结果 @@ -106,7 +106,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/benchmark-v1.0-ga/'] ### 增量复制性能测试用例 -使用[增量复制性能测试用例](performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 +使用[增量复制性能测试用例](dm-performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 #### 增量复制性能测试结果 diff --git a/zh/benchmark-v2.0-ga.md b/zh/benchmark-v2.0-ga.md index 32cd77c7c..1e88355e3 100644 --- a/zh/benchmark-v2.0-ga.md +++ b/zh/benchmark-v2.0-ga.md @@ -59,11 +59,11 @@ summary: 了解 DM 2.0-GA 版本的性能。 ## 测试场景 -可以参考[性能测试](performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.5.33) -> DM-worker(172.16.5.32) -> TiDB (172.16.5.34)。 +可以参考[性能测试](dm-performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.5.33) -> DM-worker(172.16.5.32) -> TiDB (172.16.5.34)。 ### 全量导入性能测试 -可以参考[全量导入性能测试用例](performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 +可以参考[全量导入性能测试用例](dm-performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 #### 全量导入性能测试结果 @@ -105,7 +105,7 @@ summary: 了解 DM 2.0-GA 版本的性能。 ### 增量复制性能测试用例 -使用[增量复制性能测试用例](performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 +使用[增量复制性能测试用例](dm-performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 #### 增量复制性能测试结果 diff --git a/zh/dm-alert-rules.md b/zh/dm-alert-rules.md index 02032d695..7e5c5d7ec 100644 --- a/zh/dm-alert-rules.md +++ b/zh/dm-alert-rules.md @@ -8,6 +8,6 @@ aliases: ['/docs-cn/tidb-data-migration/dev/alert-rules/','/zh/tidb-data-migrati 使用 TiUP 部署 DM 
集群的时候,会默认部署一套[告警系统](migrate-data-using-dm.md#第-8-步监控任务与查看日志)。 -DM 的告警规则及其对应的处理方法可参考[告警处理](handle-alerts.md)。 +DM 的告警规则及其对应的处理方法可参考[告警处理](dm-handle-alerts.md)。 DM 的告警信息与监控指标均基于 Prometheus,告警规则与监控指标的对应关系可参考 [DM 监控指标](monitor-a-dm-cluster.md)。 diff --git a/zh/benchmark-v5.3.0.md b/zh/dm-benchmark-v5.3.0.md similarity index 93% rename from zh/benchmark-v5.3.0.md rename to zh/dm-benchmark-v5.3.0.md index a460f3aa1..29bd5df42 100644 --- a/zh/benchmark-v5.3.0.md +++ b/zh/dm-benchmark-v5.3.0.md @@ -1,6 +1,7 @@ --- title: DM 5.3.0 性能测试报告 summary: 了解 DM 5.3.0 版本的性能。 +aliases: ['/zh/tidb-data-migration/dev/benchmark-v5.3.0.md/'] --- # DM 5.3.0 性能测试报告 @@ -53,11 +54,11 @@ summary: 了解 DM 5.3.0 版本的性能。 ## 测试场景 -可以参考[性能测试](performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.6.1) -> DM-worker(172.16.6.2) -> TiDB(load balance) (172.16.6.4)。 +可以参考[性能测试](dm-performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.6.1) -> DM-worker(172.16.6.2) -> TiDB(load balance) (172.16.6.4)。 ### 全量导入性能测试 -可以参考[全量导入性能测试用例](performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 +可以参考[全量导入性能测试用例](dm-performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 #### 全量导入性能测试结果 @@ -99,7 +100,7 @@ summary: 了解 DM 5.3.0 版本的性能。 ### 增量复制性能测试用例 -使用[增量复制性能测试用例](performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 +使用[增量复制性能测试用例](dm-performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 #### 增量复制性能测试结果 diff --git a/zh/command-line-flags.md b/zh/dm-command-line-flags.md similarity index 98% rename from zh/command-line-flags.md rename to zh/dm-command-line-flags.md index 215165239..3320bf564 100644 --- a/zh/command-line-flags.md +++ b/zh/dm-command-line-flags.md @@ -1,7 +1,7 @@ --- title: DM 命令行参数 summary: 介绍 DM 各组件的主要命令行参数。 -aliases: ['/docs-cn/tidb-data-migration/dev/command-line-flags/'] +aliases: ['/docs-cn/tidb-data-migration/dev/command-line-flags/','/zh/tidb-data-migration/dev/command-line-flags.md/'] --- # DM 命令行参数 diff --git a/zh/config-overview.md b/zh/dm-config-overview.md similarity index 74% rename from
zh/config-overview.md rename to zh/dm-config-overview.md index 7cdcd6191..b2d3c1d40 100644 --- a/zh/config-overview.md +++ b/zh/dm-config-overview.md @@ -1,6 +1,6 @@ --- title: DM 配置简介 -aliases: ['/docs-cn/tidb-data-migration/dev/config-overview/'] +aliases: ['/docs-cn/tidb-data-migration/dev/config-overview/','/zh/tidb-data-migration/dev/config-overview.md/'] --- # DM 配置简介 @@ -11,7 +11,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/config-overview/'] - `dm-master.toml`:DM-master 进程的配置文件,包括 DM-master 的拓扑信息、日志等各项配置。配置说明详见 [DM-master 配置文件介绍](dm-master-configuration-file.md)。 - `dm-worker.toml`:DM-worker 进程的配置文件,包括 DM-worker 的拓扑信息、日志等各项配置。配置说明详见 [DM-worker 配置文件介绍](dm-worker-configuration-file.md)。 -`source.yaml`:上游数据库 MySQL/MariaDB 相关配置。配置说明详见[上游数据库配置文件介绍](source-configuration-file.md)。 +`source.yaml`:上游数据库 MySQL/MariaDB 相关配置。配置说明详见[上游数据库配置文件介绍](dm-source-configuration-file.md)。 ## 迁移任务配置 @@ -19,9 +19,9 @@ aliases: ['/docs-cn/tidb-data-migration/dev/config-overview/'] 具体步骤如下: -1. [使用 dmctl 将数据源配置加载到 DM 集群](manage-source.md#数据源操作); -2. 参考[数据任务配置向导](task-configuration-guide.md)来创建 `your_task.yaml`; -3. [使用 dmctl 创建数据迁移任务](create-task.md)。 +1. [使用 dmctl 将数据源配置加载到 DM 集群](dm-manage-source.md#数据源操作); +2. 参考[数据任务配置向导](dm-task-configuration-guide.md)来创建 `your_task.yaml`; +3. 
[使用 dmctl 创建数据迁移任务](dm-create-task.md)。 ### 关键概念 diff --git a/zh/create-task.md b/zh/dm-create-task.md similarity index 91% rename from zh/create-task.md rename to zh/dm-create-task.md index eb60553de..2c1309aa9 100644 --- a/zh/create-task.md +++ b/zh/dm-create-task.md @@ -1,12 +1,12 @@ --- title: 创建数据迁移任务 summary: 了解 TiDB Data Migration 如何创建数据迁移任务。 -aliases: ['/docs-cn/tidb-data-migration/dev/create-task/'] +aliases: ['/docs-cn/tidb-data-migration/dev/create-task/','/zh/tidb-data-migration/dev/create-task.md/'] --- # 创建数据迁移任务 -`start-task` 命令用于创建数据迁移任务。当数据迁移任务启动时,DM 将[自动对相应权限和配置进行前置检查](precheck.md)。 +`start-task` 命令用于创建数据迁移任务。当数据迁移任务启动时,DM 将[自动对相应权限和配置进行前置检查](dm-precheck.md)。 {{< copyable "" >}} diff --git a/zh/dm-daily-check.md b/zh/dm-daily-check.md index b120f1321..f4db4bb6c 100644 --- a/zh/dm-daily-check.md +++ b/zh/dm-daily-check.md @@ -8,7 +8,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/daily-check/',‘/zh/tidb-data-migra 本文总结了 TiDB Data Migration (DM) 工具日常巡检的方法: -+ 方法一:执行 `query-status` 命令查看任务运行状态以及相关错误输出。详见[查询状态](query-status.md)。 ++ 方法一:执行 `query-status` 命令查看任务运行状态以及相关错误输出。详见[查询状态](dm-query-status.md)。 + 方法二:如果使用 TiUP 部署 DM 集群时正确部署了 Prometheus 与 Grafana,如 Grafana 的地址为 `172.16.10.71`,可在浏览器中打开 进入 Grafana,选择 DM 的 Dashboard 即可查看 DM 相关监控项。具体监控指标参照[监控与告警设置](monitor-a-dm-cluster.md)。 diff --git a/zh/enable-tls.md b/zh/dm-enable-tls.md similarity index 98% rename from zh/enable-tls.md rename to zh/dm-enable-tls.md index b29dfcd1e..46040a994 100644 --- a/zh/enable-tls.md +++ b/zh/dm-enable-tls.md @@ -1,6 +1,7 @@ --- title: 为 DM 的连接开启加密传输 summary: 了解如何为 DM 的连接开启加密传输。 +aliases: ['/zh/tidb-data-migration/dev/enable-tls.md/'] --- # 为 DM 的连接开启加密传输 diff --git a/zh/error-handling.md b/zh/dm-error-handling.md similarity index 97% rename from zh/error-handling.md rename to zh/dm-error-handling.md index 06393848c..4cb38a00c 100644 --- a/zh/error-handling.md +++ b/zh/dm-error-handling.md @@ -1,7 +1,7 @@ --- title: 故障及处理方法 summary: 了解 DM 的错误系统及常见故障的处理方法。 -aliases: 
['/docs-cn/tidb-data-migration/dev/error-handling/','/docs-cn/tidb-data-migration/dev/troubleshoot-dm/','/docs-cn/tidb-data-migration/dev/error-system/'] +aliases: ['/docs-cn/tidb-data-migration/dev/error-handling/','/docs-cn/tidb-data-migration/dev/troubleshoot-dm/','/docs-cn/tidb-data-migration/dev/error-system/','/zh/tidb-data-migration/dev/error-handling.md/'] --- # 故障及处理方法 @@ -88,7 +88,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/error-handling/','/docs-cn/tidb-data resume-task ${task name} ``` -但在某些情况下,你还需要重置数据迁移任务。有关何时需要重置以及如何重置,详见[重置数据迁移任务](faq.md#如何重置数据迁移任务)。 +但在某些情况下,你还需要重置数据迁移任务。有关何时需要重置以及如何重置,详见[重置数据迁移任务](dm-faq.md#如何重置数据迁移任务)。 ## 常见故障处理方法 @@ -99,8 +99,8 @@ aliases: ['/docs-cn/tidb-data-migration/dev/error-handling/','/docs-cn/tidb-data | `code=10003` | 数据库底层 `invalid connection` 错误,通常表示 DM 到下游 TiDB 的数据库连接出现了异常(如网络故障、TiDB 重启、TiKV busy 等)且当前请求已有部分数据发送到了 TiDB。 | DM 提供针对此类错误的自动恢复。如果未能正常恢复,需要用户进一步检查错误信息并根据具体场景进行分析。 | | `code=10005` | 数据库查询类语句出错 | | | `code=10006` | 数据库 `EXECUTE` 类型语句出错,包括 DDL 和 `INSERT`/`UPDATE`/`DELETE` 类型的 DML。更详细的错误信息可通过错误 message 获取。错误 message 中通常包含操作数据库所返回的错误码和错误信息。 | | -| `code=11006` | DM 内置的 parser 解析不兼容的 DDL 时出错 | 可参考 [Data Migration 故障诊断-处理不兼容的 DDL 语句](faq.md#如何处理不兼容的-ddl-语句) 提供的解决方案 | -| `code=20010` | 处理任务配置时,解密数据库的密码出错 | 检查任务配置中提供的下游数据库密码是否有[使用 dmctl 正确加密](manage-source.md#加密数据库密码) | +| `code=11006` | DM 内置的 parser 解析不兼容的 DDL 时出错 | 可参考 [Data Migration 故障诊断-处理不兼容的 DDL 语句](dm-faq.md#如何处理不兼容的-ddl-语句) 提供的解决方案 | +| `code=20010` | 处理任务配置时,解密数据库的密码出错 | 检查任务配置中提供的下游数据库密码是否有[使用 dmctl 正确加密](dm-manage-source.md#加密数据库密码) | | `code=26002` | 任务检查创建数据库连接失败。更详细的错误信息可通过错误 message 获取。错误 message 中包含操作数据库所返回的错误码和错误信息。 | 检查 DM-master 所在的机器是否有权限访问上游 | | `code=32001` | dump 处理单元异常 | 如果报错 `msg` 包含 `mydumper: argument list too long.`,则需要用户根据 block-allow-list,在 `task.yaml` 的 dump 处理单元的 `extra-args` 参数中手动加上 `--regex` 正则表达式设置要导出的库表。例如,如果要导出所有库中表名字为 `hello` 的表,可加上 `--regex '.*\\.hello$'`,如果要导出所有表,可加上 `--regex '.*'`。 | | `code=38008` | DM 组件间的 gRPC 通信出错 
| 检查 `class`, 定位错误发生在哪些组件的交互环节,根据错误 message 判断是哪类通信错误。如果是 gRPC 建立连接出错,可检查通信服务端是否运行正常。 | @@ -163,9 +163,9 @@ aliases: ['/docs-cn/tidb-data-migration/dev/error-handling/','/docs-cn/tidb-data ### 执行 `query-status` 或查看日志时出现 `Access denied for user 'root'@'172.31.43.27' (using password: YES)` -在所有 DM 配置文件中,数据库相关的密码都推荐使用经 dmctl 加密后的密文(若数据库密码为空,则无需加密)。有关如何使用 dmctl 加密明文密码,参见[使用 dmctl 加密数据库密码](manage-source.md#加密数据库密码)。 +在所有 DM 配置文件中,数据库相关的密码都推荐使用经 dmctl 加密后的密文(若数据库密码为空,则无需加密)。有关如何使用 dmctl 加密明文密码,参见[使用 dmctl 加密数据库密码](dm-manage-source.md#加密数据库密码)。 -此外,在 DM 运行过程中,上下游数据库的用户必须具备相应的读写权限。在启动迁移任务过程中,DM 会自动进行相应权限的前置检查,详见[上游 MySQL 实例配置前置检查](precheck.md)。 +此外,在 DM 运行过程中,上下游数据库的用户必须具备相应的读写权限。在启动迁移任务过程中,DM 会自动进行相应权限的前置检查,详见[上游 MySQL 实例配置前置检查](dm-precheck.md)。 ### load 处理单元报错 `packet for query is too large. Try adjusting the 'max_allowed_packet' variable` diff --git a/zh/export-import-config.md b/zh/dm-export-import-config.md similarity index 96% rename from zh/export-import-config.md rename to zh/dm-export-import-config.md index 3c1cf41ab..afe71afb6 100644 --- a/zh/export-import-config.md +++ b/zh/dm-export-import-config.md @@ -1,6 +1,7 @@ --- title: 导出和导入集群的数据源和任务配置 summary: 了解 TiDB Data Migration 导出和导入集群的数据源和任务配置。 +aliases: ['/zh/tidb-data-migration/dev/export-import-config.md/'] --- # 导出和导入集群的数据源和任务配置 diff --git a/zh/faq.md b/zh/dm-faq.md similarity index 98% rename from zh/faq.md rename to zh/dm-faq.md index 366bafab4..faca76aa9 100644 --- a/zh/faq.md +++ b/zh/dm-faq.md @@ -1,6 +1,6 @@ --- title: Data Migration 常见问题 -aliases: ['/docs-cn/tidb-data-migration/dev/faq/'] +aliases: ['/docs-cn/tidb-data-migration/dev/faq/','/zh/tidb-data-migration/dev/faq.md/'] --- # Data Migration 常见问题 @@ -176,7 +176,7 @@ curl -X POST -d "tidb_general_log=0" http://{TiDBIP}:10080/settings if the DDL is not needed, you can use a filter rule with \"*\" schema-pattern to ignore it.\n\t : parse statement: line 1 column 11 near \"EVENT `event_del_big_table` \r\nDISABLE\" %!!(MISSING)(EXTRA string=ALTER EVENT 
`event_del_big_table` \r\nDISABLE ``` -出现报错的原因是 TiDB parser 无法解析上游的 DDL,例如 `ALTER EVENT`,所以 `sql-skip` 不会按预期生效。可以在任务配置文件中添加 [Binlog 过滤规则](key-features.md#binlog-event-filter)进行过滤,并设置 `schema-pattern: "*"`。从 DM 2.0.1 版本开始,已预设过滤了 `EVENT` 相关语句。 +出现报错的原因是 TiDB parser 无法解析上游的 DDL,例如 `ALTER EVENT`,所以 `sql-skip` 不会按预期生效。可以在任务配置文件中添加 [Binlog 过滤规则](dm-key-features.md#binlog-event-filter)进行过滤,并设置 `schema-pattern: "*"`。从 DM 2.0.1 版本开始,已预设过滤了 `EVENT` 相关语句。 在 DM v2.0 版本之后 `sql-skip` 已经被 `handle-error` 替代,`handle-error` 可以跳过该类错误。 diff --git a/zh/dm-glossary.md b/zh/dm-glossary.md index 996c53c9b..e3a76e26e 100644 --- a/zh/dm-glossary.md +++ b/zh/dm-glossary.md @@ -20,7 +20,7 @@ MySQL/MariaDB 生成的 Binlog 文件中的数据变更信息,具体请参考 ### Binlog event filter -比 Block & allow table list 更加细粒度的过滤功能,具体可参考 [Binlog Event Filter](overview.md#binlog-event-filter)。 +比 Block & allow table list 更加细粒度的过滤功能,具体可参考 [Binlog Event Filter](dm-overview.md#binlog-event-filter)。 ### Binlog position @@ -32,7 +32,7 @@ DM-worker 内部用于读取上游 Binlog 或本地 Relay log 并迁移到下游 ### Block & allow table list -针对上游数据库实例表的黑白名单过滤功能,具体可参考 [Block & Allow Table Lists](overview.md#block--allow-lists)。该功能与 [MySQL Replication Filtering](https://dev.mysql.com/doc/refman/5.6/en/replication-rules.html) 及 [MariaDB Replication Filters](https://mariadb.com/kb/en/library/replication-filters/) 类似。 +针对上游数据库实例表的黑白名单过滤功能,具体可参考 [Block & Allow Table Lists](dm-overview.md#block--allow-lists)。该功能与 [MySQL Replication Filtering](https://dev.mysql.com/doc/refman/5.6/en/replication-rules.html) 及 [MariaDB Replication Filters](https://mariadb.com/kb/en/library/replication-filters/) 类似。 ## C @@ -122,13 +122,13 @@ DM-worker 内部用于从上游拉取 Binlog 并写入数据到 Relay log 的处 ### Subtask status -数据迁移子任务所处的状态,目前包括 `New`、`Running`、`Paused`、`Stopped` 及 `Finished` 5 种状态。有关数据迁移任务、子任务状态的更多信息可参考[任务状态](query-status.md#任务状态)。 +数据迁移子任务所处的状态,目前包括 `New`、`Running`、`Paused`、`Stopped` 及 `Finished` 5 种状态。有关数据迁移任务、子任务状态的更多信息可参考[任务状态](dm-query-status.md#任务状态)。 ## T ### Table routing -用于支持将上游 
MySQL/MariaDB 实例的某些表迁移到下游指定表的路由功能,可以用于分库分表的合并迁移,具体可参考 [Table routing](key-features.md#table-routing)。 +用于支持将上游 MySQL/MariaDB 实例的某些表迁移到下游指定表的路由功能,可以用于分库分表的合并迁移,具体可参考 [Table routing](dm-key-features.md#table-routing)。 ### Task @@ -136,4 +136,4 @@ DM-worker 内部用于从上游拉取 Binlog 并写入数据到 Relay log 的处 ### Task status -数据迁移子任务所处的状态,由 [Subtask status](#subtask-status) 整合而来,具体信息可查看[任务状态](query-status.md#任务状态)。 +数据迁移子任务所处的状态,由 [Subtask status](#subtask-status) 整合而来,具体信息可查看[任务状态](dm-query-status.md#任务状态)。 diff --git a/zh/handle-alerts.md b/zh/dm-handle-alerts.md similarity index 75% rename from zh/handle-alerts.md rename to zh/dm-handle-alerts.md index a78250a49..e8438999e 100644 --- a/zh/handle-alerts.md +++ b/zh/dm-handle-alerts.md @@ -1,7 +1,7 @@ --- title: 处理告警 summary: 了解 DM 中各主要告警信息的处理方法。 -aliases: ['/docs-cn/tidb-data-migration/dev/handle-alerts/'] +aliases: ['/docs-cn/tidb-data-migration/dev/handle-alerts/','/zh/tidb-data-migration/dev/handle-alerts.md/'] --- # 处理告警 @@ -20,7 +20,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/handle-alerts/'] ### `DM_DDL_error` -处理 shard DDL 时出现错误,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +处理 shard DDL 时出现错误,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ### `DM_pending_DDL` @@ -30,13 +30,13 @@ aliases: ['/docs-cn/tidb-data-migration/dev/handle-alerts/'] ### `DM_task_state` -当 DM-worker 内有子任务处于 `Paused` 状态超过 20 分钟时会触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 DM-worker 内有子任务处于 `Paused` 状态超过 20 分钟时会触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ## relay log 告警 ### `DM_relay_process_exits_with_error` -当 relay log 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 relay log 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ### `DM_remain_storage_of_relay_log` @@ -48,40 +48,40 @@ aliases: ['/docs-cn/tidb-data-migration/dev/handle-alerts/'] ### `DM_relay_log_data_corruption` -当 relay log 处理单元在校验从上游读取到的 binlog event 且发现 checksum 信息异常时会转为 
`Paused` 状态并立即触发告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 relay log 处理单元在校验从上游读取到的 binlog event 且发现 checksum 信息异常时会转为 `Paused` 状态并立即触发告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ### `DM_fail_to_read_binlog_from_master` -当 relay log 处理单元在尝试从上游读取 binlog event 发生错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 relay log 处理单元在尝试从上游读取 binlog event 发生错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ### `DM_fail_to_write_relay_log` -当 relay log 处理单元在尝试将 binlog event 写入 relay log 文件发生错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 relay log 处理单元在尝试将 binlog event 写入 relay log 文件发生错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ### `DM_binlog_file_gap_between_master_relay` -当 relay log 处理单元已拉取到的最新的 binlog 文件个数落后于当前上游 MySQL/MariaDB 超过 1 个(不含 1 个)且持续 10 分钟时会触发该告警,此时需要参考[性能问题及处理方法](handle-performance-issues.md)对 relay log 处理单元相关的性能问题进行排查与处理。 +当 relay log 处理单元已拉取到的最新的 binlog 文件个数落后于当前上游 MySQL/MariaDB 超过 1 个(不含 1 个)且持续 10 分钟时会触发该告警,此时需要参考[性能问题及处理方法](dm-handle-performance-issues.md)对 relay log 处理单元相关的性能问题进行排查与处理。 ## Dump/Load 告警 ### `DM_dump_process_exists_with_error` -当 Dump 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 Dump 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ### `DM_load_process_exists_with_error` -当 Load 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 Load 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ## Binlog replication 告警 ### `DM_sync_process_exists_with_error` -当 Binlog replication 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 Binlog replication 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ### `DM_binlog_file_gap_between_master_syncer` -当 Binlog replication 处理单元已处理到的最新的 binlog 
文件个数落后于当前上游 MySQL/MariaDB 超过 1 个(不含 1 个)且持续 10 分钟时 DM 会触发该告警,此时需要参考[性能问题及处理方法](handle-performance-issues.md)对 Binlog replication 处理单元相关的性能问题进行排查与处理。 +当 Binlog replication 处理单元已处理到的最新的 binlog 文件个数落后于当前上游 MySQL/MariaDB 超过 1 个(不含 1 个)且持续 10 分钟时 DM 会触发该告警,此时需要参考[性能问题及处理方法](dm-handle-performance-issues.md)对 Binlog replication 处理单元相关的性能问题进行排查与处理。 ### `DM_binlog_file_gap_between_relay_syncer` -当 Binlog replication 处理单元已处理到的最新的 binlog 文件个数落后于当前 relay log 处理单元超过 1 个(不含 1 个)且持续 10 分钟时 DM 会触发该告警,此时需要参考[性能问题及处理方法](handle-performance-issues.md)对 Binlog replication 处理单元相关的性能问题进行排查与处理。 +当 Binlog replication 处理单元已处理到的最新的 binlog 文件个数落后于当前 relay log 处理单元超过 1 个(不含 1 个)且持续 10 分钟时 DM 会触发该告警,此时需要参考[性能问题及处理方法](dm-handle-performance-issues.md)对 Binlog replication 处理单元相关的性能问题进行排查与处理。 diff --git a/zh/handle-performance-issues.md b/zh/dm-handle-performance-issues.md similarity index 97% rename from zh/handle-performance-issues.md rename to zh/dm-handle-performance-issues.md index 30cfb86f5..e5c8f25e5 100644 --- a/zh/handle-performance-issues.md +++ b/zh/dm-handle-performance-issues.md @@ -1,7 +1,7 @@ --- title: 性能问题及处理方法 summary: 了解 DM 可能存在的常见性能问题及其处理方法。 -aliases: ['/docs-cn/tidb-data-migration/dev/handle-performance-issues/'] +aliases: ['/docs-cn/tidb-data-migration/dev/handle-performance-issues/','/zh/tidb-data-migration/dev/handle-performance-issues.md/'] --- # 性能问题及处理方法 @@ -72,7 +72,7 @@ Binlog replication 模块会根据配置选择从上游 MySQL/MariaDB 或 relay ### binlog event 转换 -Binlog replication 模块从 binlog event 数据中尝试构造 DML、解析 DDL 以及进行 [table router](key-features.md#table-routing) 转换等,主要的性能指标是 `transform binlog event duration`。 +Binlog replication 模块从 binlog event 数据中尝试构造 DML、解析 DDL 以及进行 [table router](dm-key-features.md#table-routing) 转换等,主要的性能指标是 `transform binlog event duration`。 这部分的耗时受上游写入的业务特点影响较大,如对于 `INSERT INTO` 语句,转换单个 `VALUES` 的时间和转换大量 `VALUES` 的时间差距很多,其波动范围可能从几十微秒至上百微秒,但一般不会成为系统的瓶颈。 diff --git a/zh/dm-hardware-and-software-requirements.md b/zh/dm-hardware-and-software-requirements.md index 
78f173217..b2ead8cac 100644 --- a/zh/dm-hardware-and-software-requirements.md +++ b/zh/dm-hardware-and-software-requirements.md @@ -46,4 +46,4 @@ DM 支持部署和运行在 Intel x86-64 架构的 64 位通用硬件服务器 > **注意:** > > - 在生产环境中,不建议将 DM-master 和 DM-worker 部署和运行在同一个服务器上,以防 DM-worker 对磁盘的写入干扰 DM-master 高可用组件使用磁盘。 -> - 在遇到性能问题时可参照[配置调优](tune-configuration.md)尝试修改任务配置。调优效果不明显时,可以尝试升级服务器配置。 +> - 在遇到性能问题时可参照[配置调优](dm-tune-configuration.md)尝试修改任务配置。调优效果不明显时,可以尝试升级服务器配置。 diff --git a/zh/key-features.md b/zh/dm-key-features.md similarity index 98% rename from zh/key-features.md rename to zh/dm-key-features.md index 00de57a6d..0dc72d958 100644 --- a/zh/key-features.md +++ b/zh/dm-key-features.md @@ -1,7 +1,7 @@ --- title: 主要特性 summary: 了解 DM 的各主要功能特性或相关的配置选项。 -aliases: ['/docs-cn/tidb-data-migration/dev/key-features/','/docs-cn/tidb-data-migration/dev/feature-overview/'] +aliases: ['/docs-cn/tidb-data-migration/dev/key-features/','/docs-cn/tidb-data-migration/dev/feature-overview/','/zh/tidb-data-migration/dev/key-features.md/'] --- # 主要特性 @@ -248,7 +248,7 @@ Binlog event filter 是比迁移表黑白名单更加细粒度的过滤规则, > **注意:** > > - 同一个表匹配上多个规则,将会顺序应用这些规则,并且黑名单的优先级高于白名单,即如果同时存在规则 `Ignore` 和 `Do` 应用在某个 table 上,那么 `Ignore` 生效。 -> - 从 DM v2.0.2 开始,Binlog event filter 也可以在上游数据库配置文件中进行配置。见[上游数据库配置文件介绍](source-configuration-file.md)。 +> - 从 DM v2.0.2 开始,Binlog event filter 也可以在上游数据库配置文件中进行配置。见[上游数据库配置文件介绍](dm-source-configuration-file.md)。 ### 参数配置 @@ -396,7 +396,7 @@ filters: ### 使用限制 - DM 仅针对 gh-ost 与 pt-osc 做了特殊支持。 -在开启 `online-ddl` 时,增量复制对应的 checkpoint 应不处于 online DDL 执行过程中。如上游某次 online DDL 操作开始于 binlog `position-A`、结束于 `position-B`,则增量复制的起始点应早于 `position-A` 或晚于 `position-B`,否则可能出现迁移出错,具体可参考 [FAQ](faq.md#设置了-online-ddl-scheme-gh-ost-gh-ost-表相关的-ddl-报错该如何处理)。 +在开启 `online-ddl` 时,增量复制对应的 checkpoint 应不处于 online DDL 执行过程中。如上游某次 online DDL 操作开始于 binlog `position-A`、结束于 `position-B`,则增量复制的起始点应早于 `position-A` 或晚于 `position-B`,否则可能出现迁移出错,具体可参考 [FAQ](dm-faq.md#设置了-online-ddl-scheme-gh-ost-gh-ost-表相关的-ddl-报错该如何处理)。 ### 
参数配置 diff --git a/zh/manage-schema.md b/zh/dm-manage-schema.md similarity index 99% rename from zh/manage-schema.md rename to zh/dm-manage-schema.md index 600af5e44..45e12336e 100644 --- a/zh/manage-schema.md +++ b/zh/dm-manage-schema.md @@ -1,6 +1,7 @@ --- title: 管理迁移表的表结构 summary: 了解如何管理待迁移表在 DM 内部的表结构。 +aliases: ['/zh/tidb-data-migration/dev/manage-schema.md/'] --- # 管理迁移表的表结构 diff --git a/zh/manage-source.md b/zh/dm-manage-source.md similarity index 95% rename from zh/manage-source.md rename to zh/dm-manage-source.md index 49015ebca..55fdd79b3 100644 --- a/zh/manage-source.md +++ b/zh/dm-manage-source.md @@ -1,7 +1,7 @@ --- title: 管理上游数据源 summary: 了解如何管理上游 MySQL 实例。 -aliases: ['/docs-cn/tidb-data-migration/dev/manage-source/'] +aliases: ['/docs-cn/tidb-data-migration/dev/manage-source/','/zh/tidb-data-migration/dev/manage-source.md/'] --- # 管理上游数据源配置 @@ -72,7 +72,7 @@ Global Flags: operate-source create ./source.yaml ``` -其中 `source.yaml` 的配置参考[上游数据库配置文件介绍](source-configuration-file.md)。 +其中 `source.yaml` 的配置参考[上游数据库配置文件介绍](dm-source-configuration-file.md)。 结果如下: @@ -174,7 +174,7 @@ Global Flags: -s, --source strings MySQL Source ID. 
``` -在改变绑定关系前,DM 会检查待解绑的 worker 是否正在运行同步任务,如果正在运行则需要先[暂停任务](pause-task.md),并在改变绑定关系后[恢复任务](resume-task.md)。 +在改变绑定关系前,DM 会检查待解绑的 worker 是否正在运行同步任务,如果正在运行则需要先[暂停任务](dm-pause-task.md),并在改变绑定关系后[恢复任务](dm-resume-task.md)。 ### 命令用法示例 diff --git a/zh/open-api.md b/zh/dm-open-api.md similarity index 99% rename from zh/open-api.md rename to zh/dm-open-api.md index 83a43a5dd..904ee1ef2 100644 --- a/zh/open-api.md +++ b/zh/dm-open-api.md @@ -1,6 +1,7 @@ --- title: 使用 OpenAPI 运维集群 summary: 了解如何使用 OpenAPI 接口来管理集群状态和数据同步。 +aliases: ['/zh/tidb-data-migration/dev/open-api.md/'] --- # 使用 OpenAPI 运维集群 diff --git a/zh/overview.md b/zh/dm-overview.md similarity index 82% rename from zh/overview.md rename to zh/dm-overview.md index 08b451d99..c6c5aab64 100644 --- a/zh/overview.md +++ b/zh/dm-overview.md @@ -1,6 +1,6 @@ --- title: Data Migration 简介 -aliases: ['/docs-cn/tidb-data-migration/dev/overview/','/docs-cn/tools/dm/overview/'] +aliases: ['/docs-cn/tidb-data-migration/dev/overview/','/docs-cn/tools/dm/overview/','/zh/tidb-data-migration/dev/overview.md/'] --- # Data Migration 简介 @@ -30,15 +30,15 @@ aliases: ['/docs-cn/tidb-data-migration/dev/overview/','/docs-cn/tools/dm/overvi ### Block & allow lists -[Block & Allow Lists](key-features.md#block--allow-table-lists) 的过滤规则类似于 MySQL `replication-rules-db`/`replication-rules-table`,用于过滤或指定只迁移某些数据库或某些表的所有操作。 +[Block & Allow Lists](dm-key-features.md#block--allow-table-lists) 的过滤规则类似于 MySQL `replication-rules-db`/`replication-rules-table`,用于过滤或指定只迁移某些数据库或某些表的所有操作。 ### Binlog event filter -[Binlog Event Filter](key-features.md#binlog-event-filter) 用于过滤源数据库中特定表的特定类型操作,比如过滤掉表 `test`.`sbtest` 的 `INSERT` 操作或者过滤掉库 `test` 下所有表的 `TRUNCATE TABLE` 操作。 +[Binlog Event Filter](dm-key-features.md#binlog-event-filter) 用于过滤源数据库中特定表的特定类型操作,比如过滤掉表 `test`.`sbtest` 的 `INSERT` 操作或者过滤掉库 `test` 下所有表的 `TRUNCATE TABLE` 操作。 ### Table routing -[Table Routing](key-features.md#table-routing) 是将源数据库的表迁移到下游指定表的路由功能,比如将源数据表 `test`.`sbtest1` 的表结构和数据迁移到 TiDB 的表 
`test`.`sbtest2`。它也是分库分表合并迁移所需的一个核心功能。 +[Table Routing](dm-key-features.md#table-routing) 是将源数据库的表迁移到下游指定表的路由功能,比如将源数据表 `test`.`sbtest1` 的表结构和数据迁移到 TiDB 的表 `test`.`sbtest2`。它也是分库分表合并迁移所需的一个核心功能。 ## 高级功能 @@ -48,7 +48,7 @@ DM 支持对源数据的分库分表进行合并迁移,但有一些使用限 ### 对第三方 Online Schema Change 工具变更过程的同步优化 -在 MySQL 生态中,gh-ost 与 pt-osc 等工具被广泛使用,DM 对其变更过程进行了特殊的优化,以避免对不必要的中间数据进行迁移。详细信息可参考 [online-ddl](key-features.md#online-ddl-工具支持)。 +在 MySQL 生态中,gh-ost 与 pt-osc 等工具被广泛使用,DM 对其变更过程进行了特殊的优化,以避免对不必要的中间数据进行迁移。详细信息可参考 [online-ddl](dm-key-features.md#online-ddl-工具支持)。 ### 使用 SQL 表达式过滤某些行变更 @@ -75,7 +75,7 @@ DM 支持对源数据的分库分表进行合并迁移,但有一些使用限 - 目前,TiDB 部分兼容 MySQL 支持的 DDL 语句。因为 DM 使用 TiDB parser 来解析处理 DDL 语句,所以目前仅支持 TiDB parser 支持的 DDL 语法。详见 [TiDB DDL 语法支持](https://pingcap.com/docs-cn/dev/reference/mysql-compatibility/#ddl)。 - - DM 遇到不兼容的 DDL 语句时会报错。要解决此报错,需要使用 dmctl 手动处理,要么跳过该 DDL 语句,要么用指定的 DDL 语句来替换它。详见[如何处理不兼容的 DDL 语句](faq.md#如何处理不兼容的-ddl-语句)。 + - DM 遇到不兼容的 DDL 语句时会报错。要解决此报错,需要使用 dmctl 手动处理,要么跳过该 DDL 语句,要么用指定的 DDL 语句来替换它。详见[如何处理不兼容的 DDL 语句](dm-faq.md#如何处理不兼容的-ddl-语句)。 + 分库分表数据冲突合并 diff --git a/zh/pause-task.md b/zh/dm-pause-task.md similarity index 95% rename from zh/pause-task.md rename to zh/dm-pause-task.md index 8db45435e..4c2bce101 100644 --- a/zh/pause-task.md +++ b/zh/dm-pause-task.md @@ -1,7 +1,7 @@ --- title: 暂停数据迁移任务 summary: 了解 TiDB Data Migration 如何暂停数据迁移任务。 -aliases: ['/docs-cn/tidb-data-migration/dev/pause-task/'] +aliases: ['/docs-cn/tidb-data-migration/dev/pause-task/','/zh/tidb-data-migration/dev/pause-task.md/'] --- # 暂停数据迁移任务 diff --git a/zh/performance-test.md b/zh/dm-performance-test.md similarity index 93% rename from zh/performance-test.md rename to zh/dm-performance-test.md index fb5d98782..3d56fb43b 100644 --- a/zh/performance-test.md +++ b/zh/dm-performance-test.md @@ -1,7 +1,7 @@ --- title: DM 集群性能测试 summary: 了解如何测试 DM 集群的性能。 -aliases: 
['/docs-cn/tidb-data-migration/dev/performance-test/'] +aliases: ['/docs-cn/tidb-data-migration/dev/performance-test/','/zh/tidb-data-migration/dev/performance-test.md/'] --- # DM 集群性能测试 @@ -51,7 +51,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330 #### 创建数据迁移任务 -1. 创建上游 MySQL 的 source,将 `source-id` 配置为 `source-1`。详细操作方法参考:[加载数据源配置](manage-source.md#数据源操作)。 +1. 创建上游 MySQL 的 source,将 `source-id` 配置为 `source-1`。详细操作方法参考:[加载数据源配置](dm-manage-source.md#数据源操作)。 2. 创建 `full` 模式的 DM 迁移任务,示例任务配置文件如下: @@ -85,7 +85,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330 threads: 32 ``` -创建数据迁移任务的详细操作参考[创建数据迁移任务](create-task.md#创建数据迁移任务)。 +创建数据迁移任务的详细操作参考[创建数据迁移任务](dm-create-task.md#创建数据迁移任务)。 > **注意:** > @@ -110,7 +110,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330 #### 创建数据迁移任务 -1. 创建上游 MySQL 的 source, source-id 配置为 `source-1`(如果在全量迁移性能测试中已经创建,则不需要再次创建)。详细操作方法参考:[加载数据源配置](manage-source.md#数据源操作)。 +1. 创建上游 MySQL 的 source, source-id 配置为 `source-1`(如果在全量迁移性能测试中已经创建,则不需要再次创建)。详细操作方法参考:[加载数据源配置](dm-manage-source.md#数据源操作)。 2. 
创建 `all` 模式的 DM 迁移任务,示例任务配置文件如下: @@ -143,7 +143,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330 batch: 100 ``` -创建数据迁移任务的详细操作参考[创建数据迁移任务](create-task.md#创建数据迁移任务)。 +创建数据迁移任务的详细操作参考[创建数据迁移任务](dm-create-task.md#创建数据迁移任务)。 > **注意:** > diff --git a/zh/precheck.md b/zh/dm-precheck.md similarity index 97% rename from zh/precheck.md rename to zh/dm-precheck.md index 3c503f072..a504715e6 100644 --- a/zh/precheck.md +++ b/zh/dm-precheck.md @@ -1,7 +1,7 @@ --- title: 上游 MySQL 实例配置前置检查 summary: 了解上游 MySQL 实例配置前置检查。 -aliases: ['/docs-cn/tidb-data-migration/dev/precheck/'] +aliases: ['/docs-cn/tidb-data-migration/dev/precheck/','/zh/tidb-data-migration/dev/precheck.md/'] --- # 上游 MySQL 实例配置前置检查 diff --git a/zh/query-status.md b/zh/dm-query-status.md similarity index 99% rename from zh/query-status.md rename to zh/dm-query-status.md index 90e22ebea..b362fa825 100644 --- a/zh/query-status.md +++ b/zh/dm-query-status.md @@ -1,7 +1,7 @@ --- title: TiDB Data Migration 查询状态 summary: 深入了解 TiDB Data Migration 如何查询数据迁移任务状态 -aliases: ['/docs-cn/tidb-data-migration/dev/query-status/','/docs-cn/tidb-data-migration/dev/query-error/','/tidb-data-migration/dev/query-error/'] +aliases: ['/docs-cn/tidb-data-migration/dev/query-status/','/docs-cn/tidb-data-migration/dev/query-error/','/tidb-data-migration/dev/query-error/','/zh/tidb-data-migration/dev/query-status.md/'] --- # TiDB Data Migration 查询状态 diff --git a/zh/resume-task.md b/zh/dm-resume-task.md similarity index 92% rename from zh/resume-task.md rename to zh/dm-resume-task.md index b7c0d194b..bc4d0b076 100644 --- a/zh/resume-task.md +++ b/zh/dm-resume-task.md @@ -1,7 +1,7 @@ --- title: 恢复数据迁移任务 summary: 了解 TiDB Data Migration 如何恢复数据迁移任务。 -aliases: ['/docs-cn/tidb-data-migration/dev/resume-task/'] +aliases: ['/docs-cn/tidb-data-migration/dev/resume-task/','/zh/tidb-data-migration/dev/resume-task.md/'] --- # 恢复数据迁移任务 diff --git a/zh/source-configuration-file.md b/zh/dm-source-configuration-file.md 
similarity index 97% rename from zh/source-configuration-file.md rename to zh/dm-source-configuration-file.md index afcdfa292..d18c71ce2 100644 --- a/zh/source-configuration-file.md +++ b/zh/dm-source-configuration-file.md @@ -1,6 +1,6 @@ --- title: 上游数据库配置文件介绍 -aliases: ['/docs-cn/tidb-data-migration/dev/source-configuration-file/'] +aliases: ['/docs-cn/tidb-data-migration/dev/source-configuration-file/','/zh/tidb-data-migration/dev/source-configuration-file.md/'] --- # 上游数据库配置文件介绍 @@ -107,4 +107,4 @@ DM 会定期检查当前任务状态以及错误信息,判断恢复任务能 | 配置项 | 说明 | | :------------ | :--------------------------------------- | | `case-sensitive` | Binlog event filter 标识符是否大小写敏感。默认值:false。| -| `filters` | 配置 Binlog event filter,含义见 [Binlog event filter 参数解释](key-features.md#参数解释-2)。 | +| `filters` | 配置 Binlog event filter,含义见 [Binlog event filter 参数解释](dm-key-features.md#参数解释-2)。 | diff --git a/zh/stop-task.md b/zh/dm-stop-task.md similarity index 89% rename from zh/stop-task.md rename to zh/dm-stop-task.md index d45f2b406..59dbe01e1 100644 --- a/zh/stop-task.md +++ b/zh/dm-stop-task.md @@ -1,12 +1,12 @@ --- title: 停止数据迁移任务 summary: 了解 TiDB Data Migration 如何停止数据迁移任务。 -aliases: ['/docs-cn/tidb-data-migration/dev/stop-task/'] +aliases: ['/docs-cn/tidb-data-migration/dev/stop-task/','/zh/tidb-data-migration/dev/stop-task.md/'] --- # 停止数据迁移任务 -`stop-task` 命令用于停止数据迁移任务。有关 `stop-task` 与 `pause-task` 的区别,请参考[暂停数据迁移任务](pause-task.md)中的相关说明。 +`stop-task` 命令用于停止数据迁移任务。有关 `stop-task` 与 `pause-task` 的区别,请参考[暂停数据迁移任务](dm-pause-task.md)中的相关说明。 {{< copyable "" >}} diff --git a/zh/task-configuration-guide.md b/zh/dm-task-configuration-guide.md similarity index 95% rename from zh/task-configuration-guide.md rename to zh/dm-task-configuration-guide.md index 6037d4634..dccfbeba3 100644 --- a/zh/task-configuration-guide.md +++ b/zh/dm-task-configuration-guide.md @@ -1,5 +1,6 @@ --- title: DM 数据迁移任务配置向导 +aliases: ['/zh/tidb-data-migration/dev/task-configuration-guide.md/'] --- # 数据迁移任务配置向导 @@ -10,9 +11,9 @@ 
title: DM 数据迁移任务配置向导 配置需要迁移的数据源之前,首先应该确认已经在 DM 创建相应数据源: -- 查看数据源可以参考 [查看数据源配置](manage-source.md#查看数据源配置) +- 查看数据源可以参考 [查看数据源配置](dm-manage-source.md#查看数据源配置) - 创建数据源可以参考 [在 DM 创建数据源](migrate-data-using-dm.md#第-3-步创建数据源) -- 数据源配置可以参考 [数据源配置文件介绍](source-configuration-file.md) +- 数据源配置可以参考 [数据源配置文件介绍](dm-source-configuration-file.md) 仿照下面的 `mysql-instances:` 示例定义数据迁移任务需要同步的单个或者多个数据源。 @@ -55,7 +56,7 @@ target-database: # 目标 TiDB 配置 如果不需要过滤或迁移特定表,可以跳过该项配置。 -配置从数据源迁移表的黑白名单,则需要添加两个定义,详细配置规则参考 [Block & Allow Lists](key-features.md#block--allow-table-lists): +配置从数据源迁移表的黑白名单,则需要添加两个定义,详细配置规则参考 [Block & Allow Lists](dm-key-features.md#block--allow-table-lists): 1. 定义全局的黑白名单规则 @@ -89,7 +90,7 @@ target-database: # 目标 TiDB 配置 如果不需要过滤特定库或者特定表的特定操作,可以跳过该项配置。 -配置过滤特定操作,则需要添加两个定义,详细配置规则参考 [Binlog Event Filter](key-features.md#binlog-event-filter): +配置过滤特定操作,则需要添加两个定义,详细配置规则参考 [Binlog Event Filter](dm-key-features.md#binlog-event-filter): 1. 定义全局的数据源操作过滤规则 @@ -122,7 +123,7 @@ target-database: # 目标 TiDB 配置 如果不需要将数据源表路由到不同名的目标 TiDB 表,可以跳过该项配置。分库分表合并迁移的场景必须配置该规则。 -配置数据源表迁移到目标 TiDB 表的路由规则,则需要添加两个定义,详细配置规则参考 [Table Routing](key-features.md#table-routing): +配置数据源表迁移到目标 TiDB 表的路由规则,则需要添加两个定义,详细配置规则参考 [Table Routing](dm-key-features.md#table-routing): 1. 
定义全局的路由规则 @@ -167,7 +168,7 @@ shard-mode: "pessimistic" # 默认值为 "" 即无需协调。如果为分 ## 其他配置 -下面是本数据迁移任务配置向导的完整示例。完整的任务配置参见 [DM 任务完整配置文件介绍](task-configuration-file-full.md)，其他各配置项的功能和配置也可参阅[数据迁移功能](key-features.md)。 +下面是本数据迁移任务配置向导的完整示例。完整的任务配置参见 [DM 任务完整配置文件介绍](task-configuration-file-full.md)，其他各配置项的功能和配置也可参阅[数据迁移功能](dm-key-features.md)。 ```yaml --- diff --git a/zh/tune-configuration.md b/zh/dm-tune-configuration.md similarity index 98% rename from zh/tune-configuration.md rename to zh/dm-tune-configuration.md index 943e82f84..cedef34af 100644 --- a/zh/tune-configuration.md +++ b/zh/dm-tune-configuration.md @@ -1,7 +1,7 @@ --- title: DM 配置优化 summary: 介绍如何通过优化配置来提高数据迁移性能。 -aliases: ['/docs-cn/tidb-data-migration/dev/tune-configuration/'] +aliases: ['/docs-cn/tidb-data-migration/dev/tune-configuration/','/zh/tidb-data-migration/dev/tune-configuration.md/'] --- # DM 配置优化 diff --git a/zh/feature-expression-filter.md b/zh/feature-expression-filter.md index 4d0d4ad58..35d2f4ce3 100644 --- a/zh/feature-expression-filter.md +++ b/zh/feature-expression-filter.md @@ -6,7 +6,7 @@ title: 使用 SQL 表达式过滤某些行变更 ## 概述 -在数据迁移的过程中，DM 提供了 [Binlog Event Filter](key-features.md#binlog-event-filter) 功能过滤某些类型的 binlog event，例如不向下游迁移 `DELETE` 事件以达到归档、审计等目的。但是 Binlog Event Filter 无法以更细粒度判断某一行的 `DELETE` 事件是否要被过滤。 +在数据迁移的过程中，DM 提供了 [Binlog Event Filter](dm-key-features.md#binlog-event-filter) 功能过滤某些类型的 binlog event，例如不向下游迁移 `DELETE` 事件以达到归档、审计等目的。但是 Binlog Event Filter 无法以更细粒度判断某一行的 `DELETE` 事件是否要被过滤。 为了解决上述问题，DM 支持使用 SQL 表达式过滤某些行变更。DM 支持的 `ROW` 格式的 binlog 中，binlog event 带有所有列的值。用户可以基于这些值配置 SQL 表达式。如果该表达式对于某条行变更的计算结果是 `TRUE`，DM 就不会向下游迁移该条行变更。 @@ -16,7 +16,7 @@ title: 使用 SQL 表达式过滤某些行变更 ## 配置示例 -与 [Binlog Event Filter](key-features.md#binlog-event-filter) 类似，表达式过滤需要在数据迁移任务配置文件里配置，详见下面配置样例。完整的配置及意义，可以参考 [DM 完整配置文件示例](task-configuration-file-full.md#完整配置文件示例)： +与 [Binlog Event Filter](dm-key-features.md#binlog-event-filter) 类似，表达式过滤需要在数据迁移任务配置文件里配置，详见下面配置样例。完整的配置及意义，可以参考 [DM
完整配置文件示例](task-configuration-file-full.md#完整配置文件示例): ```yml name: test diff --git a/zh/feature-shard-merge-pessimistic.md b/zh/feature-shard-merge-pessimistic.md index 192642579..70a7d3376 100644 --- a/zh/feature-shard-merge-pessimistic.md +++ b/zh/feature-shard-merge-pessimistic.md @@ -39,7 +39,7 @@ DM 在悲观模式下进行分表 DDL 的迁移有以下几点使用限制: - 增量复制任务需要确认开始迁移的 binlog position 上各分表的表结构必须一致,才能确保来自不同分表的 DML 语句能够迁移到表结构确定的下游,并且后续各分表的 DDL 语句能够正确匹配与迁移。 -- 如果需要变更 [table routing 规则](key-features.md#table-routing),必须先等所有 sharding DDL 语句迁移完成。 +- 如果需要变更 [table routing 规则](dm-key-features.md#table-routing),必须先等所有 sharding DDL 语句迁移完成。 - 在 sharding DDL 语句迁移过程中,使用 dmctl 尝试变更 router-rules 会报错。 @@ -109,7 +109,7 @@ DM 在悲观模式下进行分表 DDL 的迁移有以下几点使用限制: - 如果 sharding group 的所有成员都收到了某一条相同的 DDL 语句,则表明上游分表在该 DDL 执行前的 DML 语句都已经迁移完成,此时可以执行该 DDL 语句,并继续后续的 DML 迁移。 -- 上游所有分表的 DDL 在经过 [table router](key-features.md#table-routing) 转换后需要保持一致,因此仅需 DDL 锁的 owner 执行一次该 DDL 语句即可,其他 DM-worker 可直接忽略对应的 DDL 语句。 +- 上游所有分表的 DDL 在经过 [table router](dm-key-features.md#table-routing) 转换后需要保持一致,因此仅需 DDL 锁的 owner 执行一次该 DDL 语句即可,其他 DM-worker 可直接忽略对应的 DDL 语句。 在上面的示例中,每个 DM-worker 对应的上游 MySQL 实例中只有一个待合并的分表。但在实际场景下,一个 MySQL 实例可能有多个分库内的多个分表需要进行合并,这种情况下,sharding DDL 的协调迁移过程将更加复杂。 diff --git a/zh/handle-failed-ddl-statements.md b/zh/handle-failed-ddl-statements.md index f7597a643..beac5c2f3 100644 --- a/zh/handle-failed-ddl-statements.md +++ b/zh/handle-failed-ddl-statements.md @@ -29,7 +29,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/skip-or-replace-abnormal-sql-stateme ### query-status -`query-status` 命令用于查询当前 MySQL 实例内子任务及 relay 单元等的状态和错误信息,详见[查询状态](query-status.md)。 +`query-status` 命令用于查询当前 MySQL 实例内子任务及 relay 单元等的状态和错误信息,详见[查询状态](dm-query-status.md)。 ### handle-error diff --git a/zh/maintain-dm-using-tiup.md b/zh/maintain-dm-using-tiup.md index 001adeacd..2eaec7ccc 100644 --- a/zh/maintain-dm-using-tiup.md +++ b/zh/maintain-dm-using-tiup.md @@ -181,7 +181,7 @@ tiup dm scale-in prod-cluster -N 172.16.5.140:8262 > **注意:** > -> 
从 v2.0.5 版本开始,dmctl 支持[导出和导入集群的数据源和任务配置](export-import-config.md)。 +> 从 v2.0.5 版本开始,dmctl 支持[导出和导入集群的数据源和任务配置](dm-export-import-config.md)。 > > 升级前,可使用 `config export` 命令导出集群的配置文件,升级后如需降级回退到旧版本,可重建旧集群后,使用 `config import` 导入之前的配置。 > diff --git a/zh/manually-upgrade-dm-1.0-to-2.0.md b/zh/manually-upgrade-dm-1.0-to-2.0.md index 3184d0a5a..2e490c52b 100644 --- a/zh/manually-upgrade-dm-1.0-to-2.0.md +++ b/zh/manually-upgrade-dm-1.0-to-2.0.md @@ -24,7 +24,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/manually-upgrade-dm-1.0-to-2.0/'] ### 上游数据库配置文件 -在 v2.0+ 中将[上游数据库 source 相关的配置](source-configuration-file.md)从 DM-worker 的进程配置中独立了出来,因此需要根据 [v1.0.x 的 DM-worker 配置](https://docs.pingcap.com/zh/tidb-data-migration/stable/dm-worker-configuration-file)拆分得到 source 配置。 +在 v2.0+ 中将[上游数据库 source 相关的配置](dm-source-configuration-file.md)从 DM-worker 的进程配置中独立了出来,因此需要根据 [v1.0.x 的 DM-worker 配置](https://docs.pingcap.com/zh/tidb-data-migration/stable/dm-worker-configuration-file)拆分得到 source 配置。 > **注意:** > @@ -99,7 +99,7 @@ from: ### 数据迁移任务配置文件 -对于[数据迁移任务配置向导](task-configuration-guide.md),v2.0+ 基本与 v1.0.x 保持兼容,可直接复制 v1.0.x 的配置。 +对于[数据迁移任务配置向导](dm-task-configuration-guide.md),v2.0+ 基本与 v1.0.x 保持兼容,可直接复制 v1.0.x 的配置。 ## 第 2 步:部署 v2.0+ 集群 @@ -117,7 +117,7 @@ from: ## 第 4 步:升级数据迁移任务 -1. 使用 [`operate-source`](manage-source.md#数据源操作) 命令将 [准备 v2.0+ 的配置文件](#第-1-步准备-v20-的配置文件) 中得到的上游数据库 source 配置加载到 v2.0+ 集群中。 +1. 使用 [`operate-source`](dm-manage-source.md#数据源操作) 命令将 [准备 v2.0+ 的配置文件](#第-1-步准备-v20-的配置文件) 中得到的上游数据库 source 配置加载到 v2.0+ 集群中。 2. 在下游 TiDB 中,从 v1.0.x 的数据复制任务对应的增量 checkpoint 表中获取对应的全局 checkpoint 信息。 @@ -159,8 +159,8 @@ from: > > 如在 source 配置中启动了 `enable-gtid`,当前需要通过解析 binlog 或 relay log 文件获取 binlog position 对应的 GTID sets 并在 `meta` 中设置为 `binlog-gtid`。 -4. 使用 [`start-task`](create-task.md) 命令以 v2.0+ 的数据迁移任务配置文件启动升级后的数据迁移任务。 +4. 使用 [`start-task`](dm-create-task.md) 命令以 v2.0+ 的数据迁移任务配置文件启动升级后的数据迁移任务。 -5. 使用 [`query-status`](query-status.md) 命令确认数据迁移任务是否运行正常。 +5. 
使用 [`query-status`](dm-query-status.md) 命令确认数据迁移任务是否运行正常。 如果数据迁移任务运行正常,则表明 DM 升级到 v2.0+ 的操作成功。 diff --git a/zh/migrate-data-using-dm.md b/zh/migrate-data-using-dm.md index aecc7798d..eabfac5b4 100644 --- a/zh/migrate-data-using-dm.md +++ b/zh/migrate-data-using-dm.md @@ -13,7 +13,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/replicate-data-using-dm/','/zh/tidb- > **注意:** > -> - 在 DM 所有的配置文件中,对于数据库密码推荐使用 dmctl 加密后的密文。如果数据库密码为空,则不需要加密。关于如何使用 dmctl 加密明文密码,参考[使用 dmctl 加密数据库密码](manage-source.md#加密数据库密码)。 +> - 在 DM 所有的配置文件中,对于数据库密码推荐使用 dmctl 加密后的密文。如果数据库密码为空,则不需要加密。关于如何使用 dmctl 加密明文密码,参考[使用 dmctl 加密数据库密码](dm-manage-source.md#加密数据库密码)。 > - 上下游数据库用户必须拥有相应的读写权限。 ## 第 2 步:检查集群信息 @@ -36,7 +36,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/replicate-data-using-dm/','/zh/tidb- | 上游 MySQL-2 | 172.16.10.82 | 3306 | root | VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU= | | 下游 TiDB | 172.16.10.83 | 4000 | root | | -上游 MySQL 数据库实例用户所需权限参见[上游 MySQL 实例配置前置检查](precheck.md)介绍。 +上游 MySQL 数据库实例用户所需权限参见[上游 MySQL 实例配置前置检查](dm-precheck.md)介绍。 ## 第 3 步:创建数据源 @@ -115,7 +115,7 @@ mydumpers: ## 第 5 步:启动任务 -为了提前发现数据迁移任务的一些配置错误,DM 中增加了[前置检查](precheck.md)功能: +为了提前发现数据迁移任务的一些配置错误,DM 中增加了[前置检查](dm-precheck.md)功能: - 启动数据迁移任务时,DM 自动检查相应的权限和配置。 - 也可使用 `check-task` 命令手动前置检查上游的 MySQL 实例配置是否符合 DM 的配置要求。 diff --git a/zh/migrate-from-mysql-aurora.md b/zh/migrate-from-mysql-aurora.md index b6023f052..6b2067bb1 100644 --- a/zh/migrate-from-mysql-aurora.md +++ b/zh/migrate-from-mysql-aurora.md @@ -68,7 +68,7 @@ DM 在增量复制阶段依赖 `ROW` 格式的 binlog,参见[为 Aurora 实例 > **注意:** > > + 基于 GTID 进行数据迁移需要 MySQL 5.7 (Aurora 2.04) 或更高版本。 -> + 除上述 Aurora 特有配置以外,上游数据库需满足迁移 MySQL 的其他要求,例如表结构、字符集、权限等,参见[上游 MySQL 实例检查内容](precheck.md#检查内容)。 +> + 除上述 Aurora 特有配置以外,上游数据库需满足迁移 MySQL 的其他要求,例如表结构、字符集、权限等,参见[上游 MySQL 实例检查内容](dm-precheck.md#检查内容)。 ## 第 2 步:部署 DM 集群 @@ -122,7 +122,7 @@ tiup dmctl --master-addr 127.0.0.1:8261 list-member > **注意:** > -> DM 所使用的配置文件支持明文或密文数据库密码,推荐使用密文数据库密码确保安全。如何获得密文数据库密码,参见[使用 dmctl 加密数据库密码](manage-source.md#加密数据库密码)。 +> 
DM 所使用的配置文件支持明文或密文数据库密码,推荐使用密文数据库密码确保安全。如何获得密文数据库密码,参见[使用 dmctl 加密数据库密码](dm-manage-source.md#加密数据库密码)。 根据示例信息保存如下的数据源配置文件,其中 `source-id` 的值将在第 4 步配置任务时被引用。 diff --git a/zh/quick-create-migration-task.md b/zh/quick-create-migration-task.md index a67e8db50..035abba76 100644 --- a/zh/quick-create-migration-task.md +++ b/zh/quick-create-migration-task.md @@ -17,7 +17,7 @@ summary: 了解在不同业务需求场景下如何配置数据迁移任务。 除了业务需求场景导向的创建数据迁移任务教程之外: - 完整的数据迁移任务配置示例,请参考 [DM 任务完整配置文件介绍](task-configuration-file-full.md) -- 数据迁移任务的配置向导,请参考 [数据迁移任务配置向导](task-configuration-guide.md) +- 数据迁移任务的配置向导,请参考 [数据迁移任务配置向导](dm-task-configuration-guide.md) ## 多数据源汇总迁移到 TiDB diff --git a/zh/quick-start-create-source.md b/zh/quick-start-create-source.md index c553f53dd..0625ad006 100644 --- a/zh/quick-start-create-source.md +++ b/zh/quick-start-create-source.md @@ -11,7 +11,7 @@ summary: 了解如何为 DM 创建数据源。 本文档介绍如何为 TiDB Data Migration (DM) 的数据迁移任务创建数据源。 -数据源包含了访问迁移任务上游所需的信息。数据迁移任务需要引用对应的数据源来获取访问配置信息。因此,在创建数据迁移任务之前,需要先创建任务的数据源。详细的数据源管理命令请参考[管理上游数据源](manage-source.md)。 +数据源包含了访问迁移任务上游所需的信息。数据迁移任务需要引用对应的数据源来获取访问配置信息。因此,在创建数据迁移任务之前,需要先创建任务的数据源。详细的数据源管理命令请参考[管理上游数据源](dm-manage-source.md)。 ## 第一步:配置数据源 @@ -57,7 +57,7 @@ summary: 了解如何为 DM 创建数据源。 tiup dmctl --master-addr operate-source create ./source-mysql-01.yaml ``` -数据源配置文件的其他配置参考[数据源配置文件介绍](source-configuration-file.md)。 +数据源配置文件的其他配置参考[数据源配置文件介绍](dm-source-configuration-file.md)。 命令返回结果如下: diff --git a/zh/relay-log.md b/zh/relay-log.md index 37e459a7f..59aa4fa43 100644 --- a/zh/relay-log.md +++ b/zh/relay-log.md @@ -8,7 +8,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/relay-log/'] DM (Data Migration) 工具的 relay log 由若干组有编号的文件和一个索引文件组成。这些有编号的文件包含了描述数据库更改的事件。索引文件包含所有使用过的 relay log 的文件名。 -在启用 relay log 功能后,DM-worker 会自动将上游 binlog 迁移到本地配置目录(若使用 TiUP 部署 DM,则迁移目录默认为 ` / `)。本地配置目录 `` 的默认值是 `relay-dir`,可在[上游数据库配置文件](source-configuration-file.md)中进行修改)。DM-worker 在运行过程中,会将上游 binlog 实时迁移到本地文件。DM-worker 的 sync 处理单元会实时读取本地 relay log 的 binlog 事件,将这些事件转换为 SQL 语句,再将 SQL 
语句迁移到下游数据库。 +在启用 relay log 功能后,DM-worker 会自动将上游 binlog 迁移到本地配置目录(若使用 TiUP 部署 DM,则迁移目录默认为 ` / `)。本地配置目录 `` 的默认值是 `relay-dir`,可在[上游数据库配置文件](dm-source-configuration-file.md)中进行修改)。DM-worker 在运行过程中,会将上游 binlog 实时迁移到本地文件。DM-worker 的 sync 处理单元会实时读取本地 relay log 的 binlog 事件,将这些事件转换为 SQL 语句,再将 SQL 语句迁移到下游数据库。 > **注意:** > @@ -99,7 +99,7 @@ Relay log 迁移的起始位置由如下规则决定: > **注意:** > -> 自 v2.0.2 起,上游数据源配置中的 `enable-relay` 项已经失效。在[加载数据源配置](manage-source.md#数据源操作)时,如果发现配置中的 `enable-relay` 项为 `true`,DM 会给出如下信息提示: +> 自 v2.0.2 起,上游数据源配置中的 `enable-relay` 项已经失效。在[加载数据源配置](dm-manage-source.md#数据源操作)时,如果发现配置中的 `enable-relay` 项为 `true`,DM 会给出如下信息提示: > > ``` > Please use `start-relay` to specify which workers should pull relay log of relay-enabled sources. @@ -141,7 +141,7 @@ Relay log 迁移的起始位置由如下规则决定: 在 v2.0.2 之前的版本(不含 v2.0.2),DM-worker 在绑定上游数据源时,会检查上游数据源配置中的 `enable-relay` 项。如果 `enable-relay` 为 `true`,则为该数据源启用 relay log 功能。 -具体配置方式参见[上游数据源配置文件介绍](source-configuration-file.md) +具体配置方式参见[上游数据源配置文件介绍](dm-source-configuration-file.md) diff --git a/zh/shard-merge-best-practices.md b/zh/shard-merge-best-practices.md index 030e4595e..03dbfaffa 100644 --- a/zh/shard-merge-best-practices.md +++ b/zh/shard-merge-best-practices.md @@ -117,7 +117,7 @@ CREATE TABLE `tbl_multi_pk` ( ## 上游 RDS 封装分库分表的处理 -上游数据源为 RDS 且使用了其分库分表功能的情况下,MySQL binlog 中的表名在 SQL client 连接时可能并不可见。例如在 UCloud 分布式数据库 [UDDB](https://www.ucloud.cn/site/product/uddb.html) 中,其 binlog 表名可能会多出 `_0001` 的后缀。这需要根据 binlog 中的表名规律,而不是 SQL client 所见的表名,来配置 [table routing 规则](key-features.md#table-routing)。 +上游数据源为 RDS 且使用了其分库分表功能的情况下,MySQL binlog 中的表名在 SQL client 连接时可能并不可见。例如在 UCloud 分布式数据库 [UDDB](https://www.ucloud.cn/site/product/uddb.html) 中,其 binlog 表名可能会多出 `_0001` 的后缀。这需要根据 binlog 中的表名规律,而不是 SQL client 所见的表名,来配置 [table routing 规则](dm-key-features.md#table-routing)。 ## 合表迁移过程中在上游增/删表 diff --git a/zh/task-configuration-file-full.md b/zh/task-configuration-file-full.md index 0b37e3b7b..98d28780c 100644 --- a/zh/task-configuration-file-full.md +++ 
b/zh/task-configuration-file-full.md @@ -7,11 +7,11 @@ aliases: ['/docs-cn/tidb-data-migration/dev/task-configuration-file-full/','/zh/ 本文档主要介绍 Data Migration (DM) 的任务完整的配置文件,包含[全局配置](#全局配置) 和[实例配置](#实例配置) 两部分。 -关于各配置项的功能和配置,请参阅[数据迁移功能](overview.md#基本功能)。 +关于各配置项的功能和配置,请参阅[数据迁移功能](dm-overview.md#基本功能)。 ## 关键概念 -关于包括 `source-id` 和 DM-worker ID 在内的关键概念的介绍,请参阅[关键概念](config-overview.md#关键概念)。 +关于包括 `source-id` 和 DM-worker ID 在内的关键概念的介绍,请参阅[关键概念](dm-config-overview.md#关键概念)。 ## 完整配置文件示例 @@ -177,9 +177,9 @@ mysql-instances: | 配置项 | 说明 | | :------------ | :--------------------------------------- | -| `routes` | 上游和下游表之间的路由 table routing 规则集。如果上游与下游的库名、表名一致,则不需要配置该项。使用场景及示例配置参见 [Table Routing](key-features.md#table-routing) | -| `filters` | 上游数据库实例匹配的表的 binlog event filter 规则集。如果不需要对 binlog 进行过滤,则不需要配置该项。使用场景及示例配置参见 [Binlog Event Filter](key-features.md#binlog-event-filter) | -| `block-allow-list` | 该上游数据库实例匹配的表的 block & allow lists 过滤规则集。建议通过该项指定需要迁移的库和表,否则会迁移所有的库和表。使用场景及示例配置参见 [Block & Allow Lists](key-features.md#block--allow-table-lists) | +| `routes` | 上游和下游表之间的路由 table routing 规则集。如果上游与下游的库名、表名一致,则不需要配置该项。使用场景及示例配置参见 [Table Routing](dm-key-features.md#table-routing) | +| `filters` | 上游数据库实例匹配的表的 binlog event filter 规则集。如果不需要对 binlog 进行过滤,则不需要配置该项。使用场景及示例配置参见 [Binlog Event Filter](dm-key-features.md#binlog-event-filter) | +| `block-allow-list` | 该上游数据库实例匹配的表的 block & allow lists 过滤规则集。建议通过该项指定需要迁移的库和表,否则会迁移所有的库和表。使用场景及示例配置参见 [Block & Allow Lists](dm-key-features.md#block--allow-table-lists) | | `mydumpers` | dump 处理单元的运行配置参数。如果默认配置可以满足需求,则不需要配置该项,也可以只使用 `mydumper-thread` 对 `thread` 配置项单独进行配置。 | | `loaders` | load 处理单元的运行配置参数。如果默认配置可以满足需求,则不需要配置该项,也可以只使用 `loader-thread` 对 `pool-size` 配置项单独进行配置。 | | `syncers` | sync 处理单元的运行配置参数。如果默认配置可以满足需求,则不需要配置该项,也可以只使用 `syncer-thread` 对 `worker-count` 配置项单独进行配置。 | diff --git a/zh/task-configuration-file.md b/zh/task-configuration-file.md index af1ab95d6..fcedf7f45 100644 --- a/zh/task-configuration-file.md +++ 
b/zh/task-configuration-file.md @@ -7,11 +7,11 @@ aliases: ['/docs-cn/tidb-data-migration/dev/task-configuration-file/'] 本文档主要介绍 Data Migration (DM) 的任务基础配置文件,包含[全局配置](#全局配置)和[实例配置](#实例配置)两部分。 -完整的任务配置参见 [DM 任务完整配置文件介绍](task-configuration-file-full.md)。关于各配置项的功能和配置,请参阅[数据迁移功能](key-features.md)。 +完整的任务配置参见 [DM 任务完整配置文件介绍](task-configuration-file-full.md)。关于各配置项的功能和配置,请参阅[数据迁移功能](dm-key-features.md)。 ## 关键概念 -关于包括 `source-id` 和 DM-worker ID 在内的关键概念的介绍,请参阅[关键概念](config-overview.md#关键概念)。 +关于包括 `source-id` 和 DM-worker ID 在内的关键概念的介绍,请参阅[关键概念](dm-config-overview.md#关键概念)。 ## 基础配置文件示例 @@ -78,7 +78,7 @@ mysql-instances: ### 功能配置集 -对于一般的业务场景,只需要配置黑白名单过滤规则集,配置说明参见以上示例配置文件中 `block-allow-list` 的注释以及 [Block & Allow Lists](key-features.md#block--allow-table-lists) +对于一般的业务场景,只需要配置黑白名单过滤规则集,配置说明参见以上示例配置文件中 `block-allow-list` 的注释以及 [Block & Allow Lists](dm-key-features.md#block--allow-table-lists) ## 实例配置 diff --git a/zh/usage-scenario-downstream-more-columns.md b/zh/usage-scenario-downstream-more-columns.md index 98d537e30..82b600ea5 100644 --- a/zh/usage-scenario-downstream-more-columns.md +++ b/zh/usage-scenario-downstream-more-columns.md @@ -48,7 +48,7 @@ summary: 了解如何在下游表结构比数据源存在更多列的情况下 出现以上错误的原因是 DM 迁移 binlog event 时,如果 DM 内部没有维护对应于该表的表结构,则会尝试使用下游当前的表结构来解析 binlog event 并生成相应的 DML 语句。如果 binlog event 里数据的列数与下游表结构的列数不一致时,则会产生上述错误。 -此时,我们可以使用 [`operate-schema`](manage-schema.md) 命令来为该表指定与 binlog event 匹配的表结构。如果你在进行分表合并的数据迁移,那么需要为每个分表按照如下步骤在 DM 中设置用于解析 MySQL binlog 的表结构。具体操作为: +此时,我们可以使用 [`operate-schema`](dm-manage-schema.md) 命令来为该表指定与 binlog event 匹配的表结构。如果你在进行分表合并的数据迁移,那么需要为每个分表按照如下步骤在 DM 中设置用于解析 MySQL binlog 的表结构。具体操作为: 1. 为数据源中需要迁移的表 `log.messages` 指定表结构,表结构需要对应 DM 将要开始同步的 binlog event 的数据。将对应的 `CREATE TABLE` 表结构语句并保存到文件,例如将以下表结构保存到 `log.messages.sql` 中。 @@ -60,7 +60,7 @@ summary: 了解如何在下游表结构比数据源存在更多列的情况下 ) ``` -2. 使用 [`operate-schema`](manage-schema.md) 命令设置表结构(此时 task 应该由于上述错误而处于 `Paused` 状态)。 +2. 
使用 [`operate-schema`](dm-manage-schema.md) 命令设置表结构(此时 task 应该由于上述错误而处于 `Paused` 状态)。 {{< copyable "" >}} @@ -68,6 +68,6 @@ summary: 了解如何在下游表结构比数据源存在更多列的情况下 tiup dmctl --master-addr operate-schema set -s mysql-01 task-test -d log -t message log.message.sql ``` -3. 使用 [`resume-task`](resume-task.md) 命令恢复处于 `Paused` 状态的任务。 +3. 使用 [`resume-task`](dm-resume-task.md) 命令恢复处于 `Paused` 状态的任务。 -4. 使用 [`query-status`](query-status.md) 命令确认数据迁移任务是否运行正常。 +4. 使用 [`query-status`](dm-query-status.md) 命令确认数据迁移任务是否运行正常。 diff --git a/zh/usage-scenario-shard-merge.md b/zh/usage-scenario-shard-merge.md index e5e0ea7ed..299bd820f 100644 --- a/zh/usage-scenario-shard-merge.md +++ b/zh/usage-scenario-shard-merge.md @@ -78,7 +78,7 @@ CREATE TABLE `sale_01` ( ## 迁移方案 -- 要满足迁移需求 #1,无需配置 [table routing 规则](key-features.md#table-routing)。按照[去掉自增主键的主键属性](shard-merge-best-practices.md#去掉自增主键的主键属性)的要求,在下游手动建表。 +- 要满足迁移需求 #1,无需配置 [table routing 规则](dm-key-features.md#table-routing)。按照[去掉自增主键的主键属性](shard-merge-best-practices.md#去掉自增主键的主键属性)的要求,在下游手动建表。 {{< copyable "sql" >}} @@ -101,7 +101,7 @@ CREATE TABLE `sale_01` ( ignore-checking-items: ["auto_increment_ID"] ``` -- 要满足迁移需求 #2,配置 [table routing 规则](key-features.md#table-routing)如下: +- 要满足迁移需求 #2,配置 [table routing 规则](dm-key-features.md#table-routing)如下: {{< copyable "" >}} @@ -118,7 +118,7 @@ CREATE TABLE `sale_01` ( target-table: "sale" ``` -- 要满足迁移需求 #3,配置 [Block & Allow Lists](key-features.md#block--allow-table-lists) 如下: +- 要满足迁移需求 #3,配置 [Block & Allow Lists](dm-key-features.md#block--allow-table-lists) 如下: {{< copyable "" >}} @@ -131,7 +131,7 @@ CREATE TABLE `sale_01` ( tbl-name: "log_bak" ``` -- 要满足迁移需求 #4,配置 [Binlog event filter 规则](key-features.md#binlog-event-filter)如下: +- 要满足迁移需求 #4,配置 [Binlog event filter 规则](dm-key-features.md#binlog-event-filter)如下: {{< copyable "" >}} @@ -151,7 +151,7 @@ CREATE TABLE `sale_01` ( ## 迁移任务配置 -迁移任务的完整配置如下,更多详情请参阅[数据迁移任务配置向导](task-configuration-guide.md)。 
+迁移任务的完整配置如下,更多详情请参阅[数据迁移任务配置向导](dm-task-configuration-guide.md)。 {{< copyable "" >}} diff --git a/zh/usage-scenario-simple-migration.md b/zh/usage-scenario-simple-migration.md index 967b4d94a..2e87461b0 100644 --- a/zh/usage-scenario-simple-migration.md +++ b/zh/usage-scenario-simple-migration.md @@ -68,7 +68,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', ## 迁移方案 -- 为了满足[迁移要求](#迁移要求)中第一点的前三条要求,需要配置以下 [table routing 规则](key-features.md#table-routing): +- 为了满足[迁移要求](#迁移要求)中第一点的前三条要求,需要配置以下 [table routing 规则](dm-key-features.md#table-routing): {{< copyable "" >}} @@ -86,7 +86,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', target-schema: "user_south" ``` -- 为了满足[迁移要求](#迁移要求)中第二点的第一条要求,需要配置以下 [table routing 规则](key-features.md#table-routing): +- 为了满足[迁移要求](#迁移要求)中第二点的第一条要求,需要配置以下 [table routing 规则](dm-key-features.md#table-routing): {{< copyable "" >}} @@ -105,7 +105,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', target-table: "store_shenzhen" ``` -- 为了满足[迁移要求](#迁移要求)中第一点的第四条要求,需要配置以下 [binlog event filter 规则](key-features.md#binlog-event-filter): +- 为了满足[迁移要求](#迁移要求)中第一点的第四条要求,需要配置以下 [binlog event filter 规则](dm-key-features.md#binlog-event-filter): {{< copyable "" >}} @@ -123,7 +123,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', action: Ignore ``` -- 为了满足[迁移要求](#迁移要求)中第二点的第二条要求,需要配置以下 [binlog event filter 规则](key-features.md#binlog-event-filter): +- 为了满足[迁移要求](#迁移要求)中第二点的第二条要求,需要配置以下 [binlog event filter 规则](dm-key-features.md#binlog-event-filter): {{< copyable "" >}} @@ -140,7 +140,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', > > `store-filter-rule` 不同于 `log-filter-rule` 和 `user-filter-rule`。`store-filter-rule` 是针对整个 `store` 库的规则,而 `log-filter-rule` 和 `user-filter-rule` 是针对 `user` 库中 `log` 表的规则。 -- 为了满足[迁移要求](#迁移要求)中的第三点要求,需要配置以下 [Block & Allow 
Lists](key-features.md#block--allow-table-lists): +- 为了满足[迁移要求](#迁移要求)中的第三点要求,需要配置以下 [Block & Allow Lists](dm-key-features.md#block--allow-table-lists): {{< copyable "" >}} @@ -152,7 +152,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', ## 迁移任务配置 -以下是完整的迁移任务配置,更多详情请参阅 [数据迁移任务配置向导](task-configuration-guide.md)。 +以下是完整的迁移任务配置,更多详情请参阅 [数据迁移任务配置向导](dm-task-configuration-guide.md)。 {{< copyable "" >}} From 5c478e7a8ada8ef9ba1031a4e8d798294fbd61b4 Mon Sep 17 00:00:00 2001 From: qiancai Date: Tue, 21 Dec 2021 19:25:48 +0800 Subject: [PATCH 04/11] Update dm-handle-alerts.md --- en/dm-handle-alerts.md | 7 +------ 1 file changed, 1 insertion(+), 6 deletions(-) diff --git a/en/dm-handle-alerts.md b/en/dm-handle-alerts.md index 02cda1c59..b7e34d306 100644 --- a/en/dm-handle-alerts.md +++ b/en/dm-handle-alerts.md @@ -187,9 +187,4 @@ This document introduces how to deal with the alert information in DM. - Solution: - Refer to [Handle Performance Issues](dm-handle-performance-issues.md). -nit by **more than** 1 for 10 minutes, an alert is triggered. - -- Solution: - - Refer to [Handle Performance Issues](handle-performance-issues.md). + Refer to [Handle Performance Issues](handle-performance-issues.md). \ No newline at end of file From a77ff49a34c5c50daf6dc6079b44f37580411687 Mon Sep 17 00:00:00 2001 From: qiancai Date: Tue, 21 Dec 2021 19:32:25 +0800 Subject: [PATCH 05/11] Update dm-handle-alerts.md --- en/dm-handle-alerts.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/en/dm-handle-alerts.md b/en/dm-handle-alerts.md index b7e34d306..75515d6aa 100644 --- a/en/dm-handle-alerts.md +++ b/en/dm-handle-alerts.md @@ -187,4 +187,4 @@ This document introduces how to deal with the alert information in DM. - Solution: - Refer to [Handle Performance Issues](handle-performance-issues.md). \ No newline at end of file + Refer to [Handle Performance Issues](dm-handle-performance-issues.md). 
\ No newline at end of file From 9fe9d2d49108973e380ffeec70e3638ec8acb997 Mon Sep 17 00:00:00 2001 From: qiancai Date: Tue, 21 Dec 2021 19:35:56 +0800 Subject: [PATCH 06/11] Update dm-alert-rules.md --- zh/dm-alert-rules.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/zh/dm-alert-rules.md b/zh/dm-alert-rules.md index 7e5c5d7ec..312e6af8e 100644 --- a/zh/dm-alert-rules.md +++ b/zh/dm-alert-rules.md @@ -10,4 +10,4 @@ aliases: ['/docs-cn/tidb-data-migration/dev/alert-rules/','/zh/tidb-data-migrati DM 的告警规则及其对应的处理方法可参考[告警处理](dm-handle-alerts.md)。 -DM 的告警信息与监控指标均基于 Prometheus,告警规则与监控指标的对应关系可参考 [DM 监控指标](monitor-a-dm-cluster.md)。 +DM 的告警信息与监控指标均基于 Prometheus,告警规则与监控指标的对应关系可参考 [DM 监控指标](monitor-a-dm-cluster.md)。 \ No newline at end of file From beb13c58ee96a04066048b8b915e973bfe17cc99 Mon Sep 17 00:00:00 2001 From: qiancai Date: Tue, 21 Dec 2021 19:47:20 +0800 Subject: [PATCH 07/11] revert_changes --- en/TOC.md | 58 +-- en/_index.md | 10 +- en/benchmark-v1.0-ga.md | 6 +- en/benchmark-v2.0-ga.md | 6 +- ...enchmark-v5.3.0.md => benchmark-v5.3.0.md} | 7 +- ...nd-line-flags.md => command-line-flags.md} | 1 - ...-config-overview.md => config-overview.md} | 10 +- en/{dm-create-task.md => create-task.md} | 3 +- en/dm-alert-rules.md | 27 +- en/dm-daily-check.md | 2 +- en/dm-glossary.md | 10 +- en/dm-handle-alerts.md | 190 --------- en/dm-hardware-and-software-requirements.md | 2 +- en/{dm-enable-tls.md => enable-tls.md} | 1 - ...dm-error-handling.md => error-handling.md} | 12 +- ...port-config.md => export-import-config.md} | 1 - en/{dm-faq.md => faq.md} | 4 +- en/feature-expression-filter.md | 4 +- en/feature-shard-merge-pessimistic.md | 4 +- en/handle-alerts.md | 382 ++++++++++++++++++ en/handle-failed-ddl-statements.md | 2 +- ...issues.md => handle-performance-issues.md} | 3 +- en/{dm-key-features.md => key-features.md} | 6 +- en/maintain-dm-using-tiup.md | 2 +- en/{dm-manage-schema.md => manage-schema.md} | 1 - en/{dm-manage-source.md => manage-source.md} 
| 5 +- en/manually-upgrade-dm-1.0-to-2.0.md | 10 +- en/migrate-data-using-dm.md | 6 +- en/migrate-from-mysql-aurora.md | 4 +- en/{dm-open-api.md => open-api.md} | 1 - en/{dm-overview.md => overview.md} | 12 +- en/{dm-pause-task.md => pause-task.md} | 1 - ...erformance-test.md => performance-test.md} | 9 +- en/{dm-precheck.md => precheck.md} | 2 +- en/{dm-query-status.md => query-status.md} | 2 +- en/quick-create-migration-task.md | 2 +- en/quick-start-create-source.md | 4 +- en/relay-log.md | 4 +- en/{dm-resume-task.md => resume-task.md} | 1 - ...n-file.md => source-configuration-file.md} | 4 +- en/{dm-stop-task.md => stop-task.md} | 3 +- en/task-configuration-file-full.md | 10 +- en/task-configuration-file.md | 6 +- ...n-guide.md => task-configuration-guide.md} | 13 +- ...configuration.md => tune-configuration.md} | 1 - en/usage-scenario-downstream-more-columns.md | 8 +- en/usage-scenario-shard-merge.md | 10 +- en/usage-scenario-simple-migration.md | 12 +- zh/TOC.md | 56 +-- zh/_index.md | 14 +- zh/benchmark-v1.0-ga.md | 6 +- zh/benchmark-v2.0-ga.md | 6 +- ...enchmark-v5.3.0.md => benchmark-v5.3.0.md} | 7 +- ...nd-line-flags.md => command-line-flags.md} | 2 +- ...-config-overview.md => config-overview.md} | 10 +- zh/{dm-create-task.md => create-task.md} | 4 +- zh/dm-alert-rules.md | 2 +- zh/dm-daily-check.md | 2 +- zh/dm-glossary.md | 10 +- zh/dm-hardware-and-software-requirements.md | 2 +- zh/{dm-enable-tls.md => enable-tls.md} | 1 - ...dm-error-handling.md => error-handling.md} | 12 +- ...port-config.md => export-import-config.md} | 1 - zh/{dm-faq.md => faq.md} | 4 +- zh/feature-expression-filter.md | 4 +- zh/feature-shard-merge-pessimistic.md | 4 +- zh/{dm-handle-alerts.md => handle-alerts.md} | 26 +- zh/handle-failed-ddl-statements.md | 2 +- ...issues.md => handle-performance-issues.md} | 4 +- zh/{dm-key-features.md => key-features.md} | 6 +- zh/maintain-dm-using-tiup.md | 2 +- zh/{dm-manage-schema.md => manage-schema.md} | 1 - zh/{dm-manage-source.md => 
manage-source.md} | 6 +- zh/manually-upgrade-dm-1.0-to-2.0.md | 10 +- zh/migrate-data-using-dm.md | 6 +- zh/migrate-from-mysql-aurora.md | 4 +- zh/{dm-open-api.md => open-api.md} | 1 - zh/{dm-overview.md => overview.md} | 12 +- zh/{dm-pause-task.md => pause-task.md} | 2 +- ...erformance-test.md => performance-test.md} | 10 +- zh/{dm-precheck.md => precheck.md} | 2 +- zh/{dm-query-status.md => query-status.md} | 2 +- zh/quick-create-migration-task.md | 2 +- zh/quick-start-create-source.md | 4 +- zh/relay-log.md | 6 +- zh/{dm-resume-task.md => resume-task.md} | 2 +- zh/shard-merge-best-practices.md | 2 +- ...n-file.md => source-configuration-file.md} | 4 +- zh/{dm-stop-task.md => stop-task.md} | 4 +- zh/task-configuration-file-full.md | 10 +- zh/task-configuration-file.md | 6 +- ...n-guide.md => task-configuration-guide.md} | 13 +- ...configuration.md => tune-configuration.md} | 2 +- zh/usage-scenario-downstream-more-columns.md | 8 +- zh/usage-scenario-shard-merge.md | 10 +- zh/usage-scenario-simple-migration.md | 12 +- 96 files changed, 691 insertions(+), 521 deletions(-) rename en/{dm-benchmark-v5.3.0.md => benchmark-v5.3.0.md} (97%) rename en/{dm-command-line-flags.md => command-line-flags.md} (98%) rename en/{dm-config-overview.md => config-overview.md} (80%) rename en/{dm-create-task.md => create-task.md} (93%) delete mode 100644 en/dm-handle-alerts.md rename en/{dm-enable-tls.md => enable-tls.md} (98%) rename en/{dm-error-handling.md => error-handling.md} (97%) rename en/{dm-export-import-config.md => export-import-config.md} (97%) rename en/{dm-faq.md => faq.md} (98%) create mode 100644 en/handle-alerts.md rename en/{dm-handle-performance-issues.md => handle-performance-issues.md} (97%) rename en/{dm-key-features.md => key-features.md} (98%) rename en/{dm-manage-schema.md => manage-schema.md} (99%) rename en/{dm-manage-source.md => manage-source.md} (96%) rename en/{dm-open-api.md => open-api.md} (99%) rename en/{dm-overview.md => overview.md} (80%) rename 
en/{dm-pause-task.md => pause-task.md} (97%) rename en/{dm-performance-test.md => performance-test.md} (94%) rename en/{dm-precheck.md => precheck.md} (97%) rename en/{dm-query-status.md => query-status.md} (99%) rename en/{dm-resume-task.md => resume-task.md} (96%) rename en/{dm-source-configuration-file.md => source-configuration-file.md} (97%) rename en/{dm-stop-task.md => stop-task.md} (92%) rename en/{dm-task-configuration-guide.md => task-configuration-guide.md} (97%) rename en/{dm-tune-configuration.md => tune-configuration.md} (98%) rename zh/{dm-benchmark-v5.3.0.md => benchmark-v5.3.0.md} (93%) rename zh/{dm-command-line-flags.md => command-line-flags.md} (98%) rename zh/{dm-config-overview.md => config-overview.md} (74%) rename zh/{dm-create-task.md => create-task.md} (91%) rename zh/{dm-enable-tls.md => enable-tls.md} (98%) rename zh/{dm-error-handling.md => error-handling.md} (97%) rename zh/{dm-export-import-config.md => export-import-config.md} (96%) rename zh/{dm-faq.md => faq.md} (98%) rename zh/{dm-handle-alerts.md => handle-alerts.md} (75%) rename zh/{dm-handle-performance-issues.md => handle-performance-issues.md} (97%) rename zh/{dm-key-features.md => key-features.md} (98%) rename zh/{dm-manage-schema.md => manage-schema.md} (99%) rename zh/{dm-manage-source.md => manage-source.md} (95%) rename zh/{dm-open-api.md => open-api.md} (99%) rename zh/{dm-overview.md => overview.md} (82%) rename zh/{dm-pause-task.md => pause-task.md} (95%) rename zh/{dm-performance-test.md => performance-test.md} (93%) rename zh/{dm-precheck.md => precheck.md} (97%) rename zh/{dm-query-status.md => query-status.md} (99%) rename zh/{dm-resume-task.md => resume-task.md} (92%) rename zh/{dm-source-configuration-file.md => source-configuration-file.md} (97%) rename zh/{dm-stop-task.md => stop-task.md} (89%) rename zh/{dm-task-configuration-guide.md => task-configuration-guide.md} (95%) rename zh/{dm-tune-configuration.md => tune-configuration.md} (98%) diff --git 
a/en/TOC.md b/en/TOC.md index 8f65b68eb..f33d01a29 100644 --- a/en/TOC.md +++ b/en/TOC.md @@ -2,12 +2,12 @@ - About DM - - [DM Overview](dm-overview.md) + - [DM Overview](overview.md) - [DM 5.3 Release Notes](releases/5.3.0.md) - Basic Features - - [Table Routing](dm-key-features.md#table-routing) - - [Block and Allow Lists](dm-key-features.md#block-and-allow-table-lists) - - [Binlog Event Filter](dm-key-features.md#binlog-event-filter) + - [Table Routing](key-features.md#table-routing) + - [Block and Allow Lists](key-features.md#block-and-allow-table-lists) + - [Binlog Event Filter](key-features.md#binlog-event-filter) - Advanced Features - Merge and Migrate Data from Sharded Tables - [Overview](feature-shard-merge.md) @@ -16,7 +16,7 @@ - [Migrate from MySQL Databases that Use GH-ost/PT-osc](feature-online-ddl.md) - [Filter Certain Row Changes Using SQL Expressions](feature-expression-filter.md) - [DM Architecture](dm-arch.md) - - [Benchmarks](dm-benchmark-v5.3.0.md) + - [Benchmarks](benchmark-v5.3.0.md) - Quick Start - [Quick Start](quick-start-with-dm.md) - [Deploy a DM cluster Using TiUP](deploy-a-dm-cluster-using-tiup.md) @@ -35,56 +35,56 @@ - [Use Binary](deploy-a-dm-cluster-using-binary.md) - [Use Kubernetes](https://docs.pingcap.com/tidb-in-kubernetes/dev/deploy-tidb-dm) - [Migrate Data Using DM](migrate-data-using-dm.md) - - [Test DM Performance](dm-performance-test.md) + - [Test DM Performance](performance-test.md) - Maintain - Tools - [Maintain DM Clusters Using TiUP (Recommended)](maintain-dm-using-tiup.md) - [Maintain DM Clusters Using dmctl](dmctl-introduction.md) - - [Maintain DM Clusters Using OpenAPI](dm-open-api.md) + - [Maintain DM Clusters Using OpenAPI](open-api.md) - Cluster Upgrade - [Manually Upgrade from v1.0.x to v2.0+](manually-upgrade-dm-1.0-to-2.0.md) - - [Manage Data Source](dm-manage-source.md) + - [Manage Data Source](manage-source.md) - Manage a Data Migration Task - - [Task Configuration Guide](dm-task-configuration-guide.md) - - 
[Precheck a Task](dm-precheck.md) - - [Create a Task](dm-create-task.md) - - [Query Status](dm-query-status.md) - - [Pause a Task](dm-pause-task.md) - - [Resume a Task](dm-resume-task.md) - - [Stop a Task](dm-stop-task.md) - - [Export and Import Data Sources and Task Configuration of Clusters](dm-export-import-config.md) + - [Task Configuration Guide](task-configuration-guide.md) + - [Precheck a Task](precheck.md) + - [Create a Task](create-task.md) + - [Query Status](query-status.md) + - [Pause a Task](pause-task.md) + - [Resume a Task](resume-task.md) + - [Stop a Task](stop-task.md) + - [Export and Import Data Sources and Task Configuration of Clusters](export-import-config.md) - [Handle Failed DDL Statements](handle-failed-ddl-statements.md) - [Manually Handle Sharding DDL Lock](manually-handling-sharding-ddl-locks.md) - - [Manage Schemas of Tables to be Migrated](dm-manage-schema.md) - - [Handle Alerts](dm-handle-alerts.md) + - [Manage Schemas of Tables to be Migrated](manage-schema.md) + - [Handle Alerts](handle-alerts.md) - [Daily Check](dm-daily-check.md) - Usage Scenarios - [Migrate from Aurora to TiDB](migrate-from-mysql-aurora.md) - [Migrate when TiDB Tables Have More Columns](usage-scenario-downstream-more-columns.md) - [Switch the MySQL Instance to Be Migrated](usage-scenario-master-slave-switch.md) - Troubleshoot - - [Handle Errors](dm-error-handling.md) - - [Handle Performance Issues](dm-handle-performance-issues.md) + - [Handle Errors](error-handling.md) + - [Handle Performance Issues](handle-performance-issues.md) - Performance Tuning - - [Optimize Configuration](dm-tune-configuration.md) + - [Optimize Configuration](tune-configuration.md) - Reference - Architecture - - [DM Architecture Overview](dm-overview.md) + - [DM Architecture Overview](overview.md) - [DM-worker](dm-worker-intro.md) - - [Command-line Flags](dm-command-line-flags.md) + - [Command-line Flags](command-line-flags.md) - Configuration - - [Overview](dm-config-overview.md) + - 
[Overview](config-overview.md) - [DM-master Configuration](dm-master-configuration-file.md) - [DM-worker Configuration](dm-worker-configuration-file.md) - - [Upstream Database Configuration](dm-source-configuration-file.md) - - [Data Migration Task Configuration](dm-task-configuration-guide.md) + - [Upstream Database Configuration](source-configuration-file.md) + - [Data Migration Task Configuration](task-configuration-guide.md) - Secure - - [Enable TLS for DM Connections](dm-enable-tls.md) + - [Enable TLS for DM Connections](enable-tls.md) - [Generate Self-signed Certificates](dm-generate-self-signed-certificates.md) - [Monitoring Metrics](monitor-a-dm-cluster.md) - [Alert Rules](dm-alert-rules.md) - - [Error Codes](dm-error-handling.md#handle-common-errors) -- [FAQ](dm-faq.md) + - [Error Codes](error-handling.md#handle-common-errors) +- [FAQ](faq.md) - [Glossary](dm-glossary.md) - Release Notes - v5.3 diff --git a/en/_index.md b/en/_index.md index 759fff170..917a4cd16 100644 --- a/en/_index.md +++ b/en/_index.md @@ -17,7 +17,7 @@ aliases: ['/docs/tidb-data-migration/dev/'] About TiDB Data Migration -- [What is DM?](dm-overview.md) +- [What is DM?](overview.md) - [DM Architecture](dm-arch.md) - [Performance](benchmark-v2.0-ga.md) @@ -41,7 +41,7 @@ aliases: ['/docs/tidb-data-migration/dev/'] - [Deploy DM Using TiUP Offline](deploy-a-dm-cluster-using-tiup-offline.md) - [Deploy DM Using Binary](deploy-a-dm-cluster-using-binary.md) - [Use DM to Migrate Data](migrate-data-using-dm.md) -- [DM Performance Test](dm-performance-test.md) +- [DM Performance Test](performance-test.md) @@ -52,7 +52,7 @@ aliases: ['/docs/tidb-data-migration/dev/'] - [Maintain DM Clusters Using dmctl](dmctl-introduction.md) - [Upgrade DM](manually-upgrade-dm-1.0-to-2.0.md) - [Manually Handle Sharding DDL Locks](manually-handling-sharding-ddl-locks.md) -- [Handle Alerts](dm-handle-alerts.md) +- [Handle Alerts](handle-alerts.md) - [Daily Check](dm-daily-check.md) @@ -69,9 +69,9 @@ aliases: 
['/docs/tidb-data-migration/dev/'] Reference - [DM Architecture](dm-arch.md) -- [Configuration File Overview](dm-config-overview.md) +- [Configuration File Overview](config-overview.md) - [Monitoring Metrics and Alerts](monitor-a-dm-cluster.md) -- [Error Handling](dm-error-handling.md) +- [Error Handling](error-handling.md) diff --git a/en/benchmark-v1.0-ga.md b/en/benchmark-v1.0-ga.md index c002b1cea..100fbfff5 100644 --- a/en/benchmark-v1.0-ga.md +++ b/en/benchmark-v1.0-ga.md @@ -60,11 +60,11 @@ Others: ## Test scenario -You can use a simple data migration flow, that is, MySQL1 (172.16.4.40) -> DM-worker -> TiDB (172.16.4.41), to do the test. For detailed test scenario description, see [performance test](dm-performance-test.md). +You can use a simple data migration flow, that is, MySQL1 (172.16.4.40) -> DM-worker -> TiDB (172.16.4.41), to do the test. For detailed test scenario description, see [performance test](performance-test.md). ### Full import benchmark case -For details, see [Full Import Benchmark Case](dm-performance-test.md#full-import-benchmark-case). +For details, see [Full Import Benchmark Case](performance-test.md#full-import-benchmark-case). #### Full import benchmark result @@ -105,7 +105,7 @@ Full import data size in this benchmark case is 3.78 GB, load unit pool size use ### Incremental replication benchmark case -For details about the test method, see [Incremental Replication Benchmark Case](dm-performance-test.md#incremental-replication-benchmark-case). +For details about the test method, see [Incremental Replication Benchmark Case](performance-test.md#incremental-replication-benchmark-case). 
#### Benchmark result for incremental replication diff --git a/en/benchmark-v2.0-ga.md b/en/benchmark-v2.0-ga.md index 7cca6a9cc..3d163f404 100644 --- a/en/benchmark-v2.0-ga.md +++ b/en/benchmark-v2.0-ga.md @@ -59,11 +59,11 @@ Others: ## Test scenario -You can use a simple data migration flow, that is, MySQL1 (172.16.5.33) -> DM-worker(172.16.5.32) -> TiDB (172.16.5.34), to do the test. For detailed test scenario description, see [performance test](dm-performance-test.md). +You can use a simple data migration flow, that is, MySQL1 (172.16.5.33) -> DM-worker(172.16.5.32) -> TiDB (172.16.5.34), to do the test. For detailed test scenario description, see [performance test](performance-test.md). ### Full import benchmark case -For detailed full import test method, see [Full Import Benchmark Case](dm-performance-test.md#full-import-benchmark-case). +For detailed full import test method, see [Full Import Benchmark Case](performance-test.md#full-import-benchmark-case). #### Full import benchmark results @@ -105,7 +105,7 @@ In this test, the full amount of imported data is 3.78 GB and the `pool-size` of ### Incremental replication benchmark case -For detailed incremental replication test method, see [Incremental Replication Benchmark Case](dm-performance-test.md#incremental-replication-benchmark-case). +For detailed incremental replication test method, see [Incremental Replication Benchmark Case](performance-test.md#incremental-replication-benchmark-case). #### Incremental replication benchmark result diff --git a/en/dm-benchmark-v5.3.0.md b/en/benchmark-v5.3.0.md similarity index 97% rename from en/dm-benchmark-v5.3.0.md rename to en/benchmark-v5.3.0.md index 9db15b8ac..f5e098d48 100644 --- a/en/dm-benchmark-v5.3.0.md +++ b/en/benchmark-v5.3.0.md @@ -1,7 +1,6 @@ --- title: DM 5.3.0 Benchmark Report summary: Learn about the performance of 5.3.0. 
-aliases: ['/tidb-data-migration/dev/benchmark-v5.3.0.md/] --- # DM 5.3.0 Benchmark Report @@ -54,11 +53,11 @@ Others: ## Test scenario -You can use a simple data migration flow, that is, MySQL1 (172.16.6.1) -> DM-worker(172.16.6.2) -> TiDB(load balance) (172.16.6.4), to do the test. For detailed test scenario description, see [performance test](dm-performance-test.md). +You can use a simple data migration flow, that is, MySQL1 (172.16.6.1) -> DM-worker(172.16.6.2) -> TiDB(load balance) (172.16.6.4), to do the test. For detailed test scenario description, see [performance test](performance-test.md). ### Full import benchmark case -For detailed full import test method, see [Full Import Benchmark Case](dm-performance-test.md#full-import-benchmark-case). +For detailed full import test method, see [Full Import Benchmark Case](performance-test.md#full-import-benchmark-case). #### Full import benchmark results @@ -100,7 +99,7 @@ In this test, the full amount of imported data is 3.78 GB and the `pool-size` of ### Incremental replication benchmark case -For detailed incremental replication test method, see [Incremental Replication Benchmark Case](dm-performance-test.md#incremental-replication-benchmark-case). +For detailed incremental replication test method, see [Incremental Replication Benchmark Case](performance-test.md#incremental-replication-benchmark-case). #### Incremental replication benchmark result diff --git a/en/dm-command-line-flags.md b/en/command-line-flags.md similarity index 98% rename from en/dm-command-line-flags.md rename to en/command-line-flags.md index 854db8e9b..2bbd720b2 100644 --- a/en/dm-command-line-flags.md +++ b/en/command-line-flags.md @@ -1,7 +1,6 @@ --- title: Command-line Flags summary: Learn about the command-line flags in DM. 
-aliases: ['/tidb-data-migration/dev/command-line-flags.md/] --- # Command-line Flags diff --git a/en/dm-config-overview.md b/en/config-overview.md similarity index 80% rename from en/dm-config-overview.md rename to en/config-overview.md index 079d94f49..d42d477c9 100644 --- a/en/dm-config-overview.md +++ b/en/config-overview.md @@ -1,7 +1,7 @@ --- title: Data Migration Configuration File Overview summary: This document gives an overview of Data Migration configuration files. -aliases: ['/docs/tidb-data-migration/dev/config-overview/','/tidb-data-migration/dev/config-overview.md/] +aliases: ['/docs/tidb-data-migration/dev/config-overview/'] --- # Data Migration Configuration File Overview @@ -12,7 +12,7 @@ This document gives an overview of configuration files of DM (Data Migration). - `dm-master.toml`: The configuration file of running the DM-master process, including the topology information and the logs of the DM-master. For more details, refer to [DM-master Configuration File](dm-master-configuration-file.md). - `dm-worker.toml`: The configuration file of running the DM-worker process, including the topology information and the logs of the DM-worker. For more details, refer to [DM-worker Configuration File](dm-worker-configuration-file.md). -- `source.yaml`: The configuration of the upstream database such as MySQL and MariaDB. For more details, refer to [Upstream Database Configuration File](dm-source-configuration-file.md). +- `source.yaml`: The configuration of the upstream database such as MySQL and MariaDB. For more details, refer to [Upstream Database Configuration File](source-configuration-file.md). ## DM migration task configuration @@ -20,9 +20,9 @@ This document gives an overview of configuration files of DM (Data Migration). You can take the following steps to create a data migration task: -1. [Load the data source configuration into the DM cluster using dmctl](dm-manage-source.md#operate-data-source). -2. 
Refer to the description in the [Task Configuration Guide](dm-task-configuration-guide.md) and create the configuration file `your_task.yaml`. -3. [Create the data migration task using dmctl](dm-create-task.md). +1. [Load the data source configuration into the DM cluster using dmctl](manage-source.md#operate-data-source). +2. Refer to the description in the [Task Configuration Guide](task-configuration-guide.md) and create the configuration file `your_task.yaml`. +3. [Create the data migration task using dmctl](create-task.md). ### Important concepts diff --git a/en/dm-create-task.md b/en/create-task.md similarity index 93% rename from en/dm-create-task.md rename to en/create-task.md index 02dca4db7..f36990664 100644 --- a/en/dm-create-task.md +++ b/en/create-task.md @@ -1,12 +1,11 @@ --- title: Create a Data Migration Task summary: Learn how to create a data migration task in TiDB Data Migration. -aliases: ['/tidb-data-migration/dev/create-task.md/] --- # Create a Data Migration Task -You can use the `start-task` command to create a data migration task. When the data migration task is started, DM [prechecks privileges and configurations](dm-precheck.md). +You can use the `start-task` command to create a data migration task. When the data migration task is started, DM [prechecks privileges and configurations](precheck.md). {{< copyable "" >}} diff --git a/en/dm-alert-rules.md b/en/dm-alert-rules.md index ad0f3b905..1006d63a0 100644 --- a/en/dm-alert-rules.md +++ b/en/dm-alert-rules.md @@ -1,14 +1,13 @@ ---- -title: DM Alert Information -summary: Introduce the alert information of DM. -aliases: ['/tidb-data-migration/dev/alert-rules/'] ---- - -# DM Alert Information - -The [alert system](migrate-data-using-dm.md#step-8-monitor-the-task-and-check-logs) is deployed by default when you deploy a DM cluster using TiUP. - -For more information about DM alert rules and the solutions, refer to [handle alerts](dm-handle-alerts.md). 
- -Both DM alert information and monitoring metrics are based on Prometheus. For more information about their relationship, refer to [DM monitoring metrics](monitor-a-dm-cluster.md). -ter.md). +--- +title: DM Alert Information +summary: Introduce the alert information of DM. +aliases: ['/tidb-data-migration/dev/alert-rules/'] +--- + +# DM Alert Information + +The [alert system](migrate-data-using-dm.md#step-8-monitor-the-task-and-check-logs) is deployed by default when you deploy a DM cluster using TiUP. + +For more information about DM alert rules and the solutions, refer to [handle alerts](handle-alerts.md). + +Both DM alert information and monitoring metrics are based on Prometheus. For more information about their relationship, refer to [DM monitoring metrics](monitor-a-dm-cluster.md). diff --git a/en/dm-daily-check.md b/en/dm-daily-check.md index ef8954666..5da9af8d5 100644 --- a/en/dm-daily-check.md +++ b/en/dm-daily-check.md @@ -8,7 +8,7 @@ aliases: ['/docs/tidb-data-migration/dev/daily-check/','/tidb-data-migration/dev This document summarizes how to perform a daily check on TiDB Data Migration (DM). -+ Method 1: Execute the `query-status` command to check the running status of the task and the error output (if any). For details, see [Query Status](dm-query-status.md). ++ Method 1: Execute the `query-status` command to check the running status of the task and the error output (if any). For details, see [Query Status](query-status.md). + Method 2: If Prometheus and Grafana are correctly deployed when you deploy the DM cluster using TiUP, you can view DM monitoring metrics in Grafana. For example, suppose that the Grafana's address is `172.16.10.71`, go to , enter the Grafana dashboard, and select the DM Dashboard to check monitoring metrics of DM. For more information of these metrics, see [DM Monitoring Metrics](monitor-a-dm-cluster.md). 
diff --git a/en/dm-glossary.md b/en/dm-glossary.md index 67a0066a4..c6dfb7bd3 100644 --- a/en/dm-glossary.md +++ b/en/dm-glossary.md @@ -20,7 +20,7 @@ Binlog events are information about data modification made to a MySQL or MariaDB ### Binlog event filter -[Binlog event filter](dm-key-features.md#binlog-event-filter) is a more fine-grained filtering feature than the block and allow lists filtering rule. Refer to [binlog event filter](dm-overview.md#binlog-event-filtering) for details. +[Binlog event filter](key-features.md#binlog-event-filter) is a more fine-grained filtering feature than the block and allow lists filtering rule. Refer to [binlog event filter](overview.md#binlog-event-filtering) for details. ### Binlog position @@ -32,7 +32,7 @@ Binlog replication processing unit is the processing unit used in DM-worker to r ### Block & allow table list -Block & allow table list is the feature that filters or only migrates all operations of some databases or some tables. Refer to [block & allow table lists](dm-overview.md#block-and-allow-lists-migration-at-the-schema-and-table-levels) for details. This feature is similar to [MySQL Replication Filtering](https://dev.mysql.com/doc/refman/5.6/en/replication-rules.html) and [MariaDB Replication Filters](https://mariadb.com/kb/en/replication-filters/). +Block & allow table list is the feature that filters or only migrates all operations of some databases or some tables. Refer to [block & allow table lists](overview.md#block-and-allow-lists-migration-at-the-schema-and-table-levels) for details. This feature is similar to [MySQL Replication Filtering](https://dev.mysql.com/doc/refman/5.6/en/replication-rules.html) and [MariaDB Replication Filters](https://mariadb.com/kb/en/replication-filters/). ## C @@ -120,13 +120,13 @@ The subtask is a part of a data migration task that is running on each DM-worker ### Subtask status -The subtask status is the status of a data migration subtask. 
The current status options include `New`, `Running`, `Paused`, `Stopped`, and `Finished`. Refer to [subtask status](dm-query-status.md#subtask-status) for more details about the status of a data migration task or subtask. +The subtask status is the status of a data migration subtask. The current status options include `New`, `Running`, `Paused`, `Stopped`, and `Finished`. Refer to [subtask status](query-status.md#subtask-status) for more details about the status of a data migration task or subtask. ## T ### Table routing -The table routing feature enables DM to migrate a certain table of the upstream MySQL or MariaDB instance to the specified table in the downstream, which can be used to merge and migrate sharded tables. Refer to [table routing](dm-key-features.md#table-routing) for details. +The table routing feature enables DM to migrate a certain table of the upstream MySQL or MariaDB instance to the specified table in the downstream, which can be used to merge and migrate sharded tables. Refer to [table routing](key-features.md#table-routing) for details. ### Task @@ -134,4 +134,4 @@ The data migration task, which is started after you successfully execute a `star ### Task status -The task status refers to the status of a data migration task. The task status depends on the statuses of all its subtasks. Refer to [subtask status](dm-query-status.md#subtask-status) for details. +The task status refers to the status of a data migration task. The task status depends on the statuses of all its subtasks. Refer to [subtask status](query-status.md#subtask-status) for details. diff --git a/en/dm-handle-alerts.md b/en/dm-handle-alerts.md deleted file mode 100644 index 75515d6aa..000000000 --- a/en/dm-handle-alerts.md +++ /dev/null @@ -1,190 +0,0 @@ ---- -title: Handle Alerts -summary: Understand how to deal with the alert information in DM. 
-aliases: ['/tidb-data-migration/dev/handle-alerts.md/] ---- - -# Handle Alerts - -This document introduces how to deal with the alert information in DM. - -## Alerts related to high availability - -### `DM_master_all_down` - -- Description: - - If all DM-master nodes are offline, this alert is triggered. - -- Solution: - - You can take the following steps to handle the alert: - - 1. Check the environment of the cluster. - 2. Check the logs of all DM-master nodes for troubleshooting. - -### `DM_worker_offline` - -- Description: - - If a DM-worker node is offline for more than one hour, this alert is triggered. In a high-availability architecture, this alert might not directly interrupt the task but increases the risk of interruption. - -- Solution: - - You can take the following steps to handle the alert: - - 1. View the working status of the corresponding DM-worker node. - 2. Check whether the node is connected. - 3. Troubleshoot errors through logs. - -### `DM_DDL_error` - -- Description: - - This error occurs when DM is processing the sharding DDL operations. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -### `DM_pending_DDL` - -- Description: - - If a sharding DDL operation is pending for more than one hour, this alert is triggered. - -- Solution: - - In some scenarios, the pending sharding DDL operation might be what users expect. Otherwise, refer to [Handle Sharding DDL Locks Manually in DM](manually-handling-sharding-ddl-locks.md) for solution. - -## Alert rules related to task status - -### `DM_task_state` - -- Description: - - When a sub-task of DM-worker is in the `Paused` state for over 20 minutes, an alert is triggered. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). 
- -## Alert rules related to relay log - -### `DM_relay_process_exits_with_error` - -- Description: - - When the relay log processing unit encounters an error, this unit moves to `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -### `DM_remain_storage_of_relay_log` - -- Description: - - When the free space of the disk where the relay log is located is less than 10G, an alert is triggered. - -- Solutions: - - You can take the following methods to handle the alert: - - - Delete unwanted data manually to increase free disk space. - - Reconfigure the [automatic data purge strategy of the relay log](relay-log.md#automatic-data-purge) or [purge data manually](relay-log.md#manual-data-purge). - - Execute the command `pause-relay` to pause the relay log pulling process. After there is enough free disk space, resume the process by running the command `resume-relay`. Note that you must not purge upstream binlog files that have not been pulled after the relay log pulling process is paused. - -### `DM_relay_log_data_corruption` - -- Description: - - When the relay log processing unit validates the binlog event read from the upstream and detects abnormal checksum information, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -### `DM_fail_to_read_binlog_from_master` - -- Description: - - If an error occurs when the relay log processing unit tries to read the binlog event from the upstream, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). 
- -### `DM_fail_to_write_relay_log` - -- Description: - - If an error occurs when the relay log processing unit tries to write the binlog event into the relay log file, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -### `DM_binlog_file_gap_between_master_relay` - -- Description: - - When the number of the binlog files in the current upstream MySQL/MariaDB exceeds that of the latest binlog files pulled by the relay log processing unit by **more than** 1 for 10 minutes, and an alert is triggered. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -## Alert rules related to Dump/Load - -### `DM_dump_process_exists_with_error` - -- Description: - - When the Dump processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -### `DM_load_process_exists_with_error` - -- Description: - - When the Load processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -## Alert rules related to binlog replication - -### `DM_sync_process_exists_with_error` - -- Description: - - When the binlog replication processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -### `DM_binlog_file_gap_between_master_syncer` - -- Description: - - When the number of the binlog files in the current upstream MySQL/MariaDB exceeds that of the latest binlog files processed by the relay log processing unit by **more than** 1 for 10 minutes, an alert is triggered. 
- -- Solution: - - Refer to [Handle Performance Issues](dm-handle-performance-issues.md). - -### `DM_binlog_file_gap_between_relay_syncer` - -- Description: - - When the number of the binlog files in the current relay log processing unit exceeds that of the latest binlog files processed by the binlog replication processing unit by **more than** 1 for 10 minutes, an alert is triggered. - -- Solution: - - Refer to [Handle Performance Issues](dm-handle-performance-issues.md). \ No newline at end of file diff --git a/en/dm-hardware-and-software-requirements.md b/en/dm-hardware-and-software-requirements.md index 74b0b407a..d039e50a4 100644 --- a/en/dm-hardware-and-software-requirements.md +++ b/en/dm-hardware-and-software-requirements.md @@ -46,4 +46,4 @@ DM can be deployed and run on a 64-bit generic hardware server platform (Intel x > **Note:** > > - In the production environment, it is not recommended to deploy and run DM-master and DM-worker on the same server, because when DM-worker writes data to disks, it might interfere with the use of disks by DM-master's high availability component. -> - If a performance issue occurs, you are recommended to modify the task configuration file according to the [Optimize Configuration of DM](dm-tune-configuration.md) document. If the performance is not effectively optimized by tuning the configuration file, you can try to upgrade the hardware of your server. +> - If a performance issue occurs, you are recommended to modify the task configuration file according to the [Optimize Configuration of DM](tune-configuration.md) document. If the performance is not effectively optimized by tuning the configuration file, you can try to upgrade the hardware of your server. 
diff --git a/en/dm-enable-tls.md b/en/enable-tls.md similarity index 98% rename from en/dm-enable-tls.md rename to en/enable-tls.md index e4cc52818..4137e4247 100644 --- a/en/dm-enable-tls.md +++ b/en/enable-tls.md @@ -1,7 +1,6 @@ --- title: Enable TLS for DM Connections summary: Learn how to enable TLS for DM connections. -aliases: ['/tidb-data-migration/dev/enable-tls.md/] --- # Enable TLS for DM Connections diff --git a/en/dm-error-handling.md b/en/error-handling.md similarity index 97% rename from en/dm-error-handling.md rename to en/error-handling.md index 1e3879170..7af062fa8 100644 --- a/en/dm-error-handling.md +++ b/en/error-handling.md @@ -1,7 +1,7 @@ --- title: Handle Errors summary: Learn about the error system and how to handle common errors when you use DM. -aliases: ['/docs/tidb-data-migration/dev/error-handling/','/docs/tidb-data-migration/dev/troubleshoot-dm/','/docs/tidb-data-migration/dev/error-system/','/tidb-data-migration/dev/error-system/','/tidb-data-migration/dev/error-handling.md/] +aliases: ['/docs/tidb-data-migration/dev/error-handling/','/docs/tidb-data-migration/dev/troubleshoot-dm/','/docs/tidb-data-migration/dev/error-system/','/tidb-data-migration/dev/error-system/'] --- # Handle Errors @@ -90,7 +90,7 @@ If you encounter an error while running DM, take the following steps to troubles resume-task ${task name} ``` -However, you need to reset the data migration task in some cases. For details, refer to [Reset the Data Migration Task](dm-faq.md#how-to-reset-the-data-migration-task). +However, you need to reset the data migration task in some cases. For details, refer to [Reset the Data Migration Task](faq.md#how-to-reset-the-data-migration-task). ## Handle common errors @@ -102,8 +102,8 @@ However, you need to reset the data migration task in some cases. For details, r | `code=10005` | Occurs when performing the `QUERY` type SQL statements. 
| | | `code=10006` | Occurs when performing the `EXECUTE` type SQL statements, including DDL statements and DML statements of the `INSERT`, `UPDATE`or `DELETE` type. For more detailed error information, check the error message which usually includes the error code and error information returned for database operations. | | -| `code=11006` | Occurs when the built-in parser of DM parses the incompatible DDL statements. | Refer to [Data Migration - incompatible DDL statements](dm-faq.md#how-to-handle-incompatible-ddl-statements) for solution. | -| `code=20010` | Occurs when decrypting the database password that is provided in task configuration. | Check whether the downstream database password provided in the configuration task is [correctly encrypted using dmctl](dm-manage-source.md#encrypt-the-database-password). | +| `code=11006` | Occurs when the built-in parser of DM parses the incompatible DDL statements. | Refer to [Data Migration - incompatible DDL statements](faq.md#how-to-handle-incompatible-ddl-statements) for solution. | +| `code=20010` | Occurs when decrypting the database password that is provided in task configuration. | Check whether the downstream database password provided in the configuration task is [correctly encrypted using dmctl](manage-source.md#encrypt-the-database-password). | | `code=26002` | The task check fails to establish database connection. For more detailed error information, check the error message which usually includes the error code and error information returned for database operations. | Check whether the machine where DM-master is located has permission to access the upstream. | | `code=32001` | Abnormal dump processing unit | If the error message contains `mydumper: argument list too long.`, configure the table to be exported by manually adding the `--regex` regular expression in the Mydumper argument `extra-args` in the `task.yaml` file according to the block-allow list. 
For example, to export all tables named `hello`, add `--regex '.*\\.hello$'`; to export all tables, add `--regex '.*'`. | | `code=38008` | An error occurs in the gRPC communication among DM components. | Check `class`. Find out the error occurs in the interaction of which components. Determine the type of communication error. If the error occurs when establishing gRPC connection, check whether the communication server is working normally. | @@ -179,9 +179,9 @@ For binlog replication processing units, manually recover migration using the fo ### `Access denied for user 'root'@'172.31.43.27' (using password: YES)` shows when you query the task or check the log -For database related passwords in all the DM configuration files, it is recommended to use the passwords encrypted by `dmctl`. If a database password is empty, it is unnecessary to encrypt it. For how to encrypt the plaintext password, see [Encrypt the database password using dmctl](dm-manage-source.md#encrypt-the-database-password). +For database related passwords in all the DM configuration files, it is recommended to use the passwords encrypted by `dmctl`. If a database password is empty, it is unnecessary to encrypt it. For how to encrypt the plaintext password, see [Encrypt the database password using dmctl](manage-source.md#encrypt-the-database-password). -In addition, the user of the upstream and downstream databases must have the corresponding read and write privileges. Data Migration also [prechecks the corresponding privileges automatically](dm-precheck.md) while starting the data migration task. +In addition, the user of the upstream and downstream databases must have the corresponding read and write privileges. Data Migration also [prechecks the corresponding privileges automatically](precheck.md) while starting the data migration task. ### The `load` processing unit reports the error `packet for query is too large. 
Try adjusting the 'max_allowed_packet' variable` diff --git a/en/dm-export-import-config.md b/en/export-import-config.md similarity index 97% rename from en/dm-export-import-config.md rename to en/export-import-config.md index 0aecc7b80..17aa83607 100644 --- a/en/dm-export-import-config.md +++ b/en/export-import-config.md @@ -1,7 +1,6 @@ --- title: Export and Import Data Sources and Task Configuration of Clusters summary: Learn how to export and import data sources and task configuration of clusters when you use DM. -aliases: ['/tidb-data-migration/dev/export-import-config.md/] --- # Export and Import Data Sources and Task Configuration of Clusters diff --git a/en/dm-faq.md b/en/faq.md similarity index 98% rename from en/dm-faq.md rename to en/faq.md index 0250a7031..40f5d8f26 100644 --- a/en/dm-faq.md +++ b/en/faq.md @@ -1,7 +1,7 @@ --- title: TiDB Data Migration FAQ summary: Learn about frequently asked questions (FAQs) about TiDB Data Migration (DM). -aliases: ['/docs/tidb-data-migration/dev/faq/','/tidb-data-migration/dev/faq.md/] +aliases: ['/docs/tidb-data-migration/dev/faq/'] --- # TiDB Data Migration FAQ @@ -188,7 +188,7 @@ Sometimes, the error message contains the `parse statement` information, for exa if the DDL is not needed, you can use a filter rule with \"*\" schema-pattern to ignore it.\n\t : parse statement: line 1 column 11 near \"EVENT `event_del_big_table` \r\nDISABLE\" %!!(MISSING)(EXTRA string=ALTER EVENT `event_del_big_table` \r\nDISABLE ``` -The reason for this type of error is that the TiDB parser cannot parse DDL statements sent by the upstream, such as `ALTER EVENT`, so `sql-skip` does not take effect as expected. You can add [binlog event filters](dm-key-features.md#binlog-event-filter) in the configuration file to filter those statements and set `schema-pattern: "*"`. Starting from DM v2.0.1, DM pre-filters statements related to `EVENT`. 
+The reason for this type of error is that the TiDB parser cannot parse DDL statements sent by the upstream, such as `ALTER EVENT`, so `sql-skip` does not take effect as expected. You can add [binlog event filters](key-features.md#binlog-event-filter) in the configuration file to filter those statements and set `schema-pattern: "*"`. Starting from DM v2.0.1, DM pre-filters statements related to `EVENT`. Since DM v2.0, `handle-error` replaces `sql-skip`. You can use `handle-error` instead to avoid this issue. diff --git a/en/feature-expression-filter.md b/en/feature-expression-filter.md index 8003d1c5e..d91b071bf 100644 --- a/en/feature-expression-filter.md +++ b/en/feature-expression-filter.md @@ -6,7 +6,7 @@ title: Filter Certain Row Changes Using SQL Expressions ## Overview -In the process of data migration, DM provides the [Binlog Event Filter](dm-key-features.md#binlog-event-filter) feature to filter certain types of binlog events. For example, for archiving or auditing purposes, `DELETE` event might be filtered when data is migrated to the downstream. However, Binlog Event Filter cannot judge with a greater granularity whether the `DELETE` event of a certain row should be filtered. +In the process of data migration, DM provides the [Binlog Event Filter](key-features.md#binlog-event-filter) feature to filter certain types of binlog events. For example, for archiving or auditing purposes, `DELETE` event might be filtered when data is migrated to the downstream. However, Binlog Event Filter cannot judge with a greater granularity whether the `DELETE` event of a certain row should be filtered. To solve the above issue, DM supports filtering certain row changes using SQL expressions. The binlog in the `ROW` format supported by DM has the values of all columns in binlog events. You can configure SQL expressions according to these values. If the SQL expressions evaluate a row change as `TRUE`, DM will not migrate the row change downstream. 
@@ -16,7 +16,7 @@ To solve the above issue, DM supports filtering certain row changes using SQL ex ## Configuration example -Similar to [Binlog Event Filter](dm-key-features.md#binlog-event-filter), you also need to configure the expression-filter feature in the configuration file of the data migration task, as shown below. For complete configuration and its descriptions, refer to [DM Advanced Task Configuration File](task-configuration-file-full.md#task-configuration-file-template-advanced): +Similar to [Binlog Event Filter](key-features.md#binlog-event-filter), you also need to configure the expression-filter feature in the configuration file of the data migration task, as shown below. For complete configuration and its descriptions, refer to [DM Advanced Task Configuration File](task-configuration-file-full.md#task-configuration-file-template-advanced): ```yml name: test diff --git a/en/feature-shard-merge-pessimistic.md b/en/feature-shard-merge-pessimistic.md index 8d0f6a6b3..97393de62 100644 --- a/en/feature-shard-merge-pessimistic.md +++ b/en/feature-shard-merge-pessimistic.md @@ -25,7 +25,7 @@ DM has the following sharding DDL usage restrictions in the pessimistic mode: - A single `RENAME TABLE` statement can only involve a single `RENAME` operation. - The sharding group migration task requires each DDL statement to involve operations on only one table. - The table schema of each sharded table must be the same at the starting point of the incremental replication task, so as to make sure the DML statements of different sharded tables can be migrated into the downstream with a definite table schema, and the subsequent sharding DDL statements can be correctly matched and migrated. -- If you need to change the [table routing](dm-key-features.md#table-routing) rule, you have to wait for the migration of all sharding DDL statements to complete. 
+- If you need to change the [table routing](key-features.md#table-routing) rule, you have to wait for the migration of all sharding DDL statements to complete. - During the migration of sharding DDL statements, an error is reported if you use `dmctl` to change `router-rules`. - If you need to `CREATE` a new table to a sharding group where DDL statements are being executed, you have to make sure that the table schema is the same as the newly modified table schema. - For example, both the original `table_1` and `table_2` have two columns (a, b) initially, and have three columns (a, b, c) after the sharding DDL operation, so after the migration the newly created table should also have three columns (a, b, c). @@ -75,7 +75,7 @@ The characteristics of DM handling the sharding DDL migration among multiple DM- - After receiving the DDL statement from the binlog event, each DM-worker sends the DDL information to `DM-master`. - `DM-master` creates or updates the DDL lock based on the DDL information received from each DM-worker and the sharding group information. - If all members of the sharding group receive a same specific DDL statement, this indicates that all DML statements before the DDL execution on the upstream sharded tables have been completely migrated, and this DDL statement can be executed. Then DM can continue to migrate the subsequent DML statements. -- After being converted by the [table router](dm-key-features.md#table-routing), the DDL statement of the upstream sharded tables must be consistent with the DDL statement to be executed in the downstream. Therefore, this DDL statement only needs to be executed once by the DDL owner and all other DM-workers can ignore this DDL statement. +- After being converted by the [table router](key-features.md#table-routing), the DDL statement of the upstream sharded tables must be consistent with the DDL statement to be executed in the downstream. 
Therefore, this DDL statement only needs to be executed once by the DDL owner and all other DM-workers can ignore this DDL statement.
 
 In the above example, only one sharded table needs to be merged in the upstream MySQL instance corresponding to each DM-worker. But in actual scenarios, there might be multiple sharded tables in multiple sharded schemas to be merged in one MySQL instance. And when this happens, it becomes more complex to coordinate the sharding DDL migration.
 
diff --git a/en/handle-alerts.md b/en/handle-alerts.md
new file mode 100644
index 000000000..31842fa12
--- /dev/null
+++ b/en/handle-alerts.md
@@ -0,0 +1,189 @@
+---
+title: Handle Alerts
+summary: Understand how to deal with the alert information in DM.
+---
+
+# Handle Alerts
+
+This document introduces how to deal with the alert information in DM.
+
+## Alerts related to high availability
+
+### `DM_master_all_down`
+
+- Description:
+
+    If all DM-master nodes are offline, this alert is triggered.
+
+- Solution:
+
+    You can take the following steps to handle the alert:
+
+    1. Check the environment of the cluster.
+    2. Check the logs of all DM-master nodes for troubleshooting.
+
+### `DM_worker_offline`
+
+- Description:
+
+    If a DM-worker node is offline for more than one hour, this alert is triggered. In a high-availability architecture, this alert might not directly interrupt the task but increases the risk of interruption.
+
+- Solution:
+
+    You can take the following steps to handle the alert:
+
+    1. View the working status of the corresponding DM-worker node.
+    2. Check whether the node is connected.
+    3. Troubleshoot errors through logs.
+
+### `DM_DDL_error`
+
+- Description:
+
+    This alert is triggered when an error occurs while DM is processing sharding DDL operations.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](error-handling.md#troubleshooting).
+
+### `DM_pending_DDL`
+
+- Description:
+
+    If a sharding DDL operation is pending for more than one hour, this alert is triggered.
+
+- Solution:
+
+    In some scenarios, the pending sharding DDL operation might be what users expect. Otherwise, refer to [Handle Sharding DDL Locks Manually in DM](manually-handling-sharding-ddl-locks.md) for a solution.
+
+## Alert rules related to task status
+
+### `DM_task_state`
+
+- Description:
+
+    When a sub-task of DM-worker is in the `Paused` state for over 20 minutes, an alert is triggered.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](error-handling.md#troubleshooting).
+
+## Alert rules related to relay log
+
+### `DM_relay_process_exits_with_error`
+
+- Description:
+
+    When the relay log processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](error-handling.md#troubleshooting).
+
+### `DM_remain_storage_of_relay_log`
+
+- Description:
+
+    When the free space of the disk where the relay log is located is less than 10G, an alert is triggered.
+
+- Solutions:
+
+    You can take the following methods to handle the alert:
+
+    - Delete unwanted data manually to increase free disk space.
+    - Reconfigure the [automatic data purge strategy of the relay log](relay-log.md#automatic-data-purge) or [purge data manually](relay-log.md#manual-data-purge).
+    - Execute the command `pause-relay` to pause the relay log pulling process. After there is enough free disk space, resume the process by running the command `resume-relay`. Note that you must not purge upstream binlog files that have not been pulled after the relay log pulling process is paused.
+
+### `DM_relay_log_data_corruption`
+
+- Description:
+
+    When the relay log processing unit validates the binlog event read from the upstream and detects abnormal checksum information, this unit moves to the `Paused` state, and an alert is triggered immediately.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](error-handling.md#troubleshooting).
+
+### `DM_fail_to_read_binlog_from_master`
+
+- Description:
+
+    If an error occurs when the relay log processing unit tries to read the binlog event from the upstream, this unit moves to the `Paused` state, and an alert is triggered immediately.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](error-handling.md#troubleshooting).
+
+### `DM_fail_to_write_relay_log`
+
+- Description:
+
+    If an error occurs when the relay log processing unit tries to write the binlog event into the relay log file, this unit moves to the `Paused` state, and an alert is triggered immediately.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](error-handling.md#troubleshooting).
+
+### `DM_binlog_file_gap_between_master_relay`
+
+- Description:
+
+    When the number of the binlog files in the current upstream MySQL/MariaDB exceeds that of the latest binlog files pulled by the relay log processing unit by **more than** 1 for 10 minutes, an alert is triggered.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](error-handling.md#troubleshooting).
+
+## Alert rules related to Dump/Load
+
+### `DM_dump_process_exists_with_error`
+
+- Description:
+
+    When the Dump processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](error-handling.md#troubleshooting).
+
+### `DM_load_process_exists_with_error`
+
+- Description:
+
+    When the Load processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](error-handling.md#troubleshooting).
+
+## Alert rules related to binlog replication
+
+### `DM_sync_process_exists_with_error`
+
+- Description:
+
+    When the binlog replication processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately.
+
+- Solution:
+
+    Refer to [Troubleshoot DM](error-handling.md#troubleshooting).
+
+### `DM_binlog_file_gap_between_master_syncer`
+
+- Description:
+
+    When the number of the binlog files in the current upstream MySQL/MariaDB exceeds that of the latest binlog files processed by the relay log processing unit by **more than** 1 for 10 minutes, an alert is triggered.
+
+- Solution:
+
+    Refer to [Handle Performance Issues](handle-performance-issues.md).
+
+### `DM_binlog_file_gap_between_relay_syncer`
+
+- Description:
+
+    When the number of the binlog files in the current relay log processing unit exceeds that of the latest binlog files processed by the binlog replication processing unit by **more than** 1 for 10 minutes, an alert is triggered.
+
+- Solution:
+
+    Refer to [Handle Performance Issues](handle-performance-issues.md).
diff --git a/en/handle-failed-ddl-statements.md b/en/handle-failed-ddl-statements.md
index ff6b31ceb..7eecbba54 100644
--- a/en/handle-failed-ddl-statements.md
+++ b/en/handle-failed-ddl-statements.md
@@ -29,7 +29,7 @@ When you use dmctl to manually handle the failed DDL statements, the commonly us
 
 ### query-status
 
-The `query-status` command is used to query the current status of items such as the subtask and the relay unit in each MySQL instance. For details, see [query status](dm-query-status.md).
+The `query-status` command is used to query the current status of items such as the subtask and the relay unit in each MySQL instance. For details, see [query status](query-status.md).
### handle-error diff --git a/en/dm-handle-performance-issues.md b/en/handle-performance-issues.md similarity index 97% rename from en/dm-handle-performance-issues.md rename to en/handle-performance-issues.md index c6b188deb..6f47bb314 100644 --- a/en/dm-handle-performance-issues.md +++ b/en/handle-performance-issues.md @@ -1,7 +1,6 @@ --- title: Handle Performance Issues summary: Learn about common performance issues that might exist in DM and how to deal with them. -aliases: ['/tidb-data-migration/dev/handle-performance-issues.md/] --- # Handle Performance Issues @@ -73,7 +72,7 @@ The Binlog replication unit decides whether to read the binlog event from the up ### binlog event conversion -The Binlog replication unit constructs DML, parses DDL, and performs [table router](dm-key-features.md#table-routing) conversion from binlog event data. The related metric is `transform binlog event duration`. +The Binlog replication unit constructs DML, parses DDL, and performs [table router](key-features.md#table-routing) conversion from binlog event data. The related metric is `transform binlog event duration`. The duration is mainly affected by the write operations upstream. Take the `INSERT INTO` statement as an example, the time consumed to convert a single `VALUES` greatly differs from that to convert a lot of `VALUES`. The time consumed might range from tens of microseconds to hundreds of microseconds. However, usually this is not a bottleneck of the system. diff --git a/en/dm-key-features.md b/en/key-features.md similarity index 98% rename from en/dm-key-features.md rename to en/key-features.md index f87454da8..225b36315 100644 --- a/en/dm-key-features.md +++ b/en/key-features.md @@ -1,7 +1,7 @@ --- title: Key Features summary: Learn about the key features of DM and appropriate parameter configurations. 
-aliases: ['/docs/tidb-data-migration/dev/feature-overview/','/tidb-data-migration/dev/feature-overview','/tidb-data-migration/dev/key-features.md/] +aliases: ['/docs/tidb-data-migration/dev/feature-overview/','/tidb-data-migration/dev/feature-overview'] --- # Key Features @@ -236,7 +236,7 @@ Binlog event filter is a more fine-grained filtering rule than the block and all > **Note:** > > - If the same table matches multiple rules, these rules are applied in order and the block list has priority over the allow list. This means if both the `Ignore` and `Do` rules are applied to a table, the `Ignore` rule takes effect. -> - Starting from DM v2.0.2, you can configure binlog event filters in the source configuration file. For details, see [Upstream Database Configuration File](dm-source-configuration-file.md). +> - Starting from DM v2.0.2, you can configure binlog event filters in the source configuration file. For details, see [Upstream Database Configuration File](source-configuration-file.md). ### Parameter configuration @@ -376,7 +376,7 @@ In the MySQL ecosystem, tools such as gh-ost and pt-osc are widely used. DM prov ### Restrictions - DM only supports gh-ost and pt-osc. -- When `online-ddl` is enabled, the checkpoint corresponding to incremental replication should not be in the process of online DDL execution. For example, if an upstream online DDL operation starts at `position-A` and ends at `position-B` of the binlog, the starting point of incremental replication should be earlier than `position-A` or later than `position-B`; otherwise, an error occurs. For details, refer to [FAQ](dm-faq.md#how-to-handle-the-error-returned-by-the-ddl-operation-related-to-the-gh-ost-table-after-online-ddl-scheme-gh-ost-is-set). +- When `online-ddl` is enabled, the checkpoint corresponding to incremental replication should not be in the process of online DDL execution. 
For example, if an upstream online DDL operation starts at `position-A` and ends at `position-B` of the binlog, the starting point of incremental replication should be earlier than `position-A` or later than `position-B`; otherwise, an error occurs. For details, refer to [FAQ](faq.md#how-to-handle-the-error-returned-by-the-ddl-operation-related-to-the-gh-ost-table-after-online-ddl-scheme-gh-ost-is-set). ### Parameter configuration diff --git a/en/maintain-dm-using-tiup.md b/en/maintain-dm-using-tiup.md index bb1d94291..ff3ecd310 100644 --- a/en/maintain-dm-using-tiup.md +++ b/en/maintain-dm-using-tiup.md @@ -179,7 +179,7 @@ For example, to scale out a DM-worker node in the `prod-cluster` cluster, take t > **Note:** > -> Since v2.0.5, dmctl support [Export and Import Data Sources and Task Configuration of Clusters](dm-export-import-config.md)。 +> Since v2.0.5, dmctl support [Export and Import Data Sources and Task Configuration of Clusters](export-import-config.md)。 > > Before upgrading, you can use `config export` to export the configuration files of clusters. After upgrading, if you need to downgrade to an earlier version, you can first redeploy the earlier cluster and then use `config import` to import the previous configuration files. > diff --git a/en/dm-manage-schema.md b/en/manage-schema.md similarity index 99% rename from en/dm-manage-schema.md rename to en/manage-schema.md index e7bdaa58d..a81f753ff 100644 --- a/en/dm-manage-schema.md +++ b/en/manage-schema.md @@ -1,7 +1,6 @@ --- title: Manage Table Schemas of Tables to be Migrated summary: Learn how to manage the schema of the table to be migrated in DM. 
-aliases: ['/tidb-data-migration/dev/manage-schema.md/] --- # Manage Table Schemas of Tables to be Migrated diff --git a/en/dm-manage-source.md b/en/manage-source.md similarity index 96% rename from en/dm-manage-source.md rename to en/manage-source.md index ac96de908..96013e867 100644 --- a/en/dm-manage-source.md +++ b/en/manage-source.md @@ -1,7 +1,6 @@ --- title: Manage Data Source Configurations summary: Learn how to manage upstream MySQL instances in TiDB Data Migration. -aliases: ['/tidb-data-migration/dev/manage-source.md/] --- # Manage Data Source Configurations @@ -70,7 +69,7 @@ Use the following `operate-source` command to create a source configuration file operate-source create ./source.yaml ``` -For the configuration of `source.yaml`, refer to [Upstream Database Configuration File Introduction](dm-source-configuration-file.md). +For the configuration of `source.yaml`, refer to [Upstream Database Configuration File Introduction](source-configuration-file.md). The following is an example of the returned result: @@ -169,7 +168,7 @@ Global Flags: -s, --source strings MySQL Source ID. ``` -Before transferring, DM checks whether the worker to be unbound still has running tasks. If the worker has any running tasks, you need to [pause the tasks](dm-pause-task.md) first, change the binding, and then [resume the tasks](dm-resume-task.md). +Before transferring, DM checks whether the worker to be unbound still has running tasks. If the worker has any running tasks, you need to [pause the tasks](pause-task.md) first, change the binding, and then [resume the tasks](resume-task.md). 
### Usage example diff --git a/en/manually-upgrade-dm-1.0-to-2.0.md b/en/manually-upgrade-dm-1.0-to-2.0.md index cc636f9c7..a2f989ffd 100644 --- a/en/manually-upgrade-dm-1.0-to-2.0.md +++ b/en/manually-upgrade-dm-1.0-to-2.0.md @@ -25,7 +25,7 @@ The prepared configuration files of v2.0+ include the configuration files of the ### Upstream database configuration file -In v2.0+, the [upstream database configuration file](dm-source-configuration-file.md) is separated from the process configuration of the DM-worker, so you need to obtain the source configuration based on the [v1.0.x DM-worker configuration](https://docs.pingcap.com/tidb-data-migration/stable/dm-worker-configuration-file). +In v2.0+, the [upstream database configuration file](source-configuration-file.md) is separated from the process configuration of the DM-worker, so you need to obtain the source configuration based on the [v1.0.x DM-worker configuration](https://docs.pingcap.com/tidb-data-migration/stable/dm-worker-configuration-file). > **Note:** > @@ -98,7 +98,7 @@ from: ### Data migration task configuration file -For [data migration task configuration guide](dm-task-configuration-guide.md), v2.0+ is basically compatible with v1.0.x. You can directly copy the configuration of v1.0.x. +For [data migration task configuration guide](task-configuration-guide.md), v2.0+ is basically compatible with v1.0.x. You can directly copy the configuration of v1.0.x. ## Step 2: Deploy the v2.0+ cluster @@ -116,7 +116,7 @@ If the original v1.0.x cluster is deployed by binary, you can stop the DM-worker ## Step 4: Upgrade data migration task -1. Use the [`operate-source`](dm-manage-source.md#operate-data-source) command to load the upstream database source configuration from [step 1](#step-1-prepare-v20-configuration-file) into the v2.0+ cluster. +1. 
Use the [`operate-source`](manage-source.md#operate-data-source) command to load the upstream database source configuration from [step 1](#step-1-prepare-v20-configuration-file) into the v2.0+ cluster. 2. In the downstream TiDB cluster, obtain the corresponding global checkpoint information from the incremental checkpoint table of the v1.0.x data migration task. @@ -158,8 +158,8 @@ If the original v1.0.x cluster is deployed by binary, you can stop the DM-worker > > If `enable-gtid` is enabled in the source configuration, currently you need to parse the binlog or relay log file to obtain the GTID sets corresponding to the binlog position, and set it to `binlog-gtid` in the `meta`. -4. Use the [`start-task`](dm-create-task.md) command to start the upgraded data migration task through the v2.0+ data migration task configuration file. +4. Use the [`start-task`](create-task.md) command to start the upgraded data migration task through the v2.0+ data migration task configuration file. -5. Use the [`query-status`](dm-query-status.md) command to confirm whether the data migration task is running normally. +5. Use the [`query-status`](query-status.md) command to confirm whether the data migration task is running normally. If the data migration task runs normally, it indicates that the DM upgrade to v2.0+ is successful. diff --git a/en/migrate-data-using-dm.md b/en/migrate-data-using-dm.md index 5a98a3d83..1fe38c618 100644 --- a/en/migrate-data-using-dm.md +++ b/en/migrate-data-using-dm.md @@ -14,7 +14,7 @@ It is recommended to [deploy the DM cluster using TiUP](deploy-a-dm-cluster-usin > **Note:** > -> - For database passwords in all the DM configuration files, it is recommended to use the passwords encrypted by `dmctl`. If a database password is empty, it is unnecessary to encrypt it. See [Encrypt the database password using dmctl](dm-manage-source.md#encrypt-the-database-password). 
+> - For database passwords in all the DM configuration files, it is recommended to use the passwords encrypted by `dmctl`. If a database password is empty, it is unnecessary to encrypt it. See [Encrypt the database password using dmctl](manage-source.md#encrypt-the-database-password). > - The user of the upstream and downstream databases must have the corresponding read and write privileges. ## Step 2: Check the cluster information @@ -37,7 +37,7 @@ After the DM cluster is deployed using TiUP, the configuration information is li | Upstream MySQL-2 | 172.16.10.82 | 3306 | root | VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU= | | Downstream TiDB | 172.16.10.83 | 4000 | root | | -The list of privileges needed on the MySQL host can be found in the [precheck](dm-precheck.md) documentation. +The list of privileges needed on the MySQL host can be found in the [precheck](precheck.md) documentation. ## Step 3: Create data source @@ -124,7 +124,7 @@ To detect possible errors of data migration configuration in advance, DM provide - DM automatically checks the corresponding privileges and configuration while starting the data migration task. - You can also use the `check-task` command to manually precheck whether the upstream MySQL instance configuration satisfies the DM requirements. -For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](dm-precheck.md). +For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](precheck.md). > **Note:** > diff --git a/en/migrate-from-mysql-aurora.md b/en/migrate-from-mysql-aurora.md index 7268bee07..c87752cfa 100644 --- a/en/migrate-from-mysql-aurora.md +++ b/en/migrate-from-mysql-aurora.md @@ -68,7 +68,7 @@ If GTID is enabled in Aurora, you can migrate data based on GTID. For how to ena > **Note:** > > + GTID-based data migration requires MySQL 5.7 (Aurora 2.04) version or later. 
-> + In addition to the Aurora-specific configuration above, the upstream database must meet other requirements for migrating from MySQL, such as table schemas, character sets, and privileges. See [Checking Items](dm-precheck.md#checking-items) for details.
+> + In addition to the Aurora-specific configuration above, the upstream database must meet other requirements for migrating from MySQL, such as table schemas, character sets, and privileges. See [Checking Items](precheck.md#checking-items) for details.
 
 ## Step 2: Deploy the DM cluster
 
@@ -122,7 +122,7 @@ The number of `master`s and `worker`s in the returned result is consistent with
 
 > **Note:**
 >
-> The configuration file used by DM supports database passwords in plaintext or ciphertext. It is recommended to use password encrypted using dmctl. To obtain the ciphertext password, see [Encrypt the database password using dmctl](dm-manage-source.md#encrypt-the-database-password).
+> The configuration file used by DM supports database passwords in plaintext or ciphertext. It is recommended to use passwords encrypted using dmctl. To obtain the ciphertext password, see [Encrypt the database password using dmctl](manage-source.md#encrypt-the-database-password).
 
 Save the following configuration files of data source according to the example, in which the value of `source-id` will be used in the task configuration in [step 4](#step-4-configure-the-task).
 
diff --git a/en/dm-open-api.md b/en/open-api.md
similarity index 99%
rename from en/dm-open-api.md
rename to en/open-api.md
index 7fc95f51a..237f56d50 100644
--- a/en/dm-open-api.md
+++ b/en/open-api.md
@@ -1,7 +1,6 @@
 ---
 title: Maintain DM Clusters Using OpenAPI
 summary: Learn about how to use OpenAPI interface to manage the cluster status and data replication.
-aliases: ['/tidb-data-migration/dev/open-api.md/']
 ---
 
 # Maintain DM Clusters Using OpenAPI
diff --git a/en/dm-overview.md b/en/overview.md
similarity index 80%
rename from en/dm-overview.md
rename to en/overview.md
index b454ce579..3a10277a5 100644
--- a/en/dm-overview.md
+++ b/en/overview.md
@@ -1,7 +1,7 @@
 ---
 title: Data Migration Overview
 summary: Learn about the Data Migration tool, the architecture, the key components, and features.
-aliases: ['/docs/tidb-data-migration/dev/overview/','/tidb-data-migration/dev/overview.md/']
+aliases: ['/docs/tidb-data-migration/dev/overview/']
 ---
 
@@ -29,15 +29,15 @@ This section describes the basic data migration features provided by DM.
 
 ### Block and allow lists migration at the schema and table levels
 
-The [block and allow lists filtering rule](dm-key-features.md#block-and-allow-table-lists) is similar to the `replication-rules-db`/`replication-rules-table` feature of MySQL, which can be used to filter or replicate all operations of some databases only or some tables only.
+The [block and allow lists filtering rule](key-features.md#block-and-allow-table-lists) is similar to the `replication-rules-db`/`replication-rules-table` feature of MySQL, which can be used to filter or replicate all operations of some databases only or some tables only.
 
 ### Binlog event filtering
 
-The [binlog event filtering](dm-key-features.md#binlog-event-filter) feature means that DM can filter certain types of SQL statements from certain tables in the source database. For example, you can filter all `INSERT` statements in the table `test`.`sbtest` or filter all `TRUNCATE TABLE` statements in the schema `test`.
+The [binlog event filtering](key-features.md#binlog-event-filter) feature means that DM can filter certain types of SQL statements from certain tables in the source database. For example, you can filter all `INSERT` statements in the table `test`.`sbtest` or filter all `TRUNCATE TABLE` statements in the schema `test`.
 
 ### Schema and table routing
 
-The [schema and table routing](dm-key-features.md#table-routing) feature means that DM can migrate a certain table of the source database to the specified table in the downstream. For example, you can migrate the table structure and data from the table `test`.`sbtest1` in the source database to the table `test`.`sbtest2` in TiDB. This is also a core feature for merging and migrating sharded databases and tables.
+The [schema and table routing](key-features.md#table-routing) feature means that DM can migrate a certain table of the source database to the specified table in the downstream. For example, you can migrate the table structure and data from the table `test`.`sbtest1` in the source database to the table `test`.`sbtest2` in TiDB. This is also a core feature for merging and migrating sharded databases and tables.
 
 ## Advanced features
 
@@ -47,7 +47,7 @@ DM supports merging and migrating the original sharded instances and tables from
 
 ### Optimization for third-party online-schema-change tools in the migration process
 
-In the MySQL ecosystem, tools such as gh-ost and pt-osc are widely used. DM provides support for these tools to avoid migrating unnecessary intermediate data. For details, see [Online DDL Tools](dm-key-features.md#online-ddl-tools)
+In the MySQL ecosystem, tools such as gh-ost and pt-osc are widely used. DM provides support for these tools to avoid migrating unnecessary intermediate data. For details, see [Online DDL Tools](key-features.md#online-ddl-tools)
 
 ### Filter certain row changes using SQL expressions
 
@@ -77,7 +77,7 @@ Before using the DM tool, note the following restrictions:
 
 - Currently, TiDB is not compatible with all the DDL statements that MySQL supports. Because DM uses the TiDB parser to process DDL statements, it only supports the DDL syntax supported by the TiDB parser. For details, see [MySQL Compatibility](https://pingcap.com/docs/stable/reference/mysql-compatibility/#ddl).
-  - DM reports an error when it encounters an incompatible DDL statement. To solve this error, you need to manually handle it using dmctl, either skipping this DDL statement or replacing it with a specified DDL statement(s). For details, see [Skip or replace abnormal SQL statements](dm-faq.md#how-to-handle-incompatible-ddl-statements).
+  - DM reports an error when it encounters an incompatible DDL statement. To solve this error, you need to manually handle it using dmctl, either skipping this DDL statement or replacing it with a specified DDL statement(s). For details, see [Skip or replace abnormal SQL statements](faq.md#how-to-handle-incompatible-ddl-statements).
 
 + Sharding merge with conflicts
diff --git a/en/dm-pause-task.md b/en/pause-task.md
similarity index 97%
rename from en/dm-pause-task.md
rename to en/pause-task.md
index 0aeb04537..60db3485b 100644
--- a/en/dm-pause-task.md
+++ b/en/pause-task.md
@@ -1,7 +1,6 @@
 ---
 title: Pause a Data Migration Task
 summary: Learn how to pause a data migration task in TiDB Data Migration.
-aliases: ['/tidb-data-migration/dev/pause-task.md/']
 ---
 
 # Pause a Data Migration Task
diff --git a/en/dm-performance-test.md b/en/performance-test.md
similarity index 94%
rename from en/dm-performance-test.md
rename to en/performance-test.md
index 58cf3c13f..898d16549 100644
--- a/en/dm-performance-test.md
+++ b/en/performance-test.md
@@ -1,7 +1,6 @@
 ---
 title: DM Cluster Performance Test
 summary: Learn how to test the performance of DM clusters.
-aliases: ['/tidb-data-migration/dev/performance-test.md/']
 ---
 
 # DM Cluster Performance Test
@@ -51,7 +50,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330
 
 #### Create a data migration task
 
-1. Create an upstream MySQL source and set `source-id` to `source-1`. For details, see [Load the Data Source Configurations](dm-manage-source.md#operate-data-source).
+1. Create an upstream MySQL source and set `source-id` to `source-1`. For details, see [Load the Data Source Configurations](manage-source.md#operate-data-source).
 
 2. Create a migration task (in `full` mode). The following is a task configuration template:
@@ -85,7 +84,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330
        threads: 32
    ```
 
-For details about how to create a migration task, see [Create a Data Migration Task](dm-create-task.md).
+For details about how to create a migration task, see [Create a Data Migration Task](create-task.md).
 
 > **Note:**
 >
@@ -110,7 +109,7 @@ Use `sysbench` to create test tables in the upstream.
 
 #### Create a data migration task
 
-1. Create the source of the upstream MySQL. Set `source-id` to `source-1` (if the source has been created in the [full import benchmark case](#full-import-benchmark-case), you do not need to create it again). For details, see [Load the Data Source Configurations](dm-manage-source.md#operate-data-source).
+1. Create the source of the upstream MySQL. Set `source-id` to `source-1` (if the source has been created in the [full import benchmark case](#full-import-benchmark-case), you do not need to create it again). For details, see [Load the Data Source Configurations](manage-source.md#operate-data-source).
 
 2. Create a DM migration task (in `all` mode). The following is an example of the task configuration file:
@@ -143,7 +142,7 @@ Use `sysbench` to create test tables in the upstream.
        batch: 100
    ```
 
-For details about how to create a data migration task, see [Create a Data Migration Task](dm-create-task.md).
+For details about how to create a data migration task, see [Create a Data Migration Task](create-task.md).
 
 > **Note:**
 >
diff --git a/en/dm-precheck.md b/en/precheck.md
similarity index 97%
rename from en/dm-precheck.md
rename to en/precheck.md
index 62f67dca9..91d0702f8 100644
--- a/en/dm-precheck.md
+++ b/en/precheck.md
@@ -1,7 +1,7 @@
 ---
 title: Precheck the Upstream MySQL Instance Configurations
 summary: Learn how to use the precheck feature provided by DM to detect errors in the upstream MySQL instance configurations.
-aliases: ['/docs/tidb-data-migration/dev/precheck/','/tidb-data-migration/dev/precheck.md/']
+aliases: ['/docs/tidb-data-migration/dev/precheck/']
 ---
 
 # Precheck the Upstream MySQL Instance Configurations
diff --git a/en/dm-query-status.md b/en/query-status.md
similarity index 99%
rename from en/dm-query-status.md
rename to en/query-status.md
index 54e33c4f8..80bf47a54 100644
--- a/en/dm-query-status.md
+++ b/en/query-status.md
@@ -1,7 +1,7 @@
 ---
 title: Query Status
 summary: Learn how to query the status of a data replication task.
-aliases: ['/docs/tidb-data-migration/dev/query-status/','/tidb-data-migration/dev/query-error/','/tidb-data-migration/dev/query-status.md/']
+aliases: ['/docs/tidb-data-migration/dev/query-status/','/tidb-data-migration/dev/query-error/']
 ---
 
 # Query Status
diff --git a/en/quick-create-migration-task.md b/en/quick-create-migration-task.md
index 1267bd05e..138950815 100644
--- a/en/quick-create-migration-task.md
+++ b/en/quick-create-migration-task.md
@@ -17,7 +17,7 @@ This document introduces how to configure a data migration task in different sce
 
 In addition to scenario-based documents, you can also refer to the following ones:
 
 - For a complete example of data migration task configuration, refer to [DM Advanced Task Configuration File](task-configuration-file-full.md).
-- For a data migration task configuration guide, refer to [Data Migration Task Configuration Guide](dm-task-configuration-guide.md).
+- For a data migration task configuration guide, refer to [Data Migration Task Configuration Guide](task-configuration-guide.md).
 
 ## Migrate Data from Multiple Data Sources to TiDB
diff --git a/en/quick-start-create-source.md b/en/quick-start-create-source.md
index 51fc1c95e..4bf148015 100644
--- a/en/quick-start-create-source.md
+++ b/en/quick-start-create-source.md
@@ -11,7 +11,7 @@ summary: Learn how to create a data source for Data Migration (DM).
 
 The document describes how to create a data source for the data migration task of TiDB Data Migration (DM).
 
-A data source contains the information for accessing the upstream migration task. Because a data migration task requires referring its corresponding data source to obtain the configuration information of access, you need to create the data source of a task before creating a data migration task. For specific data source management commands, refer to [Manage Data Source Configurations](dm-manage-source.md).
+A data source contains the information for accessing the upstream migration task. Because a data migration task requires referring its corresponding data source to obtain the configuration information of access, you need to create the data source of a task before creating a data migration task. For specific data source management commands, refer to [Manage Data Source Configurations](manage-source.md).
 
 ## Step 1: Configure the data source
 
@@ -57,7 +57,7 @@ You can use the following command to create a data source:
 tiup dmctl --master-addr operate-source create ./source-mysql-01.yaml
 ```
 
-For other configuration parameters, refer to [Upstream Database Configuration File](dm-source-configuration-file.md).
+For other configuration parameters, refer to [Upstream Database Configuration File](source-configuration-file.md).
 
 The returned results are as follows:
diff --git a/en/relay-log.md b/en/relay-log.md
index 267542104..395050aaf 100644
--- a/en/relay-log.md
+++ b/en/relay-log.md
@@ -90,7 +90,7 @@ The starting position of the relay log migration is determined by the following
 
 > **Note:**
 >
-> Since DM v2.0.2, the configuration item `enable-relay` in the source configuration file is no longer valid. If DM finds that `enable-relay` is set to `true` when [loading the data source configuration](dm-manage-source.md#operate-data-source), it outputs the following message:
+> Since DM v2.0.2, the configuration item `enable-relay` in the source configuration file is no longer valid. If DM finds that `enable-relay` is set to `true` when [loading the data source configuration](manage-source.md#operate-data-source), it outputs the following message:
 >
 > ```
 > Please use `start-relay` to specify which workers should pull relay log of relay-enabled sources.
@@ -132,7 +132,7 @@ In the command `start-relay`, you can configure one or more DM-workers to migrat
 
 In DM versions earlier than v2.0.2 (not including v2.0.2), DM checks the configuration item `enable-relay` in the source configuration file when binding a DM-worker to an upstream data source. If `enable-relay` is set to `true`, DM enables the relay log feature for the data source.
 
-See [Upstream Database Configuration File](dm-source-configuration-file.md) for how to set the configuration item `enable-relay`.
+See [Upstream Database Configuration File](source-configuration-file.md) for how to set the configuration item `enable-relay`.
diff --git a/en/dm-resume-task.md b/en/resume-task.md
similarity index 96%
rename from en/dm-resume-task.md
rename to en/resume-task.md
index e5e0ebd57..abf58434d 100644
--- a/en/dm-resume-task.md
+++ b/en/resume-task.md
@@ -1,7 +1,6 @@
 ---
 title: Resume a Data Migration Task
 summary: Learn how to resume a data migration task.
-aliases: ['/tidb-data-migration/dev/resume-task.md/']
 ---
 
 # Resume a Data Migration Task
diff --git a/en/dm-source-configuration-file.md b/en/source-configuration-file.md
similarity index 97%
rename from en/dm-source-configuration-file.md
rename to en/source-configuration-file.md
index a987e7810..426184931 100644
--- a/en/dm-source-configuration-file.md
+++ b/en/source-configuration-file.md
@@ -1,7 +1,7 @@
 ---
 title: Upstream Database Configuration File
 summary: Learn the configuration file of the upstream database
-aliases: ['/docs/tidb-data-migration/dev/source-configuration-file/','/tidb-data-migration/dev/source-configuration-file.md/']
+aliases: ['/docs/tidb-data-migration/dev/source-configuration-file/']
 ---
 
 # Upstream Database Configuration File
@@ -111,4 +111,4 @@ Starting from DM v2.0.2, you can configure binlog event filters in the source co
 
 | Parameter | Description |
 | :------------ | :--------------------------------------- |
 | `case-sensitive` | Determines whether the filtering rules are case-sensitive. The default value is `false`. |
-| `filters` | Sets binlog event filtering rules. For details, see [Binlog event filter parameter explanation](dm-key-features.md#parameter-explanation-2). |
+| `filters` | Sets binlog event filtering rules. For details, see [Binlog event filter parameter explanation](key-features.md#parameter-explanation-2). |
diff --git a/en/dm-stop-task.md b/en/stop-task.md
similarity index 92%
rename from en/dm-stop-task.md
rename to en/stop-task.md
index 470a7802a..540c007fe 100644
--- a/en/dm-stop-task.md
+++ b/en/stop-task.md
@@ -1,12 +1,11 @@
 ---
 title: Stop a Data Migration Task
 summary: Learn how to stop a data migration task.
-aliases: ['/tidb-data-migration/dev/stop-task.md/']
 ---
 
 # Stop a Data Migration Task
 
-You can use the `stop-task` command to stop a data migration task. For differences between `stop-task` and `pause-task`, refer to [Pause a Data Migration Task](dm-pause-task.md).
+You can use the `stop-task` command to stop a data migration task. For differences between `stop-task` and `pause-task`, refer to [Pause a Data Migration Task](pause-task.md).
 
 {{< copyable "" >}}
diff --git a/en/task-configuration-file-full.md b/en/task-configuration-file-full.md
index 00cc225cf..5f1919071 100644
--- a/en/task-configuration-file-full.md
+++ b/en/task-configuration-file-full.md
@@ -7,11 +7,11 @@ aliases: ['/docs/tidb-data-migration/dev/task-configuration-file-full/','/docs/t
 
 This document introduces the advanced task configuration file of Data Migration (DM), including [global configuration](#global-configuration) and [instance configuration](#instance-configuration).
 
-For the feature and configuration of each configuration item, see [Data migration features](dm-overview.md#basic-features).
+For the feature and configuration of each configuration item, see [Data migration features](overview.md#basic-features).
 
 ## Important concepts
 
-For description of important concepts including `source-id` and the DM-worker ID, see [Important concepts](dm-config-overview.md#important-concepts).
+For description of important concepts including `source-id` and the DM-worker ID, see [Important concepts](config-overview.md#important-concepts).
 
 ## Task configuration file template (advanced)
 
@@ -178,9 +178,9 @@ Arguments in each feature configuration set are explained in the comments in the
 
 | Parameter | Description |
 | :------------ | :--------------------------------------- |
-| `routes` | The routing mapping rule set between the upstream and downstream tables. If the names of the upstream and downstream schemas and tables are the same, this item does not need to be configured. See [Table Routing](dm-key-features.md#table-routing) for usage scenarios and sample configurations. |
-| `filters` | The binlog event filter rule set of the matched table of the upstream database instance. If binlog filtering is not required, this item does not need to be configured. See [Binlog Event Filter](dm-key-features.md#binlog-event-filter) for usage scenarios and sample configurations. |
-| `block-allow-list` | The filter rule set of the block allow list of the matched table of the upstream database instance. It is recommended to specify the schemas and tables that need to be migrated through this item, otherwise all schemas and tables are migrated. See [Binlog Event Filter](dm-key-features.md#binlog-event-filter) and [Block & Allow Lists](dm-key-features.md#block-and-allow-table-lists) for usage scenarios and sample configurations. |
+| `routes` | The routing mapping rule set between the upstream and downstream tables. If the names of the upstream and downstream schemas and tables are the same, this item does not need to be configured. See [Table Routing](key-features.md#table-routing) for usage scenarios and sample configurations. |
+| `filters` | The binlog event filter rule set of the matched table of the upstream database instance. If binlog filtering is not required, this item does not need to be configured. See [Binlog Event Filter](key-features.md#binlog-event-filter) for usage scenarios and sample configurations. |
+| `block-allow-list` | The filter rule set of the block allow list of the matched table of the upstream database instance. It is recommended to specify the schemas and tables that need to be migrated through this item, otherwise all schemas and tables are migrated. See [Binlog Event Filter](key-features.md#binlog-event-filter) and [Block & Allow Lists](key-features.md#block-and-allow-table-lists) for usage scenarios and sample configurations. |
 | `mydumpers` | Configuration arguments of dump processing unit. If the default configuration is sufficient for your needs, this item does not need to be configured. Or you can configure `thread` only using `mydumper-thread`. |
 | `loaders` | Configuration arguments of load processing unit. If the default configuration is sufficient for your needs, this item does not need to be configured. Or you can configure `pool-size` only using `loader-thread`. |
 | `syncers` | Configuration arguments of sync processing unit. If the default configuration is sufficient for your needs, this item does not need to be configured. Or you can configure `worker-count` only using `syncer-thread`. |
diff --git a/en/task-configuration-file.md b/en/task-configuration-file.md
index 1ef591c75..ca10b1178 100644
--- a/en/task-configuration-file.md
+++ b/en/task-configuration-file.md
@@ -10,11 +10,11 @@ This document introduces the basic task configuration file of Data Migration (DM
 
 DM also implements [an advanced task configuration file](task-configuration-file-full.md) which provides greater flexibility and more control over DM.
 
-For the feature and configuration of each configuration item, see [Data migration features](dm-key-features.md).
+For the feature and configuration of each configuration item, see [Data migration features](key-features.md).
 
 ## Important concepts
 
-For description of important concepts including `source-id` and the DM-worker ID, see [Important concepts](dm-config-overview.md#important-concepts).
+For description of important concepts including `source-id` and the DM-worker ID, see [Important concepts](config-overview.md#important-concepts).
 
 ## Task configuration file template (basic)
 
@@ -80,7 +80,7 @@ Refer to the comments in the [template](#task-configuration-file-template-basic)
 
 ### Feature configuration set
 
-For basic applications, you only need to modify the block and allow lists filtering rule. Refer to the comments about `block-allow-list` in the [template](#task-configuration-file-template-basic) or [Block & allow table lists](dm-key-features.md#block-and-allow-table-lists) to see more details.
+For basic applications, you only need to modify the block and allow lists filtering rule. Refer to the comments about `block-allow-list` in the [template](#task-configuration-file-template-basic) or [Block & allow table lists](key-features.md#block-and-allow-table-lists) to see more details.
 
 ## Instance configuration
diff --git a/en/dm-task-configuration-guide.md b/en/task-configuration-guide.md
similarity index 97%
rename from en/dm-task-configuration-guide.md
rename to en/task-configuration-guide.md
index 8e31c8605..9ce393bfd 100644
--- a/en/dm-task-configuration-guide.md
+++ b/en/task-configuration-guide.md
@@ -1,7 +1,6 @@
 ---
 title: Data Migration Task Configuration Guide
 summary: Learn how to configure a data migration task in Data Migration (DM).
-aliases: ['/tidb-data-migration/dev/task-configuration-guide.md/']
 ---
 
 # Data Migration Task Configuration Guide
@@ -12,9 +11,9 @@ This document introduces how to configure a data migration task in Data Migratio
 
 Before configuring the data sources to be migrated for the task, you need to first make sure that DM has loaded the configuration files of the corresponding data sources. The following are some operation references:
 
-- To view the data source, you can refer to [Check the data source configuration](dm-manage-source.md#check-data-source-configurations).
+- To view the data source, you can refer to [Check the data source configuration](manage-source.md#check-data-source-configurations).
 - To create a data source, you can refer to [Create data source](migrate-data-using-dm.md#step-3-create-data-source).
-- To generate a data source configuration file, you can refer to [Source configuration file introduction](dm-source-configuration-file.md).
+- To generate a data source configuration file, you can refer to [Source configuration file introduction](source-configuration-file.md).
 
 The following example of `mysql-instances` shows how to configure data sources that need to be migrated for the data migration task:
@@ -79,7 +78,7 @@ To configure the block and allow list of data source tables for the data migrati
         tbl-name: "log"
    ```
 
-   For detailed configuration rules, see [Block and allow table lists](dm-key-features.md#block-and-allow-table-lists).
+   For detailed configuration rules, see [Block and allow table lists](key-features.md#block-and-allow-table-lists).
 
 2. Reference the block and allow list rules in the data source configuration to filter tables to be migrated.
@@ -114,7 +113,7 @@ To configure the filters of binlog events for the data migration task, perform t
         action: Do
    ```
 
-   For detailed configuration rules, see [Binlog event filter](dm-key-features.md#binlog-event-filter).
+   For detailed configuration rules, see [Binlog event filter](key-features.md#binlog-event-filter).
 
 2. Reference the binlog event filtering rules in the data source configuration to filter specified binlog events of specified tables or schemas in the data source.
@@ -152,7 +151,7 @@ To configure the routing mapping rules for migrating data source tables to speci
         target-schema: "test"
    ```
 
-   For detailed configuration rules, see [Table Routing](dm-key-features.md#table-routing).
+   For detailed configuration rules, see [Table Routing](key-features.md#table-routing).
 
 2. Reference the routing mapping rules in the data source configuration to filter tables to be migrated.
@@ -187,7 +186,7 @@ shard-mode: "pessimistic" # The shard merge mode. Optional modes are ""/"p
 
 ## Other configurations
 
-The following is an overall task configuration example of this document. The complete task configuration template can be found in [DM task configuration file full introduction](task-configuration-file-full.md). For the usage and configuration of other configuration items, refer to [Features of Data Migration](dm-key-features.md).
+The following is an overall task configuration example of this document. The complete task configuration template can be found in [DM task configuration file full introduction](task-configuration-file-full.md). For the usage and configuration of other configuration items, refer to [Features of Data Migration](key-features.md).
 
 ```yaml
 ---
diff --git a/en/dm-tune-configuration.md b/en/tune-configuration.md
similarity index 98%
rename from en/dm-tune-configuration.md
rename to en/tune-configuration.md
index 57e3a0191..09b1bd2d1 100644
--- a/en/dm-tune-configuration.md
+++ b/en/tune-configuration.md
@@ -1,7 +1,6 @@
 ---
 title: Optimize Configuration of DM
 summary: Learn how to optimize the configuration of the data migration task to improve the performance of data migration.
-aliases: ['/tidb-data-migration/dev/tune-configuration.md/']
 ---
 
 # Optimize Configuration of DM
diff --git a/en/usage-scenario-downstream-more-columns.md b/en/usage-scenario-downstream-more-columns.md
index fae473e87..9bb5f7ee5 100644
--- a/en/usage-scenario-downstream-more-columns.md
+++ b/en/usage-scenario-downstream-more-columns.md
@@ -48,7 +48,7 @@ Otherwise, after creating the task, the following data migration errors occur wh
 
 The reason for the above errors is that when DM migrates the binlog event, if DM has not maintained internally the table schema corresponding to that table, DM tries to use the current table schema in the downstream to parse the binlog event and generate the corresponding DML statement. If the number of columns in the binlog event is inconsistent with the number of columns in the downstream table schema, the above error might occur.
 
-In such cases, you can execute the [`operate-schema`](dm-manage-schema.md) command to specify for the table a table schema that matches the binlog event. If you are migrating sharded tables, you need to configure the table schema in DM for parsing MySQL binlog for each sharded tables according to the following steps:
+In such cases, you can execute the [`operate-schema`](manage-schema.md) command to specify for the table a table schema that matches the binlog event. If you are migrating sharded tables, you need to configure the table schema in DM for parsing MySQL binlog for each sharded tables according to the following steps:
 
 1. Specify the table schema for the table `log.messages` to be migrated in the data source. The table schema needs to correspond to the data of the binlog event to be replicated by DM. Then save the `CREATE TABLE` table schema statement in a file. For example, save the following table schema in the `log.messages.sql` file:
@@ -60,7 +60,7 @@ In such cases, you can execute the [`operate-schema`](dm-manage-schema.md) comma
    )
    ```
 
-2. Execute the [`operate-schema`](dm-manage-schema.md) command to set the table schema. At this time, the task should be in the `Paused` state because of the above error.
+2. Execute the [`operate-schema`](manage-schema.md) command to set the table schema. At this time, the task should be in the `Paused` state because of the above error.
 
    {{< copyable "shell-regular" >}}
@@ -68,6 +68,6 @@ In such cases, you can execute the [`operate-schema`](dm-manage-schema.md) comma
    tiup dmctl --master-addr operate-schema set -s mysql-01 task-test -d log -t message log.message.sql
    ```
 
-3. Execute the [`resume-task`](dm-resume-task.md) command to resume the `Paused` task.
+3. Execute the [`resume-task`](resume-task.md) command to resume the `Paused` task.
 
-4. Execute the [`query-status`](dm-query-status.md) command to check whether the data migration task is running normally.
+4. Execute the [`query-status`](query-status.md) command to check whether the data migration task is running normally.
diff --git a/en/usage-scenario-shard-merge.md b/en/usage-scenario-shard-merge.md
index a50999462..cf1151594 100644
--- a/en/usage-scenario-shard-merge.md
+++ b/en/usage-scenario-shard-merge.md
@@ -81,7 +81,7 @@ In the above structure, `sid` is the shard key, which can ensure that the same `
 
 ## Migration solution
 
-- To satisfy the migration requirements #1, you do not need to configure the [table routing rule](dm-key-features.md#table-routing). You need to manually create a table based on the requirements in the section [Remove the `PRIMARY KEY` attribute from the column](shard-merge-best-practices.md#remove-the-primary-key-attribute-from-the-column):
+- To satisfy the migration requirements #1, you do not need to configure the [table routing rule](key-features.md#table-routing). You need to manually create a table based on the requirements in the section [Remove the `PRIMARY KEY` attribute from the column](shard-merge-best-practices.md#remove-the-primary-key-attribute-from-the-column):
 
   {{< copyable "sql" >}}
@@ -104,7 +104,7 @@ In the above structure, `sid` is the shard key, which can ensure that the same `
     ignore-checking-items: ["auto_increment_ID"]
   ```
 
-- To satisfy the migration requirement #2, configure the [table routing rule](dm-key-features.md#table-routing) as follows:
+- To satisfy the migration requirement #2, configure the [table routing rule](key-features.md#table-routing) as follows:
 
   {{< copyable "" >}}
@@ -121,7 +121,7 @@ In the above structure, `sid` is the shard key, which can ensure that the same `
       target-table: "sale"
   ```
 
-- To satisfy the migration requirements #3, configure the [Block and allow table lists](dm-key-features.md#block-and-allow-table-lists) as follows:
+- To satisfy the migration requirements #3, configure the [Block and allow table lists](key-features.md#block-and-allow-table-lists) as follows:
 
   {{< copyable "" >}}
@@ -134,7 +134,7 @@ In the above structure, `sid` is the shard key, which can ensure that the same `
       tbl-name: "log_bak"
   ```
 
-- To satisfy the migration requirement #4, configure the [binlog event filter rule](dm-key-features.md#binlog-event-filter) as follows:
+- To satisfy the migration requirement #4, configure the [binlog event filter rule](key-features.md#binlog-event-filter) as follows:
 
   {{< copyable "" >}}
@@ -154,7 +154,7 @@ In the above structure, `sid` is the shard key, which can ensure that the same `
 
 ## Migration task configuration
 
-The complete configuration of the migration task is shown as follows. For more details, see [Data Migration Task Configuration Guide](dm-task-configuration-guide.md).
+The complete configuration of the migration task is shown as follows. For more details, see [Data Migration Task Configuration Guide](task-configuration-guide.md).
 
 {{< copyable "" >}}
diff --git a/en/usage-scenario-simple-migration.md b/en/usage-scenario-simple-migration.md
index 7772b050f..b183ded07 100644
--- a/en/usage-scenario-simple-migration.md
+++ b/en/usage-scenario-simple-migration.md
@@ -61,7 +61,7 @@ Assume that the schemas migrated to the downstream are as follows:
 
 ## Migration solution
 
-- To satisfy migration Requirements #1-i, #1-ii and #1-iii, configure the [table routing rules](dm-key-features.md#table-routing) as follows:
+- To satisfy migration Requirements #1-i, #1-ii and #1-iii, configure the [table routing rules](key-features.md#table-routing) as follows:
 
   ```yaml
   routes:
@@ -77,7 +77,7 @@ Assume that the schemas migrated to the downstream are as follows:
       target-schema: "user_south"
   ```
 
-- To satisfy the migration Requirement #2-i, configure the [table routing rules](dm-key-features.md#table-routing) as follows:
+- To satisfy the migration Requirement #2-i, configure the [table routing rules](key-features.md#table-routing) as follows:
 
   ```yaml
   routes:
@@ -94,7 +94,7 @@ Assume that the schemas migrated to the downstream are as follows:
       target-table: "store_shenzhen"
  ```
 
-- To satisfy the migration Requirement #1-iv, configure the [binlog filtering rules](dm-key-features.md#binlog-event-filter) as follows:
+- To satisfy the migration Requirement #1-iv, configure the [binlog filtering rules](key-features.md#binlog-event-filter) as follows:
 
   ```yaml
   filters:
@@ -110,7 +110,7 @@ Assume that the schemas migrated to the downstream are as follows:
       action: Ignore
   ```
 
-- To satisfy the migration Requirement #2-ii, configure the [binlog filtering rule](dm-key-features.md#binlog-event-filter) as follows:
+- To satisfy the migration Requirement #2-ii, configure the [binlog filtering rule](key-features.md#binlog-event-filter) as follows:
 
   ```yaml
   filters:
@@ -125,7 +125,7 @@ Assume that the schemas migrated to the downstream are as follows:
 >
 > `store-filter-rule` is different from `log-filter-rule & user-filter-rule`. `store-filter-rule` is a rule for the whole `store` schema, while `log-filter-rule` and `user-filter-rule` are rules for the `log` table in the `user` schema.
 
-- To satisfy the migration Requirement #3, configure the [block and allow lists](dm-key-features.md#block-and-allow-table-lists) as follows:
+- To satisfy the migration Requirement #3, configure the [block and allow lists](key-features.md#block-and-allow-table-lists) as follows:
 
   ```yaml
   block-allow-list: # Use black-white-list if the DM version is earlier than or equal to v2.0.0-beta.2.
@@ -135,7 +135,7 @@ Assume that the schemas migrated to the downstream are as follows:
 
 ## Migration task configuration
 
-The complete migration task configuration is shown below. For more details, see [data migration task configuration guide](dm-task-configuration-guide.md).
+The complete migration task configuration is shown below. For more details, see [data migration task configuration guide](task-configuration-guide.md).
```yaml name: "one-tidb-secondary" diff --git a/zh/TOC.md b/zh/TOC.md index 315ffcc25..dd52a7c06 100644 --- a/zh/TOC.md +++ b/zh/TOC.md @@ -2,12 +2,12 @@ - 关于 DM - - [DM 简介](dm-overview.md) + - [DM 简介](overview.md) - [DM 5.3 Release Notes](releases/5.3.0.md) - 基本功能 - - [Table routing](dm-key-features.md#table-routing) - - [Block & Allow Lists](dm-key-features.md#block--allow-table-lists) - - [Binlog Event Filter](dm-key-features.md#binlog-event-filter) + - [Table routing](key-features.md#table-routing) + - [Block & Allow Lists](key-features.md#block--allow-table-lists) + - [Binlog Event Filter](key-features.md#binlog-event-filter) - 高级功能 - 分库分表合并迁移 - [概述](feature-shard-merge.md) @@ -16,7 +16,7 @@ - [迁移使用 GH-ost/PT-osc 的源数据库](feature-online-ddl.md) - [使用 SQL 表达式过滤某些行变更](feature-expression-filter.md) - [DM 架构](dm-arch.md) - - [性能数据](dm-benchmark-v5.3.0.md) + - [性能数据](benchmark-v5.3.0.md) - 快速上手 - [快速上手试用](quick-start-with-dm.md) - [使用 TiUP 部署 DM 集群](deploy-a-dm-cluster-using-tiup.md) @@ -35,56 +35,56 @@ - [使用 Binary](deploy-a-dm-cluster-using-binary.md) - [使用 Kubernetes](https://docs.pingcap.com/zh/tidb-in-kubernetes/dev/deploy-tidb-dm) - [使用 DM 迁移数据](migrate-data-using-dm.md) - - [测试 DM 性能](dm-performance-test.md) + - [测试 DM 性能](performance-test.md) - 运维操作 - 集群运维工具 - [使用 TiUP 运维集群(推荐)](maintain-dm-using-tiup.md) - [使用 dmctl 运维集群](dmctl-introduction.md) - - [使用 OpenAPI 运维集群](dm-open-api.md) + - [使用 OpenAPI 运维集群](open-api.md) - 升级版本 - [1.0.x 到 2.0+ 手动升级](manually-upgrade-dm-1.0-to-2.0.md) - - [管理数据源](dm-manage-source.md) + - [管理数据源](manage-source.md) - 管理迁移任务 - - [任务配置向导](dm-task-configuration-guide.md) - - [任务前置检查](dm-precheck.md) - - [创建任务](dm-create-task.md) - - [查询状态](dm-query-status.md) - - [暂停任务](dm-pause-task.md) - - [恢复任务](dm-resume-task.md) - - [停止任务](dm-stop-task.md) - - [导出和导入集群的数据源和任务配置](dm-export-import-config.md) + - [任务配置向导](task-configuration-guide.md) + - [任务前置检查](precheck.md) + - [创建任务](create-task.md) + - [查询状态](query-status.md) + - 
[暂停任务](pause-task.md) + - [恢复任务](resume-task.md) + - [停止任务](stop-task.md) + - [导出和导入集群的数据源和任务配置](export-import-config.md) - [处理出错的 DDL 语句](handle-failed-ddl-statements.md) - [手动处理 Sharding DDL Lock](manually-handling-sharding-ddl-locks.md) - - [管理迁移表的表结构](dm-manage-schema.md) - - [处理告警](dm-handle-alerts.md) + - [管理迁移表的表结构](manage-schema.md) + - [处理告警](handle-alerts.md) - [日常巡检](dm-daily-check.md) - 使用场景 - [从 Aurora 迁移数据到 TiDB](migrate-from-mysql-aurora.md) - [TiDB 表结构存在更多列的迁移场景](usage-scenario-downstream-more-columns.md) - [变更同步的 MySQL 实例](usage-scenario-master-slave-switch.md) - 故障处理 - - [故障及处理方法](dm-error-handling.md) - - [性能问题及处理方法](dm-handle-performance-issues.md) + - [故障及处理方法](error-handling.md) + - [性能问题及处理方法](handle-performance-issues.md) - 性能调优 - - [配置调优](dm-tune-configuration.md) + - [配置调优](tune-configuration.md) - 参考指南 - 架构 - [DM 架构简介](dm-arch.md) - [DM-worker 简介](dm-worker-intro.md) - - [DM 命令行参数](dm-command-line-flags.md) + - [DM 命令行参数](command-line-flags.md) - 配置 - - [概述](dm-config-overview.md) + - [概述](config-overview.md) - [DM-master 配置](dm-master-configuration-file.md) - [DM-worker 配置](dm-worker-configuration-file.md) - - [上游数据库配置](dm-source-configuration-file.md) - - [数据迁移任务配置向导](dm-task-configuration-guide.md) + - [上游数据库配置](source-configuration-file.md) + - [数据迁移任务配置向导](task-configuration-guide.md) - 安全 - - [为 DM 的连接开启加密传输](dm-enable-tls.md) + - [为 DM 的连接开启加密传输](enable-tls.md) - [生成自签名证书](dm-generate-self-signed-certificates.md) - [监控指标](monitor-a-dm-cluster.md) - [告警信息](dm-alert-rules.md) - - [错误码](dm-error-handling.md#常见故障处理方法) -- [常见问题](dm-faq.md) + - [错误码](error-handling.md#常见故障处理方法) +- [常见问题](faq.md) - [术语表](dm-glossary.md) - 版本发布历史 - v5.3 diff --git a/zh/_index.md b/zh/_index.md index 22feb83b7..ad6399046 100644 --- a/zh/_index.md +++ b/zh/_index.md @@ -17,8 +17,8 @@ aliases: ['/docs-cn/tidb-data-migration/dev/'] 关于 TiDB Data Migration -- [什么是 DM?](dm-overview.md) -- [DM 架构](dm-overview.md) +- [什么是 DM?](overview.md) +- [DM 架构](overview.md) - 
[性能数据](benchmark-v2.0-ga.md) @@ -41,7 +41,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/'] - [使用 TiUP 离线镜像部署集群](deploy-a-dm-cluster-using-tiup-offline.md) - [使用 Binary 部署集群](deploy-a-dm-cluster-using-binary.md) - [使用 DM 迁移数据](migrate-data-using-dm.md) -- [测试 DM 性能](dm-performance-test.md) +- [测试 DM 性能](performance-test.md) @@ -52,7 +52,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/'] - [使用 dmctl 运维集群](dmctl-introduction.md) - [升级版本](manually-upgrade-dm-1.0-to-2.0.md) - [手动处理 Sharding DDL Lock](manually-handling-sharding-ddl-locks.md) -- [处理告警](dm-handle-alerts.md) +- [处理告警](handle-alerts.md) - [日常巡检](dm-daily-check.md) @@ -69,11 +69,11 @@ aliases: ['/docs-cn/tidb-data-migration/dev/'] 参考指南 - [DM 架构](dm-arch.md) -- [DM 命令行参数](dm-command-line-flags.md) -- [配置概述](dm-config-overview.md) +- [DM 命令行参数](command-line-flags.md) +- [配置概述](config-overview.md) - [监控指标](monitor-a-dm-cluster.md) - [告警信息](dm-alert-rules.md) -- [错误码](dm-error-handling.md#常见故障处理方法) +- [错误码](error-handling.md#常见故障处理方法) diff --git a/zh/benchmark-v1.0-ga.md b/zh/benchmark-v1.0-ga.md index d6b6451bd..0af824c8f 100644 --- a/zh/benchmark-v1.0-ga.md +++ b/zh/benchmark-v1.0-ga.md @@ -59,11 +59,11 @@ aliases: ['/docs-cn/tidb-data-migration/dev/benchmark-v1.0-ga/'] ## 测试场景 -可以参考[性能测试](dm-performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.4.40) -> DM-worker -> TiDB (172.16.4.41)。 +可以参考[性能测试](performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.4.40) -> DM-worker -> TiDB (172.16.4.41)。 ### 全量导入性能测试 -可以参考[全量导入性能测试用例](dm-performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 +可以参考[全量导入性能测试用例](performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 #### 全量导入性能测试结果 @@ -106,7 +106,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/benchmark-v1.0-ga/'] ### 增量复制性能测试用例 -使用[增量复制性能测试用例](dm-performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 +使用[增量复制性能测试用例](performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 #### 增量复制性能测试结果 diff --git a/zh/benchmark-v2.0-ga.md b/zh/benchmark-v2.0-ga.md index 1e88355e3..32cd77c7c 100644 
--- a/zh/benchmark-v2.0-ga.md +++ b/zh/benchmark-v2.0-ga.md @@ -59,11 +59,11 @@ summary: 了解 DM 2.0-GA 版本的性能。 ## 测试场景 -可以参考[性能测试](dm-performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.5.33) -> DM-worker(172.16.5.32) -> TiDB (172.16.5.34)。 +可以参考[性能测试](performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.5.33) -> DM-worker(172.16.5.32) -> TiDB (172.16.5.34)。 ### 全量导入性能测试 -可以参考[全量导入性能测试用例](dm-performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 +可以参考[全量导入性能测试用例](performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 #### 全量导入性能测试结果 @@ -105,7 +105,7 @@ summary: 了解 DM 2.0-GA 版本的性能。 ### 增量复制性能测试用例 -使用[增量复制性能测试用例](dm-performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 +使用[增量复制性能测试用例](performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 #### 增量复制性能测试结果 diff --git a/zh/dm-benchmark-v5.3.0.md b/zh/benchmark-v5.3.0.md similarity index 93% rename from zh/dm-benchmark-v5.3.0.md rename to zh/benchmark-v5.3.0.md index 29bd5df42..a460f3aa1 100644 --- a/zh/dm-benchmark-v5.3.0.md +++ b/zh/benchmark-v5.3.0.md @@ -1,7 +1,6 @@ --- title: DM 5.3.0 性能测试报告 summary: 了解 DM 5.3.0 版本的性能。 -aliases: ['/zh/tidb-data-migration/dev/benchmark-v5.3.0.md/] --- # DM 5.3.0 性能测试报告 @@ -54,11 +53,11 @@ aliases: ['/zh/tidb-data-migration/dev/benchmark-v5.3.0.md/] ## 测试场景 -可以参考[性能测试](dm-performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.6.1) -> DM-worker(172.16.6.2) -> TiDB(load balance) (172.16.6.4)。 +可以参考[性能测试](performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.6.1) -> DM-worker(172.16.6.2) -> TiDB(load balance) (172.16.6.4)。 ### 全量导入性能测试 -可以参考[全量导入性能测试用例](dm-performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 +可以参考[全量导入性能测试用例](performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 #### 全量导入性能测试结果 @@ -100,7 +99,7 @@ aliases: ['/zh/tidb-data-migration/dev/benchmark-v5.3.0.md/] ### 增量复制性能测试用例 -使用[增量复制性能测试用例](dm-performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 +使用[增量复制性能测试用例](performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 #### 增量复制性能测试结果 diff --git a/zh/dm-command-line-flags.md 
b/zh/command-line-flags.md similarity index 98% rename from zh/dm-command-line-flags.md rename to zh/command-line-flags.md index 3320bf564..215165239 100644 --- a/zh/dm-command-line-flags.md +++ b/zh/command-line-flags.md @@ -1,7 +1,7 @@ --- title: DM 命令行参数 summary: 介绍 DM 各组件的主要命令行参数。 -aliases: ['/docs-cn/tidb-data-migration/dev/command-line-flags/','/zh/tidb-data-migration/dev/command-line-flags.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/command-line-flags/'] --- # DM 命令行参数 diff --git a/zh/dm-config-overview.md b/zh/config-overview.md similarity index 74% rename from zh/dm-config-overview.md rename to zh/config-overview.md index b2d3c1d40..7cdcd6191 100644 --- a/zh/dm-config-overview.md +++ b/zh/config-overview.md @@ -1,6 +1,6 @@ --- title: DM 配置简介 -aliases: ['/docs-cn/tidb-data-migration/dev/config-overview/','/zh/tidb-data-migration/dev/config-overview.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/config-overview/'] --- # DM 配置简介 @@ -11,7 +11,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/config-overview/','/zh/tidb-data-mig - `dm-master.toml`:DM-master 进程的配置文件,包括 DM-master 的拓扑信息、日志等各项配置。配置说明详见 [DM-master 配置文件介绍](dm-master-configuration-file.md)。 - `dm-worker.toml`:DM-worker 进程的配置文件,包括 DM-worker 的拓扑信息、日志等各项配置。配置说明详见 [DM-worker 配置文件介绍](dm-worker-configuration-file.md)。 -- `source.yaml`:上游数据库 MySQL/MariaDB 相关配置。配置说明详见[上游数据库配置文件介绍](dm-source-configuration-file.md)。 +- `source.yaml`:上游数据库 MySQL/MariaDB 相关配置。配置说明详见[上游数据库配置文件介绍](source-configuration-file.md)。 ## 迁移任务配置 @@ -19,9 +19,9 @@ aliases: ['/docs-cn/tidb-data-migration/dev/config-overview/','/zh/tidb-data-mig 具体步骤如下: -1. [使用 dmctl 将数据源配置加载到 DM 集群](dm-manage-source.md#数据源操作); -2. 参考[数据任务配置向导](dm-task-configuration-guide.md)来创建 `your_task.yaml`; -3. [使用 dmctl 创建数据迁移任务](dm-create-task.md)。 +1. [使用 dmctl 将数据源配置加载到 DM 集群](manage-source.md#数据源操作); +2. 参考[数据任务配置向导](task-configuration-guide.md)来创建 `your_task.yaml`; +3. 
[使用 dmctl 创建数据迁移任务](create-task.md)。 ### 关键概念 diff --git a/zh/dm-create-task.md b/zh/create-task.md similarity index 91% rename from zh/dm-create-task.md rename to zh/create-task.md index 2c1309aa9..eb60553de 100644 --- a/zh/dm-create-task.md +++ b/zh/create-task.md @@ -1,12 +1,12 @@ --- title: 创建数据迁移任务 summary: 了解 TiDB Data Migration 如何创建数据迁移任务。 -aliases: ['/docs-cn/tidb-data-migration/dev/create-task/','/zh/tidb-data-migration/dev/create-task.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/create-task/'] --- # 创建数据迁移任务 -`start-task` 命令用于创建数据迁移任务。当数据迁移任务启动时,DM 将[自动对相应权限和配置进行前置检查](dm-precheck.md)。 +`start-task` 命令用于创建数据迁移任务。当数据迁移任务启动时,DM 将[自动对相应权限和配置进行前置检查](precheck.md)。 {{< copyable "" >}} diff --git a/zh/dm-alert-rules.md b/zh/dm-alert-rules.md index 312e6af8e..2a63a1093 100644 --- a/zh/dm-alert-rules.md +++ b/zh/dm-alert-rules.md @@ -8,6 +8,6 @@ aliases: ['/docs-cn/tidb-data-migration/dev/alert-rules/','/zh/tidb-data-migrati 使用 TiUP 部署 DM 集群的时候,会默认部署一套[告警系统](migrate-data-using-dm.md#第-8-步监控任务与查看日志)。 -DM 的告警规则及其对应的处理方法可参考[告警处理](dm-handle-alerts.md)。 +DM 的告警规则及其对应的处理方法可参考[告警处理](handle-alerts.md)。 DM 的告警信息与监控指标均基于 Prometheus,告警规则与监控指标的对应关系可参考 [DM 监控指标](monitor-a-dm-cluster.md)。 \ No newline at end of file diff --git a/zh/dm-daily-check.md b/zh/dm-daily-check.md index 2a5b2f17e..c330b41ba 100644 --- a/zh/dm-daily-check.md +++ b/zh/dm-daily-check.md @@ -8,7 +8,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/daily-check/','/zh/tidb-data-migrati 本文总结了 TiDB Data Migration (DM) 工具日常巡检的方法: -+ 方法一:执行 `query-status` 命令查看任务运行状态以及相关错误输出。详见[查询状态](dm-query-status.md)。 ++ 方法一:执行 `query-status` 命令查看任务运行状态以及相关错误输出。详见[查询状态](query-status.md)。 + 方法二:如果使用 TiUP 部署 DM 集群时正确部署了 Prometheus 与 Grafana,如 Grafana 的地址为 `172.16.10.71`,可在浏览器中打开 进入 Grafana,选择 DM 的 Dashboard 即可查看 DM 相关监控项。具体监控指标参照[监控与告警设置](monitor-a-dm-cluster.md)。 diff --git a/zh/dm-glossary.md b/zh/dm-glossary.md index e3a76e26e..996c53c9b 100644 --- a/zh/dm-glossary.md +++ b/zh/dm-glossary.md @@ -20,7 +20,7 @@ MySQL/MariaDB 
生成的 Binlog 文件中的数据变更信息,具体请参考 ### Binlog event filter -比 Block & allow table list 更加细粒度的过滤功能,具体可参考 [Binlog Event Filter](dm-overview.md#binlog-event-filter)。 +比 Block & allow table list 更加细粒度的过滤功能,具体可参考 [Binlog Event Filter](overview.md#binlog-event-filter)。 ### Binlog position @@ -32,7 +32,7 @@ DM-worker 内部用于读取上游 Binlog 或本地 Relay log 并迁移到下游 ### Block & allow table list -针对上游数据库实例表的黑白名单过滤功能,具体可参考 [Block & Allow Table Lists](dm-overview.md#block--allow-lists)。该功能与 [MySQL Replication Filtering](https://dev.mysql.com/doc/refman/5.6/en/replication-rules.html) 及 [MariaDB Replication Filters](https://mariadb.com/kb/en/library/replication-filters/) 类似。 +针对上游数据库实例表的黑白名单过滤功能,具体可参考 [Block & Allow Table Lists](overview.md#block--allow-lists)。该功能与 [MySQL Replication Filtering](https://dev.mysql.com/doc/refman/5.6/en/replication-rules.html) 及 [MariaDB Replication Filters](https://mariadb.com/kb/en/library/replication-filters/) 类似。 ## C @@ -122,13 +122,13 @@ DM-worker 内部用于从上游拉取 Binlog 并写入数据到 Relay log 的处 ### Subtask status -数据迁移子任务所处的状态,目前包括 `New`、`Running`、`Paused`、`Stopped` 及 `Finished` 5 种状态。有关数据迁移任务、子任务状态的更多信息可参考[任务状态](dm-query-status.md#任务状态)。 +数据迁移子任务所处的状态,目前包括 `New`、`Running`、`Paused`、`Stopped` 及 `Finished` 5 种状态。有关数据迁移任务、子任务状态的更多信息可参考[任务状态](query-status.md#任务状态)。 ## T ### Table routing -用于支持将上游 MySQL/MariaDB 实例的某些表迁移到下游指定表的路由功能,可以用于分库分表的合并迁移,具体可参考 [Table routing](dm-key-features.md#table-routing)。 +用于支持将上游 MySQL/MariaDB 实例的某些表迁移到下游指定表的路由功能,可以用于分库分表的合并迁移,具体可参考 [Table routing](key-features.md#table-routing)。 ### Task @@ -136,4 +136,4 @@ DM-worker 内部用于从上游拉取 Binlog 并写入数据到 Relay log 的处 ### Task status -数据迁移子任务所处的状态,由 [Subtask status](#subtask-status) 整合而来,具体信息可查看[任务状态](dm-query-status.md#任务状态)。 +数据迁移子任务所处的状态,由 [Subtask status](#subtask-status) 整合而来,具体信息可查看[任务状态](query-status.md#任务状态)。 diff --git a/zh/dm-hardware-and-software-requirements.md b/zh/dm-hardware-and-software-requirements.md index b2ead8cac..78f173217 100644 --- a/zh/dm-hardware-and-software-requirements.md +++ 
b/zh/dm-hardware-and-software-requirements.md @@ -46,4 +46,4 @@ DM 支持部署和运行在 Intel x86-64 架构的 64 位通用硬件服务器 > **注意:** > > - 在生产环境中,不建议将 DM-master 和 DM-worker 部署和运行在同一个服务器上,以防 DM-worker 对磁盘的写入干扰 DM-master 高可用组件使用磁盘。 -> - 在遇到性能问题时可参照[配置调优](dm-tune-configuration.md)尝试修改任务配置。调优效果不明显时,可以尝试升级服务器配置。 +> - 在遇到性能问题时可参照[配置调优](tune-configuration.md)尝试修改任务配置。调优效果不明显时,可以尝试升级服务器配置。 diff --git a/zh/dm-enable-tls.md b/zh/enable-tls.md similarity index 98% rename from zh/dm-enable-tls.md rename to zh/enable-tls.md index 46040a994..b29dfcd1e 100644 --- a/zh/dm-enable-tls.md +++ b/zh/enable-tls.md @@ -1,7 +1,6 @@ --- title: 为 DM 的连接开启加密传输 summary: 了解如何为 DM 的连接开启加密传输。 -aliases: ['/zh/tidb-data-migration/dev/enable-tls.md/] --- # 为 DM 的连接开启加密传输 diff --git a/zh/dm-error-handling.md b/zh/error-handling.md similarity index 97% rename from zh/dm-error-handling.md rename to zh/error-handling.md index 4cb38a00c..06393848c 100644 --- a/zh/dm-error-handling.md +++ b/zh/error-handling.md @@ -1,7 +1,7 @@ --- title: 故障及处理方法 summary: 了解 DM 的错误系统及常见故障的处理方法。 -aliases: ['/docs-cn/tidb-data-migration/dev/error-handling/','/docs-cn/tidb-data-migration/dev/troubleshoot-dm/','/docs-cn/tidb-data-migration/dev/error-system/','/zh/tidb-data-migration/dev/error-handling.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/error-handling/','/docs-cn/tidb-data-migration/dev/troubleshoot-dm/','/docs-cn/tidb-data-migration/dev/error-system/'] --- # 故障及处理方法 @@ -88,7 +88,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/error-handling/','/docs-cn/tidb-data resume-task ${task name} ``` -但在某些情况下,你还需要重置数据迁移任务。有关何时需要重置以及如何重置,详见[重置数据迁移任务](dm-faq.md#如何重置数据迁移任务)。 +但在某些情况下,你还需要重置数据迁移任务。有关何时需要重置以及如何重置,详见[重置数据迁移任务](faq.md#如何重置数据迁移任务)。 ## 常见故障处理方法 @@ -99,8 +99,8 @@ aliases: ['/docs-cn/tidb-data-migration/dev/error-handling/','/docs-cn/tidb-data | `code=10003` | 数据库底层 `invalid connection` 错误,通常表示 DM 到下游 TiDB 的数据库连接出现了异常(如网络故障、TiDB 重启、TiKV busy 等)且当前请求已有部分数据发送到了 TiDB。 | DM 提供针对此类错误的自动恢复。如果未能正常恢复,需要用户进一步检查错误信息并根据具体场景进行分析。 | | 
`code=10005` | 数据库查询类语句出错 | | | `code=10006` | 数据库 `EXECUTE` 类型语句出错,包括 DDL 和 `INSERT`/`UPDATE`/`DELETE` 类型的 DML。更详细的错误信息可通过错误 message 获取。错误 message 中通常包含操作数据库所返回的错误码和错误信息。 | | -| `code=11006` | DM 内置的 parser 解析不兼容的 DDL 时出错 | 可参考 [Data Migration 故障诊断-处理不兼容的 DDL 语句](dm-faq.md#如何处理不兼容的-ddl-语句) 提供的解决方案 | -| `code=20010` | 处理任务配置时,解密数据库的密码出错 | 检查任务配置中提供的下游数据库密码是否有[使用 dmctl 正确加密](dm-manage-source.md#加密数据库密码) | +| `code=11006` | DM 内置的 parser 解析不兼容的 DDL 时出错 | 可参考 [Data Migration 故障诊断-处理不兼容的 DDL 语句](faq.md#如何处理不兼容的-ddl-语句) 提供的解决方案 | +| `code=20010` | 处理任务配置时,解密数据库的密码出错 | 检查任务配置中提供的下游数据库密码是否有[使用 dmctl 正确加密](manage-source.md#加密数据库密码) | | `code=26002` | 任务检查创建数据库连接失败。更详细的错误信息可通过错误 message 获取。错误 message 中包含操作数据库所返回的错误码和错误信息。 | 检查 DM-master 所在的机器是否有权限访问上游 | | `code=32001` | dump 处理单元异常 | 如果报错 `msg` 包含 `mydumper: argument list too long.`,则需要用户根据 block-allow-list,在 `task.yaml` 的 dump 处理单元的 `extra-args` 参数中手动加上 `--regex` 正则表达式设置要导出的库表。例如,如果要导出所有库中表名字为 `hello` 的表,可加上 `--regex '.*\\.hello$'`,如果要导出所有表,可加上 `--regex '.*'`。 | | `code=38008` | DM 组件间的 gRPC 通信出错 | 检查 `class`, 定位错误发生在哪些组件的交互环节,根据错误 message 判断是哪类通信错误。如果是 gRPC 建立连接出错,可检查通信服务端是否运行正常。 | @@ -163,9 +163,9 @@ aliases: ['/docs-cn/tidb-data-migration/dev/error-handling/','/docs-cn/tidb-data ### 执行 `query-status` 或查看日志时出现 `Access denied for user 'root'@'172.31.43.27' (using password: YES)` -在所有 DM 配置文件中,数据库相关的密码都推荐使用经 dmctl 加密后的密文(若数据库密码为空,则无需加密)。有关如何使用 dmctl 加密明文密码,参见[使用 dmctl 加密数据库密码](dm-manage-source.md#加密数据库密码)。 +在所有 DM 配置文件中,数据库相关的密码都推荐使用经 dmctl 加密后的密文(若数据库密码为空,则无需加密)。有关如何使用 dmctl 加密明文密码,参见[使用 dmctl 加密数据库密码](manage-source.md#加密数据库密码)。 -此外,在 DM 运行过程中,上下游数据库的用户必须具备相应的读写权限。在启动迁移任务过程中,DM 会自动进行相应权限的前置检查,详见[上游 MySQL 实例配置前置检查](dm-precheck.md)。 +此外,在 DM 运行过程中,上下游数据库的用户必须具备相应的读写权限。在启动迁移任务过程中,DM 会自动进行相应权限的前置检查,详见[上游 MySQL 实例配置前置检查](precheck.md)。 ### load 处理单元报错 `packet for query is too large. 
Try adjusting the 'max_allowed_packet' variable` diff --git a/zh/dm-export-import-config.md b/zh/export-import-config.md similarity index 96% rename from zh/dm-export-import-config.md rename to zh/export-import-config.md index afe71afb6..3c1cf41ab 100644 --- a/zh/dm-export-import-config.md +++ b/zh/export-import-config.md @@ -1,7 +1,6 @@ --- title: 导出和导入集群的数据源和任务配置 summary: 了解 TiDB Data Migration 导出和导入集群的数据源和任务配置。 -aliases: ['/zh/tidb-data-migration/dev/export-import-config.md/] --- # 导出和导入集群的数据源和任务配置 diff --git a/zh/dm-faq.md b/zh/faq.md similarity index 98% rename from zh/dm-faq.md rename to zh/faq.md index faca76aa9..366bafab4 100644 --- a/zh/dm-faq.md +++ b/zh/faq.md @@ -1,6 +1,6 @@ --- title: Data Migration 常见问题 -aliases: ['/docs-cn/tidb-data-migration/dev/faq/','/zh/tidb-data-migration/dev/faq.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/faq/'] --- # Data Migration 常见问题 @@ -176,7 +176,7 @@ curl -X POST -d "tidb_general_log=0" http://{TiDBIP}:10080/settings if the DDL is not needed, you can use a filter rule with \"*\" schema-pattern to ignore it.\n\t : parse statement: line 1 column 11 near \"EVENT `event_del_big_table` \r\nDISABLE\" %!!(MISSING)(EXTRA string=ALTER EVENT `event_del_big_table` \r\nDISABLE ``` -出现报错的原因是 TiDB parser 无法解析上游的 DDL,例如 `ALTER EVENT`,所以 `sql-skip` 不会按预期生效。可以在任务配置文件中添加 [Binlog 过滤规则](dm-key-features.md#binlog-event-filter)进行过滤,并设置 `schema-pattern: "*"`。从 DM 2.0.1 版本开始,已预设过滤了 `EVENT` 相关语句。 +出现报错的原因是 TiDB parser 无法解析上游的 DDL,例如 `ALTER EVENT`,所以 `sql-skip` 不会按预期生效。可以在任务配置文件中添加 [Binlog 过滤规则](key-features.md#binlog-event-filter)进行过滤,并设置 `schema-pattern: "*"`。从 DM 2.0.1 版本开始,已预设过滤了 `EVENT` 相关语句。 在 DM v2.0 版本之后 `sql-skip` 已经被 `handle-error` 替代,`handle-error` 可以跳过该类错误。 diff --git a/zh/feature-expression-filter.md b/zh/feature-expression-filter.md index 35d2f4ce3..4d0d4ad58 100644 --- a/zh/feature-expression-filter.md +++ b/zh/feature-expression-filter.md @@ -6,7 +6,7 @@ title: 使用 SQL 表达式过滤某些行变更 ## 概述 -在数据迁移的过程中,DM 提供了 [Binlog Event 
Filter](dm-key-features.md#binlog-event-filter) 功能过滤某些类型的 binlog event,例如不向下游迁移 `DELETE` 事件以达到归档、审计等目的。但是 Binlog Event Filter 无法以更细粒度判断某一行的 `DELETE` 事件是否要被过滤。 +在数据迁移的过程中,DM 提供了 [Binlog Event Filter](key-features.md#binlog-event-filter) 功能过滤某些类型的 binlog event,例如不向下游迁移 `DELETE` 事件以达到归档、审计等目的。但是 Binlog Event Filter 无法以更细粒度判断某一行的 `DELETE` 事件是否要被过滤。 为了解决上述问题,DM 支持使用 SQL 表达式过滤某些行变更。DM 支持的 `ROW` 格式的 binlog 中,binlog event 带有所有列的值。用户可以基于这些值配置 SQL 表达式。如果该表达式对于某条行变更的计算结果是 `TRUE`,DM 就不会向下游迁移该条行变更。 @@ -16,7 +16,7 @@ title: 使用 SQL 表达式过滤某些行变更 ## 配置示例 -与 [Binlog Event Filter](dm-key-features.md#binlog-event-filter) 类似,表达式过滤需要在数据迁移任务配置文件里配置,详见下面配置样例。完整的配置及意义,可以参考 [DM 完整配置文件示例](task-configuration-file-full.md#完整配置文件示例): +与 [Binlog Event Filter](key-features.md#binlog-event-filter) 类似,表达式过滤需要在数据迁移任务配置文件里配置,详见下面配置样例。完整的配置及意义,可以参考 [DM 完整配置文件示例](task-configuration-file-full.md#完整配置文件示例): ```yml name: test diff --git a/zh/feature-shard-merge-pessimistic.md b/zh/feature-shard-merge-pessimistic.md index 70a7d3376..192642579 100644 --- a/zh/feature-shard-merge-pessimistic.md +++ b/zh/feature-shard-merge-pessimistic.md @@ -39,7 +39,7 @@ DM 在悲观模式下进行分表 DDL 的迁移有以下几点使用限制: - 增量复制任务需要确认开始迁移的 binlog position 上各分表的表结构必须一致,才能确保来自不同分表的 DML 语句能够迁移到表结构确定的下游,并且后续各分表的 DDL 语句能够正确匹配与迁移。 -- 如果需要变更 [table routing 规则](dm-key-features.md#table-routing),必须先等所有 sharding DDL 语句迁移完成。 +- 如果需要变更 [table routing 规则](key-features.md#table-routing),必须先等所有 sharding DDL 语句迁移完成。 - 在 sharding DDL 语句迁移过程中,使用 dmctl 尝试变更 router-rules 会报错。 @@ -109,7 +109,7 @@ DM 在悲观模式下进行分表 DDL 的迁移有以下几点使用限制: - 如果 sharding group 的所有成员都收到了某一条相同的 DDL 语句,则表明上游分表在该 DDL 执行前的 DML 语句都已经迁移完成,此时可以执行该 DDL 语句,并继续后续的 DML 迁移。 -- 上游所有分表的 DDL 在经过 [table router](dm-key-features.md#table-routing) 转换后需要保持一致,因此仅需 DDL 锁的 owner 执行一次该 DDL 语句即可,其他 DM-worker 可直接忽略对应的 DDL 语句。 +- 上游所有分表的 DDL 在经过 [table router](key-features.md#table-routing) 转换后需要保持一致,因此仅需 DDL 锁的 owner 执行一次该 DDL 语句即可,其他 DM-worker 可直接忽略对应的 DDL 语句。 在上面的示例中,每个 DM-worker 对应的上游 MySQL 实例中只有一个待合并的分表。但在实际场景下,一个 
MySQL 实例可能有多个分库内的多个分表需要进行合并,这种情况下,sharding DDL 的协调迁移过程将更加复杂。 diff --git a/zh/dm-handle-alerts.md b/zh/handle-alerts.md similarity index 75% rename from zh/dm-handle-alerts.md rename to zh/handle-alerts.md index e8438999e..a78250a49 100644 --- a/zh/dm-handle-alerts.md +++ b/zh/handle-alerts.md @@ -1,7 +1,7 @@ --- title: 处理告警 summary: 了解 DM 中各主要告警信息的处理方法。 -aliases: ['/docs-cn/tidb-data-migration/dev/handle-alerts/','/zh/tidb-data-migration/dev/handle-alerts.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/handle-alerts/'] --- # 处理告警 @@ -20,7 +20,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/handle-alerts/','/zh/tidb-data-migra ### `DM_DDL_error` -处理 shard DDL 时出现错误,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 +处理 shard DDL 时出现错误,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 ### `DM_pending_DDL` @@ -30,13 +30,13 @@ aliases: ['/docs-cn/tidb-data-migration/dev/handle-alerts/','/zh/tidb-data-migra ### `DM_task_state` -当 DM-worker 内有子任务处于 `Paused` 状态超过 20 分钟时会触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 +当 DM-worker 内有子任务处于 `Paused` 状态超过 20 分钟时会触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 ## relay log 告警 ### `DM_relay_process_exits_with_error` -当 relay log 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 +当 relay log 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 ### `DM_remain_storage_of_relay_log` @@ -48,40 +48,40 @@ aliases: ['/docs-cn/tidb-data-migration/dev/handle-alerts/','/zh/tidb-data-migra ### `DM_relay_log_data_corruption` -当 relay log 处理单元在校验从上游读取到的 binlog event 且发现 checksum 信息异常时会转为 `Paused` 状态并立即触发告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 +当 relay log 处理单元在校验从上游读取到的 binlog event 且发现 checksum 信息异常时会转为 `Paused` 状态并立即触发告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 ### `DM_fail_to_read_binlog_from_master` -当 relay log 处理单元在尝试从上游读取 binlog event 发生错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 +当 relay log 
处理单元在尝试从上游读取 binlog event 发生错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 ### `DM_fail_to_write_relay_log` -当 relay log 处理单元在尝试将 binlog event 写入 relay log 文件发生错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 +当 relay log 处理单元在尝试将 binlog event 写入 relay log 文件发生错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 ### `DM_binlog_file_gap_between_master_relay` -当 relay log 处理单元已拉取到的最新的 binlog 文件个数落后于当前上游 MySQL/MariaDB 超过 1 个(不含 1 个)且持续 10 分钟时会触发该告警,此时需要参考[性能问题及处理方法](dm-handle-performance-issues.md)对 relay log 处理单元相关的性能问题进行排查与处理。 +当 relay log 处理单元已拉取到的最新的 binlog 文件个数落后于当前上游 MySQL/MariaDB 超过 1 个(不含 1 个)且持续 10 分钟时会触发该告警,此时需要参考[性能问题及处理方法](handle-performance-issues.md)对 relay log 处理单元相关的性能问题进行排查与处理。 ## Dump/Load 告警 ### `DM_dump_process_exists_with_error` -当 Dump 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 +当 Dump 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 ### `DM_load_process_exists_with_error` -当 Load 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 +当 Load 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 ## Binlog replication 告警 ### `DM_sync_process_exists_with_error` -当 Binlog replication 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 +当 Binlog replication 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 ### `DM_binlog_file_gap_between_master_syncer` -当 Binlog replication 处理单元已处理到的最新的 binlog 文件个数落后于当前上游 MySQL/MariaDB 超过 1 个(不含 1 个)且持续 10 分钟时 DM 会触发该告警,此时需要参考[性能问题及处理方法](dm-handle-performance-issues.md)对 Binlog replication 处理单元相关的性能问题进行排查与处理。 +当 Binlog replication 处理单元已处理到的最新的 binlog 文件个数落后于当前上游 MySQL/MariaDB 超过 1 个(不含 1 个)且持续 10 分钟时 DM 会触发该告警,此时需要参考[性能问题及处理方法](handle-performance-issues.md)对 Binlog replication 处理单元相关的性能问题进行排查与处理。 ### 
`DM_binlog_file_gap_between_relay_syncer` -当 Binlog replication 处理单元已处理到的最新的 binlog 文件个数落后于当前 relay log 处理单元超过 1 个(不含 1 个)且持续 10 分钟时 DM 会触发该告警,此时需要参考[性能问题及处理方法](dm-handle-performance-issues.md)对 Binlog replication 处理单元相关的性能问题进行排查与处理。 +当 Binlog replication 处理单元已处理到的最新的 binlog 文件个数落后于当前 relay log 处理单元超过 1 个(不含 1 个)且持续 10 分钟时 DM 会触发该告警,此时需要参考[性能问题及处理方法](handle-performance-issues.md)对 Binlog replication 处理单元相关的性能问题进行排查与处理。 diff --git a/zh/handle-failed-ddl-statements.md b/zh/handle-failed-ddl-statements.md index beac5c2f3..f7597a643 100644 --- a/zh/handle-failed-ddl-statements.md +++ b/zh/handle-failed-ddl-statements.md @@ -29,7 +29,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/skip-or-replace-abnormal-sql-stateme ### query-status -`query-status` 命令用于查询当前 MySQL 实例内子任务及 relay 单元等的状态和错误信息,详见[查询状态](dm-query-status.md)。 +`query-status` 命令用于查询当前 MySQL 实例内子任务及 relay 单元等的状态和错误信息,详见[查询状态](query-status.md)。 ### handle-error diff --git a/zh/dm-handle-performance-issues.md b/zh/handle-performance-issues.md similarity index 97% rename from zh/dm-handle-performance-issues.md rename to zh/handle-performance-issues.md index e5c8f25e5..30cfb86f5 100644 --- a/zh/dm-handle-performance-issues.md +++ b/zh/handle-performance-issues.md @@ -1,7 +1,7 @@ --- title: 性能问题及处理方法 summary: 了解 DM 可能存在的常见性能问题及其处理方法。 -aliases: ['/docs-cn/tidb-data-migration/dev/handle-performance-issues/','/zh/tidb-data-migration/dev/handle-performance-issues.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/handle-performance-issues/'] --- # 性能问题及处理方法 @@ -72,7 +72,7 @@ Binlog replication 模块会根据配置选择从上游 MySQL/MariaDB 或 relay ### binlog event 转换 -Binlog replication 模块从 binlog event 数据中尝试构造 DML、解析 DDL 以及进行 [table router](dm-key-features.md#table-routing) 转换等,主要的性能指标是 `transform binlog event duration`。 +Binlog replication 模块从 binlog event 数据中尝试构造 DML、解析 DDL 以及进行 [table router](key-features.md#table-routing) 转换等,主要的性能指标是 `transform binlog event duration`。 这部分的耗时受上游写入的业务特点影响较大,如对于 `INSERT INTO` 语句,转换单个 `VALUES` 的时间和转换大量 
`VALUES` 的时间差距很多,其波动范围可能从几十微秒至上百微秒,但一般不会成为系统的瓶颈。 diff --git a/zh/dm-key-features.md b/zh/key-features.md similarity index 98% rename from zh/dm-key-features.md rename to zh/key-features.md index 0dc72d958..00de57a6d 100644 --- a/zh/dm-key-features.md +++ b/zh/key-features.md @@ -1,7 +1,7 @@ --- title: 主要特性 summary: 了解 DM 的各主要功能特性或相关的配置选项。 -aliases: ['/docs-cn/tidb-data-migration/dev/key-features/','/docs-cn/tidb-data-migration/dev/feature-overview/','/zh/tidb-data-migration/dev/key-features.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/key-features/','/docs-cn/tidb-data-migration/dev/feature-overview/'] --- # 主要特性 @@ -248,7 +248,7 @@ Binlog event filter 是比迁移表黑白名单更加细粒度的过滤规则, > **注意:** > > - 同一个表匹配上多个规则,将会顺序应用这些规则,并且黑名单的优先级高于白名单,即如果同时存在规则 `Ignore` 和 `Do` 应用在某个 table 上,那么 `Ignore` 生效。 -> - 从 DM v2.0.2 开始,Binlog event filter 也可以在上游数据库配置文件中进行配置。见[上游数据库配置文件介绍](dm-source-configuration-file.md)。 +> - 从 DM v2.0.2 开始,Binlog event filter 也可以在上游数据库配置文件中进行配置。见[上游数据库配置文件介绍](source-configuration-file.md)。 ### 参数配置 @@ -396,7 +396,7 @@ filters: ### 使用限制 - DM 仅针对 gh-ost 与 pt-osc 做了特殊支持。 -- 在开启 `online-ddl` 时,增量复制对应的 checkpoint 应不处于 online DDL 执行过程中。如上游某次 online DDL 操作开始于 binlog `position-A`、结束于 `position-B`,则增量复制的起始点应早于 `position-A` 或晚于 `position-B`,否则可能出现迁移出错,具体可参考 [FAQ](dm-faq.md#设置了-online-ddl-scheme-gh-ost-gh-ost-表相关的-ddl-报错该如何处理)。 +- 在开启 `online-ddl` 时,增量复制对应的 checkpoint 应不处于 online DDL 执行过程中。如上游某次 online DDL 操作开始于 binlog `position-A`、结束于 `position-B`,则增量复制的起始点应早于 `position-A` 或晚于 `position-B`,否则可能出现迁移出错,具体可参考 [FAQ](faq.md#设置了-online-ddl-scheme-gh-ost-gh-ost-表相关的-ddl-报错该如何处理)。 ### 参数配置 diff --git a/zh/maintain-dm-using-tiup.md b/zh/maintain-dm-using-tiup.md index 2eaec7ccc..001adeacd 100644 --- a/zh/maintain-dm-using-tiup.md +++ b/zh/maintain-dm-using-tiup.md @@ -181,7 +181,7 @@ tiup dm scale-in prod-cluster -N 172.16.5.140:8262 > **注意:** > -> 从 v2.0.5 版本开始,dmctl 支持[导出和导入集群的数据源和任务配置](dm-export-import-config.md)。 +> 从 v2.0.5 版本开始,dmctl 
支持[导出和导入集群的数据源和任务配置](export-import-config.md)。 > > 升级前,可使用 `config export` 命令导出集群的配置文件,升级后如需降级回退到旧版本,可重建旧集群后,使用 `config import` 导入之前的配置。 > diff --git a/zh/dm-manage-schema.md b/zh/manage-schema.md similarity index 99% rename from zh/dm-manage-schema.md rename to zh/manage-schema.md index 45e12336e..600af5e44 100644 --- a/zh/dm-manage-schema.md +++ b/zh/manage-schema.md @@ -1,7 +1,6 @@ --- title: 管理迁移表的表结构 summary: 了解如何管理待迁移表在 DM 内部的表结构。 -aliases: ['/zh/tidb-data-migration/dev/manage-schema.md/] --- # 管理迁移表的表结构 diff --git a/zh/dm-manage-source.md b/zh/manage-source.md similarity index 95% rename from zh/dm-manage-source.md rename to zh/manage-source.md index 55fdd79b3..49015ebca 100644 --- a/zh/dm-manage-source.md +++ b/zh/manage-source.md @@ -1,7 +1,7 @@ --- title: 管理上游数据源 summary: 了解如何管理上游 MySQL 实例。 -aliases: ['/docs-cn/tidb-data-migration/dev/manage-source/','/zh/tidb-data-migration/dev/manage-source.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/manage-source/'] --- # 管理上游数据源配置 @@ -72,7 +72,7 @@ Global Flags: operate-source create ./source.yaml ``` -其中 `source.yaml` 的配置参考[上游数据库配置文件介绍](dm-source-configuration-file.md)。 +其中 `source.yaml` 的配置参考[上游数据库配置文件介绍](source-configuration-file.md)。 结果如下: @@ -174,7 +174,7 @@ Global Flags: -s, --source strings MySQL Source ID. 
``` -在改变绑定关系前,DM 会检查待解绑的 worker 是否正在运行同步任务,如果正在运行则需要先[暂停任务](dm-pause-task.md),并在改变绑定关系后[恢复任务](dm-resume-task.md)。 +在改变绑定关系前,DM 会检查待解绑的 worker 是否正在运行同步任务,如果正在运行则需要先[暂停任务](pause-task.md),并在改变绑定关系后[恢复任务](resume-task.md)。 ### 命令用法示例 diff --git a/zh/manually-upgrade-dm-1.0-to-2.0.md b/zh/manually-upgrade-dm-1.0-to-2.0.md index 2e490c52b..3184d0a5a 100644 --- a/zh/manually-upgrade-dm-1.0-to-2.0.md +++ b/zh/manually-upgrade-dm-1.0-to-2.0.md @@ -24,7 +24,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/manually-upgrade-dm-1.0-to-2.0/'] ### 上游数据库配置文件 -在 v2.0+ 中将[上游数据库 source 相关的配置](dm-source-configuration-file.md)从 DM-worker 的进程配置中独立了出来,因此需要根据 [v1.0.x 的 DM-worker 配置](https://docs.pingcap.com/zh/tidb-data-migration/stable/dm-worker-configuration-file)拆分得到 source 配置。 +在 v2.0+ 中将[上游数据库 source 相关的配置](source-configuration-file.md)从 DM-worker 的进程配置中独立了出来,因此需要根据 [v1.0.x 的 DM-worker 配置](https://docs.pingcap.com/zh/tidb-data-migration/stable/dm-worker-configuration-file)拆分得到 source 配置。 > **注意:** > @@ -99,7 +99,7 @@ from: ### 数据迁移任务配置文件 -对于[数据迁移任务配置向导](dm-task-configuration-guide.md),v2.0+ 基本与 v1.0.x 保持兼容,可直接复制 v1.0.x 的配置。 +对于[数据迁移任务配置向导](task-configuration-guide.md),v2.0+ 基本与 v1.0.x 保持兼容,可直接复制 v1.0.x 的配置。 ## 第 2 步:部署 v2.0+ 集群 @@ -117,7 +117,7 @@ from: ## 第 4 步:升级数据迁移任务 -1. 使用 [`operate-source`](dm-manage-source.md#数据源操作) 命令将 [准备 v2.0+ 的配置文件](#第-1-步准备-v20-的配置文件) 中得到的上游数据库 source 配置加载到 v2.0+ 集群中。 +1. 使用 [`operate-source`](manage-source.md#数据源操作) 命令将 [准备 v2.0+ 的配置文件](#第-1-步准备-v20-的配置文件) 中得到的上游数据库 source 配置加载到 v2.0+ 集群中。 2. 在下游 TiDB 中,从 v1.0.x 的数据复制任务对应的增量 checkpoint 表中获取对应的全局 checkpoint 信息。 @@ -159,8 +159,8 @@ from: > > 如在 source 配置中启动了 `enable-gtid`,当前需要通过解析 binlog 或 relay log 文件获取 binlog position 对应的 GTID sets 并在 `meta` 中设置为 `binlog-gtid`。 -4. 使用 [`start-task`](dm-create-task.md) 命令以 v2.0+ 的数据迁移任务配置文件启动升级后的数据迁移任务。 +4. 使用 [`start-task`](create-task.md) 命令以 v2.0+ 的数据迁移任务配置文件启动升级后的数据迁移任务。 -5. 使用 [`query-status`](dm-query-status.md) 命令确认数据迁移任务是否运行正常。 +5. 
使用 [`query-status`](query-status.md) 命令确认数据迁移任务是否运行正常。 如果数据迁移任务运行正常,则表明 DM 升级到 v2.0+ 的操作成功。 diff --git a/zh/migrate-data-using-dm.md b/zh/migrate-data-using-dm.md index eabfac5b4..aecc7798d 100644 --- a/zh/migrate-data-using-dm.md +++ b/zh/migrate-data-using-dm.md @@ -13,7 +13,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/replicate-data-using-dm/','/zh/tidb- > **注意:** > -> - 在 DM 所有的配置文件中,对于数据库密码推荐使用 dmctl 加密后的密文。如果数据库密码为空,则不需要加密。关于如何使用 dmctl 加密明文密码,参考[使用 dmctl 加密数据库密码](dm-manage-source.md#加密数据库密码)。 +> - 在 DM 所有的配置文件中,对于数据库密码推荐使用 dmctl 加密后的密文。如果数据库密码为空,则不需要加密。关于如何使用 dmctl 加密明文密码,参考[使用 dmctl 加密数据库密码](manage-source.md#加密数据库密码)。 > - 上下游数据库用户必须拥有相应的读写权限。 ## 第 2 步:检查集群信息 @@ -36,7 +36,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/replicate-data-using-dm/','/zh/tidb- | 上游 MySQL-2 | 172.16.10.82 | 3306 | root | VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU= | | 下游 TiDB | 172.16.10.83 | 4000 | root | | -上游 MySQL 数据库实例用户所需权限参见[上游 MySQL 实例配置前置检查](dm-precheck.md)介绍。 +上游 MySQL 数据库实例用户所需权限参见[上游 MySQL 实例配置前置检查](precheck.md)介绍。 ## 第 3 步:创建数据源 @@ -115,7 +115,7 @@ mydumpers: ## 第 5 步:启动任务 -为了提前发现数据迁移任务的一些配置错误,DM 中增加了[前置检查](dm-precheck.md)功能: +为了提前发现数据迁移任务的一些配置错误,DM 中增加了[前置检查](precheck.md)功能: - 启动数据迁移任务时,DM 自动检查相应的权限和配置。 - 也可使用 `check-task` 命令手动前置检查上游的 MySQL 实例配置是否符合 DM 的配置要求。 diff --git a/zh/migrate-from-mysql-aurora.md b/zh/migrate-from-mysql-aurora.md index 6b2067bb1..b6023f052 100644 --- a/zh/migrate-from-mysql-aurora.md +++ b/zh/migrate-from-mysql-aurora.md @@ -68,7 +68,7 @@ DM 在增量复制阶段依赖 `ROW` 格式的 binlog,参见[为 Aurora 实例 > **注意:** > > + 基于 GTID 进行数据迁移需要 MySQL 5.7 (Aurora 2.04) 或更高版本。 -> + 除上述 Aurora 特有配置以外,上游数据库需满足迁移 MySQL 的其他要求,例如表结构、字符集、权限等,参见[上游 MySQL 实例检查内容](dm-precheck.md#检查内容)。 +> + 除上述 Aurora 特有配置以外,上游数据库需满足迁移 MySQL 的其他要求,例如表结构、字符集、权限等,参见[上游 MySQL 实例检查内容](precheck.md#检查内容)。 ## 第 2 步:部署 DM 集群 @@ -122,7 +122,7 @@ tiup dmctl --master-addr 127.0.0.1:8261 list-member > **注意:** > -> DM 所使用的配置文件支持明文或密文数据库密码,推荐使用密文数据库密码确保安全。如何获得密文数据库密码,参见[使用 dmctl 加密数据库密码](dm-manage-source.md#加密数据库密码)。 +> 
DM 所使用的配置文件支持明文或密文数据库密码,推荐使用密文数据库密码确保安全。如何获得密文数据库密码,参见[使用 dmctl 加密数据库密码](manage-source.md#加密数据库密码)。 根据示例信息保存如下的数据源配置文件,其中 `source-id` 的值将在第 4 步配置任务时被引用。 diff --git a/zh/dm-open-api.md b/zh/open-api.md similarity index 99% rename from zh/dm-open-api.md rename to zh/open-api.md index 904ee1ef2..83a43a5dd 100644 --- a/zh/dm-open-api.md +++ b/zh/open-api.md @@ -1,7 +1,6 @@ --- title: 使用 OpenAPI 运维集群 summary: 了解如何使用 OpenAPI 接口来管理集群状态和数据同步。 -aliases: ['/zh/tidb-data-migration/dev/open-api.md/] --- # 使用 OpenAPI 运维集群 diff --git a/zh/dm-overview.md b/zh/overview.md similarity index 82% rename from zh/dm-overview.md rename to zh/overview.md index c6c5aab64..08b451d99 100644 --- a/zh/dm-overview.md +++ b/zh/overview.md @@ -1,6 +1,6 @@ --- title: Data Migration 简介 -aliases: ['/docs-cn/tidb-data-migration/dev/overview/','/docs-cn/tools/dm/overview/','/zh/tidb-data-migration/dev/overview.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/overview/','/docs-cn/tools/dm/overview/'] --- # Data Migration 简介 @@ -30,15 +30,15 @@ aliases: ['/docs-cn/tidb-data-migration/dev/overview/','/docs-cn/tools/dm/overvi ### Block & allow lists -[Block & Allow Lists](dm-key-features.md#block--allow-table-lists) 的过滤规则类似于 MySQL `replication-rules-db`/`replication-rules-table`,用于过滤或指定只迁移某些数据库或某些表的所有操作。 +[Block & Allow Lists](key-features.md#block--allow-table-lists) 的过滤规则类似于 MySQL `replication-rules-db`/`replication-rules-table`,用于过滤或指定只迁移某些数据库或某些表的所有操作。 ### Binlog event filter -[Binlog Event Filter](dm-key-features.md#binlog-event-filter) 用于过滤源数据库中特定表的特定类型操作,比如过滤掉表 `test`.`sbtest` 的 `INSERT` 操作或者过滤掉库 `test` 下所有表的 `TRUNCATE TABLE` 操作。 +[Binlog Event Filter](key-features.md#binlog-event-filter) 用于过滤源数据库中特定表的特定类型操作,比如过滤掉表 `test`.`sbtest` 的 `INSERT` 操作或者过滤掉库 `test` 下所有表的 `TRUNCATE TABLE` 操作。 ### Table routing -[Table Routing](dm-key-features.md#table-routing) 是将源数据库的表迁移到下游指定表的路由功能,比如将源数据表 `test`.`sbtest1` 的表结构和数据迁移到 TiDB 的表 `test`.`sbtest2`。它也是分库分表合并迁移所需的一个核心功能。 +[Table 
Routing](key-features.md#table-routing) 是将源数据库的表迁移到下游指定表的路由功能,比如将源数据表 `test`.`sbtest1` 的表结构和数据迁移到 TiDB 的表 `test`.`sbtest2`。它也是分库分表合并迁移所需的一个核心功能。 ## 高级功能 @@ -48,7 +48,7 @@ DM 支持对源数据的分库分表进行合并迁移,但有一些使用限 ### 对第三方 Online Schema Change 工具变更过程的同步优化 -在 MySQL 生态中,gh-ost 与 pt-osc 等工具被广泛使用,DM 对其变更过程进行了特殊的优化,以避免对不必要的中间数据进行迁移。详细信息可参考 [online-ddl](dm-key-features.md#online-ddl-工具支持)。 +在 MySQL 生态中,gh-ost 与 pt-osc 等工具被广泛使用,DM 对其变更过程进行了特殊的优化,以避免对不必要的中间数据进行迁移。详细信息可参考 [online-ddl](key-features.md#online-ddl-工具支持)。 ### 使用 SQL 表达式过滤某些行变更 @@ -75,7 +75,7 @@ DM 支持对源数据的分库分表进行合并迁移,但有一些使用限 - 目前,TiDB 部分兼容 MySQL 支持的 DDL 语句。因为 DM 使用 TiDB parser 来解析处理 DDL 语句,所以目前仅支持 TiDB parser 支持的 DDL 语法。详见 [TiDB DDL 语法支持](https://pingcap.com/docs-cn/dev/reference/mysql-compatibility/#ddl)。 - - DM 遇到不兼容的 DDL 语句时会报错。要解决此报错,需要使用 dmctl 手动处理,要么跳过该 DDL 语句,要么用指定的 DDL 语句来替换它。详见[如何处理不兼容的 DDL 语句](dm-faq.md#如何处理不兼容的-ddl-语句)。 + - DM 遇到不兼容的 DDL 语句时会报错。要解决此报错,需要使用 dmctl 手动处理,要么跳过该 DDL 语句,要么用指定的 DDL 语句来替换它。详见[如何处理不兼容的 DDL 语句](faq.md#如何处理不兼容的-ddl-语句)。 + 分库分表数据冲突合并 diff --git a/zh/dm-pause-task.md b/zh/pause-task.md similarity index 95% rename from zh/dm-pause-task.md rename to zh/pause-task.md index 4c2bce101..8db45435e 100644 --- a/zh/dm-pause-task.md +++ b/zh/pause-task.md @@ -1,7 +1,7 @@ --- title: 暂停数据迁移任务 summary: 了解 TiDB Data Migration 如何暂停数据迁移任务。 -aliases: ['/docs-cn/tidb-data-migration/dev/pause-task/','/zh/tidb-data-migration/dev/pause-task.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/pause-task/'] --- # 暂停数据迁移任务 diff --git a/zh/dm-performance-test.md b/zh/performance-test.md similarity index 93% rename from zh/dm-performance-test.md rename to zh/performance-test.md index 3d56fb43b..fb5d98782 100644 --- a/zh/dm-performance-test.md +++ b/zh/performance-test.md @@ -1,7 +1,7 @@ --- title: DM 集群性能测试 summary: 了解如何测试 DM 集群的性能。 -aliases: ['/docs-cn/tidb-data-migration/dev/performance-test/','/zh/tidb-data-migration/dev/performance-test.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/performance-test/'] --- # DM 
集群性能测试 @@ -51,7 +51,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330 #### 创建数据迁移任务 -1. 创建上游 MySQL 的 source,将 `source-id` 配置为 `source-1`。详细操作方法参考:[加载数据源配置](dm-manage-source.md#数据源操作)。 +1. 创建上游 MySQL 的 source,将 `source-id` 配置为 `source-1`。详细操作方法参考:[加载数据源配置](manage-source.md#数据源操作)。 2. 创建 `full` 模式的 DM 迁移任务,示例任务配置文件如下: @@ -85,7 +85,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330 threads: 32 ``` -创建数据迁移任务的详细操作参考[创建数据迁移任务](dm-create-task.md#创建数据迁移任务)。 +创建数据迁移任务的详细操作参考[创建数据迁移任务](create-task.md#创建数据迁移任务)。 > **注意:** > @@ -110,7 +110,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330 #### 创建数据迁移任务 -1. 创建上游 MySQL 的 source, source-id 配置为 `source-1`(如果在全量迁移性能测试中已经创建,则不需要再次创建)。详细操作方法参考:[加载数据源配置](dm-manage-source.md#数据源操作)。 +1. 创建上游 MySQL 的 source, source-id 配置为 `source-1`(如果在全量迁移性能测试中已经创建,则不需要再次创建)。详细操作方法参考:[加载数据源配置](manage-source.md#数据源操作)。 2. 创建 `all` 模式的 DM 迁移任务,示例任务配置文件如下: @@ -143,7 +143,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330 batch: 100 ``` -创建数据迁移任务的详细操作参考[创建数据迁移任务](dm-create-task.md#创建数据迁移任务)。 +创建数据迁移任务的详细操作参考[创建数据迁移任务](create-task.md#创建数据迁移任务)。 > **注意:** > diff --git a/zh/dm-precheck.md b/zh/precheck.md similarity index 97% rename from zh/dm-precheck.md rename to zh/precheck.md index a504715e6..3c503f072 100644 --- a/zh/dm-precheck.md +++ b/zh/precheck.md @@ -1,7 +1,7 @@ --- title: 上游 MySQL 实例配置前置检查 summary: 了解上游 MySQL 实例配置前置检查。 -aliases: ['/docs-cn/tidb-data-migration/dev/precheck/','/zh/tidb-data-migration/dev/precheck.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/precheck/'] --- # 上游 MySQL 实例配置前置检查 diff --git a/zh/dm-query-status.md b/zh/query-status.md similarity index 99% rename from zh/dm-query-status.md rename to zh/query-status.md index b362fa825..90e22ebea 100644 --- a/zh/dm-query-status.md +++ b/zh/query-status.md @@ -1,7 +1,7 @@ --- title: TiDB Data Migration 查询状态 summary: 深入了解 TiDB Data Migration 
如何查询数据迁移任务状态 -aliases: ['/docs-cn/tidb-data-migration/dev/query-status/','/docs-cn/tidb-data-migration/dev/query-error/','/tidb-data-migration/dev/query-error/','/zh/tidb-data-migration/dev/query-status.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/query-status/','/docs-cn/tidb-data-migration/dev/query-error/','/tidb-data-migration/dev/query-error/'] --- # TiDB Data Migration 查询状态 diff --git a/zh/quick-create-migration-task.md b/zh/quick-create-migration-task.md index 035abba76..a67e8db50 100644 --- a/zh/quick-create-migration-task.md +++ b/zh/quick-create-migration-task.md @@ -17,7 +17,7 @@ summary: 了解在不同业务需求场景下如何配置数据迁移任务。 除了业务需求场景导向的创建数据迁移任务教程之外: - 完整的数据迁移任务配置示例,请参考 [DM 任务完整配置文件介绍](task-configuration-file-full.md) -- 数据迁移任务的配置向导,请参考 [数据迁移任务配置向导](dm-task-configuration-guide.md) +- 数据迁移任务的配置向导,请参考 [数据迁移任务配置向导](task-configuration-guide.md) ## 多数据源汇总迁移到 TiDB diff --git a/zh/quick-start-create-source.md b/zh/quick-start-create-source.md index 0625ad006..c553f53dd 100644 --- a/zh/quick-start-create-source.md +++ b/zh/quick-start-create-source.md @@ -11,7 +11,7 @@ summary: 了解如何为 DM 创建数据源。 本文档介绍如何为 TiDB Data Migration (DM) 的数据迁移任务创建数据源。 -数据源包含了访问迁移任务上游所需的信息。数据迁移任务需要引用对应的数据源来获取访问配置信息。因此,在创建数据迁移任务之前,需要先创建任务的数据源。详细的数据源管理命令请参考[管理上游数据源](dm-manage-source.md)。 +数据源包含了访问迁移任务上游所需的信息。数据迁移任务需要引用对应的数据源来获取访问配置信息。因此,在创建数据迁移任务之前,需要先创建任务的数据源。详细的数据源管理命令请参考[管理上游数据源](manage-source.md)。 ## 第一步:配置数据源 @@ -57,7 +57,7 @@ summary: 了解如何为 DM 创建数据源。 tiup dmctl --master-addr operate-source create ./source-mysql-01.yaml ``` -数据源配置文件的其他配置参考[数据源配置文件介绍](dm-source-configuration-file.md)。 +数据源配置文件的其他配置参考[数据源配置文件介绍](source-configuration-file.md)。 命令返回结果如下: diff --git a/zh/relay-log.md b/zh/relay-log.md index 59aa4fa43..37e459a7f 100644 --- a/zh/relay-log.md +++ b/zh/relay-log.md @@ -8,7 +8,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/relay-log/'] DM (Data Migration) 工具的 relay log 由若干组有编号的文件和一个索引文件组成。这些有编号的文件包含了描述数据库更改的事件。索引文件包含所有使用过的 relay log 的文件名。 -在启用 relay log 功能后,DM-worker 会自动将上游 binlog 
迁移到本地配置目录(若使用 TiUP 部署 DM,则迁移目录默认为 ` / `)。本地配置目录 `` 的默认值是 `relay-dir`,可在[上游数据库配置文件](dm-source-configuration-file.md)中进行修改)。DM-worker 在运行过程中,会将上游 binlog 实时迁移到本地文件。DM-worker 的 sync 处理单元会实时读取本地 relay log 的 binlog 事件,将这些事件转换为 SQL 语句,再将 SQL 语句迁移到下游数据库。 +在启用 relay log 功能后,DM-worker 会自动将上游 binlog 迁移到本地配置目录(若使用 TiUP 部署 DM,则迁移目录默认为 ` / `)。本地配置目录 `` 的默认值是 `relay-dir`,可在[上游数据库配置文件](source-configuration-file.md)中进行修改)。DM-worker 在运行过程中,会将上游 binlog 实时迁移到本地文件。DM-worker 的 sync 处理单元会实时读取本地 relay log 的 binlog 事件,将这些事件转换为 SQL 语句,再将 SQL 语句迁移到下游数据库。 > **注意:** > @@ -99,7 +99,7 @@ Relay log 迁移的起始位置由如下规则决定: > **注意:** > -> 自 v2.0.2 起,上游数据源配置中的 `enable-relay` 项已经失效。在[加载数据源配置](dm-manage-source.md#数据源操作)时,如果发现配置中的 `enable-relay` 项为 `true`,DM 会给出如下信息提示: +> 自 v2.0.2 起,上游数据源配置中的 `enable-relay` 项已经失效。在[加载数据源配置](manage-source.md#数据源操作)时,如果发现配置中的 `enable-relay` 项为 `true`,DM 会给出如下信息提示: > > ``` > Please use `start-relay` to specify which workers should pull relay log of relay-enabled sources. @@ -141,7 +141,7 @@ Relay log 迁移的起始位置由如下规则决定: 在 v2.0.2 之前的版本(不含 v2.0.2),DM-worker 在绑定上游数据源时,会检查上游数据源配置中的 `enable-relay` 项。如果 `enable-relay` 为 `true`,则为该数据源启用 relay log 功能。 -具体配置方式参见[上游数据源配置文件介绍](dm-source-configuration-file.md) +具体配置方式参见[上游数据源配置文件介绍](source-configuration-file.md) diff --git a/zh/dm-resume-task.md b/zh/resume-task.md similarity index 92% rename from zh/dm-resume-task.md rename to zh/resume-task.md index bc4d0b076..b7c0d194b 100644 --- a/zh/dm-resume-task.md +++ b/zh/resume-task.md @@ -1,7 +1,7 @@ --- title: 恢复数据迁移任务 summary: 了解 TiDB Data Migration 如何恢复数据迁移任务。 -aliases: ['/docs-cn/tidb-data-migration/dev/resume-task/','/zh/tidb-data-migration/dev/resume-task.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/resume-task/'] --- # 恢复数据迁移任务 diff --git a/zh/shard-merge-best-practices.md b/zh/shard-merge-best-practices.md index 03dbfaffa..030e4595e 100644 --- a/zh/shard-merge-best-practices.md +++ b/zh/shard-merge-best-practices.md @@ -117,7 +117,7 @@ CREATE TABLE `tbl_multi_pk` ( ## 上游 RDS 封装分库分表的处理 
-上游数据源为 RDS 且使用了其分库分表功能的情况下,MySQL binlog 中的表名在 SQL client 连接时可能并不可见。例如在 UCloud 分布式数据库 [UDDB](https://www.ucloud.cn/site/product/uddb.html) 中,其 binlog 表名可能会多出 `_0001` 的后缀。这需要根据 binlog 中的表名规律,而不是 SQL client 所见的表名,来配置 [table routing 规则](dm-key-features.md#table-routing)。 +上游数据源为 RDS 且使用了其分库分表功能的情况下,MySQL binlog 中的表名在 SQL client 连接时可能并不可见。例如在 UCloud 分布式数据库 [UDDB](https://www.ucloud.cn/site/product/uddb.html) 中,其 binlog 表名可能会多出 `_0001` 的后缀。这需要根据 binlog 中的表名规律,而不是 SQL client 所见的表名,来配置 [table routing 规则](key-features.md#table-routing)。 ## 合表迁移过程中在上游增/删表 diff --git a/zh/dm-source-configuration-file.md b/zh/source-configuration-file.md similarity index 97% rename from zh/dm-source-configuration-file.md rename to zh/source-configuration-file.md index d18c71ce2..afcdfa292 100644 --- a/zh/dm-source-configuration-file.md +++ b/zh/source-configuration-file.md @@ -1,6 +1,6 @@ --- title: 上游数据库配置文件介绍 -aliases: ['/docs-cn/tidb-data-migration/dev/source-configuration-file/','/zh/tidb-data-migration/dev/source-configuration-file.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/source-configuration-file/'] --- # 上游数据库配置文件介绍 @@ -107,4 +107,4 @@ DM 会定期检查当前任务状态以及错误信息,判断恢复任务能 | 配置项 | 说明 | | :------------ | :--------------------------------------- | | `case-sensitive` | Binlog event filter 标识符是否大小写敏感。默认值:false。| -| `filters` | 配置 Binlog event filter,含义见 [Binlog event filter 参数解释](dm-key-features.md#参数解释-2)。 | +| `filters` | 配置 Binlog event filter,含义见 [Binlog event filter 参数解释](key-features.md#参数解释-2)。 | diff --git a/zh/dm-stop-task.md b/zh/stop-task.md similarity index 89% rename from zh/dm-stop-task.md rename to zh/stop-task.md index 59dbe01e1..d45f2b406 100644 --- a/zh/dm-stop-task.md +++ b/zh/stop-task.md @@ -1,12 +1,12 @@ --- title: 停止数据迁移任务 summary: 了解 TiDB Data Migration 如何停止数据迁移任务。 -aliases: ['/docs-cn/tidb-data-migration/dev/stop-task/','/zh/tidb-data-migration/dev/stop-task.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/stop-task/'] --- # 停止数据迁移任务 -`stop-task` 命令用于停止数据迁移任务。有关 
`stop-task` 与 `pause-task` 的区别,请参考[暂停数据迁移任务](dm-pause-task.md)中的相关说明。 +`stop-task` 命令用于停止数据迁移任务。有关 `stop-task` 与 `pause-task` 的区别,请参考[暂停数据迁移任务](pause-task.md)中的相关说明。 {{< copyable "" >}} diff --git a/zh/task-configuration-file-full.md b/zh/task-configuration-file-full.md index 98d28780c..0b37e3b7b 100644 --- a/zh/task-configuration-file-full.md +++ b/zh/task-configuration-file-full.md @@ -7,11 +7,11 @@ aliases: ['/docs-cn/tidb-data-migration/dev/task-configuration-file-full/','/zh/ 本文档主要介绍 Data Migration (DM) 的任务完整的配置文件,包含[全局配置](#全局配置) 和[实例配置](#实例配置) 两部分。 -关于各配置项的功能和配置,请参阅[数据迁移功能](dm-overview.md#基本功能)。 +关于各配置项的功能和配置,请参阅[数据迁移功能](overview.md#基本功能)。 ## 关键概念 -关于包括 `source-id` 和 DM-worker ID 在内的关键概念的介绍,请参阅[关键概念](dm-config-overview.md#关键概念)。 +关于包括 `source-id` 和 DM-worker ID 在内的关键概念的介绍,请参阅[关键概念](config-overview.md#关键概念)。 ## 完整配置文件示例 @@ -177,9 +177,9 @@ mysql-instances: | 配置项 | 说明 | | :------------ | :--------------------------------------- | -| `routes` | 上游和下游表之间的路由 table routing 规则集。如果上游与下游的库名、表名一致,则不需要配置该项。使用场景及示例配置参见 [Table Routing](dm-key-features.md#table-routing) | -| `filters` | 上游数据库实例匹配的表的 binlog event filter 规则集。如果不需要对 binlog 进行过滤,则不需要配置该项。使用场景及示例配置参见 [Binlog Event Filter](dm-key-features.md#binlog-event-filter) | -| `block-allow-list` | 该上游数据库实例匹配的表的 block & allow lists 过滤规则集。建议通过该项指定需要迁移的库和表,否则会迁移所有的库和表。使用场景及示例配置参见 [Block & Allow Lists](dm-key-features.md#block--allow-table-lists) | +| `routes` | 上游和下游表之间的路由 table routing 规则集。如果上游与下游的库名、表名一致,则不需要配置该项。使用场景及示例配置参见 [Table Routing](key-features.md#table-routing) | +| `filters` | 上游数据库实例匹配的表的 binlog event filter 规则集。如果不需要对 binlog 进行过滤,则不需要配置该项。使用场景及示例配置参见 [Binlog Event Filter](key-features.md#binlog-event-filter) | +| `block-allow-list` | 该上游数据库实例匹配的表的 block & allow lists 过滤规则集。建议通过该项指定需要迁移的库和表,否则会迁移所有的库和表。使用场景及示例配置参见 [Block & Allow Lists](key-features.md#block--allow-table-lists) | | `mydumpers` | dump 处理单元的运行配置参数。如果默认配置可以满足需求,则不需要配置该项,也可以只使用 `mydumper-thread` 对 `thread` 配置项单独进行配置。 | | `loaders` | load 
处理单元的运行配置参数。如果默认配置可以满足需求,则不需要配置该项,也可以只使用 `loader-thread` 对 `pool-size` 配置项单独进行配置。 | | `syncers` | sync 处理单元的运行配置参数。如果默认配置可以满足需求,则不需要配置该项,也可以只使用 `syncer-thread` 对 `worker-count` 配置项单独进行配置。 | diff --git a/zh/task-configuration-file.md b/zh/task-configuration-file.md index fcedf7f45..af1ab95d6 100644 --- a/zh/task-configuration-file.md +++ b/zh/task-configuration-file.md @@ -7,11 +7,11 @@ aliases: ['/docs-cn/tidb-data-migration/dev/task-configuration-file/'] 本文档主要介绍 Data Migration (DM) 的任务基础配置文件,包含[全局配置](#全局配置)和[实例配置](#实例配置)两部分。 -完整的任务配置参见 [DM 任务完整配置文件介绍](task-configuration-file-full.md)。关于各配置项的功能和配置,请参阅[数据迁移功能](dm-key-features.md)。 +完整的任务配置参见 [DM 任务完整配置文件介绍](task-configuration-file-full.md)。关于各配置项的功能和配置,请参阅[数据迁移功能](key-features.md)。 ## 关键概念 -关于包括 `source-id` 和 DM-worker ID 在内的关键概念的介绍,请参阅[关键概念](dm-config-overview.md#关键概念)。 +关于包括 `source-id` 和 DM-worker ID 在内的关键概念的介绍,请参阅[关键概念](config-overview.md#关键概念)。 ## 基础配置文件示例 @@ -78,7 +78,7 @@ mysql-instances: ### 功能配置集 -对于一般的业务场景,只需要配置黑白名单过滤规则集,配置说明参见以上示例配置文件中 `block-allow-list` 的注释以及 [Block & Allow Lists](dm-key-features.md#block--allow-table-lists) +对于一般的业务场景,只需要配置黑白名单过滤规则集,配置说明参见以上示例配置文件中 `block-allow-list` 的注释以及 [Block & Allow Lists](key-features.md#block--allow-table-lists) ## 实例配置 diff --git a/zh/dm-task-configuration-guide.md b/zh/task-configuration-guide.md similarity index 95% rename from zh/dm-task-configuration-guide.md rename to zh/task-configuration-guide.md index dccfbeba3..6037d4634 100644 --- a/zh/dm-task-configuration-guide.md +++ b/zh/task-configuration-guide.md @@ -1,6 +1,5 @@ --- title: DM 数据迁移任务配置向导 -aliases: ['/zh/tidb-data-migration/dev/task-configuration-guide.md/] --- # 数据迁移任务配置向导 @@ -11,9 +10,9 @@ aliases: ['/zh/tidb-data-migration/dev/task-configuration-guide.md/] 配置需要迁移的数据源之前,首先应该确认已经在 DM 创建相应数据源: -- 查看数据源可以参考 [查看数据源配置](dm-manage-source.md#查看数据源配置) +- 查看数据源可以参考 [查看数据源配置](manage-source.md#查看数据源配置) - 创建数据源可以参考 [在 DM 创建数据源](migrate-data-using-dm.md#第-3-步创建数据源) -- 数据源配置可以参考 
[数据源配置文件介绍](dm-source-configuration-file.md) +- 数据源配置可以参考 [数据源配置文件介绍](source-configuration-file.md) 仿照下面的 `mysql-instances:` 示例定义数据迁移任务需要同步的单个或者多个数据源。 @@ -56,7 +55,7 @@ target-database: # 目标 TiDB 配置 如果不需要过滤或迁移特定表,可以跳过该项配置。 -配置从数据源迁移表的黑白名单,则需要添加两个定义,详细配置规则参考 [Block & Allow Lists](dm-key-features.md#block--allow-table-lists): +配置从数据源迁移表的黑白名单,则需要添加两个定义,详细配置规则参考 [Block & Allow Lists](key-features.md#block--allow-table-lists): 1. 定义全局的黑白名单规则 @@ -90,7 +89,7 @@ target-database: # 目标 TiDB 配置 如果不需要过滤特定库或者特定表的特定操作,可以跳过该项配置。 -配置过滤特定操作,则需要添加两个定义,详细配置规则参考 [Binlog Event Filter](dm-key-features.md#binlog-event-filter): +配置过滤特定操作,则需要添加两个定义,详细配置规则参考 [Binlog Event Filter](key-features.md#binlog-event-filter): 1. 定义全局的数据源操作过滤规则 @@ -123,7 +122,7 @@ target-database: # 目标 TiDB 配置 如果不需要将数据源表路由到不同名的目标 TiDB 表,可以跳过该项配置。分库分表合并迁移的场景必须配置该规则。 -配置数据源表迁移到目标 TiDB 表的路由规则,则需要添加两个定义,详细配置规则参考 [Table Routing](dm-key-features.md#table-routing): +配置数据源表迁移到目标 TiDB 表的路由规则,则需要添加两个定义,详细配置规则参考 [Table Routing](key-features.md#table-routing): 1. 
定义全局的路由规则 @@ -168,7 +167,7 @@ shard-mode: "pessimistic" # 默认值为 "" 即无需协调。如果为分 ## 其他配置 -下面是本数据迁移任务配置向导的完整示例。完整的任务配置参见 [DM 任务完整配置文件介绍](task-configuration-file-full.md),其他各配置项的功能和配置也可参阅[数据迁移功能](dm-key-features.md)。 +下面是本数据迁移任务配置向导的完整示例。完整的任务配置参见 [DM 任务完整配置文件介绍](task-configuration-file-full.md),其他各配置项的功能和配置也可参阅[数据迁移功能](key-features.md)。 ```yaml --- diff --git a/zh/dm-tune-configuration.md b/zh/tune-configuration.md similarity index 98% rename from zh/dm-tune-configuration.md rename to zh/tune-configuration.md index cedef34af..943e82f84 100644 --- a/zh/dm-tune-configuration.md +++ b/zh/tune-configuration.md @@ -1,7 +1,7 @@ --- title: DM 配置优化 summary: 介绍如何通过优化配置来提高数据迁移性能。 -aliases: ['/docs-cn/tidb-data-migration/dev/tune-configuration/','/zh/tidb-data-migration/dev/tune-configuration.md/] +aliases: ['/docs-cn/tidb-data-migration/dev/tune-configuration/'] --- # DM 配置优化 diff --git a/zh/usage-scenario-downstream-more-columns.md b/zh/usage-scenario-downstream-more-columns.md index 82b600ea5..98d537e30 100644 --- a/zh/usage-scenario-downstream-more-columns.md +++ b/zh/usage-scenario-downstream-more-columns.md @@ -48,7 +48,7 @@ summary: 了解如何在下游表结构比数据源存在更多列的情况下 出现以上错误的原因是 DM 迁移 binlog event 时,如果 DM 内部没有维护对应于该表的表结构,则会尝试使用下游当前的表结构来解析 binlog event 并生成相应的 DML 语句。如果 binlog event 里数据的列数与下游表结构的列数不一致时,则会产生上述错误。 -此时,我们可以使用 [`operate-schema`](dm-manage-schema.md) 命令来为该表指定与 binlog event 匹配的表结构。如果你在进行分表合并的数据迁移,那么需要为每个分表按照如下步骤在 DM 中设置用于解析 MySQL binlog 的表结构。具体操作为: +此时,我们可以使用 [`operate-schema`](manage-schema.md) 命令来为该表指定与 binlog event 匹配的表结构。如果你在进行分表合并的数据迁移,那么需要为每个分表按照如下步骤在 DM 中设置用于解析 MySQL binlog 的表结构。具体操作为: 1. 为数据源中需要迁移的表 `log.messages` 指定表结构,表结构需要对应 DM 将要开始同步的 binlog event 的数据。将对应的 `CREATE TABLE` 表结构语句并保存到文件,例如将以下表结构保存到 `log.messages.sql` 中。 @@ -60,7 +60,7 @@ summary: 了解如何在下游表结构比数据源存在更多列的情况下 ) ``` -2. 使用 [`operate-schema`](dm-manage-schema.md) 命令设置表结构(此时 task 应该由于上述错误而处于 `Paused` 状态)。 +2. 
使用 [`operate-schema`](manage-schema.md) 命令设置表结构(此时 task 应该由于上述错误而处于 `Paused` 状态)。 {{< copyable "" >}} @@ -68,6 +68,6 @@ summary: 了解如何在下游表结构比数据源存在更多列的情况下 tiup dmctl --master-addr operate-schema set -s mysql-01 task-test -d log -t message log.message.sql ``` -3. 使用 [`resume-task`](dm-resume-task.md) 命令恢复处于 `Paused` 状态的任务。 +3. 使用 [`resume-task`](resume-task.md) 命令恢复处于 `Paused` 状态的任务。 -4. 使用 [`query-status`](dm-query-status.md) 命令确认数据迁移任务是否运行正常。 +4. 使用 [`query-status`](query-status.md) 命令确认数据迁移任务是否运行正常。 diff --git a/zh/usage-scenario-shard-merge.md b/zh/usage-scenario-shard-merge.md index 299bd820f..e5e0ea7ed 100644 --- a/zh/usage-scenario-shard-merge.md +++ b/zh/usage-scenario-shard-merge.md @@ -78,7 +78,7 @@ CREATE TABLE `sale_01` ( ## 迁移方案 -- 要满足迁移需求 #1,无需配置 [table routing 规则](dm-key-features.md#table-routing)。按照[去掉自增主键的主键属性](shard-merge-best-practices.md#去掉自增主键的主键属性)的要求,在下游手动建表。 +- 要满足迁移需求 #1,无需配置 [table routing 规则](key-features.md#table-routing)。按照[去掉自增主键的主键属性](shard-merge-best-practices.md#去掉自增主键的主键属性)的要求,在下游手动建表。 {{< copyable "sql" >}} @@ -101,7 +101,7 @@ CREATE TABLE `sale_01` ( ignore-checking-items: ["auto_increment_ID"] ``` -- 要满足迁移需求 #2,配置 [table routing 规则](dm-key-features.md#table-routing)如下: +- 要满足迁移需求 #2,配置 [table routing 规则](key-features.md#table-routing)如下: {{< copyable "" >}} @@ -118,7 +118,7 @@ CREATE TABLE `sale_01` ( target-table: "sale" ``` -- 要满足迁移需求 #3,配置 [Block & Allow Lists](dm-key-features.md#block--allow-table-lists) 如下: +- 要满足迁移需求 #3,配置 [Block & Allow Lists](key-features.md#block--allow-table-lists) 如下: {{< copyable "" >}} @@ -131,7 +131,7 @@ CREATE TABLE `sale_01` ( tbl-name: "log_bak" ``` -- 要满足迁移需求 #4,配置 [Binlog event filter 规则](dm-key-features.md#binlog-event-filter)如下: +- 要满足迁移需求 #4,配置 [Binlog event filter 规则](key-features.md#binlog-event-filter)如下: {{< copyable "" >}} @@ -151,7 +151,7 @@ CREATE TABLE `sale_01` ( ## 迁移任务配置 -迁移任务的完整配置如下,更多详情请参阅[数据迁移任务配置向导](dm-task-configuration-guide.md)。 
+迁移任务的完整配置如下,更多详情请参阅[数据迁移任务配置向导](task-configuration-guide.md)。 {{< copyable "" >}} diff --git a/zh/usage-scenario-simple-migration.md b/zh/usage-scenario-simple-migration.md index 2e87461b0..967b4d94a 100644 --- a/zh/usage-scenario-simple-migration.md +++ b/zh/usage-scenario-simple-migration.md @@ -68,7 +68,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', ## 迁移方案 -- 为了满足[迁移要求](#迁移要求)中第一点的前三条要求,需要配置以下 [table routing 规则](dm-key-features.md#table-routing): +- 为了满足[迁移要求](#迁移要求)中第一点的前三条要求,需要配置以下 [table routing 规则](key-features.md#table-routing): {{< copyable "" >}} @@ -86,7 +86,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', target-schema: "user_south" ``` -- 为了满足[迁移要求](#迁移要求)中第二点的第一条要求,需要配置以下 [table routing 规则](dm-key-features.md#table-routing): +- 为了满足[迁移要求](#迁移要求)中第二点的第一条要求,需要配置以下 [table routing 规则](key-features.md#table-routing): {{< copyable "" >}} @@ -105,7 +105,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', target-table: "store_shenzhen" ``` -- 为了满足[迁移要求](#迁移要求)中第一点的第四条要求,需要配置以下 [binlog event filter 规则](dm-key-features.md#binlog-event-filter): +- 为了满足[迁移要求](#迁移要求)中第一点的第四条要求,需要配置以下 [binlog event filter 规则](key-features.md#binlog-event-filter): {{< copyable "" >}} @@ -123,7 +123,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', action: Ignore ``` -- 为了满足[迁移要求](#迁移要求)中第二点的第二条要求,需要配置以下 [binlog event filter 规则](dm-key-features.md#binlog-event-filter): +- 为了满足[迁移要求](#迁移要求)中第二点的第二条要求,需要配置以下 [binlog event filter 规则](key-features.md#binlog-event-filter): {{< copyable "" >}} @@ -140,7 +140,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', > > `store-filter-rule` 不同于 `log-filter-rule` 和 `user-filter-rule`。`store-filter-rule` 是针对整个 `store` 库的规则,而 `log-filter-rule` 和 `user-filter-rule` 是针对 `user` 库中 `log` 表的规则。 -- 为了满足[迁移要求](#迁移要求)中的第三点要求,需要配置以下 [Block & Allow 
Lists](dm-key-features.md#block--allow-table-lists): +- 为了满足[迁移要求](#迁移要求)中的第三点要求,需要配置以下 [Block & Allow Lists](key-features.md#block--allow-table-lists): {{< copyable "" >}} @@ -152,7 +152,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', ## 迁移任务配置 -以下是完整的迁移任务配置,更多详情请参阅 [数据迁移任务配置向导](dm-task-configuration-guide.md)。 +以下是完整的迁移任务配置,更多详情请参阅 [数据迁移任务配置向导](task-configuration-guide.md)。 {{< copyable "" >}} From 9aa331274c37f4e25d953a29d751f6847bc6b4aa Mon Sep 17 00:00:00 2001 From: qiancai Date: Tue, 21 Dec 2021 19:48:09 +0800 Subject: [PATCH 08/11] revert_changes --- en/TOC.md | 58 +-- en/_index.md | 10 +- en/benchmark-v1.0-ga.md | 6 +- en/benchmark-v2.0-ga.md | 6 +- en/dm-alert-rules.md | 27 +- ...hmark-v5.3.0.md => dm-benchmark-v5.3.0.md} | 7 +- ...line-flags.md => dm-command-line-flags.md} | 1 + ...nfig-overview.md => dm-config-overview.md} | 10 +- en/{create-task.md => dm-create-task.md} | 3 +- en/dm-daily-check.md | 2 +- en/{enable-tls.md => dm-enable-tls.md} | 1 + ...error-handling.md => dm-error-handling.md} | 12 +- ...t-config.md => dm-export-import-config.md} | 1 + en/{faq.md => dm-faq.md} | 4 +- en/dm-glossary.md | 10 +- en/{handle-alerts.md => dm-handle-alerts.md} | 386 +++++++++--------- ...ues.md => dm-handle-performance-issues.md} | 3 +- en/dm-hardware-and-software-requirements.md | 2 +- en/{key-features.md => dm-key-features.md} | 6 +- en/{manage-schema.md => dm-manage-schema.md} | 1 + en/{manage-source.md => dm-manage-source.md} | 5 +- en/{open-api.md => dm-open-api.md} | 1 + en/{overview.md => dm-overview.md} | 12 +- en/{pause-task.md => dm-pause-task.md} | 1 + ...ormance-test.md => dm-performance-test.md} | 9 +- en/{precheck.md => dm-precheck.md} | 2 +- en/{query-status.md => dm-query-status.md} | 2 +- en/{resume-task.md => dm-resume-task.md} | 1 + ...ile.md => dm-source-configuration-file.md} | 4 +- en/{stop-task.md => dm-stop-task.md} | 3 +- ...uide.md => dm-task-configuration-guide.md} | 13 +- ...figuration.md => 
dm-tune-configuration.md} | 1 + en/feature-expression-filter.md | 4 +- en/feature-shard-merge-pessimistic.md | 4 +- en/handle-failed-ddl-statements.md | 2 +- en/maintain-dm-using-tiup.md | 2 +- en/manually-upgrade-dm-1.0-to-2.0.md | 10 +- en/migrate-data-using-dm.md | 6 +- en/migrate-from-mysql-aurora.md | 4 +- en/quick-create-migration-task.md | 2 +- en/quick-start-create-source.md | 4 +- en/relay-log.md | 4 +- en/task-configuration-file-full.md | 10 +- en/task-configuration-file.md | 6 +- en/usage-scenario-downstream-more-columns.md | 8 +- en/usage-scenario-shard-merge.md | 10 +- en/usage-scenario-simple-migration.md | 12 +- zh/TOC.md | 56 +-- zh/_index.md | 14 +- zh/benchmark-v1.0-ga.md | 6 +- zh/benchmark-v2.0-ga.md | 6 +- zh/dm-alert-rules.md | 2 +- ...hmark-v5.3.0.md => dm-benchmark-v5.3.0.md} | 7 +- ...line-flags.md => dm-command-line-flags.md} | 2 +- ...nfig-overview.md => dm-config-overview.md} | 10 +- zh/{create-task.md => dm-create-task.md} | 4 +- zh/dm-daily-check.md | 2 +- zh/{enable-tls.md => dm-enable-tls.md} | 1 + ...error-handling.md => dm-error-handling.md} | 12 +- ...t-config.md => dm-export-import-config.md} | 1 + zh/{faq.md => dm-faq.md} | 4 +- zh/dm-glossary.md | 10 +- zh/{handle-alerts.md => dm-handle-alerts.md} | 26 +- ...ues.md => dm-handle-performance-issues.md} | 4 +- zh/dm-hardware-and-software-requirements.md | 2 +- zh/{key-features.md => dm-key-features.md} | 6 +- zh/{manage-schema.md => dm-manage-schema.md} | 1 + zh/{manage-source.md => dm-manage-source.md} | 6 +- zh/{open-api.md => dm-open-api.md} | 1 + zh/{overview.md => dm-overview.md} | 12 +- zh/{pause-task.md => dm-pause-task.md} | 2 +- ...ormance-test.md => dm-performance-test.md} | 10 +- zh/{precheck.md => dm-precheck.md} | 2 +- zh/{query-status.md => dm-query-status.md} | 2 +- zh/{resume-task.md => dm-resume-task.md} | 2 +- ...ile.md => dm-source-configuration-file.md} | 4 +- zh/{stop-task.md => dm-stop-task.md} | 4 +- ...uide.md => dm-task-configuration-guide.md} | 13 +- 
...figuration.md => dm-tune-configuration.md} | 2 +- zh/feature-expression-filter.md | 4 +- zh/feature-shard-merge-pessimistic.md | 4 +- zh/handle-failed-ddl-statements.md | 2 +- zh/maintain-dm-using-tiup.md | 2 +- zh/manually-upgrade-dm-1.0-to-2.0.md | 10 +- zh/migrate-data-using-dm.md | 6 +- zh/migrate-from-mysql-aurora.md | 4 +- zh/quick-create-migration-task.md | 2 +- zh/quick-start-create-source.md | 4 +- zh/relay-log.md | 6 +- zh/shard-merge-best-practices.md | 2 +- zh/task-configuration-file-full.md | 10 +- zh/task-configuration-file.md | 6 +- zh/usage-scenario-downstream-more-columns.md | 8 +- zh/usage-scenario-shard-merge.md | 10 +- zh/usage-scenario-simple-migration.md | 12 +- 95 files changed, 527 insertions(+), 499 deletions(-) rename en/{benchmark-v5.3.0.md => dm-benchmark-v5.3.0.md} (97%) rename en/{command-line-flags.md => dm-command-line-flags.md} (98%) rename en/{config-overview.md => dm-config-overview.md} (80%) rename en/{create-task.md => dm-create-task.md} (93%) rename en/{enable-tls.md => dm-enable-tls.md} (99%) rename en/{error-handling.md => dm-error-handling.md} (97%) rename en/{export-import-config.md => dm-export-import-config.md} (97%) rename en/{faq.md => dm-faq.md} (98%) rename en/{handle-alerts.md => dm-handle-alerts.md} (91%) rename en/{handle-performance-issues.md => dm-handle-performance-issues.md} (97%) rename en/{key-features.md => dm-key-features.md} (98%) rename en/{manage-schema.md => dm-manage-schema.md} (99%) rename en/{manage-source.md => dm-manage-source.md} (96%) rename en/{open-api.md => dm-open-api.md} (99%) rename en/{overview.md => dm-overview.md} (80%) rename en/{pause-task.md => dm-pause-task.md} (97%) rename en/{performance-test.md => dm-performance-test.md} (94%) rename en/{precheck.md => dm-precheck.md} (97%) rename en/{query-status.md => dm-query-status.md} (99%) rename en/{resume-task.md => dm-resume-task.md} (96%) rename en/{source-configuration-file.md => dm-source-configuration-file.md} (97%) rename 
en/{stop-task.md => dm-stop-task.md} (92%) rename en/{task-configuration-guide.md => dm-task-configuration-guide.md} (97%) rename en/{tune-configuration.md => dm-tune-configuration.md} (98%) rename zh/{benchmark-v5.3.0.md => dm-benchmark-v5.3.0.md} (93%) rename zh/{command-line-flags.md => dm-command-line-flags.md} (98%) rename zh/{config-overview.md => dm-config-overview.md} (74%) rename zh/{create-task.md => dm-create-task.md} (91%) rename zh/{enable-tls.md => dm-enable-tls.md} (98%) rename zh/{error-handling.md => dm-error-handling.md} (97%) rename zh/{export-import-config.md => dm-export-import-config.md} (97%) rename zh/{faq.md => dm-faq.md} (98%) rename zh/{handle-alerts.md => dm-handle-alerts.md} (75%) rename zh/{handle-performance-issues.md => dm-handle-performance-issues.md} (97%) rename zh/{key-features.md => dm-key-features.md} (98%) rename zh/{manage-schema.md => dm-manage-schema.md} (99%) rename zh/{manage-source.md => dm-manage-source.md} (95%) rename zh/{open-api.md => dm-open-api.md} (99%) rename zh/{overview.md => dm-overview.md} (83%) rename zh/{pause-task.md => dm-pause-task.md} (95%) rename zh/{performance-test.md => dm-performance-test.md} (93%) rename zh/{precheck.md => dm-precheck.md} (97%) rename zh/{query-status.md => dm-query-status.md} (99%) rename zh/{resume-task.md => dm-resume-task.md} (92%) rename zh/{source-configuration-file.md => dm-source-configuration-file.md} (97%) rename zh/{stop-task.md => dm-stop-task.md} (89%) rename zh/{task-configuration-guide.md => dm-task-configuration-guide.md} (95%) rename zh/{tune-configuration.md => dm-tune-configuration.md} (98%) diff --git a/en/TOC.md b/en/TOC.md index f33d01a29..8f65b68eb 100644 --- a/en/TOC.md +++ b/en/TOC.md @@ -2,12 +2,12 @@ - About DM - - [DM Overview](overview.md) + - [DM Overview](dm-overview.md) - [DM 5.3 Release Notes](releases/5.3.0.md) - Basic Features - - [Table Routing](key-features.md#table-routing) - - [Block and Allow 
Lists](key-features.md#block-and-allow-table-lists) - - [Binlog Event Filter](key-features.md#binlog-event-filter) + - [Table Routing](dm-key-features.md#table-routing) + - [Block and Allow Lists](dm-key-features.md#block-and-allow-table-lists) + - [Binlog Event Filter](dm-key-features.md#binlog-event-filter) - Advanced Features - Merge and Migrate Data from Sharded Tables - [Overview](feature-shard-merge.md) @@ -16,7 +16,7 @@ - [Migrate from MySQL Databases that Use GH-ost/PT-osc](feature-online-ddl.md) - [Filter Certain Row Changes Using SQL Expressions](feature-expression-filter.md) - [DM Architecture](dm-arch.md) - - [Benchmarks](benchmark-v5.3.0.md) + - [Benchmarks](dm-benchmark-v5.3.0.md) - Quick Start - [Quick Start](quick-start-with-dm.md) - [Deploy a DM cluster Using TiUP](deploy-a-dm-cluster-using-tiup.md) @@ -35,56 +35,56 @@ - [Use Binary](deploy-a-dm-cluster-using-binary.md) - [Use Kubernetes](https://docs.pingcap.com/tidb-in-kubernetes/dev/deploy-tidb-dm) - [Migrate Data Using DM](migrate-data-using-dm.md) - - [Test DM Performance](performance-test.md) + - [Test DM Performance](dm-performance-test.md) - Maintain - Tools - [Maintain DM Clusters Using TiUP (Recommended)](maintain-dm-using-tiup.md) - [Maintain DM Clusters Using dmctl](dmctl-introduction.md) - - [Maintain DM Clusters Using OpenAPI](open-api.md) + - [Maintain DM Clusters Using OpenAPI](dm-open-api.md) - Cluster Upgrade - [Manually Upgrade from v1.0.x to v2.0+](manually-upgrade-dm-1.0-to-2.0.md) - - [Manage Data Source](manage-source.md) + - [Manage Data Source](dm-manage-source.md) - Manage a Data Migration Task - - [Task Configuration Guide](task-configuration-guide.md) - - [Precheck a Task](precheck.md) - - [Create a Task](create-task.md) - - [Query Status](query-status.md) - - [Pause a Task](pause-task.md) - - [Resume a Task](resume-task.md) - - [Stop a Task](stop-task.md) - - [Export and Import Data Sources and Task Configuration of Clusters](export-import-config.md) + - [Task 
Configuration Guide](dm-task-configuration-guide.md) + - [Precheck a Task](dm-precheck.md) + - [Create a Task](dm-create-task.md) + - [Query Status](dm-query-status.md) + - [Pause a Task](dm-pause-task.md) + - [Resume a Task](dm-resume-task.md) + - [Stop a Task](dm-stop-task.md) + - [Export and Import Data Sources and Task Configuration of Clusters](dm-export-import-config.md) - [Handle Failed DDL Statements](handle-failed-ddl-statements.md) - [Manually Handle Sharding DDL Lock](manually-handling-sharding-ddl-locks.md) - - [Manage Schemas of Tables to be Migrated](manage-schema.md) - - [Handle Alerts](handle-alerts.md) + - [Manage Schemas of Tables to be Migrated](dm-manage-schema.md) + - [Handle Alerts](dm-handle-alerts.md) - [Daily Check](dm-daily-check.md) - Usage Scenarios - [Migrate from Aurora to TiDB](migrate-from-mysql-aurora.md) - [Migrate when TiDB Tables Have More Columns](usage-scenario-downstream-more-columns.md) - [Switch the MySQL Instance to Be Migrated](usage-scenario-master-slave-switch.md) - Troubleshoot - - [Handle Errors](error-handling.md) - - [Handle Performance Issues](handle-performance-issues.md) + - [Handle Errors](dm-error-handling.md) + - [Handle Performance Issues](dm-handle-performance-issues.md) - Performance Tuning - - [Optimize Configuration](tune-configuration.md) + - [Optimize Configuration](dm-tune-configuration.md) - Reference - Architecture - - [DM Architecture Overview](overview.md) + - [DM Architecture Overview](dm-overview.md) - [DM-worker](dm-worker-intro.md) - - [Command-line Flags](command-line-flags.md) + - [Command-line Flags](dm-command-line-flags.md) - Configuration - - [Overview](config-overview.md) + - [Overview](dm-config-overview.md) - [DM-master Configuration](dm-master-configuration-file.md) - [DM-worker Configuration](dm-worker-configuration-file.md) - - [Upstream Database Configuration](source-configuration-file.md) - - [Data Migration Task Configuration](task-configuration-guide.md) + - [Upstream Database 
Configuration](dm-source-configuration-file.md) + - [Data Migration Task Configuration](dm-task-configuration-guide.md) - Secure - - [Enable TLS for DM Connections](enable-tls.md) + - [Enable TLS for DM Connections](dm-enable-tls.md) - [Generate Self-signed Certificates](dm-generate-self-signed-certificates.md) - [Monitoring Metrics](monitor-a-dm-cluster.md) - [Alert Rules](dm-alert-rules.md) - - [Error Codes](error-handling.md#handle-common-errors) -- [FAQ](faq.md) + - [Error Codes](dm-error-handling.md#handle-common-errors) +- [FAQ](dm-faq.md) - [Glossary](dm-glossary.md) - Release Notes - v5.3 diff --git a/en/_index.md b/en/_index.md index 917a4cd16..759fff170 100644 --- a/en/_index.md +++ b/en/_index.md @@ -17,7 +17,7 @@ aliases: ['/docs/tidb-data-migration/dev/'] About TiDB Data Migration -- [What is DM?](overview.md) +- [What is DM?](dm-overview.md) - [DM Architecture](dm-arch.md) - [Performance](benchmark-v2.0-ga.md) @@ -41,7 +41,7 @@ aliases: ['/docs/tidb-data-migration/dev/'] - [Deploy DM Using TiUP Offline](deploy-a-dm-cluster-using-tiup-offline.md) - [Deploy DM Using Binary](deploy-a-dm-cluster-using-binary.md) - [Use DM to Migrate Data](migrate-data-using-dm.md) -- [DM Performance Test](performance-test.md) +- [DM Performance Test](dm-performance-test.md) @@ -52,7 +52,7 @@ aliases: ['/docs/tidb-data-migration/dev/'] - [Maintain DM Clusters Using dmctl](dmctl-introduction.md) - [Upgrade DM](manually-upgrade-dm-1.0-to-2.0.md) - [Manually Handle Sharding DDL Locks](manually-handling-sharding-ddl-locks.md) -- [Handle Alerts](handle-alerts.md) +- [Handle Alerts](dm-handle-alerts.md) - [Daily Check](dm-daily-check.md) @@ -69,9 +69,9 @@ aliases: ['/docs/tidb-data-migration/dev/'] Reference - [DM Architecture](dm-arch.md) -- [Configuration File Overview](config-overview.md) +- [Configuration File Overview](dm-config-overview.md) - [Monitoring Metrics and Alerts](monitor-a-dm-cluster.md) -- [Error Handling](error-handling.md) +- [Error 
Handling](dm-error-handling.md) diff --git a/en/benchmark-v1.0-ga.md b/en/benchmark-v1.0-ga.md index 100fbfff5..c002b1cea 100644 --- a/en/benchmark-v1.0-ga.md +++ b/en/benchmark-v1.0-ga.md @@ -60,11 +60,11 @@ Others: ## Test scenario -You can use a simple data migration flow, that is, MySQL1 (172.16.4.40) -> DM-worker -> TiDB (172.16.4.41), to do the test. For detailed test scenario description, see [performance test](performance-test.md). +You can use a simple data migration flow, that is, MySQL1 (172.16.4.40) -> DM-worker -> TiDB (172.16.4.41), to do the test. For detailed test scenario description, see [performance test](dm-performance-test.md). ### Full import benchmark case -For details, see [Full Import Benchmark Case](performance-test.md#full-import-benchmark-case). +For details, see [Full Import Benchmark Case](dm-performance-test.md#full-import-benchmark-case). #### Full import benchmark result @@ -105,7 +105,7 @@ Full import data size in this benchmark case is 3.78 GB, load unit pool size use ### Incremental replication benchmark case -For details about the test method, see [Incremental Replication Benchmark Case](performance-test.md#incremental-replication-benchmark-case). +For details about the test method, see [Incremental Replication Benchmark Case](dm-performance-test.md#incremental-replication-benchmark-case). #### Benchmark result for incremental replication diff --git a/en/benchmark-v2.0-ga.md b/en/benchmark-v2.0-ga.md index 3d163f404..7cca6a9cc 100644 --- a/en/benchmark-v2.0-ga.md +++ b/en/benchmark-v2.0-ga.md @@ -59,11 +59,11 @@ Others: ## Test scenario -You can use a simple data migration flow, that is, MySQL1 (172.16.5.33) -> DM-worker(172.16.5.32) -> TiDB (172.16.5.34), to do the test. For detailed test scenario description, see [performance test](performance-test.md). +You can use a simple data migration flow, that is, MySQL1 (172.16.5.33) -> DM-worker(172.16.5.32) -> TiDB (172.16.5.34), to do the test. 
For detailed test scenario description, see [performance test](dm-performance-test.md). ### Full import benchmark case -For detailed full import test method, see [Full Import Benchmark Case](performance-test.md#full-import-benchmark-case). +For detailed full import test method, see [Full Import Benchmark Case](dm-performance-test.md#full-import-benchmark-case). #### Full import benchmark results @@ -105,7 +105,7 @@ In this test, the full amount of imported data is 3.78 GB and the `pool-size` of ### Incremental replication benchmark case -For detailed incremental replication test method, see [Incremental Replication Benchmark Case](performance-test.md#incremental-replication-benchmark-case). +For detailed incremental replication test method, see [Incremental Replication Benchmark Case](dm-performance-test.md#incremental-replication-benchmark-case). #### Incremental replication benchmark result diff --git a/en/dm-alert-rules.md b/en/dm-alert-rules.md index 1006d63a0..ad0f3b905 100644 --- a/en/dm-alert-rules.md +++ b/en/dm-alert-rules.md @@ -1,13 +1,13 @@ ---- -title: DM Alert Information -summary: Introduce the alert information of DM. -aliases: ['/tidb-data-migration/dev/alert-rules/'] ---- - -# DM Alert Information - -The [alert system](migrate-data-using-dm.md#step-8-monitor-the-task-and-check-logs) is deployed by default when you deploy a DM cluster using TiUP. - -For more information about DM alert rules and the solutions, refer to [handle alerts](handle-alerts.md). - -Both DM alert information and monitoring metrics are based on Prometheus. For more information about their relationship, refer to [DM monitoring metrics](monitor-a-dm-cluster.md). +--- +title: DM Alert Information +summary: Introduce the alert information of DM. +aliases: ['/tidb-data-migration/dev/alert-rules/'] +--- + +# DM Alert Information + +The [alert system](migrate-data-using-dm.md#step-8-monitor-the-task-and-check-logs) is deployed by default when you deploy a DM cluster using TiUP. 
+ +For more information about DM alert rules and the solutions, refer to [handle alerts](dm-handle-alerts.md). + +Both DM alert information and monitoring metrics are based on Prometheus. For more information about their relationship, refer to [DM monitoring metrics](monitor-a-dm-cluster.md). diff --git a/en/benchmark-v5.3.0.md b/en/dm-benchmark-v5.3.0.md similarity index 97% rename from en/benchmark-v5.3.0.md rename to en/dm-benchmark-v5.3.0.md index f5e098d48..9d48c88a2 100644 --- a/en/benchmark-v5.3.0.md +++ b/en/dm-benchmark-v5.3.0.md @@ -1,6 +1,7 @@ --- title: DM 5.3.0 Benchmark Report summary: Learn about the performance of 5.3.0. +aliases: ['/tidb-data-migration/dev/benchmark-v5.3.0/'] --- # DM 5.3.0 Benchmark Report @@ -53,11 +54,11 @@ Others: ## Test scenario -You can use a simple data migration flow, that is, MySQL1 (172.16.6.1) -> DM-worker(172.16.6.2) -> TiDB(load balance) (172.16.6.4), to do the test. For detailed test scenario description, see [performance test](performance-test.md). +You can use a simple data migration flow, that is, MySQL1 (172.16.6.1) -> DM-worker(172.16.6.2) -> TiDB(load balance) (172.16.6.4), to do the test. For detailed test scenario description, see [performance test](dm-performance-test.md). ### Full import benchmark case -For detailed full import test method, see [Full Import Benchmark Case](performance-test.md#full-import-benchmark-case). +For detailed full import test method, see [Full Import Benchmark Case](dm-performance-test.md#full-import-benchmark-case). #### Full import benchmark results @@ -99,7 +100,7 @@ In this test, the full amount of imported data is 3.78 GB and the `pool-size` of ### Incremental replication benchmark case -For detailed incremental replication test method, see [Incremental Replication Benchmark Case](performance-test.md#incremental-replication-benchmark-case). 
+For detailed incremental replication test method, see [Incremental Replication Benchmark Case](dm-performance-test.md#incremental-replication-benchmark-case). #### Incremental replication benchmark result diff --git a/en/command-line-flags.md b/en/dm-command-line-flags.md similarity index 98% rename from en/command-line-flags.md rename to en/dm-command-line-flags.md index 2bbd720b2..bf8bb5f8a 100644 --- a/en/command-line-flags.md +++ b/en/dm-command-line-flags.md @@ -1,6 +1,7 @@ --- title: Command-line Flags summary: Learn about the command-line flags in DM. +aliases: ['/tidb-data-migration/dev/command-line-flags/'] --- # Command-line Flags diff --git a/en/config-overview.md b/en/dm-config-overview.md similarity index 80% rename from en/config-overview.md rename to en/dm-config-overview.md index d42d477c9..112bbb918 100644 --- a/en/config-overview.md +++ b/en/dm-config-overview.md @@ -1,7 +1,7 @@ --- title: Data Migration Configuration File Overview summary: This document gives an overview of Data Migration configuration files. -aliases: ['/docs/tidb-data-migration/dev/config-overview/'] +aliases: ['/docs/tidb-data-migration/dev/config-overview/','/tidb-data-migration/dev/config-overview/'] --- # Data Migration Configuration File Overview @@ -12,7 +12,7 @@ This document gives an overview of configuration files of DM (Data Migration). - `dm-master.toml`: The configuration file of running the DM-master process, including the topology information and the logs of the DM-master. For more details, refer to [DM-master Configuration File](dm-master-configuration-file.md). - `dm-worker.toml`: The configuration file of running the DM-worker process, including the topology information and the logs of the DM-worker. For more details, refer to [DM-worker Configuration File](dm-worker-configuration-file.md). -- `source.yaml`: The configuration of the upstream database such as MySQL and MariaDB. 
For more details, refer to [Upstream Database Configuration File](source-configuration-file.md). +- `source.yaml`: The configuration of the upstream database such as MySQL and MariaDB. For more details, refer to [Upstream Database Configuration File](dm-source-configuration-file.md). ## DM migration task configuration @@ -20,9 +20,9 @@ This document gives an overview of configuration files of DM (Data Migration). You can take the following steps to create a data migration task: -1. [Load the data source configuration into the DM cluster using dmctl](manage-source.md#operate-data-source). -2. Refer to the description in the [Task Configuration Guide](task-configuration-guide.md) and create the configuration file `your_task.yaml`. -3. [Create the data migration task using dmctl](create-task.md). +1. [Load the data source configuration into the DM cluster using dmctl](dm-manage-source.md#operate-data-source). +2. Refer to the description in the [Task Configuration Guide](dm-task-configuration-guide.md) and create the configuration file `your_task.yaml`. +3. [Create the data migration task using dmctl](dm-create-task.md). ### Important concepts diff --git a/en/create-task.md b/en/dm-create-task.md similarity index 93% rename from en/create-task.md rename to en/dm-create-task.md index f36990664..9854ed984 100644 --- a/en/create-task.md +++ b/en/dm-create-task.md @@ -1,11 +1,12 @@ --- title: Create a Data Migration Task summary: Learn how to create a data migration task in TiDB Data Migration. +aliases: ['/tidb-data-migration/dev/create-task/'] --- # Create a Data Migration Task -You can use the `start-task` command to create a data migration task. When the data migration task is started, DM [prechecks privileges and configurations](precheck.md). +You can use the `start-task` command to create a data migration task. When the data migration task is started, DM [prechecks privileges and configurations](dm-precheck.md). 
{{< copyable "" >}} diff --git a/en/dm-daily-check.md b/en/dm-daily-check.md index 5da9af8d5..ef8954666 100644 --- a/en/dm-daily-check.md +++ b/en/dm-daily-check.md @@ -8,7 +8,7 @@ aliases: ['/docs/tidb-data-migration/dev/daily-check/','/tidb-data-migration/dev This document summarizes how to perform a daily check on TiDB Data Migration (DM). -+ Method 1: Execute the `query-status` command to check the running status of the task and the error output (if any). For details, see [Query Status](query-status.md). ++ Method 1: Execute the `query-status` command to check the running status of the task and the error output (if any). For details, see [Query Status](dm-query-status.md). + Method 2: If Prometheus and Grafana are correctly deployed when you deploy the DM cluster using TiUP, you can view DM monitoring metrics in Grafana. For example, suppose that the Grafana's address is `172.16.10.71`, go to , enter the Grafana dashboard, and select the DM Dashboard to check monitoring metrics of DM. For more information of these metrics, see [DM Monitoring Metrics](monitor-a-dm-cluster.md). diff --git a/en/enable-tls.md b/en/dm-enable-tls.md similarity index 99% rename from en/enable-tls.md rename to en/dm-enable-tls.md index 4137e4247..db0d7a64f 100644 --- a/en/enable-tls.md +++ b/en/dm-enable-tls.md @@ -1,6 +1,7 @@ --- title: Enable TLS for DM Connections summary: Learn how to enable TLS for DM connections. +aliases: ['/tidb-data-migration/dev/enable-tls/'] --- # Enable TLS for DM Connections diff --git a/en/error-handling.md b/en/dm-error-handling.md similarity index 97% rename from en/error-handling.md rename to en/dm-error-handling.md index 7af062fa8..ab990f71d 100644 --- a/en/error-handling.md +++ b/en/dm-error-handling.md @@ -1,7 +1,7 @@ --- title: Handle Errors summary: Learn about the error system and how to handle common errors when you use DM. 
-aliases: ['/docs/tidb-data-migration/dev/error-handling/','/docs/tidb-data-migration/dev/troubleshoot-dm/','/docs/tidb-data-migration/dev/error-system/','/tidb-data-migration/dev/error-system/'] +aliases: ['/docs/tidb-data-migration/dev/error-handling/','/docs/tidb-data-migration/dev/troubleshoot-dm/','/docs/tidb-data-migration/dev/error-system/','/tidb-data-migration/dev/error-system/','/tidb-data-migration/dev/error-handling/'] --- # Handle Errors @@ -90,7 +90,7 @@ If you encounter an error while running DM, take the following steps to troubles resume-task ${task name} ``` -However, you need to reset the data migration task in some cases. For details, refer to [Reset the Data Migration Task](faq.md#how-to-reset-the-data-migration-task). +However, you need to reset the data migration task in some cases. For details, refer to [Reset the Data Migration Task](dm-faq.md#how-to-reset-the-data-migration-task). ## Handle common errors @@ -102,8 +102,8 @@ However, you need to reset the data migration task in some cases. For details, r | `code=10005` | Occurs when performing the `QUERY` type SQL statements. | | | `code=10006` | Occurs when performing the `EXECUTE` type SQL statements, including DDL statements and DML statements of the `INSERT`, `UPDATE`or `DELETE` type. For more detailed error information, check the error message which usually includes the error code and error information returned for database operations. | | -| `code=11006` | Occurs when the built-in parser of DM parses the incompatible DDL statements. | Refer to [Data Migration - incompatible DDL statements](faq.md#how-to-handle-incompatible-ddl-statements) for solution. | -| `code=20010` | Occurs when decrypting the database password that is provided in task configuration. | Check whether the downstream database password provided in the configuration task is [correctly encrypted using dmctl](manage-source.md#encrypt-the-database-password). 
| +| `code=11006` | Occurs when the built-in parser of DM parses the incompatible DDL statements. | Refer to [Data Migration - incompatible DDL statements](dm-faq.md#how-to-handle-incompatible-ddl-statements) for solution. | +| `code=20010` | Occurs when decrypting the database password that is provided in task configuration. | Check whether the downstream database password provided in the configuration task is [correctly encrypted using dmctl](dm-manage-source.md#encrypt-the-database-password). | | `code=26002` | The task check fails to establish database connection. For more detailed error information, check the error message which usually includes the error code and error information returned for database operations. | Check whether the machine where DM-master is located has permission to access the upstream. | | `code=32001` | Abnormal dump processing unit | If the error message contains `mydumper: argument list too long.`, configure the table to be exported by manually adding the `--regex` regular expression in the Mydumper argument `extra-args` in the `task.yaml` file according to the block-allow list. For example, to export all tables named `hello`, add `--regex '.*\\.hello$'`; to export all tables, add `--regex '.*'`. | | `code=38008` | An error occurs in the gRPC communication among DM components. | Check `class`. Find out the error occurs in the interaction of which components. Determine the type of communication error. If the error occurs when establishing gRPC connection, check whether the communication server is working normally. | @@ -179,9 +179,9 @@ For binlog replication processing units, manually recover migration using the fo ### `Access denied for user 'root'@'172.31.43.27' (using password: YES)` shows when you query the task or check the log -For database related passwords in all the DM configuration files, it is recommended to use the passwords encrypted by `dmctl`. If a database password is empty, it is unnecessary to encrypt it. 
For how to encrypt the plaintext password, see [Encrypt the database password using dmctl](manage-source.md#encrypt-the-database-password). +For database related passwords in all the DM configuration files, it is recommended to use the passwords encrypted by `dmctl`. If a database password is empty, it is unnecessary to encrypt it. For how to encrypt the plaintext password, see [Encrypt the database password using dmctl](dm-manage-source.md#encrypt-the-database-password). -In addition, the user of the upstream and downstream databases must have the corresponding read and write privileges. Data Migration also [prechecks the corresponding privileges automatically](precheck.md) while starting the data migration task. +In addition, the user of the upstream and downstream databases must have the corresponding read and write privileges. Data Migration also [prechecks the corresponding privileges automatically](dm-precheck.md) while starting the data migration task. ### The `load` processing unit reports the error `packet for query is too large. Try adjusting the 'max_allowed_packet' variable` diff --git a/en/export-import-config.md b/en/dm-export-import-config.md similarity index 97% rename from en/export-import-config.md rename to en/dm-export-import-config.md index 17aa83607..89d44dc30 100644 --- a/en/export-import-config.md +++ b/en/dm-export-import-config.md @@ -1,6 +1,7 @@ --- title: Export and Import Data Sources and Task Configuration of Clusters summary: Learn how to export and import data sources and task configuration of clusters when you use DM. 
+aliases: ['/tidb-data-migration/dev/export-import-config/'] --- # Export and Import Data Sources and Task Configuration of Clusters diff --git a/en/faq.md b/en/dm-faq.md similarity index 98% rename from en/faq.md rename to en/dm-faq.md index 40f5d8f26..9bd3e72d8 100644 --- a/en/faq.md +++ b/en/dm-faq.md @@ -1,7 +1,7 @@ --- title: TiDB Data Migration FAQ summary: Learn about frequently asked questions (FAQs) about TiDB Data Migration (DM). -aliases: ['/docs/tidb-data-migration/dev/faq/'] +aliases: ['/docs/tidb-data-migration/dev/faq/','/tidb-data-migration/dev/faq/'] --- # TiDB Data Migration FAQ @@ -188,7 +188,7 @@ Sometimes, the error message contains the `parse statement` information, for exa if the DDL is not needed, you can use a filter rule with \"*\" schema-pattern to ignore it.\n\t : parse statement: line 1 column 11 near \"EVENT `event_del_big_table` \r\nDISABLE\" %!!(MISSING)(EXTRA string=ALTER EVENT `event_del_big_table` \r\nDISABLE ``` -The reason for this type of error is that the TiDB parser cannot parse DDL statements sent by the upstream, such as `ALTER EVENT`, so `sql-skip` does not take effect as expected. You can add [binlog event filters](key-features.md#binlog-event-filter) in the configuration file to filter those statements and set `schema-pattern: "*"`. Starting from DM v2.0.1, DM pre-filters statements related to `EVENT`. +The reason for this type of error is that the TiDB parser cannot parse DDL statements sent by the upstream, such as `ALTER EVENT`, so `sql-skip` does not take effect as expected. You can add [binlog event filters](dm-key-features.md#binlog-event-filter) in the configuration file to filter those statements and set `schema-pattern: "*"`. Starting from DM v2.0.1, DM pre-filters statements related to `EVENT`. Since DM v2.0, `handle-error` replaces `sql-skip`. You can use `handle-error` instead to avoid this issue. 
diff --git a/en/dm-glossary.md b/en/dm-glossary.md index c6dfb7bd3..67a0066a4 100644 --- a/en/dm-glossary.md +++ b/en/dm-glossary.md @@ -20,7 +20,7 @@ Binlog events are information about data modification made to a MySQL or MariaDB ### Binlog event filter -[Binlog event filter](key-features.md#binlog-event-filter) is a more fine-grained filtering feature than the block and allow lists filtering rule. Refer to [binlog event filter](overview.md#binlog-event-filtering) for details. +[Binlog event filter](dm-key-features.md#binlog-event-filter) is a more fine-grained filtering feature than the block and allow lists filtering rule. Refer to [binlog event filter](dm-overview.md#binlog-event-filtering) for details. ### Binlog position @@ -32,7 +32,7 @@ Binlog replication processing unit is the processing unit used in DM-worker to r ### Block & allow table list -Block & allow table list is the feature that filters or only migrates all operations of some databases or some tables. Refer to [block & allow table lists](overview.md#block-and-allow-lists-migration-at-the-schema-and-table-levels) for details. This feature is similar to [MySQL Replication Filtering](https://dev.mysql.com/doc/refman/5.6/en/replication-rules.html) and [MariaDB Replication Filters](https://mariadb.com/kb/en/replication-filters/). +Block & allow table list is the feature that filters or only migrates all operations of some databases or some tables. Refer to [block & allow table lists](dm-overview.md#block-and-allow-lists-migration-at-the-schema-and-table-levels) for details. This feature is similar to [MySQL Replication Filtering](https://dev.mysql.com/doc/refman/5.6/en/replication-rules.html) and [MariaDB Replication Filters](https://mariadb.com/kb/en/replication-filters/). ## C @@ -120,13 +120,13 @@ The subtask is a part of a data migration task that is running on each DM-worker ### Subtask status -The subtask status is the status of a data migration subtask. 
The current status options include `New`, `Running`, `Paused`, `Stopped`, and `Finished`. Refer to [subtask status](query-status.md#subtask-status) for more details about the status of a data migration task or subtask. +The subtask status is the status of a data migration subtask. The current status options include `New`, `Running`, `Paused`, `Stopped`, and `Finished`. Refer to [subtask status](dm-query-status.md#subtask-status) for more details about the status of a data migration task or subtask. ## T ### Table routing -The table routing feature enables DM to migrate a certain table of the upstream MySQL or MariaDB instance to the specified table in the downstream, which can be used to merge and migrate sharded tables. Refer to [table routing](key-features.md#table-routing) for details. +The table routing feature enables DM to migrate a certain table of the upstream MySQL or MariaDB instance to the specified table in the downstream, which can be used to merge and migrate sharded tables. Refer to [table routing](dm-key-features.md#table-routing) for details. ### Task @@ -134,4 +134,4 @@ The data migration task, which is started after you successfully execute a `star ### Task status -The task status refers to the status of a data migration task. The task status depends on the statuses of all its subtasks. Refer to [subtask status](query-status.md#subtask-status) for details. +The task status refers to the status of a data migration task. The task status depends on the statuses of all its subtasks. Refer to [subtask status](dm-query-status.md#subtask-status) for details. diff --git a/en/handle-alerts.md b/en/dm-handle-alerts.md similarity index 91% rename from en/handle-alerts.md rename to en/dm-handle-alerts.md index 31842fa12..683a43b1d 100644 --- a/en/handle-alerts.md +++ b/en/dm-handle-alerts.md @@ -2,7 +2,7 @@ --- title: Handle Alerts summary: Understand how to deal with the alert information in DM. 
-aliases: ['/tidb-data-migration/dev/handle-alerts.md/] +aliases: ['/tidb-data-migration/dev/handle-alerts.md/','/tidb-data-migration/dev/handle-alerts/'] --- # Handle Alerts @@ -190,193 +190,199 @@ This document introduces how to deal with the alert information in DM. Refer to [Handle Performance Issues](dm-handle-performance-issues.md). ======= ---- -title: Handle Alerts -summary: Understand how to deal with the alert information in DM. ---- - -# Handle Alerts - -This document introduces how to deal with the alert information in DM. - -## Alerts related to high availability - -### `DM_master_all_down` - -- Description: - - If all DM-master nodes are offline, this alert is triggered. - -- Solution: - - You can take the following steps to handle the alert: - - 1. Check the environment of the cluster. - 2. Check the logs of all DM-master nodes for troubleshooting. - -### `DM_worker_offline` - -- Description: - - If a DM-worker node is offline for more than one hour, this alert is triggered. In a high-availability architecture, this alert might not directly interrupt the task but increases the risk of interruption. - -- Solution: - - You can take the following steps to handle the alert: - - 1. View the working status of the corresponding DM-worker node. - 2. Check whether the node is connected. - 3. Troubleshoot errors through logs. - -### `DM_DDL_error` - -- Description: - - This error occurs when DM is processing the sharding DDL operations. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -### `DM_pending_DDL` - -- Description: - - If a sharding DDL operation is pending for more than one hour, this alert is triggered. - -- Solution: - - In some scenarios, the pending sharding DDL operation might be what users expect. Otherwise, refer to [Handle Sharding DDL Locks Manually in DM](manually-handling-sharding-ddl-locks.md) for solution. 
- -## Alert rules related to task status - -### `DM_task_state` - -- Description: - - When a sub-task of DM-worker is in the `Paused` state for over 20 minutes, an alert is triggered. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -## Alert rules related to relay log - -### `DM_relay_process_exits_with_error` - -- Description: - - When the relay log processing unit encounters an error, this unit moves to `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -### `DM_remain_storage_of_relay_log` - -- Description: - - When the free space of the disk where the relay log is located is less than 10G, an alert is triggered. - -- Solutions: - - You can take the following methods to handle the alert: - - - Delete unwanted data manually to increase free disk space. - - Reconfigure the [automatic data purge strategy of the relay log](relay-log.md#automatic-data-purge) or [purge data manually](relay-log.md#manual-data-purge). - - Execute the command `pause-relay` to pause the relay log pulling process. After there is enough free disk space, resume the process by running the command `resume-relay`. Note that you must not purge upstream binlog files that have not been pulled after the relay log pulling process is paused. - -### `DM_relay_log_data_corruption` - -- Description: - - When the relay log processing unit validates the binlog event read from the upstream and detects abnormal checksum information, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -### `DM_fail_to_read_binlog_from_master` - -- Description: - - If an error occurs when the relay log processing unit tries to read the binlog event from the upstream, this unit moves to the `Paused` state, and an alert is triggered immediately. 
- -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -### `DM_fail_to_write_relay_log` - -- Description: - - If an error occurs when the relay log processing unit tries to write the binlog event into the relay log file, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -### `DM_binlog_file_gap_between_master_relay` - -- Description: - - When the number of the binlog files in the current upstream MySQL/MariaDB exceeds that of the latest binlog files pulled by the relay log processing unit by **more than** 1 for 10 minutes, and an alert is triggered. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -## Alert rules related to Dump/Load - -### `DM_dump_process_exists_with_error` - -- Description: - - When the Dump processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -### `DM_load_process_exists_with_error` - -- Description: - - When the Load processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -## Alert rules related to binlog replication - -### `DM_sync_process_exists_with_error` - -- Description: - - When the binlog replication processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](error-handling.md#troubleshooting). - -### `DM_binlog_file_gap_between_master_syncer` - -- Description: - - When the number of the binlog files in the current upstream MySQL/MariaDB exceeds that of the latest binlog files processed by the relay log processing unit by **more than** 1 for 10 minutes, an alert is triggered. 
- -- Solution: - - Refer to [Handle Performance Issues](handle-performance-issues.md). - -### `DM_binlog_file_gap_between_relay_syncer` - -- Description: - - When the number of the binlog files in the current relay log processing unit exceeds that of the latest binlog files processed by the binlog replication processing unit by **more than** 1 for 10 minutes, an alert is triggered. - -- Solution: - - Refer to [Handle Performance Issues](handle-performance-issues.md). +--- +title: Handle Alerts +summary: Understand how to deal with the alert information in DM. +--- + +# Handle Alerts + +This document introduces how to deal with the alert information in DM. + +## Alerts related to high availability + +### `DM_master_all_down` + +- Description: + + If all DM-master nodes are offline, this alert is triggered. + +- Solution: + + You can take the following steps to handle the alert: + + 1. Check the environment of the cluster. + 2. Check the logs of all DM-master nodes for troubleshooting. + +### `DM_worker_offline` + +- Description: + + If a DM-worker node is offline for more than one hour, this alert is triggered. In a high-availability architecture, this alert might not directly interrupt the task but increases the risk of interruption. + +- Solution: + + You can take the following steps to handle the alert: + + 1. View the working status of the corresponding DM-worker node. + 2. Check whether the node is connected. + 3. Troubleshoot errors through logs. + +### `DM_DDL_error` + +- Description: + + This error occurs when DM is processing the sharding DDL operations. + +- Solution: + + Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). + +### `DM_pending_DDL` + +- Description: + + If a sharding DDL operation is pending for more than one hour, this alert is triggered. + +- Solution: + + In some scenarios, the pending sharding DDL operation might be what users expect. 
Otherwise, refer to [Handle Sharding DDL Locks Manually in DM](manually-handling-sharding-ddl-locks.md) for a solution. + +## Alert rules related to task status + +### `DM_task_state` + +- Description: + + When a sub-task of DM-worker is in the `Paused` state for over 20 minutes, an alert is triggered. + +- Solution: + + Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). + +## Alert rules related to relay log + +### `DM_relay_process_exits_with_error` + +- Description: + + When the relay log processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. + +- Solution: + + Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). + +### `DM_remain_storage_of_relay_log` + +- Description: + + When the free space of the disk where the relay log is located is less than 10 GB, an alert is triggered. + +- Solutions: + + You can use the following methods to handle the alert: + + - Delete unwanted data manually to increase free disk space. + - Reconfigure the [automatic data purge strategy of the relay log](relay-log.md#automatic-data-purge) or [purge data manually](relay-log.md#manual-data-purge). + - Execute the command `pause-relay` to pause the relay log pulling process. After there is enough free disk space, resume the process by running the command `resume-relay`. Note that you must not purge upstream binlog files that have not been pulled after the relay log pulling process is paused. + +### `DM_relay_log_data_corruption` + +- Description: + + When the relay log processing unit validates the binlog event read from the upstream and detects abnormal checksum information, this unit moves to the `Paused` state, and an alert is triggered immediately. + +- Solution: + + Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting).
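The 10 GB free-space threshold behind `DM_remain_storage_of_relay_log` can be reproduced as a local check before the alert fires. The following Python sketch is illustrative only — it is not part of DM, and the directory path is an assumption you would replace with your DM-worker's actual `relay-dir`:

```python
import shutil

def relay_disk_low(relay_dir: str, threshold_bytes: int = 10 * 1024**3) -> bool:
    # Mirrors the alert condition: free space on the relay log disk < 10 GB.
    return shutil.disk_usage(relay_dir).free < threshold_bytes

# Replace "." with the relay-dir configured for your DM-worker.
print(relay_disk_low("."))
```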
+ +### `DM_fail_to_read_binlog_from_master` + +- Description: + + If an error occurs when the relay log processing unit tries to read the binlog event from the upstream, this unit moves to the `Paused` state, and an alert is triggered immediately. + +- Solution: + + Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). + +### `DM_fail_to_write_relay_log` + +- Description: + + If an error occurs when the relay log processing unit tries to write the binlog event into the relay log file, this unit moves to the `Paused` state, and an alert is triggered immediately. + +- Solution: + + Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). + +### `DM_binlog_file_gap_between_master_relay` + +- Description: + + When the number of the binlog files in the current upstream MySQL/MariaDB exceeds that of the latest binlog files pulled by the relay log processing unit by **more than** 1 for 10 minutes, an alert is triggered. + +- Solution: + + Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). + +## Alert rules related to Dump/Load + +### `DM_dump_process_exists_with_error` + +- Description: + + When the Dump processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. + +- Solution: + + Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). + +### `DM_load_process_exists_with_error` + +- Description: + + When the Load processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. + +- Solution: + + Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). + +## Alert rules related to binlog replication + +### `DM_sync_process_exists_with_error` + +- Description: + + When the binlog replication processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. + +- Solution: + + Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting).
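The `DM_binlog_file_gap_between_master_relay` alert above compares binlog file sequence numbers between the upstream and the relay log processing unit. A minimal sketch of that comparison, assumed for illustration rather than taken from DM's source:

```python
# Binlog file names such as "mysql-bin.000123" end in a monotonically
# increasing sequence number, so a file gap can be derived from names alone.
def binlog_seq(filename: str) -> int:
    return int(filename.rsplit(".", 1)[1])

def binlog_file_gap(newest_upstream: str, newest_pulled: str) -> int:
    return binlog_seq(newest_upstream) - binlog_seq(newest_pulled)

# The alert condition is a gap greater than 1 sustained for 10 minutes.
assert binlog_file_gap("mysql-bin.000123", "mysql-bin.000121") == 2
```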
+ +### `DM_binlog_file_gap_between_master_syncer` + +- Description: + + When the number of the binlog files in the current upstream MySQL/MariaDB exceeds that of the latest binlog files processed by the relay log processing unit by **more than** 1 for 10 minutes, an alert is triggered. + +- Solution: + + Refer to [Handle Performance Issues](dm-handle-performance-issues.md). + +### `DM_binlog_file_gap_between_relay_syncer` + +- Description: + + When the number of the binlog files in the current relay log processing unit exceeds that of the latest binlog files processed by the binlog replication processing unit by **more than** 1 for 10 minutes, an alert is triggered. + +- Solution: + + Refer to [Handle Performance Issues](dm-handle-performance-issues.md). diff --git a/en/handle-performance-issues.md b/en/dm-handle-performance-issues.md similarity index 97% rename from en/handle-performance-issues.md rename to en/dm-handle-performance-issues.md index 6f47bb314..187aca491 100644 --- a/en/handle-performance-issues.md +++ b/en/dm-handle-performance-issues.md @@ -1,6 +1,7 @@ --- title: Handle Performance Issues summary: Learn about common performance issues that might exist in DM and how to deal with them. +aliases: ['/tidb-data-migration/dev/handle-performance-issues/'] --- # Handle Performance Issues @@ -72,7 +73,7 @@ The Binlog replication unit decides whether to read the binlog event from the up ### binlog event conversion -The Binlog replication unit constructs DML, parses DDL, and performs [table router](key-features.md#table-routing) conversion from binlog event data. The related metric is `transform binlog event duration`.
+The Binlog replication unit constructs DML, parses DDL, and performs [table router](dm-key-features.md#table-routing) conversion from binlog event data. The related metric is `transform binlog event duration`. The duration is mainly affected by the write operations upstream. Take the `INSERT INTO` statement as an example, the time consumed to convert a single `VALUES` greatly differs from that to convert a lot of `VALUES`. The time consumed might range from tens of microseconds to hundreds of microseconds. However, usually this is not a bottleneck of the system. diff --git a/en/dm-hardware-and-software-requirements.md b/en/dm-hardware-and-software-requirements.md index d039e50a4..74b0b407a 100644 --- a/en/dm-hardware-and-software-requirements.md +++ b/en/dm-hardware-and-software-requirements.md @@ -46,4 +46,4 @@ DM can be deployed and run on a 64-bit generic hardware server platform (Intel x > **Note:** > > - In the production environment, it is not recommended to deploy and run DM-master and DM-worker on the same server, because when DM-worker writes data to disks, it might interfere with the use of disks by DM-master's high availability component. -> - If a performance issue occurs, you are recommended to modify the task configuration file according to the [Optimize Configuration of DM](tune-configuration.md) document. If the performance is not effectively optimized by tuning the configuration file, you can try to upgrade the hardware of your server. +> - If a performance issue occurs, you are recommended to modify the task configuration file according to the [Optimize Configuration of DM](dm-tune-configuration.md) document. If the performance is not effectively optimized by tuning the configuration file, you can try to upgrade the hardware of your server. 
diff --git a/en/key-features.md b/en/dm-key-features.md similarity index 98% rename from en/key-features.md rename to en/dm-key-features.md index 225b36315..424380948 100644 --- a/en/key-features.md +++ b/en/dm-key-features.md @@ -1,7 +1,7 @@ --- title: Key Features summary: Learn about the key features of DM and appropriate parameter configurations. -aliases: ['/docs/tidb-data-migration/dev/feature-overview/','/tidb-data-migration/dev/feature-overview'] +aliases: ['/docs/tidb-data-migration/dev/feature-overview/','/tidb-data-migration/dev/feature-overview','/tidb-data-migration/dev/key-features/'] --- # Key Features @@ -236,7 +236,7 @@ Binlog event filter is a more fine-grained filtering rule than the block and all > **Note:** > > - If the same table matches multiple rules, these rules are applied in order and the block list has priority over the allow list. This means if both the `Ignore` and `Do` rules are applied to a table, the `Ignore` rule takes effect. -> - Starting from DM v2.0.2, you can configure binlog event filters in the source configuration file. For details, see [Upstream Database Configuration File](source-configuration-file.md). +> - Starting from DM v2.0.2, you can configure binlog event filters in the source configuration file. For details, see [Upstream Database Configuration File](dm-source-configuration-file.md). ### Parameter configuration @@ -376,7 +376,7 @@ In the MySQL ecosystem, tools such as gh-ost and pt-osc are widely used. DM prov ### Restrictions - DM only supports gh-ost and pt-osc. -- When `online-ddl` is enabled, the checkpoint corresponding to incremental replication should not be in the process of online DDL execution. For example, if an upstream online DDL operation starts at `position-A` and ends at `position-B` of the binlog, the starting point of incremental replication should be earlier than `position-A` or later than `position-B`; otherwise, an error occurs. 
For details, refer to [FAQ](faq.md#how-to-handle-the-error-returned-by-the-ddl-operation-related-to-the-gh-ost-table-after-online-ddl-scheme-gh-ost-is-set). +- When `online-ddl` is enabled, the checkpoint corresponding to incremental replication should not be in the process of online DDL execution. For example, if an upstream online DDL operation starts at `position-A` and ends at `position-B` of the binlog, the starting point of incremental replication should be earlier than `position-A` or later than `position-B`; otherwise, an error occurs. For details, refer to [FAQ](dm-faq.md#how-to-handle-the-error-returned-by-the-ddl-operation-related-to-the-gh-ost-table-after-online-ddl-scheme-gh-ost-is-set). ### Parameter configuration diff --git a/en/manage-schema.md b/en/dm-manage-schema.md similarity index 99% rename from en/manage-schema.md rename to en/dm-manage-schema.md index a81f753ff..ba8659fb1 100644 --- a/en/manage-schema.md +++ b/en/dm-manage-schema.md @@ -1,6 +1,7 @@ --- title: Manage Table Schemas of Tables to be Migrated summary: Learn how to manage the schema of the table to be migrated in DM. +aliases: ['/tidb-data-migration/dev/manage-schema/'] --- # Manage Table Schemas of Tables to be Migrated diff --git a/en/manage-source.md b/en/dm-manage-source.md similarity index 96% rename from en/manage-source.md rename to en/dm-manage-source.md index 96013e867..e6564e6e5 100644 --- a/en/manage-source.md +++ b/en/dm-manage-source.md @@ -1,6 +1,7 @@ --- title: Manage Data Source Configurations summary: Learn how to manage upstream MySQL instances in TiDB Data Migration. +aliases: ['/tidb-data-migration/dev/manage-source/'] --- # Manage Data Source Configurations @@ -69,7 +70,7 @@ Use the following `operate-source` command to create a source configuration file operate-source create ./source.yaml ``` -For the configuration of `source.yaml`, refer to [Upstream Database Configuration File Introduction](source-configuration-file.md). 
+For the configuration of `source.yaml`, refer to [Upstream Database Configuration File Introduction](dm-source-configuration-file.md). The following is an example of the returned result: @@ -168,7 +169,7 @@ Global Flags: -s, --source strings MySQL Source ID. ``` -Before transferring, DM checks whether the worker to be unbound still has running tasks. If the worker has any running tasks, you need to [pause the tasks](pause-task.md) first, change the binding, and then [resume the tasks](resume-task.md). +Before transferring, DM checks whether the worker to be unbound still has running tasks. If the worker has any running tasks, you need to [pause the tasks](dm-pause-task.md) first, change the binding, and then [resume the tasks](dm-resume-task.md). ### Usage example diff --git a/en/open-api.md b/en/dm-open-api.md similarity index 99% rename from en/open-api.md rename to en/dm-open-api.md index 237f56d50..0c745f543 100644 --- a/en/open-api.md +++ b/en/dm-open-api.md @@ -1,6 +1,7 @@ --- title: Maintain DM Clusters Using OpenAPI summary: Learn about how to use OpenAPI interface to manage the cluster status and data replication. +aliases: ['/tidb-data-migration/dev/open-api/'] --- # Maintain DM Clusters Using OpenAPI diff --git a/en/overview.md b/en/dm-overview.md similarity index 80% rename from en/overview.md rename to en/dm-overview.md index 3a10277a5..c28a72817 100644 --- a/en/overview.md +++ b/en/dm-overview.md @@ -1,7 +1,7 @@ --- title: Data Migration Overview summary: Learn about the Data Migration tool, the architecture, the key components, and features. -aliases: ['/docs/tidb-data-migration/dev/overview/'] +aliases: ['/docs/tidb-data-migration/dev/overview/','/tidb-data-migration/dev/overview/'] --- @@ -29,15 +29,15 @@ This section describes the basic data migration features provided by DM. 
### Block and allow lists migration at the schema and table levels -The [block and allow lists filtering rule](key-features.md#block-and-allow-table-lists) is similar to the `replication-rules-db`/`replication-rules-table` feature of MySQL, which can be used to filter or replicate all operations of some databases only or some tables only. +The [block and allow lists filtering rule](dm-key-features.md#block-and-allow-table-lists) is similar to the `replication-rules-db`/`replication-rules-table` feature of MySQL, which can be used to filter or replicate all operations of some databases only or some tables only. ### Binlog event filtering -The [binlog event filtering](key-features.md#binlog-event-filter) feature means that DM can filter certain types of SQL statements from certain tables in the source database. For example, you can filter all `INSERT` statements in the table `test`.`sbtest` or filter all `TRUNCATE TABLE` statements in the schema `test`. +The [binlog event filtering](dm-key-features.md#binlog-event-filter) feature means that DM can filter certain types of SQL statements from certain tables in the source database. For example, you can filter all `INSERT` statements in the table `test`.`sbtest` or filter all `TRUNCATE TABLE` statements in the schema `test`. ### Schema and table routing -The [schema and table routing](key-features.md#table-routing) feature means that DM can migrate a certain table of the source database to the specified table in the downstream. For example, you can migrate the table structure and data from the table `test`.`sbtest1` in the source database to the table `test`.`sbtest2` in TiDB. This is also a core feature for merging and migrating sharded databases and tables. +The [schema and table routing](dm-key-features.md#table-routing) feature means that DM can migrate a certain table of the source database to the specified table in the downstream. 
For example, you can migrate the table structure and data from the table `test`.`sbtest1` in the source database to the table `test`.`sbtest2` in TiDB. This is also a core feature for merging and migrating sharded databases and tables. ## Advanced features @@ -47,7 +47,7 @@ DM supports merging and migrating the original sharded instances and tables from ### Optimization for third-party online-schema-change tools in the migration process -In the MySQL ecosystem, tools such as gh-ost and pt-osc are widely used. DM provides support for these tools to avoid migrating unnecessary intermediate data. For details, see [Online DDL Tools](key-features.md#online-ddl-tools) +In the MySQL ecosystem, tools such as gh-ost and pt-osc are widely used. DM provides support for these tools to avoid migrating unnecessary intermediate data. For details, see [Online DDL Tools](dm-key-features.md#online-ddl-tools) ### Filter certain row changes using SQL expressions @@ -77,7 +77,7 @@ Before using the DM tool, note the following restrictions: - Currently, TiDB is not compatible with all the DDL statements that MySQL supports. Because DM uses the TiDB parser to process DDL statements, it only supports the DDL syntax supported by the TiDB parser. For details, see [MySQL Compatibility](https://pingcap.com/docs/stable/reference/mysql-compatibility/#ddl). - - DM reports an error when it encounters an incompatible DDL statement. To solve this error, you need to manually handle it using dmctl, either skipping this DDL statement or replacing it with a specified DDL statement(s). For details, see [Skip or replace abnormal SQL statements](faq.md#how-to-handle-incompatible-ddl-statements). + - DM reports an error when it encounters an incompatible DDL statement. To solve this error, you need to manually handle it using dmctl, either skipping this DDL statement or replacing it with a specified DDL statement(s). 
For details, see [Skip or replace abnormal SQL statements](dm-faq.md#how-to-handle-incompatible-ddl-statements). + Sharding merge with conflicts diff --git a/en/pause-task.md b/en/dm-pause-task.md similarity index 97% rename from en/pause-task.md rename to en/dm-pause-task.md index 60db3485b..86374aa93 100644 --- a/en/pause-task.md +++ b/en/dm-pause-task.md @@ -1,6 +1,7 @@ --- title: Pause a Data Migration Task summary: Learn how to pause a data migration task in TiDB Data Migration. +aliases: ['/tidb-data-migration/dev/pause-task/'] --- # Pause a Data Migration Task diff --git a/en/performance-test.md b/en/dm-performance-test.md similarity index 94% rename from en/performance-test.md rename to en/dm-performance-test.md index 898d16549..0dfd502e2 100644 --- a/en/performance-test.md +++ b/en/dm-performance-test.md @@ -1,6 +1,7 @@ --- title: DM Cluster Performance Test summary: Learn how to test the performance of DM clusters. +aliases: ['/tidb-data-migration/dev/performance-test/'] --- # DM Cluster Performance Test @@ -50,7 +51,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330 #### Create a data migration task -1. Create an upstream MySQL source and set `source-id` to `source-1`. For details, see [Load the Data Source Configurations](manage-source.md#operate-data-source). +1. Create an upstream MySQL source and set `source-id` to `source-1`. For details, see [Load the Data Source Configurations](dm-manage-source.md#operate-data-source). 2. Create a migration task (in `full` mode). The following is a task configuration template: @@ -84,7 +85,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330 threads: 32 ``` -For details about how to create a migration task, see [Create a Data Migration Task](create-task.md). +For details about how to create a migration task, see [Create a Data Migration Task](dm-create-task.md). 
> **Note:** > @@ -109,7 +110,7 @@ Use `sysbench` to create test tables in the upstream. #### Create a data migration task -1. Create the source of the upstream MySQL. Set `source-id` to `source-1` (if the source has been created in the [full import benchmark case](#full-import-benchmark-case), you do not need to create it again). For details, see [Load the Data Source Configurations](manage-source.md#operate-data-source). +1. Create the source of the upstream MySQL. Set `source-id` to `source-1` (if the source has been created in the [full import benchmark case](#full-import-benchmark-case), you do not need to create it again). For details, see [Load the Data Source Configurations](dm-manage-source.md#operate-data-source). 2. Create a DM migration task (in `all` mode). The following is an example of the task configuration file: @@ -142,7 +143,7 @@ Use `sysbench` to create test tables in the upstream. batch: 100 ``` -For details about how to create a data migration task, see [Create a Data Migration Task](create-task.md). +For details about how to create a data migration task, see [Create a Data Migration Task](dm-create-task.md). > **Note:** > diff --git a/en/precheck.md b/en/dm-precheck.md similarity index 97% rename from en/precheck.md rename to en/dm-precheck.md index 91d0702f8..2323d9cf5 100644 --- a/en/precheck.md +++ b/en/dm-precheck.md @@ -1,7 +1,7 @@ --- title: Precheck the Upstream MySQL Instance Configurations summary: Learn how to use the precheck feature provided by DM to detect errors in the upstream MySQL instance configurations. 
-aliases: ['/docs/tidb-data-migration/dev/precheck/'] +aliases: ['/docs/tidb-data-migration/dev/precheck/','/tidb-data-migration/dev/precheck/'] --- # Precheck the Upstream MySQL Instance Configurations diff --git a/en/query-status.md b/en/dm-query-status.md similarity index 99% rename from en/query-status.md rename to en/dm-query-status.md index 80bf47a54..96f80fc64 100644 --- a/en/query-status.md +++ b/en/dm-query-status.md @@ -1,7 +1,7 @@ --- title: Query Status summary: Learn how to query the status of a data replication task. -aliases: ['/docs/tidb-data-migration/dev/query-status/','/tidb-data-migration/dev/query-error/'] +aliases: ['/docs/tidb-data-migration/dev/query-status/','/tidb-data-migration/dev/query-error/','/tidb-data-migration/dev/query-status/'] --- # Query Status diff --git a/en/resume-task.md b/en/dm-resume-task.md similarity index 96% rename from en/resume-task.md rename to en/dm-resume-task.md index abf58434d..9697298fc 100644 --- a/en/resume-task.md +++ b/en/dm-resume-task.md @@ -1,6 +1,7 @@ --- title: Resume a Data Migration Task summary: Learn how to resume a data migration task. 
+aliases: ['/tidb-data-migration/dev/resume-task/'] --- # Resume a Data Migration Task diff --git a/en/source-configuration-file.md b/en/dm-source-configuration-file.md similarity index 97% rename from en/source-configuration-file.md rename to en/dm-source-configuration-file.md index 426184931..fc85f8d3f 100644 --- a/en/source-configuration-file.md +++ b/en/dm-source-configuration-file.md @@ -1,7 +1,7 @@ --- title: Upstream Database Configuration File summary: Learn the configuration file of the upstream database -aliases: ['/docs/tidb-data-migration/dev/source-configuration-file/'] +aliases: ['/docs/tidb-data-migration/dev/source-configuration-file/','/tidb-data-migration/dev/source-configuration-file/'] --- # Upstream Database Configuration File @@ -111,4 +111,4 @@ Starting from DM v2.0.2, you can configure binlog event filters in the source co | Parameter | Description | | :------------ | :--------------------------------------- | | `case-sensitive` | Determines whether the filtering rules are case-sensitive. The default value is `false`. | -| `filters` | Sets binlog event filtering rules. For details, see [Binlog event filter parameter explanation](key-features.md#parameter-explanation-2). | +| `filters` | Sets binlog event filtering rules. For details, see [Binlog event filter parameter explanation](dm-key-features.md#parameter-explanation-2). | diff --git a/en/stop-task.md b/en/dm-stop-task.md similarity index 92% rename from en/stop-task.md rename to en/dm-stop-task.md index 540c007fe..f29ab129d 100644 --- a/en/stop-task.md +++ b/en/dm-stop-task.md @@ -1,11 +1,12 @@ --- title: Stop a Data Migration Task summary: Learn how to stop a data migration task. +aliases: ['/tidb-data-migration/dev/stop-task/'] --- # Stop a Data Migration Task -You can use the `stop-task` command to stop a data migration task. For differences between `stop-task` and `pause-task`, refer to [Pause a Data Migration Task](pause-task.md). 
+You can use the `stop-task` command to stop a data migration task. For differences between `stop-task` and `pause-task`, refer to [Pause a Data Migration Task](dm-pause-task.md). {{< copyable "" >}} diff --git a/en/task-configuration-guide.md b/en/dm-task-configuration-guide.md similarity index 97% rename from en/task-configuration-guide.md rename to en/dm-task-configuration-guide.md index 9ce393bfd..97557511d 100644 --- a/en/task-configuration-guide.md +++ b/en/dm-task-configuration-guide.md @@ -1,6 +1,7 @@ --- title: Data Migration Task Configuration Guide summary: Learn how to configure a data migration task in Data Migration (DM). +aliases: ['/tidb-data-migration/dev/task-configuration-guide/'] --- # Data Migration Task Configuration Guide @@ -11,9 +12,9 @@ This document introduces how to configure a data migration task in Data Migratio Before configuring the data sources to be migrated for the task, you need to first make sure that DM has loaded the configuration files of the corresponding data sources. The following are some operation references: -- To view the data source, you can refer to [Check the data source configuration](manage-source.md#check-data-source-configurations). +- To view the data source, you can refer to [Check the data source configuration](dm-manage-source.md#check-data-source-configurations). - To create a data source, you can refer to [Create data source](migrate-data-using-dm.md#step-3-create-data-source). -- To generate a data source configuration file, you can refer to [Source configuration file introduction](source-configuration-file.md). +- To generate a data source configuration file, you can refer to [Source configuration file introduction](dm-source-configuration-file.md). 
The following example of `mysql-instances` shows how to configure data sources that need to be migrated for the data migration task: @@ -78,7 +79,7 @@ To configure the block and allow list of data source tables for the data migrati tbl-name: "log" ``` - For detailed configuration rules, see [Block and allow table lists](key-features.md#block-and-allow-table-lists). + For detailed configuration rules, see [Block and allow table lists](dm-key-features.md#block-and-allow-table-lists). 2. Reference the block and allow list rules in the data source configuration to filter tables to be migrated. @@ -113,7 +114,7 @@ To configure the filters of binlog events for the data migration task, perform t action: Do ``` - For detailed configuration rules, see [Binlog event filter](key-features.md#binlog-event-filter). + For detailed configuration rules, see [Binlog event filter](dm-key-features.md#binlog-event-filter). 2. Reference the binlog event filtering rules in the data source configuration to filter specified binlog events of specified tables or schemas in the data source. @@ -151,7 +152,7 @@ To configure the routing mapping rules for migrating data source tables to speci target-schema: "test" ``` - For detailed configuration rules, see [Table Routing](key-features.md#table-routing). + For detailed configuration rules, see [Table Routing](dm-key-features.md#table-routing). 2. Reference the routing mapping rules in the data source configuration to filter tables to be migrated. @@ -186,7 +187,7 @@ shard-mode: "pessimistic" # The shard merge mode. Optional modes are ""/"p ## Other configurations -The following is an overall task configuration example of this document. The complete task configuration template can be found in [DM task configuration file full introduction](task-configuration-file-full.md). For the usage and configuration of other configuration items, refer to [Features of Data Migration](key-features.md). 
+The following is an overall task configuration example of this document. The complete task configuration template can be found in [DM task configuration file full introduction](task-configuration-file-full.md). For the usage and configuration of other configuration items, refer to [Features of Data Migration](dm-key-features.md). ```yaml --- diff --git a/en/tune-configuration.md b/en/dm-tune-configuration.md similarity index 98% rename from en/tune-configuration.md rename to en/dm-tune-configuration.md index 09b1bd2d1..6209fab68 100644 --- a/en/tune-configuration.md +++ b/en/dm-tune-configuration.md @@ -1,6 +1,7 @@ --- title: Optimize Configuration of DM summary: Learn how to optimize the configuration of the data migration task to improve the performance of data migration. +aliases: ['/tidb-data-migration/dev/tune-configuration/'] --- # Optimize Configuration of DM diff --git a/en/feature-expression-filter.md b/en/feature-expression-filter.md index d91b071bf..8003d1c5e 100644 --- a/en/feature-expression-filter.md +++ b/en/feature-expression-filter.md @@ -6,7 +6,7 @@ title: Filter Certain Row Changes Using SQL Expressions ## Overview -In the process of data migration, DM provides the [Binlog Event Filter](key-features.md#binlog-event-filter) feature to filter certain types of binlog events. For example, for archiving or auditing purposes, `DELETE` event might be filtered when data is migrated to the downstream. However, Binlog Event Filter cannot judge with a greater granularity whether the `DELETE` event of a certain row should be filtered. +In the process of data migration, DM provides the [Binlog Event Filter](dm-key-features.md#binlog-event-filter) feature to filter certain types of binlog events. For example, for archiving or auditing purposes, `DELETE` event might be filtered when data is migrated to the downstream. However, Binlog Event Filter cannot judge with a greater granularity whether the `DELETE` event of a certain row should be filtered. 
To solve the above issue, DM supports filtering certain row changes using SQL expressions. The binlog in the `ROW` format supported by DM has the values of all columns in binlog events. You can configure SQL expressions according to these values. If the SQL expressions evaluate a row change as `TRUE`, DM will not migrate the row change downstream. @@ -16,7 +16,7 @@ To solve the above issue, DM supports filtering certain row changes using SQL ex ## Configuration example -Similar to [Binlog Event Filter](key-features.md#binlog-event-filter), you also need to configure the expression-filter feature in the configuration file of the data migration task, as shown below. For complete configuration and its descriptions, refer to [DM Advanced Task Configuration File](task-configuration-file-full.md#task-configuration-file-template-advanced): +Similar to [Binlog Event Filter](dm-key-features.md#binlog-event-filter), you also need to configure the expression-filter feature in the configuration file of the data migration task, as shown below. For complete configuration and its descriptions, refer to [DM Advanced Task Configuration File](task-configuration-file-full.md#task-configuration-file-template-advanced): ```yml name: test diff --git a/en/feature-shard-merge-pessimistic.md b/en/feature-shard-merge-pessimistic.md index 97393de62..8d0f6a6b3 100644 --- a/en/feature-shard-merge-pessimistic.md +++ b/en/feature-shard-merge-pessimistic.md @@ -25,7 +25,7 @@ DM has the following sharding DDL usage restrictions in the pessimistic mode: - A single `RENAME TABLE` statement can only involve a single `RENAME` operation. - The sharding group migration task requires each DDL statement to involve operations on only one table. 
- The table schema of each sharded table must be the same at the starting point of the incremental replication task, so as to make sure the DML statements of different sharded tables can be migrated into the downstream with a definite table schema, and the subsequent sharding DDL statements can be correctly matched and migrated. -- If you need to change the [table routing](key-features.md#table-routing) rule, you have to wait for the migration of all sharding DDL statements to complete. +- If you need to change the [table routing](dm-key-features.md#table-routing) rule, you have to wait for the migration of all sharding DDL statements to complete. - During the migration of sharding DDL statements, an error is reported if you use `dmctl` to change `router-rules`. - If you need to `CREATE` a new table to a sharding group where DDL statements are being executed, you have to make sure that the table schema is the same as the newly modified table schema. - For example, both the original `table_1` and `table_2` have two columns (a, b) initially, and have three columns (a, b, c) after the sharding DDL operation, so after the migration the newly created table should also have three columns (a, b, c). @@ -75,7 +75,7 @@ The characteristics of DM handling the sharding DDL migration among multiple DM- - After receiving the DDL statement from the binlog event, each DM-worker sends the DDL information to `DM-master`. - `DM-master` creates or updates the DDL lock based on the DDL information received from each DM-worker and the sharding group information. - If all members of the sharding group receive a same specific DDL statement, this indicates that all DML statements before the DDL execution on the upstream sharded tables have been completely migrated, and this DDL statement can be executed. Then DM can continue to migrate the subsequent DML statements. 
-- After being converted by the [table router](key-features.md#table-routing), the DDL statement of the upstream sharded tables must be consistent with the DDL statement to be executed in the downstream. Therefore, this DDL statement only needs to be executed once by the DDL owner and all other DM-workers can ignore this DDL statement. +- After being converted by the [table router](dm-key-features.md#table-routing), the DDL statement of the upstream sharded tables must be consistent with the DDL statement to be executed in the downstream. Therefore, this DDL statement only needs to be executed once by the DDL owner and all other DM-workers can ignore this DDL statement. In the above example, only one sharded table needs to be merged in the upstream MySQL instance corresponding to each DM-worker. But in actual scenarios, there might be multiple sharded tables in multiple sharded schemas to be merged in one MySQL instance. And when this happens, it becomes more complex to coordinate the sharding DDL migration. diff --git a/en/handle-failed-ddl-statements.md b/en/handle-failed-ddl-statements.md index 7eecbba54..ff6b31ceb 100644 --- a/en/handle-failed-ddl-statements.md +++ b/en/handle-failed-ddl-statements.md @@ -29,7 +29,7 @@ When you use dmctl to manually handle the failed DDL statements, the commonly us ### query-status -The `query-status` command is used to query the current status of items such as the subtask and the relay unit in each MySQL instance. For details, see [query status](query-status.md). +The `query-status` command is used to query the current status of items such as the subtask and the relay unit in each MySQL instance. For details, see [query status](dm-query-status.md). 
### handle-error

diff --git a/en/maintain-dm-using-tiup.md b/en/maintain-dm-using-tiup.md
index ff3ecd310..bb1d94291 100644
--- a/en/maintain-dm-using-tiup.md
+++ b/en/maintain-dm-using-tiup.md
@@ -179,7 +179,7 @@ For example, to scale out a DM-worker node in the `prod-cluster` cluster, take t
> **Note:**
>
-> Since v2.0.5, dmctl support [Export and Import Data Sources and Task Configuration of Clusters](export-import-config.md)。
+> Since v2.0.5, dmctl supports [Export and Import Data Sources and Task Configuration of Clusters](dm-export-import-config.md).
>
> Before upgrading, you can use `config export` to export the configuration files of clusters. After upgrading, if you need to downgrade to an earlier version, you can first redeploy the earlier cluster and then use `config import` to import the previous configuration files.
>
diff --git a/en/manually-upgrade-dm-1.0-to-2.0.md b/en/manually-upgrade-dm-1.0-to-2.0.md
index a2f989ffd..cc636f9c7 100644
--- a/en/manually-upgrade-dm-1.0-to-2.0.md
+++ b/en/manually-upgrade-dm-1.0-to-2.0.md
@@ -25,7 +25,7 @@ The prepared configuration files of v2.0+ include the configuration files of the
### Upstream database configuration file

-In v2.0+, the [upstream database configuration file](source-configuration-file.md) is separated from the process configuration of the DM-worker, so you need to obtain the source configuration based on the [v1.0.x DM-worker configuration](https://docs.pingcap.com/tidb-data-migration/stable/dm-worker-configuration-file).
+In v2.0+, the [upstream database configuration file](dm-source-configuration-file.md) is separated from the process configuration of the DM-worker, so you need to obtain the source configuration based on the [v1.0.x DM-worker configuration](https://docs.pingcap.com/tidb-data-migration/stable/dm-worker-configuration-file).
> **Note:**
>
@@ -98,7 +98,7 @@ from:
### Data migration task configuration file

-For [data migration task configuration guide](task-configuration-guide.md), v2.0+ is basically compatible with v1.0.x. You can directly copy the configuration of v1.0.x.
+For [data migration task configuration guide](dm-task-configuration-guide.md), v2.0+ is basically compatible with v1.0.x. You can directly copy the configuration of v1.0.x.

## Step 2: Deploy the v2.0+ cluster

@@ -116,7 +116,7 @@ If the original v1.0.x cluster is deployed by binary, you can stop the DM-worker
## Step 4: Upgrade data migration task

-1. Use the [`operate-source`](manage-source.md#operate-data-source) command to load the upstream database source configuration from [step 1](#step-1-prepare-v20-configuration-file) into the v2.0+ cluster.
+1. Use the [`operate-source`](dm-manage-source.md#operate-data-source) command to load the upstream database source configuration from [step 1](#step-1-prepare-v20-configuration-file) into the v2.0+ cluster.

2. In the downstream TiDB cluster, obtain the corresponding global checkpoint information from the incremental checkpoint table of the v1.0.x data migration task.

@@ -158,8 +158,8 @@ If the original v1.0.x cluster is deployed by binary, you can stop the DM-worker
>
> If `enable-gtid` is enabled in the source configuration, currently you need to parse the binlog or relay log file to obtain the GTID sets corresponding to the binlog position, and set it to `binlog-gtid` in the `meta`.

-4. Use the [`start-task`](create-task.md) command to start the upgraded data migration task through the v2.0+ data migration task configuration file.
+4. Use the [`start-task`](dm-create-task.md) command to start the upgraded data migration task through the v2.0+ data migration task configuration file.

-5. Use the [`query-status`](query-status.md) command to confirm whether the data migration task is running normally.
+5. Use the [`query-status`](dm-query-status.md) command to confirm whether the data migration task is running normally.

If the data migration task runs normally, it indicates that the DM upgrade to v2.0+ is successful.

diff --git a/en/migrate-data-using-dm.md b/en/migrate-data-using-dm.md
index 1fe38c618..5a98a3d83 100644
--- a/en/migrate-data-using-dm.md
+++ b/en/migrate-data-using-dm.md
@@ -14,7 +14,7 @@ It is recommended to [deploy the DM cluster using TiUP](deploy-a-dm-cluster-usin
> **Note:**
>
-> - For database passwords in all the DM configuration files, it is recommended to use the passwords encrypted by `dmctl`. If a database password is empty, it is unnecessary to encrypt it. See [Encrypt the database password using dmctl](manage-source.md#encrypt-the-database-password).
+> - For database passwords in all the DM configuration files, it is recommended to use the passwords encrypted by `dmctl`. If a database password is empty, it is unnecessary to encrypt it. See [Encrypt the database password using dmctl](dm-manage-source.md#encrypt-the-database-password).
> - The user of the upstream and downstream databases must have the corresponding read and write privileges.

## Step 2: Check the cluster information

@@ -37,7 +37,7 @@ After the DM cluster is deployed using TiUP, the configuration information is li
| Upstream MySQL-2 | 172.16.10.82 | 3306 | root | VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU= |
| Downstream TiDB | 172.16.10.83 | 4000 | root | |
-The list of privileges needed on the MySQL host can be found in the [precheck](precheck.md) documentation.
+The list of privileges needed on the MySQL host can be found in the [precheck](dm-precheck.md) documentation.

## Step 3: Create data source

@@ -124,7 +124,7 @@ To detect possible errors of data migration configuration in advance, DM provide
- DM automatically checks the corresponding privileges and configuration while starting the data migration task.
- You can also use the `check-task` command to manually precheck whether the upstream MySQL instance configuration satisfies the DM requirements. -For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](precheck.md). +For details about the precheck feature, see [Precheck the upstream MySQL instance configuration](dm-precheck.md). > **Note:** > diff --git a/en/migrate-from-mysql-aurora.md b/en/migrate-from-mysql-aurora.md index c87752cfa..7268bee07 100644 --- a/en/migrate-from-mysql-aurora.md +++ b/en/migrate-from-mysql-aurora.md @@ -68,7 +68,7 @@ If GTID is enabled in Aurora, you can migrate data based on GTID. For how to ena > **Note:** > > + GTID-based data migration requires MySQL 5.7 (Aurora 2.04) version or later. -> + In addition to the Aurora-specific configuration above, the upstream database must meet other requirements for migrating from MySQL, such as table schemas, character sets, and privileges. See [Checking Items](precheck.md#checking-items) for details. +> + In addition to the Aurora-specific configuration above, the upstream database must meet other requirements for migrating from MySQL, such as table schemas, character sets, and privileges. See [Checking Items](dm-precheck.md#checking-items) for details. ## Step 2: Deploy the DM cluster @@ -122,7 +122,7 @@ The number of `master`s and `worker`s in the returned result is consistent with > **Note:** > -> The configuration file used by DM supports database passwords in plaintext or ciphertext. It is recommended to use password encrypted using dmctl. To obtain the ciphertext password, see [Encrypt the database password using dmctl](manage-source.md#encrypt-the-database-password). +> The configuration file used by DM supports database passwords in plaintext or ciphertext. It is recommended to use password encrypted using dmctl. To obtain the ciphertext password, see [Encrypt the database password using dmctl](dm-manage-source.md#encrypt-the-database-password). 
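The data source configuration files mentioned in the note follow the upstream database configuration format. As a minimal sketch (all connection values below are placeholders, not taken from this patch), such a file looks like:

```yaml
source-id: "mysql-replica-01"     # referenced later by the task configuration
enable-gtid: true                 # set according to the upstream GTID setting
from:
  host: "127.0.0.1"               # placeholder upstream address
  port: 3306
  user: "root"                    # placeholder user
  password: "encrypted-password"  # ciphertext generated by dmctl is recommended
```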
Save the following configuration files of data source according to the example, in which the value of `source-id` will be used in the task configuration in [step 4](#step-4-configure-the-task).

diff --git a/en/quick-create-migration-task.md b/en/quick-create-migration-task.md
index 138950815..1267bd05e 100644
--- a/en/quick-create-migration-task.md
+++ b/en/quick-create-migration-task.md
@@ -17,7 +17,7 @@ This document introduces how to configure a data migration task in different sce
In addition to scenario-based documents, you can also refer to the following ones:

- For a complete example of data migration task configuration, refer to [DM Advanced Task Configuration File](task-configuration-file-full.md).
-- For a data migration task configuration guide, refer to [Data Migration Task Configuration Guide](task-configuration-guide.md).
+- For a data migration task configuration guide, refer to [Data Migration Task Configuration Guide](dm-task-configuration-guide.md).

## Migrate Data from Multiple Data Sources to TiDB

diff --git a/en/quick-start-create-source.md b/en/quick-start-create-source.md
index 4bf148015..51fc1c95e 100644
--- a/en/quick-start-create-source.md
+++ b/en/quick-start-create-source.md
@@ -11,7 +11,7 @@ summary: Learn how to create a data source for Data Migration (DM).
The document describes how to create a data source for the data migration task of TiDB Data Migration (DM).

-A data source contains the information for accessing the upstream migration task. Because a data migration task requires referring its corresponding data source to obtain the configuration information of access, you need to create the data source of a task before creating a data migration task. For specific data source management commands, refer to [Manage Data Source Configurations](manage-source.md).
+A data source contains the information for accessing the upstream migration task. Because a data migration task requires referring to its corresponding data source to obtain the configuration information of access, you need to create the data source of a task before creating a data migration task. For specific data source management commands, refer to [Manage Data Source Configurations](dm-manage-source.md).

## Step 1: Configure the data source

@@ -57,7 +57,7 @@ You can use the following command to create a data source:
tiup dmctl --master-addr operate-source create ./source-mysql-01.yaml
```
-For other configuration parameters, refer to [Upstream Database Configuration File](source-configuration-file.md).
+For other configuration parameters, refer to [Upstream Database Configuration File](dm-source-configuration-file.md).

The returned results are as follows:

diff --git a/en/relay-log.md b/en/relay-log.md
index 395050aaf..267542104 100644
--- a/en/relay-log.md
+++ b/en/relay-log.md
@@ -90,7 +90,7 @@ The starting position of the relay log migration is determined by the following
> **Note:**
>
-> Since DM v2.0.2, the configuration item `enable-relay` in the source configuration file is no longer valid. If DM finds that `enable-relay` is set to `true` when [loading the data source configuration](manage-source.md#operate-data-source), it outputs the following message:
+> Since DM v2.0.2, the configuration item `enable-relay` in the source configuration file is no longer valid. If DM finds that `enable-relay` is set to `true` when [loading the data source configuration](dm-manage-source.md#operate-data-source), it outputs the following message:
>
> ```
> Please use `start-relay` to specify which workers should pull relay log of relay-enabled sources.
@@ -132,7 +132,7 @@ In the command `start-relay`, you can configure one or more DM-workers to migrat
In DM versions earlier than v2.0.2 (not including v2.0.2), DM checks the configuration item `enable-relay` in the source configuration file when binding a DM-worker to an upstream data source.
If `enable-relay` is set to `true`, DM enables the relay log feature for the data source. -See [Upstream Database Configuration File](source-configuration-file.md) for how to set the configuration item `enable-relay`. +See [Upstream Database Configuration File](dm-source-configuration-file.md) for how to set the configuration item `enable-relay`. diff --git a/en/task-configuration-file-full.md b/en/task-configuration-file-full.md index 5f1919071..00cc225cf 100644 --- a/en/task-configuration-file-full.md +++ b/en/task-configuration-file-full.md @@ -7,11 +7,11 @@ aliases: ['/docs/tidb-data-migration/dev/task-configuration-file-full/','/docs/t This document introduces the advanced task configuration file of Data Migration (DM), including [global configuration](#global-configuration) and [instance configuration](#instance-configuration). -For the feature and configuration of each configuration item, see [Data migration features](overview.md#basic-features). +For the feature and configuration of each configuration item, see [Data migration features](dm-overview.md#basic-features). ## Important concepts -For description of important concepts including `source-id` and the DM-worker ID, see [Important concepts](config-overview.md#important-concepts). +For description of important concepts including `source-id` and the DM-worker ID, see [Important concepts](dm-config-overview.md#important-concepts). ## Task configuration file template (advanced) @@ -178,9 +178,9 @@ Arguments in each feature configuration set are explained in the comments in the | Parameter | Description | | :------------ | :--------------------------------------- | -| `routes` | The routing mapping rule set between the upstream and downstream tables. If the names of the upstream and downstream schemas and tables are the same, this item does not need to be configured. See [Table Routing](key-features.md#table-routing) for usage scenarios and sample configurations. 
| -| `filters` | The binlog event filter rule set of the matched table of the upstream database instance. If binlog filtering is not required, this item does not need to be configured. See [Binlog Event Filter](key-features.md#binlog-event-filter) for usage scenarios and sample configurations. | -| `block-allow-list` | The filter rule set of the block allow list of the matched table of the upstream database instance. It is recommended to specify the schemas and tables that need to be migrated through this item, otherwise all schemas and tables are migrated. See [Binlog Event Filter](key-features.md#binlog-event-filter) and [Block & Allow Lists](key-features.md#block-and-allow-table-lists) for usage scenarios and sample configurations. | +| `routes` | The routing mapping rule set between the upstream and downstream tables. If the names of the upstream and downstream schemas and tables are the same, this item does not need to be configured. See [Table Routing](dm-key-features.md#table-routing) for usage scenarios and sample configurations. | +| `filters` | The binlog event filter rule set of the matched table of the upstream database instance. If binlog filtering is not required, this item does not need to be configured. See [Binlog Event Filter](dm-key-features.md#binlog-event-filter) for usage scenarios and sample configurations. | +| `block-allow-list` | The filter rule set of the block allow list of the matched table of the upstream database instance. It is recommended to specify the schemas and tables that need to be migrated through this item, otherwise all schemas and tables are migrated. See [Binlog Event Filter](dm-key-features.md#binlog-event-filter) and [Block & Allow Lists](dm-key-features.md#block-and-allow-table-lists) for usage scenarios and sample configurations. | | `mydumpers` | Configuration arguments of dump processing unit. If the default configuration is sufficient for your needs, this item does not need to be configured. 
Or you can configure `thread` only using `mydumper-thread`. | | `loaders` | Configuration arguments of load processing unit. If the default configuration is sufficient for your needs, this item does not need to be configured. Or you can configure `pool-size` only using `loader-thread`. | | `syncers` | Configuration arguments of sync processing unit. If the default configuration is sufficient for your needs, this item does not need to be configured. Or you can configure `worker-count` only using `syncer-thread`. | diff --git a/en/task-configuration-file.md b/en/task-configuration-file.md index ca10b1178..1ef591c75 100644 --- a/en/task-configuration-file.md +++ b/en/task-configuration-file.md @@ -10,11 +10,11 @@ This document introduces the basic task configuration file of Data Migration (DM DM also implements [an advanced task configuration file](task-configuration-file-full.md) which provides greater flexibility and more control over DM. -For the feature and configuration of each configuration item, see [Data migration features](key-features.md). +For the feature and configuration of each configuration item, see [Data migration features](dm-key-features.md). ## Important concepts -For description of important concepts including `source-id` and the DM-worker ID, see [Important concepts](config-overview.md#important-concepts). +For description of important concepts including `source-id` and the DM-worker ID, see [Important concepts](dm-config-overview.md#important-concepts). ## Task configuration file template (basic) @@ -80,7 +80,7 @@ Refer to the comments in the [template](#task-configuration-file-template-basic) ### Feature configuration set -For basic applications, you only need to modify the block and allow lists filtering rule. Refer to the comments about `block-allow-list` in the [template](#task-configuration-file-template-basic) or [Block & allow table lists](key-features.md#block-and-allow-table-lists) to see more details. 
+For basic applications, you only need to modify the block and allow lists filtering rule. Refer to the comments about `block-allow-list` in the [template](#task-configuration-file-template-basic) or [Block & allow table lists](dm-key-features.md#block-and-allow-table-lists) to see more details. ## Instance configuration diff --git a/en/usage-scenario-downstream-more-columns.md b/en/usage-scenario-downstream-more-columns.md index 9bb5f7ee5..fae473e87 100644 --- a/en/usage-scenario-downstream-more-columns.md +++ b/en/usage-scenario-downstream-more-columns.md @@ -48,7 +48,7 @@ Otherwise, after creating the task, the following data migration errors occur wh The reason for the above errors is that when DM migrates the binlog event, if DM has not maintained internally the table schema corresponding to that table, DM tries to use the current table schema in the downstream to parse the binlog event and generate the corresponding DML statement. If the number of columns in the binlog event is inconsistent with the number of columns in the downstream table schema, the above error might occur. -In such cases, you can execute the [`operate-schema`](manage-schema.md) command to specify for the table a table schema that matches the binlog event. If you are migrating sharded tables, you need to configure the table schema in DM for parsing MySQL binlog for each sharded tables according to the following steps: +In such cases, you can execute the [`operate-schema`](dm-manage-schema.md) command to specify for the table a table schema that matches the binlog event. If you are migrating sharded tables, you need to configure the table schema in DM for parsing MySQL binlog for each sharded tables according to the following steps: 1. Specify the table schema for the table `log.messages` to be migrated in the data source. The table schema needs to correspond to the data of the binlog event to be replicated by DM. Then save the `CREATE TABLE` table schema statement in a file. 
For example, save the following table schema in the `log.messages.sql` file: @@ -60,7 +60,7 @@ In such cases, you can execute the [`operate-schema`](manage-schema.md) command ) ``` -2. Execute the [`operate-schema`](manage-schema.md) command to set the table schema. At this time, the task should be in the `Paused` state because of the above error. +2. Execute the [`operate-schema`](dm-manage-schema.md) command to set the table schema. At this time, the task should be in the `Paused` state because of the above error. {{< copyable "shell-regular" >}} @@ -68,6 +68,6 @@ In such cases, you can execute the [`operate-schema`](manage-schema.md) command tiup dmctl --master-addr operate-schema set -s mysql-01 task-test -d log -t message log.message.sql ``` -3. Execute the [`resume-task`](resume-task.md) command to resume the `Paused` task. +3. Execute the [`resume-task`](dm-resume-task.md) command to resume the `Paused` task. -4. Execute the [`query-status`](query-status.md) command to check whether the data migration task is running normally. +4. Execute the [`query-status`](dm-query-status.md) command to check whether the data migration task is running normally. diff --git a/en/usage-scenario-shard-merge.md b/en/usage-scenario-shard-merge.md index cf1151594..a50999462 100644 --- a/en/usage-scenario-shard-merge.md +++ b/en/usage-scenario-shard-merge.md @@ -81,7 +81,7 @@ In the above structure, `sid` is the shard key, which can ensure that the same ` ## Migration solution -- To satisfy the migration requirements #1, you do not need to configure the [table routing rule](key-features.md#table-routing). You need to manually create a table based on the requirements in the section [Remove the `PRIMARY KEY` attribute from the column](shard-merge-best-practices.md#remove-the-primary-key-attribute-from-the-column): +- To satisfy the migration requirements #1, you do not need to configure the [table routing rule](dm-key-features.md#table-routing). 
You need to manually create a table based on the requirements in the section [Remove the `PRIMARY KEY` attribute from the column](shard-merge-best-practices.md#remove-the-primary-key-attribute-from-the-column): {{< copyable "sql" >}} @@ -104,7 +104,7 @@ In the above structure, `sid` is the shard key, which can ensure that the same ` ignore-checking-items: ["auto_increment_ID"] ``` -- To satisfy the migration requirement #2, configure the [table routing rule](key-features.md#table-routing) as follows: +- To satisfy the migration requirement #2, configure the [table routing rule](dm-key-features.md#table-routing) as follows: {{< copyable "" >}} @@ -121,7 +121,7 @@ In the above structure, `sid` is the shard key, which can ensure that the same ` target-table: "sale" ``` -- To satisfy the migration requirements #3, configure the [Block and allow table lists](key-features.md#block-and-allow-table-lists) as follows: +- To satisfy the migration requirements #3, configure the [Block and allow table lists](dm-key-features.md#block-and-allow-table-lists) as follows: {{< copyable "" >}} @@ -134,7 +134,7 @@ In the above structure, `sid` is the shard key, which can ensure that the same ` tbl-name: "log_bak" ``` -- To satisfy the migration requirement #4, configure the [binlog event filter rule](key-features.md#binlog-event-filter) as follows: +- To satisfy the migration requirement #4, configure the [binlog event filter rule](dm-key-features.md#binlog-event-filter) as follows: {{< copyable "" >}} @@ -154,7 +154,7 @@ In the above structure, `sid` is the shard key, which can ensure that the same ` ## Migration task configuration -The complete configuration of the migration task is shown as follows. For more details, see [Data Migration Task Configuration Guide](task-configuration-guide.md). +The complete configuration of the migration task is shown as follows. For more details, see [Data Migration Task Configuration Guide](dm-task-configuration-guide.md). 
{{< copyable "" >}} diff --git a/en/usage-scenario-simple-migration.md b/en/usage-scenario-simple-migration.md index b183ded07..7772b050f 100644 --- a/en/usage-scenario-simple-migration.md +++ b/en/usage-scenario-simple-migration.md @@ -61,7 +61,7 @@ Assume that the schemas migrated to the downstream are as follows: ## Migration solution -- To satisfy migration Requirements #1-i, #1-ii and #1-iii, configure the [table routing rules](key-features.md#table-routing) as follows: +- To satisfy migration Requirements #1-i, #1-ii and #1-iii, configure the [table routing rules](dm-key-features.md#table-routing) as follows: ```yaml routes: @@ -77,7 +77,7 @@ Assume that the schemas migrated to the downstream are as follows: target-schema: "user_south" ``` -- To satisfy the migration Requirement #2-i, configure the [table routing rules](key-features.md#table-routing) as follows: +- To satisfy the migration Requirement #2-i, configure the [table routing rules](dm-key-features.md#table-routing) as follows: ```yaml routes: @@ -94,7 +94,7 @@ Assume that the schemas migrated to the downstream are as follows: target-table: "store_shenzhen" ``` -- To satisfy the migration Requirement #1-iv, configure the [binlog filtering rules](key-features.md#binlog-event-filter) as follows: +- To satisfy the migration Requirement #1-iv, configure the [binlog filtering rules](dm-key-features.md#binlog-event-filter) as follows: ```yaml filters: @@ -110,7 +110,7 @@ Assume that the schemas migrated to the downstream are as follows: action: Ignore ``` -- To satisfy the migration Requirement #2-ii, configure the [binlog filtering rule](key-features.md#binlog-event-filter) as follows: +- To satisfy the migration Requirement #2-ii, configure the [binlog filtering rule](dm-key-features.md#binlog-event-filter) as follows: ```yaml filters: @@ -125,7 +125,7 @@ Assume that the schemas migrated to the downstream are as follows: > > `store-filter-rule` is different from `log-filter-rule & user-filter-rule`. 
`store-filter-rule` is a rule for the whole `store` schema, while `log-filter-rule` and `user-filter-rule` are rules for the `log` table in the `user` schema. -- To satisfy the migration Requirement #3, configure the [block and allow lists](key-features.md#block-and-allow-table-lists) as follows: +- To satisfy the migration Requirement #3, configure the [block and allow lists](dm-key-features.md#block-and-allow-table-lists) as follows: ```yaml block-allow-list: # Use black-white-list if the DM version is earlier than or equal to v2.0.0-beta.2. @@ -135,7 +135,7 @@ Assume that the schemas migrated to the downstream are as follows: ## Migration task configuration -The complete migration task configuration is shown below. For more details, see [data migration task configuration guide](task-configuration-guide.md). +The complete migration task configuration is shown below. For more details, see [data migration task configuration guide](dm-task-configuration-guide.md). ```yaml name: "one-tidb-secondary" diff --git a/zh/TOC.md b/zh/TOC.md index dd52a7c06..315ffcc25 100644 --- a/zh/TOC.md +++ b/zh/TOC.md @@ -2,12 +2,12 @@ - 关于 DM - - [DM 简介](overview.md) + - [DM 简介](dm-overview.md) - [DM 5.3 Release Notes](releases/5.3.0.md) - 基本功能 - - [Table routing](key-features.md#table-routing) - - [Block & Allow Lists](key-features.md#block--allow-table-lists) - - [Binlog Event Filter](key-features.md#binlog-event-filter) + - [Table routing](dm-key-features.md#table-routing) + - [Block & Allow Lists](dm-key-features.md#block--allow-table-lists) + - [Binlog Event Filter](dm-key-features.md#binlog-event-filter) - 高级功能 - 分库分表合并迁移 - [概述](feature-shard-merge.md) @@ -16,7 +16,7 @@ - [迁移使用 GH-ost/PT-osc 的源数据库](feature-online-ddl.md) - [使用 SQL 表达式过滤某些行变更](feature-expression-filter.md) - [DM 架构](dm-arch.md) - - [性能数据](benchmark-v5.3.0.md) + - [性能数据](dm-benchmark-v5.3.0.md) - 快速上手 - [快速上手试用](quick-start-with-dm.md) - [使用 TiUP 部署 DM 集群](deploy-a-dm-cluster-using-tiup.md) @@ -35,56 +35,56 @@ - 
[使用 Binary](deploy-a-dm-cluster-using-binary.md) - [使用 Kubernetes](https://docs.pingcap.com/zh/tidb-in-kubernetes/dev/deploy-tidb-dm) - [使用 DM 迁移数据](migrate-data-using-dm.md) - - [测试 DM 性能](performance-test.md) + - [测试 DM 性能](dm-performance-test.md) - 运维操作 - 集群运维工具 - [使用 TiUP 运维集群(推荐)](maintain-dm-using-tiup.md) - [使用 dmctl 运维集群](dmctl-introduction.md) - - [使用 OpenAPI 运维集群](open-api.md) + - [使用 OpenAPI 运维集群](dm-open-api.md) - 升级版本 - [1.0.x 到 2.0+ 手动升级](manually-upgrade-dm-1.0-to-2.0.md) - - [管理数据源](manage-source.md) + - [管理数据源](dm-manage-source.md) - 管理迁移任务 - - [任务配置向导](task-configuration-guide.md) - - [任务前置检查](precheck.md) - - [创建任务](create-task.md) - - [查询状态](query-status.md) - - [暂停任务](pause-task.md) - - [恢复任务](resume-task.md) - - [停止任务](stop-task.md) - - [导出和导入集群的数据源和任务配置](export-import-config.md) + - [任务配置向导](dm-task-configuration-guide.md) + - [任务前置检查](dm-precheck.md) + - [创建任务](dm-create-task.md) + - [查询状态](dm-query-status.md) + - [暂停任务](dm-pause-task.md) + - [恢复任务](dm-resume-task.md) + - [停止任务](dm-stop-task.md) + - [导出和导入集群的数据源和任务配置](dm-export-import-config.md) - [处理出错的 DDL 语句](handle-failed-ddl-statements.md) - [手动处理 Sharding DDL Lock](manually-handling-sharding-ddl-locks.md) - - [管理迁移表的表结构](manage-schema.md) - - [处理告警](handle-alerts.md) + - [管理迁移表的表结构](dm-manage-schema.md) + - [处理告警](dm-handle-alerts.md) - [日常巡检](dm-daily-check.md) - 使用场景 - [从 Aurora 迁移数据到 TiDB](migrate-from-mysql-aurora.md) - [TiDB 表结构存在更多列的迁移场景](usage-scenario-downstream-more-columns.md) - [变更同步的 MySQL 实例](usage-scenario-master-slave-switch.md) - 故障处理 - - [故障及处理方法](error-handling.md) - - [性能问题及处理方法](handle-performance-issues.md) + - [故障及处理方法](dm-error-handling.md) + - [性能问题及处理方法](dm-handle-performance-issues.md) - 性能调优 - - [配置调优](tune-configuration.md) + - [配置调优](dm-tune-configuration.md) - 参考指南 - 架构 - [DM 架构简介](dm-arch.md) - [DM-worker 简介](dm-worker-intro.md) - - [DM 命令行参数](command-line-flags.md) + - [DM 命令行参数](dm-command-line-flags.md) - 配置 - - [概述](config-overview.md) + - 
[概述](dm-config-overview.md) - [DM-master 配置](dm-master-configuration-file.md) - [DM-worker 配置](dm-worker-configuration-file.md) - - [上游数据库配置](source-configuration-file.md) - - [数据迁移任务配置向导](task-configuration-guide.md) + - [上游数据库配置](dm-source-configuration-file.md) + - [数据迁移任务配置向导](dm-task-configuration-guide.md) - 安全 - - [为 DM 的连接开启加密传输](enable-tls.md) + - [为 DM 的连接开启加密传输](dm-enable-tls.md) - [生成自签名证书](dm-generate-self-signed-certificates.md) - [监控指标](monitor-a-dm-cluster.md) - [告警信息](dm-alert-rules.md) - - [错误码](error-handling.md#常见故障处理方法) -- [常见问题](faq.md) + - [错误码](dm-error-handling.md#常见故障处理方法) +- [常见问题](dm-faq.md) - [术语表](dm-glossary.md) - 版本发布历史 - v5.3 diff --git a/zh/_index.md b/zh/_index.md index ad6399046..22feb83b7 100644 --- a/zh/_index.md +++ b/zh/_index.md @@ -17,8 +17,8 @@ aliases: ['/docs-cn/tidb-data-migration/dev/'] 关于 TiDB Data Migration -- [什么是 DM?](overview.md) -- [DM 架构](overview.md) +- [什么是 DM?](dm-overview.md) +- [DM 架构](dm-overview.md) - [性能数据](benchmark-v2.0-ga.md) @@ -41,7 +41,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/'] - [使用 TiUP 离线镜像部署集群](deploy-a-dm-cluster-using-tiup-offline.md) - [使用 Binary 部署集群](deploy-a-dm-cluster-using-binary.md) - [使用 DM 迁移数据](migrate-data-using-dm.md) -- [测试 DM 性能](performance-test.md) +- [测试 DM 性能](dm-performance-test.md) @@ -52,7 +52,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/'] - [使用 dmctl 运维集群](dmctl-introduction.md) - [升级版本](manually-upgrade-dm-1.0-to-2.0.md) - [手动处理 Sharding DDL Lock](manually-handling-sharding-ddl-locks.md) -- [处理告警](handle-alerts.md) +- [处理告警](dm-handle-alerts.md) - [日常巡检](dm-daily-check.md) @@ -69,11 +69,11 @@ aliases: ['/docs-cn/tidb-data-migration/dev/'] 参考指南 - [DM 架构](dm-arch.md) -- [DM 命令行参数](command-line-flags.md) -- [配置概述](config-overview.md) +- [DM 命令行参数](dm-command-line-flags.md) +- [配置概述](dm-config-overview.md) - [监控指标](monitor-a-dm-cluster.md) - [告警信息](dm-alert-rules.md) -- [错误码](error-handling.md#常见故障处理方法) +- [错误码](dm-error-handling.md#常见故障处理方法) diff --git 
a/zh/benchmark-v1.0-ga.md b/zh/benchmark-v1.0-ga.md index 0af824c8f..d6b6451bd 100644 --- a/zh/benchmark-v1.0-ga.md +++ b/zh/benchmark-v1.0-ga.md @@ -59,11 +59,11 @@ aliases: ['/docs-cn/tidb-data-migration/dev/benchmark-v1.0-ga/'] ## 测试场景 -可以参考[性能测试](performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.4.40) -> DM-worker -> TiDB (172.16.4.41)。 +可以参考[性能测试](dm-performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.4.40) -> DM-worker -> TiDB (172.16.4.41)。 ### 全量导入性能测试 -可以参考[全量导入性能测试用例](performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 +可以参考[全量导入性能测试用例](dm-performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 #### 全量导入性能测试结果 @@ -106,7 +106,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/benchmark-v1.0-ga/'] ### 增量复制性能测试用例 -使用[增量复制性能测试用例](performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 +使用[增量复制性能测试用例](dm-performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 #### 增量复制性能测试结果 diff --git a/zh/benchmark-v2.0-ga.md b/zh/benchmark-v2.0-ga.md index 32cd77c7c..1e88355e3 100644 --- a/zh/benchmark-v2.0-ga.md +++ b/zh/benchmark-v2.0-ga.md @@ -59,11 +59,11 @@ summary: 了解 DM 2.0-GA 版本的性能。 ## 测试场景 -可以参考[性能测试](performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.5.33) -> DM-worker(172.16.5.32) -> TiDB (172.16.5.34)。 +可以参考[性能测试](dm-performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.5.33) -> DM-worker(172.16.5.32) -> TiDB (172.16.5.34)。 ### 全量导入性能测试 -可以参考[全量导入性能测试用例](performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 +可以参考[全量导入性能测试用例](dm-performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 #### 全量导入性能测试结果 @@ -105,7 +105,7 @@ summary: 了解 DM 2.0-GA 版本的性能。 ### 增量复制性能测试用例 -使用[增量复制性能测试用例](performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 +使用[增量复制性能测试用例](dm-performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 #### 增量复制性能测试结果 diff --git a/zh/dm-alert-rules.md b/zh/dm-alert-rules.md index 2a63a1093..312e6af8e 100644 --- a/zh/dm-alert-rules.md +++ b/zh/dm-alert-rules.md @@ -8,6 +8,6 @@ aliases: ['/docs-cn/tidb-data-migration/dev/alert-rules/','/zh/tidb-data-migrati 使用 TiUP 部署 DM 
集群的时候,会默认部署一套[告警系统](migrate-data-using-dm.md#第-8-步监控任务与查看日志)。 -DM 的告警规则及其对应的处理方法可参考[告警处理](handle-alerts.md)。 +DM 的告警规则及其对应的处理方法可参考[告警处理](dm-handle-alerts.md)。 DM 的告警信息与监控指标均基于 Prometheus,告警规则与监控指标的对应关系可参考 [DM 监控指标](monitor-a-dm-cluster.md)。 \ No newline at end of file diff --git a/zh/benchmark-v5.3.0.md b/zh/dm-benchmark-v5.3.0.md similarity index 93% rename from zh/benchmark-v5.3.0.md rename to zh/dm-benchmark-v5.3.0.md index a460f3aa1..dc39ec8e5 100644 --- a/zh/benchmark-v5.3.0.md +++ b/zh/dm-benchmark-v5.3.0.md @@ -1,6 +1,7 @@ --- title: DM 5.3.0 性能测试报告 summary: 了解 DM 5.3.0 版本的性能。 +aliases: ['/zh/tidb-data-migration/dev/benchmark-v5.3.0/'] --- # DM 5.3.0 性能测试报告 @@ -53,11 +54,11 @@ summary: 了解 DM 5.3.0 版本的性能。 ## 测试场景 -可以参考[性能测试](performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.6.1) -> DM-worker(172.16.6.2) -> TiDB(load balance) (172.16.6.4)。 +可以参考[性能测试](dm-performance-test.md)中介绍的测试场景,测试单个 MySQL 实例到 TiDB 的数据迁移: MySQL1 (172.16.6.1) -> DM-worker(172.16.6.2) -> TiDB(load balance) (172.16.6.4)。 ### 全量导入性能测试 -可以参考[全量导入性能测试用例](performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 +可以参考[全量导入性能测试用例](dm-performance-test.md#全量导入性能测试用例)中介绍的方法进行测试。 #### 全量导入性能测试结果 @@ -99,7 +100,7 @@ summary: 了解 DM 5.3.0 版本的性能。 ### 增量复制性能测试用例 -使用[增量复制性能测试用例](performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 +使用[增量复制性能测试用例](dm-performance-test.md#增量复制性能测试用例)中介绍的方法进行测试。 #### 增量复制性能测试结果 diff --git a/zh/command-line-flags.md b/zh/dm-command-line-flags.md similarity index 98% rename from zh/command-line-flags.md rename to zh/dm-command-line-flags.md index 215165239..adc7075e1 100644 --- a/zh/command-line-flags.md +++ b/zh/dm-command-line-flags.md @@ -1,7 +1,7 @@ --- title: DM 命令行参数 summary: 介绍 DM 各组件的主要命令行参数。 -aliases: ['/docs-cn/tidb-data-migration/dev/command-line-flags/'] +aliases: ['/docs-cn/tidb-data-migration/dev/command-line-flags/','/zh/tidb-data-migration/dev/command-line-flags/'] --- # DM 命令行参数 diff --git a/zh/config-overview.md b/zh/dm-config-overview.md similarity index 74% rename 
from zh/config-overview.md rename to zh/dm-config-overview.md index 7cdcd6191..5ff5661d2 100644 --- a/zh/config-overview.md +++ b/zh/dm-config-overview.md @@ -1,6 +1,6 @@ --- title: DM 配置简介 -aliases: ['/docs-cn/tidb-data-migration/dev/config-overview/'] +aliases: ['/docs-cn/tidb-data-migration/dev/config-overview/','/zh/tidb-data-migration/dev/config-overview/'] --- # DM 配置简介 @@ -11,7 +11,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/config-overview/'] - `dm-master.toml`:DM-master 进程的配置文件,包括 DM-master 的拓扑信息、日志等各项配置。配置说明详见 [DM-master 配置文件介绍](dm-master-configuration-file.md)。 - `dm-worker.toml`:DM-worker 进程的配置文件,包括 DM-worker 的拓扑信息、日志等各项配置。配置说明详见 [DM-worker 配置文件介绍](dm-worker-configuration-file.md)。 -- `source.yaml`:上游数据库 MySQL/MariaDB 相关配置。配置说明详见[上游数据库配置文件介绍](source-configuration-file.md)。 +- `source.yaml`:上游数据库 MySQL/MariaDB 相关配置。配置说明详见[上游数据库配置文件介绍](dm-source-configuration-file.md)。 ## 迁移任务配置 @@ -19,9 +19,9 @@ aliases: ['/docs-cn/tidb-data-migration/dev/config-overview/'] 具体步骤如下: -1. [使用 dmctl 将数据源配置加载到 DM 集群](manage-source.md#数据源操作); -2. 参考[数据任务配置向导](task-configuration-guide.md)来创建 `your_task.yaml`; -3. [使用 dmctl 创建数据迁移任务](create-task.md)。 +1. [使用 dmctl 将数据源配置加载到 DM 集群](dm-manage-source.md#数据源操作); +2. 参考[数据任务配置向导](dm-task-configuration-guide.md)来创建 `your_task.yaml`; +3. 
[使用 dmctl 创建数据迁移任务](dm-create-task.md)。 ### 关键概念 diff --git a/zh/create-task.md b/zh/dm-create-task.md similarity index 91% rename from zh/create-task.md rename to zh/dm-create-task.md index eb60553de..0306c91fe 100644 --- a/zh/create-task.md +++ b/zh/dm-create-task.md @@ -1,12 +1,12 @@ --- title: 创建数据迁移任务 summary: 了解 TiDB Data Migration 如何创建数据迁移任务。 -aliases: ['/docs-cn/tidb-data-migration/dev/create-task/'] +aliases: ['/docs-cn/tidb-data-migration/dev/create-task/','/zh/tidb-data-migration/dev/create-task/'] --- # 创建数据迁移任务 -`start-task` 命令用于创建数据迁移任务。当数据迁移任务启动时,DM 将[自动对相应权限和配置进行前置检查](precheck.md)。 +`start-task` 命令用于创建数据迁移任务。当数据迁移任务启动时,DM 将[自动对相应权限和配置进行前置检查](dm-precheck.md)。 {{< copyable "" >}} diff --git a/zh/dm-daily-check.md b/zh/dm-daily-check.md index c330b41ba..2a5b2f17e 100644 --- a/zh/dm-daily-check.md +++ b/zh/dm-daily-check.md @@ -8,7 +8,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/daily-check/','/zh/tidb-data-migrati 本文总结了 TiDB Data Migration (DM) 工具日常巡检的方法: -+ 方法一:执行 `query-status` 命令查看任务运行状态以及相关错误输出。详见[查询状态](query-status.md)。 ++ 方法一:执行 `query-status` 命令查看任务运行状态以及相关错误输出。详见[查询状态](dm-query-status.md)。 + 方法二:如果使用 TiUP 部署 DM 集群时正确部署了 Prometheus 与 Grafana,如 Grafana 的地址为 `172.16.10.71`,可在浏览器中打开 进入 Grafana,选择 DM 的 Dashboard 即可查看 DM 相关监控项。具体监控指标参照[监控与告警设置](monitor-a-dm-cluster.md)。 diff --git a/zh/enable-tls.md b/zh/dm-enable-tls.md similarity index 98% rename from zh/enable-tls.md rename to zh/dm-enable-tls.md index b29dfcd1e..4fa4e8553 100644 --- a/zh/enable-tls.md +++ b/zh/dm-enable-tls.md @@ -1,6 +1,7 @@ --- title: 为 DM 的连接开启加密传输 summary: 了解如何为 DM 的连接开启加密传输。 +aliases: ['/zh/tidb-data-migration/dev/enable-tls/'] --- # 为 DM 的连接开启加密传输 diff --git a/zh/error-handling.md b/zh/dm-error-handling.md similarity index 97% rename from zh/error-handling.md rename to zh/dm-error-handling.md index 06393848c..f824b5ef8 100644 --- a/zh/error-handling.md +++ b/zh/dm-error-handling.md @@ -1,7 +1,7 @@ --- title: 故障及处理方法 summary: 了解 DM 的错误系统及常见故障的处理方法。 -aliases: 
['/docs-cn/tidb-data-migration/dev/error-handling/','/docs-cn/tidb-data-migration/dev/troubleshoot-dm/','/docs-cn/tidb-data-migration/dev/error-system/'] +aliases: ['/docs-cn/tidb-data-migration/dev/error-handling/','/docs-cn/tidb-data-migration/dev/troubleshoot-dm/','/docs-cn/tidb-data-migration/dev/error-system/','/zh/tidb-data-migration/dev/error-handling/'] --- # 故障及处理方法 @@ -88,7 +88,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/error-handling/','/docs-cn/tidb-data resume-task ${task name} ``` -但在某些情况下,你还需要重置数据迁移任务。有关何时需要重置以及如何重置,详见[重置数据迁移任务](faq.md#如何重置数据迁移任务)。 +但在某些情况下,你还需要重置数据迁移任务。有关何时需要重置以及如何重置,详见[重置数据迁移任务](dm-faq.md#如何重置数据迁移任务)。 ## 常见故障处理方法 @@ -99,8 +99,8 @@ aliases: ['/docs-cn/tidb-data-migration/dev/error-handling/','/docs-cn/tidb-data | `code=10003` | 数据库底层 `invalid connection` 错误,通常表示 DM 到下游 TiDB 的数据库连接出现了异常(如网络故障、TiDB 重启、TiKV busy 等)且当前请求已有部分数据发送到了 TiDB。 | DM 提供针对此类错误的自动恢复。如果未能正常恢复,需要用户进一步检查错误信息并根据具体场景进行分析。 | | `code=10005` | 数据库查询类语句出错 | | | `code=10006` | 数据库 `EXECUTE` 类型语句出错,包括 DDL 和 `INSERT`/`UPDATE`/`DELETE` 类型的 DML。更详细的错误信息可通过错误 message 获取。错误 message 中通常包含操作数据库所返回的错误码和错误信息。 | | -| `code=11006` | DM 内置的 parser 解析不兼容的 DDL 时出错 | 可参考 [Data Migration 故障诊断-处理不兼容的 DDL 语句](faq.md#如何处理不兼容的-ddl-语句) 提供的解决方案 | -| `code=20010` | 处理任务配置时,解密数据库的密码出错 | 检查任务配置中提供的下游数据库密码是否有[使用 dmctl 正确加密](manage-source.md#加密数据库密码) | +| `code=11006` | DM 内置的 parser 解析不兼容的 DDL 时出错 | 可参考 [Data Migration 故障诊断-处理不兼容的 DDL 语句](dm-faq.md#如何处理不兼容的-ddl-语句) 提供的解决方案 | +| `code=20010` | 处理任务配置时,解密数据库的密码出错 | 检查任务配置中提供的下游数据库密码是否有[使用 dmctl 正确加密](dm-manage-source.md#加密数据库密码) | | `code=26002` | 任务检查创建数据库连接失败。更详细的错误信息可通过错误 message 获取。错误 message 中包含操作数据库所返回的错误码和错误信息。 | 检查 DM-master 所在的机器是否有权限访问上游 | | `code=32001` | dump 处理单元异常 | 如果报错 `msg` 包含 `mydumper: argument list too long.`,则需要用户根据 block-allow-list,在 `task.yaml` 的 dump 处理单元的 `extra-args` 参数中手动加上 `--regex` 正则表达式设置要导出的库表。例如,如果要导出所有库中表名字为 `hello` 的表,可加上 `--regex '.*\\.hello$'`,如果要导出所有表,可加上 `--regex '.*'`。 | | `code=38008` | DM 组件间的 gRPC 通信出错 | 
检查 `class`, 定位错误发生在哪些组件的交互环节,根据错误 message 判断是哪类通信错误。如果是 gRPC 建立连接出错,可检查通信服务端是否运行正常。 | @@ -163,9 +163,9 @@ aliases: ['/docs-cn/tidb-data-migration/dev/error-handling/','/docs-cn/tidb-data ### 执行 `query-status` 或查看日志时出现 `Access denied for user 'root'@'172.31.43.27' (using password: YES)` -在所有 DM 配置文件中,数据库相关的密码都推荐使用经 dmctl 加密后的密文(若数据库密码为空,则无需加密)。有关如何使用 dmctl 加密明文密码,参见[使用 dmctl 加密数据库密码](manage-source.md#加密数据库密码)。 +在所有 DM 配置文件中,数据库相关的密码都推荐使用经 dmctl 加密后的密文(若数据库密码为空,则无需加密)。有关如何使用 dmctl 加密明文密码,参见[使用 dmctl 加密数据库密码](dm-manage-source.md#加密数据库密码)。 -此外,在 DM 运行过程中,上下游数据库的用户必须具备相应的读写权限。在启动迁移任务过程中,DM 会自动进行相应权限的前置检查,详见[上游 MySQL 实例配置前置检查](precheck.md)。 +此外,在 DM 运行过程中,上下游数据库的用户必须具备相应的读写权限。在启动迁移任务过程中,DM 会自动进行相应权限的前置检查,详见[上游 MySQL 实例配置前置检查](dm-precheck.md)。 ### load 处理单元报错 `packet for query is too large. Try adjusting the 'max_allowed_packet' variable` diff --git a/zh/export-import-config.md b/zh/dm-export-import-config.md similarity index 97% rename from zh/export-import-config.md rename to zh/dm-export-import-config.md index 3c1cf41ab..b5d8d65fa 100644 --- a/zh/export-import-config.md +++ b/zh/dm-export-import-config.md @@ -1,6 +1,7 @@ --- title: 导出和导入集群的数据源和任务配置 summary: 了解 TiDB Data Migration 导出和导入集群的数据源和任务配置。 +aliases: ['/zh/tidb-data-migration/dev/export-import-config/'] --- # 导出和导入集群的数据源和任务配置 diff --git a/zh/faq.md b/zh/dm-faq.md similarity index 98% rename from zh/faq.md rename to zh/dm-faq.md index 366bafab4..5766903ff 100644 --- a/zh/faq.md +++ b/zh/dm-faq.md @@ -1,6 +1,6 @@ --- title: Data Migration 常见问题 -aliases: ['/docs-cn/tidb-data-migration/dev/faq/'] +aliases: ['/docs-cn/tidb-data-migration/dev/faq/','/zh/tidb-data-migration/dev/faq/'] --- # Data Migration 常见问题 @@ -176,7 +176,7 @@ curl -X POST -d "tidb_general_log=0" http://{TiDBIP}:10080/settings if the DDL is not needed, you can use a filter rule with \"*\" schema-pattern to ignore it.\n\t : parse statement: line 1 column 11 near \"EVENT `event_del_big_table` \r\nDISABLE\" %!!(MISSING)(EXTRA string=ALTER EVENT 
`event_del_big_table` \r\nDISABLE ``` -出现报错的原因是 TiDB parser 无法解析上游的 DDL,例如 `ALTER EVENT`,所以 `sql-skip` 不会按预期生效。可以在任务配置文件中添加 [Binlog 过滤规则](key-features.md#binlog-event-filter)进行过滤,并设置 `schema-pattern: "*"`。从 DM 2.0.1 版本开始,已预设过滤了 `EVENT` 相关语句。 +出现报错的原因是 TiDB parser 无法解析上游的 DDL,例如 `ALTER EVENT`,所以 `sql-skip` 不会按预期生效。可以在任务配置文件中添加 [Binlog 过滤规则](dm-key-features.md#binlog-event-filter)进行过滤,并设置 `schema-pattern: "*"`。从 DM 2.0.1 版本开始,已预设过滤了 `EVENT` 相关语句。 在 DM v2.0 版本之后 `sql-skip` 已经被 `handle-error` 替代,`handle-error` 可以跳过该类错误。 diff --git a/zh/dm-glossary.md b/zh/dm-glossary.md index 996c53c9b..e3a76e26e 100644 --- a/zh/dm-glossary.md +++ b/zh/dm-glossary.md @@ -20,7 +20,7 @@ MySQL/MariaDB 生成的 Binlog 文件中的数据变更信息,具体请参考 ### Binlog event filter -比 Block & allow table list 更加细粒度的过滤功能,具体可参考 [Binlog Event Filter](overview.md#binlog-event-filter)。 +比 Block & allow table list 更加细粒度的过滤功能,具体可参考 [Binlog Event Filter](dm-overview.md#binlog-event-filter)。 ### Binlog position @@ -32,7 +32,7 @@ DM-worker 内部用于读取上游 Binlog 或本地 Relay log 并迁移到下游 ### Block & allow table list -针对上游数据库实例表的黑白名单过滤功能,具体可参考 [Block & Allow Table Lists](overview.md#block--allow-lists)。该功能与 [MySQL Replication Filtering](https://dev.mysql.com/doc/refman/5.6/en/replication-rules.html) 及 [MariaDB Replication Filters](https://mariadb.com/kb/en/library/replication-filters/) 类似。 +针对上游数据库实例表的黑白名单过滤功能,具体可参考 [Block & Allow Table Lists](dm-overview.md#block--allow-lists)。该功能与 [MySQL Replication Filtering](https://dev.mysql.com/doc/refman/5.6/en/replication-rules.html) 及 [MariaDB Replication Filters](https://mariadb.com/kb/en/library/replication-filters/) 类似。 ## C @@ -122,13 +122,13 @@ DM-worker 内部用于从上游拉取 Binlog 并写入数据到 Relay log 的处 ### Subtask status -数据迁移子任务所处的状态,目前包括 `New`、`Running`、`Paused`、`Stopped` 及 `Finished` 5 种状态。有关数据迁移任务、子任务状态的更多信息可参考[任务状态](query-status.md#任务状态)。 +数据迁移子任务所处的状态,目前包括 `New`、`Running`、`Paused`、`Stopped` 及 `Finished` 5 种状态。有关数据迁移任务、子任务状态的更多信息可参考[任务状态](dm-query-status.md#任务状态)。 ## T ### Table routing -用于支持将上游 
MySQL/MariaDB 实例的某些表迁移到下游指定表的路由功能,可以用于分库分表的合并迁移,具体可参考 [Table routing](key-features.md#table-routing)。 +用于支持将上游 MySQL/MariaDB 实例的某些表迁移到下游指定表的路由功能,可以用于分库分表的合并迁移,具体可参考 [Table routing](dm-key-features.md#table-routing)。 ### Task @@ -136,4 +136,4 @@ DM-worker 内部用于从上游拉取 Binlog 并写入数据到 Relay log 的处 ### Task status -数据迁移子任务所处的状态,由 [Subtask status](#subtask-status) 整合而来,具体信息可查看[任务状态](query-status.md#任务状态)。 +数据迁移子任务所处的状态,由 [Subtask status](#subtask-status) 整合而来,具体信息可查看[任务状态](dm-query-status.md#任务状态)。 diff --git a/zh/handle-alerts.md b/zh/dm-handle-alerts.md similarity index 75% rename from zh/handle-alerts.md rename to zh/dm-handle-alerts.md index a78250a49..7d3f47db0 100644 --- a/zh/handle-alerts.md +++ b/zh/dm-handle-alerts.md @@ -1,7 +1,7 @@ --- title: 处理告警 summary: 了解 DM 中各主要告警信息的处理方法。 -aliases: ['/docs-cn/tidb-data-migration/dev/handle-alerts/'] +aliases: ['/docs-cn/tidb-data-migration/dev/handle-alerts/','/zh/tidb-data-migration/dev/handle-alerts/'] --- # 处理告警 @@ -20,7 +20,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/handle-alerts/'] ### `DM_DDL_error` -处理 shard DDL 时出现错误,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +处理 shard DDL 时出现错误,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ### `DM_pending_DDL` @@ -30,13 +30,13 @@ aliases: ['/docs-cn/tidb-data-migration/dev/handle-alerts/'] ### `DM_task_state` -当 DM-worker 内有子任务处于 `Paused` 状态超过 20 分钟时会触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 DM-worker 内有子任务处于 `Paused` 状态超过 20 分钟时会触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ## relay log 告警 ### `DM_relay_process_exits_with_error` -当 relay log 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 relay log 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ### `DM_remain_storage_of_relay_log` @@ -48,40 +48,40 @@ aliases: ['/docs-cn/tidb-data-migration/dev/handle-alerts/'] ### `DM_relay_log_data_corruption` -当 relay log 处理单元在校验从上游读取到的 binlog event 且发现 checksum 信息异常时会转为 
`Paused` 状态并立即触发告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 relay log 处理单元在校验从上游读取到的 binlog event 且发现 checksum 信息异常时会转为 `Paused` 状态并立即触发告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ### `DM_fail_to_read_binlog_from_master` -当 relay log 处理单元在尝试从上游读取 binlog event 发生错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 relay log 处理单元在尝试从上游读取 binlog event 发生错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ### `DM_fail_to_write_relay_log` -当 relay log 处理单元在尝试将 binlog event 写入 relay log 文件发生错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 relay log 处理单元在尝试将 binlog event 写入 relay log 文件发生错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ### `DM_binlog_file_gap_between_master_relay` -当 relay log 处理单元已拉取到的最新的 binlog 文件个数落后于当前上游 MySQL/MariaDB 超过 1 个(不含 1 个)且持续 10 分钟时会触发该告警,此时需要参考[性能问题及处理方法](handle-performance-issues.md)对 relay log 处理单元相关的性能问题进行排查与处理。 +当 relay log 处理单元已拉取到的最新的 binlog 文件个数落后于当前上游 MySQL/MariaDB 超过 1 个(不含 1 个)且持续 10 分钟时会触发该告警,此时需要参考[性能问题及处理方法](dm-handle-performance-issues.md)对 relay log 处理单元相关的性能问题进行排查与处理。 ## Dump/Load 告警 ### `DM_dump_process_exists_with_error` -当 Dump 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 Dump 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ### `DM_load_process_exists_with_error` -当 Load 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 Load 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ## Binlog replication 告警 ### `DM_sync_process_exists_with_error` -当 Binlog replication 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](error-handling.md#dm-故障诊断)进行处理。 +当 Binlog replication 处理单元遇到错误时,会转为 `Paused` 状态并立即触发该告警,此时需要参考 [DM 故障诊断](dm-error-handling.md#dm-故障诊断)进行处理。 ### `DM_binlog_file_gap_between_master_syncer` -当 Binlog replication 处理单元已处理到的最新的 binlog 
文件个数落后于当前上游 MySQL/MariaDB 超过 1 个(不含 1 个)且持续 10 分钟时 DM 会触发该告警,此时需要参考[性能问题及处理方法](handle-performance-issues.md)对 Binlog replication 处理单元相关的性能问题进行排查与处理。 +当 Binlog replication 处理单元已处理到的最新的 binlog 文件个数落后于当前上游 MySQL/MariaDB 超过 1 个(不含 1 个)且持续 10 分钟时 DM 会触发该告警,此时需要参考[性能问题及处理方法](dm-handle-performance-issues.md)对 Binlog replication 处理单元相关的性能问题进行排查与处理。 ### `DM_binlog_file_gap_between_relay_syncer` -当 Binlog replication 处理单元已处理到的最新的 binlog 文件个数落后于当前 relay log 处理单元超过 1 个(不含 1 个)且持续 10 分钟时 DM 会触发该告警,此时需要参考[性能问题及处理方法](handle-performance-issues.md)对 Binlog replication 处理单元相关的性能问题进行排查与处理。 +当 Binlog replication 处理单元已处理到的最新的 binlog 文件个数落后于当前 relay log 处理单元超过 1 个(不含 1 个)且持续 10 分钟时 DM 会触发该告警,此时需要参考[性能问题及处理方法](dm-handle-performance-issues.md)对 Binlog replication 处理单元相关的性能问题进行排查与处理。 diff --git a/zh/handle-performance-issues.md b/zh/dm-handle-performance-issues.md similarity index 97% rename from zh/handle-performance-issues.md rename to zh/dm-handle-performance-issues.md index 30cfb86f5..de88608a5 100644 --- a/zh/handle-performance-issues.md +++ b/zh/dm-handle-performance-issues.md @@ -1,7 +1,7 @@ --- title: 性能问题及处理方法 summary: 了解 DM 可能存在的常见性能问题及其处理方法。 -aliases: ['/docs-cn/tidb-data-migration/dev/handle-performance-issues/'] +aliases: ['/docs-cn/tidb-data-migration/dev/handle-performance-issues/','/zh/tidb-data-migration/dev/handle-performance-issues/'] --- # 性能问题及处理方法 @@ -72,7 +72,7 @@ Binlog replication 模块会根据配置选择从上游 MySQL/MariaDB 或 relay ### binlog event 转换 -Binlog replication 模块从 binlog event 数据中尝试构造 DML、解析 DDL 以及进行 [table router](key-features.md#table-routing) 转换等,主要的性能指标是 `transform binlog event duration`。 +Binlog replication 模块从 binlog event 数据中尝试构造 DML、解析 DDL 以及进行 [table router](dm-key-features.md#table-routing) 转换等,主要的性能指标是 `transform binlog event duration`。 这部分的耗时受上游写入的业务特点影响较大,如对于 `INSERT INTO` 语句,转换单个 `VALUES` 的时间和转换大量 `VALUES` 的时间差距很多,其波动范围可能从几十微秒至上百微秒,但一般不会成为系统的瓶颈。 diff --git a/zh/dm-hardware-and-software-requirements.md b/zh/dm-hardware-and-software-requirements.md index 
78f173217..b2ead8cac 100644 --- a/zh/dm-hardware-and-software-requirements.md +++ b/zh/dm-hardware-and-software-requirements.md @@ -46,4 +46,4 @@ DM 支持部署和运行在 Intel x86-64 架构的 64 位通用硬件服务器 > **注意:** > > - 在生产环境中,不建议将 DM-master 和 DM-worker 部署和运行在同一个服务器上,以防 DM-worker 对磁盘的写入干扰 DM-master 高可用组件使用磁盘。 -> - 在遇到性能问题时可参照[配置调优](tune-configuration.md)尝试修改任务配置。调优效果不明显时,可以尝试升级服务器配置。 +> - 在遇到性能问题时可参照[配置调优](dm-tune-configuration.md)尝试修改任务配置。调优效果不明显时,可以尝试升级服务器配置。 diff --git a/zh/key-features.md b/zh/dm-key-features.md similarity index 98% rename from zh/key-features.md rename to zh/dm-key-features.md index 00de57a6d..a90218a50 100644 --- a/zh/key-features.md +++ b/zh/dm-key-features.md @@ -1,7 +1,7 @@ --- title: 主要特性 summary: 了解 DM 的各主要功能特性或相关的配置选项。 -aliases: ['/docs-cn/tidb-data-migration/dev/key-features/','/docs-cn/tidb-data-migration/dev/feature-overview/'] +aliases: ['/docs-cn/tidb-data-migration/dev/key-features/','/docs-cn/tidb-data-migration/dev/feature-overview/','/zh/tidb-data-migration/dev/key-features/'] --- # 主要特性 @@ -248,7 +248,7 @@ Binlog event filter 是比迁移表黑白名单更加细粒度的过滤规则, > **注意:** > > - 同一个表匹配上多个规则,将会顺序应用这些规则,并且黑名单的优先级高于白名单,即如果同时存在规则 `Ignore` 和 `Do` 应用在某个 table 上,那么 `Ignore` 生效。 -> - 从 DM v2.0.2 开始,Binlog event filter 也可以在上游数据库配置文件中进行配置。见[上游数据库配置文件介绍](source-configuration-file.md)。 +> - 从 DM v2.0.2 开始,Binlog event filter 也可以在上游数据库配置文件中进行配置。见[上游数据库配置文件介绍](dm-source-configuration-file.md)。 ### 参数配置 @@ -396,7 +396,7 @@ filters: ### 使用限制 - DM 仅针对 gh-ost 与 pt-osc 做了特殊支持。 -- 在开启 `online-ddl` 时,增量复制对应的 checkpoint 应不处于 online DDL 执行过程中。如上游某次 online DDL 操作开始于 binlog `position-A`、结束于 `position-B`,则增量复制的起始点应早于 `position-A` 或晚于 `position-B`,否则可能出现迁移出错,具体可参考 [FAQ](faq.md#设置了-online-ddl-scheme-gh-ost-gh-ost-表相关的-ddl-报错该如何处理)。 +- 在开启 `online-ddl` 时,增量复制对应的 checkpoint 应不处于 online DDL 执行过程中。如上游某次 online DDL 操作开始于 binlog `position-A`、结束于 `position-B`,则增量复制的起始点应早于 `position-A` 或晚于 `position-B`,否则可能出现迁移出错,具体可参考 [FAQ](dm-faq.md#设置了-online-ddl-scheme-gh-ost-gh-ost-表相关的-ddl-报错该如何处理)。 ### 
参数配置 diff --git a/zh/manage-schema.md b/zh/dm-manage-schema.md similarity index 99% rename from zh/manage-schema.md rename to zh/dm-manage-schema.md index 600af5e44..f083feca3 100644 --- a/zh/manage-schema.md +++ b/zh/dm-manage-schema.md @@ -1,6 +1,7 @@ --- title: 管理迁移表的表结构 summary: 了解如何管理待迁移表在 DM 内部的表结构。 +aliases: ['/zh/tidb-data-migration/dev/manage-schema/'] --- # 管理迁移表的表结构 diff --git a/zh/manage-source.md b/zh/dm-manage-source.md similarity index 95% rename from zh/manage-source.md rename to zh/dm-manage-source.md index 49015ebca..f51d49adc 100644 --- a/zh/manage-source.md +++ b/zh/dm-manage-source.md @@ -1,7 +1,7 @@ --- title: 管理上游数据源 summary: 了解如何管理上游 MySQL 实例。 -aliases: ['/docs-cn/tidb-data-migration/dev/manage-source/'] +aliases: ['/docs-cn/tidb-data-migration/dev/manage-source/','/zh/tidb-data-migration/dev/manage-source/'] --- # 管理上游数据源配置 @@ -72,7 +72,7 @@ Global Flags: operate-source create ./source.yaml ``` -其中 `source.yaml` 的配置参考[上游数据库配置文件介绍](source-configuration-file.md)。 +其中 `source.yaml` 的配置参考[上游数据库配置文件介绍](dm-source-configuration-file.md)。 结果如下: @@ -174,7 +174,7 @@ Global Flags: -s, --source strings MySQL Source ID. 
``` -在改变绑定关系前,DM 会检查待解绑的 worker 是否正在运行同步任务,如果正在运行则需要先[暂停任务](pause-task.md),并在改变绑定关系后[恢复任务](resume-task.md)。 +在改变绑定关系前,DM 会检查待解绑的 worker 是否正在运行同步任务,如果正在运行则需要先[暂停任务](dm-pause-task.md),并在改变绑定关系后[恢复任务](dm-resume-task.md)。 ### 命令用法示例 diff --git a/zh/open-api.md b/zh/dm-open-api.md similarity index 99% rename from zh/open-api.md rename to zh/dm-open-api.md index 83a43a5dd..53c2f6a6a 100644 --- a/zh/open-api.md +++ b/zh/dm-open-api.md @@ -1,6 +1,7 @@ --- title: 使用 OpenAPI 运维集群 summary: 了解如何使用 OpenAPI 接口来管理集群状态和数据同步。 +aliases: ['/zh/tidb-data-migration/dev/open-api/'] --- # 使用 OpenAPI 运维集群 diff --git a/zh/overview.md b/zh/dm-overview.md similarity index 83% rename from zh/overview.md rename to zh/dm-overview.md index 08b451d99..f2ed84905 100644 --- a/zh/overview.md +++ b/zh/dm-overview.md @@ -1,6 +1,6 @@ --- title: Data Migration 简介 -aliases: ['/docs-cn/tidb-data-migration/dev/overview/','/docs-cn/tools/dm/overview/'] +aliases: ['/docs-cn/tidb-data-migration/dev/overview/','/docs-cn/tools/dm/overview/','/zh/tidb-data-migration/dev/overview/'] --- # Data Migration 简介 @@ -30,15 +30,15 @@ aliases: ['/docs-cn/tidb-data-migration/dev/overview/','/docs-cn/tools/dm/overvi ### Block & allow lists -[Block & Allow Lists](key-features.md#block--allow-table-lists) 的过滤规则类似于 MySQL `replication-rules-db`/`replication-rules-table`,用于过滤或指定只迁移某些数据库或某些表的所有操作。 +[Block & Allow Lists](dm-key-features.md#block--allow-table-lists) 的过滤规则类似于 MySQL `replication-rules-db`/`replication-rules-table`,用于过滤或指定只迁移某些数据库或某些表的所有操作。 ### Binlog event filter -[Binlog Event Filter](key-features.md#binlog-event-filter) 用于过滤源数据库中特定表的特定类型操作,比如过滤掉表 `test`.`sbtest` 的 `INSERT` 操作或者过滤掉库 `test` 下所有表的 `TRUNCATE TABLE` 操作。 +[Binlog Event Filter](dm-key-features.md#binlog-event-filter) 用于过滤源数据库中特定表的特定类型操作,比如过滤掉表 `test`.`sbtest` 的 `INSERT` 操作或者过滤掉库 `test` 下所有表的 `TRUNCATE TABLE` 操作。 ### Table routing -[Table Routing](key-features.md#table-routing) 是将源数据库的表迁移到下游指定表的路由功能,比如将源数据表 `test`.`sbtest1` 的表结构和数据迁移到 TiDB 的表 
`test`.`sbtest2`。它也是分库分表合并迁移所需的一个核心功能。 +[Table Routing](dm-key-features.md#table-routing) 是将源数据库的表迁移到下游指定表的路由功能,比如将源数据表 `test`.`sbtest1` 的表结构和数据迁移到 TiDB 的表 `test`.`sbtest2`。它也是分库分表合并迁移所需的一个核心功能。 ## 高级功能 @@ -48,7 +48,7 @@ DM 支持对源数据的分库分表进行合并迁移,但有一些使用限 ### 对第三方 Online Schema Change 工具变更过程的同步优化 -在 MySQL 生态中,gh-ost 与 pt-osc 等工具被广泛使用,DM 对其变更过程进行了特殊的优化,以避免对不必要的中间数据进行迁移。详细信息可参考 [online-ddl](key-features.md#online-ddl-工具支持)。 +在 MySQL 生态中,gh-ost 与 pt-osc 等工具被广泛使用,DM 对其变更过程进行了特殊的优化,以避免对不必要的中间数据进行迁移。详细信息可参考 [online-ddl](dm-key-features.md#online-ddl-工具支持)。 ### 使用 SQL 表达式过滤某些行变更 @@ -75,7 +75,7 @@ DM 支持对源数据的分库分表进行合并迁移,但有一些使用限 - 目前,TiDB 部分兼容 MySQL 支持的 DDL 语句。因为 DM 使用 TiDB parser 来解析处理 DDL 语句,所以目前仅支持 TiDB parser 支持的 DDL 语法。详见 [TiDB DDL 语法支持](https://pingcap.com/docs-cn/dev/reference/mysql-compatibility/#ddl)。 - - DM 遇到不兼容的 DDL 语句时会报错。要解决此报错,需要使用 dmctl 手动处理,要么跳过该 DDL 语句,要么用指定的 DDL 语句来替换它。详见[如何处理不兼容的 DDL 语句](faq.md#如何处理不兼容的-ddl-语句)。 + - DM 遇到不兼容的 DDL 语句时会报错。要解决此报错,需要使用 dmctl 手动处理,要么跳过该 DDL 语句,要么用指定的 DDL 语句来替换它。详见[如何处理不兼容的 DDL 语句](dm-faq.md#如何处理不兼容的-ddl-语句)。 + 分库分表数据冲突合并 diff --git a/zh/pause-task.md b/zh/dm-pause-task.md similarity index 95% rename from zh/pause-task.md rename to zh/dm-pause-task.md index 8db45435e..92207ec45 100644 --- a/zh/pause-task.md +++ b/zh/dm-pause-task.md @@ -1,7 +1,7 @@ --- title: 暂停数据迁移任务 summary: 了解 TiDB Data Migration 如何暂停数据迁移任务。 -aliases: ['/docs-cn/tidb-data-migration/dev/pause-task/'] +aliases: ['/docs-cn/tidb-data-migration/dev/pause-task/','/zh/tidb-data-migration/dev/pause-task/'] --- # 暂停数据迁移任务 diff --git a/zh/performance-test.md b/zh/dm-performance-test.md similarity index 93% rename from zh/performance-test.md rename to zh/dm-performance-test.md index fb5d98782..7fa1142f1 100644 --- a/zh/performance-test.md +++ b/zh/dm-performance-test.md @@ -1,7 +1,7 @@ --- title: DM 集群性能测试 summary: 了解如何测试 DM 集群的性能。 -aliases: ['/docs-cn/tidb-data-migration/dev/performance-test/'] +aliases: 
['/docs-cn/tidb-data-migration/dev/performance-test/','/zh/tidb-data-migration/dev/performance-test/'] --- # DM 集群性能测试 @@ -51,7 +51,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330 #### 创建数据迁移任务 -1. 创建上游 MySQL 的 source,将 `source-id` 配置为 `source-1`。详细操作方法参考:[加载数据源配置](manage-source.md#数据源操作)。 +1. 创建上游 MySQL 的 source,将 `source-id` 配置为 `source-1`。详细操作方法参考:[加载数据源配置](dm-manage-source.md#数据源操作)。 2. 创建 `full` 模式的 DM 迁移任务,示例任务配置文件如下: @@ -85,7 +85,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330 threads: 32 ``` -创建数据迁移任务的详细操作参考[创建数据迁移任务](create-task.md#创建数据迁移任务)。 +创建数据迁移任务的详细操作参考[创建数据迁移任务](dm-create-task.md#创建数据迁移任务)。 > **注意:** > @@ -110,7 +110,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330 #### 创建数据迁移任务 -1. 创建上游 MySQL 的 source, source-id 配置为 `source-1`(如果在全量迁移性能测试中已经创建,则不需要再次创建)。详细操作方法参考:[加载数据源配置](manage-source.md#数据源操作)。 +1. 创建上游 MySQL 的 source, source-id 配置为 `source-1`(如果在全量迁移性能测试中已经创建,则不需要再次创建)。详细操作方法参考:[加载数据源配置](dm-manage-source.md#数据源操作)。 2. 
创建 `all` 模式的 DM 迁移任务,示例任务配置文件如下: @@ -143,7 +143,7 @@ sysbench --test=oltp_insert --tables=4 --mysql-host=172.16.4.40 --mysql-port=330 batch: 100 ``` -创建数据迁移任务的详细操作参考[创建数据迁移任务](create-task.md#创建数据迁移任务)。 +创建数据迁移任务的详细操作参考[创建数据迁移任务](dm-create-task.md#创建数据迁移任务)。 > **注意:** > diff --git a/zh/precheck.md b/zh/dm-precheck.md similarity index 97% rename from zh/precheck.md rename to zh/dm-precheck.md index 3c503f072..d81ee8415 100644 --- a/zh/precheck.md +++ b/zh/dm-precheck.md @@ -1,7 +1,7 @@ --- title: 上游 MySQL 实例配置前置检查 summary: 了解上游 MySQL 实例配置前置检查。 -aliases: ['/docs-cn/tidb-data-migration/dev/precheck/'] +aliases: ['/docs-cn/tidb-data-migration/dev/precheck/','/zh/tidb-data-migration/dev/precheck/'] --- # 上游 MySQL 实例配置前置检查 diff --git a/zh/query-status.md b/zh/dm-query-status.md similarity index 99% rename from zh/query-status.md rename to zh/dm-query-status.md index 90e22ebea..6169f5a53 100644 --- a/zh/query-status.md +++ b/zh/dm-query-status.md @@ -1,7 +1,7 @@ --- title: TiDB Data Migration 查询状态 summary: 深入了解 TiDB Data Migration 如何查询数据迁移任务状态 -aliases: ['/docs-cn/tidb-data-migration/dev/query-status/','/docs-cn/tidb-data-migration/dev/query-error/','/tidb-data-migration/dev/query-error/'] +aliases: ['/docs-cn/tidb-data-migration/dev/query-status/','/docs-cn/tidb-data-migration/dev/query-error/','/tidb-data-migration/dev/query-error/','/zh/tidb-data-migration/dev/query-status/'] --- # TiDB Data Migration 查询状态 diff --git a/zh/resume-task.md b/zh/dm-resume-task.md similarity index 92% rename from zh/resume-task.md rename to zh/dm-resume-task.md index b7c0d194b..0e9beeeaa 100644 --- a/zh/resume-task.md +++ b/zh/dm-resume-task.md @@ -1,7 +1,7 @@ --- title: 恢复数据迁移任务 summary: 了解 TiDB Data Migration 如何恢复数据迁移任务。 -aliases: ['/docs-cn/tidb-data-migration/dev/resume-task/'] +aliases: ['/docs-cn/tidb-data-migration/dev/resume-task/','/zh/tidb-data-migration/dev/resume-task/'] --- # 恢复数据迁移任务 diff --git a/zh/source-configuration-file.md b/zh/dm-source-configuration-file.md similarity 
index 97% rename from zh/source-configuration-file.md rename to zh/dm-source-configuration-file.md index afcdfa292..a536a1126 100644 --- a/zh/source-configuration-file.md +++ b/zh/dm-source-configuration-file.md @@ -1,6 +1,6 @@ --- title: 上游数据库配置文件介绍 -aliases: ['/docs-cn/tidb-data-migration/dev/source-configuration-file/'] +aliases: ['/docs-cn/tidb-data-migration/dev/source-configuration-file/','/zh/tidb-data-migration/dev/source-configuration-file/'] --- # 上游数据库配置文件介绍 @@ -107,4 +107,4 @@ DM 会定期检查当前任务状态以及错误信息,判断恢复任务能 | 配置项 | 说明 | | :------------ | :--------------------------------------- | | `case-sensitive` | Binlog event filter 标识符是否大小写敏感。默认值:false。| -| `filters` | 配置 Binlog event filter,含义见 [Binlog event filter 参数解释](key-features.md#参数解释-2)。 | +| `filters` | 配置 Binlog event filter,含义见 [Binlog event filter 参数解释](dm-key-features.md#参数解释-2)。 | diff --git a/zh/stop-task.md b/zh/dm-stop-task.md similarity index 89% rename from zh/stop-task.md rename to zh/dm-stop-task.md index d45f2b406..3dc9046c8 100644 --- a/zh/stop-task.md +++ b/zh/dm-stop-task.md @@ -1,12 +1,12 @@ --- title: 停止数据迁移任务 summary: 了解 TiDB Data Migration 如何停止数据迁移任务。 -aliases: ['/docs-cn/tidb-data-migration/dev/stop-task/'] +aliases: ['/docs-cn/tidb-data-migration/dev/stop-task/','/zh/tidb-data-migration/dev/stop-task/'] --- # 停止数据迁移任务 -`stop-task` 命令用于停止数据迁移任务。有关 `stop-task` 与 `pause-task` 的区别,请参考[暂停数据迁移任务](pause-task.md)中的相关说明。 +`stop-task` 命令用于停止数据迁移任务。有关 `stop-task` 与 `pause-task` 的区别,请参考[暂停数据迁移任务](dm-pause-task.md)中的相关说明。 {{< copyable "" >}} diff --git a/zh/task-configuration-guide.md b/zh/dm-task-configuration-guide.md similarity index 95% rename from zh/task-configuration-guide.md rename to zh/dm-task-configuration-guide.md index 6037d4634..deef81542 100644 --- a/zh/task-configuration-guide.md +++ b/zh/dm-task-configuration-guide.md @@ -1,5 +1,6 @@ --- title: DM 数据迁移任务配置向导 +aliases: ['/zh/tidb-data-migration/dev/task-configuration-guide/'] --- # 数据迁移任务配置向导 @@ -10,9 +11,9 @@ title: DM 数据迁移任务配置向导 
配置需要迁移的数据源之前,首先应该确认已经在 DM 创建相应数据源: -- 查看数据源可以参考 [查看数据源配置](manage-source.md#查看数据源配置) +- 查看数据源可以参考 [查看数据源配置](dm-manage-source.md#查看数据源配置) - 创建数据源可以参考 [在 DM 创建数据源](migrate-data-using-dm.md#第-3-步创建数据源) -- 数据源配置可以参考 [数据源配置文件介绍](source-configuration-file.md) +- 数据源配置可以参考 [数据源配置文件介绍](dm-source-configuration-file.md) 仿照下面的 `mysql-instances:` 示例定义数据迁移任务需要同步的单个或者多个数据源。 @@ -55,7 +56,7 @@ target-database: # 目标 TiDB 配置 如果不需要过滤或迁移特定表,可以跳过该项配置。 -配置从数据源迁移表的黑白名单,则需要添加两个定义,详细配置规则参考 [Block & Allow Lists](key-features.md#block--allow-table-lists): +配置从数据源迁移表的黑白名单,则需要添加两个定义,详细配置规则参考 [Block & Allow Lists](dm-key-features.md#block--allow-table-lists): 1. 定义全局的黑白名单规则 @@ -89,7 +90,7 @@ target-database: # 目标 TiDB 配置 如果不需要过滤特定库或者特定表的特定操作,可以跳过该项配置。 -配置过滤特定操作,则需要添加两个定义,详细配置规则参考 [Binlog Event Filter](key-features.md#binlog-event-filter): +配置过滤特定操作,则需要添加两个定义,详细配置规则参考 [Binlog Event Filter](dm-key-features.md#binlog-event-filter): 1. 定义全局的数据源操作过滤规则 @@ -122,7 +123,7 @@ target-database: # 目标 TiDB 配置 如果不需要将数据源表路由到不同名的目标 TiDB 表,可以跳过该项配置。分库分表合并迁移的场景必须配置该规则。 -配置数据源表迁移到目标 TiDB 表的路由规则,则需要添加两个定义,详细配置规则参考 [Table Routing](key-features.md#table-routing): +配置数据源表迁移到目标 TiDB 表的路由规则,则需要添加两个定义,详细配置规则参考 [Table Routing](dm-key-features.md#table-routing): 1. 
定义全局的路由规则 @@ -167,7 +168,7 @@ shard-mode: "pessimistic" # 默认值为 "" 即无需协调。如果为分 ## 其他配置 -下面是本数据迁移任务配置向导的完整示例。完整的任务配置参见 [DM 任务完整配置文件介绍](task-configuration-file-full.md),其他各配置项的功能和配置也可参阅[数据迁移功能](key-features.md)。 +下面是本数据迁移任务配置向导的完整示例。完整的任务配置参见 [DM 任务完整配置文件介绍](task-configuration-file-full.md),其他各配置项的功能和配置也可参阅[数据迁移功能](dm-key-features.md)。 ```yaml --- diff --git a/zh/tune-configuration.md b/zh/dm-tune-configuration.md similarity index 98% rename from zh/tune-configuration.md rename to zh/dm-tune-configuration.md index 943e82f84..90098a248 100644 --- a/zh/tune-configuration.md +++ b/zh/dm-tune-configuration.md @@ -1,7 +1,7 @@ --- title: DM 配置优化 summary: 介绍如何通过优化配置来提高数据迁移性能。 -aliases: ['/docs-cn/tidb-data-migration/dev/tune-configuration/'] +aliases: ['/docs-cn/tidb-data-migration/dev/tune-configuration/','/zh/tidb-data-migration/dev/tune-configuration/'] --- # DM 配置优化 diff --git a/zh/feature-expression-filter.md b/zh/feature-expression-filter.md index 4d0d4ad58..35d2f4ce3 100644 --- a/zh/feature-expression-filter.md +++ b/zh/feature-expression-filter.md @@ -6,7 +6,7 @@ title: 使用 SQL 表达式过滤某些行变更 ## 概述 -在数据迁移的过程中,DM 提供了 [Binlog Event Filter](key-features.md#binlog-event-filter) 功能过滤某些类型的 binlog event,例如不向下游迁移 `DELETE` 事件以达到归档、审计等目的。但是 Binlog Event Filter 无法以更细粒度判断某一行的 `DELETE` 事件是否要被过滤。 +在数据迁移的过程中,DM 提供了 [Binlog Event Filter](dm-key-features.md#binlog-event-filter) 功能过滤某些类型的 binlog event,例如不向下游迁移 `DELETE` 事件以达到归档、审计等目的。但是 Binlog Event Filter 无法以更细粒度判断某一行的 `DELETE` 事件是否要被过滤。 为了解决上述问题,DM 支持使用 SQL 表达式过滤某些行变更。DM 支持的 `ROW` 格式的 binlog 中,binlog event 带有所有列的值。用户可以基于这些值配置 SQL 表达式。如果该表达式对于某条行变更的计算结果是 `TRUE`,DM 就不会向下游迁移该条行变更。 @@ -16,7 +16,7 @@ title: 使用 SQL 表达式过滤某些行变更 ## 配置示例 -与 [Binlog Event Filter](key-features.md#binlog-event-filter) 类似,表达式过滤需要在数据迁移任务配置文件里配置,详见下面配置样例。完整的配置及意义,可以参考 [DM 完整配置文件示例](task-configuration-file-full.md#完整配置文件示例): +与 [Binlog Event Filter](dm-key-features.md#binlog-event-filter) 类似,表达式过滤需要在数据迁移任务配置文件里配置,详见下面配置样例。完整的配置及意义,可以参考 [DM 
完整配置文件示例](task-configuration-file-full.md#完整配置文件示例): ```yml name: test diff --git a/zh/feature-shard-merge-pessimistic.md b/zh/feature-shard-merge-pessimistic.md index 192642579..70a7d3376 100644 --- a/zh/feature-shard-merge-pessimistic.md +++ b/zh/feature-shard-merge-pessimistic.md @@ -39,7 +39,7 @@ DM 在悲观模式下进行分表 DDL 的迁移有以下几点使用限制: - 增量复制任务需要确认开始迁移的 binlog position 上各分表的表结构必须一致,才能确保来自不同分表的 DML 语句能够迁移到表结构确定的下游,并且后续各分表的 DDL 语句能够正确匹配与迁移。 -- 如果需要变更 [table routing 规则](key-features.md#table-routing),必须先等所有 sharding DDL 语句迁移完成。 +- 如果需要变更 [table routing 规则](dm-key-features.md#table-routing),必须先等所有 sharding DDL 语句迁移完成。 - 在 sharding DDL 语句迁移过程中,使用 dmctl 尝试变更 router-rules 会报错。 @@ -109,7 +109,7 @@ DM 在悲观模式下进行分表 DDL 的迁移有以下几点使用限制: - 如果 sharding group 的所有成员都收到了某一条相同的 DDL 语句,则表明上游分表在该 DDL 执行前的 DML 语句都已经迁移完成,此时可以执行该 DDL 语句,并继续后续的 DML 迁移。 -- 上游所有分表的 DDL 在经过 [table router](key-features.md#table-routing) 转换后需要保持一致,因此仅需 DDL 锁的 owner 执行一次该 DDL 语句即可,其他 DM-worker 可直接忽略对应的 DDL 语句。 +- 上游所有分表的 DDL 在经过 [table router](dm-key-features.md#table-routing) 转换后需要保持一致,因此仅需 DDL 锁的 owner 执行一次该 DDL 语句即可,其他 DM-worker 可直接忽略对应的 DDL 语句。 在上面的示例中,每个 DM-worker 对应的上游 MySQL 实例中只有一个待合并的分表。但在实际场景下,一个 MySQL 实例可能有多个分库内的多个分表需要进行合并,这种情况下,sharding DDL 的协调迁移过程将更加复杂。 diff --git a/zh/handle-failed-ddl-statements.md b/zh/handle-failed-ddl-statements.md index f7597a643..beac5c2f3 100644 --- a/zh/handle-failed-ddl-statements.md +++ b/zh/handle-failed-ddl-statements.md @@ -29,7 +29,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/skip-or-replace-abnormal-sql-stateme ### query-status -`query-status` 命令用于查询当前 MySQL 实例内子任务及 relay 单元等的状态和错误信息,详见[查询状态](query-status.md)。 +`query-status` 命令用于查询当前 MySQL 实例内子任务及 relay 单元等的状态和错误信息,详见[查询状态](dm-query-status.md)。 ### handle-error diff --git a/zh/maintain-dm-using-tiup.md b/zh/maintain-dm-using-tiup.md index 001adeacd..2eaec7ccc 100644 --- a/zh/maintain-dm-using-tiup.md +++ b/zh/maintain-dm-using-tiup.md @@ -181,7 +181,7 @@ tiup dm scale-in prod-cluster -N 172.16.5.140:8262 > **注意:** > -> 
从 v2.0.5 版本开始,dmctl 支持[导出和导入集群的数据源和任务配置](export-import-config.md)。 +> 从 v2.0.5 版本开始,dmctl 支持[导出和导入集群的数据源和任务配置](dm-export-import-config.md)。 > > 升级前,可使用 `config export` 命令导出集群的配置文件,升级后如需降级回退到旧版本,可重建旧集群后,使用 `config import` 导入之前的配置。 > diff --git a/zh/manually-upgrade-dm-1.0-to-2.0.md b/zh/manually-upgrade-dm-1.0-to-2.0.md index 3184d0a5a..2e490c52b 100644 --- a/zh/manually-upgrade-dm-1.0-to-2.0.md +++ b/zh/manually-upgrade-dm-1.0-to-2.0.md @@ -24,7 +24,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/manually-upgrade-dm-1.0-to-2.0/'] ### 上游数据库配置文件 -在 v2.0+ 中将[上游数据库 source 相关的配置](source-configuration-file.md)从 DM-worker 的进程配置中独立了出来,因此需要根据 [v1.0.x 的 DM-worker 配置](https://docs.pingcap.com/zh/tidb-data-migration/stable/dm-worker-configuration-file)拆分得到 source 配置。 +在 v2.0+ 中将[上游数据库 source 相关的配置](dm-source-configuration-file.md)从 DM-worker 的进程配置中独立了出来,因此需要根据 [v1.0.x 的 DM-worker 配置](https://docs.pingcap.com/zh/tidb-data-migration/stable/dm-worker-configuration-file)拆分得到 source 配置。 > **注意:** > @@ -99,7 +99,7 @@ from: ### 数据迁移任务配置文件 -对于[数据迁移任务配置向导](task-configuration-guide.md),v2.0+ 基本与 v1.0.x 保持兼容,可直接复制 v1.0.x 的配置。 +对于[数据迁移任务配置向导](dm-task-configuration-guide.md),v2.0+ 基本与 v1.0.x 保持兼容,可直接复制 v1.0.x 的配置。 ## 第 2 步:部署 v2.0+ 集群 @@ -117,7 +117,7 @@ from: ## 第 4 步:升级数据迁移任务 -1. 使用 [`operate-source`](manage-source.md#数据源操作) 命令将 [准备 v2.0+ 的配置文件](#第-1-步准备-v20-的配置文件) 中得到的上游数据库 source 配置加载到 v2.0+ 集群中。 +1. 使用 [`operate-source`](dm-manage-source.md#数据源操作) 命令将 [准备 v2.0+ 的配置文件](#第-1-步准备-v20-的配置文件) 中得到的上游数据库 source 配置加载到 v2.0+ 集群中。 2. 在下游 TiDB 中,从 v1.0.x 的数据复制任务对应的增量 checkpoint 表中获取对应的全局 checkpoint 信息。 @@ -159,8 +159,8 @@ from: > > 如在 source 配置中启动了 `enable-gtid`,当前需要通过解析 binlog 或 relay log 文件获取 binlog position 对应的 GTID sets 并在 `meta` 中设置为 `binlog-gtid`。 -4. 使用 [`start-task`](create-task.md) 命令以 v2.0+ 的数据迁移任务配置文件启动升级后的数据迁移任务。 +4. 使用 [`start-task`](dm-create-task.md) 命令以 v2.0+ 的数据迁移任务配置文件启动升级后的数据迁移任务。 -5. 使用 [`query-status`](query-status.md) 命令确认数据迁移任务是否运行正常。 +5. 
使用 [`query-status`](dm-query-status.md) 命令确认数据迁移任务是否运行正常。 如果数据迁移任务运行正常,则表明 DM 升级到 v2.0+ 的操作成功。 diff --git a/zh/migrate-data-using-dm.md b/zh/migrate-data-using-dm.md index aecc7798d..eabfac5b4 100644 --- a/zh/migrate-data-using-dm.md +++ b/zh/migrate-data-using-dm.md @@ -13,7 +13,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/replicate-data-using-dm/','/zh/tidb- > **注意:** > -> - 在 DM 所有的配置文件中,对于数据库密码推荐使用 dmctl 加密后的密文。如果数据库密码为空,则不需要加密。关于如何使用 dmctl 加密明文密码,参考[使用 dmctl 加密数据库密码](manage-source.md#加密数据库密码)。 +> - 在 DM 所有的配置文件中,对于数据库密码推荐使用 dmctl 加密后的密文。如果数据库密码为空,则不需要加密。关于如何使用 dmctl 加密明文密码,参考[使用 dmctl 加密数据库密码](dm-manage-source.md#加密数据库密码)。 > - 上下游数据库用户必须拥有相应的读写权限。 ## 第 2 步:检查集群信息 @@ -36,7 +36,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/replicate-data-using-dm/','/zh/tidb- | 上游 MySQL-2 | 172.16.10.82 | 3306 | root | VjX8cEeTX+qcvZ3bPaO4h0C80pe/1aU= | | 下游 TiDB | 172.16.10.83 | 4000 | root | | -上游 MySQL 数据库实例用户所需权限参见[上游 MySQL 实例配置前置检查](precheck.md)介绍。 +上游 MySQL 数据库实例用户所需权限参见[上游 MySQL 实例配置前置检查](dm-precheck.md)介绍。 ## 第 3 步:创建数据源 @@ -115,7 +115,7 @@ mydumpers: ## 第 5 步:启动任务 -为了提前发现数据迁移任务的一些配置错误,DM 中增加了[前置检查](precheck.md)功能: +为了提前发现数据迁移任务的一些配置错误,DM 中增加了[前置检查](dm-precheck.md)功能: - 启动数据迁移任务时,DM 自动检查相应的权限和配置。 - 也可使用 `check-task` 命令手动前置检查上游的 MySQL 实例配置是否符合 DM 的配置要求。 diff --git a/zh/migrate-from-mysql-aurora.md b/zh/migrate-from-mysql-aurora.md index b6023f052..6b2067bb1 100644 --- a/zh/migrate-from-mysql-aurora.md +++ b/zh/migrate-from-mysql-aurora.md @@ -68,7 +68,7 @@ DM 在增量复制阶段依赖 `ROW` 格式的 binlog,参见[为 Aurora 实例 > **注意:** > > + 基于 GTID 进行数据迁移需要 MySQL 5.7 (Aurora 2.04) 或更高版本。 -> + 除上述 Aurora 特有配置以外,上游数据库需满足迁移 MySQL 的其他要求,例如表结构、字符集、权限等,参见[上游 MySQL 实例检查内容](precheck.md#检查内容)。 +> + 除上述 Aurora 特有配置以外,上游数据库需满足迁移 MySQL 的其他要求,例如表结构、字符集、权限等,参见[上游 MySQL 实例检查内容](dm-precheck.md#检查内容)。 ## 第 2 步:部署 DM 集群 @@ -122,7 +122,7 @@ tiup dmctl --master-addr 127.0.0.1:8261 list-member > **注意:** > -> DM 所使用的配置文件支持明文或密文数据库密码,推荐使用密文数据库密码确保安全。如何获得密文数据库密码,参见[使用 dmctl 加密数据库密码](manage-source.md#加密数据库密码)。 +> 
DM 所使用的配置文件支持明文或密文数据库密码,推荐使用密文数据库密码确保安全。如何获得密文数据库密码,参见[使用 dmctl 加密数据库密码](dm-manage-source.md#加密数据库密码)。 根据示例信息保存如下的数据源配置文件,其中 `source-id` 的值将在第 4 步配置任务时被引用。 diff --git a/zh/quick-create-migration-task.md b/zh/quick-create-migration-task.md index a67e8db50..035abba76 100644 --- a/zh/quick-create-migration-task.md +++ b/zh/quick-create-migration-task.md @@ -17,7 +17,7 @@ summary: 了解在不同业务需求场景下如何配置数据迁移任务。 除了业务需求场景导向的创建数据迁移任务教程之外: - 完整的数据迁移任务配置示例,请参考 [DM 任务完整配置文件介绍](task-configuration-file-full.md) -- 数据迁移任务的配置向导,请参考 [数据迁移任务配置向导](task-configuration-guide.md) +- 数据迁移任务的配置向导,请参考 [数据迁移任务配置向导](dm-task-configuration-guide.md) ## 多数据源汇总迁移到 TiDB diff --git a/zh/quick-start-create-source.md b/zh/quick-start-create-source.md index c553f53dd..0625ad006 100644 --- a/zh/quick-start-create-source.md +++ b/zh/quick-start-create-source.md @@ -11,7 +11,7 @@ summary: 了解如何为 DM 创建数据源。 本文档介绍如何为 TiDB Data Migration (DM) 的数据迁移任务创建数据源。 -数据源包含了访问迁移任务上游所需的信息。数据迁移任务需要引用对应的数据源来获取访问配置信息。因此,在创建数据迁移任务之前,需要先创建任务的数据源。详细的数据源管理命令请参考[管理上游数据源](manage-source.md)。 +数据源包含了访问迁移任务上游所需的信息。数据迁移任务需要引用对应的数据源来获取访问配置信息。因此,在创建数据迁移任务之前,需要先创建任务的数据源。详细的数据源管理命令请参考[管理上游数据源](dm-manage-source.md)。 ## 第一步:配置数据源 @@ -57,7 +57,7 @@ summary: 了解如何为 DM 创建数据源。 tiup dmctl --master-addr operate-source create ./source-mysql-01.yaml ``` -数据源配置文件的其他配置参考[数据源配置文件介绍](source-configuration-file.md)。 +数据源配置文件的其他配置参考[数据源配置文件介绍](dm-source-configuration-file.md)。 命令返回结果如下: diff --git a/zh/relay-log.md b/zh/relay-log.md index 37e459a7f..59aa4fa43 100644 --- a/zh/relay-log.md +++ b/zh/relay-log.md @@ -8,7 +8,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/relay-log/'] DM (Data Migration) 工具的 relay log 由若干组有编号的文件和一个索引文件组成。这些有编号的文件包含了描述数据库更改的事件。索引文件包含所有使用过的 relay log 的文件名。 -在启用 relay log 功能后,DM-worker 会自动将上游 binlog 迁移到本地配置目录(若使用 TiUP 部署 DM,则迁移目录默认为 ` / `)。本地配置目录 `` 的默认值是 `relay-dir`,可在[上游数据库配置文件](source-configuration-file.md)中进行修改)。DM-worker 在运行过程中,会将上游 binlog 实时迁移到本地文件。DM-worker 的 sync 处理单元会实时读取本地 relay log 的 binlog 事件,将这些事件转换为 SQL 语句,再将 SQL 
语句迁移到下游数据库。 +在启用 relay log 功能后,DM-worker 会自动将上游 binlog 迁移到本地配置目录(若使用 TiUP 部署 DM,则迁移目录默认为 ` / `)。本地配置目录 `` 的默认值是 `relay-dir`,可在[上游数据库配置文件](dm-source-configuration-file.md)中进行修改)。DM-worker 在运行过程中,会将上游 binlog 实时迁移到本地文件。DM-worker 的 sync 处理单元会实时读取本地 relay log 的 binlog 事件,将这些事件转换为 SQL 语句,再将 SQL 语句迁移到下游数据库。 > **注意:** > @@ -99,7 +99,7 @@ Relay log 迁移的起始位置由如下规则决定: > **注意:** > -> 自 v2.0.2 起,上游数据源配置中的 `enable-relay` 项已经失效。在[加载数据源配置](manage-source.md#数据源操作)时,如果发现配置中的 `enable-relay` 项为 `true`,DM 会给出如下信息提示: +> 自 v2.0.2 起,上游数据源配置中的 `enable-relay` 项已经失效。在[加载数据源配置](dm-manage-source.md#数据源操作)时,如果发现配置中的 `enable-relay` 项为 `true`,DM 会给出如下信息提示: > > ``` > Please use `start-relay` to specify which workers should pull relay log of relay-enabled sources. @@ -141,7 +141,7 @@ Relay log 迁移的起始位置由如下规则决定: 在 v2.0.2 之前的版本(不含 v2.0.2),DM-worker 在绑定上游数据源时,会检查上游数据源配置中的 `enable-relay` 项。如果 `enable-relay` 为 `true`,则为该数据源启用 relay log 功能。 -具体配置方式参见[上游数据源配置文件介绍](source-configuration-file.md) +具体配置方式参见[上游数据源配置文件介绍](dm-source-configuration-file.md) diff --git a/zh/shard-merge-best-practices.md b/zh/shard-merge-best-practices.md index 030e4595e..03dbfaffa 100644 --- a/zh/shard-merge-best-practices.md +++ b/zh/shard-merge-best-practices.md @@ -117,7 +117,7 @@ CREATE TABLE `tbl_multi_pk` ( ## 上游 RDS 封装分库分表的处理 -上游数据源为 RDS 且使用了其分库分表功能的情况下,MySQL binlog 中的表名在 SQL client 连接时可能并不可见。例如在 UCloud 分布式数据库 [UDDB](https://www.ucloud.cn/site/product/uddb.html) 中,其 binlog 表名可能会多出 `_0001` 的后缀。这需要根据 binlog 中的表名规律,而不是 SQL client 所见的表名,来配置 [table routing 规则](key-features.md#table-routing)。 +上游数据源为 RDS 且使用了其分库分表功能的情况下,MySQL binlog 中的表名在 SQL client 连接时可能并不可见。例如在 UCloud 分布式数据库 [UDDB](https://www.ucloud.cn/site/product/uddb.html) 中,其 binlog 表名可能会多出 `_0001` 的后缀。这需要根据 binlog 中的表名规律,而不是 SQL client 所见的表名,来配置 [table routing 规则](dm-key-features.md#table-routing)。 ## 合表迁移过程中在上游增/删表 diff --git a/zh/task-configuration-file-full.md b/zh/task-configuration-file-full.md index 0b37e3b7b..98d28780c 100644 --- a/zh/task-configuration-file-full.md +++ 
b/zh/task-configuration-file-full.md @@ -7,11 +7,11 @@ aliases: ['/docs-cn/tidb-data-migration/dev/task-configuration-file-full/','/zh/ 本文档主要介绍 Data Migration (DM) 的任务完整的配置文件,包含[全局配置](#全局配置) 和[实例配置](#实例配置) 两部分。 -关于各配置项的功能和配置,请参阅[数据迁移功能](overview.md#基本功能)。 +关于各配置项的功能和配置,请参阅[数据迁移功能](dm-overview.md#基本功能)。 ## 关键概念 -关于包括 `source-id` 和 DM-worker ID 在内的关键概念的介绍,请参阅[关键概念](config-overview.md#关键概念)。 +关于包括 `source-id` 和 DM-worker ID 在内的关键概念的介绍,请参阅[关键概念](dm-config-overview.md#关键概念)。 ## 完整配置文件示例 @@ -177,9 +177,9 @@ mysql-instances: | 配置项 | 说明 | | :------------ | :--------------------------------------- | -| `routes` | 上游和下游表之间的路由 table routing 规则集。如果上游与下游的库名、表名一致,则不需要配置该项。使用场景及示例配置参见 [Table Routing](key-features.md#table-routing) | -| `filters` | 上游数据库实例匹配的表的 binlog event filter 规则集。如果不需要对 binlog 进行过滤,则不需要配置该项。使用场景及示例配置参见 [Binlog Event Filter](key-features.md#binlog-event-filter) | -| `block-allow-list` | 该上游数据库实例匹配的表的 block & allow lists 过滤规则集。建议通过该项指定需要迁移的库和表,否则会迁移所有的库和表。使用场景及示例配置参见 [Block & Allow Lists](key-features.md#block--allow-table-lists) | +| `routes` | 上游和下游表之间的路由 table routing 规则集。如果上游与下游的库名、表名一致,则不需要配置该项。使用场景及示例配置参见 [Table Routing](dm-key-features.md#table-routing) | +| `filters` | 上游数据库实例匹配的表的 binlog event filter 规则集。如果不需要对 binlog 进行过滤,则不需要配置该项。使用场景及示例配置参见 [Binlog Event Filter](dm-key-features.md#binlog-event-filter) | +| `block-allow-list` | 该上游数据库实例匹配的表的 block & allow lists 过滤规则集。建议通过该项指定需要迁移的库和表,否则会迁移所有的库和表。使用场景及示例配置参见 [Block & Allow Lists](dm-key-features.md#block--allow-table-lists) | | `mydumpers` | dump 处理单元的运行配置参数。如果默认配置可以满足需求,则不需要配置该项,也可以只使用 `mydumper-thread` 对 `thread` 配置项单独进行配置。 | | `loaders` | load 处理单元的运行配置参数。如果默认配置可以满足需求,则不需要配置该项,也可以只使用 `loader-thread` 对 `pool-size` 配置项单独进行配置。 | | `syncers` | sync 处理单元的运行配置参数。如果默认配置可以满足需求,则不需要配置该项,也可以只使用 `syncer-thread` 对 `worker-count` 配置项单独进行配置。 | diff --git a/zh/task-configuration-file.md b/zh/task-configuration-file.md index af1ab95d6..fcedf7f45 100644 --- a/zh/task-configuration-file.md +++ 
b/zh/task-configuration-file.md @@ -7,11 +7,11 @@ aliases: ['/docs-cn/tidb-data-migration/dev/task-configuration-file/'] 本文档主要介绍 Data Migration (DM) 的任务基础配置文件,包含[全局配置](#全局配置)和[实例配置](#实例配置)两部分。 -完整的任务配置参见 [DM 任务完整配置文件介绍](task-configuration-file-full.md)。关于各配置项的功能和配置,请参阅[数据迁移功能](key-features.md)。 +完整的任务配置参见 [DM 任务完整配置文件介绍](task-configuration-file-full.md)。关于各配置项的功能和配置,请参阅[数据迁移功能](dm-key-features.md)。 ## 关键概念 -关于包括 `source-id` 和 DM-worker ID 在内的关键概念的介绍,请参阅[关键概念](config-overview.md#关键概念)。 +关于包括 `source-id` 和 DM-worker ID 在内的关键概念的介绍,请参阅[关键概念](dm-config-overview.md#关键概念)。 ## 基础配置文件示例 @@ -78,7 +78,7 @@ mysql-instances: ### 功能配置集 -对于一般的业务场景,只需要配置黑白名单过滤规则集,配置说明参见以上示例配置文件中 `block-allow-list` 的注释以及 [Block & Allow Lists](key-features.md#block--allow-table-lists) +对于一般的业务场景,只需要配置黑白名单过滤规则集,配置说明参见以上示例配置文件中 `block-allow-list` 的注释以及 [Block & Allow Lists](dm-key-features.md#block--allow-table-lists) ## 实例配置 diff --git a/zh/usage-scenario-downstream-more-columns.md b/zh/usage-scenario-downstream-more-columns.md index 98d537e30..82b600ea5 100644 --- a/zh/usage-scenario-downstream-more-columns.md +++ b/zh/usage-scenario-downstream-more-columns.md @@ -48,7 +48,7 @@ summary: 了解如何在下游表结构比数据源存在更多列的情况下 出现以上错误的原因是 DM 迁移 binlog event 时,如果 DM 内部没有维护对应于该表的表结构,则会尝试使用下游当前的表结构来解析 binlog event 并生成相应的 DML 语句。如果 binlog event 里数据的列数与下游表结构的列数不一致时,则会产生上述错误。 -此时,我们可以使用 [`operate-schema`](manage-schema.md) 命令来为该表指定与 binlog event 匹配的表结构。如果你在进行分表合并的数据迁移,那么需要为每个分表按照如下步骤在 DM 中设置用于解析 MySQL binlog 的表结构。具体操作为: +此时,我们可以使用 [`operate-schema`](dm-manage-schema.md) 命令来为该表指定与 binlog event 匹配的表结构。如果你在进行分表合并的数据迁移,那么需要为每个分表按照如下步骤在 DM 中设置用于解析 MySQL binlog 的表结构。具体操作为: 1. 为数据源中需要迁移的表 `log.messages` 指定表结构,表结构需要对应 DM 将要开始同步的 binlog event 的数据。将对应的 `CREATE TABLE` 表结构语句并保存到文件,例如将以下表结构保存到 `log.messages.sql` 中。 @@ -60,7 +60,7 @@ summary: 了解如何在下游表结构比数据源存在更多列的情况下 ) ``` -2. 使用 [`operate-schema`](manage-schema.md) 命令设置表结构(此时 task 应该由于上述错误而处于 `Paused` 状态)。 +2. 
使用 [`operate-schema`](dm-manage-schema.md) 命令设置表结构(此时 task 应该由于上述错误而处于 `Paused` 状态)。 {{< copyable "" >}} @@ -68,6 +68,6 @@ summary: 了解如何在下游表结构比数据源存在更多列的情况下 tiup dmctl --master-addr operate-schema set -s mysql-01 task-test -d log -t message log.message.sql ``` -3. 使用 [`resume-task`](resume-task.md) 命令恢复处于 `Paused` 状态的任务。 +3. 使用 [`resume-task`](dm-resume-task.md) 命令恢复处于 `Paused` 状态的任务。 -4. 使用 [`query-status`](query-status.md) 命令确认数据迁移任务是否运行正常。 +4. 使用 [`query-status`](dm-query-status.md) 命令确认数据迁移任务是否运行正常。 diff --git a/zh/usage-scenario-shard-merge.md b/zh/usage-scenario-shard-merge.md index e5e0ea7ed..299bd820f 100644 --- a/zh/usage-scenario-shard-merge.md +++ b/zh/usage-scenario-shard-merge.md @@ -78,7 +78,7 @@ CREATE TABLE `sale_01` ( ## 迁移方案 -- 要满足迁移需求 #1,无需配置 [table routing 规则](key-features.md#table-routing)。按照[去掉自增主键的主键属性](shard-merge-best-practices.md#去掉自增主键的主键属性)的要求,在下游手动建表。 +- 要满足迁移需求 #1,无需配置 [table routing 规则](dm-key-features.md#table-routing)。按照[去掉自增主键的主键属性](shard-merge-best-practices.md#去掉自增主键的主键属性)的要求,在下游手动建表。 {{< copyable "sql" >}} @@ -101,7 +101,7 @@ CREATE TABLE `sale_01` ( ignore-checking-items: ["auto_increment_ID"] ``` -- 要满足迁移需求 #2,配置 [table routing 规则](key-features.md#table-routing)如下: +- 要满足迁移需求 #2,配置 [table routing 规则](dm-key-features.md#table-routing)如下: {{< copyable "" >}} @@ -118,7 +118,7 @@ CREATE TABLE `sale_01` ( target-table: "sale" ``` -- 要满足迁移需求 #3,配置 [Block & Allow Lists](key-features.md#block--allow-table-lists) 如下: +- 要满足迁移需求 #3,配置 [Block & Allow Lists](dm-key-features.md#block--allow-table-lists) 如下: {{< copyable "" >}} @@ -131,7 +131,7 @@ CREATE TABLE `sale_01` ( tbl-name: "log_bak" ``` -- 要满足迁移需求 #4,配置 [Binlog event filter 规则](key-features.md#binlog-event-filter)如下: +- 要满足迁移需求 #4,配置 [Binlog event filter 规则](dm-key-features.md#binlog-event-filter)如下: {{< copyable "" >}} @@ -151,7 +151,7 @@ CREATE TABLE `sale_01` ( ## 迁移任务配置 -迁移任务的完整配置如下,更多详情请参阅[数据迁移任务配置向导](task-configuration-guide.md)。 
+迁移任务的完整配置如下,更多详情请参阅[数据迁移任务配置向导](dm-task-configuration-guide.md)。 {{< copyable "" >}} diff --git a/zh/usage-scenario-simple-migration.md b/zh/usage-scenario-simple-migration.md index 967b4d94a..2e87461b0 100644 --- a/zh/usage-scenario-simple-migration.md +++ b/zh/usage-scenario-simple-migration.md @@ -68,7 +68,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', ## 迁移方案 -- 为了满足[迁移要求](#迁移要求)中第一点的前三条要求,需要配置以下 [table routing 规则](key-features.md#table-routing): +- 为了满足[迁移要求](#迁移要求)中第一点的前三条要求,需要配置以下 [table routing 规则](dm-key-features.md#table-routing): {{< copyable "" >}} @@ -86,7 +86,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', target-schema: "user_south" ``` -- 为了满足[迁移要求](#迁移要求)中第二点的第一条要求,需要配置以下 [table routing 规则](key-features.md#table-routing): +- 为了满足[迁移要求](#迁移要求)中第二点的第一条要求,需要配置以下 [table routing 规则](dm-key-features.md#table-routing): {{< copyable "" >}} @@ -105,7 +105,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', target-table: "store_shenzhen" ``` -- 为了满足[迁移要求](#迁移要求)中第一点的第四条要求,需要配置以下 [binlog event filter 规则](key-features.md#binlog-event-filter): +- 为了满足[迁移要求](#迁移要求)中第一点的第四条要求,需要配置以下 [binlog event filter 规则](dm-key-features.md#binlog-event-filter): {{< copyable "" >}} @@ -123,7 +123,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', action: Ignore ``` -- 为了满足[迁移要求](#迁移要求)中第二点的第二条要求,需要配置以下 [binlog event filter 规则](key-features.md#binlog-event-filter): +- 为了满足[迁移要求](#迁移要求)中第二点的第二条要求,需要配置以下 [binlog event filter 规则](dm-key-features.md#binlog-event-filter): {{< copyable "" >}} @@ -140,7 +140,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', > > `store-filter-rule` 不同于 `log-filter-rule` 和 `user-filter-rule`。`store-filter-rule` 是针对整个 `store` 库的规则,而 `log-filter-rule` 和 `user-filter-rule` 是针对 `user` 库中 `log` 表的规则。 -- 为了满足[迁移要求](#迁移要求)中的第三点要求,需要配置以下 [Block & Allow 
Lists](key-features.md#block--allow-table-lists): +- 为了满足[迁移要求](#迁移要求)中的第三点要求,需要配置以下 [Block & Allow Lists](dm-key-features.md#block--allow-table-lists): {{< copyable "" >}} @@ -152,7 +152,7 @@ aliases: ['/docs-cn/tidb-data-migration/dev/usage-scenario-simple-replication/', ## 迁移任务配置 -以下是完整的迁移任务配置,更多详情请参阅 [数据迁移任务配置向导](task-configuration-guide.md)。 +以下是完整的迁移任务配置,更多详情请参阅 [数据迁移任务配置向导](dm-task-configuration-guide.md)。 {{< copyable "" >}} From 2f841d4c9c03ea18cac8f1893f31964ce1f0b92a Mon Sep 17 00:00:00 2001 From: qiancai Date: Tue, 21 Dec 2021 19:50:46 +0800 Subject: [PATCH 09/11] Update dm-handle-alerts.md --- en/dm-handle-alerts.md | 198 ----------------------------------------- 1 file changed, 198 deletions(-) diff --git a/en/dm-handle-alerts.md b/en/dm-handle-alerts.md index 683a43b1d..15db25b7c 100644 --- a/en/dm-handle-alerts.md +++ b/en/dm-handle-alerts.md @@ -1,4 +1,3 @@ -<<<<<<< HEAD:en/dm-handle-alerts.md --- title: Handle Alerts summary: Understand how to deal with the alert information in DM. @@ -189,200 +188,3 @@ This document introduces how to deal with the alert information in DM. - Solution: Refer to [Handle Performance Issues](dm-handle-performance-issues.md). -======= ---- -title: Handle Alerts -summary: Understand how to deal with the alert information in DM. ---- - -# Handle Alerts - -This document introduces how to deal with the alert information in DM. - -## Alerts related to high availability - -### `DM_master_all_down` - -- Description: - - If all DM-master nodes are offline, this alert is triggered. - -- Solution: - - You can take the following steps to handle the alert: - - 1. Check the environment of the cluster. - 2. Check the logs of all DM-master nodes for troubleshooting. - -### `DM_worker_offline` - -- Description: - - If a DM-worker node is offline for more than one hour, this alert is triggered. In a high-availability architecture, this alert might not directly interrupt the task but increases the risk of interruption. 
- -- Solution: - - You can take the following steps to handle the alert: - - 1. View the working status of the corresponding DM-worker node. - 2. Check whether the node is connected. - 3. Troubleshoot errors through logs. - -### `DM_DDL_error` - -- Description: - - This error occurs when DM is processing the sharding DDL operations. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -### `DM_pending_DDL` - -- Description: - - If a sharding DDL operation is pending for more than one hour, this alert is triggered. - -- Solution: - - In some scenarios, the pending sharding DDL operation might be what users expect. Otherwise, refer to [Handle Sharding DDL Locks Manually in DM](manually-handling-sharding-ddl-locks.md) for solution. - -## Alert rules related to task status - -### `DM_task_state` - -- Description: - - When a sub-task of DM-worker is in the `Paused` state for over 20 minutes, an alert is triggered. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -## Alert rules related to relay log - -### `DM_relay_process_exits_with_error` - -- Description: - - When the relay log processing unit encounters an error, this unit moves to `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -### `DM_remain_storage_of_relay_log` - -- Description: - - When the free space of the disk where the relay log is located is less than 10G, an alert is triggered. - -- Solutions: - - You can take the following methods to handle the alert: - - - Delete unwanted data manually to increase free disk space. - - Reconfigure the [automatic data purge strategy of the relay log](relay-log.md#automatic-data-purge) or [purge data manually](relay-log.md#manual-data-purge). - - Execute the command `pause-relay` to pause the relay log pulling process. 
After there is enough free disk space, resume the process by running the command `resume-relay`. Note that you must not purge upstream binlog files that have not been pulled after the relay log pulling process is paused. - -### `DM_relay_log_data_corruption` - -- Description: - - When the relay log processing unit validates the binlog event read from the upstream and detects abnormal checksum information, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -### `DM_fail_to_read_binlog_from_master` - -- Description: - - If an error occurs when the relay log processing unit tries to read the binlog event from the upstream, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -### `DM_fail_to_write_relay_log` - -- Description: - - If an error occurs when the relay log processing unit tries to write the binlog event into the relay log file, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -### `DM_binlog_file_gap_between_master_relay` - -- Description: - - When the number of the binlog files in the current upstream MySQL/MariaDB exceeds that of the latest binlog files pulled by the relay log processing unit by **more than** 1 for 10 minutes, and an alert is triggered. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -## Alert rules related to Dump/Load - -### `DM_dump_process_exists_with_error` - -- Description: - - When the Dump processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). 
- -### `DM_load_process_exists_with_error` - -- Description: - - When the Load processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -## Alert rules related to binlog replication - -### `DM_sync_process_exists_with_error` - -- Description: - - When the binlog replication processing unit encounters an error, this unit moves to the `Paused` state, and an alert is triggered immediately. - -- Solution: - - Refer to [Troubleshoot DM](dm-error-handling.md#troubleshooting). - -### `DM_binlog_file_gap_between_master_syncer` - -- Description: - - When the number of the binlog files in the current upstream MySQL/MariaDB exceeds that of the latest binlog files processed by the relay log processing unit by **more than** 1 for 10 minutes, an alert is triggered. - -- Solution: - - Refer to [Handle Performance Issues](dm-handle-performance-issues.md). - -### `DM_binlog_file_gap_between_relay_syncer` - -- Description: - - When the number of the binlog files in the current relay log processing unit exceeds that of the latest binlog files processed by the binlog replication processing unit by **more than** 1 for 10 minutes, an alert is triggered. - -- Solution: - - Refer to [Handle Performance Issues](dm-handle-performance-issues.md). ->>>>>>> parent of a030378 (rename_files):en/handle-alerts.md - - -- Solution: - - Refer to [Handle Performance Issues](handle-performance-issues.md). 
->>>>>>> parent of a030378 (rename_files):en/handle-alerts.md From 88ae02d837402c86167feff49dc7c4ea6dd98b01 Mon Sep 17 00:00:00 2001 From: qiancai Date: Tue, 21 Dec 2021 19:57:42 +0800 Subject: [PATCH 10/11] Update dm-alert-rules.md --- en/dm-alert-rules.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/en/dm-alert-rules.md b/en/dm-alert-rules.md index ad0f3b905..6733df3ea 100644 --- a/en/dm-alert-rules.md +++ b/en/dm-alert-rules.md @@ -10,5 +10,4 @@ The [alert system](migrate-data-using-dm.md#step-8-monitor-the-task-and-check-lo For more information about DM alert rules and the solutions, refer to [handle alerts](dm-handle-alerts.md). -Both DM alert information and monitoring metrics are based on Prometheus. For more information about their relationship, refer to [DM monitoring metrics](monitor-a-dm-cluster.md). -ter.md). +Both DM alert information and monitoring metrics are based on Prometheus. For more information about their relationship, refer to [DM monitoring metrics](monitor-a-dm-cluster.md). \ No newline at end of file From c7e5cc8a5f2c74fce9dede67ccc758efdcef7497 Mon Sep 17 00:00:00 2001 From: qiancai Date: Tue, 21 Dec 2021 21:16:49 +0800 Subject: [PATCH 11/11] Update dm-handle-alerts.md --- en/dm-handle-alerts.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/en/dm-handle-alerts.md b/en/dm-handle-alerts.md index 15db25b7c..66bdbd8a8 100644 --- a/en/dm-handle-alerts.md +++ b/en/dm-handle-alerts.md @@ -1,7 +1,7 @@ --- title: Handle Alerts summary: Understand how to deal with the alert information in DM. -aliases: ['/tidb-data-migration/dev/handle-alerts.md/,'/tidb-data-migration/dev/handle-alerts/'] +aliases: ['/tidb-data-migration/dev/handle-alerts/'] --- # Handle Alerts