diff --git a/ticdc/manage-ticdc.md b/ticdc/manage-ticdc.md
index 7523e715f383e..fe61d943e0fd6 100644
--- a/ticdc/manage-ticdc.md
+++ b/ticdc/manage-ticdc.md
@@ -651,6 +651,10 @@ In the output of the above command, if the value of `sort-engine` is "unified",
 
 ## Eventually consistent replication in disaster scenarios
 
+> **Warning:**
+>
+> In v5.3.0 and v5.4.0, it is not recommended that you use eventually consistent replication in disaster scenarios. For details, see [#6189](https://github.com/pingcap/tiflow/issues/6189). This issue has been fixed in v6.1.1 and later versions, so it is recommended that you use v6.1.1 or a later version.
+
 Starting from v5.3.0, TiCDC supports backing up incremental data from an upstream TiDB cluster to S3 storage or an NFS file system of a downstream cluster. When the upstream cluster encounters a disaster and becomes unavailable, TiCDC can restore the downstream data to a recent, eventually consistent state. This is the eventually consistent replication capability provided by TiCDC. With this capability, you can switch applications to the downstream cluster quickly, avoiding long downtime and improving service continuity.
 
 Currently, TiCDC can replicate incremental data from a TiDB cluster to another TiDB cluster or a MySQL-compatible database system (including Aurora, MySQL, and MariaDB). If the upstream cluster crashes, TiCDC can restore data in the downstream cluster within 5 minutes, provided that the replication status of TiCDC was normal and the replication lag was small before the disaster. The data loss is at most 10 seconds; that is, RTO <= 5 min and P95 RPO <= 10s.
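
For context on the feature this diff warns about: eventually consistent replication is enabled through the `consistent` section of a changefeed configuration file. The following is a minimal sketch, not part of the change above; the file name, storage URI, bucket, and option values are placeholders, and defaults may differ between TiCDC versions.

```toml
# Sketch of a changefeed configuration file (for example, changefeed.toml)
# that enables eventually consistent replication. Values are illustrative.
[consistent]
# "eventual" turns on redo-log-based eventually consistent replication;
# "none" disables it.
level = "eventual"
# Maximum size of a single redo log file, in MiB.
max-log-size = 64
# Interval for flushing redo logs to the external storage, in milliseconds.
flush-interval = 2000
# Where redo logs are stored: an S3 URI or an NFS path mounted on the TiCDC
# nodes (the bucket and path below are placeholders).
storage = "s3://redo-bucket/redo-dir?region=us-east-1"
```

Such a file is typically passed to `cdc cli changefeed create` through the `--config` flag when the changefeed is created.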