diff --git a/docs/operation-guide.md b/docs/operation-guide.md
index daed8a12eb..e35ea2f42d 100644
--- a/docs/operation-guide.md
+++ b/docs/operation-guide.md
@@ -108,6 +108,8 @@ For minor version upgrade, updating the `image` should be enough. When TiDB majo
 
 Since `v1.0.0`, TiDB operator can perform rolling-update on configuration updates. This feature is disabled by default in favor of backward compatibility, you can enable it by setting `enableConfigMapRollout` to `true` in your helm values file.
 
+> **Note**: Currently, changing PD's `scheduler` and `replication` configurations (`maxStoreDownTime` and `maxReplicas` in `values.yaml`, and all configuration keys under the `[scheduler]` and `[replication]` sections if you override the PD config file) after cluster creation has no effect. You have to configure these variables via `pd-ctl` after cluster creation; see [pd-ctl](https://pingcap.com/docs/dev/reference/tools/pd-control/).
+
 > WARN: changing this variable against a running cluster will trigger an rolling-update of PD/TiKV/TiDB pods even if there's no configuration change.
 
 ## Destroy TiDB cluster
diff --git a/images/tidb-operator-e2e/tidb-cluster-values.yaml b/images/tidb-operator-e2e/tidb-cluster-values.yaml
index 29d7da8667..866d4cdf36 100644
--- a/images/tidb-operator-e2e/tidb-cluster-values.yaml
+++ b/images/tidb-operator-e2e/tidb-cluster-values.yaml
@@ -47,9 +47,13 @@ pd:
 
   # maxStoreDownTime is how long a store will be considered `down` when disconnected
   # if a store is considered `down`, the regions will be migrated to other stores
+  # Note: changing this value after cluster creation has no effect; instead, you have to configure it using pd-ctl,
+  # see https://pingcap.com/docs/dev/reference/tools/pd-control/
   maxStoreDownTime: 1h
   # maxReplicas is the number of replicas for each region
+  # Note: as with maxStoreDownTime, changes to this value must be made using pd-ctl after cluster creation
   maxReplicas: 3
+
   resources:
     limits: {}
     # cpu: 8000m
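For reference, a minimal sketch of adjusting these settings with `pd-ctl` on a running cluster. The port-forward target (a cluster named `demo`), the local endpoint, and the example values are assumptions for illustration, not part of this patch:

```shell
# Forward the PD client port from the Kubernetes service to localhost
# (the service name "demo-pd" assumes a TiDB cluster named "demo"; adjust to your deployment).
kubectl port-forward svc/demo-pd 2379:2379 &

# Open an interactive pd-ctl session against the forwarded endpoint.
pd-ctl -u http://127.0.0.1:2379

# Inside the pd-ctl prompt:
#   config show replication            # inspect current replication settings
#   config set max-replicas 3          # runtime counterpart of maxReplicas in values.yaml
#   config set max-store-down-time 1h  # runtime counterpart of maxStoreDownTime in values.yaml
```

Values changed this way take effect immediately, whereas the `values.yaml` settings above only apply at cluster creation time.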