diff --git a/tikv-configuration-file.md b/tikv-configuration-file.md
index b693304c4fdea..1a3cb1b3bde05 100644
--- a/tikv-configuration-file.md
+++ b/tikv-configuration-file.md
@@ -1507,6 +1507,57 @@ Configuration items related to BR backup.
 + Controls whether to limit the resources used by backup tasks to reduce the impact on the cluster when the cluster resource utilization is high. For more information, refer to [BR Auto-Tune](/br/br-auto-tune.md).
 + Default value: `true`
 
+### `s3-multi-part-size` New in v5.3.2
+
+> **Note:**
+>
+> This configuration item was introduced to address backup failures caused by S3 rate limiting. This problem has been fixed by [refining the backup data storage structure](/br/backup-and-restore-design.md#backup-file-structure). Therefore, this configuration item is deprecated starting from v6.1.1 and is no longer recommended.
+
++ The part size used when you perform multipart upload to S3 during backup. You can adjust the value of this configuration item to control the number of requests sent to S3.
++ If data is backed up to S3 and the backup file is larger than the value of this configuration item, [multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) is automatically enabled. Based on the compression ratio, the backup file generated by a 96-MiB Region is approximately 10 MiB to 30 MiB.
++ Default value: `5MiB`
+
+## log-backup
+
+Configuration items related to log backup.
+
+### `enable` New in v6.2.0
+
++ Determines whether to enable log backup.
++ Default value: `true`
+
+### `file-size-limit` New in v6.2.0
+
++ The size limit on backup log data to be stored.
++ Default value: `256MiB`
++ Note: Generally, the value of `file-size-limit` is greater than the backup file size displayed in external storage. This is because the backup files are compressed before being uploaded to external storage.
+
+### `initial-scan-pending-memory-quota` New in v6.2.0
+
++ The quota of cache used for storing incremental scan data during log backup.
++ Default value: `min(Total machine memory * 10%, 512 MB)`
+
+### `initial-scan-rate-limit` New in v6.2.0
+
++ The rate limit on throughput in an incremental data scan during log backup.
++ Default value: `60`, indicating that the rate limit is 60 MB/s by default.
+
+### `max-flush-interval` New in v6.2.0
+
++ The maximum interval for writing backup data to external storage in log backup.
++ Default value: `3min`
+
+### `num-threads` New in v6.2.0
+
++ The number of threads used in log backup.
++ Default value: `CPU * 0.5`
++ Value range: `[2, 12]`
+
+### `temp-path` New in v6.2.0
+
++ The temporary path to which log files are written before being flushed to external storage.
++ Default value: `${deploy-dir}/data/log-backup-temp`
+
 ## cdc
 
 Configuration items related to TiCDC.
@@ -1628,3 +1679,26 @@ Suppose that your machine on which TiKV is deployed has limited resources, for e
 + The part size used when you perform multipart upload to S3 during backup. You can adjust the value of this configuration to control the number of requests sent to S3.
 + If data is backed up to S3 and the backup file is larger than the value of this configuration item, [multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) is automatically enabled. Based on the compression ratio, the backup file generated by a 96-MiB Region is approximately 10 MiB to 30 MiB.
 + Default value: 5MiB
+
+## causal-ts
+
+Configuration items related to getting the timestamp when TiKV API V2 is enabled (`storage.api-version = 2`).
+
+To reduce write latency and avoid frequent access to PD, TiKV periodically fetches and caches a batch of timestamps locally. When the locally cached timestamps are exhausted, TiKV immediately makes a timestamp request. In this situation, the latency of some write requests is increased. To reduce the occurrence of this situation, TiKV dynamically adjusts the number of locally cached timestamps according to the workload. In most cases, you do not need to adjust the following parameters.
+
+> **Warning:**
+>
+> TiKV API V2 is still an experimental feature. It is not recommended to use it in production environments.
+
+### `renew-interval`
+
++ The interval at which the locally cached timestamps are refreshed.
++ At an interval of `renew-interval`, TiKV starts a batch of timestamp refresh and adjusts the number of cached timestamps according to the timestamp consumption in the previous period. If you set this parameter to too large a value, the latest TiKV workload changes are not reflected in time. If you set this parameter to too small a value, the load of PD increases. If the write traffic fluctuates strongly, timestamps are frequently exhausted, and write latency increases, you can set this parameter to a smaller value, but you should also consider the load of PD.
++ Default value: `"100ms"`
+
+### `renew-batch-min-size`
+
++ The minimum number of locally cached timestamps.
++ TiKV adjusts the number of cached timestamps according to the timestamp consumption in the previous period. If the usage of locally cached timestamps is low, TiKV gradually reduces the number of cached timestamps until it reaches `renew-batch-min-size`. If large bursty write traffic often occurs in your application, you can set this parameter to a larger value as appropriate. Note that this parameter is the cache size for a single tikv-server. If you set the parameter to too large a value and the cluster contains many tikv-servers, the TSO consumption will be too fast.
++ In the **TiKV-RAW** \> **Causal timestamp** panel in Grafana, **TSO batch size** is the number of locally cached timestamps that has been dynamically adjusted according to the application workload. You can refer to this metric to adjust `renew-batch-min-size`.
++ Default value: `100`
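Taken together, the options this patch documents can be sketched as a TiKV configuration fragment. The following is an illustrative example only, assembled from the defaults documented above: the key names come from the patch, the `[backup]`, `[log-backup]`, and `[causal-ts]` table names mirror the documentation headings, and the exact value syntax (for example, whether `initial-scan-rate-limit` takes a bare number or a size string) should be verified against the target TiKV release.

```toml
# Illustrative TiKV config fragment based on the options documented in this patch.
# Values are the documented defaults unless noted otherwise.

[backup]
# Deprecated starting from v6.1.1: part size for S3 multipart upload during backup.
s3-multi-part-size = "5MiB"

[log-backup]
enable = true
file-size-limit = "256MiB"
# Documented default is min(total machine memory * 10%, 512 MB);
# a fixed value is shown here for illustration.
initial-scan-pending-memory-quota = "512MB"
# Documented default is 60, that is, a 60 MB/s rate limit.
initial-scan-rate-limit = 60
max-flush-interval = "3min"
# Documented default is CPU * 0.5, within the value range [2, 12].
num-threads = 4
# Documented default is ${deploy-dir}/data/log-backup-temp; a literal path is shown here.
temp-path = "/path/to/deploy/data/log-backup-temp"

[causal-ts]
renew-interval = "100ms"
renew-batch-min-size = 100
```

Because `s3-multi-part-size` is deprecated starting from v6.1.1, set it only on releases where the S3 rate-limiting problem still applies.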