backup: change the location for s3-multi-part-size
shichun-0415 committed Oct 12, 2022
1 parent d29408e commit 97b7a7e
Showing 1 changed file with 0 additions and 165 deletions.
165 changes: 0 additions & 165 deletions tikv-configuration-file.md
@@ -1417,13 +1417,6 @@ Configuration items related to BR backup.
+ Default value: `MIN(CPU * 0.75, 32)`.
+ Minimum value: `1`

<<<<<<< HEAD
=======
### `enable-auto-tune` <span class="version-mark">New in v5.4.0</span>

+ Controls whether to limit the resources used by backup tasks to reduce the impact on the cluster when the cluster resource utilization is high. For more information, refer to [BR Auto-Tune](/br/br-auto-tune.md).
+ Default value: `true`

### `s3-multi-part-size` <span class="version-mark">New in v5.3.2</span>

> **Note:**
@@ -1434,48 +1427,6 @@ Configuration items related to BR backup.
+ If data is backed up to S3 and the backup file is larger than the value of this configuration item, [multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) is automatically enabled. Based on the compression ratio, the backup file generated by a 96-MiB Region is approximately 10 MiB to 30 MiB.
+ Default value: 5MiB
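
If you do need to set these backup-related items, they live under the `[backup]` section of the TiKV configuration file. The following is an illustrative sketch only; the values are examples rather than recommendations, and `s3-multi-part-size` is deprecated from v6.1.1 as noted above:

```toml
# Illustrative sketch of backup-related items in tikv.toml (example values).
[backup]
num-threads = 8              # example; the default is MIN(CPU * 0.75, 32)
enable-auto-tune = true      # limit backup resource usage under high cluster load
s3-multi-part-size = "16MiB" # example part size; deprecated from v6.1.1
```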

## log-backup

Configuration items related to log backup.

### `enable` <span class="version-mark">New in v6.2.0</span>

+ Determines whether to enable log backup.
+ Default value: `true`

### `file-size-limit` <span class="version-mark">New in v6.2.0</span>

+ The size limit on backup log data to be stored.
+ Default value: 256MiB
+ Note: Generally, the value of `file-size-limit` is greater than the backup file size displayed in external storage. This is because the backup files are compressed before being uploaded to external storage.

### `initial-scan-pending-memory-quota` <span class="version-mark">New in v6.2.0</span>

+ The quota of cache used for storing incremental scan data during log backup.
+ Default value: `min(Total machine memory * 10%, 512 MB)`

### `initial-scan-rate-limit` <span class="version-mark">New in v6.2.0</span>

+ The rate limit on throughput in an incremental data scan during log backup.
+ Default value: 60, which means that the rate limit is 60 MB/s.

### `max-flush-interval` <span class="version-mark">New in v6.2.0</span>

+ The maximum interval for writing backup data to external storage in log backup.
+ Default value: 3min

### `num-threads` <span class="version-mark">New in v6.2.0</span>

+ The number of threads used in log backup.
+ Default value: CPU * 0.5
+ Value range: [2, 12]

### `temp-path` <span class="version-mark">New in v6.2.0</span>

+ The temporary path to which log files are written before being flushed to external storage.
+ Default value: `${deploy-dir}/data/log-backup-temp`
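
As a quick reference, the items above belong to the `[log-backup]` section of the TiKV configuration file. The snippet below is an illustrative sketch, not a definitive configuration: the values mirror the documented defaults where they are fixed, while `num-threads` and `temp-path` use example values because their defaults depend on the machine and the deployment directory:

```toml
# Illustrative sketch of the [log-backup] section in tikv.toml (example values).
[log-backup]
enable = true
file-size-limit = "256MiB"                # size limit on backup log data to be stored
max-flush-interval = "3min"               # maximum interval for flushing to external storage
num-threads = 4                           # example; the default is CPU * 0.5, clamped to [2, 12]
temp-path = "/tidb-data/log-backup-temp"  # example; the default is under ${deploy-dir}/data
```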

>>>>>>> b52f1281a (backup: change the location for s3-multi-part-size (#10786))
## cdc

Configuration items related to TiCDC.
@@ -1549,119 +1500,3 @@ For pessimistic transaction usage, refer to [TiDB Pessimistic Transaction Mode](

- This configuration item enables the pipelined process of adding the pessimistic lock. With this feature enabled, after detecting that data can be locked, TiKV immediately notifies TiDB to execute the subsequent requests and writes the pessimistic lock asynchronously, which reduces most of the latency and significantly improves the performance of pessimistic transactions. However, there is still a low probability that the asynchronous write of the pessimistic lock fails, which might cause pessimistic transaction commits to fail.
- Default value: `true`

<<<<<<< HEAD
### `s3-multi-part-size` <span class="version-mark">New in v5.3.2</span>

> **Note:**
>
> This configuration is introduced to address backup failures caused by S3 rate limiting. This problem has been fixed by [refining the backup data storage structure](https://docs.pingcap.com/tidb/stable/backup-and-restore-design#backup-file-structure). Therefore, this configuration is deprecated from v6.1.1 and is no longer recommended.

+ The part size used when you perform multipart upload to S3 during backup. You can adjust the value of this configuration to control the number of requests sent to S3.
+ If data is backed up to S3 and the backup file is larger than the value of this configuration item, [multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html) is automatically enabled. Based on the compression ratio, the backup file generated by a 96-MiB Region is approximately 10 MiB to 30 MiB.
+ Default value: 5MiB
=======
### `in-memory` <span class="version-mark">New in v6.0.0</span>

+ Enables the in-memory pessimistic lock feature. With this feature enabled, pessimistic transactions try to store their locks in memory, instead of writing the locks to disk or replicating the locks to other replicas. This improves the performance of pessimistic transactions. However, there is still a low probability that the pessimistic lock gets lost, which causes the pessimistic transaction commit to fail.
+ Default value: `true`
+ Note that `in-memory` takes effect only when the value of `pipelined` is `true`.
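
Both switches belong to the `[pessimistic-txn]` section of the TiKV configuration file. A minimal sketch (these are the documented defaults, so you normally do not need to set them explicitly):

```toml
# Illustrative sketch: pipelined and in-memory pessimistic locking in tikv.toml.
[pessimistic-txn]
pipelined = true  # write pessimistic locks asynchronously
in-memory = true  # takes effect only when pipelined = true
```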

## quota

Configuration items related to Quota Limiter.

### `max-delay-duration` <span class="version-mark">New in v6.0.0</span>

+ The maximum time that a single read or write request is forced to wait before it is processed in the foreground.
+ Default value: `500ms`
+ Recommended setting: Use the default value in most cases. If out of memory (OOM) or severe performance jitter occurs in the instance, you can set the value to `1s` to keep the request waiting time shorter than 1 second.

### Foreground Quota Limiter

Configuration items related to foreground Quota Limiter.

Suppose that the machine on which TiKV is deployed has limited resources, for example, only a 4-core CPU and 16 GB of memory. In this situation, the TiKV foreground might process too many read and write requests and occupy the CPU resources intended for the background, which affects the performance stability of TiKV. To avoid this situation, you can use the foreground quota-related configuration items to limit the CPU resources used by the foreground. When a request triggers Quota Limiter, the request is forced to wait for a while so that TiKV can free up CPU resources. The exact waiting time depends on the number of requests, and the maximum waiting time is no longer than the value of [`max-delay-duration`](#max-delay-duration-new-in-v600).

#### `foreground-cpu-time` <span class="version-mark">New in v6.0.0</span>

+ The soft limit on the CPU resources used by TiKV foreground to process read and write requests.
+ Default value: `0` (which means no limit)
+ Unit: millicpu (for example, `1500` means that the foreground requests consume 1.5 CPU cores)
+ Recommended setting: For instances with more than 4 cores, use the default value `0`. For instances with 4 cores, setting the value in the range of `1000` to `1500` strikes a balance. For instances with 2 cores, keep the value smaller than `1200`.

#### `foreground-write-bandwidth` <span class="version-mark">New in v6.0.0</span>

+ The soft limit on the bandwidth with which transactions write data.
+ Default value: `0KB` (which means no limit)
+ Recommended setting: Use the default value `0` in most cases unless the `foreground-cpu-time` setting is not enough to limit the write bandwidth. In that case, it is recommended to set the value to smaller than `50MB` on instances with 4 or fewer cores.

#### `foreground-read-bandwidth` <span class="version-mark">New in v6.0.0</span>

+ The soft limit on the bandwidth with which transactions and the Coprocessor read data.
+ Default value: `0KB` (which means no limit)
+ Recommended setting: Use the default value `0` in most cases unless the `foreground-cpu-time` setting is not enough to limit the read bandwidth. In that case, it is recommended to set the value to smaller than `20MB` on instances with 4 or fewer cores.
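
For example, on a 4-core instance you could combine the foreground items above with `max-delay-duration` under the `[quota]` section as follows. The numbers are illustrative, not tuned recommendations:

```toml
# Illustrative sketch of foreground quota settings in tikv.toml (4-core instance, example values).
[quota]
foreground-cpu-time = 1200          # millicpu, that is, 1.2 CPU cores
foreground-write-bandwidth = "50MB" # example soft limit on transaction writes
foreground-read-bandwidth = "20MB"  # example soft limit on transaction and Coprocessor reads
max-delay-duration = "500ms"        # maximum forced wait per request
```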

### Background Quota Limiter

Configuration items related to background Quota Limiter.

Suppose that the machine on which TiKV is deployed has limited resources, for example, only a 4-core CPU and 16 GB of memory. In this situation, the TiKV background might perform too many calculations and process too many read and write requests, occupying the CPU resources intended for the foreground and affecting the performance stability of TiKV. To avoid this situation, you can use the background quota-related configuration items to limit the CPU resources used by the background. When a request triggers Quota Limiter, the request is forced to wait for a while so that TiKV can free up CPU resources. The exact waiting time depends on the number of requests, and the maximum waiting time is no longer than the value of [`max-delay-duration`](#max-delay-duration-new-in-v600).

> **Warning:**
>
> - Background Quota Limiter is an experimental feature introduced in TiDB v6.2.0, and it is **NOT** recommended to use it in the production environment.
> - This feature is only suitable for environments with limited resources to ensure that TiKV can run stably in those environments. If you enable this feature in an environment with rich resources, performance degradation might occur when the amount of requests reaches a peak.

#### `background-cpu-time` <span class="version-mark">New in v6.2.0</span>

+ The soft limit on the CPU resources used by TiKV background to process read and write requests.
+ Default value: `0` (which means no limit)
+ Unit: millicpu (for example, `1500` means that the background requests consume 1.5 CPU cores)

#### `background-write-bandwidth` <span class="version-mark">New in v6.2.0</span>

> **Note:**
>
> This configuration item is returned in the result of `SHOW CONFIG`, but currently setting it does not take any effect.

+ The soft limit on the bandwidth with which background transactions write data.
+ Default value: `0KB` (which means no limit)

#### `background-read-bandwidth` <span class="version-mark">New in v6.2.0</span>

> **Note:**
>
> This configuration item is returned in the result of `SHOW CONFIG`, but currently setting it does not take any effect.

+ The soft limit on the bandwidth with which background transactions and the Coprocessor read data.
+ Default value: `0KB` (which means no limit)

#### `enable-auto-tune` <span class="version-mark">New in v6.2.0</span>

+ Determines whether to enable the auto-tuning of quota. If this configuration item is enabled, TiKV dynamically adjusts the quota for the background requests based on the load of TiKV instances.
+ Default value: `false` (which means that the auto-tuning is disabled)
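
If you do enable this experimental feature in a resource-constrained environment, the background items also go under the `[quota]` section. An illustrative sketch with example values:

```toml
# Illustrative sketch of background quota settings in tikv.toml (experimental, example values).
[quota]
background-cpu-time = 1000           # millicpu, that is, 1 CPU core
background-write-bandwidth = "20MB"  # currently returned by SHOW CONFIG but not effective
background-read-bandwidth = "20MB"   # currently returned by SHOW CONFIG but not effective
enable-auto-tune = true              # let TiKV adjust the background quota dynamically
```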

## causal-ts <span class="version-mark">New in v6.1.0</span>

Configuration items related to getting the timestamp when TiKV API V2 is enabled (`storage.api-version = 2`).

To reduce write latency and avoid frequent access to PD, TiKV periodically fetches and caches a batch of timestamps locally. When the locally cached timestamps are exhausted, TiKV immediately makes a timestamp request. In this situation, the latency of some write requests is increased. To reduce the occurrence of this situation, TiKV dynamically adjusts the size of the locally cached timestamp batch according to the workload. In most cases, you do not need to adjust the following parameters.

> **Warning:**
>
> TiKV API V2 is still an experimental feature. It is not recommended to use it in production environments.

### `renew-interval`

+ The interval at which the locally cached timestamps are refreshed.
+ At an interval of `renew-interval`, TiKV starts a batch of timestamp refreshes and adjusts the number of cached timestamps according to the timestamp consumption in the previous period. If you set this parameter to a value that is too large, the latest TiKV workload changes are not reflected in time. If you set it to a value that is too small, the load on PD increases. If the write traffic fluctuates sharply, timestamps are frequently exhausted, and write latency increases, you can set this parameter to a smaller value, taking the load on PD into account as well.
+ Default value: `"100ms"`

### `renew-batch-min-size`

+ The minimum number of locally cached timestamps.
+ TiKV adjusts the number of cached timestamps according to the timestamp consumption in the previous period. If the usage of locally cached timestamps is low, TiKV gradually reduces the number of cached timestamps until it reaches `renew-batch-min-size`. If large bursty write traffic often occurs in your application, you can set this parameter to a larger value as appropriate. Note that this parameter is the cache size for a single tikv-server. If you set the parameter to a value that is too large and the cluster contains many tikv-servers, TSOs are consumed too quickly.
+ In the **TiKV-RAW** \> **Causal timestamp** panel in Grafana, **TSO batch size** is the number of locally cached timestamps that has been dynamically adjusted according to the application workload. You can refer to this metric to adjust `renew-batch-min-size`.
+ Default value: `100`
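
If you do need to tune these parameters, they belong to the `[causal-ts]` section and apply only when TiKV API V2 is enabled (`storage.api-version = 2`). An illustrative sketch using the documented defaults:

```toml
# Illustrative sketch of the [causal-ts] section in tikv.toml (TiKV API V2 only).
[causal-ts]
renew-interval = "100ms"    # how often the local timestamp cache is refreshed
renew-batch-min-size = 100  # minimum number of locally cached timestamps per tikv-server
```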
>>>>>>> b52f1281a (backup: change the location for s3-multi-part-size (#10786))
