
Commit 17dee2d

CaitinChen authored and LinuxGit committed

op-guide: add configuring CPUfreq governor mode (#531)

* op-guide: add configuring CPUfreq governor mode
* Fix the code block format
* Update ansible-deployment.md
* Update wording
1 parent d4215fc commit 17dee2d

File tree

op-guide/ansible-deployment.md

1 file changed: 58 additions, 4 deletions
@@ -207,7 +207,61 @@ The NTP service is installed and started using the software repository that comes with the system.

To make the NTP service start synchronizing as soon as possible, the system executes the `ntpdate` command to set the local date and time by polling `ntp_server` in the `hosts.ini` file. The default server is `pool.ntp.org`, and you can also replace it with your NTP server.

## Step 7: Configure the CPUfreq governor mode on the target machine
For details about CPUfreq, see [the CPUfreq Governor documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/power_management_guide/cpufreq_governors).

Set the CPUfreq governor mode to `performance` to make full use of CPU performance.

### Check the governor modes supported by the system

You can run the `cpupower frequency-info --governors` command to check the governor modes that the system supports:

```
# cpupower frequency-info --governors
analyzing CPU 0:
  available cpufreq governors: performance powersave
```
Taking the above output as an example, the system supports the `performance` and `powersave` modes.

> **Note:** As the following shows, if it returns “Not Available”, it means that the current system does not support CPUfreq configuration and you can skip this step.
>
> ```
> # cpupower frequency-info --governors
> analyzing CPU 0:
>   available cpufreq governors: Not Available
> ```
### Check the current governor mode

You can run the `cpupower frequency-info --policy` command to check the current CPUfreq governor mode:

```
# cpupower frequency-info --policy
analyzing CPU 0:
  current policy: frequency should be within 1.20 GHz and 3.20 GHz.
                  The governor "powersave" may decide which speed to use
                  within this range.
```

As the above output shows, the current mode is `powersave` in this example.
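The same information is also exposed through sysfs. Below is a minimal sketch, not part of tidb-ansible: the file names are the standard kernel cpufreq interface, but the `show_governor` helper name and its base-directory parameter are made up here for illustration.

```shell
# show_governor [BASE] -- print the available and current governor for cpu0
# under BASE (defaults to the real sysfs root). Hypothetical helper; the
# scaling_* file names are the standard kernel cpufreq sysfs interface.
show_governor() {
    base="${1:-/sys/devices/system/cpu}"
    if [ -r "$base/cpu0/cpufreq/scaling_governor" ]; then
        echo "available: $(cat "$base/cpu0/cpufreq/scaling_available_governors")"
        echo "current: $(cat "$base/cpu0/cpufreq/scaling_governor")"
    else
        # The system does not expose cpufreq (matches the "Not Available" case).
        echo "cpufreq not available"
    fi
}
```

Reading sysfs avoids a dependency on the `cpupower` binary, which may not be installed on a minimal system.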
### Change the governor mode

- You can run the following command to change the current mode to `performance`:

    ```
    # cpupower frequency-set --governor performance
    ```

- You can also run the following command to set the mode on the target machines in batches:

    ```
    $ ansible -i hosts.ini all -m shell -a "cpupower frequency-set --governor performance" -b
    ```
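After changing the mode, you may want to confirm that every core picked it up. The sketch below is hypothetical (the `verify_governors` helper is not part of tidb-ansible); it reads the standard per-CPU sysfs entries and could be run on each host, for example through the same `ansible ... -m shell` invocation as above.

```shell
# verify_governors [BASE] -- list each CPU's current governor under BASE
# (defaults to the real sysfs root) so you can confirm the change took
# effect on every core. Hypothetical helper, for illustration only.
verify_governors() {
    base="${1:-/sys/devices/system/cpu}"
    found=0
    for f in "$base"/cpu[0-9]*/cpufreq/scaling_governor; do
        [ -e "$f" ] || continue   # glob did not match: no cpufreq entries
        found=1
        cpu=$(basename "$(dirname "$(dirname "$f")")")
        printf '%s %s\n' "$cpu" "$(cat "$f")"
    done
    [ "$found" -eq 1 ] || echo "cpufreq not available"
}
```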
## Step 8: Mount the data disk ext4 filesystem with options on the target machines

Log in to the Control Machine using the `root` user account.
@@ -274,7 +328,7 @@ Take the `/dev/nvme0n1` data disk as an example:

If the filesystem is ext4 and `nodelalloc` is included in the mount options, you have successfully mounted the data disk ext4 filesystem with options on the target machines.
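That check can be scripted. A sketch follows; the `has_nodelalloc` helper and its optional second argument (which overrides the `mount` output for testing) are made up here for illustration.

```shell
# has_nodelalloc DEVICE [MOUNT_OUTPUT] -- succeed if DEVICE is mounted
# with the nodelalloc option. Parses `mount` output; the second argument
# substitutes for it so the function can be exercised without a real mount.
has_nodelalloc() {
    dev="$1"
    out="${2:-$(mount)}"
    echo "$out" | grep "^$dev " | grep -q "nodelalloc"
}
```

For example, `has_nodelalloc /dev/nvme0n1 && echo "mounted with nodelalloc"` on a target machine.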
## Step 9: Edit the `inventory.ini` file to orchestrate the TiDB cluster

Log in to the Control Machine using the `tidb` user account, and edit the `tidb-ansible/inventory.ini` file to orchestrate the TiDB cluster. The standard TiDB cluster contains 6 machines: 2 TiDB nodes, 3 PD nodes and 3 TiKV nodes.
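That topology can be sketched in `inventory.ini` form. The group names below are the ones tidb-ansible uses, but the IP addresses are placeholders and the co-location of TiDB with two of the PD nodes is only one possible layout; follow the file shipped with tidb-ansible for the full set of groups and variables.

```ini
## Sketch: 6 machines -- 2 TiDB (sharing hosts with 2 of the PDs), 3 PD, 3 TiKV.
## All IPs below are placeholders.
[tidb_servers]
172.16.10.1
172.16.10.2

[pd_servers]
172.16.10.1
172.16.10.2
172.16.10.3

[tikv_servers]
172.16.10.4
172.16.10.5
172.16.10.6
```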
@@ -411,7 +465,7 @@ location_labels = ["host"]

- `capacity`: total disk capacity / number of TiKV instances (the unit is GB)
## Step 10: Edit variables in the `inventory.ini` file

This step describes how to edit the deployment directory variable and other variables in the `inventory.ini` file.
@@ -459,7 +513,7 @@ To enable the following control variables, use the capitalized `True`. To disable the following control variables, use the capitalized `False`.

| enable_bandwidth_limit | to set a bandwidth limit when pulling the diagnostic data from the target machines to the Control Machine; used together with the `collect_bandwidth_limit` variable |
| collect_bandwidth_limit | the limited bandwidth when pulling the diagnostic data from the target machines to the Control Machine; unit: Kbit/s; default 10000, indicating 10 Mb/s; for the cluster topology of multiple TiKV instances on each TiKV node, you need to divide the limit by the number of TiKV instances on each TiKV node |
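The last rule amounts to a simple division; here is a worked example assuming a hypothetical topology of 2 TiKV instances per node.

```shell
# Per-instance limit = per-node budget / TiKV instances on that node.
node_budget_kbit=10000      # default collect_bandwidth_limit: 10 Mb/s
tikv_instances_per_node=2   # assumed example topology
echo $(( node_budget_kbit / tikv_instances_per_node ))   # prints 5000
```

So in this assumed topology you would set `collect_bandwidth_limit` to 5000 for each instance.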
## Step 11: Deploy the TiDB cluster

When `ansible-playbook` runs the playbook, the default concurrency is 5. If there are many target machines, you can add the `-f` parameter to specify the concurrency, such as `ansible-playbook deploy.yml -f 10`.