`op-guide/ansible-deployment.md`
The NTP service is installed and started using the software repository that comes with the system.

To make the NTP service start synchronizing as soon as possible, the system executes the `ntpdate` command to set the local date and time by polling the `ntp_server` defined in the `hosts.ini` file. The default server is `pool.ntp.org`, and you can replace it with your own NTP server.
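
For example, a minimal sketch of the relevant part of `hosts.ini` (the `[all:vars]` section layout follows the tidb-ansible convention, and `ntp.internal.example.com` is an illustrative placeholder; point it at whatever NTP server your machines can reach):

```
[all:vars]
# Hypothetical internal NTP server; the default is pool.ntp.org
ntp_server = ntp.internal.example.com
```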

## Step 7: Configure the CPUfreq governor mode on the target machine
For details about CPUfreq, see [the CPUfreq Governor documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/power_management_guide/cpufreq_governors).

Set the CPUfreq governor mode to `performance` to make full use of CPU performance.
### Check the governor modes supported by the system
You can run the `cpupower frequency-info --governors` command to check the governor modes that the system supports:
```
# cpupower frequency-info --governors
analyzing CPU 0:
  available cpufreq governors: performance powersave
```
As the above output shows, the system supports the `performance` and `powersave` modes.
> **Note:** If the command returns "Not Available" as in the following output, the current system does not support CPUfreq configuration and you can skip this step.
>
> ```
> # cpupower frequency-info --governors
> analyzing CPU 0:
>   available cpufreq governors: Not Available
> ```
### Check the current governor mode
You can run the `cpupower frequency-info --policy` command to check the current CPUfreq governor mode:
```
# cpupower frequency-info --policy
analyzing CPU 0:
  current policy: frequency should be within 1.20 GHz and 3.20 GHz.
                  The governor "powersave" may decide which speed to use
                  within this range.
```
As the above output shows, the current governor mode is `powersave` in this example.
### Change the governor mode
- You can run the following command to change the current mode to `performance`:

    ```
    # cpupower frequency-set --governor performance
    ```
- You can also run the following command to set the mode on all target machines in batches (a verification sketch follows this list):

    ```
    $ ansible -i hosts.ini all -m shell -a "cpupower frequency-set --governor performance" -b
    ```
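
To confirm that the change has taken effect everywhere, you can run a batch check with the same ansible pattern; this is a minimal sketch that simply reuses the `cpupower frequency-info --policy` command shown above:

```
$ ansible -i hosts.ini all -m shell -a "cpupower frequency-info --policy" -b
```

Each machine should now report `performance` as its current governor.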
## Step 8: Mount the data disk ext4 filesystem with options on the target machines
Log in to the Control Machine using the `root` user account.

Take the `/dev/nvme0n1` data disk as an example:

If the filesystem is ext4 and `nodelalloc` is included in the mount options, you have successfully mounted the data disk ext4 filesystem with options on the target machines.
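
As a quick check, listing the mounted ext4 filesystems should show the expected options; this sketch assumes the disk is mounted at `/data1` (the mount point and the exact option list are illustrative):

```
$ mount -t ext4
/dev/nvme0n1 on /data1 type ext4 (rw,noatime,nodelalloc,data=ordered)
```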
## Step 9: Edit the `inventory.ini` file to orchestrate the TiDB cluster
Log in to the Control Machine using the `tidb` user account, and edit the `tidb-ansible/inventory.ini` file to orchestrate the TiDB cluster. The standard TiDB cluster contains 6 machines: 2 TiDB nodes, 3 PD nodes, and 3 TiKV nodes.
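
As a sketch of what the orchestration groups can look like for such a 6-machine cluster (the IP addresses are placeholders; the `[tidb_servers]`, `[pd_servers]`, and `[tikv_servers]` group names follow the tidb-ansible convention, and here the TiDB nodes share machines with PD):

```
[tidb_servers]
172.16.10.1
172.16.10.2

[pd_servers]
172.16.10.1
172.16.10.2
172.16.10.3

[tikv_servers]
172.16.10.4
172.16.10.5
172.16.10.6
```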
- `capacity`: total disk capacity / number of TiKV instances (unit: GB)
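
    For example (illustrative numbers): with a 1000 GB data disk shared by 2 TiKV instances on one machine, set `capacity` to 1000 / 2 = 500 for each instance.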
## Step 10: Edit variables in the `inventory.ini` file
This step describes how to edit the deployment directory variable and other variables in the `inventory.ini` file.
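
For example, a minimal sketch of setting the deployment directory (the `deploy_dir` variable name follows the tidb-ansible convention; the `/data1/deploy` path is an illustrative choice on the mounted data disk):

```
## Global variables
[all:vars]
deploy_dir = /data1/deploy
```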
To enable the following control variables, use the capitalized `True`. To disable them, use the capitalized `False`.

| Variable | Description |
| :------- | :---------- |
| enable_bandwidth_limit | to set a bandwidth limit when pulling the diagnostic data from the target machines to the Control Machine; used together with the `collect_bandwidth_limit` variable |
| collect_bandwidth_limit | the bandwidth limit when pulling the diagnostic data from the target machines to the Control Machine; unit: Kbit/s; default: 10000, indicating 10 Mbit/s; for a cluster topology with multiple TiKV instances on each TiKV node, divide this value by the number of TiKV instances on each node |
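
As a sketch, the two variables might be set together like this (the values are illustrative, capping pulls at roughly 5 Mbit/s for a topology with 2 TiKV instances per node; placement under `[all:vars]` follows the tidb-ansible convention):

```
[all:vars]
enable_bandwidth_limit = True
collect_bandwidth_limit = 5000
```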
## Step 11: Deploy the TiDB cluster
When `ansible-playbook` runs a Playbook, the default concurrency is 5. If there are many deployment target machines, you can add the `-f` parameter to specify the concurrency, for example `ansible-playbook deploy.yml -f 10`.