
Commit

en: improve format and fix errors in existing files (#585)
Signed-off-by: Ran <huangran@pingcap.com>
ran-huang committed Jul 29, 2020
1 parent 8cc60b9 commit 242131c
Showing 26 changed files with 140 additions and 142 deletions.
4 changes: 2 additions & 2 deletions en/access-dashboard.md
@@ -156,9 +156,9 @@ type: kubernetes.io/tls

After Ingress is deployed, visit <https://{host}/dashboard> to access TiDB Dashboard.

-## Update TiDB cluster
+## Update the TiDB cluster

-If you enable quick access to TiDB Dashboard by updating an existing TiDB cluster, you need update the following two configurations:
+To enable quick access to TiDB Dashboard by updating an existing TiDB cluster, update the following two configurations:

```yaml
apiVersion: pingcap.com/v1alpha1
# ...(diff collapsed)
```
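
The diff above shows only the first line of the updated `TidbCluster` manifest. As a rough, non-authoritative sketch of what a quick-access update of this kind can look like — the field names `configUpdateStrategy` and `enableDashboardInternalProxy` are recalled from the TidbCluster reference for this Operator generation, not taken from this commit, so verify them against the document itself:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  # Roll configuration changes out to the running Pods (assumed setting).
  configUpdateStrategy: RollingUpdate
  pd:
    # Let any PD Pod proxy TiDB Dashboard requests (assumed setting).
    enableDashboardInternalProxy: true
```
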
4 changes: 2 additions & 2 deletions en/access-tidb.md
@@ -30,13 +30,13 @@ If there is no LoadBalancer, expose the TiDB service port in the following two m

- `externalTrafficPolicy=Cluster`: All machines in the Kubernetes cluster assign a NodePort to TiDB Pod, which is the default mode.

-When using the `Cluster` mode, you can access the TiDB service by using the IP address of any machine plus a same port. If there is no TiDB Pod on the machine, the corresponding request is forwarded to the machine with a TiDB Pod.
+When using the `Cluster` mode, you can access the TiDB service by using the IP address of any machine plus the same port. If there is no TiDB Pod on the machine, the corresponding request is forwarded to the machine with a TiDB Pod.

> **Note:**
>
> In this mode, the request's source IP obtained by the TiDB server is the node IP, not the real client's source IP. Therefore, the access control based on the client's source IP is not available in this mode.

-- `externalTrafficPolicy=Local`: Only those machines that runs TiDB assign NodePort to TiDB Pod so that you can access local TiDB instances.
+- `externalTrafficPolicy=Local`: Only those machines that run TiDB assign NodePort to TiDB Pod so that you can access local TiDB instances.

When you use the `Local` mode, it is recommended to enable the `StableScheduling` feature of `tidb-scheduler`. `tidb-scheduler` tries to schedule the newly added TiDB instances to the existing machines during the upgrade process. With such scheduling, client outside the Kubernetes cluster does not need to upgrade configuration after TiDB is restarted.

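For reference, the `externalTrafficPolicy` value discussed above is set on the TiDB service. A minimal sketch, assuming the service is declared through the `TidbCluster` object's `spec.tidb.service` block (the field names come from the TiDB Operator service spec as I recall it, not from this diff):

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  tidb:
    service:
      # Expose the TiDB MySQL port on a NodePort of the Kubernetes nodes.
      type: NodePort
      # Only nodes that run a TiDB Pod accept traffic, so the client
      # source IP is preserved for access control.
      externalTrafficPolicy: Local
```

With `Local`, enabling the `StableScheduling` feature mentioned above keeps the set of serving nodes stable across rolling upgrades.
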
6 changes: 3 additions & 3 deletions en/architecture.md
@@ -28,9 +28,9 @@ The following components are responsible for the orchestration and scheduling lo

* `tidb-controller-manager` is a set of custom controllers in Kubernetes. These controllers constantly compare the desired state recorded in the `TidbCluster` object with the actual state of the TiDB cluster. They adjust the resources in Kubernetes to drive the TiDB cluster to meet the desired state and complete the corresponding control logic according to other CRs;
* `tidb-scheduler` is a Kubernetes scheduler extension that injects the TiDB specific scheduling policies to the Kubernetes scheduler;
-* `tidb-admission-webhook` is a dynamic admission controller in Kubernetes, which completes the modification, verification, operation and maintenance of Pod, StatefulSet and other related resources.
+* `tidb-admission-webhook` is a dynamic admission controller in Kubernetes, which completes the modification, verification, operation, and maintenance of Pod, StatefulSet, and other related resources.

-In addition, TiDB Operator also provides `tkctl`, the command-line interface for TiDB clusters in Kubernetes. It is used for cluster operations and troubleshooting cluster issues.
+In addition, TiDB Operator provides `tkctl`, the command-line interface for TiDB clusters in Kubernetes. It is used for cluster operations and troubleshooting cluster issues.

## Control flow

@@ -45,4 +45,4 @@ The overall control flow is described as follows:
3. Kubernetes' native controllers create, update, or delete the corresponding `Pod` based on objects such as `StatefulSet`, `Deployment`, and `Job`;
4. In the `Pod` declaration of PD, TiKV, and TiDB, the `tidb-scheduler` scheduler is specified. `tidb-scheduler` applies the specific scheduling logic of TiDB when scheduling the corresponding `Pod`.

-Based on the above declarative control flow, TiDB Operator automatically performs health check and fault recovery for the cluster nodes. You can easily modify the `TidbCluster` object declaration to perform operations such as deployment, upgrade and scaling.
+Based on the above declarative control flow, TiDB Operator automatically performs health check and fault recovery for the cluster nodes. You can easily modify the `TidbCluster` object declaration to perform operations such as deployment, upgrade, and scaling.
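
The control flow described above starts from a user-supplied `TidbCluster` object. A minimal sketch of such an object is shown below; the version, image names, and sizes are illustrative assumptions rather than values from this commit:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  version: v4.0.0            # illustrative version
  pd:
    baseImage: pingcap/pd
    replicas: 3
    requests:
      storage: "10Gi"
  tikv:
    baseImage: pingcap/tikv
    replicas: 3
    requests:
      storage: "100Gi"
  tidb:
    baseImage: pingcap/tidb
    replicas: 2
    service:
      type: ClusterIP
```

Once this object is applied, `tidb-controller-manager` reconciles it into StatefulSets, Services, and ConfigMaps, and the native Kubernetes controllers then create the Pods.
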
4 changes: 2 additions & 2 deletions en/backup-to-aws-s3-using-br.md
@@ -37,7 +37,7 @@ In the AWS cloud environment, different types of Kubernetes clusters provide dif

> **Note:**
>
-> When you use this method, refer to [AWS Documentation](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) for instructions on how to create a EKS cluster, and then deploy TiDB Operator and the TiDB cluster.
+> When you use this method, refer to [AWS Documentation](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) for instructions on how to create an EKS cluster, and then deploy TiDB Operator and the TiDB cluster.
## Ad-hoc full backup

@@ -357,7 +357,7 @@ You can set a backup policy to perform scheduled backups of the TiDB cluster, an

### Prerequisites for scheduled full backup

-The prerequisites for the scheduled full backup is the same with the [prerequisites for ad-hoc full backup](#prerequisites-for-ad-hoc-full-backup).
+The prerequisites for the scheduled full backup is the same as the [prerequisites for ad-hoc full backup](#prerequisites-for-ad-hoc-full-backup).

### Process of scheduled full backup

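For orientation, an ad-hoc BR backup to Amazon S3 is driven by a `Backup` custom resource. The sketch below is an outline only; the layout of `spec.br` and `spec.s3` and the field names follow my recollection of the Backup CRD, so treat them as assumptions and rely on the full examples in the document:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: Backup
metadata:
  name: demo1-backup-s3       # illustrative names
  namespace: test1
spec:
  br:
    # The TidbCluster to back up and the namespace it runs in.
    cluster: demo1
    clusterNamespace: test1
  s3:
    provider: aws
    region: us-west-2
    bucket: my-bucket
    prefix: my-folder
    # Kubernetes Secret holding the AWS access key and secret key.
    secretName: s3-secret
```
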
4 changes: 2 additions & 2 deletions en/backup-to-gcs.md
@@ -165,7 +165,7 @@ You can set a backup policy to perform scheduled backups of the TiDB cluster, an

### Prerequisites for scheduled backup

-The prerequisites for the scheduled backup is the same with the [prerequisites for ad-hoc backup](#prerequisites-for-ad-hoc-backup).
+The prerequisites for the scheduled backup is the same as the [prerequisites for ad-hoc backup](#prerequisites-for-ad-hoc-backup).

### Scheduled backup process

@@ -246,4 +246,4 @@ From the above example, you can see that the `backupSchedule` configuration cons

> **Note:**
>
-> TiDB Operator creates a PVC. This PVC is used for both ad-hoc full backup and scheduled full backup. The backup data is stored in PV first, and then uploaded to remote storage. If you want to delete this PVC after the backup is completed, you can refer to [Delete Resource](cheat-sheet.md#delete-resources) to delete the backup Pod first, and then delete the PVC.
+> TiDB Operator creates a PVC. This PVC is used for both ad-hoc full backup and scheduled full backup. The backup data is stored in PV first and then uploaded to remote storage. If you want to delete this PVC after the backup is completed, you can refer to [Delete Resource](cheat-sheet.md#delete-resources) to delete the backup Pod first, and then delete the PVC.
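
As a rough illustration of the `backupSchedule` object referred to above: it pairs a cron-style schedule and retention settings with a `backupTemplate` that carries the same storage fields as an ad-hoc backup. The field names below are assumptions based on the BackupSchedule CRD, not values from this diff, and required items such as the source cluster connection and PVC sizing are omitted — see the full examples in the document:

```yaml
apiVersion: pingcap.com/v1alpha1
kind: BackupSchedule
metadata:
  name: demo1-backup-schedule-gcs   # illustrative names
  namespace: test1
spec:
  # Run a full backup every day at midnight (standard cron syntax).
  schedule: "0 0 * * *"
  # Keep backups for three days before they are garbage-collected.
  maxReservedTime: "3d"
  backupTemplate:
    gcs:
      projectId: my-project         # illustrative GCP project
      bucket: my-bucket             # illustrative bucket name
      # Kubernetes Secret holding the GCS service account credentials.
      secretName: gcs-secret
```
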
4 changes: 2 additions & 2 deletions en/backup-to-s3.md
@@ -224,7 +224,7 @@ Refer to [Ad-hoc full backup prerequisites](backup-to-aws-s3-using-br.md#prerequ
```yaml
# ...(diff collapsed)
storageSize: 10Gi
```
-In the examples above, all data of the TiDB cluster is exported and backed up to Amazon S3 and Ceph respectively. You can ignore the `acl`, `endpoint`, and `storageClass` configuration items in the Amazon S3 configuration. S3-compatible storage types other than Amazon S3 can also use configuration similar to that of Amazon S3. You can also leave the configuration item fields empty if you do not need to configure these items as shown in the above Ceph configuration.
+In the examples above, all data of the TiDB cluster is exported and backed up to Amazon S3 and Ceph respectively. You can ignore the `acl`, `endpoint`, and `storageClass` configuration items in the Amazon S3 configuration. S3-compatible storage types other than Amazon S3 can also use a configuration similar to that of Amazon S3. You can also leave the configuration item fields empty if you do not need to configure these items as shown in the above Ceph configuration.
Amazon S3 supports the following access-control list (ACL) polices:
@@ -543,4 +543,4 @@ From the examples above, you can see that the `backupSchedule` configuration con
> **Note:**
>
-> TiDB Operator creates a PVC. This PVC is used for both ad-hoc full backup and scheduled full backup. The backup data is stored in PV first, and then uploaded to remote storage. If you want to delete this PVC after the backup is completed, you can refer to [Delete Resource](cheat-sheet.md#delete-resources) to delete the backup Pod first, and then delete the PVC.
+> TiDB Operator creates a PVC. This PVC is used for both ad-hoc full backup and scheduled full backup. The backup data is stored in PV first and then uploaded to remote storage. If you want to delete this PVC after the backup is completed, you can refer to [Delete Resource](cheat-sheet.md#delete-resources) to delete the backup Pod first, and then delete the PVC.
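
The `acl`, `endpoint`, and `storageClass` items mentioned in the changed paragraph belong to the `s3` section of the `Backup` or `BackupSchedule` object. A hedged sketch of that section for an S3-compatible store such as Ceph follows; the field names and values are assumptions to be checked against the examples earlier in the file:

```yaml
# Storage section only (not a complete Backup object).
s3:
  provider: ceph
  # A non-AWS, S3-compatible service needs an explicit endpoint.
  endpoint: http://10.0.0.1:30074   # illustrative endpoint
  bucket: my-bucket                 # illustrative bucket name
  secretName: ceph-secret
  # Items that can be left empty for Ceph, as the paragraph notes:
  # acl: private
  # storageClass: STANDARD_IA
```
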
20 changes: 10 additions & 10 deletions en/benchmark-sysbench.md
@@ -6,7 +6,7 @@ aliases: ['/docs/tidb-in-kubernetes/dev/benchmark-sysbench/']

# TiDB in Kubernetes Sysbench Performance Test

-Since the release of [TiDB Operator GA](https://pingcap.com/blog/database-cluster-deployment-and-management-made-easy-with-kubernetes/), more users begin to deploy and manage the TiDB cluster in Kubernetes using TiDB Operator. In this report, an in-depth and comprehensive test of TiDB has been conducted on GKE, which offers insight into the influencing factors that affects the performance of TiDB in Kubernetes.
+Since the release of [TiDB Operator GA](https://pingcap.com/blog/database-cluster-deployment-and-management-made-easy-with-kubernetes/), more users begin to deploy and manage the TiDB cluster in Kubernetes using TiDB Operator. In this report, an in-depth and comprehensive test of TiDB has been conducted on GKE, which offers insight into the influencing factors that affect the performance of TiDB in Kubernetes.

## Test purpose

@@ -20,7 +20,7 @@ Since the release of [TiDB Operator GA](https://pingcap.com/blog/database-cluste
In this test:

- TiDB 3.0.1 and TiDB Operator 1.0.0 are used.
-- Three instances are deployed for PD, TiDB and TiKV respectively.
+- Three instances are deployed for PD, TiDB, and TiKV respectively.
- Each component is configured as below. Unconfigured components use the default values.

PD:
@@ -97,7 +97,7 @@ Sysbench, the pressure test platform, has a high demand on CPU in the high concu

> **Note:**
>
-> The usable machine types vary among GCP Regions. In the test, disk also performs differently. Therefore, only the machines in us-central1 are applied for test.
+> The usable machine types vary among GCP Regions. In the test, the disk also performs differently. Therefore, only the machines in us-central1 are applied for test.
#### Disk

@@ -113,8 +113,8 @@ GKE uses a more scalable and powerful [VPC-Native](https://cloud.google.com/kube

#### CPU

-- In the test on single AZ cluster, the c2-standard-16 machine mode is chosen for TiDB/TiKV.
-- In the comparison test on single AZ cluster and on multiple AZs cluster, the c2-standard-16 machine type cannot be simultaneously adopted in three AZs within the same GCP Region, so n1-standard-16 machine type is chosen.
+- In the test on a single AZ cluster, the c2-standard-16 machine mode is chosen for TiDB/TiKV.
+- In the comparison test on a single AZ cluster and on multiple AZs cluster, the c2-standard-16 machine type cannot be simultaneously adopted in three AZs within the same GCP Region, so n1-standard-16 machine type is chosen.

### Operation system and parameters

@@ -160,7 +160,7 @@ sysbench \
```shell
# ...(diff collapsed)
prepare
```

-`${tidb_host}` is the address of TiDB database, which is specified according to actual test needs. For example, Pod IP, Service domain name, Host IP, and Load Balancer IP (the same below).
+`${tidb_host}` is the address of the TiDB database, which is specified according to actual test needs. For example, Pod IP, Service domain name, Host IP, and Load Balancer IP (the same below).

#### Warming-up

@@ -290,7 +290,7 @@ From the images above, TiDB performs better on Ubuntu than on COS in the Point S

> **Note:**
>
-> - This test is conducted only for the single test case and indicates that the performance might be affected by different operating systems, different optimization and default settings. Therefore, PingCAP makes no recommendation for the operating system.
+> - This test is conducted only for the single test case and indicates that the performance might be affected by different operating systems, different optimization, and default settings. Therefore, PingCAP makes no recommendation for the operating system.
> - COS is officially recommended by GKE, because it is optimized for containers and improved substantially on security and disk performance.
#### Kubernetes Service vs GCP LoadBalancer
@@ -335,7 +335,7 @@ From the images above, TiDB performs better when accessed via Kubernetes Service

In the Point Select read test, TiDB's CPU usage exceeds 1400% (16 cores) while TiKV's CPU usage is about 1000% (16 cores).

-The test compares the TiDB performance on general machine types with that on machines which are optimized for computing. In this performance comparison, the frequency of n1-stadnard-16 is about 2.3G, and the frequency of c2-standard-16 is about 3.1G.
+The test compares the TiDB performance on general machine types with that on machines which are optimized for computing. In this performance comparison, the frequency of n1-standard-16 is about 2.3G, and the frequency of c2-standard-16 is about 3.1G.

In this test, the operating system is Ubuntu and the Pod network is Host. TiDB is accessed via Kubernetes Service.

@@ -463,6 +463,6 @@ This is a test of TiDB using sysbench running in Kubernetes deployed on a typica

> **Note:**
>
-> - The factors above might change over time. The TiDB performance might varies on different cloud platforms. In the future, more tests will be conducted on more dimensions.
+> - The factors above might change over time. The TiDB performance might vary on different cloud platforms. In the future, more tests will be conducted on more dimensions.
>
-> - The sysbench test case cannot fully represent the actual business scenarios. It is recommended that you simulate the actual business for test and make consideration based on all the costs behind (machines, difference between operating systems, the limit of Host network, and so on).
+> - The sysbench test case cannot fully represent the actual business scenarios. It is recommended that you simulate the actual business for test and make consideration based on all the costs behind (machines, the difference between operating systems, the limit of Host network, and so on).
10 changes: 5 additions & 5 deletions en/configure-a-tidb-cluster.md
@@ -16,7 +16,7 @@ This document introduces how to configure a TiDB cluster for production deployme

## Configure resources

-Before deploying a TiDB cluster, it is necessary to configure the resources for each component of the cluster depending on your needs. PD, TiKV and TiDB are the core service components of a TiDB cluster. In a production environment, you need to configure resources of these components according to their needs. For details, refer to [Hardware Recommendations](https://pingcap.com/docs/stable/hardware-and-software-requirements/).
+Before deploying a TiDB cluster, it is necessary to configure the resources for each component of the cluster depending on your needs. PD, TiKV, and TiDB are the core service components of a TiDB cluster. In a production environment, you need to configure resources of these components according to their needs. For details, refer to [Hardware Recommendations](https://pingcap.com/docs/stable/hardware-and-software-requirements/).

To ensure the proper scheduling and stable operation of the components of the TiDB cluster in Kubernetes, it is recommended to set Guaranteed-level quality of service (QoS) by making `limits` equal to `requests` when configuring resources. For details, refer to [Configure Quality of Service for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/).
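
As a concrete sketch of the Guaranteed-QoS recommendation above — `requests` equal to `limits` on a core component — with illustrative values and the usual TidbCluster component layout rather than anything taken from this diff:

```yaml
spec:
  tikv:
    # requests == limits gives the Pod Guaranteed QoS, so it is the last
    # to be evicted when the node comes under resource pressure.
    requests:
      cpu: "4"
      memory: "8Gi"
      storage: "100Gi"   # PVC size; illustrative value
    limits:
      cpu: "4"
      memory: "8Gi"
```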

@@ -56,7 +56,7 @@ Different components of a TiDB cluster have different disk requirements. Before

For the production environment, local storage is recommended for TiKV. The actual local storage in Kubernetes clusters might be sorted by disk types, such as `nvme-disks` and `sas-disks`.

-For demonstration environment or functional verification, you can use network storage, such as `ebs` and `nfs`.
+For the demonstration environment or functional verification, you can use network storage, such as `ebs` and `nfs`.

> **Note:**
>
@@ -269,13 +269,13 @@ For all configurable start parameters of TiCDC, see [TiCDC start parameters](htt

> **Note:**
>
-> TiDB Operator provides a custom scheduler that guarantees TiDB service can tolerate host level failures through the specified scheduling algorithm. Currently, the TiDB cluster uses this scheduler as the default scheduler, which is configured through the item `spec.schedulerName`. This section focuses on configuring a TiDB cluster to tolerate failures at other levels such as rack, zone or region. This section is optional.
+> TiDB Operator provides a custom scheduler that guarantees TiDB service can tolerate host-level failures through the specified scheduling algorithm. Currently, the TiDB cluster uses this scheduler as the default scheduler, which is configured through the item `spec.schedulerName`. This section focuses on configuring a TiDB cluster to tolerate failures at other levels such as rack, zone, or region. This section is optional.

TiDB is a distributed database and its high availability must ensure that when any physical topology node fails, not only the service is unaffected, but also the data is complete and available. The two configurations of high availability are described separately as follows.

-### High avalability of TiDB service
+### High availability of TiDB service

-High availability at other levels (such as rack, zone, region) are guaranteed by Affinity's `PodAntiAffinity`. `PodAntiAffinity` can avoid the situation where different instances of the same component are deployed on the same physical topology node. In this way, disaster recovery is achieved. Detailed user guide for Affinity: [Affinity & AntiAffinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).
+High availability at other levels (such as rack, zone, region) is guaranteed by Affinity's `PodAntiAffinity`. `PodAntiAffinity` can avoid the situation where different instances of the same component are deployed on the same physical topology node. In this way, disaster recovery is achieved. Detailed user guide for Affinity: [Affinity & AntiAffinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity).

The following is an example of a typical service high availability setup:

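The "typical service high availability setup" introduced by the last context line is an `affinity` block on the component spec. A hedged sketch of zone-level `podAntiAffinity` for the TiDB component follows; the label keys and topology key are assumptions based on the labels TiDB Operator normally applies, so check them against the full example in the document:

```yaml
spec:
  tidb:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          # Prefer spreading this cluster's TiDB Pods across zones.
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/instance: basic    # cluster name (assumed)
                  app.kubernetes.io/component: tidb
              topologyKey: topology.kubernetes.io/zone # or the legacy failure-domain key
```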