suggest using default-scheduler and disable auto failover (#1504)
DanielZhangQD authored Dec 22, 2021
1 parent 7c879a2 commit 726b56c
Showing 42 changed files with 300 additions and 489 deletions.
9 changes: 6 additions & 3 deletions en/access-dashboard.md
@@ -40,7 +40,7 @@ spec:
 >
 > This guide shows how to quickly access TiDB Dashboard. Do **NOT** use this method in the production environment. For production environments, refer to [Access TiDB Dashboard by Ingress](#access-tidb-dashboard-by-ingress).
-TiDB Dashboard is built in the PD component in TiDB 4.0 and later versions. You can refer to the following example to quickly deploy a v4.0.4 TiDB cluster in Kubernetes.
+TiDB Dashboard is built in the PD component in TiDB 4.0 and later versions. You can refer to the following example to quickly deploy a TiDB cluster in Kubernetes.
 1. Deploy the following `.yaml` file into the Kubernetes cluster by running the `kubectl apply -f` command:

@@ -56,18 +56,21 @@ TiDB Dashboard is built in the PD component in TiDB 4.0 and later versions. You
   pd:
     enableDashboardInternalProxy: true
     baseImage: pingcap/pd
+    maxFailoverCount: 0
     replicas: 1
     requests:
-      storage: "1Gi"
+      storage: "10Gi"
     config: {}
   tikv:
     baseImage: pingcap/tikv
+    maxFailoverCount: 0
     replicas: 1
     requests:
-      storage: "1Gi"
+      storage: "100Gi"
     config: {}
   tidb:
     baseImage: pingcap/tidb
+    maxFailoverCount: 0
     replicas: 1
     service:
       type: ClusterIP
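
For context, the changed example in en/access-dashboard.md reads roughly like the sketch below once assembled into a full manifest. Only the `maxFailoverCount: 0` fields and the larger storage requests come from this hunk; the `apiVersion`, `kind`, metadata name, and TiDB version are illustrative assumptions, and depending on your TiDB Operator version you may also need to set `spec.schedulerName` explicitly to use the default scheduler.

```yaml
# Hypothetical assembled manifest based on the hunk above; values outside the
# hunk (cluster name, version) are illustrative assumptions.
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic                  # assumed cluster name
spec:
  version: v5.3.0              # assumed TiDB version
  pvReclaimPolicy: Delete
  pd:
    enableDashboardInternalProxy: true
    baseImage: pingcap/pd
    maxFailoverCount: 0        # 0 disables automatic failover for PD
    replicas: 1
    requests:
      storage: "10Gi"
    config: {}
  tikv:
    baseImage: pingcap/tikv
    maxFailoverCount: 0        # 0 disables automatic failover for TiKV
    replicas: 1
    requests:
      storage: "100Gi"
    config: {}
  tidb:
    baseImage: pingcap/tidb
    maxFailoverCount: 0        # 0 disables automatic failover for TiDB
    replicas: 1
    service:
      type: ClusterIP
```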
9 changes: 9 additions & 0 deletions en/advanced-statefulset.md
@@ -84,18 +84,21 @@ spec:
   pvReclaimPolicy: Delete
   pd:
     baseImage: pingcap/pd
+    maxFailoverCount: 0
     replicas: 3
     requests:
       storage: "1Gi"
     config: {}
   tikv:
     baseImage: pingcap/tikv
+    maxFailoverCount: 0
     replicas: 4
     requests:
       storage: "1Gi"
     config: {}
   tidb:
     baseImage: pingcap/tidb
+    maxFailoverCount: 0
     replicas: 2
     service:
       type: ClusterIP

@@ -133,18 +136,21 @@ spec:
   pvReclaimPolicy: Delete
   pd:
     baseImage: pingcap/pd
+    maxFailoverCount: 0
     replicas: 3
     requests:
       storage: "1Gi"
     config: {}
   tikv:
     baseImage: pingcap/tikv
+    maxFailoverCount: 0
     replicas: 3
     requests:
       storage: "1Gi"
     config: {}
   tidb:
     baseImage: pingcap/tidb
+    maxFailoverCount: 0
     replicas: 2
     service:
       type: ClusterIP

@@ -184,18 +190,21 @@ spec:
   pvReclaimPolicy: Delete
   pd:
     baseImage: pingcap/pd
+    maxFailoverCount: 0
     replicas: 3
     requests:
       storage: "1Gi"
     config: {}
   tikv:
     baseImage: pingcap/tikv
+    maxFailoverCount: 0
     replicas: 4
     requests:
       storage: "1Gi"
     config: {}
   tidb:
     baseImage: pingcap/tidb
+    maxFailoverCount: 0
     replicas: 2
     service:
       type: ClusterIP
8 changes: 5 additions & 3 deletions en/architecture.md
@@ -30,11 +30,13 @@ The following components are responsible for the orchestration and scheduling lo
 * `tidb-scheduler` is a Kubernetes scheduler extension that injects the TiDB specific scheduling policies to the Kubernetes scheduler;
 * `tidb-admission-webhook` is a dynamic admission controller in Kubernetes, which completes the modification, verification, operation, and maintenance of Pod, StatefulSet, and other related resources.

-In addition, TiDB Operator provides `tkctl`, the command-line interface for TiDB clusters in Kubernetes. It is used for cluster operations and troubleshooting cluster issues.
+> **Note:**
+>
+> `tidb-scheduler` is not mandatory. Refer to [tidb-scheduler and default-scheduler](tidb-scheduler.md#tidb-scheduler-and-default-scheduler) for details.

 ## Control flow

-The following diagram is the analysis of the control flow of TiDB Operator. Starting from TiDB Operator v1.1, the TiDB cluster, monitoring, initialization, backup, and other components are deployed and managed using CR.
+The following diagram is the analysis of the control flow of TiDB Operator. Starting from TiDB Operator v1.1, the TiDB cluster, monitoring, initialization, backup, and other components are deployed and managed using CR.

 ![TiDB Operator Control Flow](/media/tidb-operator-control-flow-1.1.png)

@@ -43,6 +45,6 @@ The overall control flow is described as follows:
 1. The user creates a `TidbCluster` object and other CR objects through kubectl, such as `TidbMonitor`;
 2. TiDB Operator watches `TidbCluster` and other related objects, and constantly adjust the `StatefulSet`, `Deployment`, `Service`, and other objects of PD, TiKV, TiDB, Monitor or other components based on the actual state of the cluster;
 3. Kubernetes' native controllers create, update, or delete the corresponding `Pod` based on objects such as `StatefulSet`, `Deployment`, and `Job`;
-4. In the `Pod` declaration of PD, TiKV, and TiDB, the `tidb-scheduler` scheduler is specified. `tidb-scheduler` applies the specific scheduling logic of TiDB when scheduling the corresponding `Pod`.
+4. If you configure the components to use `tidb-scheduler` in the `TidbCluster` CR, the `Pod` declaration of PD, TiKV, and TiDB specifies `tidb-scheduler` as the scheduler. `tidb-scheduler` applies the specific scheduling logic of TiDB when scheduling the corresponding `Pod`.

 Based on the above declarative control flow, TiDB Operator automatically performs health check and fault recovery for the cluster nodes. You can easily modify the `TidbCluster` object declaration to perform operations such as deployment, upgrade, and scaling.
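
As a side note on the architecture change above: whether the Pods of PD, TiKV, and TiDB use `tidb-scheduler` is controlled from the `TidbCluster` CR. A minimal sketch is shown below; `spec.schedulerName` is the relevant field, and the exact default depends on the TiDB Operator version you run, so treat the value as an example rather than a required setting.

```yaml
# Sketch only: excerpt of a TidbCluster spec showing the scheduler choice.
# Field defaults vary by TiDB Operator version.
spec:
  # Use the Kubernetes built-in scheduler (as this commit suggests);
  # set this to "tidb-scheduler" only if you rely on its custom scheduling rules.
  schedulerName: default-scheduler
```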
3 changes: 3 additions & 0 deletions en/canary-upgrade-tidb-operator.md
@@ -7,6 +7,8 @@ summary: Learn how to perform a canary upgrade on TiDB Operator in Kubernetes.

 This document describes how to perform a canary upgrade on TiDB Operator. Using canary upgrades, you can prevent normal TiDB Operator upgrade from causing an unexpected impact on all the TiDB clusters in Kubernetes. After you confirm the impact of TiDB Operator upgrade or that the upgraded TiDB Operator works stably, you can normally upgrade TiDB Operator.

+When you use TiDB Operator, `tidb-scheduler` is not mandatory. Refer to [tidb-scheduler and default-scheduler](tidb-scheduler.md#tidb-scheduler-and-default-scheduler) to confirm whether you need to deploy `tidb-scheduler`.
+
 > **Note:**
 >
 > - You can perform a canary upgrade only on `tidb-controller-manager` and `tidb-scheduler`. AdvancedStatefulSet controller and `tidb-admission-webhook` do not support the canary upgrade.

@@ -40,6 +42,7 @@ To support canary upgrade, some parameters are added to the `values.yaml` file i
   - version=canary
 appendReleaseSuffix: true
 #scheduler:
+# If you do not need tidb-scheduler, set this value to false.
 # create: false
 advancedStatefulset:
   create: false
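
To make the snippet above concrete, a hedged sketch of the relevant `values.yaml` fragment for a canary release that skips `tidb-scheduler` might look like the following. Only the keys visible in the hunk are taken from the change; the parent keys for the `version=canary` selector are assumptions about the chart layout.

```yaml
# Hypothetical values.yaml fragment for a canary tidb-operator release.
# Keys not shown in the hunk above are assumptions about the chart layout.
controllerManager:
  selector:                   # assumed parent keys for the canary label selector
  - version=canary
appendReleaseSuffix: true     # give canary resources a distinct name suffix
scheduler:
  create: false               # skip tidb-scheduler if your clusters use the default scheduler
advancedStatefulset:
  create: false
```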