docs/dind: update documentation of DinD installation (#458)
* docs/dind: update installation instructions

 * specify full path of template file

 * update commands and outputs

* docs/dind: add instructions for installing on a remote machine

* docs/dind: adjust words

* docs/dind: add notes for system requirements & update details

* docs/dind: fix formatting

* docs/dind: minor updates on words

* docs/dind: fix words & typo

* docs/dind: update command outputs in examples to fit latest version

* docs/dind: fix misc syntax issues

* Update docs/local-dind-tutorial.md

Co-Authored-By: Lilian Lee <lilin@pingcap.com>
AstroProfundis and lilin90 authored May 9, 2019
1 parent 21cf119 commit 8e0cc92
Showing 1 changed file with 50 additions and 24 deletions: docs/local-dind-tutorial.md
Before deploying a TiDB cluster to Kubernetes, make sure the following requirements are satisfied:

> **Note:** [Legacy Docker Toolbox](https://docs.docker.com/toolbox/toolbox_install_mac/) users must migrate to [Docker for Mac](https://store.docker.com/editions/community/docker-ce-desktop-mac) by uninstalling Legacy Docker Toolbox and installing Docker for Mac, because DinD cannot run on Docker Toolbox or Docker Machine.

> **Note:** `kubeadm` validates the installed Docker version during installation. If you are using a Docker version later than 18.06, warning messages are printed. The cluster might still work, but a Docker version between 17.03 and 18.06 is recommended for better compatibility.
- [Helm Client](https://github.com/helm/helm/blob/master/docs/install.md#installing-the-helm-client): 2.9.0 or later
- [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl): 1.10 or later (1.13 or later recommended)

> **Note:** The outputs of different versions of `kubectl` might be slightly different.
- For Linux users, `kubeadm` might produce warning messages during installation if you are using kernel 5.x or a later version. The cluster might still work, but kernel 3.10+ or 4.x is recommended for better compatibility.

- `root` access, or permission to operate the Docker daemon.
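To sanity-check these prerequisites, a quick shell loop can confirm the tools are on your `PATH` (a sketch: it only checks presence, not the version minimums listed above):

```sh
# Check that the required tools are installed. This only verifies presence;
# compare versions against the requirements above with e.g. `kubectl version`.
for tool in docker kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```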

## Step 1: Deploy a Kubernetes cluster using DinD

There is a script in our repository that can help you install and set up a Kubernetes cluster (version 1.12) using DinD for TiDB Operator.

```sh
$ git clone --depth=1 https://github.com/pingcap/tidb-operator
$ cd tidb-operator
$ manifests/local-dind/dind-cluster-v1.12.sh up
```
If you cannot pull the default images, you can use a mirror image repository instead:

```sh
$ KUBE_REPO_PREFIX=uhub.ucloud.cn/pingcap manifests/local-dind/dind-cluster-v1.12.sh up
```

## Step 2: Install TiDB Operator in the DinD Kubernetes cluster

Uncomment `scheduler.kubeSchedulerImage` in `values.yaml` and set it to the same version as your Kubernetes cluster.

```sh
$ kubectl apply -f manifests/crd.yaml

$ # Verify that the custom resource definition (CRD) used by the operator is created
$ kubectl get customresourcedefinitions
NAME                       AGE
tidbclusters.pingcap.com   1m

$ # Install TiDB Operator into Kubernetes
$ helm install charts/tidb-operator --name=tidb-operator --namespace=tidb-admin --set scheduler.kubeSchedulerImageName=mirantis/hypokube --set scheduler.kubeSchedulerImageTag=final
$ # Wait for the operator pods to be running
tidb-scheduler-56757c896c-clzdg 2/2 Running 0 1m
```
```sh
$ helm install charts/tidb-cluster --name=demo --namespace=tidb
$ watch kubectl get pods --namespace tidb -l app.kubernetes.io/instance=demo -o wide
$ # Wait a few minutes until all TiDB components are created and ready

$ kubectl get tidbcluster -n tidb
NAME   PD                  STORAGE   READY   DESIRE   TIKV                  STORAGE   READY   DESIRE   TIDB                  READY   DESIRE
demo   pingcap/pd:v2.1.8   1Gi       3       3        pingcap/tikv:v2.1.8   10Gi      3       3        pingcap/tidb:v2.1.8   2       2

$ kubectl get statefulset -n tidb
NAME DESIRED CURRENT AGE
demo-tidb-peer ClusterIP None <none> 10080/TCP 1m
demo-tikv-peer ClusterIP None <none> 20160/TCP 1m

$ kubectl get configmap -n tidb
NAME                     DATA   AGE
demo-monitor             5      1m
demo-monitor-dashboard   0      1m
demo-pd                  2      1m
demo-tidb                2      1m
demo-tikv                2      1m

$ kubectl get pod -n tidb
NAME READY STATUS RESTARTS AGE
demo-discovery-649c7bcbdc-t5r2k 1/1 Running 0 1m
demo-monitor-58745cf54f-gb8kd 2/2 Running 0 1m
demo-monitor-configurator-stvw6 0/1 Completed 0 1m
demo-pd-0 1/1 Running 0 1m
demo-pd-1 1/1 Running 0 1m
demo-pd-2 1/1 Running 0 1m
demo-tikv-1 1/1 Running 0 1m
demo-tikv-2 1/1 Running 0 1m
```

To access the TiDB cluster, use `kubectl port-forward` to expose services to the host. The port numbers in the commands are in `<host machine port>:<k8s service port>` format.

> **Note:** If you deploy DinD on a remote machine rather than your local PC, you might not be able to reach `localhost` of that remote system. With `kubectl` 1.13 or later, you can expose the port on `0.0.0.0` instead of the default `127.0.0.1` by adding `--address 0.0.0.0` to the `kubectl port-forward` command.
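Concretely, on a remote DinD host that note translates into a command like the following (a sketch, shown as a fragment: it assumes `kubectl` 1.13+ and the `demo` cluster in the `tidb` namespace, and requires the live cluster):

```sh
$ # Listen on all interfaces of the DinD host instead of 127.0.0.1,
$ # so the forwarded port is reachable from your local PC.
$ kubectl port-forward -n tidb svc/demo-tidb 4000:4000 --address 0.0.0.0
```

You can then connect from your PC with `mysql -h <remote host IP> -P 4000 -u root`.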
- Access TiDB using the MySQL client

2. To connect to TiDB using the MySQL client, open a new terminal tab or window and run the following command:

```sh
$ mysql -h 127.0.0.1 -P 4000 -u root
```

- View the monitor dashboard
* Default username: admin
* Default password: admin

- Permanent remote access

Although this is a simple demo cluster not suitable for any serious use, it is convenient to be able to access it remotely without `kubectl port-forward`, which requires a terminal session to stay open.

TiDB, Prometheus, and Grafana are exposed as `NodePort` Services by default, so it is possible to set up a reverse proxy for them.

1. Find their listening port numbers using the following command:

```sh
$ kubectl get service -n tidb | grep NodePort
demo-grafana NodePort 10.111.80.73 <none> 3000:32503/TCP 1m
demo-prometheus NodePort 10.104.97.84 <none> 9090:32448/TCP 1m
demo-tidb NodePort 10.102.165.13 <none> 4000:32714/TCP,10080:32680/TCP 1m
```

In this sample output, the ports are: 32503 for Grafana, 32448 for Prometheus, and 32714 for TiDB.
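If you want just the NodePort numbers, a small filter over that output works. The following sketch runs against a captured copy of the sample above; in a live cluster you would pipe `kubectl get service -n tidb` instead of the heredoc, and the names and ports would be your own:

```sh
# Sample output captured from `kubectl get service -n tidb | grep NodePort`;
# replace this function with the real command in a live cluster.
kubectl_output() {
cat <<'EOF'
demo-grafana      NodePort   10.111.80.73    <none>   3000:32503/TCP                   1m
demo-prometheus   NodePort   10.104.97.84    <none>   9090:32448/TCP                   1m
demo-tidb         NodePort   10.102.165.13   <none>   4000:32714/TCP,10080:32680/TCP   1m
EOF
}

# Column 5 is "<service port>:<node port>/<proto>"; cut out the node port
# of the first port mapping for each service.
kubectl_output | awk '{print $1, $5}' | while read -r name ports; do
  node_port=$(echo "$ports" | cut -d, -f1 | cut -d: -f2 | cut -d/ -f1)
  echo "$name -> $node_port"
done
```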

2. Find the host IP addresses of the cluster.

DinD is a K8s cluster running inside Docker containers, so Services expose ports on the containers' addresses instead of the real host machine's. You can find the IP addresses of the Docker containers with `kubectl get nodes -o yaml | grep address`.
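The same idea can be applied to the node listing. This sketch filters a hypothetical sample of the `addresses` section of `kubectl get nodes -o yaml`; the IPs here are made up, and yours will differ:

```sh
# Sample of the "addresses" section from `kubectl get nodes -o yaml`;
# in a live cluster, pipe the real command output instead.
node_yaml() {
cat <<'EOF'
    addresses:
    - address: 10.192.0.3
      type: InternalIP
    - address: kube-node-1
      type: Hostname
    addresses:
    - address: 10.192.0.4
      type: InternalIP
    - address: kube-node-2
      type: Hostname
EOF
}

# Keep only the entries that look like IP addresses (the container IPs).
node_yaml | grep 'address:' | awk '{print $3}' | grep -E '^[0-9.]+$'
```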
3. Set up a reverse proxy.

   Any one (or all) of the container IPs can be used as the upstream for a reverse proxy. You can use any reverse proxy server that supports TCP (for TiDB) or HTTP (for Grafana and Prometheus) to provide remote access. HAProxy and NGINX are two common choices.
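As an illustration, a minimal HAProxy configuration along these lines could expose the services (a sketch only: the container IPs `10.192.0.3`/`10.192.0.4` and the NodePorts are hypothetical sample values — substitute the ones you found above):

```
# /etc/haproxy/haproxy.cfg (fragment) -- sketch; addresses and ports are examples
defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

# TiDB (MySQL protocol) on host port 4000
listen tidb
    bind *:4000
    server node1 10.192.0.3:32714
    server node2 10.192.0.4:32714

# Grafana on host port 3000
listen grafana
    bind *:3000
    server node1 10.192.0.3:32503
```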
## Scale the TiDB cluster

You can scale out or scale in the TiDB cluster simply by modifying the number of `replicas`.
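For example, scaling TiKV out could look like the following sketch. The values keys are assumptions based on the chart layout — check `charts/tidb-cluster/values.yaml` in your checkout — and the final `helm upgrade` needs the running `demo` cluster from the steps above:

```sh
# Write a small values override with the current replica counts
# (keys assumed from the tidb-cluster chart; verify them in values.yaml).
cat > /tmp/scale-values.yaml <<'EOF'
tikv:
  replicas: 3
tidb:
  replicas: 2
EOF

# Scale TiKV out from 3 to 5 replicas.
sed -i 's/replicas: 3/replicas: 5/' /tmp/scale-values.yaml

# Apply the change (requires the running demo cluster):
# helm upgrade demo charts/tidb-cluster -f /tmp/scale-values.yaml
```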
$ sudo rm -rf data/kube-node-*
$ manifests/local-dind/dind-cluster-v1.12.sh up
```
> **Warning:** You must clean the data after you destroy the DinD Kubernetes cluster; otherwise, the TiDB cluster will fail to start when you try to bring it up again.
