docs/dind: update documentation of DinD installation #458

Merged
25 commits merged, May 9, 2019

Commits
75b887d
docs/dind: update installation instructions
AstroProfundis May 5, 2019
0c1a1a4
docs/dind: add instructions for installing on a remote machine
AstroProfundis May 6, 2019
ffd4e23
docs/dind: adjust worlds
AstroProfundis May 6, 2019
2b8470b
docs/dind: add notes for system requirements & update details
AstroProfundis May 6, 2019
e8d745d
docs/dind: fix formatting
AstroProfundis May 7, 2019
7f8b8b8
docs/dind: minor updates on worlds
AstroProfundis May 7, 2019
38dd2ee
Merge branch 'master' into validate-docs
AstroProfundis May 7, 2019
6eeb167
docs/dind: fix worlds & typo
AstroProfundis May 7, 2019
b979994
docs/dind: update command outputs in examples to fit latest version
AstroProfundis May 7, 2019
c6a8332
docs/dind: update command outputs in examples to fit latest version
AstroProfundis May 8, 2019
d4b3da1
Update docs/local-dind-tutorial.md
lilin90 May 8, 2019
aef7caa
Update docs/local-dind-tutorial.md
lilin90 May 8, 2019
3cb82ab
Update docs/local-dind-tutorial.md
lilin90 May 8, 2019
9c3c422
Update docs/local-dind-tutorial.md
lilin90 May 8, 2019
0d6d8c4
Update docs/local-dind-tutorial.md
lilin90 May 8, 2019
e5e9d4a
Update docs/local-dind-tutorial.md
lilin90 May 8, 2019
e4a487f
Update docs/local-dind-tutorial.md
lilin90 May 8, 2019
072de4e
Update docs/local-dind-tutorial.md
lilin90 May 8, 2019
2056ee9
Update docs/local-dind-tutorial.md
lilin90 May 8, 2019
a27efbb
Update docs/local-dind-tutorial.md
lilin90 May 8, 2019
b002c49
docs/dind: fix misc syntax issues
AstroProfundis May 8, 2019
66e4117
Merge branch 'master' into validate-docs
tennix May 8, 2019
46260ba
Merge branch 'master' into validate-docs
AstroProfundis May 9, 2019
71bbf47
Merge branch 'master' into validate-docs
tennix May 9, 2019
712a103
Update docs/local-dind-tutorial.md
AstroProfundis May 9, 2019
74 changes: 50 additions & 24 deletions docs/local-dind-tutorial.md
@@ -16,15 +16,23 @@ Before deploying a TiDB cluster to Kubernetes, make sure the following requireme

> **Note:** [Legacy Docker Toolbox](https://docs.docker.com/toolbox/toolbox_install_mac/) users must migrate to [Docker for Mac](https://store.docker.com/editions/community/docker-ce-desktop-mac) by uninstalling Legacy Docker Toolbox and installing Docker for Mac, because DinD cannot run on Docker Toolbox and Docker Machine.

> **Note:** `kubeadm` validates the installed Docker version during the installation process. If you are using a Docker version later than 18.06, warning messages are printed. The cluster might still work, but it is recommended to use a Docker version between 17.03 and 18.06 for better compatibility.

- [Helm Client](https://github.com/helm/helm/blob/master/docs/install.md#installing-the-helm-client): 2.9.0 or later
- [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl): 1.10 or later
- [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl): at least 1.10; 1.13 or later is recommended

> **Note:** The outputs of different versions of `kubectl` might be slightly different.

- For Linux users, `kubeadm` might produce warning messages during the installation process if you are using kernel 5.x or a later version. The cluster might still work, but it is recommended to use kernel 3.10+ or 4.x for better compatibility.

- `root` access or permissions to operate with the Docker daemon.

## Step 1: Deploy a Kubernetes cluster using DinD

There is a script in our repository that can help you install and set up a Kubernetes cluster (version 1.12) using DinD for TiDB Operator.

```sh
$ git clone https://github.com/pingcap/tidb-operator
$ git clone --depth=1 https://github.com/pingcap/tidb-operator
$ cd tidb-operator
$ manifests/local-dind/dind-cluster-v1.12.sh up
```
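Once the script finishes, a quick sanity check (a sketch, assuming `kubectl` is already configured to talk to the new DinD cluster; node names and count depend on the DinD configuration):

```sh
$ kubectl get nodes
$ # All nodes should report STATUS "Ready". With the default DinD setup this is
$ # typically one master and a few kube-node-* workers running as Docker containers.
```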
@@ -37,16 +45,7 @@ $ KUBE_REPO_PREFIX=uhub.ucloud.cn/pingcap manifests/local-dind/dind-cluster-v1.1

## Step 2: Install TiDB Operator in the DinD Kubernetes cluster

Uncomment the `scheduler.kubeSchedulerImage` in `values.yaml`, set it to the same as your kubernetes cluster version.

```sh
$ kubectl apply -f manifests/crd.yaml

$ # This creates the custom resource for the cluster that the operator uses.
$ kubectl get customresourcedefinitions
NAME AGE
tidbclusters.pingcap.com 1m

$ # Install TiDB Operator into Kubernetes
$ helm install charts/tidb-operator --name=tidb-operator --namespace=tidb-admin --set scheduler.kubeSchedulerImageName=mirantis/hypokube --set scheduler.kubeSchedulerImageTag=final
$ # wait operator running
@@ -61,11 +60,11 @@ tidb-scheduler-56757c896c-clzdg 2/2 Running 0 1m
```sh
$ helm install charts/tidb-cluster --name=demo --namespace=tidb
$ watch kubectl get pods --namespace tidb -l app.kubernetes.io/instance=demo -o wide
$ # wait a few minutes to get all TiDB components created and ready
$ # wait a few minutes to get all TiDB components get created and ready

$ kubectl get tidbcluster -n tidb
NAME AGE
demo 3m
NAME PD STORAGE READY DESIRE TIKV STORAGE READY DESIRE TIDB READY DESIRE
demo pingcap/pd:v2.1.8 1Gi 3 3 pingcap/tikv:v2.1.8 10Gi 3 3 pingcap/tidb:v2.1.8 2 2

$ kubectl get statefulset -n tidb
NAME DESIRED CURRENT AGE
@@ -85,17 +84,17 @@ demo-tidb-peer ClusterIP None <none> 10080/TCP
demo-tikv-peer ClusterIP None <none> 20160/TCP 1m

$ kubectl get configmap -n tidb
Member: The output needs to be adjusted because of #435.

Contributor Author: The output of the configmap has no difference from the one currently in the doc.
NAME DATA AGE
demo-monitor 3 1m
demo-pd 2 1m
demo-tidb 2 1m
demo-tikv 2 1m
NAME DATA AGE
demo-monitor 5 1m
demo-monitor-dashboard 0 1m
demo-pd 2 1m
demo-tidb 2 1m
demo-tikv 2 1m

$ kubectl get pod -n tidb
NAME READY STATUS RESTARTS AGE
demo-discovery-649c7bcbdc-t5r2k 2/2 Running 0 1m
demo-discovery-649c7bcbdc-t5r2k 1/1 Running 0 1m
demo-monitor-58745cf54f-gb8kd 2/2 Running 0 1m
demo-monitor-configurator-stvw6 0/1 Completed 0 1m
demo-pd-0 1/1 Running 0 1m
demo-pd-1 1/1 Running 0 1m
demo-pd-2 1/1 Running 0 1m
@@ -106,7 +105,9 @@ demo-tikv-1 1/1 Running 0 1m
demo-tikv-2 1/1 Running 0 1m
```

To access the TiDB cluster, use `kubectl port-forward` to expose the services to host.
To access the TiDB cluster, use `kubectl port-forward` to expose services to the host. The port numbers in the commands are in the `<host machine port>:<k8s service port>` format.

> **Note:** If you are deploying DinD on a remote machine rather than a local PC, there might be problems accessing "localhost" of that remote system. When you use `kubectl` 1.13 or later, it is possible to expose the port on `0.0.0.0` instead of the default `127.0.0.1` by adding `--address 0.0.0.0` to the `kubectl port-forward` command.
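For example, a minimal sketch (assuming the TiDB service is named `demo-tidb` in the `tidb` namespace, as in the examples below):

```sh
$ # Forward port 4000 of the demo-tidb service and listen on all host interfaces
$ # (the --address flag requires kubectl 1.13 or later)
$ kubectl port-forward -n tidb svc/demo-tidb 4000:4000 --address 0.0.0.0
```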

- Access TiDB using the MySQL client

@@ -119,7 +120,7 @@ To access the TiDB cluster, use `kubectl port-forward` to expose the services to
2. To connect to TiDB using the MySQL client, open a new terminal tab or window and run the following command:

```sh
$ mysql -h 127.0.0.1 -P 4000 -u root -p
$ mysql -h 127.0.0.1 -P 4000 -u root
```
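Once connected, a quick smoke test is to query the server version (a sketch, assuming the default empty `root` password):

```sh
$ mysql -h 127.0.0.1 -P 4000 -u root -e 'SELECT tidb_version()\G'
```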

- View the monitor dashboard
@@ -135,6 +136,31 @@ To access the TiDB cluster, use `kubectl port-forward` to expose the services to
* Default username: admin
* Default password: admin

- Permanent remote access

Although this is a simple demo cluster and is not intended for any serious usage, it is useful to be able to access it remotely without `kubectl port-forward`, which requires keeping a terminal session open.

TiDB, Prometheus, and Grafana are exposed as `NodePort` Services by default, so it is possible to set up a reverse proxy for them.

1. Find their listening port numbers using the following command:

```sh
$ kubectl get service -n tidb | grep NodePort
demo-grafana NodePort 10.111.80.73 <none> 3000:32503/TCP 1m
demo-prometheus NodePort 10.104.97.84 <none> 9090:32448/TCP 1m
demo-tidb NodePort 10.102.165.13 <none> 4000:32714/TCP,10080:32680/TCP 1m
```

In this sample output, the ports are: 32503 for Grafana, 32448 for Prometheus, and 32714 for TiDB.

2. Find the host IP addresses of the cluster.

DinD is a Kubernetes cluster running inside Docker containers, so Services expose ports on the containers' addresses instead of on the real host machine. You can find the IP addresses of the Docker containers with `kubectl get nodes -o yaml | grep address`.

3. Set up a reverse proxy.

Any (or all) of the container IPs can be used as upstreams for a reverse proxy. You can use any reverse proxy server that supports TCP (for TiDB) or HTTP (for Grafana and Prometheus) to provide remote access; HAProxy and NGINX are two common choices (see the sketch below).
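As an illustration only, a minimal HAProxy TCP configuration for the TiDB port might look like the following. The container IPs `10.192.0.2`/`10.192.0.3` are hypothetical placeholders for the addresses found in step 2, and `32714` is the TiDB NodePort from the sample output in step 1:

```
# /etc/haproxy/haproxy.cfg (sketch)
defaults
    mode tcp
    timeout connect 5s
    timeout client  30m
    timeout server  30m

frontend tidb-in
    # Clients connect to port 4000 on the proxy host
    bind *:4000
    default_backend tidb-nodes

backend tidb-nodes
    # Replace with the DinD container IPs and the TiDB NodePort found above
    server node1 10.192.0.2:32714 check
    server node2 10.192.0.3:32714 check
```

Grafana and Prometheus can be proxied the same way, either as additional TCP backends or behind an HTTP reverse proxy such as NGINX.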

## Scale the TiDB cluster

You can scale out or scale in the TiDB cluster simply by modifying the number of `replicas`.
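For example, a minimal sketch (assuming the chart exposes `pd.replicas`, `tikv.replicas`, and `tidb.replicas` in `charts/tidb-cluster/values.yaml`, and that the release is named `demo` as above):

```sh
$ # Edit charts/tidb-cluster/values.yaml and change the desired replica counts,
$ # e.g. increase tikv.replicas from 3 to 5, then apply the change to the release:
$ helm upgrade demo charts/tidb-cluster
$ watch kubectl get pods --namespace tidb -l app.kubernetes.io/instance=demo -o wide
```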
@@ -203,4 +229,4 @@ $ sudo rm -rf data/kube-node-*
$ manifests/local-dind/dind-cluster-v1.12.sh up
```

> **Warning:** You must clean the data after you destroy the DinD Kubernetes cluster, otherwise the TiDB cluster would fail to start when you bring it up again.
> **Warning:** You must clean the data after you destroy the DinD Kubernetes cluster, otherwise the TiDB cluster would fail to start when you try to bring it up again.