docs/dind: update contents to fix issues found in the test
AstroProfundis committed May 23, 2019
1 parent 32fa5e9 commit 7b46590
Showing 1 changed file with 97 additions and 20 deletions.
docs/local-dind-tutorial.md
Before deploying a TiDB cluster to Kubernetes, make sure the following requirements are met:

> **Note:** [Legacy Docker Toolbox](https://docs.docker.com/toolbox/toolbox_install_mac/) users must migrate to [Docker for Mac](https://store.docker.com/editions/community/docker-ce-desktop-mac) by uninstalling Legacy Docker Toolbox and installing Docker for Mac, because DinD cannot run on Docker Toolbox and Docker Machine.
> **Note:** `kubeadm` validates the installed Docker version during the installation process. If you are using a Docker version later than 18.06, there will be warning messages. The cluster might still work, but it is recommended to use a Docker version between 17.03 and 18.06 for better compatibility. You can find older versions of Docker [here](https://download.docker.com/).
- [Helm Client](https://github.com/helm/helm/blob/master/docs/install.md#installing-the-helm-client): 2.9.0 or later
- [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl): at least 1.10; 1.13 or later is recommended. You can check the client versions as shown below.
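To confirm that the client versions meet these requirements, you can check them from the command line (a quick sanity check; both tools support a `--short` flag):

```sh
# Print the Helm client version
$ helm version --client --short
# Print the kubectl client version
$ kubectl version --client --short
```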
## Step 1: Deploy a Kubernetes cluster using DinD
Install and set up a Kubernetes cluster (version 1.12) using DinD for TiDB Operator with the script in our repository:
```sh
# Get the code
$ git clone --depth=1 https://github.com/pingcap/tidb-operator
# Set up the cluster
$ cd tidb-operator
$ manifests/local-dind/dind-cluster-v1.12.sh up
```
If the cluster fails to pull Docker images during the startup due to the firewall, you can set the environment variable `KUBE_REPO_PREFIX` to `uhub.ucloud.cn/pingcap` before running the script `dind-cluster-v1.12.sh` as follows (the Docker images used are pulled from [UCloud Docker Registry](https://docs.ucloud.cn/compute/uhub/index)):
```sh
$ KUBE_REPO_PREFIX=uhub.ucloud.cn/pingcap manifests/local-dind/dind-cluster-v1.12.sh up
```
An alternative solution is to configure HTTP proxies in DinD:
```sh
$ export DIND_HTTP_PROXY=http://<ip>:<port>
$ export DIND_HTTPS_PROXY=http://<ip>:<port>
$ export DIND_NO_PROXY=.svc,.local,127.0.0.1,0,1,2,3,4,5,6,7,8,9 # whitelist internal domains and IP addresses
$ manifests/local-dind/dind-cluster-v1.12.sh up
```
There might be some warnings during the process, depending on the settings and environment of your system, but the command should exit without any error. You can verify that the Kubernetes cluster is up and running:
```sh
# Get the cluster information
$ kubectl cluster-info
Kubernetes master is running at http://127.0.0.1:8080
KubeDNS is running at http://127.0.0.1:8080/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at http://127.0.0.1:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
# List host nodes (in the DinD installation, they are docker containers) in the cluster
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kube-master Ready master 11m v1.12.5 10.192.0.2 <none> Debian GNU/Linux 9 (stretch) 3.10.0-957.12.1.el7.x86_64 docker://18.9.0
kube-node-1 Ready <none> 9m32s v1.12.5 10.192.0.3 <none> Debian GNU/Linux 9 (stretch) 3.10.0-957.12.1.el7.x86_64 docker://18.9.0
kube-node-2 Ready <none> 9m32s v1.12.5 10.192.0.4 <none> Debian GNU/Linux 9 (stretch) 3.10.0-957.12.1.el7.x86_64 docker://18.9.0
kube-node-3 Ready <none> 9m32s v1.12.5 10.192.0.5 <none> Debian GNU/Linux 9 (stretch) 3.10.0-957.12.1.el7.x86_64 docker://18.9.0
```
## Step 2: Install TiDB Operator in the DinD Kubernetes cluster
Once the k8s cluster is up and running, we can install TiDB Operator into it using `helm`:
```sh
# Install TiDB Operator into Kubernetes
$ helm install charts/tidb-operator --name=tidb-operator --namespace=tidb-admin --set scheduler.kubeSchedulerImageName=mirantis/hypokube --set scheduler.kubeSchedulerImageTag=final
```
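You can also confirm that the release was created (an optional check; `helm ls` lists installed releases and their status):

```sh
# The tidb-operator release should show a DEPLOYED status
$ helm ls --namespace tidb-admin
```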
Then wait a few minutes until the operator is running:
```sh
$ kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator
NAME READY STATUS RESTARTS AGE
tidb-controller-manager-5cd94748c7-jlvfs 1/1 Running 0 1m
tidb-scheduler-56757c896c-clzdg            2/2     Running   0          1m
```
## Step 3: Deploy a TiDB cluster in the DinD Kubernetes cluster
By using `helm` along with TiDB Operator, we can easily set up a TiDB cluster:
```sh
$ helm install charts/tidb-cluster --name=demo --namespace=tidb
```
Then wait a few minutes for all TiDB components to be created and become ready:
```sh
# Use Ctrl + C to exit watch mode
$ kubectl get pods --namespace tidb -l app.kubernetes.io/instance=demo -o wide --watch

# Get basic information of the TiDB cluster
$ kubectl get tidbcluster -n tidb
NAME PD STORAGE READY DESIRE TIKV STORAGE READY DESIRE TIDB READY DESIRE
demo pingcap/pd:v2.1.8 1Gi 3 3 pingcap/tikv:v2.1.8 10Gi 3 3 pingcap/tidb:v2.1.8 2 2

$ kubectl get svc -n tidb
...
demo-tidb-peer   ClusterIP   None     <none>    10080/TCP   1m
demo-tikv-peer ClusterIP None <none> 20160/TCP 1m
$ kubectl get configmap -n tidb
NAME DATA AGE
demo-monitor 5 1m
demo-monitor-dashboard-extra-v3 2 1m
demo-monitor-dashboard-v2 5 1m
demo-monitor-dashboard-v3 5 1m
demo-pd 2 1m
demo-tidb 2 1m
demo-tikv 2 1m
$ kubectl get pod -n tidb
NAME READY STATUS RESTARTS AGE
...
demo-tikv-1   1/1   Running   0   1m
demo-tikv-2 1/1 Running 0 1m
```
## Access the database and monitor dashboards
To access the TiDB cluster, use `kubectl port-forward` to expose services to the host. The port numbers in the command are in the `<host machine port>:<k8s service port>` format.
> **Note:** If you are deploying DinD on a remote machine rather than a local PC, there might be problems accessing "localhost" of that remote system. When you use `kubectl` 1.13 or later, it is possible to expose the port on `0.0.0.0` instead of the default `127.0.0.1` by adding `--address 0.0.0.0` to the `kubectl port-forward` command.
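For example, to expose the TiDB service on all interfaces of the remote machine instead of only on localhost (a sketch assuming `kubectl` 1.13 or later; `demo-tidb` is the TiDB service created by the `demo` release):

```sh
# Forward port 4000 on all host interfaces to port 4000 of the TiDB service
$ kubectl port-forward svc/demo-tidb 4000:4000 --address 0.0.0.0 --namespace=tidb
```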
- Access the TiDB cluster using the MySQL client:

```sh
$ mysql -h 127.0.0.1 -P 4000 -u root
```
- View the monitor dashboards
1. Use `kubectl` to forward the host machine port to the Grafana service port:
```sh
$ kubectl port-forward svc/demo-grafana 3000:3000 --namespace=tidb
```
If the proxy is set up successfully, it will print something like `Forwarding from 0.0.0.0:3000 -> 3000`. Press `Ctrl + C` to stop the proxy and exit.
2. Open your web browser at http://localhost:3000 to access the Grafana monitoring interface.
* Default username: admin
2. Find the host IP addresses of the cluster.
DinD is a K8s cluster running inside Docker containers, so Services expose ports to the containers' addresses instead of the real host machine. We can find the IP addresses of the Docker containers with the following command:
```sh
$ kubectl get nodes -o yaml | grep address
addresses:
- address: 10.192.0.2
- address: kube-master
addresses:
- address: 10.192.0.3
- address: kube-node-1
addresses:
- address: 10.192.0.4
- address: kube-node-2
addresses:
- address: 10.192.0.5
- address: kube-node-3
```
Use these IP addresses for the reverse proxy.
3. Set up a reverse proxy.
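The exact setup depends on your environment. As a minimal sketch, assuming the `demo-tidb` service is of type `NodePort` and that `<nodeport>` and `user@remote-host` are placeholders for your own values, an SSH tunnel from your local PC can act as a simple reverse proxy:

```sh
# On the remote machine: look up the NodePort exposed by the demo-tidb service
$ kubectl get svc demo-tidb -n tidb -o jsonpath='{.spec.ports[0].nodePort}'

# On your local PC: forward local port 4000 to the TiDB service through
# the kube-node-1 container IP (10.192.0.3) found in the previous step
$ ssh -N -L 4000:10.192.0.3:<nodeport> user@remote-host

# Then connect through the tunnel
$ mysql -h 127.0.0.1 -P 4000 -u root
```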
## Scale the TiDB cluster

You can scale out or scale in the TiDB cluster simply by modifying the number of `replicas`.
1. Edit the `charts/tidb-cluster/values.yaml` file with your preferred text editor.
For example, to scale out the cluster, you can modify the number of TiKV `replicas` from 3 to 5, or the number of TiDB `replicas` from 2 to 3.
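Then apply the change with the same `helm upgrade` command used in the upgrade step below (the release name `demo` and chart path are unchanged):

```sh
# Apply the modified values.yaml to the running cluster
$ helm upgrade demo charts/tidb-cluster --namespace=tidb
```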
> **Note:** If you need to scale in TiKV, the time it takes depends on the volume of your existing data, because the data needs to be migrated safely.
Use `kubectl get pod -n tidb` to verify that the number of each component equals the `replicas` values in the `charts/tidb-cluster/values.yaml` file, and that all pods are in the `Running` state.
## Upgrade the TiDB cluster
1. Edit the `charts/tidb-cluster/values.yaml` file with your preferred text editor.
For example, change the version of PD/TiKV/TiDB `image` to `v2.1.10`.
2. Run the following command to apply the changes:
```sh
$ helm upgrade demo charts/tidb-cluster --namespace=tidb
```
Use `kubectl get pod -n tidb` to verify that all pods are in the `Running` state. Then you can connect to the database and use the `tidb_version()` function to verify the version:
```sh
MySQL [(none)]> select tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: 2.1.10
Git Commit Hash: v2.1.10
Git Branch: master
UTC Build Time: 2019-05-22 11:12:14
GoVersion: go version go1.12.4 linux/amd64
Race Enabled: false
TiKV Min Version: 2.1.0-alpha.1-ff3dd160846b7d1aed9079c389fc188f7f5ea13e
Check Table Before Drop: false
1 row in set (0.001 sec)
```
## Destroy the TiDB cluster
When you are done with your test, use the following command to destroy the TiDB cluster:
```sh
# Delete the demo release and remove the data volumes
$ helm delete demo --purge
$ kubectl delete pvc --namespace tidb --all
```

* To stop the DinD Kubernetes cluster, run the following command:
```sh
$ manifests/local-dind/dind-cluster-v1.12.sh stop
```
You can use `docker ps` to verify that there are no Docker containers running.
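For example (a sketch assuming the DinD node containers are named after the Kubernetes nodes shown earlier, such as `kube-master` and `kube-node-1`):

```sh
# List running containers whose names contain "kube"; the output should be empty
$ docker ps --filter name=kube
```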
* If you want to restart the DinD Kubernetes after you stop it, run the following command:
