From ebaa8ecd8ad8acc2ce52d7d84f0a9d13f844d92c Mon Sep 17 00:00:00 2001
From: Allen Zhong
Date: Wed, 15 May 2019 15:00:45 +0800
Subject: [PATCH 01/22] docs/dind: add instructions for XFS filesystems

---
 docs/local-dind-tutorial.md | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/docs/local-dind-tutorial.md b/docs/local-dind-tutorial.md
index 61ee1227a7..37d62bfc0d 100644
--- a/docs/local-dind-tutorial.md
+++ b/docs/local-dind-tutorial.md
@@ -27,6 +27,38 @@ Before deploying a TiDB cluster to Kubernetes, make sure the following requireme

- `root` access or permissions to operate with the Docker daemon.

- Supported filesystem

  If the host machine uses an XFS filesystem (default in CentOS 7), it must be formatted with `ftype=1` to enable `d_type` support, see [Docker's documentation](https://docs.docker.com/storage/storagedriver/overlayfs-driver/) for details.

  You can check if your filesystem supports `d_type` using commaind `xfs_info / | grep ftype`, where `/` is the data directory of your installed Docker daemon.

  If your root directory `/` uses XFS without `d_type` support, but there is another partition that does, or is using another filesystem, it is also possible to change the data directory of Docker to use that partition.

  Assuming a supported filesystem is mounted at path `/data`, use the following instructions to let Docker use it:

  ```sh
  # Create a new directory for docker data storage
  mkdir -p /data/docker

  # Stop docker daemon
  systemctl stop docker.service

  # Make sure the systemd directory exists
  mkdir -p /etc/systemd/system/docker.service.d/

  # Overwrite config
  cat << EOF > /etc/systemd/system/docker.service.d/docker-storage.conf
  [Service]
  ExecStart=
  ExecStart=/usr/bin/dockerd -g /data/docker -H fd:// --containerd=/run/containerd/containerd.sock
  EOF

  # Restart docker daemon
  systemctl daemon-reload
  systemctl start docker.service
  ```

## Step 1: Deploy a Kubernetes cluster using DinD

There is a script in our repository that can help you install and set up a Kubernetes cluster (version 1.12) using DinD for TiDB Operator.

From 8d07ed25a70f10736423d3e2afb57a9efedee2c6 Mon Sep 17 00:00:00 2001
From: Allen Zhong
Date: Fri, 17 May 2019 11:30:25 +0800
Subject: [PATCH 02/22] docs: use `kubectl --watch` instead of `watch kubectl` in tutorials
---
 deploy/aliyun/README.md | 2 +-
 deploy/aws/README.md | 4 ++--
 docs/aws-eks-tutorial.md | 2 +-
 docs/local-dind-tutorial.md | 2 +-
 docs/minikube-tutorial.md | 6 +++---
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/deploy/aliyun/README.md b/deploy/aliyun/README.md
index 172befa88a..ff7d38b579 100644
--- a/deploy/aliyun/README.md
+++ b/deploy/aliyun/README.md
@@ -106,7 +106,7 @@ To upgrade TiDB cluster, modify `tidb_version` variable to a higher version in `

This may take a while to complete, watch the process using command:

```
-watch kubectl get pods --namespace tidb -o wide
+kubectl get pods --namespace tidb -o wide --watch
```

## Scale TiDB cluster

diff --git a/deploy/aws/README.md b/deploy/aws/README.md
index 1d447bcffe..2d20b107f5 100644
--- a/deploy/aws/README.md
+++ b/deploy/aws/README.md
@@ -69,13 +69,13 @@ $ terraform destroy

To upgrade TiDB cluster, modify `tidb_version` variable to a higher version in variables.tf and run `terraform apply`.

-> *Note*: The upgrading doesn't finish immediately. 
You can watch the upgrading process by `watch kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb` +> *Note*: The upgrading doesn't finish immediately. You can watch the upgrading process by `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch`. ## Scale TiDB cluster To scale TiDB cluster, modify `tikv_count` or `tidb_count` to your desired count, and then run `terraform apply`. -> *Note*: Currently, scaling in is not supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete, you can watch the scaling out by `watch kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb` +> *Note*: Currently, scaling in is not supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete, you can watch the scaling out by `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch`. ## Customize diff --git a/docs/aws-eks-tutorial.md b/docs/aws-eks-tutorial.md index 88dae6f4d9..a6090069d4 100644 --- a/docs/aws-eks-tutorial.md +++ b/docs/aws-eks-tutorial.md @@ -180,7 +180,7 @@ We can now point a browser to `localhost:3000` and view the dashboards. To scale out TiDB cluster, modify `tikv_count` or `tidb_count` in `aws-tutorial.tfvars` to your desired count, and then run `terraform apply -var-file=aws-tutorial.tfvars`. -> *Note*: Currently, scaling in is not supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete, you can watch the scaling out by `watch kubectl --kubeconfig credentials/kubeconfig_aws_tutorial get po -n tidb` +> *Note*: Currently, scaling in is not supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete, you can watch the scaling out by `kubectl --kubeconfig credentials/kubeconfig_aws_tutorial get po -n tidb --watch`. > *Note*: There are taints and tolerations in place such that only a single pod will be scheduled per node. The count is also passed onto helm via terraform. For this reason attempting to scale out pods via helm or `kubectl scale` will not work as expected. 
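To make that edit concrete, here is a sketch of what the two count variables could look like in `aws-tutorial.tfvars` — the values are illustrative, so pick the counts you actually need:

```
# aws-tutorial.tfvars uses plain `key = value` assignments
tikv_count = 5
tidb_count = 3
```
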
--- diff --git a/docs/local-dind-tutorial.md b/docs/local-dind-tutorial.md index 37d62bfc0d..8028878e35 100644 --- a/docs/local-dind-tutorial.md +++ b/docs/local-dind-tutorial.md @@ -100,7 +100,7 @@ tidb-scheduler-56757c896c-clzdg 2/2 Running 0 1m ```sh $ helm install charts/tidb-cluster --name=demo --namespace=tidb -$ watch kubectl get pods --namespace tidb -l app.kubernetes.io/instance=demo -o wide +$ kubectl get pods --namespace tidb -l app.kubernetes.io/instance=demo -o wide --watch $ # wait a few minutes to get all TiDB components get created and ready $ kubectl get tidbcluster -n tidb diff --git a/docs/minikube-tutorial.md b/docs/minikube-tutorial.md index 73fb2b12e4..2f824b1e7b 100644 --- a/docs/minikube-tutorial.md +++ b/docs/minikube-tutorial.md @@ -127,7 +127,7 @@ helm install charts/tidb-operator --name tidb-operator --namespace tidb-admin Now, we can watch the operator come up with: ``` -watch kubectl get pods --namespace tidb-admin -o wide +kubectl get pods --namespace tidb-admin -o wide --watch ``` If you have limited access to gcr.io (pods failed with ErrImagePull), you can @@ -151,7 +151,7 @@ helm install charts/tidb-cluster --name demo --set \ Watch the cluster up and running: ``` -watch kubectl get pods --namespace default -l app.kubernetes.io/instance=demo -o wide +kubectl get pods --namespace default -l app.kubernetes.io/instance=demo -o wide --watch ``` ### Test TiDB cluster @@ -160,7 +160,7 @@ There can be a small delay between the pod is up and running, and the service is available. You can watch list services available with: ``` -watch kubectl get svc +kubectl get svc --watch ``` When you see `demo-tidb` appear, it's ready to connect to TiDB server. From 211012e8ef14002ed5ebe4d4984127f5c8cd75f7 Mon Sep 17 00:00:00 2001 From: Allen Zhong Date: Mon, 20 May 2019 17:50:57 +0800 Subject: [PATCH 03/22] docs/aws: add detail instructions for requirements --- deploy/aws/README.md | 39 ++++++++++++++++++++++++++++----------- 1 file changed, 28 insertions(+), 11 deletions(-) diff --git a/deploy/aws/README.md b/deploy/aws/README.md index 2d20b107f5..9694b082e1 100644 --- a/deploy/aws/README.md +++ b/deploy/aws/README.md @@ -1,15 +1,29 @@ # Deploy TiDB Operator and TiDB cluster on AWS EKS ## Requirements: -* [awscli](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) >= 1.16.73 +* [awscli](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) >= 1.16.73, to control AWS resources + + The `awscli` must be [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) before it can interact with AWS. 
The fastest way to set up is using the `aws configure` command: + ``` + # Replace AWS Access Key ID and AWS Secret Access Key to your own keys + $ aws configure + AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE + AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY + Default region name [None]: us-west-2 + Default output format [None]: json + ``` + > **Note:** The access key must have at least permissions to: create VPC, create EBS, create EC2 and create role * [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) >= 1.11 * [helm](https://github.com/helm/helm/blob/master/docs/install.md#installing-the-helm-client) >= 2.9.0 * [jq](https://stedolan.github.io/jq/download/) -* [aws-iam-authenticator](https://github.com/kubernetes-sigs/aws-iam-authenticator) installed in `PATH` +* [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html) installed in `PATH`, to authenticate with AWS -## Configure awscli - -https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html + The easist way to install `aws-iam-authenticator` is to download the prebuilt binary: + ``` + curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator + chmod +x ./aws-iam-authenticator + sudo mv ./aws-iam-authenticator /usr/local/bin/aws-iam-authenticator + ``` ## Setup @@ -20,18 +34,19 @@ The default setup will create a new VPC and a t2.micro instance as bastion machi * 2 c4.4xlarge instances for TiDB * 1 c5.xlarge instance for monitor -You can change default values in `variables.tf` (like the cluster name and versions) as needed. The default value of `cluster_name` is `my-cluster`. - ``` shell +# Get the code $ git clone --depth=1 https://github.com/pingcap/tidb-operator $ cd tidb-operator/deploy/aws + +# Apply the configs, note that you must answer "yes" to `terraform apply` to continue $ terraform init $ terraform apply ``` -It might take 10 minutes or more for the process to finish. After `terraform apply` is executed successfully, some basic information is printed to the console. You can access the `monitor_endpoint` using your web browser. +It might take 10 minutes or more for the process to finish. After `terraform apply` is executed successfully, some useful information is printed to the console. You can access the `monitor_endpoint` address (printed in output) using your web browser to view monitoring metrics. -> **Note:** You can use the `terraform output` command to get that information again. +> **Note:** You can use the `terraform output` command to get the output information again. To access TiDB cluster, use the following command to first ssh into the bastion machine, and then connect it via MySQL client: @@ -40,7 +55,7 @@ ssh -i credentials/k8s-prod-.pem ec2-user@ mysql -h -P -u root ``` -If the DNS name is not resolvable, be patient and wait a few minutes. +The default value of `cluster_name` is `my-cluster`. If the DNS name is not resolvable, be patient and wait a few minutes. You can interact with the EKS cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_`. @@ -55,7 +70,7 @@ kubectl get po -n tidb helm ls ``` -# Destory +# Destroy It may take some while to finish destroying the cluster. 
@@ -79,6 +94,8 @@ To scale TiDB cluster, modify `tikv_count` or `tidb_count` to your desired count

 ## Customize

+You can change default values in `variables.tf` (like the cluster name and versions) as needed.
+
 ### Customize AWS related resources

 By default, the terraform script will create a new VPC. You can use an existing VPC by setting `create_vpc` to `false` and specify your existing VPC id and subnet ids to `vpc_id` and `subnets` variables.

From b4d4ffd15be73a270bb7035722b5bf27e9b81b68 Mon Sep 17 00:00:00 2001
From: Allen Zhong
Date: Mon, 20 May 2019 19:44:12 +0800
Subject: [PATCH 04/22] docs/dind: add notes for accessing TiDB cluster with MySQL client
---
 docs/local-dind-tutorial.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/local-dind-tutorial.md b/docs/local-dind-tutorial.md
index 8028878e35..7c892583ab 100644
--- a/docs/local-dind-tutorial.md
+++ b/docs/local-dind-tutorial.md
@@ -152,13 +152,15 @@ To access the TiDB cluster, use `kubectl port-forward` to expose services to the

 - Access TiDB using the MySQL client

+  Before you start testing your TiDB cluster, make sure you have installed a MySQL client.
+
   1. Use `kubectl` to forward the host machine port to the TiDB service port:

      ```sh
      $ kubectl port-forward svc/demo-tidb 4000:4000 --namespace=tidb
      ```

-  2. To connect to TiDB using the MySQL client, open a new terminal tab or window and run the following command:
+  2. Then, to connect to TiDB using the MySQL client, open a new terminal tab or window and run the following command:

      ```sh
      $ mysql -h 127.0.0.1 -P 4000 -u root

From fe67dcd68fc86afbdeafa56bb72e79ac029aa564 Mon Sep 17 00:00:00 2001
From: Allen Zhong
Date: Tue, 21 May 2019 11:06:32 +0800
Subject: [PATCH 05/22] docs: add sample output of terraform
---
 deploy/aliyun/README.md | 4 ++--
 deploy/aws/README.md | 31 ++++++++++++++++++++++++++-----
 2 files changed, 28 insertions(+), 7 deletions(-)

diff --git a/deploy/aliyun/README.md b/deploy/aliyun/README.md
index ff7d38b579..477a05bc9e 100644
--- a/deploy/aliyun/README.md
+++ b/deploy/aliyun/README.md
@@ -52,7 +52,7 @@ $ terraform apply

 `terraform apply` will take 5 to 10 minutes to create the whole stack, once complete, basic cluster information will be printed:

-> **Note:** You can use the `terraform output` command to get this information again.
+> **Note:** You can use the `terraform output` command to get the output again.

 ```
 Apply complete! Resources: 3 added, 0 changed, 1 destroyed.
@@ -82,7 +82,7 @@ $ helm ls

 ## Access the DB

-You can connect the TiDB cluster via the bastion instance, all necessary information are in the output printed after installation is finished:
+You can connect to the TiDB cluster via the bastion instance, all necessary information is in the output printed after installation is finished (replace the `<>` parts with values from the output):

 ```shell
 $ ssh -i credentials/-bastion-key.pem root@
 mysql -h -P -u root
 ```

diff --git a/deploy/aws/README.md b/deploy/aws/README.md
index 9694b082e1..90abf2b774 100644
--- a/deploy/aws/README.md
+++ b/deploy/aws/README.md
@@ -4,7 +4,8 @@
 * [awscli](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) >= 1.16.73, to control AWS resources

   The `awscli` must be [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) before it can interact with AWS. 
The fastest way to set up is using the `aws configure` command: - ``` + + ``` shell # Replace AWS Access Key ID and AWS Secret Access Key to your own keys $ aws configure AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE @@ -19,7 +20,8 @@ * [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html) installed in `PATH`, to authenticate with AWS The easist way to install `aws-iam-authenticator` is to download the prebuilt binary: - ``` + + ``` shell curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator chmod +x ./aws-iam-authenticator sudo mv ./aws-iam-authenticator /usr/local/bin/aws-iam-authenticator @@ -46,9 +48,28 @@ $ terraform apply It might take 10 minutes or more for the process to finish. After `terraform apply` is executed successfully, some useful information is printed to the console. You can access the `monitor_endpoint` address (printed in output) using your web browser to view monitoring metrics. -> **Note:** You can use the `terraform output` command to get the output information again. +A successful deploy will print output like: -To access TiDB cluster, use the following command to first ssh into the bastion machine, and then connect it via MySQL client: +``` +Apply complete! Resources: 67 added, 0 changed, 0 destroyed. + +Outputs: + +bastion_ip = [ + 52.14.50.145 +] +eks_endpoint = https://E10A1D0368FFD6E1E32E11573E5CE619.sk1.us-east-2.eks.amazonaws.com +eks_version = 1.12 +monitor_endpoint = http://abd299cc47af411e98aae02938da0762-1989524000.us-east-2.elb.amazonaws.com:3000 +region = us-east-2 +tidb_dns = abd2e3f7c7af411e98aae02938da0762-17499b76b312be02.elb.us-east-2.amazonaws.com +tidb_port = 4000 +tidb_version = v3.0.0-rc.1 +``` + +> **Note:** You can use the `terraform output` command to get the output again. + +To access TiDB cluster, use the following command to first ssh into the bastion machine, and then connect it via MySQL client (replace the `<>` parts with values from the output): ``` shell ssh -i credentials/k8s-prod-.pem ec2-user@ @@ -74,7 +95,7 @@ helm ls It may take some while to finish destroying the cluster. -```shell +``` shell $ terraform destroy ``` From 5d2c19d1eeaf32a01458a4038c9cf115447343e2 Mon Sep 17 00:00:00 2001 From: Allen Zhong Date: Tue, 21 May 2019 11:12:30 +0800 Subject: [PATCH 06/22] docs/aws: minor updates of words --- deploy/aws/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/deploy/aws/README.md b/deploy/aws/README.md index 90abf2b774..11a88c6455 100644 --- a/deploy/aws/README.md +++ b/deploy/aws/README.md @@ -6,7 +6,7 @@ The `awscli` must be [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) before it can interact with AWS. The fastest way to set up is using the `aws configure` command: ``` shell - # Replace AWS Access Key ID and AWS Secret Access Key to your own keys + # Replace AWS Access Key ID and AWS Secret Access Key with your own keys $ aws configure AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY @@ -115,7 +115,7 @@ To scale TiDB cluster, modify `tikv_count` or `tidb_count` to your desired count ## Customize -You can change default values in `variables.tf` (like the cluster name and versions) as needed. +You can change default values in `variables.tf` (like the cluster name and image versions) as needed. 
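For instance, a variable block in `variables.tf` looks like the sketch below — the `cluster_name` default is the one mentioned earlier in this README, while the description text is illustrative:

```
variable "cluster_name" {
  description = "eks cluster name"
  # Change the default before `terraform apply` if you want a different name
  default     = "my-cluster"
}
```
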
### Customize AWS related resources

By default, the terraform script will create a new VPC. You can use an existing VPC by setting `create_vpc` to `false` and specify your existing VPC id and subnet ids to `vpc_id` and `subnets` variables.

From 5eb0dddc90be96c5899ed9b1476ff6a88d629801 Mon Sep 17 00:00:00 2001
From: Allen Zhong
Date: Wed, 22 May 2019 10:45:52 +0800
Subject: [PATCH 07/22] Update docs/local-dind-tutorial.md
Co-Authored-By: Keke Yi <40977455+yikeke@users.noreply.github.com>
---
 docs/local-dind-tutorial.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/local-dind-tutorial.md b/docs/local-dind-tutorial.md
index 7c892583ab..3c1f693850 100644
--- a/docs/local-dind-tutorial.md
+++ b/docs/local-dind-tutorial.md
@@ -31,7 +31,7 @@ Before deploying a TiDB cluster to Kubernetes, make sure the following requireme

   If the host machine uses an XFS filesystem (default in CentOS 7), it must be formatted with `ftype=1` to enable `d_type` support, see [Docker's documentation](https://docs.docker.com/storage/storagedriver/overlayfs-driver/) for details.

-  You can check if your filesystem supports `d_type` using commaind `xfs_info / | grep ftype`, where `/` is the data directory of your installed Docker daemon.
+  You can check if your filesystem supports `d_type` using command `xfs_info / | grep ftype`, where `/` is the data directory of your installed Docker daemon.

   If your root directory `/` uses XFS without `d_type` support, but there is another partition that does, or is using another filesystem, it is also possible to change the data directory of Docker to use that partition.

From 15d7ef2fff1111f6c10c078863ccd4da153ab263 Mon Sep 17 00:00:00 2001
From: Allen Zhong
Date: Wed, 22 May 2019 10:52:09 +0800
Subject: [PATCH 08/22] docs: update instructions for macOS & update argument in sample
---
 deploy/aws/README.md | 5 +++++
 docs/local-dind-tutorial.md | 2 +-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/deploy/aws/README.md b/deploy/aws/README.md
index 11a88c6455..a61bb730fd 100644
--- a/deploy/aws/README.md
+++ b/deploy/aws/README.md
@@ -22,7 +22,12 @@
   The easist way to install `aws-iam-authenticator` is to download the prebuilt binary:

   ``` shell
+  # Download binary for Linux
   curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator
+
+  # Or, download binary for macOS
+  curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/darwin/amd64/aws-iam-authenticator
+
   chmod +x ./aws-iam-authenticator
   sudo mv ./aws-iam-authenticator /usr/local/bin/aws-iam-authenticator
   ```

diff --git a/docs/local-dind-tutorial.md b/docs/local-dind-tutorial.md
index 3c1f693850..3b51ebcc67 100644
--- a/docs/local-dind-tutorial.md
+++ b/docs/local-dind-tutorial.md
@@ -51,7 +51,7 @@ Before deploying a TiDB cluster to Kubernetes, make sure the following requireme
      cat << EOF > /etc/systemd/system/docker.service.d/docker-storage.conf
      [Service]
      ExecStart=
-     ExecStart=/usr/bin/dockerd -g /data/docker -H fd:// --containerd=/run/containerd/containerd.sock
+     ExecStart=/usr/bin/dockerd --data-root /data/docker -H fd:// --containerd=/run/containerd/containerd.sock
      EOF

      # Restart docker daemon

From 471bab5344730ef86c3ddbe9f182a2d3937ab89e Mon Sep 17 00:00:00 2001
From: Allen Zhong
Date: Wed, 22 May 2019 16:45:36 +0800
Subject: [PATCH 09/22] docs: Apply suggestions from code review
Co-Authored-By: Keke Yi <40977455+yikeke@users.noreply.github.com>
---
 deploy/aws/README.md | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/deploy/aws/README.md b/deploy/aws/README.md
index a61bb730fd..14070974ee 100644
--- a/deploy/aws/README.md
+++ 
b/deploy/aws/README.md @@ -1,9 +1,9 @@ # Deploy TiDB Operator and TiDB cluster on AWS EKS -## Requirements: +## Requirements * [awscli](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) >= 1.16.73, to control AWS resources - The `awscli` must be [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) before it can interact with AWS. The fastest way to set up is using the `aws configure` command: + The `awscli` must be [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) before it can interact with AWS. The fastest way is using the `aws configure` command: ``` shell # Replace AWS Access Key ID and AWS Secret Access Key with your own keys @@ -19,7 +19,7 @@ * [jq](https://stedolan.github.io/jq/download/) * [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html) installed in `PATH`, to authenticate with AWS - The easist way to install `aws-iam-authenticator` is to download the prebuilt binary: + The easiest way to install `aws-iam-authenticator` is to download the prebuilt binary: ``` shell # Download binary for Linux @@ -32,7 +32,7 @@ sudo mv ./aws-iam-authenticator /usr/local/bin/aws-iam-authenticator ``` -## Setup +## Deploy The default setup will create a new VPC and a t2.micro instance as bastion machine. And EKS cluster with the following ec2 instance worker nodes: @@ -51,9 +51,9 @@ $ terraform init $ terraform apply ``` -It might take 10 minutes or more for the process to finish. After `terraform apply` is executed successfully, some useful information is printed to the console. You can access the `monitor_endpoint` address (printed in output) using your web browser to view monitoring metrics. +It might take 10 minutes or more to finish the process. After `terraform apply` is executed successfully, some useful information is printed to the console. You can access the `monitor_endpoint` address (printed in outputs) using your web browser to view monitoring metrics. -A successful deploy will print output like: +A successful deployment will give the output like: ``` Apply complete! Resources: 67 added, 0 changed, 0 destroyed. @@ -74,7 +74,7 @@ tidb_version = v3.0.0-rc.1 > **Note:** You can use the `terraform output` command to get the output again. -To access TiDB cluster, use the following command to first ssh into the bastion machine, and then connect it via MySQL client (replace the `<>` parts with values from the output): +To access the deployed TiDB cluster, use the following commands to first `ssh` into the bastion machine, and then connect it via MySQL client (replace the `<>` parts with values from the output): ``` shell ssh -i credentials/k8s-prod-.pem ec2-user@ @@ -83,7 +83,7 @@ mysql -h -P -u root The default value of `cluster_name` is `my-cluster`. If the DNS name is not resolvable, be patient and wait a few minutes. -You can interact with the EKS cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_`. +You can interact with the EKS cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_`: ``` shell # By specifying --kubeconfig argument @@ -108,19 +108,19 @@ $ terraform destroy ## Upgrade TiDB cluster -To upgrade TiDB cluster, modify `tidb_version` variable to a higher version in variables.tf and run `terraform apply`. +To upgrade the TiDB cluster, modify the `tidb_version` variable to a higher version in the `variables.tf` file, and then run `terraform apply`. 
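If you would rather not edit the file, the same effect can be sketched with Terraform's standard command-line override — this is generic `terraform` behavior, not specific to this repo, and `-var` values are not persisted, so you would have to repeat them on every later run:

``` shell
# One-off override of the version variable (illustrative value)
$ terraform apply -var 'tidb_version=v3.0.0-rc.1'
```
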
> *Note*: The upgrading doesn't finish immediately. You can watch the upgrading process by `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch`.

-## Scale TiDB cluster
+## Scale

-To scale TiDB cluster, modify `tikv_count` or `tidb_count` to your desired count, and then run `terraform apply`.
+To scale the TiDB cluster, modify the `tikv_count` or `tidb_count` variable to your desired count in the `variables.tf` file, and then run `terraform apply`.

 > *Note*: Currently, scaling in is not supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete, you can watch the scaling out by `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch`.

 ## Customize

-You can change default values in `variables.tf` (like the cluster name and image versions) as needed.
+You can change default values in `variables.tf` (such as the cluster name and image versions) as needed.

 ### Customize AWS related resources

From e510d0e151763fe5a3701407f8fac22a90af92db Mon Sep 17 00:00:00 2001
From: Allen Zhong
Date: Wed, 22 May 2019 16:49:02 +0800
Subject: [PATCH 10/22] docs/aws: fix syntax of introduction
---
 deploy/aws/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/deploy/aws/README.md b/deploy/aws/README.md
index 14070974ee..f97ea00e58 100644
--- a/deploy/aws/README.md
+++ b/deploy/aws/README.md
@@ -34,7 +34,7 @@

 ## Deploy

-The default setup will create a new VPC and a t2.micro instance as bastion machine. And EKS cluster with the following ec2 instance worker nodes:
+The default setup will create a new VPC and a t2.micro instance as bastion machine, and an EKS cluster with the following ec2 instances as worker nodes:

 * 3 m5d.xlarge instances for PD
 * 3 i3.2xlarge instances for TiKV

From 1f48c22287b44b0dc7921f3f099f7642b83f8a08 Mon Sep 17 00:00:00 2001
From: Allen Zhong
Date: Wed, 22 May 2019 16:51:59 +0800
Subject: [PATCH 11/22] docs/aliyun: update instructions when deploying with terraform
---
 deploy/aliyun/README.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/deploy/aliyun/README.md b/deploy/aliyun/README.md
index d6ca448031..e6543f74f3 100644
--- a/deploy/aliyun/README.md
+++ b/deploy/aliyun/README.md
@@ -44,8 +44,11 @@ The `variables.tf` file contains default settings of variables used for deployin

 Apply the stack:

 ```shell
+# Get the code
 $ git clone https://github.com/pingcap/tidb-operator
-$ cd tidb-operator/deploy/alicloud
+$ cd tidb-operator/deploy/aliyun
+
+# Apply the configs, note that you must answer "yes" to `terraform apply` to continue
 $ terraform init
 $ terraform apply
 ```

From d04d7088731a5f4d21f833c1bea8dcc159d64f88 Mon Sep 17 00:00:00 2001
From: Allen Zhong
Date: Wed, 22 May 2019 17:00:37 +0800
Subject: [PATCH 12/22] docs/aws: adjust instructions to access DB and grafana & cleanup
---
 deploy/aws/README.md | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/deploy/aws/README.md b/deploy/aws/README.md
index f97ea00e58..e16f5c9248 100644
--- a/deploy/aws/README.md
+++ b/deploy/aws/README.md
@@ -51,7 +51,7 @@ $ terraform init
 $ terraform apply
 ```

-It might take 10 minutes or more to finish the process. 
After `terraform apply` is executed successfully, some useful information is printed to the console. A successful deployment will give the output like: @@ -74,6 +74,8 @@ tidb_version = v3.0.0-rc.1 > **Note:** You can use the `terraform output` command to get the output again. +## Access the DB + To access the deployed TiDB cluster, use the following commands to first `ssh` into the bastion machine, and then connect it via MySQL client (replace the `<>` parts with values from the output): ``` shell @@ -96,6 +98,14 @@ kubectl get po -n tidb helm ls ``` +## Monitoring + +You can access the `monitor_endpoint` address (printed in outputs) using your web browser to view monitoring metrics. + +The initial Grafana login credentials are: + - User: admin + - Password: admin + # Destroy It may take some while to finish destroying the cluster. @@ -135,10 +145,3 @@ Currently, the instance type of TiDB cluster component is not configurable becau ### Customize TiDB parameters Currently, there are not much parameters exposed to be customizable. If you need to customize these, you should modify the `templates/tidb-cluster-values.yaml.tpl` files before deploying. Or if you modify it and run `terraform apply` again after the cluster is running, it will not take effect unless you manually delete the pod via `kubectl delete po -n tidb --all`. This will be resolved when issue [#255](https://github.com/pingcap/tidb-operator/issues/225) is fixed. - -## TODO - -- [ ] Use [cluster autoscaler](https://github.com/kubernetes/autoscaler) -- [ ] Allow create a minimal TiDB cluster for testing -- [ ] Make the resource creation synchronously to follow Terraform convention -- [ ] Make more parameters customizable From 32fa5e960118bafb67be5548b5fca02e930649f4 Mon Sep 17 00:00:00 2001 From: Allen Zhong Date: Thu, 23 May 2019 13:04:13 +0800 Subject: [PATCH 13/22] Apply suggestions from code review Co-Authored-By: Keke Yi <40977455+yikeke@users.noreply.github.com> --- deploy/aws/README.md | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-) diff --git a/deploy/aws/README.md b/deploy/aws/README.md index e16f5c9248..55cd22f2c9 100644 --- a/deploy/aws/README.md +++ b/deploy/aws/README.md @@ -1,6 +1,10 @@ # Deploy TiDB Operator and TiDB cluster on AWS EKS -## Requirements +This document describes how to deploy TiDB Operator and a TiDB cluster on AWS EKS with your laptop (Linux or macOS) for development or testing. + +## Prerequisites + +Before deploying a TiDB cluster on AWS EKS, make sure the following requirements are satisfied: * [awscli](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) >= 1.16.73, to control AWS resources The `awscli` must be [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) before it can interact with AWS. The fastest way is using the `aws configure` command: @@ -98,15 +102,16 @@ kubectl get po -n tidb helm ls ``` -## Monitoring +## Monitor You can access the `monitor_endpoint` address (printed in outputs) using your web browser to view monitoring metrics. The initial Grafana login credentials are: - - User: admin - - Password: admin -# Destroy +- User: admin +- Password: admin + +## Destroy It may take some while to finish destroying the cluster. @@ -116,13 +121,13 @@ $ terraform destroy > **Note:** You have to manually delete the EBS volumes in AWS console after running `terraform destroy` if you do not need the data on the volumes anymore. 
-## Upgrade TiDB cluster +## Upgrade To upgrade the TiDB cluster, modify the `tidb_version` variable to a higher version in the `variables.tf` file, and then run `terraform apply`. > *Note*: The upgrading doesn't finish immediately. You can watch the upgrading process by `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch`. -## Scale TiDB cluster +## Scale To scale the TiDB cluster, modify the `tikv_count` or `tidb_count` variable to your desired count in the `variables.tf` file, and then run `terraform apply`. From 7b4659001f89f6c875ed4c6d63b7442143663f61 Mon Sep 17 00:00:00 2001 From: Allen Zhong Date: Thu, 23 May 2019 14:08:56 +0800 Subject: [PATCH 14/22] docs/dind: update contents to fix issues found in the test --- docs/local-dind-tutorial.md | 117 ++++++++++++++++++++++++++++++------ 1 file changed, 97 insertions(+), 20 deletions(-) diff --git a/docs/local-dind-tutorial.md b/docs/local-dind-tutorial.md index 3b51ebcc67..d34468edc8 100644 --- a/docs/local-dind-tutorial.md +++ b/docs/local-dind-tutorial.md @@ -16,7 +16,7 @@ Before deploying a TiDB cluster to Kubernetes, make sure the following requireme > **Note:** [Legacy Docker Toolbox](https://docs.docker.com/toolbox/toolbox_install_mac/) users must migrate to [Docker for Mac](https://store.docker.com/editions/community/docker-ce-desktop-mac) by uninstalling Legacy Docker Toolbox and installing Docker for Mac, because DinD cannot run on Docker Toolbox and Docker Machine. - > **Note:** `kubeadm` validates installed Docker version during the installation process. If you are using Docker later than 18.06, there would be warning messages. The cluster might still be working, but it is recommended to use a Docker version between 17.03 and 18.06 for better compatibility. + > **Note:** `kubeadm` validates installed Docker version during the installation process. If you are using Docker later than 18.06, there would be warning messages. The cluster might still be working, but it is recommended to use a Docker version between 17.03 and 18.06 for better compatibility. You can find older versions of docker at [here](https://download.docker.com/). - [Helm Client](https://github.com/helm/helm/blob/master/docs/install.md#installing-the-helm-client): 2.9.0 or later - [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl): 1.10 at least, 1.13 or later recommended @@ -61,35 +61,61 @@ Before deploying a TiDB cluster to Kubernetes, make sure the following requireme ## Step 1: Deploy a Kubernetes cluster using DinD -There is a script in our repository that can help you install and set up a Kubernetes cluster (version 1.12) using DinD for TiDB Operator. +Install and set up a Kubernetes cluster (version 1.12) using DinD for TiDB Operator with the script in our repository. 
```sh +# Get the code $ git clone --depth=1 https://github.com/pingcap/tidb-operator + +# Set up the cluster $ cd tidb-operator $ manifests/local-dind/dind-cluster-v1.12.sh up ``` -> **Note:** If the cluster fails to pull Docker images during the startup due to the firewall, you can set the environment variable `KUBE_REPO_PREFIX` to `uhub.ucloud.cn/pingcap` before running the script `dind-cluster-v1.12.sh` as follows (the Docker images used are pulled from [UCloud Docker Registry](https://docs.ucloud.cn/compute/uhub/index)): +If the cluster fails to pull Docker images during the startup due to the firewall, you can set the environment variable `KUBE_REPO_PREFIX` to `uhub.ucloud.cn/pingcap` before running the script `dind-cluster-v1.12.sh` as follows (the Docker images used are pulled from [UCloud Docker Registry](https://docs.ucloud.cn/compute/uhub/index)): ``` $ KUBE_REPO_PREFIX=uhub.ucloud.cn/pingcap manifests/local-dind/dind-cluster-v1.12.sh up ``` -> **Note:** An alternative solution is to configure HTTP proxies in DinD. +An alternative solution is to configure HTTP proxies in DinD: -``` +```sh $ export DIND_HTTP_PROXY=http://: $ export DIND_HTTPS_PROXY=http://: $ export DIND_NO_PROXY=.svc,.local,127.0.0.1,0,1,2,3,4,5,6,7,8,9 # whitelist internal domains and IP addresses $ manifests/local-dind/dind-cluster-v1.12.sh up ``` +There might be some warnings during the process due to various settings and environment of your system, but the command should exit without any error. You can verify the k8s cluster is up and running by: + +```sh +# Get the cluster information +$ kubectl cluster-info +Kubernetes master is running at http://127.0.0.1:8080 +KubeDNS is running at http://127.0.0.1:8080/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy +kubernetes-dashboard is running at http://127.0.0.1:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy + +# List host nodes (in the DinD installation, they are docker containers) in the cluster +$ kubectl get nodes -o wide +NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME +kube-master Ready master 11m v1.12.5 10.192.0.2 Debian GNU/Linux 9 (stretch) 3.10.0-957.12.1.el7.x86_64 docker://18.9.0 +kube-node-1 Ready 9m32s v1.12.5 10.192.0.3 Debian GNU/Linux 9 (stretch) 3.10.0-957.12.1.el7.x86_64 docker://18.9.0 +kube-node-2 Ready 9m32s v1.12.5 10.192.0.4 Debian GNU/Linux 9 (stretch) 3.10.0-957.12.1.el7.x86_64 docker://18.9.0 +kube-node-3 Ready 9m32s v1.12.5 10.192.0.5 Debian GNU/Linux 9 (stretch) 3.10.0-957.12.1.el7.x86_64 docker://18.9.0 +``` + ## Step 2: Install TiDB Operator in the DinD Kubernetes cluster +Once the k8s cluster is up and running, we can install TiDB Operator into it using `helm`: + ```sh -$ # Install TiDB Operator into Kubernetes $ helm install charts/tidb-operator --name=tidb-operator --namespace=tidb-admin --set scheduler.kubeSchedulerImageName=mirantis/hypokube --set scheduler.kubeSchedulerImageTag=final -$ # wait operator running +``` + +Then wait few minutes until operator is running: + +```sh $ kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator NAME READY STATUS RESTARTS AGE tidb-controller-manager-5cd94748c7-jlvfs 1/1 Running 0 1m @@ -98,11 +124,19 @@ tidb-scheduler-56757c896c-clzdg 2/2 Running 0 1m ## Step 3: Deploy a TiDB cluster in the DinD Kubernetes cluster +By using `helm` along with TiDB Operator, we can easily set up a TiDB cluster: + ```sh $ helm install charts/tidb-cluster --name=demo --namespace=tidb +``` + +And wait a 
few minutes for all TiDB components get created and ready: + +```sh +# Use Ctrl + C to exit watch mode $ kubectl get pods --namespace tidb -l app.kubernetes.io/instance=demo -o wide --watch -$ # wait a few minutes to get all TiDB components get created and ready +# Get basic information of the TiDB cluster $ kubectl get tidbcluster -n tidb NAME PD STORAGE READY DESIRE TIKV STORAGE READY DESIRE TIDB READY DESIRE demo pingcap/pd:v2.1.8 1Gi 3 3 pingcap/tikv:v2.1.8 10Gi 3 3 pingcap/tidb:v2.1.8 2 2 @@ -125,12 +159,14 @@ demo-tidb-peer ClusterIP None 10080/TCP demo-tikv-peer ClusterIP None 20160/TCP 1m $ kubectl get configmap -n tidb -NAME DATA AGE -demo-monitor 5 1m -demo-monitor-dashboard 0 1m -demo-pd 2 1m -demo-tidb 2 1m -demo-tikv 2 1m +NAME DATA AGE +demo-monitor 5 1m +demo-monitor-dashboard-extra-v3 2 1m +demo-monitor-dashboard-v2 5 1m +demo-monitor-dashboard-v3 5 1m +demo-pd 2 1m +demo-tidb 2 1m +demo-tikv 2 1m $ kubectl get pod -n tidb NAME READY STATUS RESTARTS AGE @@ -146,6 +182,8 @@ demo-tikv-1 1/1 Running 0 1m demo-tikv-2 1/1 Running 0 1m ``` +## Access the database and monitor dashboards + To access the TiDB cluster, use `kubectl port-forward` to expose services to the host. The port numbers in command are in `:` format. > **Note:** If you are deploying DinD on a remote machine rather than a local PC, there might be problems accessing "localhost" of that remote system. When you use `kubectl` 1.13 or later, it is possible to expose the port on `0.0.0.0` instead of the default `127.0.0.1` by adding `--address 0.0.0.0` to the `kubectl port-forward` command. @@ -166,7 +204,7 @@ To access the TiDB cluster, use `kubectl port-forward` to expose services to the $ mysql -h 127.0.0.1 -P 4000 -u root ``` -- View the monitor dashboard +- View the monitor dashboards 1. Use `kubectl` to forward the host machine port to the Grafana service port: @@ -174,6 +212,8 @@ To access the TiDB cluster, use `kubectl port-forward` to expose services to the $ kubectl port-forward svc/demo-grafana 3000:3000 --namespace=tidb ``` + If the proxy is set up sucessfully, it will print something like `Forwarding from 0.0.0.0:3000 -> 3000`, press `Ctrl + C` to stop the proxy and exit. + 2. Open your web browser at http://localhost:3000 to access the Grafana monitoring interface. * Default username: admin @@ -198,7 +238,25 @@ To access the TiDB cluster, use `kubectl port-forward` to expose services to the 2. Find the host IP addresses of the cluster. - DinD is a K8s cluster running inside Docker containers, so Services expose ports to the containers' address, instead of the real host machine. We can find IP addresses of Docker containers by `kubectl get nodes -o yaml | grep address`. + DinD is a K8s cluster running inside Docker containers, so Services expose ports to the containers' address, instead of the real host machine. We can find IP addresses of Docker containers by the following command: + + ```sh + $ kubectl get nodes -o yaml | grep address + addresses: + - address: 10.192.0.2 + - address: kube-master + addresses: + - address: 10.192.0.3 + - address: kube-node-1 + addresses: + - address: 10.192.0.4 + - address: kube-node-2 + addresses: + - address: 10.192.0.5 + - address: kube-node-3 + ``` + + Use the IP addresses for reverse proxy. 3. Set up a reverse proxy. @@ -208,7 +266,7 @@ To access the TiDB cluster, use `kubectl port-forward` to expose services to the You can scale out or scale in the TiDB cluster simply by modifying the number of `replicas`. -1. Configure the `charts/tidb-cluster/values.yaml` file. 
+1. Edit the `charts/tidb-cluster/values.yaml` file with your preffered text editor.

   For example, to scale out the cluster, you can modify the number of TiKV `replicas` from 3 to 5, or the number of TiDB `replicas` from 2 to 3.

@@ -220,11 +278,13 @@ You can scale out or scale in the TiDB cluster simply by modifying the number of

 > **Note:** If you need to scale in TiKV, the consumed time depends on the volume of your existing data, because the data needs to be migrated safely.

+Use `kubectl get pod -n tidb` to verify that the number of each component equals the values in the `charts/tidb-cluster/values.yaml` file, and that all pods are in `Running` state.
+
 ## Upgrade the TiDB cluster

-1. Configure the `charts/tidb-cluster/values.yaml` file.
+1. Edit the `charts/tidb-cluster/values.yaml` file with your preffered text editor.

-   For example, change the version of PD/TiKV/TiDB `image` to `v2.1.9`.
+   For example, change the version of PD/TiKV/TiDB `image` to `v2.1.10`.

 2. Run the following command to apply the changes:

    ```
    helm upgrade demo charts/tidb-cluster --namespace=tidb
    ```

+Use `kubectl get pod -n tidb` to verify that all pods are in `Running` state. Then you can connect to the database and use the `tidb_version()` function to verify the version:

```sh
MySQL [(none)]> select tidb_version()\G
*************************** 1. row ***************************
tidb_version(): Release Version: 2.1.10
Git Commit Hash: v2.1.10
Git Branch: master
UTC Build Time: 2019-05-22 11:12:14
GoVersion: go version go1.12.4 linux/amd64
Race Enabled: false
TiKV Min Version: 2.1.0-alpha.1-ff3dd160846b7d1aed9079c389fc188f7f5ea13e
Check Table Before Drop: false
1 row in set (0.001 sec)
```

 ## Destroy the TiDB cluster

 When you are done with your test, use the following command to destroy the TiDB cluster:

@@ -253,9 +329,10 @@

    ```sh
    $ manifests/local-dind/dind-cluster-v1.12.sh stop
-   ```
+   ```

+   You can use `docker ps` to verify there are no docker container running. 
* If you want to restart the DinD Kubernetes after you stop it, run the following command:

   ```

From b7ff879bbbf3bee8fb002f36d8fa86faaebea445 Mon Sep 17 00:00:00 2001
From: Allen Zhong
Date: Thu, 23 May 2019 14:16:40 +0800
Subject: [PATCH 15/22] docs/aws: add an introduction sentence of terraform installation
---
 deploy/aws/README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/deploy/aws/README.md b/deploy/aws/README.md
index 55cd22f2c9..c19f369174 100644
--- a/deploy/aws/README.md
+++ b/deploy/aws/README.md
@@ -45,6 +45,8 @@ The default setup will create a new VPC and a t2.micro instance as bastion machi
 * 2 c4.4xlarge instances for TiDB
 * 1 c5.xlarge instance for monitor

+Use the following commands to set up the cluster:
+
 ``` shell
 # Get the code
 $ git clone --depth=1 https://github.com/pingcap/tidb-operator

From 6bc6e06ddc4310a8f3b89b953e1c1f19ae366123 Mon Sep 17 00:00:00 2001
From: Allen Zhong
Date: Thu, 23 May 2019 15:09:01 +0800
Subject: [PATCH 16/22] docs/dind: pointing out xfs issue only applies to Linux users
---
 docs/local-dind-tutorial.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/local-dind-tutorial.md b/docs/local-dind-tutorial.md
index d34468edc8..29112cfc78 100644
--- a/docs/local-dind-tutorial.md
+++ b/docs/local-dind-tutorial.md
@@ -29,7 +29,7 @@ Before deploying a TiDB cluster to Kubernetes, make sure the following requireme

 - Supported filesystem

-  If the host machine uses an XFS filesystem (default in CentOS 7), it must be formatted with `ftype=1` to enable `d_type` support, see [Docker's documentation](https://docs.docker.com/storage/storagedriver/overlayfs-driver/) for details.
+  For Linux users, if the host machine uses an XFS filesystem (default in CentOS 7), it must be formatted with `ftype=1` to enable `d_type` support, see [Docker's documentation](https://docs.docker.com/storage/storagedriver/overlayfs-driver/) for details.

   You can check if your filesystem supports `d_type` using command `xfs_info / | grep ftype`, where `/` is the data directory of your installed Docker daemon.

   If your root directory `/` uses XFS without `d_type` support, but there is another partition that does, or is using another filesystem, it is also possible to change the data directory of Docker to use that partition.

From 19a6b2805a530eb783673814e06dc726c29eb01a Mon Sep 17 00:00:00 2001
From: Allen Zhong
Date: Thu, 23 May 2019 16:31:53 +0800
Subject: [PATCH 17/22] docs/dind: make port-forward instruction more clear
---
 docs/local-dind-tutorial.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/local-dind-tutorial.md b/docs/local-dind-tutorial.md
index 29112cfc78..6d52655664 100644
--- a/docs/local-dind-tutorial.md
+++ b/docs/local-dind-tutorial.md
@@ -198,6 +198,8 @@ To access the TiDB cluster, use `kubectl port-forward` to expose services to the
      $ kubectl port-forward svc/demo-tidb 4000:4000 --namespace=tidb
      ```

+     > **Note:** If the proxy is set up successfully, it will print something like `Forwarding from 0.0.0.0:4000 -> 4000`. After testing, press `Ctrl + C` to stop the proxy and exit.
+
   2. Then, to connect to TiDB using the MySQL client, open a new terminal tab or window and run the following command:

      ```sh
      $ mysql -h 127.0.0.1 -P 4000 -u root
@@ -212,7 +214,7 @@ To access the TiDB cluster, use `kubectl port-forward` to expose services to the
      $ kubectl port-forward svc/demo-grafana 3000:3000 --namespace=tidb
      ```

-     If the proxy is set up sucessfully, it will print something like `Forwarding from 0.0.0.0:3000 -> 3000`, press `Ctrl + C` to stop the proxy and exit.
+     > **Note:** If the proxy is set up successfully, it will print something like `Forwarding from 0.0.0.0:3000 -> 3000`. After testing, press `Ctrl + C` to stop the proxy and exit.

  2. 
Open your web browser at http://localhost:3000 to access the Grafana monitoring interface. From 3cd98265b0643314a163e0833aeb3e03913fb6d8 Mon Sep 17 00:00:00 2001 From: Allen Zhong Date: Thu, 23 May 2019 16:36:12 +0800 Subject: [PATCH 18/22] Apply suggestions from code review Co-Authored-By: Keke Yi <40977455+yikeke@users.noreply.github.com> --- deploy/aws/README.md | 2 +- docs/local-dind-tutorial.md | 18 +++++++++--------- 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/deploy/aws/README.md b/deploy/aws/README.md index c19f369174..c0b01a89d0 100644 --- a/deploy/aws/README.md +++ b/deploy/aws/README.md @@ -80,7 +80,7 @@ tidb_version = v3.0.0-rc.1 > **Note:** You can use the `terraform output` command to get the output again. -## Access the DB +## Access the database To access the deployed TiDB cluster, use the following commands to first `ssh` into the bastion machine, and then connect it via MySQL client (replace the `<>` parts with values from the output): diff --git a/docs/local-dind-tutorial.md b/docs/local-dind-tutorial.md index 6d52655664..5f05c153f9 100644 --- a/docs/local-dind-tutorial.md +++ b/docs/local-dind-tutorial.md @@ -16,7 +16,7 @@ Before deploying a TiDB cluster to Kubernetes, make sure the following requireme > **Note:** [Legacy Docker Toolbox](https://docs.docker.com/toolbox/toolbox_install_mac/) users must migrate to [Docker for Mac](https://store.docker.com/editions/community/docker-ce-desktop-mac) by uninstalling Legacy Docker Toolbox and installing Docker for Mac, because DinD cannot run on Docker Toolbox and Docker Machine. - > **Note:** `kubeadm` validates installed Docker version during the installation process. If you are using Docker later than 18.06, there would be warning messages. The cluster might still be working, but it is recommended to use a Docker version between 17.03 and 18.06 for better compatibility. You can find older versions of docker at [here](https://download.docker.com/). + > **Note:** `kubeadm` validates installed Docker version during the installation process. If you are using Docker later than 18.06, there would be warning messages. The cluster might still be working, but it is recommended to use a Docker version between 17.03 and 18.06 for better compatibility. You can find older versions of docker [here](https://download.docker.com/). - [Helm Client](https://github.com/helm/helm/blob/master/docs/install.md#installing-the-helm-client): 2.9.0 or later - [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl): 1.10 at least, 1.13 or later recommended @@ -61,7 +61,7 @@ Before deploying a TiDB cluster to Kubernetes, make sure the following requireme ## Step 1: Deploy a Kubernetes cluster using DinD -Install and set up a Kubernetes cluster (version 1.12) using DinD for TiDB Operator with the script in our repository. 
+First, make sure that the docker daemon is running, and you can install and set up a Kubernetes cluster (version 1.12) using DinD for TiDB Operator with the script in our repository: ```sh # Get the code @@ -113,7 +113,7 @@ Once the k8s cluster is up and running, we can install TiDB Operator into it usi $ helm install charts/tidb-operator --name=tidb-operator --namespace=tidb-admin --set scheduler.kubeSchedulerImageName=mirantis/hypokube --set scheduler.kubeSchedulerImageTag=final ``` -Then wait few minutes until operator is running: +Then wait a few minutes until TiDB Operator is running: ```sh $ kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator @@ -130,10 +130,10 @@ By using `helm` along with TiDB Operator, we can easily set up a TiDB cluster: $ helm install charts/tidb-cluster --name=demo --namespace=tidb ``` -And wait a few minutes for all TiDB components get created and ready: +And wait a few minutes for all TiDB components to get created and ready: ```sh -# Use Ctrl + C to exit watch mode +# Use `Ctrl + C` to exit watch mode $ kubectl get pods --namespace tidb -l app.kubernetes.io/instance=demo -o wide --watch # Get basic information of the TiDB cluster @@ -216,7 +216,7 @@ To access the TiDB cluster, use `kubectl port-forward` to expose services to the > **Note:** If the proxy is set up sucessfully, it will print something like `Forwarding from 0.0.0.0:3000 -> 3000`. After testing, press `Ctrl + C` to stop the proxy and exit. - 2. Open your web browser at http://localhost:3000 to access the Grafana monitoring interface. + 2. Then, open your web browser at http://localhost:3000 to access the Grafana monitoring interface. * Default username: admin * Default password: admin @@ -268,7 +268,7 @@ To access the TiDB cluster, use `kubectl port-forward` to expose services to the You can scale out or scale in the TiDB cluster simply by modifying the number of `replicas`. -1. Edit the `charts/tidb-cluster/values.yaml` file with your preffered text editor. +1. Edit the `charts/tidb-cluster/values.yaml` file with your preferred text editor. For example, to scale out the cluster, you can modify the number of TiKV `replicas` from 3 to 5, or the number of TiDB `replicas` from 2 to 3. @@ -284,7 +284,7 @@ Use `kubectl get pod -n tidb` to verify the number of each compoments equal to v ## Upgrade the TiDB cluster -1. Edit the `charts/tidb-cluster/values.yaml` file with your preffered text editor. +1. Edit the `charts/tidb-cluster/values.yaml` file with your preferred text editor. For example, change the version of PD/TiKV/TiDB `image` to `v2.1.10`. @@ -333,7 +333,7 @@ $ kubectl delete pvc --namespace tidb --all $ manifests/local-dind/dind-cluster-v1.12.sh stop ``` - You can use `docker ps` to verify there are no docker container running. + You can use `docker ps` to verify that there is no docker container running. 
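If everything has stopped cleanly, `docker ps` prints nothing but its header row — a sketch of the expected result:

```sh
$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
```
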
* If you want to restart the DinD Kubernetes after you stop it, run the following command: From e004170a5189c9158940ffa1f785bd18b92ee248 Mon Sep 17 00:00:00 2001 From: Allen Zhong Date: Thu, 23 May 2019 16:39:45 +0800 Subject: [PATCH 19/22] docs/dind: make delete instructions more clear --- docs/local-dind-tutorial.md | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/docs/local-dind-tutorial.md b/docs/local-dind-tutorial.md index 5f05c153f9..a011887c5f 100644 --- a/docs/local-dind-tutorial.md +++ b/docs/local-dind-tutorial.md @@ -318,7 +318,9 @@ When you are done with your test, use the following command to destroy the TiDB $ helm delete demo --purge ``` -> **Note:** This only deletes the running pods and other resources, the data is persisted. If you do not need the data anymore, run the following commands to clean up the data. (Be careful, this permanently deletes the data). +> **Note:** This only deletes the running pods and other resources, the data is persisted. + +If you do not need the data anymore, run the following commands to clean up the data. (Be careful, this permanently deletes the data). ```sh $ kubectl get pv -l app.kubernetes.io/namespace=tidb -o name | xargs -I {} kubectl patch {} -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' @@ -343,7 +345,7 @@ $ kubectl delete pvc --namespace tidb --all ## Destroy the DinD Kubernetes cluster -If you want to clean up the DinD Kubernetes cluster and bring up a new cluster, run the following commands: +If you want to clean up the DinD Kubernetes cluster, run the following commands: ```sh $ manifests/local-dind/dind-cluster-v1.12.sh clean From b38188e209fbd1fa2c6b77456d3f1e38c4882584 Mon Sep 17 00:00:00 2001 From: Allen Zhong Date: Thu, 23 May 2019 17:16:55 +0800 Subject: [PATCH 20/22] docs/aws: update instructions of customizing params --- deploy/aws/README.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/deploy/aws/README.md b/deploy/aws/README.md index c0b01a89d0..c39cd42fe8 100644 --- a/deploy/aws/README.md +++ b/deploy/aws/README.md @@ -151,4 +151,7 @@ Currently, the instance type of TiDB cluster component is not configurable becau ### Customize TiDB parameters -Currently, there are not much parameters exposed to be customizable. If you need to customize these, you should modify the `templates/tidb-cluster-values.yaml.tpl` files before deploying. Or if you modify it and run `terraform apply` again after the cluster is running, it will not take effect unless you manually delete the pod via `kubectl delete po -n tidb --all`. This will be resolved when issue [#255](https://github.com/pingcap/tidb-operator/issues/225) is fixed. +Currently, there are not many customizable TiDB parameters. And there are two ways to customize the parameters: + +* Before deploying the cluster, you can directly modify the `templates/tidb-cluster-values.yaml.tpl` file and then deploy the cluster with customized configs. +* After the cluster is running, you must run `terraform apply` again every time you make changes to the `templates/tidb-cluster-values.yaml.tpl` file, or the cluster will still be using old configs. 
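As a minimal sketch of the second workflow described above (the choice of editor is arbitrary; only the re-run of `terraform apply` matters):

``` shell
# Edit the template, then re-apply so the new configs are rendered and used
$ vi templates/tidb-cluster-values.yaml.tpl
$ terraform apply
```
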
From 9e41760298885ac9dba208128f183fddb6f1d77f Mon Sep 17 00:00:00 2001 From: Allen Zhong Date: Thu, 23 May 2019 17:19:12 +0800 Subject: [PATCH 21/22] docs/dind: clean up --- docs/local-dind-tutorial.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/docs/local-dind-tutorial.md b/docs/local-dind-tutorial.md index a011887c5f..e16d71e6c4 100644 --- a/docs/local-dind-tutorial.md +++ b/docs/local-dind-tutorial.md @@ -350,7 +350,6 @@ If you want to clean up the DinD Kubernetes cluster, run the following commands: ```sh $ manifests/local-dind/dind-cluster-v1.12.sh clean $ sudo rm -rf data/kube-node-* -$ manifests/local-dind/dind-cluster-v1.12.sh up ``` -> **Warning:** You must clean the data after you destroy the DinD Kubernetes cluster, otherwise the TiDB cluster would fail to start when you try to bring it up again. +> **Warning:** You must clean the data after you destroy the DinD Kubernetes cluster, otherwise the TiDB cluster would fail to start when you try to bring a new cluster up again. From 4e0c8e5437dc5be4aca7e89cbc750e4750910e8c Mon Sep 17 00:00:00 2001 From: Allen Zhong Date: Fri, 24 May 2019 14:40:32 +0800 Subject: [PATCH 22/22] docs/aws: add examples and adjust order of sections --- deploy/aws/README.md | 42 ++++++++++++++++++++++++++++++------------ 1 file changed, 30 insertions(+), 12 deletions(-) diff --git a/deploy/aws/README.md b/deploy/aws/README.md index c39cd42fe8..0691fb1c7e 100644 --- a/deploy/aws/README.md +++ b/deploy/aws/README.md @@ -18,6 +18,7 @@ Before deploying a TiDB cluster on AWS EKS, make sure the following requirements Default output format [None]: json ``` > **Note:** The access key must have at least permissions to: create VPC, create EBS, create EC2 and create role +* [terraform](https://learn.hashicorp.com/terraform/getting-started/install.html) * [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) >= 1.11 * [helm](https://github.com/helm/helm/blob/master/docs/install.md#installing-the-helm-client) >= 2.9.0 * [jq](https://stedolan.github.io/jq/download/) @@ -113,27 +114,34 @@ The initial Grafana login credentials are: - User: admin - Password: admin -## Destroy - -It may take some while to finish destroying the cluster. - -``` shell -$ terraform destroy -``` +## Upgrade -> **Note:** You have to manually delete the EBS volumes in AWS console after running `terraform destroy` if you do not need the data on the volumes anymore. +To upgrade the TiDB cluster, edit the `variables.tf` file with your preferred text editor and modify the `tidb_version` variable to a higher version, and then run `terraform apply`. -## Upgrade +For example, to upgrade the cluster to version 2.1.10, modify the `tidb_version` to `v2.1.10`: -To upgrade the TiDB cluster, modify the `tidb_version` variable to a higher version in the `variables.tf` file, and then run `terraform apply`. +``` + variable "tidb_version" { + description = "tidb cluster version" + default = "v2.1.10" + } +``` > *Note*: The upgrading doesn't finish immediately. You can watch the upgrading process by `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch`. ## Scale -To scale the TiDB cluster, modify the `tikv_count` or `tidb_count` variable to your desired count in the `variables.tf` file, and then run `terraform apply`. +To scale the TiDB cluster, edit the `variables.tf` file with your preferred text editor and modify the `tikv_count` or `tidb_count` variable to your desired count, and then run `terraform apply`. 
For example, to scale out the cluster, you can modify the number of TiDB instances from 2 to 3:

```
  variable "tidb_count" {
    default = 3
  }
```

> *Note*: Currently, scaling in is NOT supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete, you can watch the scaling out by `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch`.

## Customize

@@ -155,3 +163,13 @@ Currently, there are not many customizable TiDB parameters. And there are two ways to customize the parameters:

 * Before deploying the cluster, you can directly modify the `templates/tidb-cluster-values.yaml.tpl` file and then deploy the cluster with customized configs.
 * After the cluster is running, you must run `terraform apply` again every time you make changes to the `templates/tidb-cluster-values.yaml.tpl` file, or the cluster will still be using old configs.
+
+## Destroy
+
+It may take some while to finish destroying the cluster.
+
+``` shell
+$ terraform destroy
+```
+
+> **Note:** You have to manually delete the EBS volumes in AWS console after running `terraform destroy` if you do not need the data on the volumes anymore.
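The note above gives no command for the leftover volumes; one possible way to locate and remove them with `awscli` is sketched below — the filter and the volume id are illustrative, so double-check in the AWS console that a volume really belongs to the destroyed cluster before deleting it:

``` shell
# List volumes that are not attached to any instance
$ aws ec2 describe-volumes --filters Name=status,Values=available

# Delete an orphaned volume by its id (repeat for each volume)
$ aws ec2 delete-volume --volume-id vol-0123456789abcdef0
```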