
docs: Fix issues found in Queeny's test #507

Merged 24 commits on May 24, 2019.

Commits:
ebaa8ec docs/dind: add instructions for XFS filesystems (AstroProfundis, May 15, 2019)
8d07ed2 docs: use `kubectl --watch` instead of `watch kubectl` in tutorials (AstroProfundis, May 17, 2019)
211012e docs/aws: add detail instructions for requirements (AstroProfundis, May 20, 2019)
b4d4ffd docs/dind: add notes for accessing TiDB cluster with MySQL client (AstroProfundis, May 20, 2019)
fe67dcd docs: add sample output of terraform (AstroProfundis, May 21, 2019)
117dec3 Merge branch 'master' of github.com:pingcap/tidb-operator into fix-qu… (AstroProfundis, May 21, 2019)
5d2c19d docs/aws: minor updates of words (AstroProfundis, May 21, 2019)
5eb0ddd Update docs/local-dind-tutorial.md (AstroProfundis, May 22, 2019)
15d7ef2 docs: update instructions for macOS & update argument in sample (AstroProfundis, May 22, 2019)
471bab5 docs: Apply suggestions from code review (AstroProfundis, May 22, 2019)
e510d0e docs/aws: fix synatax of intrucduction (AstroProfundis, May 22, 2019)
1f48c22 docs/aliyun: update instructions when deploying with terraform (AstroProfundis, May 22, 2019)
d04d708 docs/aws: adjust instructions to access DB and grafana & cleanup (AstroProfundis, May 22, 2019)
32fa5e9 Apply suggestions from code review (AstroProfundis, May 23, 2019)
7b46590 docs/dind: update contents to fix issues found in the test (AstroProfundis, May 23, 2019)
b7ff879 docs/aws: add an introduction sentense of terraform installation (AstroProfundis, May 23, 2019)
6bc6e06 docs/dind: pointing out xfs issue only applies to Linux users (AstroProfundis, May 23, 2019)
19a6b28 docs/dind: make port-forward instruction more clear (AstroProfundis, May 23, 2019)
3cd9826 Apply suggestions from code review (AstroProfundis, May 23, 2019)
e004170 docs/dind: make delete instructions more clear (AstroProfundis, May 23, 2019)
b38188e docs/aws: update instructions of customizing params (AstroProfundis, May 23, 2019)
9e41760 docs/dind: clean up (AstroProfundis, May 23, 2019)
4e0c8e5 docs/aws: add examples and adjust order of sections (AstroProfundis, May 24, 2019)
faab010 Merge branch 'master' into fix-queeny-test-issues (tennix, May 24, 2019)
11 changes: 7 additions & 4 deletions deploy/aliyun/README.md
@@ -44,15 +44,18 @@ The `variables.tf` file contains default settings of variables used for deployin
Apply the stack:

```shell
# Get the code
$ git clone https://github.com/pingcap/tidb-operator
-$ cd tidb-operator/deploy/alicloud
$ cd tidb-operator/deploy/aliyun

# Apply the configs, note that you must answer "yes" to `terraform apply` to continue
$ terraform init
$ terraform apply
```

`terraform apply` will take 5 to 10 minutes to create the whole stack. Once complete, basic cluster information will be printed:

-> **Note:** You can use the `terraform output` command to get this information again.
> **Note:** You can use the `terraform output` command to get the output again.

```
Apply complete! Resources: 3 added, 0 changed, 1 destroyed.
```

@@ -82,7 +85,7 @@ $ helm ls

## Access the DB

-You can connect the TiDB cluster via the bastion instance, all necessary information are in the output printed after installation is finished:
You can connect to the TiDB cluster via the bastion instance; all the necessary information is in the output printed after the installation finishes (replace the `<>` parts with values from the output):

```shell
$ ssh -i credentials/<cluster_name>-bastion-key.pem root@<bastion_ip>
```

@@ -106,7 +109,7 @@ To upgrade TiDB cluster, modify `tidb_version` variable to a higher version in `
This may take a while to complete. Watch the process using the following command:

```
-watch kubectl get pods --namespace tidb -o wide
kubectl get pods --namespace tidb -o wide --watch
```

## Scale TiDB cluster
136 changes: 105 additions & 31 deletions deploy/aws/README.md
@@ -1,48 +1,98 @@
# Deploy TiDB Operator and TiDB cluster on AWS EKS

This document describes how to deploy TiDB Operator and a TiDB cluster on AWS EKS from your laptop (Linux or macOS) for development or testing.

-## Requirements:
## Prerequisites

Before deploying a TiDB cluster on AWS EKS, make sure the following requirements are satisfied:

-* [awscli](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) >= 1.16.73
* [awscli](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) >= 1.16.73, to control AWS resources

The `awscli` must be [configured](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) before it can interact with AWS. The fastest way is to use the `aws configure` command:

``` shell
# Replace AWS Access Key ID and AWS Secret Access Key with your own keys
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json
```
> **Note:** The access key must at least have permissions to create VPCs, EBS volumes, EC2 instances, and IAM roles.
* [terraform](https://learn.hashicorp.com/terraform/getting-started/install.html)
* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) >= 1.11
* [helm](https://github.com/helm/helm/blob/master/docs/install.md#installing-the-helm-client) >= 2.9.0
* [jq](https://stedolan.github.io/jq/download/)
-* [aws-iam-authenticator](https://github.com/kubernetes-sigs/aws-iam-authenticator) installed in `PATH`
* [aws-iam-authenticator](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html) installed in `PATH`, to authenticate with AWS

The easiest way to install `aws-iam-authenticator` is to download the prebuilt binary:

-## Configure awscli
-
-https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
-
-## Setup

``` shell
# Download binary for Linux
curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator

# Or, download binary for macOS
curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/darwin/amd64/aws-iam-authenticator

chmod +x ./aws-iam-authenticator
sudo mv ./aws-iam-authenticator /usr/local/bin/aws-iam-authenticator
```

Review discussion on this snippet (marked as resolved):

**Contributor:** Maybe using the code below is simpler and more general:

``` shell
os=$(uname -s | tr '[:upper:]' '[:lower:]')
curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/$os/amd64/aws-iam-authenticator
```

**Contributor Author:** I don't have a Mac to test with; can you confirm this works on macOS?

**Contributor:** Yes, it works on macOS and Linux.

**Member:** @AstroProfundis The link you mentioned already has an elegant way to install aws-iam-authenticator: for macOS, install it with Homebrew; for Linux, fetch the binary with curl. It's inappropriate to provide the tricky script in the guide.

**Contributor Author:** I don't think Homebrew is available in every macOS installation by default...

**Member:** Yes, it's not built into macOS, but almost all technical users install it, so I think providing the Homebrew command is reasonable.

**Contributor Author:** This definitely can't pass Queeny's test; "almost all technical users" does not fit the target user groups.

**Contributor (@yikeke, May 23, 2019):** You can add a note in the document that before users install aws-iam-authenticator with Homebrew, they need to make sure they have installed Homebrew first; then providing the Homebrew way is totally acceptable for Queeny's test.

**Contributor Author:** Then we're introducing a new dependency and copying almost all major parts of the AWS document; what's the point of avoiding out-linking then?

**Contributor:** Yeah, you are right. So if the curl way does not require any new dependency for macOS, I suggest we introduce the curl way in our document. If users prefer the Homebrew way, they can still click the link and find the guide in the AWS document.
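Whichever download variant you use, you can confirm the binary is installed and on your `PATH`; the `help` subcommand should print usage information:

``` shell
# Prints usage help if the binary is reachable on PATH
aws-iam-authenticator help
```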

## Deploy

-The default setup will create a new VPC and a t2.micro instance as bastion machine. And EKS cluster with the following ec2 instance worker nodes:
The default setup will create a new VPC and a t2.micro instance as a bastion machine, and an EKS cluster with the following EC2 instances as worker nodes:

* 3 m5d.xlarge instances for PD
* 3 i3.2xlarge instances for TiKV
* 2 c4.4xlarge instances for TiDB
* 1 c5.xlarge instance for monitor

-You can change default values in `variables.tf` (like the cluster name and versions) as needed. The default value of `cluster_name` is `my-cluster`.
Use the following commands to set up the cluster:

``` shell
# Get the code
$ git clone --depth=1 https://github.com/pingcap/tidb-operator
$ cd tidb-operator/deploy/aws

# Apply the configs, note that you must answer "yes" to `terraform apply` to continue
$ terraform init
$ terraform apply
```

-It might take 10 minutes or more for the process to finish. After `terraform apply` is executed successfully, some basic information is printed to the console. You can access the `monitor_endpoint` using your web browser.
It might take 10 minutes or more to finish the process. After `terraform apply` is executed successfully, some useful information is printed to the console.

-> **Note:** You can use the `terraform output` command to get that information again.
-To access TiDB cluster, use the following command to first ssh into the bastion machine, and then connect it via MySQL client:

A successful deployment will give output like:
```
Apply complete! Resources: 67 added, 0 changed, 0 destroyed.

Outputs:

bastion_ip = [
52.14.50.145
]
eks_endpoint = https://E10A1D0368FFD6E1E32E11573E5CE619.sk1.us-east-2.eks.amazonaws.com
eks_version = 1.12
monitor_endpoint = http://abd299cc47af411e98aae02938da0762-1989524000.us-east-2.elb.amazonaws.com:3000
region = us-east-2
tidb_dns = abd2e3f7c7af411e98aae02938da0762-17499b76b312be02.elb.us-east-2.amazonaws.com
tidb_port = 4000
tidb_version = v3.0.0-rc.1
```

> **Note:** You can use the `terraform output` command to get the output again.
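For example, to print a single value by name (using the output names shown above):

``` shell
terraform output tidb_dns
terraform output monitor_endpoint
```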

## Access the database

To access the deployed TiDB cluster, use the following commands to first `ssh` into the bastion machine, and then connect to it via the MySQL client (replace the `<>` parts with values from the output):

``` shell
ssh -i credentials/k8s-prod-<cluster_name>.pem ec2-user@<bastion_ip>
mysql -h <tidb_dns> -P <tidb_port> -u root
```
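Once you are on the bastion machine, a quick connectivity check, for example (a sketch using the same placeholders; `tidb_version()` is a built-in TiDB function):

``` shell
# Should print the TiDB version string if the cluster is reachable
mysql -h <tidb_dns> -P <tidb_port> -u root -e 'SELECT tidb_version();'
```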

-If the DNS name is not resolvable, be patient and wait a few minutes.
The default value of `cluster_name` is `my-cluster`. If the DNS name is not resolvable, be patient and wait a few minutes.

-You can interact with the EKS cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_<cluster_name>`.
You can interact with the EKS cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_<cluster_name>`:

``` shell
# By specifying --kubeconfig argument
# [diff collapsed: @@ -55,30 +105,48 @@]
kubectl get po -n tidb
helm ls
```
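Alternatively, a sketch of setting the environment variable once per shell session (assuming you run this from the `deploy/aws` directory, with `<cluster_name>` as above):

``` shell
# Set the kubeconfig for the current shell session
export KUBECONFIG=$PWD/credentials/kubeconfig_<cluster_name>
kubectl get po -n tidb
helm ls
```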

-# Destory
## Monitor

-It may take some while to finish destroying the cluster.
You can access the `monitor_endpoint` address (printed in the outputs) using your web browser to view monitoring metrics.

-```shell
-$ terraform destroy
-```
The initial Grafana login credentials are:

-> **Note:** You have to manually delete the EBS volumes in AWS console after running `terraform destroy` if you do not need the data on the volumes anymore.

* User: admin
* Password: admin

-## Upgrade TiDB cluster
## Upgrade

-To upgrade TiDB cluster, modify `tidb_version` variable to a higher version in variables.tf and run `terraform apply`.
To upgrade the TiDB cluster, edit the `variables.tf` file with your preferred text editor, modify the `tidb_version` variable to a higher version, and then run `terraform apply`.

-> *Note*: The upgrading doesn't finish immediately. You can watch the upgrading process by `watch kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb`
-## Scale TiDB cluster

For example, to upgrade the cluster to version 2.1.10, modify `tidb_version` to `v2.1.10`:
```
variable "tidb_version" {
  description = "tidb cluster version"
  default     = "v2.1.10"
}
```

> *Note*: The upgrade doesn't finish immediately. You can watch the upgrade process with `kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb --watch`.
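Concretely, the upgrade flow after editing `variables.tf` is just a re-apply followed by the watch command (answer "yes" when prompted):

``` shell
# Roll out the new version
terraform apply

# Watch the pods restart with the new version
kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb --watch
```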

## Scale

-To scale TiDB cluster, modify `tikv_count` or `tidb_count` to your desired count, and then run `terraform apply`.
-> *Note*: Currently, scaling in is not supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete, you can watch the scaling out by `watch kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb`

To scale the TiDB cluster, edit the `variables.tf` file with your preferred text editor, modify the `tikv_count` or `tidb_count` variable to your desired count, and then run `terraform apply`.

For example, to scale out the cluster, modify the number of TiDB instances from 2 to 4:
```
variable "tidb_count" {
  default = 4
}
```

> *Note*: Currently, scaling in is NOT supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete; you can watch the scaling-out process with `kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb --watch`.

## Customize

You can change default values in `variables.tf` (such as the cluster name and image versions) as needed.

### Customize AWS related resources

By default, the terraform script will create a new VPC. You can use an existing VPC by setting `create_vpc` to `false` and specifying your existing VPC ID and subnet IDs in the `vpc_id` and `subnets` variables.
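For example, a sketch of the relevant entries in `variables.tf` (the IDs below are placeholders, and the exact shape of the variable blocks may differ; substitute your own values):

```
variable "create_vpc" {
  default = false
}

variable "vpc_id" {
  default = "vpc-0123456789abcdef0"
}

variable "subnets" {
  default = ["subnet-01234567", "subnet-89abcdef", "subnet-02468ace"]
}
```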
@@ -91,11 +159,17 @@ Currently, the instance type of TiDB cluster component is not configurable becau

### Customize TiDB parameters

-Currently, there are not much parameters exposed to be customizable. If you need to customize these, you should modify the `templates/tidb-cluster-values.yaml.tpl` files before deploying. Or if you modify it and run `terraform apply` again after the cluster is running, it will not take effect unless you manually delete the pod via `kubectl delete po -n tidb --all`. This will be resolved when issue [#255](https://github.com/pingcap/tidb-operator/issues/225) is fixed.
Currently, there are not many customizable TiDB parameters. There are two ways to customize them:

* Before deploying the cluster, you can directly modify the `templates/tidb-cluster-values.yaml.tpl` file and then deploy the cluster with customized configs.
* After the cluster is running, you must run `terraform apply` again every time you make changes to the `templates/tidb-cluster-values.yaml.tpl` file; otherwise the cluster will keep using the old configs (see the sketch below).
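A sketch of that second workflow (run from `deploy/aws`; the keys available inside the template depend on the operator version):

``` shell
# 1. Edit the template with your preferred editor
vi templates/tidb-cluster-values.yaml.tpl

# 2. Re-apply so the running cluster picks up the new configs
terraform apply
```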

-## TODO
-- [ ] Use [cluster autoscaler](https://github.com/kubernetes/autoscaler)
-- [ ] Allow create a minimal TiDB cluster for testing
-- [ ] Make the resource creation synchronously to follow Terraform convention
-- [ ] Make more parameters customizable

## Destroy

It may take some time to finish destroying the cluster.

``` shell
$ terraform destroy
```

> **Note:** You have to manually delete the EBS volumes in the AWS console after running `terraform destroy` if you no longer need the data on the volumes.
2 changes: 1 addition & 1 deletion docs/aws-eks-tutorial.md
@@ -180,7 +180,7 @@ We can now point a browser to `localhost:3000` and view the dashboards.

To scale out the TiDB cluster, modify `tikv_count` or `tidb_count` in `aws-tutorial.tfvars` to your desired count, and then run `terraform apply -var-file=aws-tutorial.tfvars`.

-> *Note*: Currently, scaling in is not supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete, you can watch the scaling out by `watch kubectl --kubeconfig credentials/kubeconfig_aws_tutorial get po -n tidb`
> *Note*: Currently, scaling in is not supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete; you can watch the scaling out with `kubectl --kubeconfig credentials/kubeconfig_aws_tutorial get po -n tidb --watch`.

> *Note*: There are taints and tolerations in place such that only a single pod will be scheduled per node. The count is also passed on to helm via terraform. For this reason, attempting to scale out pods via helm or `kubectl scale` will not work as expected.