add terraform script to auto deploy TiDB cluster on AWS #401

Merged
7 commits merged into pingcap:master from the terraform-aws branch on May 2, 2019

Conversation

@tennix (Member) commented Apr 16, 2019

This PR adds initial support for one-click deployment of Cloud TiDB on AWS. PTAL @gregwebs @onlymellb @weekface
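
For context, the sketch below shows the kind of top-level interface such a Terraform script could expose. The variable names (pd_count, tikv_count, tidb_count, tidb_version) are the ones discussed later in this thread; the defaults and descriptions are illustrative assumptions, not the PR's exact values.

```hcl
# Hypothetical variables.tf sketch; names follow the discussion in this thread,
# defaults are illustrative only.
variable "region" {
  description = "AWS region to create the EKS cluster in"
  default     = "us-west-2" # example only
}

variable "tidb_version" {
  description = "TiDB version passed through to the Helm values (see discussion below)"
  default     = "v2.1.0" # example only
}

variable "pd_count" {
  description = "Number of PD instances"
  default     = 3
}

variable "tikv_count" {
  description = "Number of TiKV instances"
  default     = 3
}

variable "tidb_count" {
  description = "Number of TiDB instances"
  default     = 2
}
```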

@gregwebs (Contributor)

This looks great! Rather than having it under a cloud/ directory, I think it would make more sense to have a deploy/ directory: that name would also make sense for an on-prem deployment.

@c4pt0r (Member) commented Apr 16, 2019

Awesome! AWS finally got some love!

@c4pt0r (Member) commented Apr 16, 2019

It gets blocked at creating tiller:

```
...

null_resource.setup-env: Still creating... (4m40s elapsed)
null_resource.setup-env: Still creating... (4m50s elapsed)
null_resource.setup-env: Still creating... (5m0s elapsed)
null_resource.setup-env (local-exec): Error: tiller was not found. polling deadline exceeded

Error: Error applying plan:

1 error(s) occurred:

* null_resource.setup-env: Error running command 'kubectl apply -f manifests/crd.yaml
kubectl apply -f manifests/local-volume-provisioner.yaml
kubectl apply -f manifests/gp2-storageclass.yaml
kubectl apply -f manifests/tiller-rbac.yaml
helm init --service-account tiller --upgrade --wait
': exit status 1. Output: customresourcedefinition.apiextensions.k8s.io/tidbclusters.pingcap.com unchanged
storageclass.storage.k8s.io/local-storage unchanged
configmap/local-provisioner-config unchanged
daemonset.extensions/local-volume-provisioner unchanged
serviceaccount/local-storage-admin unchanged
clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-pv-binding unchanged
clusterrole.rbac.authorization.k8s.io/local-storage-provisioner-node-clusterrole unchanged
clusterrolebinding.rbac.authorization.k8s.io/local-storage-provisioner-node-binding unchanged
storageclass.storage.k8s.io/ebs-gp2 unchanged
serviceaccount/tiller unchanged
clusterrolebinding.rbac.authorization.k8s.io/tiller-clusterrolebinding configured
$HELM_HOME has been configured at /Users/dongxu/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Error: tiller was not found. polling deadline exceeded

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
```

@gregwebs (Contributor)

This should be integrated with the existing documentation for deploying to AWS EKS: https://github.com/pingcap/tidb-operator/blob/master/docs/aws-eks-tutorial.md

@c4pt0r (Member) commented Apr 16, 2019

Yes, it should be. But this PR still needs a lot of polish.

@tennix (Member, Author) commented Apr 16, 2019

@c4pt0r Did you follow the readme instructions to install aws-iam-authenticator?

@tennix (Member, Author) commented Apr 17, 2019

@gregwebs I've updated the code to move the Terraform scripts to the deploy directory as you suggested.

@c4pt0r (Member) commented Apr 21, 2019

```
null_resource.setup-env (local-exec): Happy Helming!
null_resource.setup-env: Creation complete after 1m28s (ID: 1960644279543843491)
helm_release.tidb-operator: Creating...
  chart:            "" => "/home/dongxu/tidb-operator/deploy/aws/charts/tidb-operator"
  disable_webhooks: "" => "false"
  force_update:     "" => "false"
  metadata.#:       "" => "<computed>"
  name:             "" => "tidb-operator"
  namespace:        "" => "tidb-admin"
  recreate_pods:    "" => "false"
  reuse:            "" => "false"
  reuse_values:     "" => "false"
  status:           "" => "DEPLOYED"
  timeout:          "" => "300"
  verify:           "" => "false"
  version:          "" => "0.1.0"
  wait:             "" => "true"

Error: Error applying plan:

1 error(s) occurred:

* helm_release.tidb-operator: 1 error(s) occurred:

* helm_release.tidb-operator: error installing: the server could not find the requested resource (post deployments.extensions)
```

@c4pt0r (Member) commented Apr 21, 2019

I'm sure I installed aws-iam-authenticator successfully.

@tennix (Member, Author) commented Apr 22, 2019

@c4pt0r Could you try running terraform apply again? The helm install may fail the first time; I'm still figuring out how to prevent that.
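
One possible way to reduce this first-apply flakiness is sketched below. It is not necessarily the fix that landed in the PR: the wait_for_tiller and tidb_operator resource names are hypothetical, and it assumes Terraform 0.12+ syntax. The idea is to make the helm_release wait explicitly until the tiller deployment is ready instead of relying on helm init --wait alone.

```hcl
# Hypothetical sketch: block until tiller is actually serving before the Helm
# provider tries to install anything.
resource "null_resource" "wait_for_tiller" {
  # "setup-env" matches the null_resource name shown in the log above.
  depends_on = [null_resource.setup-env]

  provisioner "local-exec" {
    # kubectl rollout status exits non-zero if the deployment does not become
    # ready within the timeout, which fails the apply early and clearly.
    command = "kubectl rollout status deployment/tiller-deploy -n kube-system --timeout=300s"
  }
}

resource "helm_release" "tidb_operator" {
  name      = "tidb-operator"
  namespace = "tidb-admin"
  chart     = "${path.module}/charts/tidb-operator"

  depends_on = [null_resource.wait_for_tiller]
}
```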

@c4pt0r (Member) commented Apr 22, 2019

OK... but it really confuses me. Why does it take two runs?

@gregwebs (Contributor)

I left a comment on #436 wondering about our approach to templating the helm values file.

tennix force-pushed the terraform-aws branch 2 times, most recently from 74cb4c6 to 372cc6b on April 29, 2019
@c4pt0r (Member) commented Apr 30, 2019

LGTM.

c4pt0r previously approved these changes Apr 30, 2019
@gregwebs (Contributor) commented May 1, 2019

I think we should template only the values that are necessary (e.g. the cluster version is not) and let the user change the Helm values directly. They need to know how to do that anyway.

@tennix (Member, Author) commented May 1, 2019

@gregwebs The customize section already documents that. Because each component must use the specified storage class, we don't want to let users modify that.
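
As a concrete illustration of that split, a sketch along these lines templates only the storage classes that Terraform has to own and leaves everything else to a user-edited values file. The chart path, template and values file names, and namespace are hypothetical; the storage class names are the ones that appear in the logs above, and the syntax assumes Terraform 0.12+.

```hcl
# Hypothetical sketch: render only the values Terraform must control;
# everything else stays in a values file the user edits directly.
resource "helm_release" "tidb_cluster" {
  name      = "tidb-cluster"
  namespace = "tidb"
  chart     = "${path.module}/charts/tidb-cluster"

  values = [
    # Storage classes are fixed per component and not meant to be overridden.
    templatefile("${path.module}/templates/tidb-cluster-values.yaml.tpl", {
      pd_storage_class   = "ebs-gp2"
      tikv_storage_class = "local-storage"
    }),
    # User-maintained overrides (replica counts, resources, etc.).
    file("${path.module}/values/custom-values.yaml"),
  ]
}
```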

@gregwebs (Contributor) commented May 1, 2019

@tennix Yes. But why do you expose tidb_version? It seems like this is not essential, but is instead making up for a UX issue with our Helm values?

It seems like you are figuring out how to query K8s for information to put into Terraform: could we eventually do that for pd_count, etc., as well?

@tennix (Member, Author) commented May 1, 2019

The current deployment is for a single TiDB cluster. We will need a separate setup and documentation for deploying multiple TiDB clusters in the same EKS cluster.

c4pt0r merged commit 13d859c into pingcap:master on May 2, 2019