
AWS EKS: unable to connect to the cluster with create_vpc=false and existing VPC #514

Closed
george24601 opened this issue May 24, 2019 · 2 comments · Fixed by #530

@george24601

Bug Report

What version of Kubernetes are you using?
AWS EKS 1.12 on us-east-2

What did you do?

  1. Created a new TiDB cluster on EKS following the README, with the create_vpc tf variable set to true
  2. Verified that creation succeeded by logging into TiDB and executing queries
  3. Created a second TiDB cluster using a copy of the Terraform config in a separate directory (so that the tfstate files don't interfere with each other), this time with create_vpc = false, i.e., my tfvars file looks like:
pd_instance_type = "c5d.large"
tikv_instance_type = "c5d.large"
tidb_instance_type = "c4.large"
monitor_instance_type = "c5.large"

pd_count = 1
tikv_count = 1
tidb_count = 1

cluster_name = "rep_george"
tikv_root_volume_size = "50"

region = "us-east-2"
create_vpc = false
vpc_id = "$VPC_ID_CREATED_IN_STEP_ONE"
subnets = $SUBNET_IDS_CREATED_IN_STEP_ONE
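For concreteness, a hypothetical example of those last few lines with placeholder IDs (the actual values should come from the first cluster's Terraform output; this assumes the module's `subnets` variable takes a list of subnet IDs):

```hcl
# Hypothetical IDs for illustration only; use the VPC and subnet IDs
# created by the first cluster's Terraform run.
create_vpc = false
vpc_id     = "vpc-0a1b2c3d4e5f67890"
subnets    = ["subnet-0a1b2c3d4e5f67890", "subnet-0f1e2d3c4b5a69780"]
```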

What did you expect to see?
The second TiDB cluster would be created inside the same VPC, but on a second EKS cluster.

What did you see instead?
Unable to connect to the bastion host, because it was placed in a private subnet.

From the tf code, it seems that when create_vpc=false we should use separate subnet ID variables for the bastion and for EKS, so that the bastion goes into a public subnet and EKS into private ones (see the sketch below the links)?
https://github.com/pingcap/tidb-operator/blob/master/deploy/aws/main.tf#L72
https://github.com/pingcap/tidb-operator/blob/master/deploy/aws/main.tf#L89
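For illustration, a minimal sketch of that suggestion; the variable names `bastion_subnet_ids` and `eks_subnet_ids` are hypothetical, not the module's actual variables:

```hcl
# Hypothetical split of the single `subnets` variable, so the bastion host
# can be pinned to public subnets while EKS nodes stay in private ones.
variable "bastion_subnet_ids" {
  description = "Public subnet IDs for the bastion host"
  type        = list(string)
}

variable "eks_subnet_ids" {
  description = "Private subnet IDs for the EKS cluster and worker nodes"
  type        = list(string)
}
```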

@AstroProfundis
Contributor

Hello, thank you for the report. We are looking into this issue and will update later. Your patience is appreciated.

@AstroProfundis
Contributor

Reusing an existing VPC and subnets created by Terraform for another EKS cluster is not currently supported due to various limitations. However, it is possible to get the second cluster deployed by manually adding the tag kubernetes.io/cluster/<second_cluster_name>=shared to the subnets of the first cluster (if that tag doesn't already exist), for example as in the sketch below.
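For illustration only, one way to manage that tag from a separate Terraform configuration, using the AWS provider's `aws_ec2_tag` resource (the subnet ID and cluster name below are hypothetical):

```hcl
# Tags an existing subnet so a second EKS cluster can discover and share it.
resource "aws_ec2_tag" "shared_subnet" {
  resource_id = "subnet-0a1b2c3d4e5f67890"          # a subnet of the first cluster
  key         = "kubernetes.io/cluster/rep_george2" # kubernetes.io/cluster/<second_cluster_name>
  value       = "shared"
}
```

The same tag can also be added by hand in the AWS console or with `aws ec2 create-tags`.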

Note that this is not recommended and may cause dependency issues when trying to modify or destroy these resources.

We have submitted a PR (#530) to correctly place resources into public and private subnets, and to make the docs clearer about this kind of situation.

yahonda pushed a commit that referenced this issue Dec 27, 2021
* eks: Update the eks deployment doc

Translate pingcap/docs-tidb-operator#421

* Address comments

* Optimize wording

Co-authored-by: DanielZhangQD <36026334+DanielZhangQD@users.noreply.github.com>
Co-authored-by: Lilian Lee <lilin@pingcap.com>