deploy/aws: split public and private subnets when using existing vpc #530

Merged 5 commits on Jun 6, 2019
8 changes: 5 additions & 3 deletions deploy/aws/README.md
@@ -127,7 +127,7 @@ For example, to upgrade the cluster to version 3.0.0-rc.1, modify the `tidb_version`
}
```

> *Note*: The upgrading doesn't finish immediately. You can watch the upgrading process by `kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb --watch`.
> **Note**: The upgrade doesn't finish immediately. You can watch its progress with `kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb --watch`.
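
Concretely, the upgrade boils down to editing the version variable and re-applying. A sketch, assuming the variable is named `tidb_version` as the surrounding README excerpt suggests (the value shown is illustrative):

```hcl
# In variables.tf (or overridden via -var / terraform.tfvars):
# bump the version, then re-run `terraform apply`.
variable "tidb_version" {
  description = "TiDB version to deploy"
  default     = "v3.0.0-rc.1"
}
```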

## Scale

@@ -141,15 +141,17 @@ For example, to scale out the cluster, you can modify the number of TiDB instances
}
```

> *Note*: Currently, scaling in is NOT supported since we cannot determine which node to scale. Scaling out needs a few minutes to complete, you can watch the scaling out by `kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb --watch`.
> **Note**: Currently, scaling in is NOT supported because we cannot determine which node to remove. Scaling out takes a few minutes to complete; you can watch its progress with `kubectl --kubeconfig credentials/kubeconfig_<cluster_name> get po -n tidb --watch`.
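
A scale-out therefore amounts to bumping an instance-count variable and re-applying. A sketch; the variable name `tidb_count` here is hypothetical, so check `variables.tf` for the actual name:

```hcl
# Hypothetical variable name -- consult variables.tf for the real one.
variable "tidb_count" {
  description = "Number of TiDB instances"
  default     = 4
}
```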

## Customize

You can change default values in `variables.tf` (such as the cluster name and image versions) as needed.

### Customize AWS related resources

By default, the terraform script will create a new VPC. You can use an existing VPC by setting `create_vpc` to `false` and specify your existing VPC id and subnet ids to `vpc_id` and `subnets` variables.
By default, the terraform script will create a new VPC. You can use an existing VPC by setting `create_vpc` to `false` and specifying your existing VPC ID and subnet IDs in the `vpc_id`, `private_subnet_ids` and `public_subnet_ids` variables.

> **Note:** Reusing VPC and subnets of an existing EKS cluster is not supported yet due to limitations of AWS and Terraform, so only change this option if you have to use a manually created VPC.
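
A minimal `terraform.tfvars` sketch for reusing a manually created VPC, using the variables introduced in this PR (the VPC and subnet IDs below are placeholders, not real resources):

```hcl
# Reuse an existing, manually created VPC instead of creating a new one.
# All IDs below are placeholders -- substitute your own.
create_vpc         = false
vpc_id             = "vpc-0123456789abcdef0"
private_subnet_ids = ["subnet-0aaa1111", "subnet-0bbb2222", "subnet-0ccc3333"]
public_subnet_ids  = ["subnet-0ddd4444", "subnet-0eee5555", "subnet-0fff6666"]
```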

An ec2 instance is also created by default as bastion machine to connect to the created TiDB cluster, because the TiDB service is exposed as an [Internal Elastic Load Balancer](https://aws.amazon.com/blogs/aws/internal-elastic-load-balancers/). The ec2 instance has MySQL and Sysbench pre-installed, so you can SSH into the ec2 instance and connect to TiDB using the ELB endpoint. You can disable the bastion instance creation by setting `create_bastion` to `false` if you already have an ec2 instance in the VPC.

8 changes: 4 additions & 4 deletions deploy/aws/data.tf
@@ -29,25 +29,25 @@ data "template_file" "tidb_cluster_values" {
# data "kubernetes_service" "tidb" {
# depends_on = ["helm_release.tidb-cluster"]
# metadata {
# name = "tidb-cluster-tidb"
# name = "tidb-cluster-${var.cluster_name}-tidb"
# namespace = "tidb"
# }
# }

# data "kubernetes_service" "monitor" {
# depends_on = ["helm_release.tidb-cluster"]
# metadata {
# name = "tidb-cluster-grafana"
# name = "tidb-cluster-${var.cluster_name}-grafana"
# namespace = "tidb"
# }
# }

data "external" "tidb_service" {
depends_on = ["null_resource.wait-tidb-ready"]
program = ["bash", "-c", "kubectl --kubeconfig credentials/kubeconfig_${var.cluster_name} get svc -n tidb tidb-cluster-tidb -ojson | jq '.status.loadBalancer.ingress[0]'"]
program = ["bash", "-c", "kubectl --kubeconfig credentials/kubeconfig_${var.cluster_name} get svc -n tidb tidb-cluster-${var.cluster_name}-tidb -ojson | jq '.status.loadBalancer.ingress[0]'"]
}

data "external" "monitor_service" {
depends_on = ["null_resource.wait-tidb-ready"]
program = ["bash", "-c", "kubectl --kubeconfig credentials/kubeconfig_${var.cluster_name} get svc -n tidb tidb-cluster-grafana -ojson | jq '.status.loadBalancer.ingress[0]'"]
program = ["bash", "-c", "kubectl --kubeconfig credentials/kubeconfig_${var.cluster_name} get svc -n tidb tidb-cluster-${var.cluster_name}-grafana -ojson | jq '.status.loadBalancer.ingress[0]'"]
}
10 changes: 5 additions & 5 deletions deploy/aws/main.tf
@@ -69,7 +69,7 @@ module "ec2" {
monitoring = false
user_data = "${file("bastion-userdata")}"
vpc_security_group_ids = ["${aws_security_group.ssh.id}"]
subnet_ids = "${split(",", var.create_vpc ? join(",", module.vpc.public_subnets) : join(",", var.subnets))}"
subnet_ids = "${split(",", var.create_vpc ? join(",", module.vpc.public_subnets) : join(",", var.public_subnet_ids))}"

tags = {
app = "tidb"
@@ -86,7 +86,7 @@ module "eks" {
cluster_name = "${var.cluster_name}"
cluster_version = "${var.k8s_version}"
config_output_path = "credentials/"
subnets = "${split(",", var.create_vpc ? join(",", module.vpc.private_subnets) : join(",", var.subnets))}"
subnets = "${split(",", var.create_vpc ? join(",", module.vpc.private_subnets) : join(",", var.private_subnet_ids))}"
vpc_id = "${var.create_vpc ? module.vpc.vpc_id : var.vpc_id}"

# instance types: https://aws.amazon.com/ec2/instance-types/
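
The `split(",", condition ? join(",", a) : join(",", b))` pattern used for `subnet_ids` and `subnets` above is the standard Terraform 0.11 workaround for conditionals, which can only return strings, not lists. A standalone sketch of the same trick:

```hcl
# Terraform 0.11 ternaries cannot return lists, so each list is joined into a
# comma-separated string, selected, and split back into a list.
variable "use_a" {
  default = true
}

variable "list_a" {
  type    = "list"
  default = ["a1", "a2"]
}

variable "list_b" {
  type    = "list"
  default = ["b1", "b2"]
}

output "chosen" {
  value = "${split(",", var.use_a ? join(",", var.list_a) : join(",", var.list_b))}"
}
```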
@@ -209,7 +209,7 @@ resource "helm_release" "tidb-operator" {

resource "helm_release" "tidb-cluster" {
depends_on = ["helm_release.tidb-operator"]
name = "tidb-cluster"
name = "tidb-cluster-${var.cluster_name}"
namespace = "tidb"
chart = "${path.module}/charts/tidb-cluster"
values = [
@@ -226,11 +226,11 @@ until kubectl get po -n tidb -lapp.kubernetes.io/component=tidb | grep Running;
echo "Wait TiDB pod running"
sleep 5
done
until kubectl get svc -n tidb tidb-cluster-tidb | grep elb; do
until kubectl get svc -n tidb tidb-cluster-${var.cluster_name}-tidb | grep elb; do
echo "Wait TiDB service ready"
sleep 5
done
until kubectl get svc -n tidb tidb-cluster-grafana | grep elb; do
until kubectl get svc -n tidb tidb-cluster-${var.cluster_name}-grafana | grep elb; do
echo "Wait monitor service ready"
sleep 5
done
27 changes: 19 additions & 8 deletions deploy/aws/variables.tf
@@ -4,40 +4,51 @@ variable "region" {
}

variable "ingress_cidr" {
description = "IP cidr that allowed to access bastion ec2 instance"
description = "IP CIDR blocks allowed to access the bastion ec2 instance"
default = ["0.0.0.0/0"] # Note: Please restrict your ingress to only necessary IPs. Opening to 0.0.0.0/0 can lead to security vulnerabilities.
}

# Please note that this is only for manually created VPCs, deploying multiple EKS
# clusters in one VPC is NOT supported now.
variable "create_vpc" {
description = "Create a new VPC or not, if true the vpc_cidr/private_subnets/public_subnets must be set correctly, otherwise vpc_id/subnet_ids must be set correctly"
description = "Create a new VPC or not. If there is an existing VPC that you'd like to use, set this value to `false` and adjust `vpc_id`, `private_subnet_ids` and `public_subnet_ids` to the existing ones."
default = true
}

variable "vpc_cidr" {
description = "vpc cidr"
description = "The network to use within the VPC. This value is ignored if `create_vpc=false`."
default = "10.0.0.0/16"
}

variable "private_subnets" {
description = "vpc private subnets"
description = "The networks to use for private subnets. This value is ignored if `create_vpc=false`."
type = "list"
default = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

variable "public_subnets" {
description = "vpc public subnets"
description = "The networks to use for public subnets. This value is ignored if `create_vpc=false`."
type = "list"
default = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
}

variable "vpc_id" {
description = "VPC id"
description = "ID of the existing VPC. This value is ignored if `create_vpc=true`."
type = "string"
default = "vpc-c679deae"
}

variable "subnets" {
description = "subnet id list"
# To use the same subnets for both private and public usage,
# just set their values identical.
variable "private_subnet_ids" {
description = "The subnet ID(s) of the existing private networks. This value is ignored if `create_vpc=true`."
type = "list"
default = ["subnet-899e79f3", "subnet-a72d80cf", "subnet-a76d34ea"]

> **Reviewer (Contributor):** These defaults are weird and seem to be environment dependent.
>
> **Author (Contributor):** These are samples and should be changed to the user's values if they need to use existing ones; by default (with `create_vpc=true`) these values are ignored.

}


variable "public_subnet_ids" {
description = "The subnet ID(s) of the existing public networks. This value is ignored if `create_vpc=true`."
type = "list"
default = ["subnet-899e79f3", "subnet-a72d80cf", "subnet-a76d34ea"]
}
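
As the comment above `private_subnet_ids` notes, a VPC whose subnets serve both roles can simply pass the same IDs to both variables (placeholder IDs):

```hcl
# Placeholder subnet IDs -- replace with your own.
private_subnet_ids = ["subnet-0aa11bb22", "subnet-0cc33dd44"]
public_subnet_ids  = ["subnet-0aa11bb22", "subnet-0cc33dd44"]
```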