feat: Add Hybrid section with local cluster on Outposts #2008

Closed · wants to merge 2 commits
1 change: 1 addition & 0 deletions docs/patterns/.pages
@@ -7,6 +7,7 @@ nav:
- Fargate Serverless: fargate-serverless.md
- Fully Private Cluster: fully-private-cluster.md
- GitOps: gitops
- Hybrid: hybrid
- Istio: istio.md
- Karpenter on EKS Fargate: karpenter.md
- Karpenter on EKS MNG: karpenter-mng.md
7 changes: 7 additions & 0 deletions docs/patterns/hybrid/local-cluster-outposts.md
@@ -0,0 +1,7 @@
---
title: Local Cluster on AWS Outposts
---

{%
include-markdown "../../../patterns/local-cluster-outposts/README.md"
%}
78 changes: 78 additions & 0 deletions patterns/local-cluster-outposts/README.md
@@ -0,0 +1,78 @@
# Local Cluster on AWS Outposts

This pattern demonstrates how to provision an Amazon EKS local cluster on AWS Outposts. The solution primarily comprises the following components:

1. A self-managed node group, which is required to launch instances on the AWS Outposts rack.
2. EKS local clusters on AWS Outposts do not support public EKS endpoints, so configuration is provided to create a standalone instance on the AWS Outposts rack from which the pattern can be deployed.
3. An EBS GP2 storage class, which is required when deploying applications on the EKS cluster (see the example after this list).
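
For reference, workloads request EBS-backed storage through this class via a persistent volume claim. The claim below is a hypothetical sketch (`example-claim` is an illustrative name) that references the `ebs-sc` class created by this pattern; because the class uses `WaitForFirstConsumer`, the volume is only provisioned once a pod consumes the claim:

```sh
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
EOF
```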

<b>Links:</b>

- [Amazon EKS on-premises with AWS Outposts](https://docs.aws.amazon.com/eks/latest/userguide/eks-outposts.html)

## Code

```terraform hl_lines="19-22 46-61 72-88"
{% include "../../patterns/local-cluster-outposts/eks.tf" %}
```

## Deploy

!!! warning
Access to an AWS Outposts rack is required to deploy this pattern.

See [here](https://aws-ia.github.io/terraform-aws-eks-blueprints/getting-started/#prerequisites) for the prerequisites and steps to deploy this pattern.

EKS local clusters on AWS Outposts do not support public endpoints. Therefore, the cluster can only be accessed from within the VPC where the Outpost is deployed. The steps below demonstrate how to deploy a local cluster on AWS Outposts.
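
Once the cluster is active and you have network access to the VPC (for example, from the remote host created in step 1 below), `kubectl` access can be configured as usual. A sketch, where `<REGION>` and `<CLUSTER_NAME>` are placeholders for your values:

```sh
aws eks update-kubeconfig --region <REGION> --name <CLUSTER_NAME>
kubectl get nodes
```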

!!! info
If you already have access to the Outposts rack network (VPN, etc.), you can skip steps 1 and 2:

1. Deploy the remote host where the cluster will be provisioned from:

```sh
cd prerequisites
terraform init
terraform apply --auto-approve
```

2. If provisioning using the remote host deployed in step 1, connect to the remote host using SSM. You can use the output generated by step 1 to connect:

!!! note
You will need to have the [SSM plugin for the AWS CLI](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html) installed.

```sh
aws ssm start-session --region <REGION> --target <INSTANCE_ID>
```
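
The full connection command is also emitted by step 1 as the `ssm_start_session` Terraform output, so from the `prerequisites/` directory it can be retrieved with:

```sh
terraform output -raw ssm_start_session
```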

3. Once connected to the remote host, navigate to where the configuration files have been downloaded and deploy the pattern:

```sh
cd
terraform init
TF_VAR_outpost_arn=<YOUR-OUTPOST-ARN> terraform apply --auto-approve
```
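
If you do not have the Outpost ARN at hand, it can be listed with the AWS CLI (a convenience step, not part of the pattern):

```sh
aws outposts list-outposts --query 'Outposts[].OutpostArn'
```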

## Destroy

To remove the resources that were created, the destroy steps need to be executed in the reverse order of the deployment steps:

1. Log back into the remote host using SSM:

```sh
aws ssm start-session --region <REGION> --target <INSTANCE_ID>
```

2. Once connected to the remote host, navigate to where the configuration files have been downloaded and deprovision the pattern:

```sh
cd
terraform destroy --auto-approve
```

3. Exit the remote host and navigate to the local `prerequisites/` directory to deprovision the remote host:

```sh
cd prerequisites
terraform destroy --auto-approve
```
88 changes: 88 additions & 0 deletions patterns/local-cluster-outposts/eks.tf
@@ -0,0 +1,88 @@
################################################################################
# Cluster
################################################################################

module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.24"

cluster_name = local.name
cluster_version = "1.30"

# Gives Terraform identity admin access to cluster which will
# allow deploying resources (EBS storage class) into the cluster
enable_cluster_creator_admin_permissions = true

vpc_id = data.aws_vpc.this.id
subnet_ids = data.aws_subnets.this.ids

outpost_config = {
control_plane_instance_type = local.instance_type
outpost_arns = [var.outpost_arn]
}

# Extend cluster security group rules
cluster_security_group_additional_rules = {
ingress_vpc_https = {
description = "Remote host to control plane"
protocol = "tcp"
from_port = 443
to_port = 443
type = "ingress"
cidr_blocks = [data.aws_vpc.this.cidr_block]
}
}

self_managed_node_groups = {
outpost = {
name = local.name
ami_type = "AL2023_x86_64_STANDARD"

min_size = 1
max_size = 3
desired_size = 2
instance_type = local.instance_type

# Additional configuration required for the nodes to join the EKS local cluster
cloudinit_pre_nodeadm = [
{
content_type = "application/node.eks.aws"
content = <<-EOT
---
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
cluster:
enableOutpost: true
id: ${module.eks.cluster_id}
EOT
}
]
}
}

tags = local.tags
}

################################################################################
# GP2 Storage Class
# Required for local cluster on Outposts
################################################################################

resource "kubernetes_storage_class_v1" "this" {
metadata {
name = "ebs-sc"
annotations = {
"storageclass.kubernetes.io/is-default-class" = "true"
}
}

storage_provisioner = "ebs.csi.aws.com"
volume_binding_mode = "WaitForFirstConsumer"
allow_volume_expansion = true

parameters = {
type = "gp2"
encrypted = "true"
}
}
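
Note that the `ebs.csi.aws.com` provisioner above requires the EBS CSI driver to be running in the cluster, and its installation is not shown in this diff. One way to install it, as a sketch only (assuming the node IAM role carries the EBS permissions the driver needs):

```sh
helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm upgrade --install aws-ebs-csi-driver aws-ebs-csi-driver/aws-ebs-csi-driver \
  --namespace kube-system
```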
39 changes: 39 additions & 0 deletions patterns/local-cluster-outposts/main.tf
@@ -0,0 +1,39 @@
terraform {
required_version = ">= 1.3"

required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.61"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.20"
}
}

# ## Used for end-to-end testing on project; update to suit your needs
# backend "s3" {
# bucket = "terraform-ssp-github-actions-state"
# region = "us-west-2"
# key = "e2e/local-clusters-outposts/terraform.tfstate"
# }
}

provider "aws" {
region = local.region
}

################################################################################
# Common data/locals
################################################################################

locals {
name = basename(path.cwd)
region = "us-west-2"

tags = {
Blueprint = local.name
GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
}
}
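
The `kubernetes` provider is declared as required above, but its configuration is not shown in this diff. A minimal sketch of an exec-based configuration that would pair with the module outputs, assuming the AWS CLI is available where Terraform runs (local clusters are addressed by cluster ID rather than cluster name, which is also why the NodeConfig in `eks.tf` references `module.eks.cluster_id`):

```terraform
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # Assumption: the AWS CLI is installed wherever Terraform is executed;
    # local clusters authenticate by cluster ID
    args = ["eks", "get-token", "--cluster-id", module.eks.cluster_id, "--region", local.region]
  }
}
```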
48 changes: 48 additions & 0 deletions patterns/local-cluster-outposts/outpost-network.tf
@@ -0,0 +1,48 @@
################################################################################
# Outpost Network
################################################################################

variable "outpost_arn" {
description = "The ARN of the Outpost where the EKS cluster will be provisioned"
type = string
}

locals {
instance_type = element(tolist(data.aws_outposts_outpost_instance_types.this.instance_types), 0)
}

# We can only use the instance types supported by the Outpost rack
data "aws_outposts_outpost_instance_types" "this" {
arn = var.outpost_arn
}

data "aws_subnets" "lookup" {
filter {
name = "outpost-arn"
values = [var.outpost_arn]
}
}

# Reverse lookup of one Outpost subnet to discover its VPC
# The VPC is what the cluster uses
data "aws_subnet" "this" {
id = element(tolist(data.aws_subnets.lookup.ids), 0)
}

# The subnets on the Outpost, restricted to that same VPC
# These are the subnets the cluster uses
data "aws_subnets" "this" {
filter {
name = "outpost-arn"
values = [var.outpost_arn]
}

filter {
name = "vpc-id"
values = [data.aws_subnet.this.vpc_id]
}
}

data "aws_vpc" "this" {
id = data.aws_subnet.this.vpc_id
}
125 changes: 125 additions & 0 deletions patterns/local-cluster-outposts/prerequisites/main.tf
@@ -0,0 +1,125 @@
terraform {
required_version = ">= 1.3"

required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.61"
}
}

# ## Used for end-to-end testing on project; update to suit your needs
# backend "s3" {
# bucket = "terraform-ssp-github-actions-state"
# region = "us-west-2"
# key = "e2e/local-clusters-outposts-prerequisites/terraform.tfstate"
# }
}

provider "aws" {
region = local.region
}

locals {
region = "us-west-2"
name = "ex-${basename(path.cwd)}"

terraform_version = "1.3.10"

tags = {
Example = local.name
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
}

################################################################################
# Prerequisites
################################################################################

module "ssm_bastion_ec2" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "~> 5.5"

name = "${local.name}-bastion"

create_iam_instance_profile = true
iam_role_policies = {
AdministratorAccess = "arn:aws:iam::aws:policy/AdministratorAccess"
}

instance_type = element(tolist(data.aws_outposts_outpost_instance_types.this.instance_types), 0)
ami_ssm_parameter = "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64"

user_data = <<-EOT
#!/bin/bash

# Add ssm-user since it won't exist until first login
adduser -m ssm-user
tee /etc/sudoers.d/ssm-agent-users <<'EOF'
# User rules for ssm-user
ssm-user ALL=(ALL) NOPASSWD:ALL
EOF
chmod 440 /etc/sudoers.d/ssm-agent-users

cd /home/ssm-user

# Install Terraform
dnf install git -y
curl -sSO https://releases.hashicorp.com/terraform/${local.terraform_version}/terraform_${local.terraform_version}_linux_amd64.zip
sudo unzip -qq terraform_${local.terraform_version}_linux_amd64.zip terraform -d /usr/bin/
rm terraform_${local.terraform_version}_linux_amd64.zip

# Install kubectl
curl -LO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl
install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
rm kubectl

# Copy Terraform files
for TF_FILE in eks main outpost; do
curl -O https://raw.githubusercontent.com/aws-ia/terraform-aws-eks-blueprints/feat/hybrid-section/patterns/local-cluster-outposts/$${TF_FILE}.tf
done
terraform init -upgrade

chown -R ssm-user:ssm-user /home/ssm-user/
EOT
user_data_replace_on_change = true

vpc_security_group_ids = [module.bastion_security_group.security_group_id]
subnet_id = element(data.aws_subnets.this.ids, 0)

tags = local.tags
}

output "ssm_start_session" {
description = "SSM start session command to connect to remote host created"
value = "aws ssm start-session --region ${local.region} --target ${module.ssm_bastion_ec2.id}"
}

module "bastion_security_group" {
source = "terraform-aws-modules/security-group/aws"
version = "~> 5.0"

name = "${local.name}-bastion"
description = "Security group to allow provisioning ${local.name} EKS local cluster on Outposts"
vpc_id = data.aws_vpc.this.id

ingress_with_cidr_blocks = [
{
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = data.aws_vpc.this.cidr_block
},
]
egress_with_cidr_blocks = [
{
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = "0.0.0.0/0"
},
]

tags = local.tags
}