aws-eks-vpc-3priv-3pub-3db-3front-sn

EKS with node group stacks in private subnets only. VPC: 3 availability zones; 3 private, 3 public, 3 db and 3 front subnets.

The load balancer is in the public subnets.

The EKS nodes are in the private subnets.

Client and QA environments are separated by EKS namespaces, and each uses a dedicated:

  • EKS node group
  • AWS IAM policy for the EKS node group
  • AWS Security Groups
  • AWS Load Balancers

Diagram

Infrastructure diagram

Auto-scaling

  • Scalable
  • Highly Available

Infrastructure Autoscaling diagram

Naming convention

Environment name convention

<Environment_type><VPC_subnets_CIDR_second_octet>

Example: test16 (a test environment whose VPC CIDR second octet is 16).

EKS namespaces

nspace<number (2 digits)>; the number must be unique per EKS cluster. It is used for the AWS alb.ingress.kubernetes.io/group.order annotation.

Examples: nspace10, nspace20, nspace21, nspace60, nspace61
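
As a sketch of how the namespace number maps to the annotation (the manifest below is illustrative and assumes the ALB ingress controller's IngressGroup feature; it is not taken from this repository's k8s_templates):

# Illustrative only: an Ingress in nspace60 joining a shared ALB ingress
# group; group.order "60" matches the namespace number.
cat <<'EOF' | kubectl apply -n nspace60 -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress                                  # hypothetical name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: test16-public  # hypothetical group
    alb.ingress.kubernetes.io/group.order: "60"
spec:
  rules:
  - host: example.test16-nspace60.example.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: example-svc                       # hypothetical service
          servicePort: 80
EOF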

VPC Subnets:

SUBNET_ID="16"

VpcBlock=10.${SUBNET_ID}.0.0/16

SubnetPublicAblock=10.${SUBNET_ID}.0.0/20
SubnetPublicBblock=10.${SUBNET_ID}.16.0/20
SubnetPublicCblock=10.${SUBNET_ID}.32.0/20

SubnetFrontAblock=10.${SUBNET_ID}.64.0/22
SubnetFrontBblock=10.${SUBNET_ID}.68.0/22
SubnetFrontCblock=10.${SUBNET_ID}.72.0/22

SubnetDbAblock=10.${SUBNET_ID}.96.0/22
SubnetDbBblock=10.${SUBNET_ID}.100.0/22
SubnetDbCblock=10.${SUBNET_ID}.104.0/22

SubnetPrivateAblock=10.${SUBNET_ID}.128.0/19
SubnetPrivateBblock=10.${SUBNET_ID}.160.0/19
SubnetPrivateCblock=10.${SUBNET_ID}.192.0/19
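
With SUBNET_ID="16" each availability zone gets a /20 public subnet (4096 addresses), a /22 front and a /22 db subnet (1024 addresses each) and a /19 private subnet (8192 addresses), all non-overlapping inside 10.16.0.0/16. A quick way to print the resolved plan (a sketch, not part of the repository scripts):

# Sketch: print the resolved subnet CIDRs for one environment.
SUBNET_ID="16"
for entry in \
  "Public  0.0/20   16.0/20  32.0/20" \
  "Front   64.0/22  68.0/22  72.0/22" \
  "Db      96.0/22  100.0/22 104.0/22" \
  "Private 128.0/19 160.0/19 192.0/19"
do
  set -- $entry
  tier=$1; shift
  for cidr in "$@"; do
    printf '%-8s 10.%s.%s\n' "$tier" "$SUBNET_ID" "$cidr"
  done
done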

EKS node groups

Two EKS node groups are created:

main
nspace60

main is for Kubernetes core service pods.

nspace60 is for our website test pods.

We can scale the main and website node groups individually, as sketched below.
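
A sketch of scaling one group without touching the other (the Auto Scaling group name below is hypothetical; list the groups first to find the real one):

# Sketch: find the node group's Auto Scaling group, then raise its capacity.
aws autoscaling describe-auto-scaling-groups \
  --query "AutoScalingGroups[].AutoScalingGroupName" --output text
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name "abc-test16-eks-nodes-nspace60" \
  --desired-capacity 4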

Variables, Configuration

Configuration is in the init.sh file.

Sensitive information and credentials are stored in AWS Parameter Store.
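
A sketch of reading one value back (the parameter name is hypothetical; the real names are defined by the files under configurations/):

# Sketch: read a secret from AWS Parameter Store with decryption.
aws ssm get-parameter \
  --name "/abc/test16/db-password" \
  --with-decryption \
  --query "Parameter.Value" --output text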

Usage

It is as simple as:

. ~/venv-aws-cli/bin/activate
export COMPANY_NAME_SHORT="abc" && export ENV_TYPE="test" && export IP_2ND_OCTET="16" && export NSPACE="nspace60" && export APP_NAME="app-http-content-from-git" && export CI_CD_DEPLOY=false && bash -c "./bin/deploy-env-full.sh"

After ~28 minutes you will see all workers as Ready:

$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-10-16-143-148.eu-west-1.compute.internal   Ready    <none>   77m   v1.18.9-eks-d1db3c
ip-10-16-183-113.eu-west-1.compute.internal   Ready    <none>   90m   v1.18.9-eks-d1db3c
ip-10-16-200-214.eu-west-1.compute.internal   Ready    <none>   77m   v1.18.9-eks-d1db3c
ip-10-16-222-192.eu-west-1.compute.internal   Ready    <none>   91m   v1.18.9-eks-d1db3c

$ kubectl get nodes --show-labels
NAME                                          STATUS   ROLES    AGE   VERSION              LABELS
ip-10-16-143-148.eu-west-1.compute.internal   Ready    <none>   77m   v1.18.9-eks-d1db3c   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t3.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-west-1,failure-domain.beta.kubernetes.io/zone=eu-west-1a,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-16-143-148.eu-west-1.compute.internal,kubernetes.io/os=linux,node.kubernetes.io/instance-type=t3.xlarge,nodesgroup=nspace60,topology.kubernetes.io/region=eu-west-1,topology.kubernetes.io/zone=eu-west-1a
ip-10-16-183-113.eu-west-1.compute.internal   Ready    <none>   91m   v1.18.9-eks-d1db3c   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t3a.small,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-west-1,failure-domain.beta.kubernetes.io/zone=eu-west-1b,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-16-183-113.eu-west-1.compute.internal,kubernetes.io/os=linux,node.kubernetes.io/instance-type=t3a.small,nodesgroup=main,topology.kubernetes.io/region=eu-west-1,topology.kubernetes.io/zone=eu-west-1b
ip-10-16-200-214.eu-west-1.compute.internal   Ready    <none>   77m   v1.18.9-eks-d1db3c   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t3.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-west-1,failure-domain.beta.kubernetes.io/zone=eu-west-1c,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-16-200-214.eu-west-1.compute.internal,kubernetes.io/os=linux,node.kubernetes.io/instance-type=t3.xlarge,nodesgroup=nspace60,topology.kubernetes.io/region=eu-west-1,topology.kubernetes.io/zone=eu-west-1c
ip-10-16-222-192.eu-west-1.compute.internal   Ready    <none>   91m   v1.18.9-eks-d1db3c   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=t3a.small,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-west-1,failure-domain.beta.kubernetes.io/zone=eu-west-1c,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-16-222-192.eu-west-1.compute.internal,kubernetes.io/os=linux,node.kubernetes.io/instance-type=t3a.small,nodesgroup=main,topology.kubernetes.io/region=eu-west-1,topology.kubernetes.io/zone=eu-west-1c
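
The nodesgroup label shown above is what pins workloads to a group. A minimal sketch (the pod name and image are illustrative, not from this repository):

# Sketch: schedule a pod onto the nspace60 node group via its label.
cat <<'EOF' | kubectl apply -n nspace60 -f -
apiVersion: v1
kind: Pod
metadata:
  name: nodesgroup-demo        # hypothetical name
spec:
  nodeSelector:
    nodesgroup: nspace60
  containers:
  - name: demo
    image: nginx:stable
EOF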

Application deployment

Application deployment is done by the Jenkins pipeline file config/pipeline/Jenkinsfile-deploy-app.

An application choice

An application version choice

Master branch Jenkins pipeline

AWS EKS ingress and ALB public deployment

Load balancer deployment is done by the Jenkins pipeline file config/pipeline/Jenkinsfile-deploy-app.

A load balancer choice

The load balancer selected

kubectl get pods,deploy,rs,sts,ds,svc,endpoints,ing,pv,pvc,hpa -o wide -n nspace60 | grep test16-nspace60-alb1-public
ingress.extensions/test16-nspace60-alb1-public-ingress  <none>  app-http-content-from-git.test16-nspace60.example.com  80  5m57s

The AWS ALB is created automatically.
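
Its DNS name appears in the Ingress status once provisioning completes; a sketch of reading it:

# Sketch: read the ALB hostname from the Ingress status.
kubectl get ingress test16-nspace60-alb1-public-ingress -n nspace60 \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'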

AWS ALB

AWS ALB Listeners

AWS ALB Listeners Rules

AWS ALB Listeners Certificate

AWS Target Group

DNS record provisioning

DNS record(s) pointing to the AWS ALB are created automatically in AWS Route53.

AWS Route53 records
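
A sketch of verifying the created record from the CLI (the hosted zone ID is hypothetical):

# Sketch: list the records that were created for this environment.
aws route53 list-resource-record-sets \
  --hosted-zone-id Z0000000EXAMPLE \
  --query "ResourceRecordSets[?contains(Name, 'test16-nspace60')]"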

Multi-Service Architecture and Infrastructure

How compatible are a multi-service architecture and a monolithic infrastructure code base?

I prefer an atomised, multi-service approach to infrastructure.

If I have to update a small part of an AWS Security Group, I should not have to touch the whole VPC or EKS infrastructure, and I should not have to trust a black box inside either Terraform or AWS CloudFormation. I prefer to change only the small AWS Security Group stack, as sketched below.
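
A sketch of that kind of narrow change (the template and stack names are hypothetical; the real templates live under cfn/):

# Sketch: redeploy only the security-group stack, leaving VPC and EKS alone.
aws cloudformation deploy \
  --template-file cfn/eks-nodes-security-groups.yaml \
  --stack-name abc-test16-eks-nodes-security-groups \
  --capabilities CAPABILITY_NAMED_IAM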

AWS Cloudformation Stacks

AWS Cloudformation Stacks

Folders structure

.
├── bin # binaries
├── cfn # CloudFormation templates
├── config
│   └── pipeline # CI/CD (Jenkins) files
├── configurations # files for AWS Parameter Store
├── dev-utils # utilities
├── download # folder to keep 3rd party installation files
│   ├── alb-ingress-controller
│   ├── clusterAutoscaler
│   ├── cwagent-fluentd
│   └── metrics-server-0.4.1
├── images
├── k8s_templates # EKS (Kubernetes) templates
└── templates # General templates

Clean-up

To delete all resources (VPC, workers, EKS cluster):

. ~/venv-aws-cli/bin/activate
export COMPANY_NAME_SHORT="abc" && export ENV_TYPE="test" && export IP_2ND_OCTET="16" && export NSPACE="nspace60" && export APP_NAME="app-http-content-from-git" && bash -c ". ./bin/lib_cfn.sh && envCleanup"

It could take 20+ minutes to clean up.
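
A sketch of watching the teardown progress:

# Sketch: list stacks that are still being deleted (or failed to delete).
aws cloudformation list-stacks \
  --stack-status-filter DELETE_IN_PROGRESS DELETE_FAILED \
  --query "StackSummaries[].[StackName,StackStatus]" --output table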

Note: the AWS key pair and AWS ECR will be kept.
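
If they are no longer needed, they can be removed manually (the names below are hypothetical):

# Sketch: delete the retained key pair and ECR repository by hand.
aws ec2 delete-key-pair --key-name abc-test16-keypair
aws ecr delete-repository --repository-name app-http-content-from-git --force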
