diff --git a/08-NEW-ELB-Application-LoadBalancers/08-01-Load-Balancer-Controller-Install/README.md b/08-NEW-ELB-Application-LoadBalancers/08-01-Load-Balancer-Controller-Install/README.md new file mode 100755 index 00000000..b266f2de --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-01-Load-Balancer-Controller-Install/README.md @@ -0,0 +1,530 @@ +--- +title: AWS Load Balancer Controller Install on AWS EKS +description: Learn to install AWS Load Balancer Controller for Ingress Implementation on AWS EKS +--- + + +## Step-00: Introduction +1. Create IAM Policy and make a note of Policy ARN +2. Create IAM Role and k8s Service Account and bound them together +3. Install AWS Load Balancer Controller using HELM3 CLI +4. Understand IngressClass Concept and create a default Ingress Class + +## Step-01: Pre-requisites +### Pre-requisite-1: eksctl & kubectl Command Line Utility +- Should be the latest eksctl version +```t +# Verify eksctl version +eksctl version + +# For installing or upgrading latest eksctl version +https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html + +# Verify EKS Cluster version +kubectl version --short +kubectl version +Important Note: You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane. For example, a 1.20 kubectl client works with Kubernetes 1.19, 1.20 and 1.21 clusters. + +# For installing kubectl cli +https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html +``` +### Pre-requisite-2: Create EKS Cluster and Worker Nodes (if not created) +```t +# Create Cluster (Section-01-02) +eksctl create cluster --name=eksdemo1 \ + --region=us-east-1 \ + --zones=us-east-1a,us-east-1b \ + --version="1.21" \ + --without-nodegroup + + +# Get List of clusters (Section-01-02) +eksctl get cluster + +# Template (Section-01-02) +eksctl utils associate-iam-oidc-provider \ + --region region-code \ + --cluster \ + --approve + +# Replace with region & cluster name (Section-01-02) +eksctl utils associate-iam-oidc-provider \ + --region us-east-1 \ + --cluster eksdemo1 \ + --approve + +# Create EKS NodeGroup in VPC Private Subnets (Section-07-01) +eksctl create nodegroup --cluster=eksdemo1 \ + --region=us-east-1 \ + --name=eksdemo1-ng-private1 \ + --node-type=t3.medium \ + --nodes-min=2 \ + --nodes-max=4 \ + --node-volume-size=20 \ + --ssh-access \ + --ssh-public-key=kube-demo \ + --managed \ + --asg-access \ + --external-dns-access \ + --full-ecr-access \ + --appmesh-access \ + --alb-ingress-access \ + --node-private-networking +``` +### Pre-requisite-3: Verify Cluster, Node Groups and configure kubectl cli if not configured +1. EKS Cluster +2. EKS Node Groups in Private Subnets +```t +# Verfy EKS Cluster +eksctl get cluster + +# Verify EKS Node Groups +eksctl get nodegroup --cluster=eksdemo1 + +# Verify if any IAM Service Accounts present in EKS Cluster +eksctl get iamserviceaccount --cluster=eksdemo1 +Observation: +1. No k8s Service accounts as of now. + +# Configure kubeconfig for kubectl +eksctl get cluster # TO GET CLUSTER NAME +aws eks --region update-kubeconfig --name +aws eks --region us-east-1 update-kubeconfig --name eksdemo1 + +# Verify EKS Nodes in EKS Cluster using kubectl +kubectl get nodes + +# Verify using AWS Management Console +1. EKS EC2 Nodes (Verify Subnet in Networking Tab) +2. EKS Cluster +``` + +## Step-02: Create IAM Policy +- Create IAM policy for the AWS Load Balancer Controller that allows it to make calls to AWS APIs on your behalf. 
+- As on today `2.3.1` is the latest Load Balancer Controller +- We will download always latest from main branch of Git Repo +- [AWS Load Balancer Controller Main Git repo](https://github.com/kubernetes-sigs/aws-load-balancer-controller) +```t +# Change Directroy +cd 08-NEW-ELB-Application-LoadBalancers/ +cd 08-01-Load-Balancer-Controller-Install + +# Delete files before download (if any present) +rm iam_policy_latest.json + +# Download IAM Policy +## Download latest +curl -o iam_policy_latest.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json +## Verify latest +ls -lrta + +## Download specific version +curl -o iam_policy_v2.3.1.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.3.1/docs/install/iam_policy.json + + +# Create IAM Policy using policy downloaded +aws iam create-policy \ + --policy-name AWSLoadBalancerControllerIAMPolicy \ + --policy-document file://iam_policy_latest.json + +## Sample Output +Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ aws iam create-policy \ +> --policy-name AWSLoadBalancerControllerIAMPolicy \ +> --policy-document file://iam_policy_latest.json +{ + "Policy": { + "PolicyName": "AWSLoadBalancerControllerIAMPolicy", + "PolicyId": "ANPASUF7HC7S52ZQAPETR", + "Arn": "arn:aws:iam::180789647333:policy/AWSLoadBalancerControllerIAMPolicy", + "Path": "/", + "DefaultVersionId": "v1", + "AttachmentCount": 0, + "PermissionsBoundaryUsageCount": 0, + "IsAttachable": true, + "CreateDate": "2022-02-02T04:51:21+00:00", + "UpdateDate": "2022-02-02T04:51:21+00:00" + } +} +Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ +``` +- **Important Note:** If you view the policy in the AWS Management Console, you may see warnings for ELB. These can be safely ignored because some of the actions only exist for ELB v2. You do not see warnings for ELB v2. + +### Make a note of Policy ARN +- Make a note of Policy ARN as we are going to use that in next step when creating IAM Role. +```t +# Policy ARN +Policy ARN: arn:aws:iam::180789647333:policy/AWSLoadBalancerControllerIAMPolicy +``` + + +## Step-03: Create an IAM role for the AWS LoadBalancer Controller and attach the role to the Kubernetes service account +- Applicable only with `eksctl` managed clusters +- This command will create an AWS IAM role +- This command also will create Kubernetes Service Account in k8s cluster +- In addition, this command will bound IAM Role created and the Kubernetes service account created +### Step-03-01: Create IAM Role using eksctl +```t +# Verify if any existing service account +kubectl get sa -n kube-system +kubectl get sa aws-load-balancer-controller -n kube-system +Obseravation: +1. 
Nothing with name "aws-load-balancer-controller" should exist + +# Template +eksctl create iamserviceaccount \ + --cluster=my_cluster \ + --namespace=kube-system \ + --name=aws-load-balancer-controller \ #Note: K8S Service Account Name that need to be bound to newly created IAM Role + --attach-policy-arn=arn:aws:iam::111122223333:policy/AWSLoadBalancerControllerIAMPolicy \ + --override-existing-serviceaccounts \ + --approve + + +# Replaced name, cluster and policy arn (Policy arn we took note in step-02) +eksctl create iamserviceaccount \ + --cluster=eksdemo1 \ + --namespace=kube-system \ + --name=aws-load-balancer-controller \ + --attach-policy-arn=arn:aws:iam::180789647333:policy/AWSLoadBalancerControllerIAMPolicy \ + --override-existing-serviceaccounts \ + --approve +``` +- **Sample Output** +```t +# Sample Output for IAM Service Account creation +Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ eksctl create iamserviceaccount \ +> --cluster=eksdemo1 \ +> --namespace=kube-system \ +> --name=aws-load-balancer-controller \ +> --attach-policy-arn=arn:aws:iam::180789647333:policy/AWSLoadBalancerControllerIAMPolicy \ +> --override-existing-serviceaccounts \ +> --approve +2022-02-02 10:22:49 [ℹ] eksctl version 0.82.0 +2022-02-02 10:22:49 [ℹ] using region us-east-1 +2022-02-02 10:22:52 [ℹ] 1 iamserviceaccount (kube-system/aws-load-balancer-controller) was included (based on the include/exclude rules) +2022-02-02 10:22:52 [!] metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set +2022-02-02 10:22:52 [ℹ] 1 task: { + 2 sequential sub-tasks: { + create IAM role for serviceaccount "kube-system/aws-load-balancer-controller", + create serviceaccount "kube-system/aws-load-balancer-controller", + } }2022-02-02 10:22:52 [ℹ] building iamserviceaccount stack "eksctl-eksdemo1-addon-iamserviceaccount-kube-system-aws-load-balancer-controller" +2022-02-02 10:22:53 [ℹ] deploying stack "eksctl-eksdemo1-addon-iamserviceaccount-kube-system-aws-load-balancer-controller" +2022-02-02 10:22:53 [ℹ] waiting for CloudFormation stack "eksctl-eksdemo1-addon-iamserviceaccount-kube-system-aws-load-balancer-controller" +2022-02-02 10:23:10 [ℹ] waiting for CloudFormation stack "eksctl-eksdemo1-addon-iamserviceaccount-kube-system-aws-load-balancer-controller" +2022-02-02 10:23:29 [ℹ] waiting for CloudFormation stack "eksctl-eksdemo1-addon-iamserviceaccount-kube-system-aws-load-balancer-controller" +2022-02-02 10:23:32 [ℹ] created serviceaccount "kube-system/aws-load-balancer-controller" +Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ +``` + +### Step-03-02: Verify using eksctl cli +```t +# Get IAM Service Account +eksctl get iamserviceaccount --cluster eksdemo1 + +# Sample Output +Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ eksctl get iamserviceaccount --cluster eksdemo1 +2022-02-02 10:23:50 [ℹ] eksctl version 0.82.0 +2022-02-02 10:23:50 [ℹ] using region us-east-1 +NAMESPACE NAME ROLE ARN +kube-system aws-load-balancer-controller arn:aws:iam::180789647333:role/eksctl-eksdemo1-addon-iamserviceaccount-kube-Role1-1244GWMVEAKEN +Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ +``` + +### Step-03-03: Verify CloudFormation Template eksctl created & IAM Role +- Goto Services -> CloudFormation +- **CFN Template Name:** eksctl-eksdemo1-addon-iamserviceaccount-kube-system-aws-load-balancer-controller +- Click on **Resources** tab +- Click on link in **Physical Id** to open the IAM Role +- Verify 
it has **eksctl-eksdemo1-addon-iamserviceaccount-kube-Role1-WFAWGQKTAVLR** associated + +### Step-03-04: Verify k8s Service Account using kubectl +```t +# Verify if any existing service account +kubectl get sa -n kube-system +kubectl get sa aws-load-balancer-controller -n kube-system +Obseravation: +1. We should see a new Service account created. + +# Describe Service Account aws-load-balancer-controller +kubectl describe sa aws-load-balancer-controller -n kube-system +``` +- **Observation:** You can see that newly created Role ARN is added in `Annotations` confirming that **AWS IAM role bound to a Kubernetes service account** +- **Output** +```t +## Sample Output +Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ kubectl describe sa aws-load-balancer-controller -n kube-system +Name: aws-load-balancer-controller +Namespace: kube-system +Labels: app.kubernetes.io/managed-by=eksctl +Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::180789647333:role/eksctl-eksdemo1-addon-iamserviceaccount-kube-Role1-1244GWMVEAKEN +Image pull secrets: +Mountable secrets: aws-load-balancer-controller-token-5w8th +Tokens: aws-load-balancer-controller-token-5w8th +Events: +Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ +``` + +## Step-04: Install the AWS Load Balancer Controller using Helm V3 +### Step-04-01: Install Helm +- [Install Helm](https://helm.sh/docs/intro/install/) if not installed +- [Install Helm for AWS EKS](https://docs.aws.amazon.com/eks/latest/userguide/helm.html) +```t +# Install Helm (if not installed) MacOS +brew install helm + +# Verify Helm version +helm version +``` +### Step-04-02: Install AWS Load Balancer Controller +- **Important-Note-1:** If you're deploying the controller to Amazon EC2 nodes that have restricted access to the Amazon EC2 instance metadata service (IMDS), or if you're deploying to Fargate, then add the following flags to the command that you run: +```t +--set region=region-code +--set vpcId=vpc-xxxxxxxx +``` +- **Important-Note-2:** If you're deploying to any Region other than us-west-2, then add the following flag to the command that you run, replacing account and region-code with the values for your region listed in Amazon EKS add-on container image addresses. +- [Get Region Code and Account info](https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html) +```t +--set image.repository=account.dkr.ecr.region-code.amazonaws.com/amazon/aws-load-balancer-controller +``` +```t +# Add the eks-charts repository. +helm repo add eks https://aws.github.io/eks-charts + +# Update your local repo to make sure that you have the most recent charts. +helm repo update + +# Install the AWS Load Balancer Controller. 
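## (Optional) Before installing, list the chart versions available in the eks-charts repo
helm search repo eks/aws-load-balancer-controller --versions

## (Optional) Look up the cluster VPC ID needed for --set vpcId below (assumes the eksdemo1 cluster created in the pre-requisites)
aws eks describe-cluster --name eksdemo1 --region us-east-1 --query "cluster.resourcesVpcConfig.vpcId" --output text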
+## Template +helm install aws-load-balancer-controller eks/aws-load-balancer-controller \ + -n kube-system \ + --set clusterName= \ + --set serviceAccount.create=false \ + --set serviceAccount.name=aws-load-balancer-controller \ + --set region= \ + --set vpcId= \ + --set image.repository=.dkr.ecr..amazonaws.com/amazon/aws-load-balancer-controller + +## Replace Cluster Name, Region Code, VPC ID, Image Repo Account ID and Region Code +helm install aws-load-balancer-controller eks/aws-load-balancer-controller \ + -n kube-system \ + --set clusterName=eksdemo1 \ + --set serviceAccount.create=false \ + --set serviceAccount.name=aws-load-balancer-controller \ + --set region=us-east-1 \ + --set vpcId=vpc-0165a396e41e292a3 \ + --set image.repository=602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon/aws-load-balancer-controller +``` +- **Sample output for AWS Load Balancer Controller Install steps** +```t +## Sample Ouput for AWS Load Balancer Controller Install steps +Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ helm install aws-load-balancer-controller eks/aws-load-balancer-controller \ +> -n kube-system \ +> --set clusterName=eksdemo1 \ +> --set serviceAccount.create=false \ +> --set serviceAccount.name=aws-load-balancer-controller \ +> --set region=us-east-1 \ +> --set vpcId=vpc-0570fda59c5aaf192 \ +> --set image.repository=602401143452.dkr.ecr.us-east-1.amazonaws.com/amazon/aws-load-balancer-controller +NAME: aws-load-balancer-controller +LAST DEPLOYED: Wed Feb 2 10:33:57 2022 +NAMESPACE: kube-system +STATUS: deployed +REVISION: 1 +TEST SUITE: None +NOTES: +AWS Load Balancer controller installed! +Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ +``` +### Step-04-03: Verify that the controller is installed and Webhook Service created +```t +# Verify that the controller is installed. +kubectl -n kube-system get deployment +kubectl -n kube-system get deployment aws-load-balancer-controller +kubectl -n kube-system describe deployment aws-load-balancer-controller + +# Sample Output +Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ kubectl get deployment -n kube-system aws-load-balancer-controller +NAME READY UP-TO-DATE AVAILABLE AGE +aws-load-balancer-controller 2/2 2 2 27s +Kalyans-MacBook-Pro:08-01-Load-Balancer-Controller-Install kdaida$ + +# Verify AWS Load Balancer Controller Webhook service created +kubectl -n kube-system get svc +kubectl -n kube-system get svc aws-load-balancer-webhook-service +kubectl -n kube-system describe svc aws-load-balancer-webhook-service + +# Sample Output +Kalyans-MacBook-Pro:aws-eks-kubernetes-masterclass-internal kdaida$ kubectl -n kube-system get svc aws-load-balancer-webhook-service +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +aws-load-balancer-webhook-service ClusterIP 10.100.53.52 443/TCP 61m +Kalyans-MacBook-Pro:aws-eks-kubernetes-masterclass-internal kdaida$ + +# Verify Labels in Service and Selector Labels in Deployment +kubectl -n kube-system get svc aws-load-balancer-webhook-service -o yaml +kubectl -n kube-system get deployment aws-load-balancer-controller -o yaml +Observation: +1. Verify "spec.selector" label in "aws-load-balancer-webhook-service" +2. Compare it with "aws-load-balancer-controller" Deployment "spec.selector.matchLabels" +3. Both values should be same which traffic coming to "aws-load-balancer-webhook-service" on port 443 will be sent to port 9443 on "aws-load-balancer-controller" deployment related pods. 
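# (Optional) Print only the two selector maps to compare them quickly (jsonpath output; assumes the default resource names used above)
kubectl -n kube-system get svc aws-load-balancer-webhook-service -o jsonpath='{.spec.selector}'
kubectl -n kube-system get deployment aws-load-balancer-controller -o jsonpath='{.spec.selector.matchLabels}'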
+``` + +### Step-04-04: Verify AWS Load Balancer Controller Logs +```t +# List Pods +kubectl get pods -n kube-system + +# Review logs for AWS LB Controller POD-1 +kubectl -n kube-system logs -f +kubectl -n kube-system logs -f aws-load-balancer-controller-86b598cbd6-5pjfk + +# Review logs for AWS LB Controller POD-2 +kubectl -n kube-system logs -f +kubectl -n kube-system logs -f aws-load-balancer-controller-86b598cbd6-vqqsk +``` + +### Step-04-05: Verify AWS Load Balancer Controller k8s Service Account - Internals +```t +# List Service Account and its secret +kubectl -n kube-system get sa aws-load-balancer-controller +kubectl -n kube-system get sa aws-load-balancer-controller -o yaml +kubectl -n kube-system get secret -o yaml +kubectl -n kube-system get secret aws-load-balancer-controller-token-5w8th +kubectl -n kube-system get secret aws-load-balancer-controller-token-5w8th -o yaml +## Decoce ca.crt using below two websites +https://www.base64decode.org/ +https://www.sslchecker.com/certdecoder + +## Decode token using below two websites +https://www.base64decode.org/ +https://jwt.io/ +Observation: +1. Review decoded JWT Token + +# List Deployment in YAML format +kubectl -n kube-system get deploy aws-load-balancer-controller -o yaml +Observation: +1. Verify "spec.template.spec.serviceAccount" and "spec.template.spec.serviceAccountName" in "aws-load-balancer-controller" Deployment +2. We should find the Service Account Name as "aws-load-balancer-controller" + +# List Pods in YAML format +kubectl -n kube-system get pods +kubectl -n kube-system get pod -o yaml +kubectl -n kube-system get pod aws-load-balancer-controller-65b4f64d6c-h2vh4 -o yaml +Observation: +1. Verify "spec.serviceAccount" and "spec.serviceAccountName" +2. We should find the Service Account Name as "aws-load-balancer-controller" +3. Verify "spec.volumes". You should find something as below, which is a temporary credentials to access AWS Services +CHECK-1: Verify "spec.volumes.name = aws-iam-token" + - name: aws-iam-token + projected: + defaultMode: 420 + sources: + - serviceAccountToken: + audience: sts.amazonaws.com + expirationSeconds: 86400 + path: token +CHECK-2: Verify Volume Mounts + volumeMounts: + - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount + name: aws-iam-token + readOnly: true +CHECK-3: Verify ENVs whose path name is "token" + - name: AWS_WEB_IDENTITY_TOKEN_FILE + value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token +``` + +### Step-04-06: Verify TLS Certs for AWS Load Balancer Controller - Internals +```t +# List aws-load-balancer-tls secret +kubectl -n kube-system get secret aws-load-balancer-tls -o yaml + +# Verify the ca.crt and tls.crt in below websites +https://www.base64decode.org/ +https://www.sslchecker.com/certdecoder + +# Make a note of Common Name and SAN from above +Common Name: aws-load-balancer-controller +SAN: aws-load-balancer-webhook-service.kube-system, aws-load-balancer-webhook-service.kube-system.svc + +# List Pods in YAML format +kubectl -n kube-system get pods +kubectl -n kube-system get pod -o yaml +kubectl -n kube-system get pod aws-load-balancer-controller-65b4f64d6c-h2vh4 -o yaml +Observation: +1. 
Verify how the secret is mounted in AWS Load Balancer Controller Pod +CHECK-2: Verify Volume Mounts + volumeMounts: + - mountPath: /tmp/k8s-webhook-server/serving-certs + name: cert + readOnly: true +CHECK-3: Verify Volumes + volumes: + - name: cert + secret: + defaultMode: 420 + secretName: aws-load-balancer-tls +``` + +### Step-04-07: UNINSTALL AWS Load Balancer Controller using Helm Command (Information Purpose - SHOULD NOT EXECUTE THIS COMMAND) +- This step should not be implemented. +- This is just put it here for us to know how to uninstall aws load balancer controller from EKS Cluster +```t +# Uninstall AWS Load Balancer Controller +helm uninstall aws-load-balancer-controller -n kube-system +``` + + + +## Step-05: Ingress Class Concept +- Understand what is Ingress Class +- Understand how it overrides the default deprecated annotation `#kubernetes.io/ingress.class: "alb"` +- [Ingress Class Documentation Reference](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/ingress_class/) +- [Different Ingress Controllers available today](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) + + +## Step-06: Review IngressClass Kubernetes Manifest +- **File Location:** `08-01-Load-Balancer-Controller-Install/kube-manifests/01-ingressclass-resource.yaml` +- Understand in detail about annotation `ingressclass.kubernetes.io/is-default-class: "true"` +```yaml +apiVersion: networking.k8s.io/v1 +kind: IngressClass +metadata: + name: my-aws-ingress-class + annotations: + ingressclass.kubernetes.io/is-default-class: "true" +spec: + controller: ingress.k8s.aws/alb + +## Additional Note +# 1. You can mark a particular IngressClass as the default for your cluster. +# 2. Setting the ingressclass.kubernetes.io/is-default-class annotation to true on an IngressClass resource will ensure that new Ingresses without an ingressClassName field specified will be assigned this default IngressClass. +# 3. 
Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/ingress/ingress_class/ +``` + +## Step-07: Create IngressClass Resource +```t +# Navigate to Directory +cd 08-01-Load-Balancer-Controller-Install + +# Create IngressClass Resource +kubectl apply -f kube-manifests + +# Verify IngressClass Resource +kubectl get ingressclass + +# Describe IngressClass Resource +kubectl describe ingressclass my-aws-ingress-class +``` + +## References +- [AWS Load Balancer Controller Install](https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html) +- [ECR Repository per region](https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html) + + + + + + + + + diff --git a/08-NEW-ELB-Application-LoadBalancers/08-01-Load-Balancer-Controller-Install/iam_policy_latest.json b/08-NEW-ELB-Application-LoadBalancers/08-01-Load-Balancer-Controller-Install/iam_policy_latest.json new file mode 100644 index 00000000..a8d47c8b --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-01-Load-Balancer-Controller-Install/iam_policy_latest.json @@ -0,0 +1,219 @@ +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "iam:CreateServiceLinkedRole" + ], + "Resource": "*", + "Condition": { + "StringEquals": { + "iam:AWSServiceName": "elasticloadbalancing.amazonaws.com" + } + } + }, + { + "Effect": "Allow", + "Action": [ + "ec2:DescribeAccountAttributes", + "ec2:DescribeAddresses", + "ec2:DescribeAvailabilityZones", + "ec2:DescribeInternetGateways", + "ec2:DescribeVpcs", + "ec2:DescribeVpcPeeringConnections", + "ec2:DescribeSubnets", + "ec2:DescribeSecurityGroups", + "ec2:DescribeInstances", + "ec2:DescribeNetworkInterfaces", + "ec2:DescribeTags", + "ec2:GetCoipPoolUsage", + "ec2:DescribeCoipPools", + "elasticloadbalancing:DescribeLoadBalancers", + "elasticloadbalancing:DescribeLoadBalancerAttributes", + "elasticloadbalancing:DescribeListeners", + "elasticloadbalancing:DescribeListenerCertificates", + "elasticloadbalancing:DescribeSSLPolicies", + "elasticloadbalancing:DescribeRules", + "elasticloadbalancing:DescribeTargetGroups", + "elasticloadbalancing:DescribeTargetGroupAttributes", + "elasticloadbalancing:DescribeTargetHealth", + "elasticloadbalancing:DescribeTags" + ], + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "cognito-idp:DescribeUserPoolClient", + "acm:ListCertificates", + "acm:DescribeCertificate", + "iam:ListServerCertificates", + "iam:GetServerCertificate", + "waf-regional:GetWebACL", + "waf-regional:GetWebACLForResource", + "waf-regional:AssociateWebACL", + "waf-regional:DisassociateWebACL", + "wafv2:GetWebACL", + "wafv2:GetWebACLForResource", + "wafv2:AssociateWebACL", + "wafv2:DisassociateWebACL", + "shield:GetSubscriptionState", + "shield:DescribeProtection", + "shield:CreateProtection", + "shield:DeleteProtection" + ], + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "ec2:AuthorizeSecurityGroupIngress", + "ec2:RevokeSecurityGroupIngress" + ], + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "ec2:CreateSecurityGroup" + ], + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "ec2:CreateTags" + ], + "Resource": "arn:aws:ec2:*:*:security-group/*", + "Condition": { + "StringEquals": { + "ec2:CreateAction": "CreateSecurityGroup" + }, + "Null": { + "aws:RequestTag/elbv2.k8s.aws/cluster": "false" + } + } + }, + { + "Effect": "Allow", + "Action": [ + "ec2:CreateTags", + "ec2:DeleteTags" + ], + "Resource": "arn:aws:ec2:*:*:security-group/*", + "Condition": { + 
"Null": { + "aws:RequestTag/elbv2.k8s.aws/cluster": "true", + "aws:ResourceTag/elbv2.k8s.aws/cluster": "false" + } + } + }, + { + "Effect": "Allow", + "Action": [ + "ec2:AuthorizeSecurityGroupIngress", + "ec2:RevokeSecurityGroupIngress", + "ec2:DeleteSecurityGroup" + ], + "Resource": "*", + "Condition": { + "Null": { + "aws:ResourceTag/elbv2.k8s.aws/cluster": "false" + } + } + }, + { + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:CreateLoadBalancer", + "elasticloadbalancing:CreateTargetGroup" + ], + "Resource": "*", + "Condition": { + "Null": { + "aws:RequestTag/elbv2.k8s.aws/cluster": "false" + } + } + }, + { + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:CreateListener", + "elasticloadbalancing:DeleteListener", + "elasticloadbalancing:CreateRule", + "elasticloadbalancing:DeleteRule" + ], + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:AddTags", + "elasticloadbalancing:RemoveTags" + ], + "Resource": [ + "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*", + "arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*", + "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*" + ], + "Condition": { + "Null": { + "aws:RequestTag/elbv2.k8s.aws/cluster": "true", + "aws:ResourceTag/elbv2.k8s.aws/cluster": "false" + } + } + }, + { + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:AddTags", + "elasticloadbalancing:RemoveTags" + ], + "Resource": [ + "arn:aws:elasticloadbalancing:*:*:listener/net/*/*/*", + "arn:aws:elasticloadbalancing:*:*:listener/app/*/*/*", + "arn:aws:elasticloadbalancing:*:*:listener-rule/net/*/*/*", + "arn:aws:elasticloadbalancing:*:*:listener-rule/app/*/*/*" + ] + }, + { + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:ModifyLoadBalancerAttributes", + "elasticloadbalancing:SetIpAddressType", + "elasticloadbalancing:SetSecurityGroups", + "elasticloadbalancing:SetSubnets", + "elasticloadbalancing:DeleteLoadBalancer", + "elasticloadbalancing:ModifyTargetGroup", + "elasticloadbalancing:ModifyTargetGroupAttributes", + "elasticloadbalancing:DeleteTargetGroup" + ], + "Resource": "*", + "Condition": { + "Null": { + "aws:ResourceTag/elbv2.k8s.aws/cluster": "false" + } + } + }, + { + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:RegisterTargets", + "elasticloadbalancing:DeregisterTargets" + ], + "Resource": "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*" + }, + { + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:SetWebAcl", + "elasticloadbalancing:ModifyListener", + "elasticloadbalancing:AddListenerCertificates", + "elasticloadbalancing:RemoveListenerCertificates", + "elasticloadbalancing:ModifyRule" + ], + "Resource": "*" + } + ] +} diff --git a/08-NEW-ELB-Application-LoadBalancers/08-01-Load-Balancer-Controller-Install/iam_policy_v2.3.1.json b/08-NEW-ELB-Application-LoadBalancers/08-01-Load-Balancer-Controller-Install/iam_policy_v2.3.1.json new file mode 100644 index 00000000..4e6e4dee --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-01-Load-Balancer-Controller-Install/iam_policy_v2.3.1.json @@ -0,0 +1,217 @@ +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": "iam:CreateServiceLinkedRole", + "Resource": "*", + "Condition": { + "StringEquals": { + "iam:AWSServiceName": "elasticloadbalancing.amazonaws.com" + } + } + }, + { + "Effect": "Allow", + "Action": [ + "ec2:DescribeAccountAttributes", + "ec2:DescribeAddresses", + "ec2:DescribeAvailabilityZones", + "ec2:DescribeInternetGateways", + "ec2:DescribeVpcs", + "ec2:DescribeVpcPeeringConnections", 
+ "ec2:DescribeSubnets", + "ec2:DescribeSecurityGroups", + "ec2:DescribeInstances", + "ec2:DescribeNetworkInterfaces", + "ec2:DescribeTags", + "ec2:GetCoipPoolUsage", + "ec2:DescribeCoipPools", + "elasticloadbalancing:DescribeLoadBalancers", + "elasticloadbalancing:DescribeLoadBalancerAttributes", + "elasticloadbalancing:DescribeListeners", + "elasticloadbalancing:DescribeListenerCertificates", + "elasticloadbalancing:DescribeSSLPolicies", + "elasticloadbalancing:DescribeRules", + "elasticloadbalancing:DescribeTargetGroups", + "elasticloadbalancing:DescribeTargetGroupAttributes", + "elasticloadbalancing:DescribeTargetHealth", + "elasticloadbalancing:DescribeTags" + ], + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "cognito-idp:DescribeUserPoolClient", + "acm:ListCertificates", + "acm:DescribeCertificate", + "iam:ListServerCertificates", + "iam:GetServerCertificate", + "waf-regional:GetWebACL", + "waf-regional:GetWebACLForResource", + "waf-regional:AssociateWebACL", + "waf-regional:DisassociateWebACL", + "wafv2:GetWebACL", + "wafv2:GetWebACLForResource", + "wafv2:AssociateWebACL", + "wafv2:DisassociateWebACL", + "shield:GetSubscriptionState", + "shield:DescribeProtection", + "shield:CreateProtection", + "shield:DeleteProtection" + ], + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "ec2:AuthorizeSecurityGroupIngress", + "ec2:RevokeSecurityGroupIngress" + ], + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "ec2:CreateSecurityGroup" + ], + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "ec2:CreateTags" + ], + "Resource": "arn:aws:ec2:*:*:security-group/*", + "Condition": { + "StringEquals": { + "ec2:CreateAction": "CreateSecurityGroup" + }, + "Null": { + "aws:RequestTag/elbv2.k8s.aws/cluster": "false" + } + } + }, + { + "Effect": "Allow", + "Action": [ + "ec2:CreateTags", + "ec2:DeleteTags" + ], + "Resource": "arn:aws:ec2:*:*:security-group/*", + "Condition": { + "Null": { + "aws:RequestTag/elbv2.k8s.aws/cluster": "true", + "aws:ResourceTag/elbv2.k8s.aws/cluster": "false" + } + } + }, + { + "Effect": "Allow", + "Action": [ + "ec2:AuthorizeSecurityGroupIngress", + "ec2:RevokeSecurityGroupIngress", + "ec2:DeleteSecurityGroup" + ], + "Resource": "*", + "Condition": { + "Null": { + "aws:ResourceTag/elbv2.k8s.aws/cluster": "false" + } + } + }, + { + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:CreateLoadBalancer", + "elasticloadbalancing:CreateTargetGroup" + ], + "Resource": "*", + "Condition": { + "Null": { + "aws:RequestTag/elbv2.k8s.aws/cluster": "false" + } + } + }, + { + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:CreateListener", + "elasticloadbalancing:DeleteListener", + "elasticloadbalancing:CreateRule", + "elasticloadbalancing:DeleteRule" + ], + "Resource": "*" + }, + { + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:AddTags", + "elasticloadbalancing:RemoveTags" + ], + "Resource": [ + "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*", + "arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*", + "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*" + ], + "Condition": { + "Null": { + "aws:RequestTag/elbv2.k8s.aws/cluster": "true", + "aws:ResourceTag/elbv2.k8s.aws/cluster": "false" + } + } + }, + { + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:AddTags", + "elasticloadbalancing:RemoveTags" + ], + "Resource": [ + "arn:aws:elasticloadbalancing:*:*:listener/net/*/*/*", + "arn:aws:elasticloadbalancing:*:*:listener/app/*/*/*", + 
"arn:aws:elasticloadbalancing:*:*:listener-rule/net/*/*/*", + "arn:aws:elasticloadbalancing:*:*:listener-rule/app/*/*/*" + ] + }, + { + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:ModifyLoadBalancerAttributes", + "elasticloadbalancing:SetIpAddressType", + "elasticloadbalancing:SetSecurityGroups", + "elasticloadbalancing:SetSubnets", + "elasticloadbalancing:DeleteLoadBalancer", + "elasticloadbalancing:ModifyTargetGroup", + "elasticloadbalancing:ModifyTargetGroupAttributes", + "elasticloadbalancing:DeleteTargetGroup" + ], + "Resource": "*", + "Condition": { + "Null": { + "aws:ResourceTag/elbv2.k8s.aws/cluster": "false" + } + } + }, + { + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:RegisterTargets", + "elasticloadbalancing:DeregisterTargets" + ], + "Resource": "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*" + }, + { + "Effect": "Allow", + "Action": [ + "elasticloadbalancing:SetWebAcl", + "elasticloadbalancing:ModifyListener", + "elasticloadbalancing:AddListenerCertificates", + "elasticloadbalancing:RemoveListenerCertificates", + "elasticloadbalancing:ModifyRule" + ], + "Resource": "*" + } + ] +} diff --git a/08-NEW-ELB-Application-LoadBalancers/08-01-Load-Balancer-Controller-Install/kube-manifests/01-ingressclass-resource.yaml b/08-NEW-ELB-Application-LoadBalancers/08-01-Load-Balancer-Controller-Install/kube-manifests/01-ingressclass-resource.yaml new file mode 100755 index 00000000..8196bef5 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-01-Load-Balancer-Controller-Install/kube-manifests/01-ingressclass-resource.yaml @@ -0,0 +1,13 @@ +apiVersion: networking.k8s.io/v1 +kind: IngressClass +metadata: + name: my-aws-ingress-class + annotations: + ingressclass.kubernetes.io/is-default-class: "true" +spec: + controller: ingress.k8s.aws/alb + +## Additional Note +# 1. You can mark a particular IngressClass as the default for your cluster. +# 2. Setting the ingressclass.kubernetes.io/is-default-class annotation to true on an IngressClass resource will ensure that new Ingresses without an spec.ingressClassName field specified will be assigned this default IngressClass. +# 3. 
Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/ingress/ingress_class/ \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-02-ALB-Ingress-Basics/01-kube-manifests-default-backend/01-Nginx-App1-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-02-ALB-Ingress-Basics/01-kube-manifests-default-backend/01-Nginx-App1-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..5a9b6d94 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-02-ALB-Ingress-Basics/01-kube-manifests-default-backend/01-Nginx-App1-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app1-nginx + template: + metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-nodeport-service + labels: + app: app1-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer +# alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: NodePort + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-02-ALB-Ingress-Basics/01-kube-manifests-default-backend/02-ALB-Ingress-Basic.yml b/08-NEW-ELB-Application-LoadBalancers/08-02-ALB-Ingress-Basics/01-kube-manifests-default-backend/02-ALB-Ingress-Basic.yml new file mode 100755 index 00000000..917c6ee4 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-02-ALB-Ingress-Basics/01-kube-manifests-default-backend/02-ALB-Ingress-Basic.yml @@ -0,0 +1,33 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-nginxapp1 + labels: + app: app1-nginx + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: app1ingress + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) # Additional Notes: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/ingress/ingress_class/#deprecated-kubernetesioingressclass-annotation + # Ingress Core Settings + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + defaultBackend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + + +# 1. If "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster +# 2. 
Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` diff --git a/08-NEW-ELB-Application-LoadBalancers/08-02-ALB-Ingress-Basics/02-kube-manifests-rules/01-Nginx-App1-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-02-ALB-Ingress-Basics/02-kube-manifests-rules/01-Nginx-App1-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..5a9b6d94 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-02-ALB-Ingress-Basics/02-kube-manifests-rules/01-Nginx-App1-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app1-nginx + template: + metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-nodeport-service + labels: + app: app1-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer +# alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: NodePort + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-02-ALB-Ingress-Basics/02-kube-manifests-rules/02-ALB-Ingress-Basic.yml b/08-NEW-ELB-Application-LoadBalancers/08-02-ALB-Ingress-Basics/02-kube-manifests-rules/02-ALB-Ingress-Basic.yml new file mode 100755 index 00000000..d2759d82 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-02-ALB-Ingress-Basics/02-kube-manifests-rules/02-ALB-Ingress-Basic.yml @@ -0,0 +1,38 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-nginxapp1 + labels: + app: app1-nginx + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: app1ingressrules + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + # Ingress Core Settings + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + rules: + - http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + +# 1. If "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster +# 2. 
Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` diff --git a/08-NEW-ELB-Application-LoadBalancers/08-02-ALB-Ingress-Basics/README.md b/08-NEW-ELB-Application-LoadBalancers/08-02-ALB-Ingress-Basics/README.md new file mode 100755 index 00000000..35132bb2 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-02-ALB-Ingress-Basics/README.md @@ -0,0 +1,262 @@ +--- +title: AWS Load Balancer Controller - Ingress Basics +description: Learn AWS Load Balancer Controller - Ingress Basics +--- + +## Step-01: Introduction +- Discuss about the Application Architecture which we are going to deploy +- Understand the following Ingress Concepts + - [Annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/) + - [ingressClassName](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/ingress_class/) + - defaultBackend + - rules + +## Step-02: Review App1 Deployment kube-manifest +- **File Location:** `01-kube-manifests-default-backend/01-Nginx-App1-Deployment-and-NodePortService.yml` +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app1-nginx + template: + metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 +``` +## Step-03: Review App1 NodePort Service +- **File Location:** `01-kube-manifests-default-backend/01-Nginx-App1-Deployment-and-NodePortService.yml` +```yaml +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-nodeport-service + labels: + app: app1-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer +# alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: NodePort + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 +``` + +## Step-04: Review Ingress kube-manifest with Default Backend Option +- [Annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/) +- **File Location:** `01-kube-manifests-default-backend/02-ALB-Ingress-Basic.yml` +```yaml +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-nginxapp1 + labels: + app: app1-nginx + annotations: + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + # Ingress Core Settings + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' +spec: + ingressClassName: ic-external-lb # Ingress Class + defaultBackend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 +``` + +## Step-05: Deploy kube-manifests and Verify +```t +# Change Directory +cd 
08-02-ALB-Ingress-Basics + +# Deploy kube-manifests +kubectl apply -f 01-kube-manifests-default-backend/ + +# Verify k8s Deployment and Pods +kubectl get deploy +kubectl get pods + +# Verify Ingress (Make a note of Address field) +kubectl get ingress +Obsevation: +1. Verify the ADDRESS value, we should see something like "app1ingress-1334515506.us-east-1.elb.amazonaws.com" + +# Describe Ingress Controller +kubectl describe ingress ingress-nginxapp1 +Observation: +1. Review Default Backend and Rules + +# List Services +kubectl get svc + +# Verify Application Load Balancer using +Goto AWS Mgmt Console -> Services -> EC2 -> Load Balancers +1. Verify Listeners and Rules inside a listener +2. Verify Target Groups + +# Access App using Browser +kubectl get ingress +http:// +http:///app1/index.html +or +http:// +http:///app1/index.html + +# Sample from my environment (for reference only) +http://app1ingress-154912460.us-east-1.elb.amazonaws.com +http://app1ingress-154912460.us-east-1.elb.amazonaws.com/app1/index.html + +# Verify AWS Load Balancer Controller logs +kubectl get po -n kube-system +## POD1 Logs: +kubectl -n kube-system logs -f +kubectl -n kube-system logs -f aws-load-balancer-controller-65b4f64d6c-h2vh4 +##POD2 Logs: +kubectl -n kube-system logs -f +kubectl -n kube-system logs -f aws-load-balancer-controller-65b4f64d6c-t7qqb +``` + +## Step-06: Clean Up +```t +# Delete Kubernetes Resources +kubectl delete -f 01-kube-manifests-default-backend/ +``` + +## Step-07: Review Ingress kube-manifest with Ingress Rules +- Discuss about [Ingress Path Types](https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types) +- [Better Path Matching With Path Types](https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#better-path-matching-with-path-types) +- [Sample Ingress Rule](https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource) +- **ImplementationSpecific (default):** With this path type, matching is up to the controller implementing the IngressClass. Implementations can treat this as a separate pathType or treat it identically to the Prefix or Exact path types. +- **Exact:** Matches the URL path exactly and with case sensitivity. +- **Prefix:** Matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis. 
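- For a quick feel of how `Exact` and `Prefix` behave, the snippet below is an illustrative rules fragment (not part of the course manifests; the backend service names are hypothetical) showing which request paths each pathType would match.
```yaml
# Illustrative only: hypothetical backend services, used to contrast Exact vs Prefix matching
spec:
  rules:
    - http:
        paths:
          - path: /login                   # Exact: matches /login only (not /login/ or /login/page)
            pathType: Exact
            backend:
              service:
                name: demo-exact-service   # hypothetical service
                port:
                  number: 80
          - path: /app1                    # Prefix: matches /app1, /app1/, /app1/index.html
            pathType: Prefix
            backend:
              service:
                name: demo-prefix-service  # hypothetical service
                port:
                  number: 80
```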
+ +- **File Location:** `02-kube-manifests-rules\02-ALB-Ingress-Basic.yml` +```yaml +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-nginxapp1 + labels: + app: app1-nginx + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: app1ingressrules + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + # Ingress Core Settings + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' +spec: + ingressClassName: ic-external-lb # Ingress Class + rules: + - http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + + +# 1. If "spec.ingressClassName: ic-external-lb" not specified, will reference default ingress class on this kubernetes cluster +# 2. Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` +``` + +## Step-08: Deploy kube-manifests and Verify +```t +# Change Directory +cd 08-02-ALB-Ingress-Basics + +# Deploy kube-manifests +kubectl apply -f 02-kube-manifests-rules/ + +# Verify k8s Deployment and Pods +kubectl get deploy +kubectl get pods + +# Verify Ingress (Make a note of Address field) +kubectl get ingress +Obsevation: +1. Verify the ADDRESS value, we should see something like "app1ingressrules-154912460.us-east-1.elb.amazonaws.com" + +# Describe Ingress Controller +kubectl describe ingress ingress-nginxapp1 +Observation: +1. Review Default Backend and Rules + +# List Services +kubectl get svc + +# Verify Application Load Balancer using +Goto AWS Mgmt Console -> Services -> EC2 -> Load Balancers +1. Verify Listeners and Rules inside a listener +2. 
Verify Target Groups + +# Access App using Browser +kubectl get ingress +http:// +http:///app1/index.html +or +http:// +http:///app1/index.html + +# Sample from my environment (for reference only) +http://app1ingressrules-154912460.us-east-1.elb.amazonaws.com +http://app1ingressrules-154912460.us-east-1.elb.amazonaws.com/app1/index.html + +# Verify AWS Load Balancer Controller logs +kubectl get po -n kube-system +kubectl logs -f aws-load-balancer-controller-794b7844dd-8hk7n -n kube-system +``` + +## Step-09: Clean Up +```t +# Delete Kubernetes Resources +kubectl delete -f 02-kube-manifests-rules/ + +# Verify if Ingress Deleted successfully +kubectl get ingress +Important Note: It is going to cost us heavily if we leave ALB load balancer idle without deleting it properly + +# Verify Application Load Balancer DELETED +Goto AWS Mgmt Console -> Services -> EC2 -> Load Balancers +``` + + + diff --git a/08-NEW-ELB-Application-LoadBalancers/08-03-ALB-Ingress-ContextPath-Based-Routing/README.md b/08-NEW-ELB-Application-LoadBalancers/08-03-ALB-Ingress-ContextPath-Based-Routing/README.md new file mode 100755 index 00000000..82fc2e48 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-03-ALB-Ingress-ContextPath-Based-Routing/README.md @@ -0,0 +1,200 @@ +--- +title: AWS Load Balancer Ingress Context Path Based Routing +description: Learn AWS Load Balancer Controller - Ingress Context Path Based Routing +--- + +## Step-01: Introduction +- Discuss about the Architecture we are going to build as part of this Section +- We are going to deploy all these 3 apps in kubernetes with context path based routing enabled in Ingress Controller + - /app1/* - should go to app1-nginx-nodeport-service + - /app2/* - should go to app1-nginx-nodeport-service + - /* - should go to app3-nginx-nodeport-service +- As part of this process, this respective annotation `alb.ingress.kubernetes.io/healthcheck-path:` will be moved to respective application NodePort Service. 
+- Only generic settings will be present in Ingress manifest annotations area `04-ALB-Ingress-ContextPath-Based-Routing.yml` + + +## Step-02: Review Nginx App1, App2 & App3 Deployment & Service +- Differences for all 3 apps will be only two fields from kubernetes manifests perspective and their naming conventions + - **Kubernetes Deployment:** Container Image name + - **Kubernetes Node Port Service:** Health check URL path +- **App1 Nginx: 01-Nginx-App1-Deployment-and-NodePortService.yml** + - **image:** stacksimplify/kube-nginxapp1:1.0.0 + - **Annotation:** alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +- **App2 Nginx: 02-Nginx-App2-Deployment-and-NodePortService.yml** + - **image:** stacksimplify/kube-nginxapp2:1.0.0 + - **Annotation:** alb.ingress.kubernetes.io/healthcheck-path: /app2/index.html +- **App3 Nginx: 03-Nginx-App3-Deployment-and-NodePortService.yml** + - **image:** stacksimplify/kubenginx:1.0.0 + - **Annotation:** alb.ingress.kubernetes.io/healthcheck-path: /index.html + + + +## Step-03: Create ALB Ingress Context path based Routing Kubernetes manifest +- **04-ALB-Ingress-ContextPath-Based-Routing.yml** +```yaml +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-cpr-demo + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: cpr-ingress + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + rules: + - http: + paths: + - path: /app1 + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + - path: /app2 + pathType: Prefix + backend: + service: + name: app2-nginx-nodeport-service + port: + number: 80 + - path: / + pathType: Prefix + backend: + service: + name: app3-nginx-nodeport-service + port: + number: 80 + +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. + +# 1. If "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster +# 2. 
Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` +``` + +## Step-04: Deploy all manifests and test +```t +# Deploy Kubernetes manifests +kubectl apply -f kube-manifests/ + +# List Pods +kubectl get pods + +# List Services +kubectl get svc + +# List Ingress Load Balancers +kubectl get ingress + +# Describe Ingress and view Rules +kubectl describe ingress ingress-cpr-demo + +# Verify AWS Load Balancer Controller logs +kubectl -n kube-system get pods +kubectl -n kube-system logs -f aws-load-balancer-controller-794b7844dd-8hk7n +``` + +## Step-05: Verify Application Load Balancer on AWS Management Console** +- Verify Load Balancer + - In Listeners Tab, click on **View/Edit Rules** under Rules +- Verify Target Groups + - GroupD Details + - Targets: Ensure they are healthy + - Verify Health check path + - Verify all 3 targets are healthy) +```t +# Access Application +http:///app1/index.html +http:///app2/index.html +http:/// +``` + +## Step-06: Test Order in Context path based routing +### Step-0-01: Move Root Context Path to top +- **File:** 04-ALB-Ingress-ContextPath-Based-Routing.yml +```yaml + ingressClassName: my-aws-ingress-class # Ingress Class + rules: + - http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: app3-nginx-nodeport-service + port: + number: 80 + - path: /app1 + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + - path: /app2 + pathType: Prefix + backend: + service: + name: app2-nginx-nodeport-service + port: + number: 80 +``` +### Step-06-02: Deploy Changes and Verify +```t +# Deploy Changes +kubectl apply -f kube-manifests/ + +# Access Application (Open in new incognito window) +http:///app1/index.html -- SHOULD FAIL +http:///app2/index.html -- SHOULD FAIL +http:/// - SHOULD PASS +``` + +## Step-07: Roll back changes in 04-ALB-Ingress-ContextPath-Based-Routing.yml +```yaml +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + rules: + - http: + paths: + - path: /app1 + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + - path: /app2 + pathType: Prefix + backend: + service: + name: app2-nginx-nodeport-service + port: + number: 80 + - path: / + pathType: Prefix + backend: + service: + name: app3-nginx-nodeport-service + port: + number: 80 +``` + +## Step-08: Clean Up +```t +# Clean-Up +kubectl delete -f kube-manifests/ +``` diff --git a/08-NEW-ELB-Application-LoadBalancers/08-03-ALB-Ingress-ContextPath-Based-Routing/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-03-ALB-Ingress-ContextPath-Based-Routing/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..9431f21b --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-03-ALB-Ingress-ContextPath-Based-Routing/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app1-nginx + template: + metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-nodeport-service + labels: + app: app1-nginx + annotations: +#Important Note: Need to add health check path 
annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: NodePort + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-03-ALB-Ingress-ContextPath-Based-Routing/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-03-ALB-Ingress-ContextPath-Based-Routing/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..4d79c350 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-03-ALB-Ingress-ContextPath-Based-Routing/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app2-nginx-deployment + labels: + app: app2-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app2-nginx + template: + metadata: + labels: + app: app2-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kube-nginxapp2:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app2-nginx-nodeport-service + labels: + app: app2-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app2/index.html +spec: + type: NodePort + selector: + app: app2-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-03-ALB-Ingress-ContextPath-Based-Routing/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-03-ALB-Ingress-ContextPath-Based-Routing/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml new file mode 100644 index 00000000..a2d3597c --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-03-ALB-Ingress-ContextPath-Based-Routing/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml @@ -0,0 +1,38 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app3-nginx-deployment + labels: + app: app3-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app3-nginx + template: + metadata: + labels: + app: app3-nginx + spec: + containers: + - name: app3-nginx + image: stacksimplify/kubenginx:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app3-nginx-nodeport-service + labels: + app: app3-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /index.html +spec: + type: NodePort + selector: + app: app3-nginx + ports: + - port: 80 + targetPort: 80 \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-03-ALB-Ingress-ContextPath-Based-Routing/kube-manifests/04-ALB-Ingress-ContextPath-Based-Routing.yml b/08-NEW-ELB-Application-LoadBalancers/08-03-ALB-Ingress-ContextPath-Based-Routing/kube-manifests/04-ALB-Ingress-ContextPath-Based-Routing.yml new file mode 100755 index 00000000..8195b55c --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-03-ALB-Ingress-ContextPath-Based-Routing/kube-manifests/04-ALB-Ingress-ContextPath-Based-Routing.yml @@ -0,0 +1,54 @@ +# Annotations Reference: 
https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-cpr-demo + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: cpr-ingress + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + rules: + - http: + paths: + - path: /app1 + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + - path: /app2 + pathType: Prefix + backend: + service: + name: app2-nginx-nodeport-service + port: + number: 80 + - path: / + pathType: Prefix + backend: + service: + name: app3-nginx-nodeport-service + port: + number: 80 + + +# Important Note-1: In path based routing order is very important, if we are going to use "/*" (Root Context), try to use it at the end of all rules. + +# 1. If "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster +# 2. 
Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-04-ALB-Ingress-SSL/README.md b/08-NEW-ELB-Application-LoadBalancers/08-04-ALB-Ingress-SSL/README.md new file mode 100755 index 00000000..17ee5582 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-04-ALB-Ingress-SSL/README.md @@ -0,0 +1,91 @@ +--- +title: AWS Load Balancer Ingress SSL +description: Learn AWS Load Balancer Controller - Ingress SSL +--- + +## Step-01: Introduction +- We are going to register a new DNS in AWS Route53 +- We are going to create a SSL certificate +- Add Annotations related to SSL Certificate in Ingress manifest +- Deploy the manifests and test +- Clean-Up + +## Step-02: Pre-requisite - Register a Domain in Route53 (if not exists) +- Goto Services -> Route53 -> Registered Domains +- Click on **Register Domain** +- Provide **desired domain: somedomain.com** and click on **check** (In my case its going to be `stacksimplify.com`) +- Click on **Add to cart** and click on **Continue** +- Provide your **Contact Details** and click on **Continue** +- Enable Automatic Renewal +- Accept **Terms and Conditions** +- Click on **Complete Order** + +## Step-03: Create a SSL Certificate in Certificate Manager +- Pre-requisite: You should have a registered domain in Route53 +- Go to Services -> Certificate Manager -> Create a Certificate +- Click on **Request a Certificate** + - Choose the type of certificate for ACM to provide: Request a public certificate + - Add domain names: *.yourdomain.com (in my case it is going to be `*.stacksimplify.com`) + - Select a Validation Method: **DNS Validation** + - Click on **Confirm & Request** +- **Validation** + - Click on **Create record in Route 53** +- Wait for 5 to 10 minutes and check the **Validation Status** + +## Step-04: Add annotations related to SSL +- **04-ALB-Ingress-SSL.yml** +```yaml + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/632a3ff6-3f6d-464c-9121-b9d97481a76b + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) +``` +## Step-05: Deploy all manifests and test +### Deploy and Verify +```t +# Deploy kube-manifests +kubectl apply -f kube-manifests/ + +# Verify Ingress Resource +kubectl get ingress + +# Verify Apps +kubectl get deploy +kubectl get pods + +# Verify NodePort Services +kubectl get svc +``` +### Verify Load Balancer & Target Groups +- Load Balancer - Listeneres (Verify both 80 & 443) +- Load Balancer - Rules (Verify both 80 & 443 listeners) +- Target Groups - Group Details (Verify Health check path) +- Target Groups - Targets (Verify all 3 targets are healthy) + +## Step-06: Add DNS in Route53 +- Go to **Services -> Route 53** +- Go to **Hosted Zones** + - Click on **yourdomain.com** (in my case stacksimplify.com) +- Create a **Record Set** + - **Name:** ssldemo101.stacksimplify.com + - **Alias:** yes + - **Alias Target:** Copy our ALB DNS Name here (Sample: ssl-ingress-551932098.us-east-1.elb.amazonaws.com) + - Click on **Create** + +## Step-07: Access Application using newly registered DNS Name +- **Access Application** +- **Important Note:** Instead of `stacksimplify.com` you need to replace with your registered Route53 domain (Refer pre-requisite Step-02) +```t +# HTTP URLs 
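+# (Assumption: ssldemo101.stacksimplify.com is the record created in Step-06; substitute the record name from your own Route53 hosted zone)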
+http://ssldemo101.stacksimplify.com/app1/index.html +http://ssldemo101.stacksimplify.com/app2/index.html +http://ssldemo101.stacksimplify.com/ + +# HTTPS URLs +https://ssldemo101.stacksimplify.com/app1/index.html +https://ssldemo101.stacksimplify.com/app2/index.html +https://ssldemo101.stacksimplify.com/ +``` + +## Annotation Reference +- [AWS Load Balancer Controller Annotation Reference](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/) \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-04-ALB-Ingress-SSL/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-04-ALB-Ingress-SSL/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..9431f21b --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-04-ALB-Ingress-SSL/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app1-nginx + template: + metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-nodeport-service + labels: + app: app1-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: NodePort + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-04-ALB-Ingress-SSL/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-04-ALB-Ingress-SSL/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..4d79c350 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-04-ALB-Ingress-SSL/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app2-nginx-deployment + labels: + app: app2-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app2-nginx + template: + metadata: + labels: + app: app2-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kube-nginxapp2:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app2-nginx-nodeport-service + labels: + app: app2-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app2/index.html +spec: + type: NodePort + selector: + app: app2-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-04-ALB-Ingress-SSL/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-04-ALB-Ingress-SSL/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml new file mode 100644 index 00000000..81b7b7d6 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-04-ALB-Ingress-SSL/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml @@ -0,0 +1,38 @@ +apiVersion: apps/v1 +kind: Deployment 
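+# Note: app3 is wired up as the defaultBackend in 04-ALB-Ingress-SSL.yml, so requests that do not match /app1 or /app2 are served by this Deployment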
+metadata: + name: app3-nginx-deployment + labels: + app: app3-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app3-nginx + template: + metadata: + labels: + app: app3-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kubenginx:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app3-nginx-nodeport-service + labels: + app: app3-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /index.html +spec: + type: NodePort + selector: + app: app3-nginx + ports: + - port: 80 + targetPort: 80 \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-04-ALB-Ingress-SSL/kube-manifests/04-ALB-Ingress-SSL.yml b/08-NEW-ELB-Application-LoadBalancers/08-04-ALB-Ingress-SSL/kube-manifests/04-ALB-Ingress-SSL.yml new file mode 100755 index 00000000..61704cdd --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-04-ALB-Ingress-SSL/kube-manifests/04-ALB-Ingress-SSL.yml @@ -0,0 +1,55 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-ssl-demo + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: ssl-ingress + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + defaultBackend: + service: + name: app3-nginx-nodeport-service + port: + number: 80 + rules: + - http: + paths: + - path: /app1 + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + - path: /app2 + pathType: Prefix + backend: + service: + name: app2-nginx-nodeport-service + port: + number: 80 + +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. + +# 1. If "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster +# 2. 
Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-05-ALB-Ingress-SSL-Redirect/README.md b/08-NEW-ELB-Application-LoadBalancers/08-05-ALB-Ingress-SSL-Redirect/README.md new file mode 100755 index 00000000..3aad63eb --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-05-ALB-Ingress-SSL-Redirect/README.md @@ -0,0 +1,64 @@ +--- +title: AWS Load Balancer - Ingress SSL HTTP to HTTPS Redirect +description: Learn AWS Load Balancer - Ingress SSL HTTP to HTTPS Redirect +--- + +## Step-01: Add annotations related to SSL Redirect +- **File Name:** 04-ALB-Ingress-SSL-Redirect.yml +- Redirect from HTTP to HTTPS +```yaml + # SSL Redirect Setting + alb.ingress.kubernetes.io/ssl-redirect: '443' +``` + +## Step-02: Deploy all manifests and test + +### Deploy and Verify +```t +# Deploy kube-manifests +kubectl apply -f kube-manifests/ + +# Verify Ingress Resource +kubectl get ingress + +# Verify Apps +kubectl get deploy +kubectl get pods + +# Verify NodePort Services +kubectl get svc +``` +### Verify Load Balancer & Target Groups +- Load Balancer - Listeneres (Verify both 80 & 443) +- Load Balancer - Rules (Verify both 80 & 443 listeners) +- Target Groups - Group Details (Verify Health check path) +- Target Groups - Targets (Verify all 3 targets are healthy) + +## Step-03: Access Application using newly registered DNS Name +- **Access Application** +```t +# HTTP URLs (Should Redirect to HTTPS) +http://ssldemo101.stacksimplify.com/app1/index.html +http://ssldemo101.stacksimplify.com/app2/index.html +http://ssldemo101.stacksimplify.com/ + +# HTTPS URLs +https://ssldemo101.stacksimplify.com/app1/index.html +https://ssldemo101.stacksimplify.com/app2/index.html +https://ssldemo101.stacksimplify.com/ +``` + +## Step-04: Clean Up +```t +# Delete Manifests +kubectl delete -f kube-manifests/ + +## Delete Route53 Record Set +- Delete Route53 Record we created (ssldemo101.stacksimplify.com) +``` + +## Annotation Reference +- [AWS Load Balancer Controller Annotation Reference](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/) + + + diff --git a/08-NEW-ELB-Application-LoadBalancers/08-05-ALB-Ingress-SSL-Redirect/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-05-ALB-Ingress-SSL-Redirect/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..9431f21b --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-05-ALB-Ingress-SSL-Redirect/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app1-nginx + template: + metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-nodeport-service + labels: + app: app1-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: NodePort + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file 
diff --git a/08-NEW-ELB-Application-LoadBalancers/08-05-ALB-Ingress-SSL-Redirect/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-05-ALB-Ingress-SSL-Redirect/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..4d79c350 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-05-ALB-Ingress-SSL-Redirect/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app2-nginx-deployment + labels: + app: app2-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app2-nginx + template: + metadata: + labels: + app: app2-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kube-nginxapp2:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app2-nginx-nodeport-service + labels: + app: app2-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app2/index.html +spec: + type: NodePort + selector: + app: app2-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-05-ALB-Ingress-SSL-Redirect/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-05-ALB-Ingress-SSL-Redirect/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml new file mode 100644 index 00000000..81b7b7d6 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-05-ALB-Ingress-SSL-Redirect/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml @@ -0,0 +1,38 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app3-nginx-deployment + labels: + app: app3-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app3-nginx + template: + metadata: + labels: + app: app3-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kubenginx:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app3-nginx-nodeport-service + labels: + app: app3-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /index.html +spec: + type: NodePort + selector: + app: app3-nginx + ports: + - port: 80 + targetPort: 80 \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-05-ALB-Ingress-SSL-Redirect/kube-manifests/04-ALB-Ingress-SSL-Redirect.yml b/08-NEW-ELB-Application-LoadBalancers/08-05-ALB-Ingress-SSL-Redirect/kube-manifests/04-ALB-Ingress-SSL-Redirect.yml new file mode 100755 index 00000000..7de42628 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-05-ALB-Ingress-SSL-Redirect/kube-manifests/04-ALB-Ingress-SSL-Redirect.yml @@ -0,0 +1,57 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-ssl-demo + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: ssl-ingress + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + 
alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/ssl-redirect: '443' +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + defaultBackend: + service: + name: app3-nginx-nodeport-service + port: + number: 80 + rules: + - http: + paths: + - path: /app1 + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + - path: /app2 + pathType: Prefix + backend: + service: + name: app2-nginx-nodeport-service + port: + number: 80 + +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. + +# 1. If "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster +# 2. Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-06-Deploy-ExternalDNS-on-EKS/README.md b/08-NEW-ELB-Application-LoadBalancers/08-06-Deploy-ExternalDNS-on-EKS/README.md new file mode 100755 index 00000000..d912630d --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-06-Deploy-ExternalDNS-on-EKS/README.md @@ -0,0 +1,175 @@ +--- +title: AWS Load Balancer Controller - External DNS Install +description: Learn AWS Load Balancer Controller - External DNS Install +--- + +## Step-01: Introduction +- **External DNS:** Used for Updating Route53 RecordSets from Kubernetes +- We need to create IAM Policy, k8s Service Account & IAM Role and associate them together for external-dns pod to add or remove entries in AWS Route53 Hosted Zones. 
+- Update External-DNS default manifest to support our needs +- Deploy & Verify logs + +## Step-02: Create IAM Policy +- This IAM policy will allow external-dns pod to add, remove DNS entries (Record Sets in a Hosted Zone) in AWS Route53 service +- Go to Services -> IAM -> Policies -> Create Policy + - Click on **JSON** Tab and copy paste below JSON + - Click on **Visual editor** tab to validate + - Click on **Review Policy** + - **Name:** AllowExternalDNSUpdates + - **Description:** Allow access to Route53 Resources for ExternalDNS + - Click on **Create Policy** + +```json +{ + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Action": [ + "route53:ChangeResourceRecordSets" + ], + "Resource": [ + "arn:aws:route53:::hostedzone/*" + ] + }, + { + "Effect": "Allow", + "Action": [ + "route53:ListHostedZones", + "route53:ListResourceRecordSets" + ], + "Resource": [ + "*" + ] + } + ] +} +``` +- Make a note of Policy ARN which we will use in next step +```t +# Policy ARN +arn:aws:iam::180789647333:policy/AllowExternalDNSUpdates +``` + + +## Step-03: Create IAM Role, k8s Service Account & Associate IAM Policy +- As part of this step, we are going to create a k8s Service Account named `external-dns` and also a AWS IAM role and associate them by annotating role ARN in Service Account. +- In addition, we are also going to associate the AWS IAM Policy `AllowExternalDNSUpdates` to the newly created AWS IAM Role. +### Step-03-01: Create IAM Role, k8s Service Account & Associate IAM Policy +```t +# Template +eksctl create iamserviceaccount \ + --name service_account_name \ + --namespace service_account_namespace \ + --cluster cluster_name \ + --attach-policy-arn IAM_policy_ARN \ + --approve \ + --override-existing-serviceaccounts + +# Replaced name, namespace, cluster, IAM Policy arn +eksctl create iamserviceaccount \ + --name external-dns \ + --namespace default \ + --cluster eksdemo1 \ + --attach-policy-arn arn:aws:iam::180789647333:policy/AllowExternalDNSUpdates \ + --approve \ + --override-existing-serviceaccounts +``` +### Step-03-02: Verify the Service Account +- Verify external-dns service account, primarily verify annotation related to IAM Role +```t +# List Service Account +kubectl get sa external-dns + +# Describe Service Account +kubectl describe sa external-dns +Observation: +1. Verify the Annotations and you should see the IAM Role is present on the Service Account +``` +### Step-03-03: Verify CloudFormation Stack +- Go to Services -> CloudFormation +- Verify the latest CFN Stack created. +- Click on **Resources** tab +- Click on link in **Physical ID** field which will take us to **IAM Role** directly + +### Step-03-04: Verify IAM Role & IAM Policy +- With above step in CFN, we will be landed in IAM Role created for external-dns. +- Verify in **Permissions** tab we have a policy named **AllowExternalDNSUpdates** +- Now make a note of that Role ARN, this we need to update in External-DNS k8s manifest +```t +# Make a note of Role ARN +arn:aws:iam::180789647333:role/eksctl-eksdemo1-addon-iamserviceaccount-defa-Role1-JTO29BVZMA2N +``` + +### Step-03-05: Verify IAM Service Accounts using eksctl +- You can also make a note of External DNS Role ARN from here too. 
+```t +# List IAM Service Accounts using eksctl +eksctl get iamserviceaccount --cluster eksdemo1 + +# Sample Output +Kalyans-Mac-mini:08-06-ALB-Ingress-ExternalDNS kalyanreddy$ eksctl get iamserviceaccount --cluster eksdemo1 +2022-02-11 09:34:39 [ℹ] eksctl version 0.71.0 +2022-02-11 09:34:39 [ℹ] using region us-east-1 +NAMESPACE NAME ROLE ARN +default external-dns arn:aws:iam::180789647333:role/eksctl-eksdemo1-addon-iamserviceaccount-defa-Role1-JTO29BVZMA2N +kube-system aws-load-balancer-controller arn:aws:iam::180789647333:role/eksctl-eksdemo1-addon-iamserviceaccount-kube-Role1-EFQB4C26EALH +Kalyans-Mac-mini:08-06-ALB-Ingress-ExternalDNS kalyanreddy$ +``` + + +## Step-04: Update External DNS Kubernetes manifest +- **Original Template** you can find in https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md +- **File Location:** kube-manifests/01-Deploy-ExternalDNS.yml +### Change-1: Line number 9: IAM Role update + - Copy the role-arn you have made a note at the end of step-03 and replace at line no 9. +```yaml + eks.amazonaws.com/role-arn: arn:aws:iam::180789647333:role/eksctl-eksdemo1-addon-iamserviceaccount-defa-Role1-JTO29BVZMA2N +``` +### Chnage-2: Line 55, 56: Commented them +- We used eksctl to create IAM role and attached the `AllowExternalDNSUpdates` policy +- We didnt use KIAM or Kube2IAM so we don't need these two lines, so commented +```yaml + #annotations: + #iam.amazonaws.com/role: arn:aws:iam::ACCOUNT-ID:role/IAM-SERVICE-ROLE-NAME +``` +### Change-3: Line 65, 67: Commented them +```yaml + # - --domain-filter=external-dns-test.my-org.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones + # - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization +``` + +### Change-4: Line 61: Get latest Docker Image name +- [Get latest external dns image name](https://github.com/kubernetes-sigs/external-dns/releases/tag/v0.10.2) +```yaml + spec: + serviceAccountName: external-dns + containers: + - name: external-dns + image: k8s.gcr.io/external-dns/external-dns:v0.10.2 +``` + +## Step-05: Deploy ExternalDNS +- Deploy the manifest +```t +# Change Directory +cd 08-06-Deploy-ExternalDNS-on-EKS + +# Deploy external DNS +kubectl apply -f kube-manifests/ + +# List All resources from default Namespace +kubectl get all + +# List pods (external-dns pod should be in running state) +kubectl get pods + +# Verify Deployment by checking logs +kubectl logs -f $(kubectl get po | egrep -o 'external-dns[A-Za-z0-9-]+') +``` + +## References +- https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/alb-ingress.md +- https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md + + diff --git a/08-NEW-ELB-Application-LoadBalancers/08-06-Deploy-ExternalDNS-on-EKS/kube-manifests/01-Deploy-ExternalDNS.yml b/08-NEW-ELB-Application-LoadBalancers/08-06-Deploy-ExternalDNS-on-EKS/kube-manifests/01-Deploy-ExternalDNS.yml new file mode 100755 index 00000000..09836094 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-06-Deploy-ExternalDNS-on-EKS/kube-manifests/01-Deploy-ExternalDNS.yml @@ -0,0 +1,72 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + name: external-dns + # If you're using Amazon EKS with IAM Roles for Service Accounts, specify the following annotation. + # Otherwise, you may safely omit it. + annotations: + # Substitute your account ID and IAM service role name below. 
#Change-1: Replace with your IAM ARN Role for extern-dns + eks.amazonaws.com/role-arn: arn:aws:iam::180789647333:role/eksctl-eksdemo1-addon-iamserviceaccount-defa-Role1-1KZREOLB5TGO5 +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: external-dns +rules: +- apiGroups: [""] + resources: ["services","endpoints","pods"] + verbs: ["get","watch","list"] +- apiGroups: ["extensions","networking.k8s.io"] + resources: ["ingresses"] + verbs: ["get","watch","list"] +- apiGroups: [""] + resources: ["nodes"] + verbs: ["list","watch"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: external-dns-viewer +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: external-dns +subjects: +- kind: ServiceAccount + name: external-dns + namespace: default +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: external-dns +spec: + strategy: + type: Recreate + selector: + matchLabels: + app: external-dns + template: + metadata: + labels: + app: external-dns + # If you're using kiam or kube2iam, specify the following annotation. + # Otherwise, you may safely omit it. #Change-2: Commented line 55 and 56 + #annotations: + #iam.amazonaws.com/role: arn:aws:iam::ACCOUNT-ID:role/IAM-SERVICE-ROLE-NAME + spec: + serviceAccountName: external-dns + containers: + - name: external-dns + image: k8s.gcr.io/external-dns/external-dns:v0.10.2 + args: + - --source=service + - --source=ingress + # Change-3: Commented line 65 and 67 - --domain-filter=external-dns-test.my-org.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones + - --provider=aws + # Change-3: Commented line 65 and 67 - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization + - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both) + - --registry=txt + - --txt-owner-id=my-hostedzone-identifier + securityContext: + fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes and AWS token files \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-07-Use-ExternalDNS-for-k8s-Ingress/README.md b/08-NEW-ELB-Application-LoadBalancers/08-07-Use-ExternalDNS-for-k8s-Ingress/README.md new file mode 100755 index 00000000..0bd07d40 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-07-Use-ExternalDNS-for-k8s-Ingress/README.md @@ -0,0 +1,93 @@ +--- +title: AWS Load Balancer Controller - External DNS & Ingress +description: Learn AWS Load Balancer Controller - External DNS & Ingress +--- + +## Step-01: Update Ingress manifest by adding External DNS Annotation +- Added annotation with two DNS Names + - dnstest901.kubeoncloud.com + - dnstest902.kubeoncloud.com +- Once we deploy the application, we should be able to access our Applications with both DNS Names. 
+- **File Name:** 04-ALB-Ingress-SSL-Redirect-ExternalDNS.yml +```yaml + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: dnstest901.stacksimplify.com, dnstest902.stacksimplify.com +``` +- In your case it is going to be, replace `yourdomain` with your domain name + - dnstest901.yourdoamin.com + - dnstest902.yourdoamin.com + +## Step-02: Deploy all Application Kubernetes Manifests +### Deploy +```t +# Deploy kube-manifests +kubectl apply -f kube-manifests/ + +# Verify Ingress Resource +kubectl get ingress + +# Verify Apps +kubectl get deploy +kubectl get pods + +# Verify NodePort Services +kubectl get svc +``` +### Verify Load Balancer & Target Groups +- Load Balancer - Listeneres (Verify both 80 & 443) +- Load Balancer - Rules (Verify both 80 & 443 listeners) +- Target Groups - Group Details (Verify Health check path) +- Target Groups - Targets (Verify all 3 targets are healthy) + +### Verify External DNS Log +```t +# Verify External DNS logs +kubectl logs -f $(kubectl get po | egrep -o 'external-dns[A-Za-z0-9-]+') +``` +### Verify Route53 +- Go to Services -> Route53 +- You should see **Record Sets** added for `dnstest901.stacksimplify.com`, `dnstest902.stacksimplify.com` + +## Step-04: Access Application using newly registered DNS Name +### Perform nslookup tests before accessing Application +- Test if our new DNS entries registered and resolving to an IP Address +```t +# nslookup commands +nslookup dnstest901.stacksimplify.com +nslookup dnstest902.stacksimplify.com +``` +### Access Application using dnstest1 domain +```t +# HTTP URLs (Should Redirect to HTTPS) +http://dnstest901.stacksimplify.com/app1/index.html +http://dnstest901.stacksimplify.com/app2/index.html +http://dnstest901.stacksimplify.com/ +``` + +### Access Application using dnstest2 domain +```t +# HTTP URLs (Should Redirect to HTTPS) +http://dnstest902.stacksimplify.com/app1/index.html +http://dnstest902.stacksimplify.com/app2/index.html +http://dnstest902.stacksimplify.com/ +``` + + +## Step-05: Clean Up +```t +# Delete Manifests +kubectl delete -f kube-manifests/ + +## Verify Route53 Record Set to ensure our DNS records got deleted +- Go to Route53 -> Hosted Zones -> Records +- The below records should be deleted automatically + - dnstest901.stacksimplify.com + - dnstest902.stacksimplify.com +``` + + +## References +- https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/alb-ingress.md +- https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md + + diff --git a/08-NEW-ELB-Application-LoadBalancers/08-07-Use-ExternalDNS-for-k8s-Ingress/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-07-Use-ExternalDNS-for-k8s-Ingress/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..9431f21b --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-07-Use-ExternalDNS-for-k8s-Ingress/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app1-nginx + template: + metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-nodeport-service + labels: + app: app1-nginx + annotations: +#Important 
Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: NodePort + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-07-Use-ExternalDNS-for-k8s-Ingress/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-07-Use-ExternalDNS-for-k8s-Ingress/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..4d79c350 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-07-Use-ExternalDNS-for-k8s-Ingress/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app2-nginx-deployment + labels: + app: app2-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app2-nginx + template: + metadata: + labels: + app: app2-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kube-nginxapp2:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app2-nginx-nodeport-service + labels: + app: app2-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app2/index.html +spec: + type: NodePort + selector: + app: app2-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-07-Use-ExternalDNS-for-k8s-Ingress/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-07-Use-ExternalDNS-for-k8s-Ingress/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml new file mode 100644 index 00000000..81b7b7d6 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-07-Use-ExternalDNS-for-k8s-Ingress/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml @@ -0,0 +1,38 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app3-nginx-deployment + labels: + app: app3-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app3-nginx + template: + metadata: + labels: + app: app3-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kubenginx:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app3-nginx-nodeport-service + labels: + app: app3-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /index.html +spec: + type: NodePort + selector: + app: app3-nginx + ports: + - port: 80 + targetPort: 80 \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-07-Use-ExternalDNS-for-k8s-Ingress/kube-manifests/04-ALB-Ingress-SSL-Redirect-ExternalDNS.yml b/08-NEW-ELB-Application-LoadBalancers/08-07-Use-ExternalDNS-for-k8s-Ingress/kube-manifests/04-ALB-Ingress-SSL-Redirect-ExternalDNS.yml new file mode 100755 index 00000000..d5077834 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-07-Use-ExternalDNS-for-k8s-Ingress/kube-manifests/04-ALB-Ingress-SSL-Redirect-ExternalDNS.yml @@ -0,0 +1,59 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ 
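+# This Ingress combines the SSL, HTTP-to-HTTPS redirect and external-dns hostname annotations:
+# external-dns watches the Ingress and creates Route53 records (dnstest901/dnstest902) pointing at this ALB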
+apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-externaldns-demo + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: externaldns-ingress + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/ssl-redirect: '443' + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: dnstest901.stacksimplify.com, dnstest902.stacksimplify.com +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + defaultBackend: + service: + name: app3-nginx-nodeport-service + port: + number: 80 + rules: + - http: + paths: + - path: /app1 + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + - path: /app2 + pathType: Prefix + backend: + service: + name: app2-nginx-nodeport-service + port: + number: 80 + +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. + +# 1. If "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster +# 2. 
Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-08-Use-ExternalDNS-for-k8s-Service/README.md b/08-NEW-ELB-Application-LoadBalancers/08-08-Use-ExternalDNS-for-k8s-Service/README.md new file mode 100644 index 00000000..96c2aa43 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-08-Use-ExternalDNS-for-k8s-Service/README.md @@ -0,0 +1,82 @@ +--- +title: AWS Load Balancer Controller - External DNS & Service +description: Learn AWS Load Balancer Controller - External DNS & Kubernetes Service +--- + +## Step-01: Introduction +- We will create a Kubernetes Service of `type: LoadBalancer` +- We will annotate that Service with external DNS hostname `external-dns.alpha.kubernetes.io/hostname: externaldns-k8s-service-demo101.stacksimplify.com` which will register the DNS in Route53 for that respective load balancer + +## Step-02: 02-Nginx-App1-LoadBalancer-Service.yml +```yaml +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-loadbalancer-service + labels: + app: app1-nginx + annotations: + external-dns.alpha.kubernetes.io/hostname: externaldns-k8s-service-demo101.stacksimplify.com +spec: + type: LoadBalancer + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 +``` +## Step-03: Deploy & Verify + +### Deploy & Verify +```t +# Deploy kube-manifests +kubectl apply -f kube-manifests/ + +# Verify Apps +kubectl get deploy +kubectl get pods + +# Verify Service +kubectl get svc +``` +### Verify Load Balancer +- Go to EC2 -> Load Balancers -> Verify Load Balancer Settings + +### Verify External DNS Log +```t +# Verify External DNS logs +kubectl logs -f $(kubectl get po | egrep -o 'external-dns[A-Za-z0-9-]+') +``` +### Verify Route53 +- Go to Services -> Route53 +- You should see **Record Sets** added for `externaldns-k8s-service-demo101.stacksimplify.com` + + +## Step-04: Access Application using newly registered DNS Name +### Perform nslookup tests before accessing Application +- Test if our new DNS entries registered and resolving to an IP Address +```t +# nslookup commands +nslookup externaldns-k8s-service-demo101.stacksimplify.com +``` +### Access Application using DNS domain +```t +# HTTP URL +http://externaldns-k8s-service-demo101.stacksimplify.com/app1/index.html +``` + +## Step-05: Clean Up +```t +# Delete Manifests +kubectl delete -f kube-manifests/ + +## Verify Route53 Record Set to ensure our DNS records got deleted +- Go to Route53 -> Hosted Zones -> Records +- The below records should be deleted automatically + - externaldns-k8s-service-demo101.stacksimplify.com +``` + + +## References +- https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/alb-ingress.md +- https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md diff --git a/08-NEW-ELB-Application-LoadBalancers/08-08-Use-ExternalDNS-for-k8s-Service/kube-manifests/01-Nginx-App1-Deployment.yml b/08-NEW-ELB-Application-LoadBalancers/08-08-Use-ExternalDNS-for-k8s-Service/kube-manifests/01-Nginx-App1-Deployment.yml new file mode 100755 index 00000000..360bd878 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-08-Use-ExternalDNS-for-k8s-Service/kube-manifests/01-Nginx-App1-Deployment.yml @@ -0,0 +1,21 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app1-nginx + template: 
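+    # Pod template: a single nginx container (stacksimplify/kube-nginxapp1:1.0.0) listening on port 80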
+ metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 diff --git a/08-NEW-ELB-Application-LoadBalancers/08-08-Use-ExternalDNS-for-k8s-Service/kube-manifests/02-Nginx-App1-LoadBalancer-Service.yml b/08-NEW-ELB-Application-LoadBalancers/08-08-Use-ExternalDNS-for-k8s-Service/kube-manifests/02-Nginx-App1-LoadBalancer-Service.yml new file mode 100755 index 00000000..e82473ce --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-08-Use-ExternalDNS-for-k8s-Service/kube-manifests/02-Nginx-App1-LoadBalancer-Service.yml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-loadbalancer-service + labels: + app: app1-nginx + annotations: + external-dns.alpha.kubernetes.io/hostname: externaldns-k8s-service-demo101.stacksimplify.com +spec: + type: LoadBalancer + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-09-NameBasedVirtualHost-Routing/README.md b/08-NEW-ELB-Application-LoadBalancers/08-09-NameBasedVirtualHost-Routing/README.md new file mode 100755 index 00000000..09393654 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-09-NameBasedVirtualHost-Routing/README.md @@ -0,0 +1,156 @@ +--- +title: AWS Load Balancer Controller - Ingress Host Header Routing +description: Learn AWS Load Balancer Controller - Ingress Host Header Routing +--- + +## Step-01: Introduction +- Implement Host Header routing using Ingress +- We can also call it has name based virtual host routing + +## Step-02: Review Ingress Manifests for Host Header Routing +- **File Name:** 04-ALB-Ingress-HostHeader-Routing.yml +```yaml +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-namedbasedvhost-demo + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: namedbasedvhost-ingress + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/632a3ff6-3f6d-464c-9121-b9d97481a76b + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/ssl-redirect: '443' + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: default101.stacksimplify.com +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + defaultBackend: + service: + name: app3-nginx-nodeport-service + port: + number: 80 + rules: 
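+  # Name-based virtual hosting: each rule below matches on the HTTP Host header;
+  # requests for hosts that are not listed fall through to the defaultBackend (app3)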
+ - host: app101.stacksimplify.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + - host: app201.stacksimplify.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: app2-nginx-nodeport-service + port: + number: 80 + +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. + +# 1. If "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster +# 2. Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` + +``` + +## Step-03: Deploy all Application Kubernetes Manifests and Verify +```t +# Deploy kube-manifests +kubectl apply -f kube-manifests/ + +# Verify Ingress Resource +kubectl get ingress + +# Verify Apps +kubectl get deploy +kubectl get pods + +# Verify NodePort Services +kubectl get svc +``` +### Verify Load Balancer & Target Groups +- Load Balancer - Listeneres (Verify both 80 & 443) +- Load Balancer - Rules (Verify both 80 & 443 listeners) +- Target Groups - Group Details (Verify Health check path) +- Target Groups - Targets (Verify all 3 targets are healthy) + +### Verify External DNS Log +```t +# Verify External DNS logs +kubectl logs -f $(kubectl get po | egrep -o 'external-dns[A-Za-z0-9-]+') +``` +### Verify Route53 +- Go to Services -> Route53 +- You should see **Record Sets** added for + - default101.stacksimplify.com + - app101.stacksimplify.com + - app201.stacksimplify.com + +## Step-04: Access Application using newly registered DNS Name +### Perform nslookup tests before accessing Application +- Test if our new DNS entries registered and resolving to an IP Address +```t +# nslookup commands +nslookup default101.stacksimplify.com +nslookup app101.stacksimplify.com +nslookup app201.stacksimplify.com +``` +### Positive Case: Access Application using DNS domain +```t +# Access App1 +http://app101.stacksimplify.com/app1/index.html + +# Access App2 +http://app201.stacksimplify.com/app2/index.html + +# Access Default App (App3) +http://default101.stacksimplify.com +``` + +### Negative Case: Access Application using DNS domain +```t +# Access App2 using App1 DNS Domain +http://app101.stacksimplify.com/app2/index.html -- SHOULD FAIL + +# Access App1 using App2 DNS Domain +http://app201.stacksimplify.com/app1/index.html -- SHOULD FAIL + +# Access App1 and App2 using Default Domain +http://default101.stacksimplify.com/app1/index.html -- SHOULD FAIL +http://default101.stacksimplify.com/app2/index.html -- SHOULD FAIL +``` + +## Step-05: Clean Up +```t +# Delete Manifests +kubectl delete -f kube-manifests/ + +## Verify Route53 Record Set to ensure our DNS records got deleted +- Go to Route53 -> Hosted Zones -> Records +- The below records should be deleted automatically + - default101.stacksimplify.com + - app101.stacksimplify.com + - app201.stacksimplify.com +``` + diff --git a/08-NEW-ELB-Application-LoadBalancers/08-09-NameBasedVirtualHost-Routing/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-09-NameBasedVirtualHost-Routing/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..9431f21b --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-09-NameBasedVirtualHost-Routing/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml @@ -0,0 +1,40 
@@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app1-nginx + template: + metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-nodeport-service + labels: + app: app1-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: NodePort + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-09-NameBasedVirtualHost-Routing/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-09-NameBasedVirtualHost-Routing/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..4d79c350 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-09-NameBasedVirtualHost-Routing/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app2-nginx-deployment + labels: + app: app2-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app2-nginx + template: + metadata: + labels: + app: app2-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kube-nginxapp2:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app2-nginx-nodeport-service + labels: + app: app2-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app2/index.html +spec: + type: NodePort + selector: + app: app2-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-09-NameBasedVirtualHost-Routing/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-09-NameBasedVirtualHost-Routing/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml new file mode 100644 index 00000000..81b7b7d6 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-09-NameBasedVirtualHost-Routing/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml @@ -0,0 +1,38 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app3-nginx-deployment + labels: + app: app3-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app3-nginx + template: + metadata: + labels: + app: app3-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kubenginx:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app3-nginx-nodeport-service + labels: + app: app3-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /index.html +spec: + type: NodePort + selector: + app: app3-nginx + ports: + - port: 80 + targetPort: 80 \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-09-NameBasedVirtualHost-Routing/kube-manifests/04-ALB-Ingress-HostHeader-Routing.yml 
b/08-NEW-ELB-Application-LoadBalancers/08-09-NameBasedVirtualHost-Routing/kube-manifests/04-ALB-Ingress-HostHeader-Routing.yml new file mode 100755 index 00000000..2ac98c92 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-09-NameBasedVirtualHost-Routing/kube-manifests/04-ALB-Ingress-HostHeader-Routing.yml @@ -0,0 +1,63 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-namedbasedvhost-demo + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: namedbasedvhost-ingress + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/ssl-redirect: '443' + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: default101.stacksimplify.com +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + defaultBackend: + service: + name: app3-nginx-nodeport-service + port: + number: 80 + rules: + - host: app101.stacksimplify.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + - host: app201.stacksimplify.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: app2-nginx-nodeport-service + port: + number: 80 + +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. + +# 1. If "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster +# 2. 
Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-10-Ingress-SSL-Discovery-host/README.md b/08-NEW-ELB-Application-LoadBalancers/08-10-Ingress-SSL-Discovery-host/README.md new file mode 100644 index 00000000..45f0e5e0 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-10-Ingress-SSL-Discovery-host/README.md @@ -0,0 +1,148 @@ +--- +title: AWS Load Balancer Controller - Ingress SSL Discovery Host +description: Learn AWS Load Balancer Controller - Ingress SSL Discovery Host +--- + +## Step-01: Introduction +- Automatically disover SSL Certificate from AWS Certificate Manager Service using `spec.rules.host` +- In this approach, with the specified domain name if we have the SSL Certificate created in AWS Certificate Manager, that certificate will be automatically detected and associated to Application Load Balancer. +- We don't need to get the SSL Certificate ARN and update it in Kubernetes Ingress Manifest +- Discovers via Ingress rule host and attaches a cert for `app102.stacksimplify.com` or `*.stacksimplify.com` to the ALB + +## Step-02: Discover via Ingress "spec.rules.host" +```yaml +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-certdiscoveryhost-demo + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: certdiscoveryhost-ingress + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + #alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/632a3ff6-3f6d-464c-9121-b9d97481a76b + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/ssl-redirect: '443' + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: default102.stacksimplify.com +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + defaultBackend: + service: + name: app3-nginx-nodeport-service + port: + number: 80 + rules: + - host: app102.stacksimplify.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + - host: app202.stacksimplify.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: app2-nginx-nodeport-service + port: + number: 80 + +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. 
+ +# 1. If "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster +# 2. Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` +``` + + +## Step-03: Deploy all Application Kubernetes Manifests and Verify +```t +# Deploy kube-manifests +kubectl apply -f kube-manifests/ + +# Verify Ingress Resource +kubectl get ingress + +# Verify Apps +kubectl get deploy +kubectl get pods + +# Verify NodePort Services +kubectl get svc +``` +### Verify Load Balancer & Target Groups +- Load Balancer - Listeneres (Verify both 80 & 443) +- Load Balancer - Rules (Verify both 80 & 443 listeners) +- Target Groups - Group Details (Verify Health check path) +- Target Groups - Targets (Verify all 3 targets are healthy) +- **PRIMARILY VERIFY - CERTIFICATE ASSOCIATED TO APPLICATION LOAD BALANCER** + +### Verify External DNS Log +```t +# Verify External DNS logs +kubectl logs -f $(kubectl get po | egrep -o 'external-dns[A-Za-z0-9-]+') +``` +### Verify Route53 +- Go to Services -> Route53 +- You should see **Record Sets** added for + - default102.stacksimplify.com + - app102.stacksimplify.com + - app202.stacksimplify.com + +## Step-04: Access Application using newly registered DNS Name +### Perform nslookup tests before accessing Application +- Test if our new DNS entries registered and resolving to an IP Address +```t +# nslookup commands +nslookup default102.stacksimplify.com +nslookup app102.stacksimplify.com +nslookup app202.stacksimplify.com +``` +### Positive Case: Access Application using DNS domain +```t +# Access App1 +http://app102.stacksimplify.com/app1/index.html + +# Access App2 +http://app202.stacksimplify.com/app2/index.html + +# Access Default App (App3) +http://default102.stacksimplify.com +``` + +## Step-05: Clean Up +```t +# Delete Manifests +kubectl delete -f kube-manifests/ + +## Verify Route53 Record Set to ensure our DNS records got deleted +- Go to Route53 -> Hosted Zones -> Records +- The below records should be deleted automatically + - default102.stacksimplify.com + - app102.stacksimplify.com + - app202.stacksimplify.com +``` + + +## References +- https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/cert_discovery/ diff --git a/08-NEW-ELB-Application-LoadBalancers/08-10-Ingress-SSL-Discovery-host/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-10-Ingress-SSL-Discovery-host/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..9431f21b --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-10-Ingress-SSL-Discovery-host/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app1-nginx + template: + metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-nodeport-service + labels: + app: app1-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: NodePort + selector: + app: 
app1-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-10-Ingress-SSL-Discovery-host/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-10-Ingress-SSL-Discovery-host/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..4d79c350 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-10-Ingress-SSL-Discovery-host/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app2-nginx-deployment + labels: + app: app2-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app2-nginx + template: + metadata: + labels: + app: app2-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kube-nginxapp2:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app2-nginx-nodeport-service + labels: + app: app2-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app2/index.html +spec: + type: NodePort + selector: + app: app2-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-10-Ingress-SSL-Discovery-host/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-10-Ingress-SSL-Discovery-host/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml new file mode 100644 index 00000000..81b7b7d6 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-10-Ingress-SSL-Discovery-host/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml @@ -0,0 +1,38 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app3-nginx-deployment + labels: + app: app3-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app3-nginx + template: + metadata: + labels: + app: app3-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kubenginx:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app3-nginx-nodeport-service + labels: + app: app3-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /index.html +spec: + type: NodePort + selector: + app: app3-nginx + ports: + - port: 80 + targetPort: 80 \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-10-Ingress-SSL-Discovery-host/kube-manifests/04-ALB-Ingress-CertDiscovery-host.yml b/08-NEW-ELB-Application-LoadBalancers/08-10-Ingress-SSL-Discovery-host/kube-manifests/04-ALB-Ingress-CertDiscovery-host.yml new file mode 100755 index 00000000..b4a2912d --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-10-Ingress-SSL-Discovery-host/kube-manifests/04-ALB-Ingress-CertDiscovery-host.yml @@ -0,0 +1,63 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-certdiscoveryhost-demo + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: certdiscoveryhost-ingress + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION 
- STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + #alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/ssl-redirect: '443' + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: default102.stacksimplify.com +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + defaultBackend: + service: + name: app3-nginx-nodeport-service + port: + number: 80 + rules: + - host: app102.stacksimplify.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + - host: app202.stacksimplify.com + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: app2-nginx-nodeport-service + port: + number: 80 + +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. + +# 1. If "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster +# 2. Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-11-Ingress-SSL-Discovery-tls/README.md b/08-NEW-ELB-Application-LoadBalancers/08-11-Ingress-SSL-Discovery-tls/README.md new file mode 100644 index 00000000..424afe0a --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-11-Ingress-SSL-Discovery-tls/README.md @@ -0,0 +1,144 @@ +--- +title: AWS Load Balancer Controller - Ingress SSL Discovery Host +description: Learn AWS Load Balancer Controller - Ingress SSL Discovery Host +--- + +## Step-01: Introduction +- Automatically disover SSL Certificate from AWS Certificate Manager Service using `spec.tls.host` +- In this approach, with the specified domain name if we have the SSL Certificate created in AWS Certificate Manager, that certificate will be automatically detected and associated to Application Load Balancer. 
+- We don't need to get the SSL Certificate ARN and update it in Kubernetes Ingress Manifest +- Discovers via Ingress rule host and attaches a cert for `app102.stacksimplify.com` or `*.stacksimplify.com` to the ALB + +## Step-02: Discover via Ingress "spec.tls.hosts" +```yaml +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-certdiscoverytls-demo + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: certdiscoverytls-ingress + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + #alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/632a3ff6-3f6d-464c-9121-b9d97481a76b + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/ssl-redirect: '443' + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: certdiscovery-tls-101.stacksimplify.com +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + defaultBackend: + service: + name: app3-nginx-nodeport-service + port: + number: 80 + tls: + - hosts: + - "*.stacksimplify.com" + rules: + - http: + paths: + - path: /app1 + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + - http: + paths: + - path: /app2 + pathType: Prefix + backend: + service: + name: app2-nginx-nodeport-service + port: + number: 80 + +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. + +# 1. If "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster +# 2. 
Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` + ``` + + +## Step-03: Deploy all Application Kubernetes Manifests and Verify +```t +# Deploy kube-manifests +kubectl apply -f kube-manifests/ + +# Verify Ingress Resource +kubectl get ingress + +# Verify Apps +kubectl get deploy +kubectl get pods + +# Verify NodePort Services +kubectl get svc +``` +### Verify Load Balancer & Target Groups +- Load Balancer - Listeneres (Verify both 80 & 443) +- Load Balancer - Rules (Verify both 80 & 443 listeners) +- Target Groups - Group Details (Verify Health check path) +- Target Groups - Targets (Verify all 3 targets are healthy) +- **PRIMARILY VERIFY - CERTIFICATE ASSOCIATED TO APPLICATION LOAD BALANCER** + +### Verify External DNS Log +```t +# Verify External DNS logs +kubectl logs -f $(kubectl get po | egrep -o 'external-dns[A-Za-z0-9-]+') +``` +### Verify Route53 +- Go to Services -> Route53 +- You should see **Record Sets** added for + - certdiscovery-tls-901.stacksimplify.com + + +## Step-04: Access Application using newly registered DNS Name +### Perform nslookup tests before accessing Application +- Test if our new DNS entries registered and resolving to an IP Address +```t +# nslookup commands +nslookup certdiscovery-tls-101.stacksimplify.com +``` +### Access Application using DNS domain +```t +# Access App1 +http://certdiscovery-tls-101.stacksimplify.com/app1/index.html + +# Access App2 +http://certdiscovery-tls-101.stacksimplify.com/app2/index.html + +# Access Default App (App3) +http://certdiscovery-tls-101.stacksimplify.com +``` + +## Step-05: Clean Up +```t +# Delete Manifests +kubectl delete -f kube-manifests/ + +## Verify Route53 Record Set to ensure our DNS records got deleted +- Go to Route53 -> Hosted Zones -> Records +- The below records should be deleted automatically + - certdiscovery-tls-101.stacksimplify.com +``` + + +## References +- https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/cert_discovery/ diff --git a/08-NEW-ELB-Application-LoadBalancers/08-11-Ingress-SSL-Discovery-tls/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-11-Ingress-SSL-Discovery-tls/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..9431f21b --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-11-Ingress-SSL-Discovery-tls/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app1-nginx + template: + metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-nodeport-service + labels: + app: app1-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: NodePort + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-11-Ingress-SSL-Discovery-tls/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml 
b/08-NEW-ELB-Application-LoadBalancers/08-11-Ingress-SSL-Discovery-tls/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..4d79c350 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-11-Ingress-SSL-Discovery-tls/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app2-nginx-deployment + labels: + app: app2-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app2-nginx + template: + metadata: + labels: + app: app2-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kube-nginxapp2:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app2-nginx-nodeport-service + labels: + app: app2-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app2/index.html +spec: + type: NodePort + selector: + app: app2-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-11-Ingress-SSL-Discovery-tls/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-11-Ingress-SSL-Discovery-tls/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml new file mode 100644 index 00000000..81b7b7d6 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-11-Ingress-SSL-Discovery-tls/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml @@ -0,0 +1,38 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app3-nginx-deployment + labels: + app: app3-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app3-nginx + template: + metadata: + labels: + app: app3-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kubenginx:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app3-nginx-nodeport-service + labels: + app: app3-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /index.html +spec: + type: NodePort + selector: + app: app3-nginx + ports: + - port: 80 + targetPort: 80 \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-11-Ingress-SSL-Discovery-tls/kube-manifests/04-ALB-Ingress-CertDiscovery-tls.yml b/08-NEW-ELB-Application-LoadBalancers/08-11-Ingress-SSL-Discovery-tls/kube-manifests/04-ALB-Ingress-CertDiscovery-tls.yml new file mode 100755 index 00000000..36a64d16 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-11-Ingress-SSL-Discovery-tls/kube-manifests/04-ALB-Ingress-CertDiscovery-tls.yml @@ -0,0 +1,64 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-certdiscoverytls-demo + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: certdiscoverytls-ingress + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: 
traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + #alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/ssl-redirect: '443' + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: certdiscovery-tls-102.stacksimplify.com +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + defaultBackend: + service: + name: app3-nginx-nodeport-service + port: + number: 80 + tls: + - hosts: + - "*.stacksimplify.com" + rules: + - http: + paths: + - path: /app1 + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + - http: + paths: + - path: /app2 + pathType: Prefix + backend: + service: + name: app2-nginx-nodeport-service + port: + number: 80 + +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. + +# 1. If "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster +# 2. Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/README.md b/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/README.md new file mode 100755 index 00000000..f299195d --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/README.md @@ -0,0 +1,71 @@ +--- +title: AWS Load Balancer Controller - Ingress Groups +description: Learn AWS Load Balancer Controller - Ingress Groups +--- + +## Step-01: Introduction +- IngressGroup feature enables you to group multiple Ingress resources together. +- The controller will automatically merge Ingress rules for all Ingresses within IngressGroup and support them with a single ALB. +- In addition, most annotations defined on a Ingress only applies to the paths defined by that Ingress. +- Demonstrate Ingress Groups concept with two Applications. 
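- A quick way to confirm the merge (usable after the deployment in Step-05) is sketched below: the merged listener rules can be listed with the AWS CLI and should appear in `group.order` sequence. This is only a sketch; it assumes the AWS CLI is configured for the same account and region, and it uses the load balancer name `ingress-groups-demo` and the Ingress name `app1-ingress` from the manifests reviewed in the next steps.
```t
# (Optional) Run after the deployment in Step-05 to confirm the merge
kubectl get ingress
kubectl describe ingress app1-ingress | grep group

# Assumes the AWS CLI is configured for the same account and region
LB_ARN=$(aws elbv2 describe-load-balancers --names ingress-groups-demo \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)
LISTENER_ARN=$(aws elbv2 describe-listeners --load-balancer-arn $LB_ARN \
  --query 'Listeners[?Port==`443`].ListenerArn' --output text)
aws elbv2 describe-rules --listener-arn $LISTENER_ARN
# Expected: /app1, /app2 and the default backend merged into one listener,
# evaluated in group.order sequence (10, 20, 30)
```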
+ +## Step-02: Review App1 Ingress Manifest - Key Lines +- **File Name:** `kube-manifests/app1/02-App1-Ingress.yml` +```yaml + # Ingress Groups + alb.ingress.kubernetes.io/group.name: myapps.web + alb.ingress.kubernetes.io/group.order: '10' +``` + +## Step-03: Review App2 Ingress Manifest - Key Lines +- **File Name:** `kube-manifests/app2/02-App2-Ingress.yml` +```yaml + # Ingress Groups + alb.ingress.kubernetes.io/group.name: myapps.web + alb.ingress.kubernetes.io/group.order: '20' +``` + +## Step-04: Review App3 Ingress Manifest - Key Lines +```yaml + # Ingress Groups + alb.ingress.kubernetes.io/group.name: myapps.web + alb.ingress.kubernetes.io/group.order: '30' +``` + +## Step-05: Deploy Apps with two Ingress Resources +```t +# Deploy both Apps +kubectl apply -R -f kube-manifests + +# Verify Pods +kubectl get pods + +# Verify Ingress +kubectl get ingress +Observation: +1. Three Ingress resources will be created with same ADDRESS value +2. Three Ingress Resources are merged to a single Application Load Balancer as those belong to same Ingress group "myapps.web" +``` + +## Step-06: Verify on AWS Mgmt Console +- Go to Services -> EC2 -> Load Balancers +- Verify Routing Rules for `/app1` and `/app2` and `default backend` + +## Step-07: Verify by accessing in browser +```t +# Web URLs +http://ingress-groups-demo601.stacksimplify.com/app1/index.html +http://ingress-groups-demo601.stacksimplify.com/app2/index.html +http://ingress-groups-demo601.stacksimplify.com +``` + +## Step-08: Clean-Up +```t +# Delete Apps from k8s cluster +kubectl delete -R -f kube-manifests/ + +## Verify Route53 Record Set to ensure our DNS records got deleted +- Go to Route53 -> Hosted Zones -> Records +- The below records should be deleted automatically + - ingress-groups-demo601.stacksimplify.com +``` diff --git a/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app1/01-Nginx-App1-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app1/01-Nginx-App1-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..9431f21b --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app1/01-Nginx-App1-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app1-nginx + template: + metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-nodeport-service + labels: + app: app1-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: NodePort + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app1/02-App1-Ingress.yml b/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app1/02-App1-Ingress.yml new file mode 100755 index 00000000..ca2374e5 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app1/02-App1-Ingress.yml @@ -0,0 +1,46 @@ +# Annotations Reference: 
https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +#apiVersion: extensions/v1beta1 +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: app1-ingress + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: ingress-groups-demo + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/ssl-redirect: '443' + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: ingress-groups-demo601.stacksimplify.com + # Ingress Groups + alb.ingress.kubernetes.io/group.name: myapps.web + alb.ingress.kubernetes.io/group.order: '10' +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + rules: + - http: + paths: + - path: /app1 + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + + diff --git a/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app2/01-Nginx-App2-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app2/01-Nginx-App2-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..4d79c350 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app2/01-Nginx-App2-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app2-nginx-deployment + labels: + app: app2-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app2-nginx + template: + metadata: + labels: + app: app2-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kube-nginxapp2:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app2-nginx-nodeport-service + labels: + app: app2-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app2/index.html +spec: + type: NodePort + selector: + app: app2-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app2/02-App2-Ingress.yml b/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app2/02-App2-Ingress.yml new file mode 100755 index 00000000..733d2b9b --- /dev/null +++ 
b/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app2/02-App2-Ingress.yml @@ -0,0 +1,47 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +#apiVersion: extensions/v1beta1 +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: app2-ingress + annotations: + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" + alb.ingress.kubernetes.io/scheme: internet-facing + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: ingress-groups-demo + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + #alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/ssl-redirect: '443' + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: ingress-groups-demo601.stacksimplify.com + # Ingress Groups + alb.ingress.kubernetes.io/group.name: myapps.web + alb.ingress.kubernetes.io/group.order: '20' +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + rules: + - http: + paths: + - path: /app2 + pathType: Prefix + backend: + service: + name: app2-nginx-nodeport-service + port: + number: 80 + + diff --git a/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app3/01-Nginx-App3-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app3/01-Nginx-App3-Deployment-and-NodePortService.yml new file mode 100644 index 00000000..81b7b7d6 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app3/01-Nginx-App3-Deployment-and-NodePortService.yml @@ -0,0 +1,38 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app3-nginx-deployment + labels: + app: app3-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app3-nginx + template: + metadata: + labels: + app: app3-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kubenginx:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app3-nginx-nodeport-service + labels: + app: app3-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /index.html +spec: + type: NodePort + selector: + app: app3-nginx + ports: + - port: 80 + targetPort: 80 \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app3/03-App3-Ingress-default-backend.yml 
b/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app3/03-App3-Ingress-default-backend.yml new file mode 100755 index 00000000..4bf1c335 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-12-IngressGroups/kube-manifests/app3/03-App3-Ingress-default-backend.yml @@ -0,0 +1,41 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +#apiVersion: extensions/v1beta1 +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: app3-ingress + annotations: + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" + alb.ingress.kubernetes.io/scheme: internet-facing + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: ingress-groups-demo + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + #alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/ssl-redirect: '443' + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: ingress-groups-demo601.stacksimplify.com + # Ingress Groups + alb.ingress.kubernetes.io/group.name: myapps.web + alb.ingress.kubernetes.io/group.order: '30' +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + defaultBackend: + service: + name: app3-nginx-nodeport-service + port: + number: 80 + diff --git a/08-NEW-ELB-Application-LoadBalancers/08-13-Ingress-TargetType-IP/README.md b/08-NEW-ELB-Application-LoadBalancers/08-13-Ingress-TargetType-IP/README.md new file mode 100644 index 00000000..64205873 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-13-Ingress-TargetType-IP/README.md @@ -0,0 +1,85 @@ +--- +title: AWS Load Balancer Controller - Ingress Target Type IP +description: Learn AWS Load Balancer Controller - Ingress Target Type IP +--- + +## Step-01: Introduction +- `alb.ingress.kubernetes.io/target-type` specifies how to route traffic to pods. +- You can choose between `instance` and `ip` +- **Instance Mode:** `instance mode` will route traffic to all ec2 instances within cluster on NodePort opened for your service. +- **IP Mode:** `ip mode` is required for sticky sessions to work with Application Load Balancers. 
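- A quick way to see the difference between the two modes (usable after the deployment in Step-03) is sketched below: in `ip` mode the ALB target groups should report `TargetType: ip` and register the Pod IPs directly instead of worker node NodePorts. This is only a sketch; it assumes the AWS CLI is configured for the same account and region, and it uses the load balancer name `target-type-ip-ingress` set via the Ingress annotation in this section's manifest.
```t
# (Optional) Run after the deployment in Step-03 to confirm Pod IPs are registered
kubectl get pods -o wide          # note the Pod IPs

# Assumes the AWS CLI is configured for the same account and region
LB_ARN=$(aws elbv2 describe-load-balancers --names target-type-ip-ingress \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)
aws elbv2 describe-target-groups --load-balancer-arn $LB_ARN \
  --query 'TargetGroups[].[TargetGroupName,TargetType,Port]' --output table
# TargetType should be "ip"; the targets in each target group should match the Pod IPs above
```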
+ + +## Step-02: Ingress Manifest - Add target-type +- **File Name:** 04-ALB-Ingress-target-type-ip.yml +```yaml + # Target Type: IP + alb.ingress.kubernetes.io/target-type: ip +``` + +## Step-03: Deploy all Application Kubernetes Manifests and Verify +```t +# Deploy kube-manifests +kubectl apply -f kube-manifests/ + +# Verify Ingress Resource +kubectl get ingress + +# Verify Apps +kubectl get deploy +kubectl get pods + +# Verify NodePort Services +kubectl get svc +``` +### Verify Load Balancer & Target Groups +- Load Balancer - Listeneres (Verify both 80 & 443) +- Load Balancer - Rules (Verify both 80 & 443 listeners) +- Target Groups - Group Details (Verify Health check path) +- Target Groups - Targets (Verify all 3 targets are healthy) +- **PRIMARILY VERIFY - TARGET GROUPS which contain thePOD IPs instead of WORKER NODE IP with NODE PORTS** +```t +# List Pods and their IPs +kubectl get pods -o wide +``` + +### Verify External DNS Log +```t +# Verify External DNS logs +kubectl logs -f $(kubectl get po | egrep -o 'external-dns[A-Za-z0-9-]+') +``` +### Verify Route53 +- Go to Services -> Route53 +- You should see **Record Sets** added for + - target-type-ip-501.stacksimplify.com + + +## Step-04: Access Application using newly registered DNS Name +### Perform nslookup tests before accessing Application +- Test if our new DNS entries registered and resolving to an IP Address +```t +# nslookup commands +nslookup target-type-ip-501.stacksimplify.com +``` +### Access Application using DNS domain +```t +# Access App1 +http://target-type-ip-501.stacksimplify.com /app1/index.html + +# Access App2 +http://target-type-ip-501.stacksimplify.com /app2/index.html + +# Access Default App (App3) +http://target-type-ip-501.stacksimplify.com +``` + +## Step-05: Clean Up +```t +# Delete Manifests +kubectl delete -f kube-manifests/ + +## Verify Route53 Record Set to ensure our DNS records got deleted +- Go to Route53 -> Hosted Zones -> Records +- The below records should be deleted automatically + - target-type-ip-501.stacksimplify.com +``` \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-13-Ingress-TargetType-IP/kube-manifests/01-Nginx-App1-Deployment-and-ClusterIPService.yml b/08-NEW-ELB-Application-LoadBalancers/08-13-Ingress-TargetType-IP/kube-manifests/01-Nginx-App1-Deployment-and-ClusterIPService.yml new file mode 100755 index 00000000..03e5126e --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-13-Ingress-TargetType-IP/kube-manifests/01-Nginx-App1-Deployment-and-ClusterIPService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app1-nginx + template: + metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-clusterip-service + labels: + app: app1-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: ClusterIP + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-13-Ingress-TargetType-IP/kube-manifests/02-Nginx-App2-Deployment-and-ClusterIPService.yml 
b/08-NEW-ELB-Application-LoadBalancers/08-13-Ingress-TargetType-IP/kube-manifests/02-Nginx-App2-Deployment-and-ClusterIPService.yml new file mode 100755 index 00000000..c265ddcf --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-13-Ingress-TargetType-IP/kube-manifests/02-Nginx-App2-Deployment-and-ClusterIPService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app2-nginx-deployment + labels: + app: app2-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app2-nginx + template: + metadata: + labels: + app: app2-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kube-nginxapp2:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app2-nginx-clusterip-service + labels: + app: app2-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app2/index.html +spec: + type: ClusterIP + selector: + app: app2-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-13-Ingress-TargetType-IP/kube-manifests/03-Nginx-App3-Deployment-and-ClusterIPService.yml b/08-NEW-ELB-Application-LoadBalancers/08-13-Ingress-TargetType-IP/kube-manifests/03-Nginx-App3-Deployment-and-ClusterIPService.yml new file mode 100644 index 00000000..300fba2a --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-13-Ingress-TargetType-IP/kube-manifests/03-Nginx-App3-Deployment-and-ClusterIPService.yml @@ -0,0 +1,38 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app3-nginx-deployment + labels: + app: app3-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app3-nginx + template: + metadata: + labels: + app: app3-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kubenginx:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app3-nginx-clusterip-service + labels: + app: app3-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /index.html +spec: + type: ClusterIP + selector: + app: app3-nginx + ports: + - port: 80 + targetPort: 80 \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-13-Ingress-TargetType-IP/kube-manifests/04-ALB-Ingress-target-type-ip.yml b/08-NEW-ELB-Application-LoadBalancers/08-13-Ingress-TargetType-IP/kube-manifests/04-ALB-Ingress-target-type-ip.yml new file mode 100755 index 00000000..9501a676 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-13-Ingress-TargetType-IP/kube-manifests/04-ALB-Ingress-target-type-ip.yml @@ -0,0 +1,66 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-target-type-ip-demo + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: target-type-ip-ingress + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need 
to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + #alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/ssl-redirect: '443' + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: target-type-ip-501.stacksimplify.com + # Target Type: IP + alb.ingress.kubernetes.io/target-type: ip +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + defaultBackend: + service: + name: app3-nginx-clusterip-service + port: + number: 80 + tls: + - hosts: + - "*.stacksimplify.com" + rules: + - http: + paths: + - path: /app1 + pathType: Prefix + backend: + service: + name: app1-nginx-clusterip-service + port: + number: 80 + - http: + paths: + - path: /app2 + pathType: Prefix + backend: + service: + name: app2-nginx-clusterip-service + port: + number: 80 + +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. + +# 1. If "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster +# 2. Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/README.md b/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/README.md new file mode 100644 index 00000000..f6aa1895 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/README.md @@ -0,0 +1,91 @@ +--- +title: AWS Load Balancer Controller - Ingress Internal LB +description: Learn AWS Load Balancer Controller - Ingress Internal LB +--- + +## Step-01: Introduction +- Create Internal Application Load Balancer using Ingress +- To test the Internal LB, use the `curl-pod` +- Deploy `curl-pod` +- Connect to `curl-pod` and test Internal LB from `curl-pod` + +## Step-02: Update Ingress Scheme annotation to Internal +- **File Name:** 04-ALB-Ingress-Internal-LB.yml +```yaml + # Creates Internal Application Load Balancer + alb.ingress.kubernetes.io/scheme: internal +``` + +## Step-03: Deploy all Application Kubernetes Manifests and Verify +```t +# Deploy kube-manifests +kubectl apply -f kube-manifests/ + +# Verify Ingress Resource +kubectl get ingress + +# Verify Apps +kubectl get deploy +kubectl get pods + +# Verify NodePort Services +kubectl get svc +``` +### Verify Load Balancer & Target Groups +- Load Balancer - Listeneres (Verify both 80 & 443) +- Load Balancer - Rules (Verify both 80 & 443 listeners) +- Target Groups - Group Details (Verify Health check path) +- Target Groups - Targets (Verify all 3 targets are healthy) + +## Step-04: How to test this Internal Load Balancer? 
+- We are going to deploy a `curl-pod` in EKS Cluster +- We connect to that `curl-pod` in EKS Cluster and test using `curl commands` for our sample applications load balanced using this Internal Application Load Balancer + + +## Step-05: curl-pod Kubernetes Manifest +- **File Name:** kube-manifests-curl/01-curl-pod.yml +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: curl-pod +spec: + containers: + - name: curl + image: curlimages/curl + command: [ "sleep", "600" ] +``` + +## Step-06: Deploy curl-pod and Verify Internal LB +```t +# Deploy curl-pod +kubectl apply -f kube-manifests-curl + +# Will open up a terminal session into the container +kubectl exec -it curl-pod -- sh + +# We can now curl external addresses or internal services: +curl http://google.com/ +curl + +# Default Backend Curl Test +curl internal-ingress-internal-lb-1839544354.us-east-1.elb.amazonaws.com + +# App1 Curl Test +curl internal-ingress-internal-lb-1839544354.us-east-1.elb.amazonaws.com/app1/index.html + +# App2 Curl Test +curl internal-ingress-internal-lb-1839544354.us-east-1.elb.amazonaws.com/app2/index.html + +# App3 Curl Test +curl internal-ingress-internal-lb-1839544354.us-east-1.elb.amazonaws.com +``` + + +## Step-07: Clean Up +```t +# Delete Manifests +kubectl delete -f kube-manifests/ +kubectl delete -f kube-manifests-curl/ +``` + diff --git a/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/kube-manifests-curl/01-curl-pod.yml b/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/kube-manifests-curl/01-curl-pod.yml new file mode 100644 index 00000000..a9d6c513 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/kube-manifests-curl/01-curl-pod.yml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: Pod +metadata: + name: curl-pod +spec: + containers: + - name: curl + image: curlimages/curl + command: [ "sleep", "600" ] \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..9431f21b --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/kube-manifests/01-Nginx-App1-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app1-nginx + template: + metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-nodeport-service + labels: + app: app1-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: NodePort + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml new file mode 100755 index 00000000..4d79c350 --- /dev/null +++ 
b/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/kube-manifests/02-Nginx-App2-Deployment-and-NodePortService.yml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app2-nginx-deployment + labels: + app: app2-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app2-nginx + template: + metadata: + labels: + app: app2-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kube-nginxapp2:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app2-nginx-nodeport-service + labels: + app: app2-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app2/index.html +spec: + type: NodePort + selector: + app: app2-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml b/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml new file mode 100644 index 00000000..81b7b7d6 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/kube-manifests/03-Nginx-App3-Deployment-and-NodePortService.yml @@ -0,0 +1,38 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app3-nginx-deployment + labels: + app: app3-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app3-nginx + template: + metadata: + labels: + app: app3-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kubenginx:1.0.0 + ports: + - containerPort: 80 +--- +apiVersion: v1 +kind: Service +metadata: + name: app3-nginx-nodeport-service + labels: + app: app3-nginx + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /index.html +spec: + type: NodePort + selector: + app: app3-nginx + ports: + - port: 80 + targetPort: 80 \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/kube-manifests/04-ALB-Ingress-Internal-LB.yml b/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/kube-manifests/04-ALB-Ingress-Internal-LB.yml new file mode 100755 index 00000000..a84f2e13 --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/08-14-Ingress-Internal-LB/kube-manifests/04-ALB-Ingress-Internal-LB.yml @@ -0,0 +1,54 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: ingress-internal-lb-demo + annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: ingress-internal-lb + # Ingress Core Settings + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) + # Creates External Application Load Balancer + #alb.ingress.kubernetes.io/scheme: internet-facing + # Creates Internal Application Load Balancer + alb.ingress.kubernetes.io/scheme: internal + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple 
targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' +spec: + ingressClassName: my-aws-ingress-class # Ingress Class + defaultBackend: + service: + name: app3-nginx-nodeport-service + port: + number: 80 + rules: + - http: + paths: + - path: /app1 + pathType: Prefix + backend: + service: + name: app1-nginx-nodeport-service + port: + number: 80 + - path: /app2 + pathType: Prefix + backend: + service: + name: app2-nginx-nodeport-service + port: + number: 80 + +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. + +# 1. If "spec.ingressClassName: my-aws-ingress-class" not specified, will reference default ingress class on this kubernetes cluster +# 2. Default Ingress class is nothing but for which ingress class we have the annotation `ingressclass.kubernetes.io/is-default-class: "true"` + + \ No newline at end of file diff --git a/08-NEW-ELB-Application-LoadBalancers/README.md b/08-NEW-ELB-Application-LoadBalancers/README.md new file mode 100755 index 00000000..81ca3d5e --- /dev/null +++ b/08-NEW-ELB-Application-LoadBalancers/README.md @@ -0,0 +1,42 @@ +# Load Balancing workloads on EKS using AWS Application Load Balancer + +## Topics +- We will be looking in to this topic very extensively in a step by step and module by module model. +- The below will be the list of topics covered as part of AWS ALB Ingress Perspective. + + +| S.No | Topic Name | +| ------------- | ------------- | +| 1. | AWS Load Balancer Controller Installation | +| 2. | ALB Ingress Basics | +| 3. | ALB Ingress Context Path based Routing | +| 4. | ALB Ingress SSL | +| 5. | ALB Ingress SSL Redirect (HTTP to HTTPS) | +| 6. | ALB Ingress External DNS | +| 7. | ALB Ingress External DNS for k8s Ingress | +| 8. | ALB Ingress External DNS for k8s Service | +| 9. | ALB Ingress Name based Virtual Host Routing | +| 10. | ALB Ingress SSL Discovery - Host | +| 11. | ALB Ingress SSL Discovery - TLS | +| 12. | ALB Ingress Groups | +| 13. | ALB Ingress Target Type - IP Mode | +| 13. | ALB Ingress Internal Load Balancer | + + +## References: +- Good to refer all the below for additional understanding. 
+ +### AWS Load Balancer Controller +- [AWS Load Balancer Controller Documentation](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/) + + +### AWS ALB Ingress Annotations Reference +- https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/ + +### eksctl getting started +- https://eksctl.io/introduction/#getting-started + +### External DNS +- https://github.com/kubernetes-sigs/external-dns +- https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/alb-ingress.md +- https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md \ No newline at end of file diff --git a/09-EKS-Workloads-on-Fargate/09-01-Fargate-Profile-Basic/kube-manifests-old/01-namespace.yml b/09-EKS-Workloads-on-Fargate/09-01-Fargate-Profile-Basic/kube-manifests-old/01-namespace.yml new file mode 100644 index 00000000..31b8002c --- /dev/null +++ b/09-EKS-Workloads-on-Fargate/09-01-Fargate-Profile-Basic/kube-manifests-old/01-namespace.yml @@ -0,0 +1,4 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: fp-dev \ No newline at end of file diff --git a/09-EKS-Workloads-on-Fargate/09-01-Fargate-Profile-Basic/kube-manifests-old/02-Nginx-App1-Deployment-and-NodePortService.yml b/09-EKS-Workloads-on-Fargate/09-01-Fargate-Profile-Basic/kube-manifests-old/02-Nginx-App1-Deployment-and-NodePortService.yml new file mode 100644 index 00000000..851d5ee7 --- /dev/null +++ b/09-EKS-Workloads-on-Fargate/09-01-Fargate-Profile-Basic/kube-manifests-old/02-Nginx-App1-Deployment-and-NodePortService.yml @@ -0,0 +1,49 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx + namespace: fp-dev +spec: + replicas: 2 + selector: + matchLabels: + app: app1-nginx + template: + metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 + resources: + requests: + memory: "128Mi" + cpu: "500m" + limits: + memory: "500Mi" + cpu: "1000m" +--- +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-nodeport-service + labels: + app: app1-nginx + namespace: fp-dev + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: NodePort + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/09-EKS-Workloads-on-Fargate/09-01-Fargate-Profile-Basic/kube-manifests-old/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml b/09-EKS-Workloads-on-Fargate/09-01-Fargate-Profile-Basic/kube-manifests-old/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml new file mode 100644 index 00000000..27da0268 --- /dev/null +++ b/09-EKS-Workloads-on-Fargate/09-01-Fargate-Profile-Basic/kube-manifests-old/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml @@ -0,0 +1,46 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/ +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + name: ingress-usermgmt-restapp-service + labels: + app: usermgmt-restapp + namespace: fp-dev + annotations: + # Ingress Core Settings + kubernetes.io/ingress.class: "alb" + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need 
to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + #alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/9f042b5d-86fd-4fad-96d0-c81c5abc71e1 + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}' + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: fpdev.kubeoncloud.com + # For Fargate + alb.ingress.kubernetes.io/target-type: ip +spec: + rules: + - http: + paths: + - path: /* # SSL Redirect Setting + backend: + serviceName: ssl-redirect + servicePort: use-annotation + - path: /* + backend: + serviceName: app1-nginx-nodeport-service + servicePort: 80 +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. + diff --git a/09-EKS-Workloads-on-Fargate/09-01-Fargate-Profile-Basic/kube-manifests/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml b/09-EKS-Workloads-on-Fargate/09-01-Fargate-Profile-Basic/kube-manifests/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml index 27da0268..675e224e 100644 --- a/09-EKS-Workloads-on-Fargate/09-01-Fargate-Profile-Basic/kube-manifests/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml +++ b/09-EKS-Workloads-on-Fargate/09-01-Fargate-Profile-Basic/kube-manifests/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml @@ -1,46 +1,44 @@ -# Annotations Reference: https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/ -apiVersion: extensions/v1beta1 +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 kind: Ingress metadata: - name: ingress-usermgmt-restapp-service - labels: - app: usermgmt-restapp + name: app1-ingress-service namespace: fp-dev annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: ingress-fargatedemo # Ingress Core Settings - kubernetes.io/ingress.class: "alb" + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) alb.ingress.kubernetes.io/scheme: internet-facing # Health Check Settings alb.ingress.kubernetes.io/healthcheck-protocol: HTTP alb.ingress.kubernetes.io/healthcheck-port: traffic-port #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer - #alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' alb.ingress.kubernetes.io/success-codes: '200' alb.ingress.kubernetes.io/healthy-threshold-count: '2' - alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + 
alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' ## SSL Settings alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' - alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/9f042b5d-86fd-4fad-96d0-c81c5abc71e1 + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) # SSL Redirect Setting - alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}' + alb.ingress.kubernetes.io/ssl-redirect: '443' # External DNS - For creating a Record Set in Route53 - external-dns.alpha.kubernetes.io/hostname: fpdev.kubeoncloud.com + external-dns.alpha.kubernetes.io/hostname: fpdev101.stacksimplify.com # For Fargate alb.ingress.kubernetes.io/target-type: ip spec: rules: - http: - paths: - - path: /* # SSL Redirect Setting + paths: + - path: /app1 + pathType: Prefix backend: - serviceName: ssl-redirect - servicePort: use-annotation - - path: /* - backend: - serviceName: app1-nginx-nodeport-service - servicePort: 80 + service: + name: app1-nginx-nodeport-service + port: + number: 80 # Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/01-Fargate-Advanced-Profiles/01-fargate-profiles.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/01-Fargate-Advanced-Profiles/01-fargate-profiles.yml new file mode 100644 index 00000000..67042e3d --- /dev/null +++ b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/01-Fargate-Advanced-Profiles/01-fargate-profiles.yml @@ -0,0 +1,18 @@ +apiVersion: eksctl.io/v1alpha5 +kind: ClusterConfig +metadata: + name: eksdemo1 # Name of the EKS Cluster + region: us-east-1 +fargateProfiles: + - name: fp-app2 + selectors: + # All workloads in the "ns-app2" Kubernetes namespace will be + # scheduled onto Fargate: + - namespace: ns-app2 + - name: fp-ums + selectors: + # All workloads in the "ns-ums" Kubernetes namespace matching the following + # label selectors will be scheduled onto Fargate: + - namespace: ns-ums + labels: + runon: fargate diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/01-ns-app1/01-namespace.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/01-ns-app1/01-namespace.yml new file mode 100644 index 00000000..065dc46e --- /dev/null +++ b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/01-ns-app1/01-namespace.yml @@ -0,0 +1,5 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: ns-app1 +# Apps deployed in this namespace will run on a EC2 Managed Node Group diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/01-ns-app1/02-Nginx-App1-Deployment-and-NodePortService.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/01-ns-app1/02-Nginx-App1-Deployment-and-NodePortService.yml new file mode 100644 index 00000000..a554097b --- /dev/null +++ 
b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/01-ns-app1/02-Nginx-App1-Deployment-and-NodePortService.yml @@ -0,0 +1,49 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app1-nginx-deployment + labels: + app: app1-nginx + namespace: ns-app1 +spec: + replicas: 2 + selector: + matchLabels: + app: app1-nginx + template: + metadata: + labels: + app: app1-nginx + spec: + containers: + - name: app1-nginx + image: stacksimplify/kube-nginxapp1:1.0.0 + ports: + - containerPort: 80 + resources: + requests: + memory: "128Mi" + cpu: "500m" + limits: + memory: "500Mi" + cpu: "1000m" +--- +apiVersion: v1 +kind: Service +metadata: + name: app1-nginx-nodeport-service + labels: + app: app1-nginx + namespace: ns-app1 + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html +spec: + type: NodePort + selector: + app: app1-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/01-ns-app1/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/01-ns-app1/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml new file mode 100644 index 00000000..57e0806e --- /dev/null +++ b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/01-ns-app1/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml @@ -0,0 +1,44 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/ +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + name: app1-ingress-service + labels: + app: app1-nginx + namespace: ns-app1 + annotations: + # Ingress Core Settings + kubernetes.io/ingress.class: "alb" + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + #alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/9f042b5d-86fd-4fad-96d0-c81c5abc71e1 + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}' + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: app1.kubeoncloud.com +spec: + rules: + - http: + paths: + - path: /* # SSL Redirect Setting + backend: + serviceName: ssl-redirect + servicePort: use-annotation + - path: /* + backend: + serviceName: 
app1-nginx-nodeport-service + servicePort: 80 +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. + diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/02-ns-app2/01-namespace.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/02-ns-app2/01-namespace.yml new file mode 100644 index 00000000..8bd9572b --- /dev/null +++ b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/02-ns-app2/01-namespace.yml @@ -0,0 +1,5 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: ns-app2 +# Apps deployed in this namespace will run on a Fargate fp-app2 \ No newline at end of file diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/02-ns-app2/02-Nginx-App2-Deployment-and-NodePortService.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/02-ns-app2/02-Nginx-App2-Deployment-and-NodePortService.yml new file mode 100644 index 00000000..ff17db39 --- /dev/null +++ b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/02-ns-app2/02-Nginx-App2-Deployment-and-NodePortService.yml @@ -0,0 +1,51 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app2-nginx-deployment + labels: + app: app2-nginx + namespace: ns-app2 +spec: + replicas: 2 + selector: + matchLabels: + app: app2-nginx + template: + metadata: + labels: + app: app2-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kube-nginxapp2:1.0.0 + ports: + - containerPort: 80 + resources: + requests: + memory: "128Mi" + cpu: "500m" + limits: + memory: "500Mi" + cpu: "1000m" +--- +apiVersion: v1 +kind: Service +metadata: + name: app2-nginx-nodeport-service + labels: + app: app2-nginx + namespace: ns-app2 + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /app2/index.html + # For Fargate + alb.ingress.kubernetes.io/target-type: ip +spec: + type: NodePort + selector: + app: app2-nginx + ports: + - port: 80 + targetPort: 80 + + \ No newline at end of file diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/02-ns-app2/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/02-ns-app2/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml new file mode 100644 index 00000000..5b7f4575 --- /dev/null +++ b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/02-ns-app2/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml @@ -0,0 +1,46 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/ +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + name: app2-ingress-service + labels: + app: app2-nginx + namespace: ns-app2 + annotations: + # Ingress Core Settings + kubernetes.io/ingress.class: "alb" + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations 
in service level if we are planning to use multiple targets in a load balancer + #alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/9f042b5d-86fd-4fad-96d0-c81c5abc71e1 + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}' + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: app2.kubeoncloud.com + # For Fargate + alb.ingress.kubernetes.io/target-type: ip +spec: + rules: + - http: + paths: + - path: /* # SSL Redirect Setting + backend: + serviceName: ssl-redirect + servicePort: use-annotation + - path: /* + backend: + serviceName: app2-nginx-nodeport-service + servicePort: 80 +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. + diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/01-namespace.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/01-namespace.yml new file mode 100644 index 00000000..daba4286 --- /dev/null +++ b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/01-namespace.yml @@ -0,0 +1,5 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: ns-ums +# Apps deployed in this namespace will run on a Fargate fp-ums \ No newline at end of file diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/02-MySQL-externalName-Service.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/02-MySQL-externalName-Service.yml new file mode 100644 index 00000000..3a5aa024 --- /dev/null +++ b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/02-MySQL-externalName-Service.yml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: Service +metadata: + name: mysql + labels: + runon: fargate + namespace: ns-ums +spec: + type: ExternalName + externalName: usermgmtdb.cxojydmxwly6.us-east-1.rds.amazonaws.com diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/03-UserManagementMicroservice-Deployment-Service.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/03-UserManagementMicroservice-Deployment-Service.yml new file mode 100644 index 00000000..4be5d63d --- /dev/null +++ b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/03-UserManagementMicroservice-Deployment-Service.yml @@ -0,0 +1,63 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: usermgmt-microservice + labels: + app: 
usermgmt-restapp + runon: fargate + namespace: ns-ums +spec: + replicas: 2 + selector: + matchLabels: + app: usermgmt-restapp + template: + metadata: + labels: + app: usermgmt-restapp + runon: fargate + spec: + initContainers: + - name: init-db + image: busybox:1.31 + command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";'] + containers: + - name: usermgmt-restapp + image: stacksimplify/kube-usermanagement-microservice:1.0.0 + resources: + requests: + memory: "128Mi" + cpu: "500m" + limits: + memory: "500Mi" + cpu: "1000m" + ports: + - containerPort: 8095 + env: + - name: DB_HOSTNAME + value: "mysql" + - name: DB_PORT + value: "3306" + - name: DB_NAME + value: "usermgmt" + - name: DB_USERNAME + value: "dbadmin" # RDS DB Username is dbadmin + - name: DB_PASSWORD + valueFrom: + secretKeyRef: + name: mysql-db-password + key: db-password + livenessProbe: + exec: + command: + - /bin/sh + - -c + - nc -z localhost 8095 + initialDelaySeconds: 60 + periodSeconds: 10 + readinessProbe: + httpGet: + path: /usermgmt/health-status + port: 8095 + initialDelaySeconds: 60 + periodSeconds: 10 \ No newline at end of file diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/04-Kubernetes-Secrets.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/04-Kubernetes-Secrets.yml new file mode 100644 index 00000000..51384690 --- /dev/null +++ b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/04-Kubernetes-Secrets.yml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: Secret +metadata: + name: mysql-db-password + labels: + runon: fargate + namespace: ns-ums +type: Opaque +data: + db-password: ZGJwYXNzd29yZDEx \ No newline at end of file diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/05-UserManagement-NodePort-Service.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/05-UserManagement-NodePort-Service.yml new file mode 100644 index 00000000..7dfec78c --- /dev/null +++ b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/05-UserManagement-NodePort-Service.yml @@ -0,0 +1,20 @@ +apiVersion: v1 +kind: Service +metadata: + name: usermgmt-restapp-nodeport-service + labels: + app: usermgmt-restapp + runon: fargate + namespace: ns-ums + annotations: +#Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status +spec: + type: NodePort + selector: + app: usermgmt-restapp + ports: + - port: 8095 + targetPort: 8095 + + diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/07-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/07-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml new file mode 100644 index 00000000..0926b968 --- /dev/null +++ b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests-old/02-Applications/03-ns-ums/07-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml @@ -0,0 
+1,47 @@ +# Annotations Reference: https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/ +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + name: ums-ingress-service + labels: + app: usermgmt-restapp + runon: fargate + namespace: ns-ums + annotations: + # Ingress Core Settings + kubernetes.io/ingress.class: "alb" + alb.ingress.kubernetes.io/scheme: internet-facing + # Health Check Settings + alb.ingress.kubernetes.io/healthcheck-protocol: HTTP + alb.ingress.kubernetes.io/healthcheck-port: traffic-port + #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer + #alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status + alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' + alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' + alb.ingress.kubernetes.io/success-codes: '200' + alb.ingress.kubernetes.io/healthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + ## SSL Settings + alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/9f042b5d-86fd-4fad-96d0-c81c5abc71e1 + #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) + # SSL Redirect Setting + alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}' + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: ums.kubeoncloud.com + # For Fargate + alb.ingress.kubernetes.io/target-type: ip +spec: + rules: + - http: + paths: + - path: /* # SSL Redirect Setting + backend: + serviceName: ssl-redirect + servicePort: use-annotation + - path: /* + backend: + serviceName: usermgmt-restapp-nodeport-service + servicePort: 8095 +# Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. 
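Note: the `kube-manifests-old` Ingress files above still use the `extensions/v1beta1` API together with the `actions.ssl-redirect` annotation; that API group was removed in Kubernetes 1.22, which is why the updated manifests that follow move to `networking.k8s.io/v1` and the simpler `alb.ingress.kubernetes.io/ssl-redirect: '443'` annotation. A quick sanity check of which Ingress API the cluster serves (a sketch, assuming `kubectl` is already configured for the EKS cluster):
```t
# List the networking API versions the cluster serves (networking.k8s.io/v1 should be present)
kubectl api-versions | grep networking.k8s.io

# Inspect the v1 Ingress spec fields used by the updated manifests (pathType, defaultBackend, service.port.number)
kubectl explain ingress.spec --api-version=networking.k8s.io/v1
```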
+ diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests/02-Applications/01-ns-app1/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests/02-Applications/01-ns-app1/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml index 57e0806e..9bee31e4 100644 --- a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests/02-Applications/01-ns-app1/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml +++ b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests/02-Applications/01-ns-app1/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml @@ -1,44 +1,44 @@ -# Annotations Reference: https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/ -apiVersion: extensions/v1beta1 +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: app1-ingress-service labels: - app: app1-nginx - namespace: ns-app1 + app: app1-nginx + namespace: ns-app1 annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: app1-ingress # Ingress Core Settings - kubernetes.io/ingress.class: "alb" + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) alb.ingress.kubernetes.io/scheme: internet-facing # Health Check Settings alb.ingress.kubernetes.io/healthcheck-protocol: HTTP alb.ingress.kubernetes.io/healthcheck-port: traffic-port #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer - #alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' alb.ingress.kubernetes.io/success-codes: '200' alb.ingress.kubernetes.io/healthy-threshold-count: '2' - alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' ## SSL Settings alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' - alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/9f042b5d-86fd-4fad-96d0-c81c5abc71e1 + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) # SSL Redirect Setting - alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}' + alb.ingress.kubernetes.io/ssl-redirect: '443' # External DNS - For creating a Record Set in Route53 - external-dns.alpha.kubernetes.io/hostname: app1.kubeoncloud.com + external-dns.alpha.kubernetes.io/hostname: app1101.stacksimplify.com spec: rules: - http: - paths: - - path: /* # SSL Redirect Setting + paths: + - path: /app1 + pathType: Prefix backend: - serviceName: ssl-redirect - servicePort: use-annotation - - path: /* - backend: - serviceName: app1-nginx-nodeport-service - servicePort: 80 + service: + name: app1-nginx-nodeport-service + port: + number: 80 # Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. 
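With `alb.ingress.kubernetes.io/ssl-redirect: '443'` in place, the controller configures the HTTP:80 listener to redirect to HTTPS instead of forwarding traffic. A minimal verification once the `app1101.stacksimplify.com` record from the manifest above resolves to the new ALB (a sketch; the hostname depends on your own Route53 hosted zone):
```t
# HTTP request should return a redirect to HTTPS (expect a 301 with a Location: https://... header)
curl -I http://app1101.stacksimplify.com/app1/index.html

# HTTPS request should return the app1 page (expect 200 OK)
curl -I https://app1101.stacksimplify.com/app1/index.html
```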
diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests/02-Applications/02-ns-app2/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests/02-Applications/02-ns-app2/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml index 5b7f4575..e0da214b 100644 --- a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests/02-Applications/02-ns-app2/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml +++ b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests/02-Applications/02-ns-app2/03-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml @@ -1,46 +1,46 @@ -# Annotations Reference: https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/ -apiVersion: extensions/v1beta1 +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: app2-ingress-service labels: - app: app2-nginx - namespace: ns-app2 + app: app2-nginx + namespace: ns-app2 annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: app2-ingress # Ingress Core Settings - kubernetes.io/ingress.class: "alb" + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) alb.ingress.kubernetes.io/scheme: internet-facing # Health Check Settings alb.ingress.kubernetes.io/healthcheck-protocol: HTTP alb.ingress.kubernetes.io/healthcheck-port: traffic-port #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer - #alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' alb.ingress.kubernetes.io/success-codes: '200' alb.ingress.kubernetes.io/healthy-threshold-count: '2' - alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' ## SSL Settings alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' - alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/9f042b5d-86fd-4fad-96d0-c81c5abc71e1 + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) # SSL Redirect Setting - alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}' + alb.ingress.kubernetes.io/ssl-redirect: '443' # External DNS - For creating a Record Set in Route53 - external-dns.alpha.kubernetes.io/hostname: app2.kubeoncloud.com + external-dns.alpha.kubernetes.io/hostname: app2201.stacksimplify.com # For Fargate alb.ingress.kubernetes.io/target-type: ip spec: rules: - http: - paths: - - path: /* # SSL Redirect Setting + paths: + - path: /app2 + pathType: Prefix backend: - serviceName: ssl-redirect - servicePort: use-annotation - - path: /* - backend: - serviceName: app2-nginx-nodeport-service - servicePort: 80 + service: + name: app2-nginx-nodeport-service + port: + number: 80 # Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all rules. 
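The app2 Ingress above keeps `alb.ingress.kubernetes.io/target-type: ip` because the `ns-app2` workloads run on Fargate: there is no EC2 instance to register, so the ALB target group must register pod IPs directly. A quick check that the pods really landed on Fargate (a sketch; the node label shown is the standard EKS Fargate label, but exact output varies by cluster):
```t
# Pods in ns-app2 should be scheduled on nodes named like fargate-ip-10-x-x-x...
kubectl get pods -n ns-app2 -o wide

# Fargate nodes carry the compute-type label
kubectl get nodes -l eks.amazonaws.com/compute-type=fargate

# In the AWS console, the corresponding Target Group should show Target type = ip with pod IPs registered
```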
diff --git a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests/02-Applications/03-ns-ums/07-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests/02-Applications/03-ns-ums/07-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml index 0926b968..18ea85c7 100644 --- a/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests/02-Applications/03-ns-ums/07-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml +++ b/09-EKS-Workloads-on-Fargate/09-02-Fargate-Profiles-Advanced-YAML/kube-manifests/02-Applications/03-ns-ums/07-ALB-Ingress-SSL-Redirect-with-ExternalDNS.yml @@ -1,47 +1,47 @@ -# Annotations Reference: https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/ -apiVersion: extensions/v1beta1 +# Annotations Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/ingress/annotations/ +apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ums-ingress-service labels: app: usermgmt-restapp runon: fargate - namespace: ns-ums + namespace: ns-ums annotations: + # Load Balancer Name + alb.ingress.kubernetes.io/load-balancer-name: ums-ingress # Ingress Core Settings - kubernetes.io/ingress.class: "alb" + #kubernetes.io/ingress.class: "alb" (OLD INGRESS CLASS NOTATION - STILL WORKS BUT RECOMMENDED TO USE IngressClass Resource) alb.ingress.kubernetes.io/scheme: internet-facing # Health Check Settings alb.ingress.kubernetes.io/healthcheck-protocol: HTTP alb.ingress.kubernetes.io/healthcheck-port: traffic-port #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer - #alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15' alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5' alb.ingress.kubernetes.io/success-codes: '200' alb.ingress.kubernetes.io/healthy-threshold-count: '2' - alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' + alb.ingress.kubernetes.io/unhealthy-threshold-count: '2' ## SSL Settings alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]' - alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/9f042b5d-86fd-4fad-96d0-c81c5abc71e1 + alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used) # SSL Redirect Setting - alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}' + alb.ingress.kubernetes.io/ssl-redirect: '443' # External DNS - For creating a Record Set in Route53 - external-dns.alpha.kubernetes.io/hostname: ums.kubeoncloud.com + external-dns.alpha.kubernetes.io/hostname: ums1101.stacksimplify.com # For Fargate alb.ingress.kubernetes.io/target-type: ip spec: rules: - http: - paths: - - path: /* # SSL Redirect Setting + paths: + - path: / + pathType: Prefix backend: - serviceName: ssl-redirect - servicePort: use-annotation - - path: /* - backend: - serviceName: usermgmt-restapp-nodeport-service - servicePort: 8095 + service: + name: usermgmt-restapp-nodeport-service + port: + number: 8095 # Important Note-1: In path based routing order is very important, if we are going to use "/*", try to use it at the end of all 
rules. diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-01-LBC-NLB-Basic/README.md b/19-ELB-Network-LoadBalancers-with-LBC/19-01-LBC-NLB-Basic/README.md new file mode 100644 index 00000000..7ca19daf --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-01-LBC-NLB-Basic/README.md @@ -0,0 +1,121 @@ +--- +title: AWS Load Balancer Controller - NLB Basics +description: Learn to use AWS Network Load Balancer with AWS Load Balancer Controller +--- + +## Step-01: Introduction +- Understand more about + - **AWS Cloud Provider Load Balancer Controller (Legacy):** Creates AWS CLB and NLB + - **AWS Load Balancer Controller (Latest):** Creates AWS ALB and NLB +- Understand how the Kubernetes Service of Type Load Balancer which can create AWS NLB to be associated with latest `AWS Load Balancer Controller`. +- Understand various NLB Annotations + + +## Step-02: Review 01-Nginx-App3-Deployment.yml +- **File Name:** `kube-manifests/01-Nginx-App3-Deployment.yml` +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app3-nginx-deployment + labels: + app: app3-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app3-nginx + template: + metadata: + labels: + app: app3-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kubenginx:1.0.0 + ports: + - containerPort: 80 + +``` + +## Step-03: Review 02-LBC-NLB-LoadBalancer-Service.yml +- **File Name:** `kube-manifests\02-LBC-NLB-LoadBalancer-Service.yml` +```yaml +apiVersion: v1 +kind: Service +metadata: + name: basics-lbc-network-lb + annotations: + # Traffic Routing + service.beta.kubernetes.io/aws-load-balancer-name: basics-lbc-network-lb + service.beta.kubernetes.io/aws-load-balancer-type: external + service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance + #service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet ## Subnets are auto-discovered if this annotation is not specified, see Subnet Discovery for further details. + + # Health Check Settings + service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http + service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port + service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /index.html + service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3" + service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3" + service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10" # The controller currently ignores the timeout configuration due to the limitations on the AWS NLB. The default timeout for TCP is 10s and HTTP is 6s. + + # Access Control + service.beta.kubernetes.io/load-balancer-source-ranges: 0.0.0.0/0 + service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing" + + # AWS Resource Tags + service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=dev,Team=test +spec: + type: LoadBalancer + selector: + app: app3-nginx + ports: + - port: 80 + targetPort: 80 +``` + +## Step-04: Deploy all kube-manifests +```t +# Deploy kube-manifests +kubectl apply -f kube-manifests/ + +# Verify Pods +kubectl get pods + +# Verify Services +kubectl get svc +Observation: +1. Verify the network lb DNS name + +# Verify AWS Load Balancer Controller pod logs +kubectl -n kube-system get pods +kubectl -n kube-system logs -f + +# Verify using AWS Mgmt Console +Go to Services -> EC2 -> Load Balancing -> Load Balancers +1. Verify Description Tab - DNS Name matching output of "kubectl get svc" External IP +2. 
Verify Listeners Tab + +Go to Services -> EC2 -> Load Balancing -> Target Groups +1. Verify Registered targets +2. Verify Health Check path + +# Access Application +http:// +``` + +## Step-05: Clean-Up +```t +# Delete or Undeploy kube-manifests +kubectl delete -f kube-manifests/ + +# Verify if NLB deleted +In AWS Mgmt Console, +Go to Services -> EC2 -> Load Balancing -> Load Balancers +``` + +## References +- [Network Load Balancer](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html) +- [NLB Service](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/nlb/) +- [NLB Service Annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/) \ No newline at end of file diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-01-LBC-NLB-Basic/kube-manifests/01-Nginx-App3-Deployment.yml b/19-ELB-Network-LoadBalancers-with-LBC/19-01-LBC-NLB-Basic/kube-manifests/01-Nginx-App3-Deployment.yml new file mode 100644 index 00000000..c68a8665 --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-01-LBC-NLB-Basic/kube-manifests/01-Nginx-App3-Deployment.yml @@ -0,0 +1,21 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app3-nginx-deployment + labels: + app: app3-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app3-nginx + template: + metadata: + labels: + app: app3-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kubenginx:1.0.0 + ports: + - containerPort: 80 diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-01-LBC-NLB-Basic/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml b/19-ELB-Network-LoadBalancers-with-LBC/19-01-LBC-NLB-Basic/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml new file mode 100755 index 00000000..8671b200 --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-01-LBC-NLB-Basic/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml @@ -0,0 +1,32 @@ +apiVersion: v1 +kind: Service +metadata: + name: basics-lbc-network-lb + annotations: + # Traffic Routing + service.beta.kubernetes.io/aws-load-balancer-name: basics-lbc-network-lb + service.beta.kubernetes.io/aws-load-balancer-type: external + service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance # specifies the target type to configure for NLB. You can choose between instance and ip + #service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet ## Subnets are auto-discovered if this annotation is not specified, see Subnet Discovery for further details. + + # Health Check Settings + service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http + service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port + service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /index.html + service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3" + service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3" + service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10" # The controller currently ignores the timeout configuration due to the limitations on the AWS NLB. The default timeout for TCP is 10s and HTTP is 6s. + + # Access Control + service.beta.kubernetes.io/load-balancer-source-ranges: 0.0.0.0/0 # specifies the CIDRs that are allowed to access the NLB. 
+ service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing" # specifies whether the NLB will be internet-facing or internal + + # AWS Resource Tags + service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=dev,Team=test +spec: + type: LoadBalancer + selector: + app: app3-nginx + ports: + - port: 80 + targetPort: 80 diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-02-LBC-NLB-TLS/README.md b/19-ELB-Network-LoadBalancers-with-LBC/19-02-LBC-NLB-TLS/README.md new file mode 100644 index 00000000..367c2d12 --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-02-LBC-NLB-TLS/README.md @@ -0,0 +1,77 @@ +--- +title: AWS Load Balancer Controller - NLB TLS +description: Learn to use AWS Network Load Balancer TLS with AWS Load Balancer Controller +--- + +## Step-01: Introduction +- Understand about the 4 TLS Annotations for Network Load Balancers +- aws-load-balancer-ssl-cert +- aws-load-balancer-ssl-ports +- aws-load-balancer-ssl-negotiation-policy +- aws-load-balancer-ssl-negotiation-policy + +## Step-02: Review TLS Annotations +- **File Name:** `kube-manifests\02-LBC-NLB-LoadBalancer-Service.yml` +- **Security Policies:** https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html#describe-ssl-policies +```yaml + # TLS + service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d + service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443, # Specify this annotation if you need both TLS and non-TLS listeners on the same load balancer + service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06 + service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp +``` + + +## Step-03: Deploy all kube-manifests +```t +# Deploy kube-manifests +kubectl apply -f kube-manifests/ + +# Verify Pods +kubectl get pods + +# Verify Services +kubectl get svc +Observation: +1. Verify the network lb DNS name + +# Verify AWS Load Balancer Controller pod logs +kubectl -n kube-system get pods +kubectl -n kube-system logs -f + +# Verify using AWS Mgmt Console +Go to Services -> EC2 -> Load Balancing -> Load Balancers +1. Verify Description Tab - DNS Name matching output of "kubectl get svc" External IP +2. Verify Listeners Tab +Observation: Should see two listeners Port 80 and 443 + +Go to Services -> EC2 -> Load Balancing -> Target Groups +1. Verify Registered targets +2. Verify Health Check path +Observation: Should see two target groups. 
1 Target group for 1 listener + +# Access Application +# Test HTTP URL +http:// +http://lbc-network-lb-tls-demo-a956479ba85953f8.elb.us-east-1.amazonaws.com + +# Test HTTPS URL +https:// +https://lbc-network-lb-tls-demo-a956479ba85953f8.elb.us-east-1.amazonaws.com +``` + +## Step-04: Clean-Up +```t +# Delete or Undeploy kube-manifests +kubectl delete -f kube-manifests/ + +# Verify if NLB deleted +In AWS Mgmt Console, +Go to Services -> EC2 -> Load Balancing -> Load Balancers +``` + +## References +- [Network Load Balancer](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html) +- [NLB Service](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/nlb/) +- [NLB Service Annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/) + diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-02-LBC-NLB-TLS/kube-manifests/01-Nginx-App3-Deployment.yml b/19-ELB-Network-LoadBalancers-with-LBC/19-02-LBC-NLB-TLS/kube-manifests/01-Nginx-App3-Deployment.yml new file mode 100644 index 00000000..c68a8665 --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-02-LBC-NLB-TLS/kube-manifests/01-Nginx-App3-Deployment.yml @@ -0,0 +1,21 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app3-nginx-deployment + labels: + app: app3-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app3-nginx + template: + metadata: + labels: + app: app3-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kubenginx:1.0.0 + ports: + - containerPort: 80 diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-02-LBC-NLB-TLS/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml b/19-ELB-Network-LoadBalancers-with-LBC/19-02-LBC-NLB-TLS/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml new file mode 100755 index 00000000..972f38ac --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-02-LBC-NLB-TLS/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml @@ -0,0 +1,52 @@ +apiVersion: v1 +kind: Service +metadata: + name: tls-lbc-network-lb + annotations: + # Traffic Routing + service.beta.kubernetes.io/aws-load-balancer-name: tls-lbc-network-lb + service.beta.kubernetes.io/aws-load-balancer-type: external + service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance + #service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet ## Subnets are auto-discovered if this annotation is not specified, see Subnet Discovery for further details. + + # Health Check Settings + service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http + service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port + service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /index.html + service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3" + service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3" + service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10" # The controller currently ignores the timeout configuration due to the limitations on the AWS NLB. The default timeout for TCP is 10s and HTTP is 6s. 
+ + # Access Control + service.beta.kubernetes.io/load-balancer-source-ranges: 0.0.0.0/0 + service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing" + + # AWS Resource Tags + service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=dev,Team=test + + # TLS + service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d # specifies the ARN of one or more certificates managed by the AWS Certificate Manager. + service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443, # Specify this annotation if you need both TLS and non-TLS listeners on the same load balancer + service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06 # specifies the Security Policy for NLB frontend connections, allowing you to control the protocol and ciphers. + service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp # specifies whether to use TLS or TCP for the backend traffic between the load balancer and the kubernetes pods. +spec: + type: LoadBalancer + selector: + app: app3-nginx + ports: + - name: http + port: 80 # Creates NLB Port 80 Listener + targetPort: 80 # Creates NLB Port 80 Target Group-1 + - name: https + port: 443 # Creates NLB Port 443 Listener + targetPort: 80 # Creates NLB Port 80 Target Group-2 + - name: http81 + port: 81 # Creates NLB Port 81 Listener + targetPort: 80 # Creates NLB Port 80 Target Group-3 + - name: http82 + port: 82 # Creates NLB Port 82 Listener + targetPort: 80 # Creates NLB Port 80 Target Group-4 + +# Note-1: Listener to Target Group is a one to one Mapping +# Note-2: Every listener will have its own new Target Group created with that port mentioned in targetPort field +# Note-3: This might not be a effective way but unfortunately when you create via kubernetes service, thats the behavior \ No newline at end of file diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-03-LBC-NLB-ExternalDNS/README.md b/19-ELB-Network-LoadBalancers-with-LBC/19-03-LBC-NLB-ExternalDNS/README.md new file mode 100644 index 00000000..0548cb22 --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-03-LBC-NLB-ExternalDNS/README.md @@ -0,0 +1,78 @@ +--- +title: AWS Load Balancer Controller - NLB External DNS +description: Learn to use AWS Network Load Balancer & External DNS with AWS Load Balancer Controller +--- + +## Step-01: Introduction +- Implement External DNS Annotation in NLB Kubernetes Service Manifest + + +## Step-02: Review External DNS Annotations +- **File Name:** `kube-manifests\02-LBC-NLB-LoadBalancer-Service.yml` +```yaml + # External DNS - For creating a Record Set in Route53 + external-dns.alpha.kubernetes.io/hostname: nlbdns101.stacksimplify.com +``` + +## Step-03: Deploy all kube-manifests +```t +# Verify if External DNS Pod exists and Running +kubectl get pods +Observation: +external-dns pod should be running + +# Deploy kube-manifests +kubectl apply -f kube-manifests/ + +# Verify Pods +kubectl get pods + +# Verify Services +kubectl get svc +Observation: +1. Verify the network lb DNS name + +# Verify AWS Load Balancer Controller pod logs +kubectl -n kube-system get pods +kubectl -n kube-system logs -f + +# Verify using AWS Mgmt Console +Go to Services -> EC2 -> Load Balancing -> Load Balancers +1. Verify Description Tab - DNS Name matching output of "kubectl get svc" External IP +2. 
Verify Listeners Tab +Observation: Should see two listeners Port 80 and 443 + +Go to Services -> EC2 -> Load Balancing -> Target Groups +1. Verify Registered targets +2. Verify Health Check path +Observation: Should see two target groups. 1 Target group for 1 listener + +# Verify External DNS logs +kubectl logs -f $(kubectl get po | egrep -o 'external-dns[A-Za-z0-9-]+') + +# Perform nslookup Test +nslookup nlbdns101.stacksimplify.com + +# Access Application +# Test HTTP URL +http://nlbdns101.stacksimplify.com + +# Test HTTPS URL +https://nlbdns101.stacksimplify.com +``` + +## Step-04: Clean-Up +```t +# Delete or Undeploy kube-manifests +kubectl delete -f kube-manifests/ + +# Verify if NLB deleted +In AWS Mgmt Console, +Go to Services -> EC2 -> Load Balancing -> Load Balancers +``` + +## References +- [Network Load Balancer](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html) +- [NLB Service](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/nlb/) +- [NLB Service Annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/) + diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-03-LBC-NLB-ExternalDNS/kube-manifests/01-Nginx-App3-Deployment.yml b/19-ELB-Network-LoadBalancers-with-LBC/19-03-LBC-NLB-ExternalDNS/kube-manifests/01-Nginx-App3-Deployment.yml new file mode 100644 index 00000000..c68a8665 --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-03-LBC-NLB-ExternalDNS/kube-manifests/01-Nginx-App3-Deployment.yml @@ -0,0 +1,21 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: app3-nginx-deployment + labels: + app: app3-nginx +spec: + replicas: 1 + selector: + matchLabels: + app: app3-nginx + template: + metadata: + labels: + app: app3-nginx + spec: + containers: + - name: app2-nginx + image: stacksimplify/kubenginx:1.0.0 + ports: + - containerPort: 80 diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-03-LBC-NLB-ExternalDNS/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml b/19-ELB-Network-LoadBalancers-with-LBC/19-03-LBC-NLB-ExternalDNS/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml new file mode 100755 index 00000000..3f6a7f1a --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-03-LBC-NLB-ExternalDNS/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml @@ -0,0 +1,45 @@ +apiVersion: v1 +kind: Service +metadata: + name: extdns-lbc-network-lb + annotations: + # Traffic Routing + service.beta.kubernetes.io/aws-load-balancer-name: extdns-lbc-network-lb + service.beta.kubernetes.io/aws-load-balancer-type: external + service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance + #service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet ## Subnets are auto-discovered if this annotation is not specified, see Subnet Discovery for further details. + + # Health Check Settings + service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http + service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port + service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /index.html + service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3" + service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3" + service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10" # The controller currently ignores the timeout configuration due to the limitations on the AWS NLB. The default timeout for TCP is 10s and HTTP is 6s. 
+
+    # Access Control
+    service.beta.kubernetes.io/load-balancer-source-ranges: 0.0.0.0/0
+    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
+
+    # AWS Resource Tags
+    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=dev,Team=test
+
+    # TLS
+    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d
+    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443, # Specify this annotation if you need both TLS and non-TLS listeners on the same load balancer
+    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
+    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
+
+    # External DNS - For creating a Record Set in Route53
+    external-dns.alpha.kubernetes.io/hostname: nlbdns101.stacksimplify.com
+spec:
+  type: LoadBalancer
+  selector:
+    app: app3-nginx
+  ports:
+    - name: http
+      port: 80
+      targetPort: 80
+    - name: https
+      port: 443
+      targetPort: 80
diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-04-LBC-NLB-ElasticIP/README.md b/19-ELB-Network-LoadBalancers-with-LBC/19-04-LBC-NLB-ElasticIP/README.md new file mode 100644 index 00000000..3eab5032 --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-04-LBC-NLB-ElasticIP/README.md @@ -0,0 +1,87 @@
+---
+title: AWS Load Balancer Controller - NLB Elastic IP
+description: Learn to use AWS Network Load Balancer & Elastic IP with AWS Load Balancer Controller
+---
+
+## Step-01: Introduction
+- Create Elastic IPs
+- Update the NLB Service k8s manifest with the Elastic IP annotation and the EIP Allocation IDs
+
+## Step-02: Create two Elastic IPs and get EIP Allocation IDs
+- This configuration is optional and you can use it to assign static IP addresses to your NLB
+- You must specify the same number of EIP allocations as there are load balancer subnets
+- The NLB must be internet-facing
+```t
+# Elastic IP Allocation IDs
+eipalloc-07daf60991cfd21f0
+eipalloc-0a8e8f70a6c735d16
+```
+
+## Step-03: Review Elastic IP Annotations
+- **File Name:** `kube-manifests\02-LBC-NLB-LoadBalancer-Service.yml`
+```yaml
+  # Elastic IPs
+  service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-07daf60991cfd21f0, eipalloc-0a8e8f70a6c735d16
+```
+
+## Step-04: Deploy all kube-manifests
+```t
+# Deploy kube-manifests
+kubectl apply -f kube-manifests/
+
+# Verify Pods
+kubectl get pods
+
+# Verify Services
+kubectl get svc
+Observation:
+1. Verify the Network LB DNS name
+
+# Verify AWS Load Balancer Controller pod logs
+kubectl -n kube-system get pods
+kubectl -n kube-system logs -f <aws-load-balancer-controller-pod-name>
+
+# Verify using AWS Mgmt Console
+Go to Services -> EC2 -> Load Balancing -> Load Balancers
+1. Verify Description Tab - DNS Name matching output of "kubectl get svc" External IP
+2. Verify Listeners Tab
+Observation: Should see two listeners: Port 80 and 443
+
+Go to Services -> EC2 -> Load Balancing -> Target Groups
+1. Verify Registered targets
+2. Verify Health Check path
+Observation: Should see two target groups, one target group per listener
+
+# Perform nslookup Test
+nslookup nlbeip201.stacksimplify.com
+Observation:
+1. Verify the IP Address matches the Elastic IPs we created in Step-02
+
+# Access Application
+# Test HTTP URL
+http://nlbeip201.stacksimplify.com
+
+# Test HTTPS URL
+https://nlbeip201.stacksimplify.com
+```
+
+## Step-05: Clean-Up
+```t
+# Delete or Undeploy kube-manifests
+kubectl delete -f kube-manifests/
+
+# Delete Elastic IPs created
+In AWS Mgmt Console,
+Go to Services -> EC2 -> Network & Security -> Elastic IPs
+Delete the two EIPs we created
+
+# Verify if NLB deleted
+In AWS Mgmt Console,
+Go to Services -> EC2 -> Load Balancing -> Load Balancers
+```
+
+## References
+- [Network Load Balancer](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html)
+- [NLB Service](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/nlb/)
+- [NLB Service Annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/)
+
diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-04-LBC-NLB-ElasticIP/kube-manifests/01-Nginx-App3-Deployment.yml b/19-ELB-Network-LoadBalancers-with-LBC/19-04-LBC-NLB-ElasticIP/kube-manifests/01-Nginx-App3-Deployment.yml new file mode 100644 index 00000000..c68a8665 --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-04-LBC-NLB-ElasticIP/kube-manifests/01-Nginx-App3-Deployment.yml @@ -0,0 +1,21 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: app3-nginx-deployment
+  labels:
+    app: app3-nginx
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: app3-nginx
+  template:
+    metadata:
+      labels:
+        app: app3-nginx
+    spec:
+      containers:
+      - name: app2-nginx
+        image: stacksimplify/kubenginx:1.0.0
+        ports:
+        - containerPort: 80
diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-04-LBC-NLB-ElasticIP/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml b/19-ELB-Network-LoadBalancers-with-LBC/19-04-LBC-NLB-ElasticIP/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml new file mode 100755 index 00000000..3a558338 --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-04-LBC-NLB-ElasticIP/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml @@ -0,0 +1,48 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: elasticip-lbc-network-lb
+  annotations:
+    # Traffic Routing
+    service.beta.kubernetes.io/aws-load-balancer-name: elasticip-lbc-network-lb
+    service.beta.kubernetes.io/aws-load-balancer-type: external
+    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
+    #service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet ## Subnets are auto-discovered if this annotation is not specified, see Subnet Discovery for further details.
+
+    # Health Check Settings
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /index.html
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10" # The controller currently ignores the timeout configuration due to the limitations on the AWS NLB. The default timeout for TCP is 10s and HTTP is 6s.
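+    # Note (added comment): the load-balancer-source-ranges annotation below restricts which client CIDRs may reach the NLB;
+    # 0.0.0.0/0 allows all IPv4 clients.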
+
+    # Access Control
+    service.beta.kubernetes.io/load-balancer-source-ranges: 0.0.0.0/0
+    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
+
+    # AWS Resource Tags
+    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=dev,Team=test
+
+    # TLS
+    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d
+    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443, # Specify this annotation if you need both TLS and non-TLS listeners on the same load balancer
+    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
+    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
+
+    # External DNS - For creating a Record Set in Route53
+    external-dns.alpha.kubernetes.io/hostname: nlbeip201.stacksimplify.com
+
+    # Elastic IPs
+    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: eipalloc-068b65c8e0df2b53e, eipalloc-022d66b51f98706c6
+spec:
+  type: LoadBalancer
+  selector:
+    app: app3-nginx
+  ports:
+    - name: http
+      port: 80
+      targetPort: 80
+    - name: https
+      port: 443
+      targetPort: 80
diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-05-LBC-NLB-Internal/README.md b/19-ELB-Network-LoadBalancers-with-LBC/19-05-LBC-NLB-Internal/README.md new file mode 100644 index 00000000..50febc32 --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-05-LBC-NLB-Internal/README.md @@ -0,0 +1,81 @@
+---
+title: AWS Load Balancer Controller - Internal NLB
+description: Learn to create Internal AWS Network Load Balancer with Kubernetes
+---
+
+## Step-01: Introduction
+- Create an Internal NLB
+- Update the NLB Service k8s manifest with the `aws-load-balancer-scheme` annotation set to `internal`
+- Deploy a curl pod
+- Connect to the curl pod and access the Internal NLB endpoint using the `curl` command
+
+
+## Step-02: Review LB Scheme Annotation
+- **File Name:** `kube-manifests\02-LBC-NLB-LoadBalancer-Service.yml`
+```yaml
+  # Access Control
+  service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
+```
+
+## Step-03: Deploy all kube-manifests
+```t
+# Deploy kube-manifests
+kubectl apply -f kube-manifests/
+
+# Verify Pods
+kubectl get pods
+
+# Verify Services
+kubectl get svc
+Observation:
+1. Verify the Network LB DNS name
+
+# Verify AWS Load Balancer Controller pod logs
+kubectl -n kube-system get pods
+kubectl -n kube-system logs -f <aws-load-balancer-controller-pod-name>
+
+# Verify using AWS Mgmt Console
+Go to Services -> EC2 -> Load Balancing -> Load Balancers
+1. Verify Description Tab - DNS Name matching output of "kubectl get svc" External IP
+2. Verify Listeners Tab
+Observation: Should see one listener, on Port 80
+
+Go to Services -> EC2 -> Load Balancing -> Target Groups
+1. Verify Registered targets
+2. Verify Health Check path
+```
+
+## Step-04: Deploy curl pod and test Internal NLB
+```t
+# Deploy curl-pod
+kubectl apply -f kube-manifests-curl/
+
+# Open a terminal session into the container
+kubectl exec -it curl-pod -- sh
+
+# We can now curl external addresses or internal services:
+curl http://google.com/
+curl <internal-service-name>
+
+# Internal Network LB Curl Test
+curl lbc-network-lb-internal-demo-7031ade4ca457080.elb.us-east-1.amazonaws.com
+```
+
+
+## Step-05: Clean-Up
+```t
+# Delete or Undeploy kube-manifests
+kubectl delete -f kube-manifests/
+kubectl delete -f kube-manifests-curl/
+
+# Verify if NLB deleted
+In AWS Mgmt Console,
+Go to Services -> EC2 -> Load Balancing -> Load Balancers
+```
+
+## References
+- [Network Load Balancer](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html)
+- [NLB Service](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/nlb/)
+- [NLB Service Annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/)
+
+
diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-05-LBC-NLB-Internal/kube-manifests-curl/01-curl-pod.yml b/19-ELB-Network-LoadBalancers-with-LBC/19-05-LBC-NLB-Internal/kube-manifests-curl/01-curl-pod.yml new file mode 100644 index 00000000..a9d6c513 --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-05-LBC-NLB-Internal/kube-manifests-curl/01-curl-pod.yml @@ -0,0 +1,9 @@
+apiVersion: v1
+kind: Pod
+metadata:
+  name: curl-pod
+spec:
+  containers:
+  - name: curl
+    image: curlimages/curl
+    command: [ "sleep", "600" ]
\ No newline at end of file
diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-05-LBC-NLB-Internal/kube-manifests/01-Nginx-App3-Deployment.yml b/19-ELB-Network-LoadBalancers-with-LBC/19-05-LBC-NLB-Internal/kube-manifests/01-Nginx-App3-Deployment.yml new file mode 100644 index 00000000..c68a8665 --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-05-LBC-NLB-Internal/kube-manifests/01-Nginx-App3-Deployment.yml @@ -0,0 +1,21 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: app3-nginx-deployment
+  labels:
+    app: app3-nginx
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: app3-nginx
+  template:
+    metadata:
+      labels:
+        app: app3-nginx
+    spec:
+      containers:
+      - name: app2-nginx
+        image: stacksimplify/kubenginx:1.0.0
+        ports:
+        - containerPort: 80
diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-05-LBC-NLB-Internal/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml b/19-ELB-Network-LoadBalancers-with-LBC/19-05-LBC-NLB-Internal/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml new file mode 100755 index 00000000..ce56834b --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-05-LBC-NLB-Internal/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml @@ -0,0 +1,33 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: lbc-network-lb-internal
+  annotations:
+    # Traffic Routing
+    service.beta.kubernetes.io/aws-load-balancer-name: lbc-network-lb-internal
+    service.beta.kubernetes.io/aws-load-balancer-type: external
+    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
+    #service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet ## Subnets are auto-discovered if this annotation is not specified, see Subnet Discovery for further details.
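+    # Note (added comment): aws-load-balancer-type "external" means this Service is managed by the AWS Load Balancer Controller
+    # rather than the legacy in-tree controller; with target-type "instance" the worker nodes are registered via NodePort.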
+
+    # Health Check Settings
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /index.html
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10" # The controller currently ignores the timeout configuration due to the limitations on the AWS NLB. The default timeout for TCP is 10s and HTTP is 6s.
+
+    # Access Control
+    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
+    # The VPC CIDR will be used if service.beta.kubernetes.io/aws-load-balancer-scheme is internal
+    #service.beta.kubernetes.io/load-balancer-source-ranges: 0.0.0.0/0
+
+    # AWS Resource Tags
+    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=dev,Team=test
+spec:
+  type: LoadBalancer
+  selector:
+    app: app3-nginx
+  ports:
+    - port: 80
+      targetPort: 80
\ No newline at end of file
diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-06-LBC-NLB-Fargate-External/README.md b/19-ELB-Network-LoadBalancers-with-LBC/19-06-LBC-NLB-Fargate-External/README.md new file mode 100644 index 00000000..b7951ae6 --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-06-LBC-NLB-Fargate-External/README.md @@ -0,0 +1,154 @@
+---
+title: AWS Load Balancer Controller - NLB & Fargate
+description: Learn to use AWS Network Load Balancer with Fargate Pods
+---
+
+## Step-01: Introduction
+- Create an advanced AWS Fargate Profile
+- Schedule App3 on a Fargate Pod
+- Update the NLB annotation `aws-load-balancer-nlb-target-type` from `instance` to `ip`
+
+## Step-02: Review Fargate Profile
+- **File Name:** `fargate-profile/01-fargate-profiles.yml`
+```yaml
+apiVersion: eksctl.io/v1alpha5
+kind: ClusterConfig
+metadata:
+  name: eksdemo1  # Name of the EKS Cluster
+  region: us-east-1
+fargateProfiles:
+  - name: fp-app3
+    selectors:
+      # All workloads in the "ns-app3" Kubernetes namespace will be
+      # scheduled onto Fargate:
+      - namespace: ns-app3
+```
+
+## Step-03: Create Fargate Profile
+```t
+# Change Directory
+cd 19-06-LBC-NLB-Fargate-External
+
+# Create Fargate Profile
+eksctl create fargateprofile -f fargate-profile/01-fargate-profiles.yml
+```
+
+## Step-04: Update Annotation aws-load-balancer-nlb-target-type to IP
+- **File Name:** `kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml`
+```yaml
+service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip # For Fargate Workloads we should use target-type as ip
+```
+
+## Step-05: Review the k8s Deployment Metadata for namespace
+- **File Name:** `kube-manifests/01-Nginx-App3-Deployment.yml`
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: app3-nginx-deployment
+  labels:
+    app: app3-nginx
+  namespace: ns-app3  # Update Namespace given in Fargate Profile 01-fargate-profiles.yml
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      app: app3-nginx
+  template:
+    metadata:
+      labels:
+        app: app3-nginx
+    spec:
+      containers:
+      - name: app2-nginx
+        image: stacksimplify/kubenginx:1.0.0
+        ports:
+        - containerPort: 80
+        resources:
+          requests:
+            memory: "128Mi"
+            cpu: "500m"
+          limits:
+            memory: "500Mi"
+            cpu: "1000m"
+```
+
+## Step-06: Deploy all kube-manifests
+```t
+# Deploy kube-manifests
+kubectl apply -f kube-manifests/
+
+# Verify Pods
+kubectl get pods -o wide
+Observation:
+1. It will take a couple of minutes for the pod to move from Pending to Running state, because it is scheduled on Fargate.
+
+# Verify Worker Nodes
+kubectl get nodes -o wide
+Observation:
+1. Wait for the Fargate worker node to be created
+
+# Verify Services
+kubectl get svc
+Observation:
+1. Verify the Network LB DNS name
+
+# Verify AWS Load Balancer Controller pod logs
+kubectl -n kube-system get pods
+kubectl -n kube-system logs -f <aws-load-balancer-controller-pod-name>
+
+# Verify using AWS Mgmt Console
+Go to Services -> EC2 -> Load Balancing -> Load Balancers
+1. Verify Description Tab - DNS Name matching output of "kubectl get svc" External IP
+2. Verify Listeners Tab
+
+Go to Services -> EC2 -> Load Balancing -> Target Groups
+1. Verify Registered targets
+2. Verify Health Check path
+
+# Perform nslookup Test
+nslookup nlbfargate901.stacksimplify.com
+
+# Access Application
+# Test HTTP URL
+http://nlbfargate901.stacksimplify.com
+
+# Test HTTPS URL
+https://nlbfargate901.stacksimplify.com
+```
+
+## Step-07: Clean-Up
+```t
+# Delete or Undeploy kube-manifests
+kubectl delete -f kube-manifests/
+
+# Verify if NLB deleted
+In AWS Mgmt Console,
+Go to Services -> EC2 -> Load Balancing -> Load Balancers
+```
+
+## References
+- [Network Load Balancer](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html)
+- [NLB Service](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/nlb/)
+- [NLB Service Annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/service/annotations/)
+
+
+## Step-08: Delete Fargate Profile
+```t
+# List Fargate Profiles
+eksctl get fargateprofile --cluster eksdemo1
+
+# Delete Fargate Profile
+eksctl delete fargateprofile --cluster eksdemo1 --name <fargate-profile-name> --wait
+
+eksctl delete fargateprofile --cluster eksdemo1 --name fp-app3 --wait
+```
diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-06-LBC-NLB-Fargate-External/fargate-profile/01-fargate-profiles.yml b/19-ELB-Network-LoadBalancers-with-LBC/19-06-LBC-NLB-Fargate-External/fargate-profile/01-fargate-profiles.yml new file mode 100644 index 00000000..00799dac --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-06-LBC-NLB-Fargate-External/fargate-profile/01-fargate-profiles.yml @@ -0,0 +1,11 @@
+apiVersion: eksctl.io/v1alpha5
+kind: ClusterConfig
+metadata:
+  name: eksdemo1  # Name of the EKS Cluster
+  region: us-east-1
+fargateProfiles:
+  - name: fp-app3
+    selectors:
+      # All workloads in the "ns-app3" Kubernetes namespace will be
+      # scheduled onto Fargate:
+      - namespace: ns-app3
\ No newline at end of file
diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-06-LBC-NLB-Fargate-External/kube-manifests/00-namespace.yml b/19-ELB-Network-LoadBalancers-with-LBC/19-06-LBC-NLB-Fargate-External/kube-manifests/00-namespace.yml new file mode 100644 index 00000000..76ba44e0 --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-06-LBC-NLB-Fargate-External/kube-manifests/00-namespace.yml @@ -0,0 +1,5 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: ns-app3
+# Apps deployed in this namespace will run on the Fargate profile fp-app3
\ No newline at end of file
diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-06-LBC-NLB-Fargate-External/kube-manifests/01-Nginx-App3-Deployment.yml b/19-ELB-Network-LoadBalancers-with-LBC/19-06-LBC-NLB-Fargate-External/kube-manifests/01-Nginx-App3-Deployment.yml new file mode 100644 index 00000000..c6ca6f5b --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-06-LBC-NLB-Fargate-External/kube-manifests/01-Nginx-App3-Deployment.yml @@ -0,0 +1,29 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: app3-nginx-deployment
+  labels:
+    app: app3-nginx
+  namespace: ns-app3  # Update Namespace given in Fargate Profile 01-fargate-profiles.yml
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      app: app3-nginx
+  template:
+    metadata:
+      labels:
+        app: app3-nginx
+    spec:
+      containers:
+      - name: app2-nginx
+        image: stacksimplify/kubenginx:1.0.0
+        ports:
+        - containerPort: 80
+        resources:
+          requests:
+            memory: "128Mi"
+            cpu: "500m"
+          limits:
+            memory: "500Mi"
+            cpu: "1000m"
diff --git a/19-ELB-Network-LoadBalancers-with-LBC/19-06-LBC-NLB-Fargate-External/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml b/19-ELB-Network-LoadBalancers-with-LBC/19-06-LBC-NLB-Fargate-External/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml new file mode 100755 index 00000000..bff68c74 --- /dev/null +++ b/19-ELB-Network-LoadBalancers-with-LBC/19-06-LBC-NLB-Fargate-External/kube-manifests/02-LBC-NLB-LoadBalancer-Service.yml @@ -0,0 +1,46 @@
+apiVersion: v1
+kind: Service
+metadata:
+  name: fargate-lbc-network-lb
+  namespace: ns-app3
+  annotations:
+    # Traffic Routing
+    service.beta.kubernetes.io/aws-load-balancer-name: fargate-lbc-network-lb
+    service.beta.kubernetes.io/aws-load-balancer-type: external
+    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip # For Fargate Workloads we should use target-type as ip
+    #service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-xxxx, mySubnet ## Subnets are auto-discovered if this annotation is not specified, see Subnet Discovery for further details.
+
+    # Health Check Settings
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: http
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /index.html
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "3"
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
+    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10" # The controller currently ignores the timeout configuration due to the limitations on the AWS NLB. The default timeout for TCP is 10s and HTTP is 6s.
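+    # Note (added comment): with target-type "ip" the NLB registers the pod IPs directly in the target group;
+    # this is required for Fargate pods, since there is no EC2 instance/NodePort to register.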
+
+    # Access Control
+    service.beta.kubernetes.io/load-balancer-source-ranges: 0.0.0.0/0
+    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
+
+    # AWS Resource Tags
+    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: Environment=dev,Team=test
+
+    # TLS
+    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:180789647333:certificate/d86de939-8ffd-410f-adce-0ce1f5be6e0d
+    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443, # Specify this annotation if you need both TLS and non-TLS listeners on the same load balancer
+    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
+    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
+
+    # External DNS - For creating a Record Set in Route53
+    external-dns.alpha.kubernetes.io/hostname: nlbfargate901.stacksimplify.com
+spec:
+  type: LoadBalancer
+  selector:
+    app: app3-nginx
+  ports:
+    - name: http
+      port: 80
+      targetPort: 80
+    - name: https
+      port: 443
+      targetPort: 80
diff --git a/presentation/.DS_Store b/presentation/.DS_Store index 5008ddfc..158d76ac 100644 Binary files a/presentation/.DS_Store and b/presentation/.DS_Store differ
diff --git a/presentation/AWS-Fargate-and-EKS-Masterclass-V8.pptx b/presentation/AWS-Fargate-and-EKS-Masterclass-V8.pptx new file mode 100644 index 00000000..7b36823b Binary files /dev/null and b/presentation/AWS-Fargate-and-EKS-Masterclass-V8.pptx differ
diff --git a/presentation/archive-ppts/.DS_Store b/presentation/archive-ppts/.DS_Store new file mode 100644 index 00000000..5008ddfc Binary files /dev/null and b/presentation/archive-ppts/.DS_Store differ
diff --git a/presentation/archive-ppts/AWS-Fargate-and-EKS-Masterclass-V7.pptx b/presentation/archive-ppts/AWS-Fargate-and-EKS-Masterclass-V7.pptx new file mode 100644 index 00000000..b73423d0 Binary files /dev/null and b/presentation/archive-ppts/AWS-Fargate-and-EKS-Masterclass-V7.pptx differ
diff --git a/presentation/AWS-Fargate-and-EKS-Masterclass.pptx b/presentation/archive-ppts/before-v7-AWS-Fargate-and-EKS-Masterclass.pptx similarity index 100% rename from presentation/AWS-Fargate-and-EKS-Masterclass.pptx rename to presentation/archive-ppts/before-v7-AWS-Fargate-and-EKS-Masterclass.pptx