- Build and Test Services using Maven
- Docker
- Amazon ECS and AWS Fargate
- Deployment: Create Cluster using AWS Console
- Deployment: Create Cluster using AWS CloudFormation
- Deployment: Create Cluster using AWS CLI
- Deployment: Deploy Tasks and Service using AWS CLI
- Deployment: Deploy Tasks and Service using ECS CLI
- Deployment: Create Cluster and Deploy Fargate Tasks using CloudFormation
- Deployment: Create Cluster and Deploy EC2 Tasks using CloudFormation
- Deployment Pipeline: AWS CodePipeline
- Monitoring: AWS X-Ray
- Monitoring: Prometheus and Grafana
- Kubernetes
- AWS Lambda
- License
This repo contains a simple application that consists of three microservices:
- "webapp": a web application microservice that uses the "greeting" and "name" microservices to generate a greeting for a person.
- "greeting": a microservice that returns a greeting.
- "name": a microservice that returns a person's name based upon the {id} in the URL.
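For example, with all three services running locally (using the ports from the walkthrough below), the composition looks roughly like this. The individual responses are assumptions inferred from the combined "Hello Sheldon" output shown later in this README:

curl http://localhost:8081/resources/greeting   # greeting service, e.g. "Hello"
curl http://localhost:8082/resources/names/1    # name service, e.g. "Sheldon"
curl http://localhost:8080/                     # webapp combines both, e.g. "Hello Sheldon"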
Each application is deployed using different AWS compute options. Each deployment model will highlight:
- Local dev, test and debug
- Deployment options
- Deployment pipeline
- Logging
- Monitoring
Change to the “services” directory: cd services
- Greeting service in Terminal 1:
  mvn -pl greeting wildfly-swarm:run
  Optionally test:
  curl http://localhost:8081/resources/greeting
- Name service in Terminal 2:
  mvn -pl name wildfly-swarm:run
  Optionally test:
  curl http://localhost:8082/resources/names/1
- Webapp service in Terminal 3:
  mvn -pl webapp wildfly-swarm:run
- Invoke the application in Terminal 4:
  curl http://localhost:8080/
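  If all three services are up, this should return a greeting such as "Hello Sheldon" (the expected output shown later in the ECS walkthrough).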
Create the Docker image for a single service:
mvn -pl <service> package -Pdocker
where <service> is greeting, name, or webapp.
Docker images for all the services can be created from the "services" directory:
mvn package -Pdocker
By default, the Docker image name is arungupta/<service>. The image can be created in your repo:
mvn package -Pdocker -Ddocker.repo=<repo>
By default, the latest tag is used for the image. A different tag may be specified as:
mvn package -Pdocker -Ddocker.tag=<tag>
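The two properties can be combined. For example, to build all the images into a hypothetical repo myrepo with tag 1.0 (repo and tag values here are placeholders):
mvn package -Pdocker -Ddocker.repo=myrepo -Ddocker.tag=1.0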
Push the images to the repo:
docker push <repo>/greeting:<tag>
docker push <repo>/name:<tag>
docker push <repo>/webapp:<tag>
This is a workaround until aws-samples#4 is fixed.
- docker swarm init
- cd apps/docker
- docker stack deploy --compose-file docker-compose.yaml myapp
- Access the application:
  curl http://localhost:80
- Optionally test the endpoints:
  - Greeting endpoint:
    curl http://localhost:8081/resources/greeting
  - Name endpoint:
    curl http://localhost:8082/resources/names/1
- Remove the stack:
  docker stack rm myapp
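For reference, below is a minimal sketch of what the docker-compose.yaml used above might contain. The actual file is in apps/docker; the service names and port mappings here are assumptions based on the endpoints shown above:

version: '3'
services:
  greeting:
    image: arungupta/greeting
    ports:
      - "8081:8081"
  name:
    image: arungupta/name
    ports:
      - "8082:8082"
  webapp:
    image: arungupta/webapp
    ports:
      - "80:8080"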
This section will explain how to create an Amazon ECS cluster. The three microservices will be deployed on this cluster using both the Fargate and EC2 launch types.
Note: AWS Fargate is only supported in the us-east-1 region at this time. The instructions will only work in that region.
This section will explain how to create an ECS cluster using the AWS Console.
Complete instructions are available at https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create_cluster.html.
This section will explain how to create an ECS cluster using CloudFormation. The cluster can be created with a private VPC or a public VPC. The CloudFormation templates for the different types are available at https://github.com/awslabs/aws-cloudformation-templates/tree/master/aws/services/ECS/EC2LaunchType/clusters.
This section will create a 3-instance cluster using a public VPC:
curl -O https://raw.githubusercontent.com/awslabs/aws-cloudformation-templates/master/aws/services/ECS/EC2LaunchType/clusters/public-vpc.yml
aws cloudformation deploy \
--stack-name MyECSCluster \
--template-file public-vpc.yml \
--region us-east-1 \
--capabilities CAPABILITY_IAM
List the cluster using the aws ecs list-clusters command:
{
"clusterArns": [
"arn:aws:ecs:us-east-1:091144949931:cluster/MyECSCluster-ECSCluster-197YNE1ZHPSOP"
]
}
This section will explain how to create an ECS cluster using the AWS CLI.
Make sure to use aws configure to configure the us-east-1 region.
- Create an ECS cluster:
  aws ecs create-cluster --cluster-name MyCluster
- Create and/or identify the AWS resources needed for the cluster:
  - Get the list of key pairs:
    aws ec2 describe-key-pairs
  - Create a security group. The default VPC id will be used for the security group. Find it:
    aws ec2 describe-vpcs \
      --query 'Vpcs[?IsDefault==`true`].VpcId' \
      --output text
    Now, create the security group:
    aws ec2 create-security-group \
      --group-name MySecurityGroup \
      --vpc-id <default-vpc-id> \
      --description "ECS cluster security group"
    Note the security group id; it'll be used in the next step.
  - Enable SSH (port 22) and HTTP (port 80) ingress:
    aws ec2 authorize-security-group-ingress \
      --group-id <security-group> \
      --protocol tcp \
      --port 22 \
      --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress \
      --group-id <security-group> \
      --protocol tcp \
      --port 80 \
      --cidr 0.0.0.0/0
  - Container instances that run the ECS agent require an IAM policy and role so the service knows that the agent belongs to you. Before you can launch container instances and register them into a cluster, you must create an IAM role for those container instances to use when they are launched.
    If you've created an ECS cluster using the console, then ecsInstanceRole was already created for you. Check if the role exists:
    aws iam list-roles \
      --query 'Roles[?RoleName==`ecsInstanceRole`]'
    If the output is empty, the role does not exist and needs to be created. Create the role and attach the appropriate policies:
    aws iam create-role \
      --role-name ecsInstanceRole2 \
      --assume-role-policy-document file://ecs-assume-role-policy.json
    aws iam put-role-policy \
      --role-name ecsInstanceRole2 \
      --policy-name MyPolicy \
      --policy-document file://ecs-instance-role-policy.json
    aws iam attach-role-policy \
      --role-name ecsInstanceRole2 \
      --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole
    aws iam attach-role-policy \
      --role-name ecsInstanceRole2 \
      --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
    The container instance will be launched with these IAM permissions. This allows the ECS agent on the EC2 instance to connect to the ECS cluster.
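    The exact contents of ecs-assume-role-policy.json are in the repo. As a sketch, an EC2 trust policy of this kind typically looks like:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "Service": "ec2.amazonaws.com" },
          "Action": "sts:AssumeRole"
        }
      ]
    }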
- Launch an instance (work in progress):
  aws ec2 run-instances \
    --image-id ami-cb17d8b6 \
    --count 3 \
    --instance-type c4.large \
    --key-name arun-us-east1 \
    --security-group-ids sg-059060666247be0db \
    --iam-instance-profile ecsInstanceRole2
  The AMI id is for us-east-1 as that is the only region where Fargate is currently supported. The complete list of AMI ids is at https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html. By default, this instance is launched using a default subnet from the default VPC. Alternatively, you can use --subnet-id to specify a subnet from a different VPC. This is causing aws-samples#121.
- List the container instances in the cluster:
  aws ecs list-container-instances --cluster MyCluster
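  To confirm that all three instances registered, you can also check the instance count on the cluster (a standard AWS CLI check, not part of the original walkthrough):
  aws ecs describe-clusters \
    --clusters MyCluster \
    --query 'clusters[0].registeredContainerInstancesCount'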
This concludes the creation of an ECS cluster using AWS CLI.
This section will explain how to deploy tasks and services in Fargate and EC2 modes using the AWS CLI. Differences between the two deployment modes will be clearly highlighted.
This section will explain how to create an ECS cluster using a CloudFormation template. The tasks are then deployed using ECS CLI and Docker Compose definitions.
- Run the CloudFormation template to create the AWS resources. A launch template link is provided for the N. Virginia (us-east-1) region.
- Run the following command to capture the output from the CloudFormation template as key/value pairs in the file ecs-cluster.props. These will be used to set up environment variables which are used subsequently:
  aws cloudformation describe-stacks \
    --stack-name aws-microservices-deploy-options-ecscli \
    --query 'Stacks[0].Outputs' \
    --output=text | \
    perl -lpe 's/\s+/=/g' | \
    tee ecs-cluster.props
- Set up the environment variables using this file:
  set -o allexport
  source ecs-cluster.props
  set +o allexport
- Configure the ECS CLI:
  ecs-cli configure --cluster $ECSCluster --region us-east-1 --default-launch-type FARGATE
- Create the task definition parameters for each of the services (a sketch of the generated format follows this step):
  ecs-params-create.sh greeting
  ecs-params-create.sh name
  ecs-params-create.sh webapp
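  The script generates one file per service, e.g. ecs-params_greeting.yaml. A representative sketch of the ECS CLI ecs-params format is shown below; all values here are placeholders, not the script's actual output:

version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    cpu_limit: 256
    mem_limit: 0.5GB
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - subnet-abcd1234
      security_groups:
        - sg-abcd1234
      assign_public_ip: ENABLED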
- Bring the greeting service up:
  ecs-cli compose --verbose \
    --file greeting-docker-compose.yaml \
    --task-role-arn $ECSRole \
    --ecs-params ecs-params_greeting.yaml \
    --project-name greeting \
    service up \
    --target-group-arn $GreetingTargetGroupArn \
    --container-name greeting-service \
    --container-port 8081
- Bring the name service up:
  ecs-cli compose --verbose \
    --file name-docker-compose.yaml \
    --task-role-arn $ECSRole \
    --ecs-params ecs-params_name.yaml \
    --project-name name \
    service up \
    --target-group-arn $NameTargetGroupArn \
    --container-name name-service \
    --container-port 8082
- Bring the webapp service up:
  ecs-cli compose --verbose \
    --file webapp-docker-compose.yaml \
    --task-role-arn $ECSRole \
    --ecs-params ecs-params_webapp.yaml \
    --project-name webapp \
    service up \
    --target-group-arn $WebappTargetGroupArn \
    --container-name webapp-service \
    --container-port 8080
  Docker Compose supports environment variable substitution. The webapp-docker-compose.yaml file uses $PrivateALBCName to refer to the private Application Load Balancer for the greeting and name services.
Check the
healthy
status of different services:aws elbv2 describe-target-health \ --target-group-arn $GreetingTargetGroupArn \ --query 'TargetHealthDescriptions[0].TargetHealth.State' \ --output text aws elbv2 describe-target-health \ --target-group-arn $NameTargetGroupArn \ --query 'TargetHealthDescriptions[0].TargetHealth.State' \ --output text aws elbv2 describe-target-health \ --target-group-arn $WebappTargetGroupArn \ --query 'TargetHealthDescriptions[0].TargetHealth.State' \ --output text
- Once all the services are in a healthy state, get a response from the webapp service:
  curl http://"$ALBPublicCNAME"
  Hello Sheldon
Bring the greeting service down:
ecs-cli compose --verbose \
--file greeting-docker-compose.yaml \
--task-role-arn $ECSRole \
--ecs-params ecs-params_greeting.yaml \
--project-name greeting \
service down
Bring the name service down:
ecs-cli compose --verbose \
--file name-docker-compose.yaml \
--task-role-arn $ECSRole \
--ecs-params ecs-params_name.yaml \
--project-name name \
service down
Bring the webapp service down:
ecs-cli compose --verbose \
--file webapp-docker-compose.yaml \
--task-role-arn $ECSRole \
--ecs-params ecs-params_webapp.yaml \
--project-name webapp \
service down
Finally, delete the CloudFormation stack:
aws cloudformation delete-stack --region us-east-1 --stack-name aws-microservices-deploy-options-ecscli
This section creates an ECS cluster and deploys Fargate tasks to the cluster:
A CloudFormation launch template link is provided for the N. Virginia (us-east-1) region.
Retrieve the public endpoint to test your application deployment:
aws cloudformation \
describe-stacks \
--region us-east-1 \
--stack-name aws-compute-options-fargate \
--query 'Stacks[].Outputs[?OutputKey==`PublicALBCNAME`].[OutputValue]' \
--output text
Use this command to test:
curl http://<public_endpoint>
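If the deployment succeeded, this should return a greeting such as "Hello Sheldon", matching the output shown in the ECS CLI walkthrough above.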
This section creates an ECS cluster and deploys EC2 tasks to the cluster:
A CloudFormation launch template link is provided for the N. Virginia (us-east-1) region.
Retrieve the public endpoint to test your application deployment:
aws cloudformation \
describe-stacks \
--region us-east-1 \
--stack-name aws-compute-options-ecs \
--query 'Stacks[].Outputs[?OutputKey==`PublicALBCNAME`].[OutputValue]' \
--output text
Use this command to test:
curl http://<public_endpoint>
Make sure the kubectl CLI is installed and configured for the Kubernetes cluster.
- Apply the manifests:
  kubectl apply -f apps/k8s/standalone/manifest.yml
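  Optionally verify that the pods are running and that the webapp service has been assigned a load balancer (standard kubectl checks; the service name matches the next step):
  kubectl get pods
  kubectl get svc webapp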
- Access the application:
  curl http://$(kubectl get svc/webapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
- Delete the application:
  kubectl delete -f apps/k8s/standalone/manifest.yml
Make sure the kubectl CLI is installed and configured for the Kubernetes cluster. Also, make sure Helm is installed on that Kubernetes cluster.
- Install the Helm CLI:
  brew install kubernetes-helm
- Install Helm in the Kubernetes cluster:
  helm init
- Install the Helm chart:
  helm install --name myapp apps/k8s/helm/myapp
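  Optionally verify the release and its pods (standard Helm 2 commands, matching the helm init workflow above):
  helm ls
  kubectl get pods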
- By default, the latest tag for an image is used. Alternatively, a different tag for the image can be specified:
  helm install --name myapp apps/k8s/helm/myapp --set "docker.tag=<tag>"
- Access the application:
  curl http://$(kubectl get svc/myapp-webapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
- Delete the Helm chart:
  helm delete --purge myapp
Make sure the kubectl CLI is installed and configured for the Kubernetes cluster.
- Install ksonnet from the homebrew tap:
  brew install ksonnet/tap/ks
- Change into the ksonnet sub-directory:
  cd apps/k8s/ksonnet/myapp
- Add the environment:
  ks env add default
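  Before deploying in the next step, you can optionally preview the manifests that will be applied (a standard ksonnet command):
  ks show default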
- Deploy the manifests:
  ks apply default
- Access the application:
  curl http://$(kubectl get svc/webapp -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
- Delete the application:
  ks delete default
This section explains how to set up a deployment pipeline using AWS CodePipeline.
CloudFormation templates for different regions are listed at https://github.com/aws-samples/aws-kube-codesuite. A launch template link is provided for the Oregon (us-west-2) region.
Create a deployment pipeline using Jenkins X.
- Install the Jenkins X CLI:
  brew tap jenkins-x/jx
  brew install jx
- Create the Kubernetes cluster:
  jx create cluster aws
  This will create a Kubernetes cluster on AWS using kops. This cluster will have RBAC enabled. It will also have insecure registries enabled. These are needed by the pipeline to store Docker images.
- Clone the repo:
  git clone https://github.com/arun-gupta/docker-kubernetes-hello-world
- Import the project into Jenkins X:
  jx import
  This will generate a Dockerfile and Helm charts, if they don't already exist. It also creates a Jenkinsfile with the different build stages identified. Finally, it triggers a Jenkins build and deploys the application to a staging environment by default.
- View the Jenkins console using jx console. Select the user, project, and branch to see the deployment pipeline.
- Get the staging URL using jx get apps and view the output from the application in a browser window.
- Now change the message displayed by HelloHandler and push to the GitHub repo. Make sure to change the corresponding test as well, otherwise the pipeline will fail. Wait for the deployment to complete and then refresh the browser page to see the updated output.
- The arungupta/xray:us-west-2 Docker image is already available on Docker Hub. Optionally, you may build and push the image yourself (building with the us-west-2 tag so that the subsequent push finds it):
  cd config/xray
  docker build -t arungupta/xray:us-west-2 .
  docker image push arungupta/xray:us-west-2
- Deploy the DaemonSet:
  kubectl apply -f xray-daemonset.yaml
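  For reference, below is a minimal sketch of what such an X-Ray daemon DaemonSet might contain. The actual manifest is xray-daemonset.yaml in the repo; UDP port 2000 is the X-Ray daemon's default listening port:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: xray-daemon
spec:
  selector:
    matchLabels:
      app: xray-daemon
  template:
    metadata:
      labels:
        app: xray-daemon
    spec:
      containers:
      - name: xray-daemon
        image: arungupta/xray:us-west-2
        ports:
        - containerPort: 2000
          protocol: UDP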
- Deploy the application using the Helm charts.
- Access the application.
- Open the X-Ray console and watch the service map and traces. This is tracked as #60.
Serverless Application Model (SAM) defines a standard application model for serverless applications. It extends AWS CloudFormation to provide a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.
sam is the AWS CLI tool for managing serverless applications written with SAM. Install the SAM CLI as:
npm install -g aws-sam-local
The complete installation steps for the SAM CLI are at https://github.com/awslabs/aws-sam-local#installation.
- Serverless applications are stored as deployment packages in an S3 bucket. Create an S3 bucket:
  aws s3api create-bucket \
    --bucket aws-microservices-deploy-options \
    --region us-west-2 \
    --create-bucket-configuration LocationConstraint=us-west-2
  Make sure to use a bucket name that is unique.
- Package the SAM application. This uploads the deployment package to the specified S3 bucket and generates a new file with the code location:
  cd apps/lambda
  sam package \
    --template-file sam.yaml \
    --s3-bucket <s3-bucket> \
    --output-template-file sam.transformed.yaml
- Create the resources:
  sam deploy \
    --template-file sam.transformed.yaml \
    --stack-name aws-microservices-deploy-options-lambda \
    --capabilities CAPABILITY_IAM
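  Optionally confirm that the stack was created successfully (a standard CloudFormation status check):
  aws cloudformation describe-stacks \
    --stack-name aws-microservices-deploy-options-lambda \
    --query 'Stacks[0].StackStatus' \
    --output text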
- Test the application:
  - Greeting endpoint:
    curl `aws cloudformation \
      describe-stacks \
      --stack-name aws-microservices-deploy-options-lambda \
      --query "Stacks[].Outputs[?OutputKey=='GreetingApiEndpoint'].[OutputValue]" \
      --output text`
  - Name endpoint:
    curl `aws cloudformation \
      describe-stacks \
      --stack-name aws-microservices-deploy-options-lambda \
      --query "Stacks[].Outputs[?OutputKey=='NamesApiEndpoint'].[OutputValue]" \
      --output text`
  - Webapp endpoint:
    curl `aws cloudformation \
      describe-stacks \
      --stack-name aws-microservices-deploy-options-lambda \
      --query "Stacks[].Outputs[?OutputKey=='WebappApiEndpoint'].[OutputValue]" \
      --output text`/1
sam local start-api --template sam.yaml --env-vars test/env-mac.json
-
Greeting endpoint:
curl http://127.0.0.1:3000/resources/greeting
-
Name endpoint:
-
Webapp endpoint:
curl http://127.0.0.1:3000/
This section will explain how to deploy Lambda + API Gateway via CodePipeline.
- Change to the application directory:
  cd apps/lambda
- Create the pipeline stack:
  aws cloudformation deploy \
    --template-file pipeline.yaml \
    --stack-name aws-compute-options-lambda-pipeline \
    --capabilities CAPABILITY_IAM
- Add the CodeCommit repo created by the stack as a Git remote:
  git remote add codecommit $(aws cloudformation describe-stacks \
    --stack-name aws-compute-options-lambda-pipeline \
    --query "Stacks[].Outputs[?OutputKey=='RepositoryHttpUrl'].OutputValue" \
    --output text)
- Set up your Git credentials by following the documentation. This is required to push the code into the CodeCommit repo created in the CloudFormation stack. Once the Git credentials are set up, you can use the following command to push the code and trigger the pipeline to run:
  git push codecommit master
- Get the URL to view the deployment pipeline:
  aws cloudformation \
    describe-stacks \
    --stack-name aws-compute-options-lambda-pipeline \
    --query "Stacks[].Outputs[?OutputKey=='CodePipelineUrl'].[OutputValue]" \
    --output text
The deployment pipeline can then be viewed in the AWS console.