Problem Statement 8 || Solution
1.1.1: Create a GitLab/GitHub private repository to keep the Packer code used to create AMIs for the application.
1.1.2: Clone the repository, create a directory v1_packer, and set up the following files in it:
mkdir v1_packer
touch v1_packer/main.json
touch v1_packer/variables.json
touch v1_packer/index.html
touch v1_packer/packages.sh
touch v1_packer/V1AMI.jenkinsfile
1.1.3: Create v1_packer/main.json. This file contains the main Packer configuration.
{
  "builders": [
    {
      "type": "amazon-ebs",
      "profile": "packer",
      "region": "{{user `region`}}",
      "instance_type": "{{user `instance_type`}}",
      "ssh_username": "{{user `ssh_username`}}",
      "ami_name": "{{user `ami_name`}}-{{isotime | clean_resource_name}}",
      "source_ami": "{{user `source_ami`}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "./packages.sh"
    },
    {
      "type": "file",
      "source": "./index.html",
      "destination": "/tmp/index.html"
    },
    {
      "type": "shell",
      "inline": [
        "sudo mv /tmp/index.html /var/www/html/index.html",
        "sleep 10",
        "sudo systemctl restart nginx"
      ]
    }
  ],
  "post-processors": [
    {
      "type": "manifest",
      "output": "output.json"
    }
  ]
}
- builders: This section is responsible for creating machines and generating images from them for various platforms.
- provisioners: This section uses built-in and third-party software to install and configure the machine image after booting.
- post-processors: This section runs after the image is built by the builder and provisioned by the provisioners.
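The builder's ami_name appends `{{isotime | clean_resource_name}}`, so each build produces a unique, AWS-safe name. A rough shell sketch of the resulting name (illustrative only; the exact timestamp format Packer emits is an assumption here):

```shell
# Approximate the "{{user `ami_name`}}-{{isotime | clean_resource_name}}" expansion.
# clean_resource_name replaces characters AWS rejects in AMI names (such as ":") with "-".
AMI_NAME="Nginx-AMI-V_1.0.0"
TS=$(date -u +%Y-%m-%dT%H-%M-%SZ)   # e.g. 2024-01-01T00-00-00Z
echo "${AMI_NAME}-${TS}"
```

This is why repeated builds with the same variables.json never collide on the AMI name.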
1.1.4: Create v1_packer/variables.json. This file stores the parameters that are used as arguments by the main Packer template.
{
  "source_ami": "ami-0747bdcabd34c712a",
  "instance_type": "t2.micro",
  "ssh_username": "ubuntu",
  "ami_name": "Nginx-AMI-V_1.0.0",
  "region": "us-east-1"
}
1.1.5: Create v1_packer/index.html. This is the index page that Nginx will serve on the provisioned servers.
<h1>Hello from V1</h1>
1.1.6: Create v1_packer/packages.sh. This script is used by the shell provisioner in the main Packer template.
#!/bin/bash
sudo apt update
sudo apt upgrade -y
sudo apt install nginx -y
1.1.7: Create v1_packer/V1AMI.jenkinsfile. This file will be used to set up the pipeline that creates the V1 AMI.
pipeline {
    agent any
    stages {
        stage('Initializing the project') {
            steps {
                echo 'Welcome to Opstree Labs'
            }
        }
        stage('Cloning code') {
            steps {
                git branch: 'main', url: 'https://gitlab.com/ot-external-training/globallogic/trainers/packer.git', credentialsId: 'globallogicv5'
            }
        }
        stage('Validating Packer Code') {
            steps {
                sh '''
                cd v1_packer
                /usr/local/bin/packer validate --var-file=variables.json main.json
                '''
            }
        }
        stage('Building Packer AMI for V1') {
            steps {
                sh '''
                cd v1_packer
                /usr/local/bin/packer build --var-file=variables.json main.json
                '''
            }
        }
        stage('Printing AMI ID') {
            steps {
                sh '''
                cd v1_packer
                AMI_ID=$(jq -r '.builds[-1].artifact_id' output.json | cut -d ":" -f2)
                echo $AMI_ID
                '''
            }
        }
    }
    post {
        success {
            archiveArtifacts artifacts: 'v1_packer/output.json', followSymlinks: false
        }
    }
}
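The 'Printing AMI ID' stage relies on output.json written by the manifest post-processor. A sketch of how the jq/cut pipeline extracts the AMI ID (the manifest below is a hypothetical sample; the AMI ID in it is made up):

```shell
# Hypothetical manifest, in the shape the "manifest" post-processor writes.
cat > /tmp/output.json <<'EOF'
{
  "builds": [
    {
      "name": "amazon-ebs",
      "builder_type": "amazon-ebs",
      "artifact_id": "us-east-1:ami-0123456789abcdef0"
    }
  ]
}
EOF

# artifact_id has the form "region:ami-id"; keep only the AMI ID after the colon.
AMI_ID=$(jq -r '.builds[-1].artifact_id' /tmp/output.json | cut -d ":" -f2)
echo "$AMI_ID"   # → ami-0123456789abcdef0
```

Using `.builds[-1]` picks the most recent build, so the stage keeps working even if the manifest accumulates multiple runs.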
1.1.8: Push the changes to the remote repository (the pipeline clones the main branch, so push to main):
git add .
git commit -m "Add V1 AMI creation packer code"
git push origin main
1.1.9: Create and configure the Jenkins job


1.1.10: Execute the Jenkins job

Note: The setup of the AMI for V2 is similar to that of Step 1.
1.2.1: Create a directory v2_packer and set up the following files in it:
mkdir v2_packer
touch v2_packer/main.json
touch v2_packer/variables.json
touch v2_packer/index.html
touch v2_packer/packages.sh
touch v2_packer/V2AMI.jenkinsfile
1.2.2: Create v2_packer/main.json. This file contains the main Packer configuration.
{
  "builders": [
    {
      "type": "amazon-ebs",
      "profile": "packer",
      "region": "{{user `region`}}",
      "instance_type": "{{user `instance_type`}}",
      "ssh_username": "{{user `ssh_username`}}",
      "ami_name": "{{user `ami_name`}}-{{isotime | clean_resource_name}}",
      "source_ami": "{{user `source_ami`}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "./packages.sh"
    },
    {
      "type": "file",
      "source": "./index.html",
      "destination": "/tmp/index.html"
    },
    {
      "type": "shell",
      "inline": [
        "sudo mv /tmp/index.html /var/www/html/index.html",
        "sleep 10",
        "sudo systemctl restart nginx"
      ]
    }
  ],
  "post-processors": [
    {
      "type": "manifest",
      "output": "output.json"
    }
  ]
}
1.2.3: Create v2_packer/variables.json. This file stores the parameters that are used as arguments by the main Packer template.
{
  "source_ami": "ami-0747bdcabd34c712a",
  "instance_type": "t2.micro",
  "ssh_username": "ubuntu",
  "ami_name": "Nginx-AMI-V_2.0.0",
  "region": "us-east-1"
}
1.2.4: Create v2_packer/index.html. This is the index page that Nginx will serve on the provisioned servers.
<h1>Hello from V2</h1>
1.2.5: Create v2_packer/packages.sh. This script is used by the shell provisioner in the main Packer template.
#!/bin/bash
sudo apt update
sudo apt upgrade -y
sudo apt install nginx -y
1.2.6: Create v2_packer/V2AMI.jenkinsfile. This file will be used to set up the pipeline that creates the V2 AMI.
pipeline {
    agent any
    stages {
        stage('Initializing the project') {
            steps {
                echo 'Welcome to Opstree Labs'
            }
        }
        stage('Cloning code') {
            steps {
                git branch: 'main', url: 'https://gitlab.com/ot-external-training/globallogic/trainers/packer.git', credentialsId: 'globallogicv5'
            }
        }
        stage('Validating Packer Code') {
            steps {
                sh '''
                cd v2_packer
                /usr/local/bin/packer validate --var-file=variables.json main.json
                '''
            }
        }
        stage('Building Packer AMI for V2') {
            steps {
                sh '''
                cd v2_packer
                /usr/local/bin/packer build --var-file=variables.json main.json
                '''
            }
        }
        stage('Printing AMI ID') {
            steps {
                sh '''
                cd v2_packer
                AMI_ID=$(jq -r '.builds[-1].artifact_id' output.json | cut -d ":" -f2)
                echo $AMI_ID
                '''
            }
        }
    }
    post {
        success {
            archiveArtifacts artifacts: 'v2_packer/output.json', followSymlinks: false
        }
    }
}
1.2.7: Push the changes to the remote repository (again, to the main branch the pipeline clones):
git add .
git commit -m "Add V2 AMI creation packer code"
git push origin main
1.2.8: Create and configure the Jenkins job


1.2.9: Execute the Jenkins job

2.1: Create a GitLab/GitHub private repository to keep the Terraform code that deploys the application infrastructure.
2.2: Clone the repository, create a directory tf (the pipelines below run Terraform from it), and set up the following files:
mkdir tf
touch RecreateDeployment.Jenkinsfile
touch tf/main.tf
touch tf/output.tf
touch tf/vars.tf
2.3: Create tf/main.tf to set up the load balancer, auto-scaling group, and launch template.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  profile = "packer"
  region  = "us-east-1"
}

resource "aws_launch_template" "web" {
  name_prefix            = "web-"
  image_id               = var.ami_id
  instance_type          = "t2.micro"
  key_name               = "devops_training"
  vpc_security_group_ids = var.security_groups

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_elb" "web_elb" {
  name            = "web-elb"
  security_groups = var.security_groups
  subnets         = var.pub_subnets

  cross_zone_load_balancing = true

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    interval            = 30
    target              = "HTTP:80/"
  }

  listener {
    lb_port           = 80
    lb_protocol       = "http"
    instance_port     = 80
    instance_protocol = "http"
  }
}

resource "aws_autoscaling_group" "web" {
  name              = "web-asg"
  min_size          = var.min_capacity
  desired_capacity  = var.desired_capacity
  max_size          = var.max_size
  health_check_type = "ELB"

  load_balancers = [
    aws_elb.web_elb.id
  ]

  launch_template {
    id      = aws_launch_template.web.id
    version = aws_launch_template.web.latest_version
  }

  enabled_metrics = [
    "GroupMinSize",
    "GroupMaxSize",
    "GroupDesiredCapacity",
    "GroupInServiceInstances",
    "GroupTotalInstances"
  ]

  metrics_granularity = "1Minute"
  vpc_zone_identifier = var.pub_subnets

  # Required to redeploy without an outage.
  lifecycle {
    create_before_destroy = true
  }

  tag {
    key                 = "Name"
    value               = "web"
    propagate_at_launch = true
  }
}
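The health_check block above controls how quickly a new instance starts receiving traffic: with interval = 30 and healthy_threshold = 2, an instance needs two consecutive passing checks before the ELB marks it InService. A quick back-of-envelope calculation:

```shell
# Rough lower bound on the time a freshly launched instance takes to go
# InService: health-check interval (seconds) times consecutive passes required.
interval=30
healthy_threshold=2
echo "time to InService: $(( interval * healthy_threshold ))s"
```

This matters for the deployment stages later: capacity changes only take effect for traffic once new instances pass this window.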
2.4: Create tf/vars.tf to keep the parameters used as arguments in main.tf.
# Batch1VPC4
variable "vpc_id" {
  default = "vpc-043ca7a7a43fe1be1"
}

variable "pub_subnets" {
  default = ["subnet-0cbc5f46b61a777ed"]
}

variable "ami_id" {
  type    = string
  default = "ami-077b78992957fe05b"
}

variable "security_groups" {
  default = ["sg-07d030c682f619bc4"]
}

variable "desired_capacity" {
  default = 2
}

variable "min_capacity" {
  default = 1
}

variable "max_size" {
  default = 3
}
2.5: Create tf/output.tf to output the ELB DNS name.
output "elb_dns_name" {
  value = aws_elb.web_elb.dns_name
}
2.6: Create RecreateDeployment.Jenkinsfile. This file will be used to create the pipeline that executes the Terraform code.
pipeline {
    agent any
    stages {
        stage('Initializing the project') {
            steps {
                echo 'Welcome to Opstree Labs'
            }
        }
        stage('Wait for user input for recreate deployment') {
            input {
                message "Should we continue?"
                ok "Yes, we should."
            }
            steps {
                sh 'echo "Moving ahead"'
            }
        }
        stage('Deploying Basic Infra with V1 VMs') {
            steps {
                sh '''
                cd tf
                terraform init
                terraform validate
                terraform plan -var='ami_id=ami-077b78992957fe05b' -out myplan
                terraform apply --auto-approve myplan
                terraform output elb_dns_name
                '''
            }
        }
        stage('Bringing down V1 VMs') {
            input {
                message "Shall we bring down the V1 VMs?"
                ok "Yes, we should."
            }
            steps {
                sh '''
                cd tf
                terraform init
                terraform validate
                terraform plan -var='ami_id=ami-077b78992957fe05b' -var='min_capacity=0' -var='desired_capacity=0' -out myplan
                terraform apply --auto-approve myplan
                '''
            }
        }
        stage('Bringing up V2 VMs') {
            input {
                message "Shall we bring up the V2 VMs?"
                ok "Yes, we should."
            }
            steps {
                sh '''
                cd tf
                terraform init
                terraform validate
                terraform plan -var='ami_id=ami-05e003760ad016513' -out myplan
                terraform apply --auto-approve myplan
                '''
            }
        }
        stage('Terminate') {
            input {
                message "Terminate setup?"
                ok "Yes, we should."
            }
            steps {
                sh '''
                cd tf
                terraform destroy --auto-approve
                '''
            }
        }
    }
}
2.7: Push the code to the remote repository:
git add .
git commit -m "Add ELB, ASG, and launch template Terraform code"
git push origin main
2.8: Create and configure the Jenkins job using RecreateDeployment.Jenkinsfile


2.9: Execute the Jenkins job and validate the recreate deployment
- Execution started


- 2 instances running V1 have been provisioned



- Bringing down the V1 application servers; downtime will be experienced during this stage


- Provisioning 2 V2 servers; the version is updated with downtime, which lasts until the V2 servers are provisioned
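One way to observe the downtime window during the recreate deployment is to poll the ELB and classify the response. A minimal sketch (the watch_version helper is hypothetical; in real use you would feed it live output, e.g. watch_version "$(curl -s "http://$ELB_DNS/")" with ELB_DNS taken from terraform output elb_dns_name):

```shell
# Hypothetical smoke-test helper: classify a response body from the load balancer.
watch_version() {
  case "$1" in
    *"Hello from V1"*) echo "serving V1" ;;
    *"Hello from V2"*) echo "serving V2" ;;
    *)                 echo "no response (downtime)" ;;
  esac
}

# Simulated responses at three points in the recreate deployment:
watch_version "<h1>Hello from V1</h1>"   # → serving V1
watch_version ""                         # → no response (downtime)
watch_version "<h1>Hello from V2</h1>"   # → serving V2
```

Running such a poll in a loop makes the difference between recreate and rolling deployments visible: only the former ever prints the downtime line.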




Note: For this, we will reuse the repository from above.
3.1: Create RollingDeployment.Jenkinsfile to update the application version without downtime.
pipeline {
    agent any
    stages {
        stage('Initializing the project') {
            steps {
                echo 'Welcome to Opstree Labs'
            }
        }
        stage('Wait for user input for rolling deployment') {
            input {
                message "Should we continue?"
                ok "Yes, we should."
            }
            steps {
                sh 'echo "Moving ahead"'
            }
        }
        stage('Deploying Basic Infra with V1 VMs') {
            steps {
                sh '''
                cd tf
                terraform init
                terraform validate
                terraform plan -var='ami_id=ami-077b78992957fe05b' -out myplan
                terraform apply --auto-approve myplan
                terraform output elb_dns_name
                '''
            }
        }
        stage('Step 1 of Rolling Deployment') {
            input {
                message "Shall we bring up the first VM with V2?"
                ok "Yes, we should."
            }
            steps {
                sh '''
                cd tf
                terraform init
                terraform validate
                terraform plan -var='ami_id=ami-05e003760ad016513' -var='desired_capacity=3' -out myplan
                terraform apply --auto-approve myplan
                terraform plan -var='ami_id=ami-05e003760ad016513' -var='desired_capacity=2' -out myplan
                terraform apply --auto-approve myplan
                '''
            }
        }
        stage('Step 2 of Rolling Deployment') {
            input {
                message "Shall we bring up the second VM with V2?"
                ok "Yes, we should."
            }
            steps {
                sh '''
                cd tf
                terraform init
                terraform validate
                terraform plan -var='ami_id=ami-05e003760ad016513' -var='desired_capacity=3' -out myplan
                terraform apply --auto-approve myplan
                terraform plan -var='ami_id=ami-05e003760ad016513' -var='desired_capacity=2' -out myplan
                terraform apply --auto-approve myplan
                '''
            }
        }
        stage('Terminate') {
            input {
                message "Terminate setup?"
                ok "Yes, we should."
            }
            steps {
                sh '''
                cd tf
                terraform destroy --auto-approve
                '''
            }
        }
    }
}
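Each rolling step works because scale-out (desired_capacity=3) launches a new instance from the launch template's latest_version, which now points at the V2 AMI, and scale-in (desired_capacity=2) retires an older instance. A toy simulation of the fleet across the two steps (this assumes the ASG terminates the oldest instance on scale-in, which is what the default termination policy usually does; real behavior also depends on AZ balancing):

```shell
# Simulate the fleet during the rolling deployment. "V1"/"V2" mark which AMI
# version each instance in the ASG is running; oldest instance is leftmost.
instances="V1 V1"                             # initial fleet
scale_out() { instances="$instances V2"; }    # new instance uses latest template version
scale_in()  { instances="${instances#* }"; }  # oldest instance is terminated

scale_out; echo "after step 1 scale-out: $instances"   # → V1 V1 V2
scale_in;  echo "after step 1 scale-in:  $instances"   # → V1 V2
scale_out; echo "after step 2 scale-out: $instances"   # → V1 V2 V2
scale_in;  echo "after step 2 scale-in:  $instances"   # → V2 V2
```

At every point at least two instances are serving traffic, which is why this pipeline updates the version without downtime, unlike the recreate deployment.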
3.2: Push the code to the remote repository:
git add .
git commit -m "Add rolling deployment Jenkinsfile"
git push origin main
3.3: Create and configure the Jenkins job using RollingDeployment.Jenkinsfile


3.4: Execute the Jenkins job and validate the rolling deployment
- Execution Started


- 2 instances running application V1 have been provisioned


- Adding an instance with V2 application


- Removing an instance running the V1 application; the version is updated without downtime



