
Commit 0118108 (0 parents)
Author: Lionel Martin
Commit message: need tests


66 files changed, +2628 -0 lines

.gitignore

+2
@@ -0,0 +1,2 @@
.terraform.lock.hcl
.terraform/

README.md

+157
@@ -0,0 +1,157 @@
# Deploy Laravel on AWS ECS Fargate

## Key Resources & Features

| Name | Description | Required | Status |
|------|-------------|:--------:|------|
| `aws_vpc` | We create a separate Virtual Private Cloud network for each new Laravel stack (e.g. production vs staging) | yes | DONE |
| `aws_subnet` | We always set up our VPC with both private and public subnets. Private subnets aren't reachable from the internet, which is great for the database | yes | DONE |
| Multi-AZ | By default, Fargate tasks are spread across Availability Zones, so we just need to bump the number of instances for our Fargate tasks | no | DONE |
| `aws_route_table` | Required for traffic to flow in and out of our subnets | yes | DONE |
| `aws_nat_gateway` | Required for instances in private subnets to egress to the internet | no | DONE |
| `aws_ecs_cluster` | We need one cluster for each of our Laravel web frontend, workers and crons | yes | DONE |
| `aws_ecs_service` | We need one service for each of our Laravel web frontend, workers and crons | yes | DONE |
| `aws_ecs_task_definition` | We need one task definition for each of our Laravel web frontend, workers and crons | yes | DONE |
| `aws_ecr_repository` | We build our Laravel project as a Docker image, which is stored in a new Docker repository | yes | DONE |
| `aws_iam_role` | Roles needed for our compute instances to access various resources, such as S3 or ECR | yes | DONE |
| `aws_rds_cluster` | Our MySQL database | yes | DONE |
| Auto-Scaling | The ability for our clusters and services to automatically instantiate more Laravel frontends (or workers) based on CPU usage | no | DONE for frontend |
| Laravel Workers & Cron | Self-explanatory | yes | DONE |
| `Dockerfile` | Our PHP-FPM configuration | yes | DONE |
| `Dockerfile-nginx` | Our reverse proxy configuration | yes | DONE |
| `aws_elasticache_cluster` | Our Redis cluster | no | DONE |
| `aws_elasticsearch_domain` | Our managed Elasticsearch instance | no | DONE |
| `aws_sqs_queue` | Our Laravel queue | no | DONE |
| `aws_ssm_parameter` | Third-party secrets in a managed vault | no | DONE |
| `aws_cloudwatch_dashboard` | CloudWatch dashboard | no | TODO |
| `aws_s3_bucket` | Example S3 bucket | no | DONE |
| `aws_cloudfront_distribution` | A CloudFront distribution | no | Coming soon... |

## 1. Create an IAM User for Terraform in the AWS console

...with Programmatic Access only and with the following permissions (a hedged CLI alternative is sketched after the list):

- `arn:aws:iam::aws:policy/AmazonS3FullAccess`
- `arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess`
- `arn:aws:iam::aws:policy/IAMFullAccess`
- `arn:aws:iam::aws:policy/AmazonRoute53FullAccess`
- `arn:aws:iam::aws:policy/AWSCertificateManagerFullAccess`
- `arn:aws:iam::aws:policy/AmazonRDSFullAccess`
- `arn:aws:iam::aws:policy/AmazonEC2FullAccess`
- `arn:aws:iam::aws:policy/AmazonECS_FullAccess`
- `arn:aws:iam::aws:policy/CloudWatchFullAccess`
- `arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess`
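
If you prefer the command line over the console, a minimal sketch using the AWS CLI is below; the user name `terraform` is just an example, and you would repeat `attach-user-policy` once per policy ARN above:

```
# Example only: "terraform" is a hypothetical user name
aws iam create-user --user-name terraform

# Repeat for every policy ARN listed above
aws iam attach-user-policy --user-name terraform --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Generate the programmatic access keys used in the next step
aws iam create-access-key --user-name terraform
```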

## 2. Save new access keys as an AWS CLI profile

```
export PROJECT_NAME=your_project_name_here

aws --profile $PROJECT_NAME configure
```

Save the function below into your terminal to easily load an AWS profile in a terminal instance (optional):

```
awsprofile() { export AWS_ACCESS_KEY_ID=$(aws --profile $1 configure get aws_access_key_id) && export AWS_SECRET_ACCESS_KEY=$(aws --profile $1 configure get aws_secret_access_key); }

awsprofile $PROJECT_NAME
```

## 3. Create your infrastructure using Terraform

### Create and configure an S3 bucket as Terraform backend

You can use any naming convention for your S3 bucket, as long as you update the backend bucket name configuration in `providers.tf` accordingly (a sketch of that backend block follows the next step).

```
export BUCKET_NAME=$PROJECT_NAME-$(date '+%Y%m%d%H%M%S')

aws s3 mb s3://$BUCKET_NAME

aws s3api put-bucket-encryption --bucket $BUCKET_NAME --server-side-encryption-configuration '{ "Rules": [ { "ApplyServerSideEncryptionByDefault": { "SSEAlgorithm": "AES256" } } ] }'

aws s3api put-public-access-block --bucket $BUCKET_NAME --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

aws s3api put-bucket-versioning --bucket $BUCKET_NAME --versioning-configuration MFADelete=Disabled,Status=Enabled
```

### Create a DynamoDB table for Terraform state locking

```
aws dynamodb create-table --region us-east-1 --table-name terraform_locks --attribute-definitions AttributeName=LockID,AttributeType=S --key-schema AttributeName=LockID,KeyType=HASH --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
```
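
For reference, the matching backend configuration in `providers.tf` would look roughly like the sketch below. The `key` and `region` values here are assumptions to adapt to your setup; the bucket name itself is injected at `terraform init` time via `-backend-config`, and `terraform_locks` is the DynamoDB table created just above.

```
terraform {
  backend "s3" {
    # bucket is passed at init time: terraform init -backend-config="bucket=..."
    key            = "terraform.tfstate"   # assumed state key
    region         = "us-east-1"           # assumed region, match your setup
    encrypt        = true
    dynamodb_table = "terraform_locks"     # state-locking table created above
  }
}
```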

### Terraform apply

Copy the terraform folder to the root of your Laravel project.

```
cd terraform

export TF_VAR_project_name=$PROJECT_NAME

terraform init -backend-config="bucket=$BUCKET_NAME"

terraform apply
```
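
Because `stack_name` in `locals.tf` is derived from the Terraform workspace, a second environment (e.g. staging) can presumably be spun up from the same code by switching workspaces; a sketch, assuming the default workspace is your production stack:

```
terraform workspace new staging

terraform apply
```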

### Build and deploy your Docker images manually (optional - only if you don't use a CD pipeline)

```
aws ecr get-login-password --region $(terraform output region | tr -d '"') | docker login --username AWS --password-stdin $(terraform output account_id | tr -d '"').dkr.ecr.$(terraform output region | tr -d '"').amazonaws.com

docker pull li0nel/laravel-test && docker tag li0nel/laravel-test $(terraform output -json | jq '.ecr.value.laravel_repository_uri' | tr -d '"') && docker push $(terraform output -json | jq '.ecr.value.laravel_repository_uri' | tr -d '"')

docker pull li0nel/nginx && docker tag li0nel/nginx $(terraform output -json | jq '.ecr.value.nginx_repository_uri' | tr -d '"') && docker push $(terraform output -json | jq '.ecr.value.nginx_repository_uri' | tr -d '"')
```
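
The commands above push the sample `li0nel/laravel-test` and `li0nel/nginx` images. To build images from your own Laravel checkout instead, a rough sketch (assuming the repo's `Dockerfile` and `Dockerfile-nginx` sit at your project root; `jq -r` strips the quotes the same way `tr -d '"'` does above):

```
docker build -t $(terraform output -json | jq -r '.ecr.value.laravel_repository_uri') -f Dockerfile . && docker push $(terraform output -json | jq -r '.ecr.value.laravel_repository_uri')

docker build -t $(terraform output -json | jq -r '.ecr.value.nginx_repository_uri') -f Dockerfile-nginx . && docker push $(terraform output -json | jq -r '.ecr.value.nginx_repository_uri')
```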

### SSH tunnelling into the database through an EC2 bastion (optional - only to access the database manually)

Coming soon: replace with [VPN setup](https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-aws-client-vpn-to-securely-access-aws-and-on-premises-resources/) + [AWS Systems Manager Session Manager](https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/)

Launch a temporary bastion instance and note its public IP:

```
aws ec2 run-instances --image-id $(terraform output ec2_ami_id) --count 1 --instance-type t2.micro --key-name $(terraform output ec2_key_name) --security-group-ids $(terraform output ec2_security_group_id) --subnet-id $(terraform output ec2_public_subnet_id) --associate-public-ip-address | grep InstanceId

aws ec2 describe-instances --instance-ids xxxx | grep PublicIpAddress
```

Open a tunnel to the Aurora endpoint:

```
ssh ubuntu@xxxxx -i $(terraform output ec2_ssh_key_path) -L 3306:$(terraform output aurora_endpoint):3306
```

Then connect using your favourite MySQL client:

```
mysql -u$(terraform output aurora_db_username) -p$(terraform output aurora_master_password) -h 127.0.0.1 -D $(terraform output aurora_db_name)
```

Terminate the bastion instance when you're done:

```
aws ec2 terminate-instances --instance-ids xxxx
```

## Set up your Continuous Integration/Deployment pipeline

You will need the environment variables below in your CI/CD project to redeploy your ECS services.

- `AWS_ACCOUNT_ID`
- `ECR_LARAVEL_URI_*`
- `ECR_NGINX_URI_*`
- `AWS_ACCESS_KEY_ID_*`
- `AWS_SECRET_ACCESS_KEY_*`
- `AWS_REGION`
- `ECS_TASK_DEFINITION`
- `ECS_CLUSTER_NAME_*`
- `ECS_SERVICE_NAME_*`

... where * is each of `PRODUCTION` and `STAGING`
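
The pipeline definition itself is not part of this commit, but a minimal deploy step could look roughly like the sketch below, assuming the registered task definition references a mutable image tag so that forcing a new deployment picks up the freshly pushed image:

```
docker build -t $ECR_LARAVEL_URI_PRODUCTION . && docker push $ECR_LARAVEL_URI_PRODUCTION

aws ecs update-service --cluster $ECS_CLUSTER_NAME_PRODUCTION --service $ECS_SERVICE_NAME_PRODUCTION --force-new-deployment
```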
## 4. Test your infrastructure code
143+
144+
// Test Laravel is up
145+
// Test Laravel workers are running -> workers are writing in the database
146+
// Test Laravel scheduler is running -> scheduler is creating recurrent jobs picked up by workers
147+
// Test Laravel can reach S3 -> URL that post/read file
148+
// Test Laravel can reach MySQL -> select 1
149+
// Test Laravel can reach Redis -> used as cache driver
150+
// Test Laravel can reach ElasticSearch
151+
// Test Laravel can reach SQS -> used as queue driver
152+
// Test Laravel can be passed SSM secrets -> echo all env vars
153+
154+
// queue driver, db driver, cache driver, file driver
155+
// store jobs in db?
156+
// page that dumps env or phpinfo
157+
// page that posts a random file to S3
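
None of these tests exist yet (hence the commit message). As a placeholder, a basic liveness check could be as simple as the sketch below, assuming a root-level Terraform output (here called `alb_hostname`, not shown in this commit) exposes the load balancer hostname:

```
# Hypothetical smoke test; "alb_hostname" is an assumed output name
curl -fsS http://$(terraform output alb_hostname | tr -d '"')/ && echo "Laravel is up"
```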

data.tf

+3
@@ -0,0 +1,3 @@
data "aws_availability_zones" "available" {}

data "aws_caller_identity" "current" {}

locals.tf

+4
@@ -0,0 +1,4 @@
locals {
  stack_name = terraform.workspace == "default" ? var.project_name : join("-", [var.project_name, replace(terraform.workspace, "/[^[:alnum:]]/", "")])
  # hostname = var.subdomain == "" ? var.domain : join(".", [var.subdomain, var.domain])
}
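
`var.project_name` is fed from the `TF_VAR_project_name` export in the README. The root variable declaration is not shown in this excerpt; it presumably looks something like this minimal sketch:

```
# Assumed shape of the root variables file (not part of this excerpt)
variable "project_name" {
  type = string
}
```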

main.tf

+106
@@ -0,0 +1,106 @@
// TODO any other subdomain should redirect to APEX
# module "route53" {
#   source       = "./modules/route53"
#   domain       = var.domain
#   hostname     = local.hostname
#   alb_hostname = module.ecs.ecs_alb_hostname
#   alb_zone_id  = module.ecs.ecs_alb_zone_id

#   providers = {
#     aws = "aws.us-east-1"
#   }
# }

# module "acm" {
#   source         = "./modules/acm"
#   hostname       = local.hostname
#   hosted_zone_id = module.route53.hosted_zone_id
# }

# module "cloudfront" {
#   source   = "./modules/cloudfront"
#   hostname = local.hostname
# }

module "vpc" {
  source        = "./modules/vpc"
  stack_name    = local.stack_name
  b_nat_gateway = false
}

module "iam" {
  source     = "./modules/iam"
  stack_name = local.stack_name
}

module "aurora" {
  source     = "./modules/aurora"
  stack_name = local.stack_name
  subnet_ids = module.vpc.private_subnets.*.id
  vpc_id     = module.vpc.vpc.id
}

module "ecr" {
  source               = "./modules/ecr"
  stack_name           = replace(local.stack_name, "/[^a-zA-Z0-9]+/", "")
  ci_pipeline_user_arn = module.iam.ci_pipeline_arn
  ecs_role             = module.iam.ecs_role
}

module "s3" {
  source     = "./modules/s3"
  stack_name = local.stack_name
}

module "elasticache" {
  source     = "./modules/elasticache"
  stack_name = local.stack_name
}

module "elasticsearch" {
  source             = "./modules/elasticsearch"
  stack_name         = local.stack_name
  vpc_id             = module.vpc.vpc.id
  private_subnet_ids = module.vpc.private_subnets.*.id
}

module "sqs" {
  source             = "./modules/sqs"
  stack_name         = local.stack_name
  vpc_id             = module.vpc.vpc.id
  private_subnet_ids = module.vpc.private_subnets.*.id
  security_group_ids = [module.ecs.aws_security_group.id]
}

module "ssm" {
  source     = "./modules/ssm"
  stack_name = local.stack_name
}

module "cloudwatch" {
  source     = "./modules/cloudwatch"
  stack_name = local.stack_name
}

module "ecs" {
  source             = "./modules/ecs"
  stack_name         = local.stack_name
  vpc_id             = module.vpc.vpc.id
  public_subnet_ids  = module.vpc.public_subnets.*.id
  private_subnet_ids = module.vpc.private_subnets.*.id
  role               = module.iam.ecs_role
  # certificate_arn  = module.acm.certificate_arn
  # hostname         = local.hostname

  aurora_endpoint            = module.aurora.aurora_cluster.endpoint
  aurora_port                = module.aurora.aurora_cluster.port
  aurora_db_name             = module.aurora.aurora_cluster.database_name
  aurora_db_username         = module.aurora.aurora_cluster.master_username
  aurora_master_password     = module.aurora.rds_master_password.result
  ecr_laravel_repository_uri = module.ecr.laravel_repository_uri
  ecr_nginx_repository_uri   = module.ecr.nginx_repository_uri
  s3_bucket_name             = module.s3.bucket.id
  s3_bucket_arn              = module.s3.bucket.arn

  // TODO Redis params
}

modules/acm/main.tf

+17
@@ -0,0 +1,17 @@
resource "aws_acm_certificate" "certificate" {
  domain_name       = var.hostname
  validation_method = "DNS"
}

resource "aws_route53_record" "certificate_validation" {
  name    = aws_acm_certificate.certificate.domain_validation_options[0].resource_record_name
  type    = aws_acm_certificate.certificate.domain_validation_options[0].resource_record_type
  zone_id = var.hosted_zone_id
  records = [aws_acm_certificate.certificate.domain_validation_options[0].resource_record_value]
  ttl     = 5
}

resource "aws_acm_certificate_validation" "certificate_validation" {
  certificate_arn         = aws_acm_certificate.certificate.arn
  validation_record_fqdns = [aws_route53_record.certificate_validation.fqdn]
}

modules/acm/outputs.tf

+4
@@ -0,0 +1,4 @@
output "certificate_arn" {
  depends_on = [aws_acm_certificate_validation.certificate_validation]
  value      = aws_acm_certificate.certificate.arn
}

modules/acm/variables.tf

+7
@@ -0,0 +1,7 @@
variable "hostname" {
  type = string
}

variable "hosted_zone_id" {
  type = string
}

modules/aurora/data.tf

+7
@@ -0,0 +1,7 @@
data "aws_region" "current" {}

data "aws_availability_zones" "available" {}

data "aws_vpc" "vpc" {
  id = var.vpc_id
}

modules/aurora/main.tf

+55
@@ -0,0 +1,55 @@
resource "random_password" "rds_master_password" {
  length           = 16
  special          = true
  override_special = "!#$%&*()-_=+[]{}<>:?"
}

resource "random_string" "final_snapshot_id" {
  length  = 12
  special = false
}

resource "aws_db_subnet_group" "aurora_subnet_group" {
  name       = "aurora_db_subnet_group_${var.stack_name}"
  subnet_ids = var.subnet_ids
}

resource "aws_security_group" "db_security_group" {
  vpc_id = data.aws_vpc.vpc.id

  ingress {
    from_port   = var.port
    to_port     = var.port
    protocol    = "TCP"
    cidr_blocks = [data.aws_vpc.vpc.cidr_block]
  }
}

resource "aws_rds_cluster" "aurora_cluster" {
  engine                       = "aurora-mysql"
  engine_version               = "5.7.mysql_aurora.2.07.2"
  database_name                = replace(var.stack_name, "/[^a-zA-Z0-9]+/", "")
  master_username              = "aurora"
  master_password              = random_password.rds_master_password.result
  backup_retention_period      = 35
  preferred_backup_window      = "02:00-03:00"
  preferred_maintenance_window = "wed:03:00-wed:04:00"
  db_subnet_group_name         = aws_db_subnet_group.aurora_subnet_group.name
  final_snapshot_identifier    = join("-", [var.stack_name, random_string.final_snapshot_id.result, "0"])
  port                         = var.port

  vpc_security_group_ids = [
    aws_security_group.db_security_group.id,
  ]
}

resource "aws_rds_cluster_instance" "aurora_cluster_instance" {
  count                = 1
  cluster_identifier   = aws_rds_cluster.aurora_cluster.id
  instance_class       = "db.t2.small"
  db_subnet_group_name = aws_db_subnet_group.aurora_subnet_group.name
  publicly_accessible  = false
  engine               = "aurora-mysql"
  engine_version       = "5.7.mysql_aurora.2.07.2"
}