# Deploy Laravel on AWS ECS Fargate

## Key Resources & Features

| Name | Description | Required | Status |
|------|-------------|:--------:|------|
| `aws_vpc` | We create a separate Virtual Private Cloud network for each Laravel stack (e.g. production vs. staging) | yes | DONE |
| `aws_subnet` | We always set up our VPC with both private and public subnets. Private subnets are not reachable from the internet, which is ideal for the database | yes | DONE |
| Multi-AZ | By default, Fargate tasks are spread across Availability Zones, so we only need to increase the number of instances of our Fargate tasks | no | DONE |
| `aws_route_table` | Required for traffic to flow in and out of our subnets | yes | DONE |
| `aws_nat_gateway` | Required for instances in private subnets to egress to the internet | no | DONE |
| `aws_ecs_cluster` | We need one cluster for each of our Laravel web frontend, workers and crons | yes | DONE |
| `aws_ecs_service` | We need one service for each of our Laravel web frontend, workers and crons | yes | DONE |
| `aws_ecs_task_definition` | We need one task definition for each of our Laravel web frontend, workers and crons | yes | DONE |
| `aws_ecr_repository` | We will build our Laravel project as a Docker image, which will be stored in a new Docker repository | yes | DONE |
| `aws_iam_role` | Roles needed for our compute instances to access various resources, such as S3 or ECR | yes | DONE |
| `aws_rds_cluster` | Our MySQL database | yes | DONE |
| Auto-Scaling | The ability for our clusters and services to automatically launch more Laravel frontend (or worker) tasks based on CPU usage | no | DONE for frontend |
| Laravel Workers & Cron | Self-explanatory | yes | DONE |
| `Dockerfile` | Our PHP-FPM configuration | yes | DONE |
| `Dockerfile-nginx` | Our reverse proxy configuration | yes | DONE |
| `aws_elasticache_cluster` | Our Redis cluster | no | DONE |
| `aws_elasticsearch_domain` | Our managed Elasticsearch instance | no | DONE |
| `aws_sqs_queue` | Our Laravel queue | no | DONE |
| `aws_ssm_parameter` | Third-party secrets in a managed vault | no | DONE |
| `aws_cloudwatch_dashboard` | CloudWatch dashboard | no | TODO |
| `aws_s3_bucket` | Example S3 bucket | no | DONE |
| `aws_cloudfront_distribution` | A CloudFront distribution | no | Coming soon... |
| 29 | + |
| 30 | +## 1. Create an IAM User for Terraform in the AWS console |
| 31 | +...with Programmatic Access only and with the following permissions: |
| 32 | + |
| 33 | +- `arn:aws:iam::aws:policy/AmazonS3FullAccess` |
| 34 | +- `arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess` |
| 35 | +- `arn:aws:iam::aws:policy/IAMFullAccess` |
| 36 | +- `arn:aws:iam::aws:policy/AmazonRoute53FullAccess` |
| 37 | +- `arn:aws:iam::aws:policy/AWSCertificateManagerFullAccess` |
| 38 | +- `arn:aws:iam::aws:policy/AmazonRDSFullAccess` |
| 39 | +- `arn:aws:iam::aws:policy/AmazonEC2FullAccess` |
| 40 | +- `arn:aws:iam::aws:policy/AmazonECS_FullAccess` |
| 41 | +- `arn:aws:iam::aws:policy/CloudWatchFullAccess` |
| 42 | +- `arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess` |
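
If you prefer the CLI over the console, a rough sketch of creating this user and attaching the policies could look like the following (the user name `terraform` is only an example):

```
# Sketch only: creates the Terraform IAM user, attaches the policies listed
# above, and generates an access key. The "terraform" user name is an example.
aws iam create-user --user-name terraform

for policy in AmazonS3FullAccess AmazonDynamoDBFullAccess IAMFullAccess AmazonRoute53FullAccess AWSCertificateManagerFullAccess AmazonRDSFullAccess AmazonEC2FullAccess AmazonECS_FullAccess CloudWatchFullAccess AmazonEC2ContainerRegistryFullAccess; do
  aws iam attach-user-policy --user-name terraform --policy-arn arn:aws:iam::aws:policy/$policy
done

# Prints the AccessKeyId / SecretAccessKey to use in the next step
aws iam create-access-key --user-name terraform
```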
| 43 | + |
| 44 | +## 2. Save new access keys as an AWS CLI profile |
| 45 | +``` |
| 46 | +export PROJECT_NAME=your_project_name_here |
| 47 | +
|
| 48 | +aws --profile $PROJECT_NAME configure |
| 49 | +``` |

Optionally, save the function below into your shell profile (or paste it into your terminal) to quickly load an AWS profile into the current shell session:
```
awsprofile() {
  export AWS_ACCESS_KEY_ID=$(aws --profile $1 configure get aws_access_key_id) &&
  export AWS_SECRET_ACCESS_KEY=$(aws --profile $1 configure get aws_secret_access_key)
}

awsprofile $PROJECT_NAME
```
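
Optionally, check that the credentials are loaded correctly:
```
# Should print the account ID and ARN of the Terraform IAM user
aws sts get-caller-identity
```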

## 3. Create your infrastructure using Terraform

### Create and configure an S3 bucket as Terraform backend
You can use any naming convention for your S3 bucket, as long as you update the backend bucket name configuration in `providers.tf` accordingly.
| 62 | + |
| 63 | +``` |
| 64 | +export BUCKET_NAME=$PROJECT_NAME-$(date '+%Y%m%d%H%M%S') |
| 65 | +
|
| 66 | +aws s3 mb s3://$BUCKET_NAME |
| 67 | +
|
| 68 | +aws s3api put-bucket-encryption --bucket $BUCKET_NAME --server-side-encryption-configuration '{ "Rules": [ { "ApplyServerSideEncryptionByDefault": { "SSEAlgorithm": "AES256" } } ] }' |
| 69 | +
|
| 70 | +aws s3api put-public-access-block --bucket $BUCKET_NAME --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true |
| 71 | +
|
| 72 | +aws s3api put-bucket-versioning --bucket $BUCKET_NAME --versioning-configuration MFADelete=Disabled,Status=Enabled |
| 73 | +``` |
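
If you want to double-check the bucket settings before using it as a backend, these read-only calls can help:
```
# Optional sanity checks on the newly created state bucket
aws s3api get-bucket-versioning --bucket $BUCKET_NAME
aws s3api get-bucket-encryption --bucket $BUCKET_NAME
aws s3api get-public-access-block --bucket $BUCKET_NAME
```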

### Create a DynamoDB table for Terraform state locking
```
aws dynamodb create-table --region us-east-1 --table-name terraform_locks --attribute-definitions AttributeName=LockID,AttributeType=S --key-schema AttributeName=LockID,KeyType=HASH --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
```
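
Table creation takes a few seconds; you can optionally wait for it to become active before running `terraform init`:
```
# Blocks until the state-locking table exists and is ACTIVE
aws dynamodb wait table-exists --region us-east-1 --table-name terraform_locks
```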

### Terraform apply
Copy the `terraform` folder to the root of your Laravel project.

```
cd terraform

export TF_VAR_project_name=$PROJECT_NAME

terraform init -backend-config="bucket=$BUCKET_NAME"

terraform apply
```
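
The commands in the next steps read values from the Terraform outputs; you can list all of them at any time with:
```
# Shows every output of the stack (region, account_id, ECR repository URIs, etc.)
terraform output
```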

### Build and deploy your Docker images manually (optional - only if you don't use a CD pipeline)

```
aws ecr get-login-password --region $(terraform output region | tr -d '"') | docker login --username AWS --password-stdin $(terraform output account_id | tr -d '"').dkr.ecr.$(terraform output region | tr -d '"').amazonaws.com

docker pull li0nel/laravel-test && docker tag li0nel/laravel-test $(terraform output -json | jq '.ecr.value.laravel_repository_uri' | tr -d '"') && docker push $(terraform output -json | jq '.ecr.value.laravel_repository_uri' | tr -d '"')

docker pull li0nel/nginx && docker tag li0nel/nginx $(terraform output -json | jq '.ecr.value.nginx_repository_uri' | tr -d '"') && docker push $(terraform output -json | jq '.ecr.value.nginx_repository_uri' | tr -d '"')
```
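
To confirm the images actually landed in ECR, a quick check (assuming the repository URI has the usual `<account>.dkr.ecr.<region>.amazonaws.com/<name>` shape) is:
```
# Derive the repository name from its URI, then list the pushed image tags
REPO_NAME=$(terraform output -json | jq -r '.ecr.value.laravel_repository_uri' | cut -d/ -f2-)
aws ecr describe-images --region $(terraform output region | tr -d '"') --repository-name $REPO_NAME --query 'imageDetails[].imageTags'
```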

### SSH tunnelling into the database through the EC2 bastion (optional - only to access the database manually)

Coming soon: replace with a [VPN setup](https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-aws-client-vpn-to-securely-access-aws-and-on-premises-resources/) + [AWS Systems Manager Session Manager](https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/)

```
aws ec2 run-instances --image-id $(terraform output ec2_ami_id) --count 1 --instance-type t2.micro --key-name $(terraform output ec2_key_name) --security-group-ids $(terraform output ec2_security_group_id) --subnet-id $(terraform output ec2_public_subnet_id) --associate-public-ip-address | grep InstanceId

aws ec2 describe-instances --instance-ids xxxx | grep PublicIpAddress
```
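
Instead of grepping and copying values by hand, you can optionally capture the bastion's public IP into a variable (replace `xxxx` with the instance ID returned above):
```
# Optional: store the bastion's public IP for the SSH command below
BASTION_IP=$(aws ec2 describe-instances --instance-ids xxxx --query 'Reservations[0].Instances[0].PublicIpAddress' --output text)
echo $BASTION_IP
```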

```
ssh ubuntu@xxxxx -i $(terraform output ec2_ssh_key_path) -L 3306:$(terraform output aurora_endpoint):3306
```

Then connect using your favourite MySQL client:
```
mysql -u$(terraform output aurora_db_username) -p$(terraform output aurora_master_password) -h 127.0.0.1 -D $(terraform output aurora_db_name)
```

When you are done, terminate the bastion instance:
```
aws ec2 terminate-instances --instance-ids xxxx
```

## Set up your Continuous Integration/Deployment pipeline

You will need the environment variables below in your CI/CD project to redeploy your ECS services.

- `AWS_ACCOUNT_ID`
- `ECR_LARAVEL_URI_*`
- `ECR_NGINX_URI_*`
- `AWS_ACCESS_KEY_ID_*`
- `AWS_SECRET_ACCESS_KEY_*`
- `AWS_REGION`
- `ECS_TASK_DEFINITION`
- `ECS_CLUSTER_NAME_*`
- `ECS_SERVICE_NAME_*`

... where * is each of `PRODUCTION` and `STAGING`. A sketch of a deploy step using these variables is shown below.
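
As an illustration only (the exact syntax depends on your CI provider), a production deploy step might build and push the images, then force a new deployment of the ECS service:

```
# Hedged sketch of a deploy step for the PRODUCTION environment; variable names
# follow the list above, everything else is an assumption about your pipeline.
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

docker build -t $ECR_LARAVEL_URI_PRODUCTION . && docker push $ECR_LARAVEL_URI_PRODUCTION
docker build -t $ECR_NGINX_URI_PRODUCTION -f Dockerfile-nginx . && docker push $ECR_NGINX_URI_PRODUCTION

# Restart the service so it pulls the freshly pushed images
aws ecs update-service --cluster $ECS_CLUSTER_NAME_PRODUCTION --service $ECS_SERVICE_NAME_PRODUCTION --force-new-deployment
```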

## 4. Test your infrastructure code

TODO:

- Test Laravel is up
- Test Laravel workers are running -> workers are writing to the database
- Test the Laravel scheduler is running -> the scheduler creates recurring jobs picked up by workers
- Test Laravel can reach S3 -> URL that posts/reads a file
- Test Laravel can reach MySQL -> `select 1`
- Test Laravel can reach Redis -> used as the cache driver
- Test Laravel can reach Elasticsearch
- Test Laravel can reach SQS -> used as the queue driver
- Test Laravel can be passed SSM secrets -> echo all env vars

Notes:

- queue driver, db driver, cache driver, file driver
- store jobs in the db?
- page that dumps env or phpinfo
- page that posts a random file to S3
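
As a first, very small smoke test, something like the following could confirm the frontend responds (the `alb_dns_name` output name is hypothetical; adjust it to whatever your stack actually exposes):

```
# Hypothetical smoke test: the `alb_dns_name` Terraform output is an assumption
curl -fsS "http://$(terraform output alb_dns_name | tr -d '"')/" > /dev/null && echo "Laravel frontend is up"
```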