A collection of scripts I use to deploy ECS services and S3 static websites
The AWS CLI and jq are required in order to use aws-deploy-scripts.
This repository exists for my sanity in managing multiple AWS accounts on the same machine. I have made the occasional mistake of pushing resources to the wrong AWS account. I also want to manage these shared scripts in one place rather than updating each repository individually.
- aws-ecs-build
- aws-ecs-deploy
- aws-parameter-store
- aws-s3-deploy
- aws-s3-secrets
- Add aws-deploy-scripts to your project:
  yarn add aws-deploy-scripts --dev
- Install the latest AWS CLI command line tools:
  pip install --upgrade awscli
- Install jq for parsing JSON responses from the AWS CLI:
  brew install jq
- Next, set up named profiles for each of your AWS accounts (see the example after this list).
- (Optional) Sometimes it's nice to test commands in a REPL environment. Amazon's aws-shell is great for that:
  pip install --upgrade aws-shell
- (Optional) Use an AWS credential management library to switch the [default] AWS credentials in your ~/.aws/credentials. Personally, I'm using switcher:
  git clone https://github.com/ivarrian/switcher
  cd switcher
  npm install -g
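Named profiles live in ~/.aws/credentials and ~/.aws/config. As a rough example (the profile name mysite is just a placeholder), create a profile and then confirm which account it points at before deploying; jq makes it easy to pull the account id out of the response:

  # Create a named profile (prompts for access key, secret key, and region)
  aws configure --profile mysite

  # Verify which AWS account the profile resolves to
  aws sts get-caller-identity --profile mysite | jq -r '.Account'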
Build a tagged Docker image with aws-ecs-build:
aws-ecs-build --environment staging --image nginx --dockerfile services/nginx/Dockerfile --prefix company-identifier --account-id 12345678 --noCache
There are three ways to keep track of build versions when using aws-ecs-build:

- Don't supply a build tag, in which case we determine the next build number based on the last build pushed to ECR (see the sketch after this list).
- Create a .buildversion file in your project home directory (or wherever you call aws-ecs-build from) and we'll keep track of the current build version in that file:
  # .buildversion
  staging/nginx:237
  production/nginx:233
- Specify a tag when calling aws-ecs-build:
  aws-ecs-build --environment staging --image nginx --dockerfile services/nginx/Dockerfile --tag 101 --prefix company-identifier --account-id 12345678 --noCache
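For reference, looking up the last build pushed to ECR can be done with the AWS CLI and jq. This is only a rough sketch of the idea, not the script's exact implementation, and the repository name staging/nginx is an assumption about the naming convention:

  # Find the tag of the most recently pushed image in an ECR repository
  aws ecr describe-images \
    --repository-name staging/nginx \
    --profile mysite \
    | jq -r '.imageDetails | sort_by(.imagePushedAt) | last | .imageTags[0]'

The next build number is then that integer tag plus one.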
The build tags we currently support are basic auto-incrementing integers. The integer tag numbers allow us to quickly deploy or roll back to a working build version from the command line.

You can also push your build to ECR by adding --push to the aws-ecs-build command:
aws-ecs-build --environment staging --image nginx --dockerfile services/nginx/Dockerfile --prefix company-identifier --account-id 12345678 --noCache --push
aws-s3-deploy
Back in the day I used gulp and grunt scripts to sync assets with S3 for a number of static websites I manage.
Now, in addition to syncing with S3, I use CloudFront to serve the assets. Because of this I have changed my deployment process to use the AWS CLI in combination with a few bash scripts, which I'm slowly converting to node scripts.
See this blog post on how to set up and deploy your React app to Amazon Web Services S3 and CloudFront.
Example gatsby and create-react-app projects can be found in the examples directory.
- Install aws-deploy-scripts in your dev dependencies. I've switched to using yarn for a few reasons; one of my favorites is that I don't have to specify run like I do with npm: npm run deploy is now yarn deploy.
  yarn add aws-deploy-scripts --dev
- Add a "deploy" script to your package.json in your main repository with the AWS account-id, S3 bucket name, CloudFront distribution-id, AWS CLI profile name, and the build path of the minified assets:

"scripts": {
  "start": "gatsby develop",
  "build": "gatsby build",
+ "postbuild": "find public -name '*.map' -type f -delete",
+ "deploy": "aws-s3-deploy --account-id 14234234 --bucket www.mysite.com --path public --profile mysite",
  "test": "echo \"Error: no test specified\" && exit 1"
},
Slack notifications are available. Add --slack-webhook-url [environment_variable_name] to the deploy command to enable Slack notifications to the #general channel.
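For example, assuming the webhook URL is stored in an environment variable named SLACK_WEBHOOK_URL (the variable name is just a placeholder), the deploy script might look like:

"deploy": "aws-s3-deploy --account-id 14234234 --bucket www.mysite.com --path public --profile mysite --slack-webhook-url SLACK_WEBHOOK_URL",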
If I'm deploying a create-react-app single page app, then I add the following "prebuild" script to remove previous builds:

"scripts": {
  "start": "react-scripts start",
+ "prebuild": "rm -fR build/*",
  "build": "react-scripts build",
+ "postbuild": "find build -name '*.map' -type f -delete",
+ "deploy": "aws-s3-deploy --account-id 14234234 --bucket www.mysite.com --path build --profile mysite",
  "test": "react-scripts test --env=jsdom",
  "eject": "react-scripts eject"
},
In most cases I remove the source maps using the "postbuild" script. I know source maps are only requested when the dev tools are opened, but with some static sites I don't want the source code visible. It's easy enough to build and deploy with the source maps if you need to track down a bug.

CloudFront caching can be invalidated by specifying the distribution id as an additional argument to the deploy command. The S3 deploy script will automatically create an invalidation if the CloudFront distribution id is supplied with --distribution-id [distribution id], as soon as all of the minified assets are synced with S3:

"deploy": "aws-s3-deploy --account-id 14234234 --bucket www.mysite.com --distribution-id E245GA45256 --path build --profile mysite",
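For reference, the invalidation the script creates is roughly equivalent to running the AWS CLI directly and invalidating every path (the distribution id below is the same placeholder used above):

  aws cloudfront create-invalidation --distribution-id E245GA45256 --paths "/*" --profile mysite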
Security is important and you should not store any sensitive values in your repository. I personally feel that the AWS account id, S3 bucket name, and CloudFront distribution id are more helpful to keep in the repository when collaborating with a team than requiring each person to set up environment variables. I'd be interested to get your feedback.
There are lots of ways to handle permissions, but here is how I set things up.
I set up git to use the develop branch as the main branch. All pull requests merge into the develop branch. I specify the development or staging S3 bucket and AWS account id in the package.json on the develop branch. Then I create the following IAM policy:
IAM Policy Name: DevelopmentS3[SiteName]Deploy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::mysite.dev"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::mysite.dev/*"
]
},
{
"Effect": "Allow",
"Action": [
"cloudfront:CreateInvalidation",
"cloudfront:GetDistribution",
"cloudfront:GetStreamingDistribution",
"cloudfront:GetDistributionConfig",
"cloudfront:GetInvalidation",
"cloudfront:ListInvalidations",
"cloudfront:ListStreamingDistributions",
"cloudfront:ListDistributions"
],
"Resource": "*"
}
]
}
I can create a developers group and assign this policy to the group, or assign the policy to users individually. This allows developers to push their code to development or staging for feedback or QA. It's also nice to have a site for clients to view your progress before you make things live!
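If you prefer the command line over the console, creating the policy and attaching it to a group can be done with the AWS CLI. This is just a sketch; the policy file name, group name, policy name, and account id are placeholders:

  aws iam create-policy --policy-name DevelopmentS3MySiteDeploy --policy-document file://dev-deploy-policy.json --profile mysite
  aws iam create-group --group-name developers --profile mysite
  aws iam attach-group-policy --group-name developers --policy-arn arn:aws:iam::123456789012:policy/DevelopmentS3MySiteDeploy --profile mysite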
NOTE: At the time of writing I don't believe that CloudFront supports specifying the resource. When it does support this, then I'll update the example policy.
Finally, I use the master git branch for production. I update the package.json to use the production AWS account id, S3 bucket name, and build path. I also make sure that I have an IAM Policy for production deploys and that the policy is assigned to the right groups and users.
IAM Policy Name: ProductionS3[SiteName]Deploy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::mysite.com"
]
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::mysite.com/*"
]
},
{
"Effect": "Allow",
"Action": [
"cloudfront:CreateInvalidation",
"cloudfront:GetDistribution",
"cloudfront:GetStreamingDistribution",
"cloudfront:GetDistributionConfig",
"cloudfront:GetInvalidation",
"cloudfront:ListInvalidations",
"cloudfront:ListStreamingDistributions",
"cloudfront:ListDistributions"
],
"Resource": "*"
}
]
}
Deploying is pretty straightforward:
yarn build
yarn deploy
yarn build && yarn deploy
NOTE: You can always add your own script to either run tests or lint your code before you deploy.
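For example, a "predeploy" script runs automatically before yarn deploy, so tests and linting could gate the deploy like this (the lint script is a placeholder and assumes your project defines one):

"predeploy": "yarn test && yarn lint",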
The aws-s3-secrets script concept comes from How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker.
Once set up, run yarn env-vars to send your secrets file to S3.
- Create an S3 bucket of your choice. My preference is to do [project]-[environment]-secrets.
- Attach either the bucket-policy.json or vpc-only-bucket-policy.json to the newly created bucket (a rough sketch of a VPC-only policy appears at the end of this section).
- Install aws-deploy-scripts in your project, if not already installed:
  yarn add aws-deploy-scripts --dev
- Add env-vars and get-env-vars scripts to your package.json:

"scripts": {
  "start": "node index.js",
  "env-vars": "aws-s3-secrets --action put --environment staging --bucket blog-staging-secrets --profile default",
  "get-env-vars": "aws-s3-secrets --action get --environment staging --bucket blog-staging-secrets --profile default"
},

- Send your local secrets file to S3 (NOTE: if you used the VPC policy on your S3 bucket, then you'll need to connect to the VPN before you can send your secrets file):
  yarn env-vars
- Testing the file on S3 requires you to download the file:
  yarn get-env-vars
I've added a prefix of s3. to the file being downloaded so that it doesn't overwrite your original file.
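For reference, a VPC-only bucket policy typically denies access from anywhere except a specific VPC endpoint. This is only a rough sketch of that idea, not the contents of vpc-only-bucket-policy.json; the bucket name and VPC endpoint id are placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::blog-staging-secrets",
        "arn:aws:s3:::blog-staging-secrets/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}

Note that a blanket deny like this also blocks console and CLI access from outside the VPC, so double-check the endpoint id before applying it.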