- Add Docker and docker-compose configuration to your project (examples are given in the repo)
- Run `docker-compose -f docker/development/docker-compose.yml up` to check that everything works on your local machine
- Sign up and sign in to the AWS console
- Go to the IAM service and create a new policy for full ECR access, name it `full access to ecr`
- In IAM, choose the Users tab and click `Add users`
- Enter a username, choose `Access key - Programmatic access`
- From policies, choose the policy you just created (`full access to ecr`)
- Review and create the new user
- !Important! Save the credentials for this user
- Create a new password for this user via the `Security credentials` tab on the user's detail page
- Copy the sign-in link from `Console sign-in link:` on this page
- Sign out as the root user and log in to the console using the IAM user credentials
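The same IAM setup can also be scripted with the AWS CLI. This is a minimal sketch run under root/administrator credentials; the policy and user names below are examples, not values required by this guide:

```bash
# Create an ECR full-access policy, a deploy user, attach the policy and
# generate programmatic credentials. Names are examples.
POLICY_ARN=$(aws iam create-policy --policy-name full-access-to-ecr \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"ecr:*","Resource":"*"}]}' \
  --query 'Policy.Arn' --output text)

aws iam create-user --user-name ecs-deployer
aws iam attach-user-policy --user-name ecs-deployer --policy-arn "$POLICY_ARN"

# Prints AccessKeyId and SecretAccessKey: save them, they are shown only once
aws iam create-access-key --user-name ecs-deployer
```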
- Go to ECR (Elastic Container Registry), click `Create Repository`
- Enter the repository name and click `Create`
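If you prefer the CLI, the repository can be created with a single command; `<app_name>` is a placeholder:

```bash
# Create the repository and print its URI; use this URI as <ecr_link> later
aws ecr create-repository --repository-name <app_name> \
  --query 'repository.repositoryUri' --output text
```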
- Go to EC2, find `Security groups`
- Click `Create security group`
- Enter a name and description (sg-load-balancer)
- Add an inbound rule: type -> HTTP, source -> Anywhere IPv4
- Create
- Create another security group for the app
- Enter a name and description (sg-rails)
- Add inbound rules:
  - type -> SSH, source -> Anywhere IPv4
  - type -> All TCP, source -> <security group created for the load balancer>
- Create
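If you want to script this step, here is a minimal AWS CLI sketch; the group names mirror the console steps above, and the default-VPC lookup is an assumption:

```bash
# Assumption: the cluster lives in the default VPC
VPC_ID=$(aws ec2 describe-vpcs --filters Name=isDefault,Values=true \
  --query 'Vpcs[0].VpcId' --output text)

# Security group for the load balancer: allow HTTP from anywhere (IPv4)
LB_SG=$(aws ec2 create-security-group --group-name sg-load-balancer \
  --description "ALB security group" --vpc-id "$VPC_ID" --query GroupId --output text)
aws ec2 authorize-security-group-ingress --group-id "$LB_SG" \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

# Security group for the Rails instances: SSH from anywhere, all TCP from the ALB group
APP_SG=$(aws ec2 create-security-group --group-name sg-rails \
  --description "Rails app security group" --vpc-id "$VPC_ID" --query GroupId --output text)
aws ec2 authorize-security-group-ingress --group-id "$APP_SG" \
  --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id "$APP_SG" \
  --ip-permissions "IpProtocol=tcp,FromPort=0,ToPort=65535,UserIdGroupPairs=[{GroupId=$LB_SG}]"
```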
- Go to EC2, find `Key pairs`
- Create a new key pair (fill in its name and save the .pem file on your local machine)
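The CLI equivalent, with an example key name:

```bash
# Create the key pair and save the private key locally
aws ec2 create-key-pair --key-name ecs-cluster-key \
  --query 'KeyMaterial' --output text > ecs-cluster-key.pem
chmod 400 ecs-cluster-key.pem
```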
- Go to EC2, find `Load balancers`, click `Create new load balancer`
- Choose the `Application load balancer` type, click next
- Fill in its name
- Scheme -> Internet-facing, IP address type -> IPv4
- In `Network mapping` choose at least 2 subnets; remember which subnets you've chosen, you will need them later
- In `Security groups` select the security group for the load balancer that was created earlier
- `Listeners and routing` -> create a new target group:
  - Target type -> Instances
  - Fill in its name
  - Protocol -> HTTP, Port -> 80
  - For health checks, enter the route used for health checking (just add a route that returns a 200 status in your app)
  - Create
- Choose the just-created target group
- Click `Create load balancer`
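A CLI sketch of the same setup; `$LB_SG` comes from the security-group step, the subnet ids, names and the `/health_check` route are assumptions:

```bash
VPC_ID=$(aws ec2 describe-vpcs --filters Name=isDefault,Values=true \
  --query 'Vpcs[0].VpcId' --output text)

# Target group for the Rails instances, health-checked on /health_check
TG_ARN=$(aws elbv2 create-target-group --name rails-tg --protocol HTTP --port 80 \
  --vpc-id "$VPC_ID" --target-type instance --health-check-path /health_check \
  --query 'TargetGroups[0].TargetGroupArn' --output text)

# Internet-facing application load balancer in two subnets
LB_ARN=$(aws elbv2 create-load-balancer --name rails-alb --type application \
  --scheme internet-facing --ip-address-type ipv4 \
  --subnets subnet-aaaa1111 subnet-bbbb2222 --security-groups "$LB_SG" \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)

# HTTP listener that forwards to the target group
aws elbv2 create-listener --load-balancer-arn "$LB_ARN" --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn="$TG_ARN"
```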
- Run `docker build . -f docker/<env_name>/Dockerfile` in the root of your project
- Copy the id of the just-built image from the `docker images` output
- Run `docker tag <image_id> <ecr_link>:<tag>` (ecr_link - the link of the repository created in ECR, tag - for example, `staging`)
- Run `aws configure` and fill in the access key and secret key from the saved IAM credentials (this step only needs to be done once)
- Run `aws ecr get-login --no-include-email --region=<your_region>`, then copy-paste the output and run that command
- Run `docker push <ecr_link>:<tag>`
- DONE
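Note that `aws ecr get-login` only exists in AWS CLI v1; on v2 the login step is `aws ecr get-login-password`. A consolidated sketch, where the region, account id, repo name and tag are placeholders:

```bash
REGION=us-east-1
ECR_LINK=<account_id>.dkr.ecr.$REGION.amazonaws.com/<repo_name>
TAG=staging

# Build, tag and push in one go (tagging at build time avoids the docker tag step)
docker build . -f docker/staging/Dockerfile -t "$ECR_LINK:$TAG"
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "${ECR_LINK%%/*}"
docker push "$ECR_LINK:$TAG"
```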
- Go to ECS, click `Task Definitions`
- Create a new task definition
- Enter a name, type -> EC2
- Scroll down and add volumes:
  - Name -> redis, Volume type -> Docker, Driver -> local, Scope -> Shared, Auto-provisioning enabled -> true
  - Name -> postgres, Volume type -> Docker, Driver -> local, Scope -> Shared, Auto-provisioning enabled -> true
  - Name -> public, Volume type -> Bind Mount
- Enter the task memory and task CPU (for example 512x512)

Next we need to add containers (rails-server, db-host, redis-db, sidekiq):
redis-db:

- Image -> image name from Docker Hub
- Ports -> 6379:6379
- Healthcheck: command -> CMD-SHELL,redis-cli -h localhost ping, interval -> 30, timeout -> 5, retries -> 3
- Storage and logging: mount points -> source_volume -> redis, container_path -> /data
- Log configuration -> true
- Create
db-host:

- Image -> image name from Docker Hub
- Ports -> 5432:5432
- Healthcheck: command -> CMD-SHELL,pg_isready -U postgres, interval -> 30, timeout -> 5, retries -> 3
- Env variables: POSTGRES_USER -> postgres, POSTGRES_PASSWORD -> postgres
- Storage and logging: mount points -> source_volume -> postgres, container_path -> /var/lib/postgresql/data
- Log configuration -> true
- Create
rails-server:

- Image -> <ecr_link>:<tag>
- Ports -> 3000:3000
- Healthcheck: command -> CMD-SHELL,curl -f http://localhost:3000/health_check || exit 1, interval -> 30, timeout -> 5, retries -> 3
- Environment: entry point -> docker/staging/entrypoint.sh, command -> bundle,exec,puma,-C,config/puma.rb,-p,3000
- Env variables (all the env variables you need, such as RAILS_ENV and AWS keys), for example: DB_HOST -> db-host (created earlier), REDIS_URL -> redis://redis-db:6379/1 (created earlier)
- Startup dependency ordering: db-host -> HEALTHY, redis-db -> HEALTHY
- Network settings: links -> db-host,redis-db (the app connects to Redis and the DB through these links)
- Storage and logging: mount points -> source_volume -> public, container_path -> /home/www/<app_name>/public (path to the public folder of the app, check the Dockerfile)
- Log configuration -> true
- Create
sidekiq:

- Image -> <ecr_link>:<tag>
- Ports -> skip
- Healthcheck: command -> CMD-SHELL,ps ax | grep -v grep | grep sidekiq || exit 1, interval -> 30, timeout -> 5, retries -> 3
- Environment: command -> bundle,exec,sidekiq,-C,config/sidekiq.yml
- Env variables -> copy from rails-server
- Startup dependency ordering: db-host -> HEALTHY, redis-db -> HEALTHY, rails-server -> HEALTHY
- Network settings: links -> db-host,redis-db (we connect to Redis and the DB from our app)
- Storage and logging: mount points -> source_volume -> public, container_path -> /home/www/<app_name>/public (path to the public folder of the app, check the Dockerfile)
- Log configuration -> true
- Create

Create Task Definition -> DONE
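For reference, the same task definition can be registered from a JSON file with the CLI. This is a partial sketch showing only the redis-db and rails-server containers under the assumptions above (bridge networking, 512 task memory); db-host and sidekiq follow the same pattern, and all angle-bracket values are placeholders:

```bash
cat > task-definition.json <<'JSON'
{
  "family": "rails-app",
  "requiresCompatibilities": ["EC2"],
  "memory": "512",
  "cpu": "512",
  "volumes": [
    {
      "name": "redis",
      "dockerVolumeConfiguration": {
        "scope": "shared",
        "autoprovision": true,
        "driver": "local"
      }
    },
    { "name": "public", "host": {} }
  ],
  "containerDefinitions": [
    {
      "name": "redis-db",
      "image": "redis:6-alpine",
      "memory": 128,
      "portMappings": [{ "containerPort": 6379, "hostPort": 6379 }],
      "healthCheck": {
        "command": ["CMD-SHELL", "redis-cli -h localhost ping"],
        "interval": 30,
        "timeout": 5,
        "retries": 3
      },
      "mountPoints": [{ "sourceVolume": "redis", "containerPath": "/data" }]
    },
    {
      "name": "rails-server",
      "image": "<ecr_link>:<tag>",
      "memory": 256,
      "portMappings": [{ "containerPort": 3000, "hostPort": 3000 }],
      "entryPoint": ["docker/staging/entrypoint.sh"],
      "command": ["bundle", "exec", "puma", "-C", "config/puma.rb", "-p", "3000"],
      "environment": [
        { "name": "DB_HOST", "value": "db-host" },
        { "name": "REDIS_URL", "value": "redis://redis-db:6379/1" }
      ],
      "dependsOn": [{ "containerName": "redis-db", "condition": "HEALTHY" }],
      "links": ["redis-db"],
      "mountPoints": [{ "sourceVolume": "public", "containerPath": "/home/www/<app_name>/public" }]
    }
  ]
}
JSON
aws ecs register-task-definition --cli-input-json file://task-definition.json
```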
- Go to ECS, find `Clusters` -> `Create new cluster`
- Type -> EC2 + Networking
- Enter a cluster name
- Provisioning model -> On-Demand Instance
- Instance type -> t2.micro
- Number of instances -> 1 for now
- Key pair -> select the key pair that was created earlier
- Networking: VPC -> select the default VPC, subnets -> select the 2 subnets that were selected for the load balancer
- Security group -> select the security group created for the Rails server
- CloudWatch Container Insights -> true
- Create
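A partial CLI equivalent; note that the console "EC2 + Networking" wizard also provisions the EC2 instances and an Auto Scaling group, which this call alone does not (the cluster name is an example):

```bash
# Creates an empty cluster with Container Insights enabled
aws ecs create-cluster --cluster-name rails-cluster \
  --settings name=containerInsights,value=enabled
```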
- Go to ECS, open Clusters, click on the cluster that was created
- Go to Services, click Create
- Launch type -> EC2
- Task definition -> the task definition that was created earlier
- Enter a name
- Number of tasks -> at least 1 for now
- Min healthy percent -> 0
- Max healthy percent -> 100
- Load balancer type -> Application Load Balancer
- Health check grace period -> 100
- Load balancer name -> select the load balancer created earlier
- Click `Add to load balancer`
- Set up auto scaling (optional)
- Create Service
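The same service created via the CLI; the cluster, service and task-definition names plus `$TG_ARN` (the target group ARN from the load-balancer step) are assumptions:

```bash
aws ecs create-service \
  --cluster rails-cluster \
  --service-name rails-service \
  --task-definition rails-app \
  --desired-count 1 \
  --launch-type EC2 \
  --deployment-configuration minimumHealthyPercent=0,maximumPercent=100 \
  --health-check-grace-period-seconds 100 \
  --load-balancers "targetGroupArn=$TG_ARN,containerName=rails-server,containerPort=3000"
```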
- Wait until the task inside the service is running with all containers healthy
- Check the container logs if the task fails
- Go to EC2
- Find the `Load Balancers` tab, click it
- Go to your load balancer
- Scroll down and find the `Listeners` tab, click it
- Click on the target group
- If everything is fine you will see Healthy -> 1, Unhealthy -> 0
- Go back to the load balancer
- In the Description tab find the `DNS name`
- Visit the link
- Dance and have fun, because all the work is done and your app is running
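The same check can be done from the terminal; `$TG_ARN` is the target group ARN from earlier and the load balancer name is an example:

```bash
# Target health should report "healthy" once the service task has started
aws elbv2 describe-target-health --target-group-arn "$TG_ARN" \
  --query 'TargetHealthDescriptions[].TargetHealth.State'

# Resolve the ALB DNS name and hit the health-check route
DNS_NAME=$(aws elbv2 describe-load-balancers --names rails-alb \
  --query 'LoadBalancers[0].DNSName' --output text)
curl -i "http://$DNS_NAME/health_check"
```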