Bento Dashboard Deployment Guide

Prerequisites

On your local machine: Git, Docker, the AWS CLI, and Node.js/npm (all are used by the commands below)

An AWS EC2 instance running Amazon Linux 2

1. Create the Bento Node Express Docker image

Clone repo

In your local machine's terminal, within the Bento root folder (created during the Bento pipeline deployment process), or any other folder:

git clone https://github.com/bento-video/bento-dashboard-backend.git && cd bento-dashboard-backend

Update the Dockerfile

Within the Dockerfile (found in the bento-dashboard-backend folder), the following environment variables require values:

  1. ENV START_BUCKET
  2. ENV END_BUCKET

Enter the following command to view all of your bucket names:

aws s3api list-buckets --query "Buckets[].Name"

There will be a bucket with bento-prod-videouploadbucket in its name. Use this bucket's full name for the value of ENV START_BUCKET.

There will be a bucket with bento-prod-processedvideosbucket in its name. Use the full bucket name for the value of ENV END_BUCKET.
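If your account has many buckets, a JMESPath filter can narrow the output; this is just an optional convenience and assumes the bento-prod prefix shown above appears in the bucket names:

aws s3api list-buckets --query "Buckets[?contains(Name, 'bento-prod')].Name" --output text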

  1. ENV RECORD_UPLOAD_LAMBDA
  2. ENV EXECUTOR_LAMBDA

These variables reference the ARNs of the recordUpload and executor Lambdas. The following commands list the properties of these Lambdas:

aws lambda get-function --function-name recordUpload
aws lambda get-function --function-name executor
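If you only want the ARN values rather than the full property listing, an optional --query filter returns them directly:

aws lambda get-function --function-name recordUpload --query "Configuration.FunctionArn" --output text
aws lambda get-function --function-name executor --query "Configuration.FunctionArn" --output text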
  1. ENV REGION: your AWS region

  2. ENV AWS_ACCESS_KEY_ID: your AWS access key

  3. ENV AWS_SECRET_ACCESS_KEY: your AWS secret access key
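For reference, a filled-in set of ENV lines in the Dockerfile might look roughly like the sketch below; every value shown is a placeholder, so substitute the bucket names, ARNs, region, and keys you gathered above:

# all values below are example placeholders
ENV START_BUCKET=bento-prod-videouploadbucket-example123
ENV END_BUCKET=bento-prod-processedvideosbucket-example123
ENV RECORD_UPLOAD_LAMBDA=arn:aws:lambda:us-east-1:123456789012:function:recordUpload
ENV EXECUTOR_LAMBDA=arn:aws:lambda:us-east-1:123456789012:function:executor
ENV REGION=us-east-1
ENV AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEYID
ENV AWS_SECRET_ACCESS_KEY=exampleSecretAccessKey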

Build and tag a Docker image

docker build -t yourhubusername/bentobackend .

Push Docker image to Docker Hub

IMPORTANT: immediately after pushing this image, log in to Docker Hub and make this repository private, as the image contains your AWS keys.
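Pushing requires an active Docker Hub session on your local machine; if you have not already logged in, do so first with your own Docker Hub username:

docker login --username=yourhubusername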

docker push yourhubusername/bentobackend

2. Install Docker on Amazon Linux 2

Connect to your EC2 instance in a terminal and enter the following commands:

sudo yum update -y &&
sudo amazon-linux-extras install docker &&
sudo service docker start &&
sudo usermod -a -G docker ec2-user &&
sudo chmod 666 /var/run/docker.sock 
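Before moving on, you can confirm the Docker daemon is up and reachable by the ec2-user:

docker info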

3. Create and run the Bento Node Express Docker container

Log in to Docker Hub (docker login --username=yourhubusername) within your EC2 terminal and enter the following command:

docker run --rm -d -v ${PWD}:/app -v /app/node_modules -v /app/package.json -p 3001:3001 yourhubusername/bentobackend
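To verify the container is running, you can list containers and probe the Express port locally; the root path used below is an assumption, so adjust it to whatever route the backend actually serves:

docker ps
curl http://localhost:3001/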

4. Modify EC2 Security Group settings

Within the AWS web console, modify the inbound rules of your EC2 instance's security group:

Type: Custom TCP

Protocol: TCP

Port range: 3001

Source: My IP (or any IP addresses you want to authorize to interact with your Bento pipeline)
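If you prefer the CLI, an equivalent inbound rule can be added with a command along these lines; the security group ID and CIDR below are placeholders for your own values:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3001 --cidr 203.0.113.25/32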

5. Build the Bento Dashboard

Clone repo

In your local machine's terminal, within the Bento root folder, or any other folder:

git clone https://github.com/bento-video/bento-dashboard.git && cd bento-dashboard

Update .env.production file

The following variable references the public endpoint of your EC2 instance:

REACT_APP_API_ENDPOINT

Change the hostname to your EC2 instance's public IP or DNS name; both values are returned in the output of the following command:

aws ec2 describe-instances
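To list only the public DNS names, an optional --query filter narrows that output:

aws ec2 describe-instances --query "Reservations[].Instances[].PublicDnsName" --output text

The resulting .env.production entry would then look roughly like the line below, where the hostname is a placeholder and port 3001 matches the Express port opened in step 4:

REACT_APP_API_ENDPOINT=http://ec2-203-0-113-25.compute-1.amazonaws.com:3001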

Build the React app

npm install && npm run build

Serve React build files on S3

Within the AWS S3 web console:

  • create a new S3 bucket

  • remove the Block all public access selection

  • move all the files (and folders) within bento-dashboard/build to this bucket:

aws s3 sync build/ s3://your-bucket-name --acl public-read

  • navigate to the Properties tab and select Static website hosting

  • select Use this bucket to host a website, Index document: index.html

  • copy the Endpoint; this is the URL you will use to access the Bento Dashboard front end from your browser

  • add a policy (Permissions -> Bucket Policy) to this bucket to enable GET requests to the objects (files) in this bucket. Note: this will allow anyone with the above endpoint to access these static React files; however, access to the content of your pipeline is still secured by the source IP address you configured for the Express app port on EC2 in step 4:

{
  "Version": "2008-10-17",
  "Statement": [{
    "Sid": "AllowPublicRead",
    "Effect": "Allow",
    "Principal": { "AWS": "*" },
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::yourbucketname/*"]
  }]
}
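If you would rather script these last two steps, roughly equivalent AWS CLI commands exist; save the policy above to a local file (for example policy.json) and substitute your own bucket name:

aws s3 website s3://yourbucketname/ --index-document index.html
aws s3api put-bucket-policy --bucket yourbucketname --policy file://policy.json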