This project is a platform for deploying frontend applications built with any of the major JavaScript frameworks and libraries. It integrates directly with GitHub repositories so that a deployment takes a single click, uses containerized builds, and serves assets from AWS S3 behind CloudFront, reducing latency by up to 90%. Deployment tasks are queued and processed asynchronously for scalability and reliability, and Kafka streams build logs in real time.
Upload-Service:
- Clones GitHub Repositories.
- Uploads source code to AWS S3.
- Triggers build tasks via Kafka (see the sketch after this overview).
Deploy-Service:
- Consumes build tasks from Kafka.
- Executes builds and deploys to AWS S3.
- Streams build logs via Kafka.
Frontend:
- Notifies users of build status.
- Routes requests through AWS CloudFront for low-latency delivery.
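As a sketch of the build-task handoff described above (assuming the kafkajs client; the broker address, topic name, and payload shape are illustrative, not taken from the repo):

import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "vercel-clone", brokers: ["localhost:9092"] });

// upload-service side: enqueue a build task (topic name is hypothetical).
export async function enqueueBuild(deploymentId: string, repoUrl: string) {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "build-tasks",
    messages: [{ key: deploymentId, value: JSON.stringify({ deploymentId, repoUrl }) }],
  });
  await producer.disconnect();
}

// deploy-service side: consume build tasks and run builds.
export async function consumeBuilds() {
  const consumer = kafka.consumer({ groupId: "deploy-service" });
  await consumer.connect();
  await consumer.subscribe({ topic: "build-tasks", fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const task = JSON.parse(message.value!.toString());
      console.log("building", task.deploymentId); // run the build and upload to S3 here
    },
  });
}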
- TypeScript
- Express.js
- Next.js
- PostgreSQL
- Prisma
- Docker
- Redis
- Kafka
- AWS services (S3, CloudFront)
npx tsc --init
Update the tsconfig.json file with the following configuration:
{
  "compilerOptions": {
    "rootDir": "./src",
    "outDir": "./dist"
  }
}
npm i -D ts-node nodemon
Add the following script to the package.json file to run the dev server:
"scripts": {
"dev": "nodemon --exec ts-node src/index.ts"
},
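For completeness, a minimal src/index.ts that the dev script can run (a placeholder using Express, which this project already depends on; the route is illustrative):

import express from "express";

const app = express();

// Simple health-check route to confirm the dev server is running.
app.get("/", (_req, res) => {
  res.json({ status: "ok" });
});

app.listen(3000, () => {
  console.log("Dev server running on http://localhost:3000");
});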
Reference: Prisma Docs - QuickStart
npx tsc --init
npm install prisma --save-dev
# Setup Prisma ORM with PostgreSQL
npx prisma init --datasource-provider postgresql
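As a sketch of what comes next: after defining a model in prisma/schema.prisma and running npx prisma migrate dev, the generated client can be used as below. The Deployment model is hypothetical, not the project's actual schema.

import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Assumes a hypothetical model in prisma/schema.prisma:
// model Deployment {
//   id        String   @id @default(uuid())
//   repoUrl   String
//   status    String   @default("queued")
//   createdAt DateTime @default(now())
// }
async function main() {
  const deployment = await prisma.deployment.create({
    data: { repoUrl: "https://github.com/user/repo" },
  });
  console.log(deployment.id, deployment.status);
}

main().finally(() => prisma.$disconnect());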
- Create the S3 bucket and extract the bucket name and region:
AWS_S3_BUCKET_NAME=vercel-clone-s3-bucket
AWS_S3_BUCKET_REGION=ap-south-1
- Create an IAM user with S3 full access to our vercel-clone-s3-bucket bucket.
The final bucket policy (granting public read access to the built assets) should look like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowWebAccess",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::vercel-clone-s3-bucket/dist/*",
        "arn:aws:s3:::vercel-clone-s3-bucket/404/*"
      ]
    }
  ]
}
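As a sketch of how upload-service can push files into this bucket (assuming the AWS SDK v3; the helper below is illustrative, not the repo's actual code):

import { readFileSync } from "fs";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Uses the bucket name and region from the env vars above;
// credentials come from the default provider chain (the IAM user).
const s3 = new S3Client({ region: process.env.AWS_S3_BUCKET_REGION });

// Hypothetical helper: upload one built file under the dist/ prefix.
async function uploadFile(localPath: string, key: string) {
  await s3.send(
    new PutObjectCommand({
      Bucket: process.env.AWS_S3_BUCKET_NAME,
      Key: key, // e.g. "dist/<deployment-id>/index.html"
      Body: readFileSync(localPath),
    })
  );
}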
- Create the S3 bucket and disable public access, since we want to serve files via CloudFront only.
- Create a CloudFront distribution with the S3 bucket as its origin.
- Since our S3 bucket is private, we need to create an OAC (Origin Access Control) so that only CloudFront can access the bucket.
- Copy the bucket policy statement that is generated for the OAC and paste it into the S3 bucket policy.
- Copy the CloudFront domain name and use it to access the files instead of the S3 bucket URL.
NOTE: The endpoint URL to access files will now change:
# old endpoint URL
AWS_S3_BASE_URL=https://vercel-clone-s3-bucket.s3.ap-south-1.amazonaws.com
# new endpoint URL
AWS_S3_BASE_URL=https://d5lj04npxpnla.cloudfront.net
Benefits of using CloudFront:
- Improved performance: CloudFront caches content at edge locations around the world, reducing the distance to the end user and improving load times.
- Scalability: CloudFront can handle traffic spikes and high load without manual intervention, making it easier to scale your application.
- Security: CloudFront provides several security features, including AWS Shield for DDoS protection, AWS WAF for protecting against common web exploits, and field-level encryption for sensitive data.
Reference: KeyCDN for Performance testing
TTFB (time to first byte): the time between the client making a request and receiving the first byte of the response.
Steps taken:
- Install and start LocalStack
# Install LocalStack
pip install localstack
# Start LocalStack in a Docker container, in detached mode
localstack start -d
# Install awslocal, which is a thin wrapper around the AWS CLI that allows you to access LocalStack
pip install awscli-local
- Create a new Local AWS Profile (called "localstack") to work with LocalStack
PS D:\Projects\Vercel Clone> aws configure --profile localstack
AWS Access Key ID [None]: test
AWS Secret Access Key [None]: test
Default region name [None]: ap-south-1
Default output format [None]:
- Check if the profile is created
PS D:\Projects\Vercel Clone> aws configure list --profile localstack
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile               localstack           manual    --profile
access_key     ****************test  shared-credentials-file
secret_key     ****************test  shared-credentials-file
    region               ap-south-1      config-file    ~/.aws/config
- Create the S3 bucket ("vercel-clone-s3-bucket") using the "localstack" profile. (awslocal targets LocalStack automatically; the plain AWS CLI needs --endpoint-url, as below.)
aws s3 mb s3://vercel-clone-s3-bucket --endpoint-url http://localhost:4566 --profile localstack
# List all buckets
aws s3 ls --endpoint-url http://localhost:4566 --profile localstack
- List all files inside a bucket prefix. This comes after "upload-service" has uploaded the files to S3:
aws s3 ls s3://vercel-clone-s3-bucket/clonedRepos/5b2abda7e18543df85f8d84814dda19f --recursive --endpoint-url http://localhost:4566 --profile localstack
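To point application code at LocalStack instead of real AWS, the SDK client just needs the local endpoint. A minimal sketch (assuming the AWS SDK v3; forcePathStyle is required because LocalStack serves buckets as path-style URLs):

import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  region: "ap-south-1",
  endpoint: "http://localhost:4566", // LocalStack edge port
  forcePathStyle: true, // http://localhost:4566/<bucket> instead of subdomain-style URLs
  credentials: { accessKeyId: "test", secretAccessKey: "test" },
});

async function listBucket() {
  const { Contents } = await s3.send(
    new ListObjectsV2Command({ Bucket: "vercel-clone-s3-bucket" })
  );
  console.log(Contents?.map((obj) => obj.Key));
}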
Steps taken:
- Navigate to GitHub Developer Settings and create a new OAuth app.
- Note that the Authorization callback URL is http://localhost:3000/api/auth/callback/github, which is the default callback URL for NextAuth.js. Check out frontend/app/api/auth/[...nextauth]/route.js & frontend/app/pages/api/auth/[...nextauth]/options.js where we register our GitHub OAuth provider.
- Get the CLIENT_ID and CLIENT_SECRET variables.
NOTE: By default NextAuth.js only asks for the user:email scope. But we want to generate an access_token that can also access private repositories in upload-service, so we override the default scope in the options.js file.
import GitHubProvider from "next-auth/providers/github";

export const options = {
  providers: [
    GitHubProvider({
      clientId: process.env.GITHUB_CLIENT_ID,
      clientSecret: process.env.GITHUB_CLIENT_SECRET,
      // ! IMP: used to set the scope of the access token
      // ! below will provide complete access to all repos
      authorization: {
        params: {
          scope: "repo user",
        },
      },
    }),
  ],
  session: {
    // ! IMP: session data is stored directly in a JWT token
    strategy: "jwt",
    maxAge: 60 * 60 * 1, // ! one hour
  },
  callbacks: {
    jwt: async ({ token, user, account }) => {
      if (account && account.access_token) {
        token.accessToken = account.access_token;
      }
      return token;
    },
  },
};
NOTE: By default, NextAuth.js won't provide the access_token in the session object, so we have to add it to the JWT in the jwt callback and then expose it on the session (sketched below). The frontend can then extract the access_token from the session object and send it to the backend (upload-service) to access private repositories.
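One way to expose the token is a session callback alongside the jwt callback in options.js, as in the sketch below (an assumption about how the repo wires this up; adapt as needed):

callbacks: {
  jwt: async ({ token, account }) => {
    if (account && account.access_token) {
      token.accessToken = account.access_token;
    }
    return token;
  },
  // Copy the access token from the JWT onto the session object
  // so the frontend can read session.accessToken.
  session: async ({ session, token }) => {
    session.accessToken = token.accessToken;
    return session;
  },
},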
Steps taken:
- Add an A record for the subdomain ec2.rohitshah1706.tech pointing to the EC2 instance's IP address (e.g., 43.205.119.247).
- Verify the A record using the nslookup command to check that the subdomain points to the correct IP address.
nslookup ec2.rohitshah1706.tech
# Output
Server: one.one.one.one
Address: 1.1.1.1
Non-authoritative answer:
Name: ec2.rohitshah1706.tech
Address: 43.205.119.247
- Update the Nginx configuration to set up a reverse proxy that forwards requests to our application on port 3000 (for this example). Edit the file at /etc/nginx/sites-available/default:
server {
    server_name ec2.rohitshah1706.tech;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
- Verify the Nginx configuration and restart the Nginx service.
# Verify the Nginx configuration
sudo nginx -t
# Restart the Nginx service
sudo systemctl restart nginx
- Install Certbot and request a new SSL certificate for the subdomain.
# Install snapd
sudo apt-get update
sudo snap install core
sudo snap refresh core
# Remove the old certbot version if installed
sudo apt-get remove certbot
# Install certbot
sudo snap install --classic certbot
# Link - create a symlink to the new version
sudo ln -s /snap/bin/certbot /usr/bin/certbot
# Running Certbot with the --nginx plugin
# will take care of reconfiguring Nginx
# and reloading the config whenever necessary
sudo certbot --nginx
- Run a basic HTTP server on port 3000 to check that the subdomain is now accessible via HTTPS (a minimal sketch follows the note below).
NOTE: Make sure the EC2 instance's security group allows traffic on port 443 for HTTPS.
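A minimal sketch of such a server using Node's built-in http module (any server listening on port 3000 works here):

import { createServer } from "node:http";

// Placeholder server: once Nginx and Certbot are configured,
// https://ec2.rohitshah1706.tech should serve this response.
createServer((_req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("HTTPS is working!\n");
}).listen(3000, () => {
  console.log("Listening on port 3000");
});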
Reference: Upload or download large files to and from Amazon S3 using an AWS SDK
Breaking down a large file into smaller pieces, or chunking, has several benefits:
- Improved reliability: If a download fails, we only need to retry the failed chunk, not the entire file.
- Parallel downloads: We can download multiple chunks simultaneously, which can significantly speed up the download on a high-bandwidth connection.
- Lower memory usage: When downloading a file in one piece, the entire file must be held in memory, which can be a problem for large files. By downloading in chunks, we only hold one chunk in memory at a time.
NOTE: Check out deploy-service/src/downloadInChunks.ts for the implementation.
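A minimal sketch of the idea (assuming the AWS SDK v3; this is not the repo's downloadInChunks.ts, and it fetches ranges sequentially for clarity; chunks could also be fetched in parallel with Promise.all):

import {
  S3Client,
  HeadObjectCommand,
  GetObjectCommand,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "ap-south-1" });
const CHUNK_SIZE = 5 * 1024 * 1024; // 5 MiB per range (an arbitrary choice)

// Download an object in CHUNK_SIZE byte ranges and assemble the result.
async function downloadInChunks(bucket: string, key: string): Promise<Buffer> {
  const head = await s3.send(new HeadObjectCommand({ Bucket: bucket, Key: key }));
  const size = head.ContentLength ?? 0;
  const chunks: Buffer[] = [];

  for (let start = 0; start < size; start += CHUNK_SIZE) {
    const end = Math.min(start + CHUNK_SIZE, size) - 1;
    const part = await s3.send(
      new GetObjectCommand({
        Bucket: bucket,
        Key: key,
        Range: `bytes=${start}-${end}`, // fetch only this byte range
      })
    );
    chunks.push(Buffer.from(await part.Body!.transformToByteArray()));
  }
  return Buffer.concat(chunks);
}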
References:
- Behind the scenes of Vercel's infrastructure: Achieving optimal scalability and performance
- Upload or download large files to and from Amazon S3 using an AWS SDK
- SSL Certificate Setup and Deployment (refer to the section above for more details)
- Apache Kafka