Thank you for wanting to contribute to Peblio. In this document, you will find instructions for the following:
- Setup of the project on your system
- Running it in your local environment
- Running tests!
- The git workflow that we follow
- Deploying
- Credstash and Environment Variables
- Fork this repository.
- Clone the forked repository and cd into it.
- Create a Python 3.8.11 environment, activate it, and install the Python dependencies:

```
pip install -r requirements.txt --use-deprecated=backtrack-on-build-failures
```

- Make sure that you have Node.js v8.7.0 or higher installed, then install the Node dependencies:

```
npm install
cd client && npm install
cd ../server && npm install
```

- Install MongoDB and make sure it is running.
  - For Mac OS X with Homebrew: `brew install mongodb`, then `brew services start mongodb`.
  - For Windows and Linux: see the MongoDB installation docs.
- Get the AWS IAM credentials for the `peblio-local-development` user from Mathura and place them in your `~/.aws/credentials` file under a profile called `peblio`. You should also create a `peblio` profile in `~/.aws/config` with the line `region=us-east-1`. The easiest way to do this is to run `aws configure --profile peblio` (see the sketch after this list). See the AWS docs for more details.
- Get the `Peblios.pem` SSH key from Mathura and place it in your `~/.ssh` directory. Then, if you haven't already, generate an SSH key for your GitHub account. Finally, add the following lines to your `~/.bashrc` or `~/.bash_profile` startup script:

```
ssh-add ~/.ssh/Peblios.pem
ssh-add ~/.ssh/id_rsa
```

  Alternatively, if you're using OS X, instead of modifying your `~/.bashrc`, you can configure SSH to use the OS X Keychain to automatically make your private keys available to SSH.
- (Optional) Install the React Developer Tools and Redux Developer Tools Chrome extensions.
- In case you see an error about not being able to load libffi:
  - `brew install libffi`
  - Find where libffi.8.dylib is and run `cp libffi.8.dylib libffi.7.dylib`.
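After running `aws configure --profile peblio`, the two AWS files should look roughly like this (the key values below are placeholders):

```
# ~/.aws/credentials
[peblio]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>

# ~/.aws/config
[profile peblio]
region=us-east-1
```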
- `cd client && npm start`
- In another terminal session, `cd server && npm start`
- Navigate to http://localhost:8080 in your browser.

Nodemon has been configured to do live reload on the server side. After any changes to the server, wait for the files to be re-interpreted and type `rs` in the console where the server runs.
We have Storybook installed for developing and designing basic components. To start Storybook:

```
cd client && npm run storybook
```

Your browser should automatically open the Storybook UI.

Currently, the Storybook files are present at `client/src/components/**/*.stories.js`. These can be used as templates to build further components.
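For reference, a minimal story might look like the following; the Button component, its import path, and the `storiesOf` API (used by older Storybook versions) are assumptions, not the project's actual code:

```js
// Button.stories.js - a hedged sketch, not a real project file
import React from 'react';
import { storiesOf } from '@storybook/react';
import { action } from '@storybook/addon-actions';
import Button from './Button';

storiesOf('Button', module)
  // Each .add() call registers one named state of the component
  .add('default', () => <Button onClick={action('clicked')}>Click me</Button>)
  .add('disabled', () => <Button disabled>Disabled</Button>);
```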
Please note, debugging only works in Visual Studio Code, and it does not work when Python is run from a virtualenv.

- Install Visual Studio Code and open the project.
- The launch.json and tasks.json files create a Debug Launch Configuration called "Server Debug".
- Set a breakpoint and start the debugger.
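For orientation, a "Server Debug" entry in launch.json might look roughly like this; the entry point and the babel-node path are assumptions, not the project's actual configuration:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Server Debug",
      "runtimeExecutable": "${workspaceFolder}/server/node_modules/.bin/babel-node",
      "program": "${workspaceFolder}/server/src/app.js",
      "sourceMaps": true
    }
  ]
}
```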
The presence of a jsconfig.json file in a directory indicates that the directory is the root of a JavaScript project. It also signals that the server is an ES6-syntax JavaScript project.
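A minimal jsconfig.json along these lines (the exact options in the repo may differ):

```json
{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs"
  },
  "exclude": ["node_modules"]
}
```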
To be able to use ES6 syntax, we use babel on the server side.
As mentioned in the babel-node docs, you should not be using babel-node in production. It is unnecessarily heavy, with high memory usage due to the cache being stored in memory. You will also always experience a startup performance penalty, as the entire app needs to be compiled on the fly.
Instead, when launching the server in higher environments we use `npm run startserver`. With this, we first transpile the code to a backwards-compatible version of JavaScript and then start the server; the transpilation happens in the `prestartserver` npm script.

While launching locally, we use `npm start`, which calls the `prestart` script from package.json.

The reason we have different server startup scripts for local and for higher environments is that `npm start` generates source maps, which help with local debugging. This isn't required when starting the server in a higher environment. A sketch of how these scripts fit together is below.
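npm automatically runs a `pre<name>` script before the corresponding `<name>` script, which is what this setup relies on. The script names below come from this README, but the entry point and exact babel flags are assumptions:

```json
{
  "scripts": {
    "prestart": "babel src --out-dir dist --source-maps",
    "start": "nodemon dist/app.js",
    "prestartserver": "babel src --out-dir dist",
    "startserver": "node dist/app.js"
  }
}
```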
This project uses TestCafe to run end-to-end tests against the frontend and backend.
You can run the tests once by running:

```
npm test
```

You can run the tests in watch mode (using TestCafe Live) by running:

```
npm run test:watch
```

By default, these commands run the tests using a headless Firefox browser. However, there are several other test commands defined in package.json that will run the tests against other browsers. For example, to run the tests against Chrome, Firefox, and Safari simultaneously, you could run:

```
npm run test:all
```
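For orientation, a TestCafe test file looks something like this; the page URL and selector below are illustrative, not from the project's actual test suite:

```js
// home.test.js - a hedged sketch of a TestCafe end-to-end test
import { Selector } from 'testcafe';

fixture('Home page')
    .page('http://localhost:8080');

test('renders a top-level heading', async t => {
    // Assert that at least one <h1> element exists on the page
    await t.expect(Selector('h1').exists).ok();
});
```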
You can run the server unit tests from the root folder by running:

```
cd server
npm test
```
We have used enzyme to test React components, along with chai, mocha, and sinon. You can run the client tests from the root folder by running:

```
cd client
npm test
```
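A minimal sketch of such a component test; the Button component and its click behaviour are assumptions, and the enzyme adapter is assumed to be configured in a test setup file:

```js
// Button.test.js - a hedged sketch of an enzyme + chai + sinon test
import React from 'react';
import { shallow } from 'enzyme';
import { expect } from 'chai';
import sinon from 'sinon';
import Button from '../src/components/Button';

describe('<Button />', () => {
  it('calls onClick when clicked', () => {
    const onClick = sinon.spy();
    const wrapper = shallow(<Button onClick={onClick} />);
    // Simulate a click on the rendered root element
    wrapper.simulate('click');
    expect(onClick.calledOnce).to.equal(true);
  });
});
```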
We have used supertest to write integration tests, along with chai, mocha, and sinon. The framework brings up the server and uses the `local` ENVIRONMENT configuration. You can run the integration tests from the root folder by running:

```
cd server
npm run integrationTest
```
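A minimal sketch of such an integration test; the app export path and the /health route are assumptions, not actual project endpoints:

```js
// health.integration.test.js - a hedged sketch of a supertest test
import request from 'supertest';
import { expect } from 'chai';
import app from '../src/app';

describe('GET /health', () => {
  it('responds with 200', async () => {
    // supertest binds an ephemeral port to the Express app and issues the request
    const res = await request(app).get('/health');
    expect(res.status).to.equal(200);
  });
});
```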
The expected git workflow for feature development is:
- Either create a local branch for your feature, or create a branch for your feature in your fork of the repo.
- When your work is ready, create a pull request against the `master` branch of this repo.
- Once your pull request to `master` has been merged, create a pull request from `master` to `staging`, merge it, and deploy to the staging environment to manually test your feature.
- Once you've verified that everything works in the staging environment, you can create a pull request from `staging` to `production`, merge it, and deploy to the production environment.
The backend uses ansible to deploy code to EC2 servers. For more info on how this works, see the Ansible section of this README.
To deploy to staging:

```
cd server
./devops/staging_deploy.sh
```

To deploy to production:

```
cd server
./devops/prod_deploy.sh
```
The frontend uses the AWS CLI to push the latest build to S3 and invalidate the corresponding CloudFront cache.
To deploy to staging:

```
git checkout staging
cd client
./devops/staging_deploy.sh
```

To deploy to production:

```
git checkout production
cd client
./devops/prod_deploy.sh
```
On the backend, Peblio uses credstash, a utility that uses AWS KMS to securely store data in AWS DynamoDB, to manage secrets that should not be checked into version control.
server/run_with_credstash.sh takes care of mapping credstash secrets to environment variables used by the Express server. This bash script expects you to have an AWS profile called `peblio` configured with the credentials for the `peblio-local-development` IAM user.
To see a list of the secrets currently stored in credstash:
```
cd server
./list_credstash_secrets.sh
```
To add a secret to credstash:
```
cd server
./add_credstash_secret.sh my.secret donttellanyone local
```
When adding a new secret, always make sure to add a version for each of the following environments: `local`, `test`, `staging`, and `production`.
Once you've added a secret, make sure that you also update server/run_with_credstash.sh to map that secret to an environment variable, along the lines of the sketch below.
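The mapping inside run_with_credstash.sh is along these lines; the variable name, secret name, and the use of an `environment` context are placeholders and assumptions, not the script's actual contents:

```
# Hypothetical example: expose a credstash secret to the Express server
export MY_SECRET="$(credstash get my.secret environment=local)"
```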
If you need to delete a secret, you can run:

```
cd server
./delete_credstash_secret.sh my.secret local
```
However, DO NOT DO THIS unless you're sure that it won't affect the production servers or anyone else's work.
Environment variables for the frontend are defined in three `.env` files in the `client` directory: .env, .env.staging, and .env.production.
These files are checked into version control, because any environment variable used in the frontend will be visible in the compiled JavaScript code. With that in mind, DO NOT PUT SENSITIVE DATA OR CREDENTIALS IN THESE FILES! Anything that needs to be kept secret should be stored in credstash and used exclusively on the backend.
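For illustration, an entry in one of these files is an ordinary dotenv-style assignment; the variable name below is hypothetical, not one the project actually uses:

```
# client/.env - illustrative only
API_URL=http://localhost:8080/api
```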
ansible is an IT automation tool that is used in this project to automate the creation, provisioning, and deployment of backend EC2 instances.
The structure of the server/devops/ansible directory roughly follows the recommended strategy for multistage environments described in this tutorial.
Inventories specify which EC2 servers belong to which environments.
There are two separate inventories located in server/devops/ansible/inventories, one for `staging` and one for `production`. Both inventories use an EC2 Dynamic Inventory to find any currently running EC2 instances that belong to the corresponding environment.

The code that defines these two inventories is almost identical, with the key difference being a filter in the respective ec2.ini files that only includes EC2 instances with an `env` tag corresponding to the environment.
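That filter looks something along these lines in ec2.ini (the exact line in the repo may differ):

```ini
# ec2.ini for the staging inventory; the production inventory
# would filter on tag:env=production instead
instance_filters = tag:env=staging
```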
Each inventory also defines group variables that are unique to that environment in various YAML files. These variables are used in roles when creating, provisioning, and deploying EC2 instances. Any variables that are invariant across environments are defined in server/devops/ansible/inventories/cross_env_vars.yml, which is symlinked into both the `staging` and `production` inventories.
Roles define sets of tasks that are run either locally or remotely on EC2 instances via SSH. Some roles may also define their own variables or include template files that will be copied over to an EC2 instance.
Most roles are intended to be idempotent. That means that there should be no difference between running a role like deploy_webserver once vs. running it many times in a row. However, not all roles are idempotent - create_webserver spins up a new EC2 instance each time that it runs.
Roles don't specify which hosts they should run on - that's taken care of by playbooks.
Playbooks map host groups to roles. This allows the same role to be used multiple times for different hosts in different contexts, e.g. the `deploy_webserver` role is used in both create_webserver.yml and deploy_webserver.yml.
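A minimal sketch of this host-group-to-role mapping; the group name and remote user are assumptions, not necessarily the project's actual values:

```yaml
# Sketch of a playbook like deploy_webserver.yml
- hosts: webservers
  remote_user: ubuntu
  roles:
    - deploy_webserver
```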
For convenience, there are several bash scripts (`staging_deploy.sh`, `prod_deploy.sh`, etc.) located in the devops directory that abstract away the details of running ansible for common tasks like deployment and provisioning.
We use the `peblio` profile for AWS; it is set via the AWS_PROFILE environment variable.
Let's talk about the front end first:

- Create an S3 bucket - this needs to be done manually.
- When creating the bucket, you can copy the settings from an existing bucket.
- Set up static website hosting for the bucket once it's created.
- Create a CloudFront distribution - choose a web distribution.
- Make note of the distribution ID.
- Update the distribution ID and S3 bucket name in deploy.sh (sketched below).
- After this, you can use the deploy scripts, and the site should be up on the S3 link.
- To map this URL to a domain name, go to the DNS provider (GoDaddy) and update the details.
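For reference, the deploy script boils down to AWS CLI calls along these lines; the bucket name, build directory, and distribution ID below are placeholders:

```
# Push the latest build to S3
aws s3 sync build/ s3://my-peblio-bucket --delete --profile peblio
# Invalidate the CloudFront cache so the new build is served
aws cloudfront create-invalidation --distribution-id EXXXXXXXXXXXXX --paths "/*" --profile peblio
```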
Regarding the back end:

- Create a new IAM user (this is not necessary, but will make for a clean, new system).
- Change cross_env_vars.yml to include the new GitHub repo and new IAM user.
- Change the variables inside group_vars - you will choose the machine size here.
- Now you can run the create_webserver script and the deploy_webserver script! YAAAY!
- After this, run the script to spin up the server.
Troubleshooting commands for the python3 upgrade:

```
# Clear space
df -h
sudo apt-get clean
sudo apt clean
cd /home/ubuntu/proto-2/server
npm dedupe
cd /usr/src
ls -lt
# Delete older headers like linux-aws-headers-4.4.0-1047
sudo apt purge linux-aws-headers-4.4.0-1079
sudo apt-get install linux-aws-headers-4.4.0-1095
sudo find / -name "linux-aws-*"
sudo apt-get autoremove
sudo apt autoremove
sudo apt autoclean
```

Install pip3:

```
sudo apt remove --purge python3-pip
curl -O https://bootstrap.pypa.io/pip/3.5/get-pip.py
sudo -E python3 get-pip.py
```