
🔃 Raise Server

Back-end code and resources for the Raise platform.

🏃 Running locally

See the main README for general instructions.

🏃 Live updates

If the local server is running, your changes will immediately be applied, except for:

  • Adding or removing entire endpoints or functions
  • Changing the database seed data
  • Resetting the database

🪝 Emulating Stripe webhooks

🔧 Setup

  1. Get access to a Stripe account. Either set up your own Stripe account or get access to a shared one.
  2. Update the environment variables in src/env/local.ts to your test API keys.
  3. Install the Stripe CLI.

🏃 Running

  1. Run stripe listen --forward-to localhost:8001/stripe/webhook (more info).
  2. Update the STRIPE_WEBHOOK_SECRET in src/env/local.ts to the secret output by the Stripe CLI.
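
Once the secret is set, the webhook endpoint can verify that events genuinely come from Stripe. Below is a minimal sketch of the verification step using the Stripe SDK; the helper name and wiring are illustrative, not the project's actual code:

```ts
import Stripe from 'stripe';

const stripe = new Stripe('sk_test_...', { apiVersion: '2020-08-27' });

// constructEvent throws if the signature doesn't match, so forged events are
// rejected. rawBody must be the unparsed request body exactly as Stripe sent it.
export const verifyStripeEvent = (
  rawBody: string,
  signatureHeader: string, // the Stripe-Signature request header
  webhookSecret: string, // the whsec_... value printed by `stripe listen`
): Stripe.Event => stripe.webhooks.constructEvent(rawBody, signatureHeader, webhookSecret);
```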

😕 Troubleshooting

Error: Exception in thread "main" java.io.IOException: Failed to bind to 0.0.0.0/0.0.0.0:8004

This means a DynamoDB instance is still running in the background from a previous run, so the new one cannot bind to the same port. To stop the old one, use the command killall java (NB: this will also kill any other Java processes on your system, which may or may not be fine depending on what you've got going on).

📁 File structure

(in rough order of what is more likely to be useful to you)

  • src: The bulk of the source code - you probably want to work in here most of the time
    • api: Code for endpoints on the HTTP API. The file system structure represents the path and method used for the endpoint, e.g. src/api/admin/fundraisers/get.ts represents the handler for the route GET /admin/fundraisers. See the request handling section for more details about writing API endpoints.
    • env: Environment configuration, named {environment}.ts (e.g. local.ts). For security reasons, configuration for the dev and prod environments should not be checked into the repository. When running or deploying the server, the environment config will be copied to a file env.ts which is actually used - this will be overwritten so do not edit this file directly!
    • scheduler: Code for scheduled functions. Generally this should not contain much logic itself, and should instead call the API, which handles more of the logic. This way we get the benefits of the API middleware, e.g. schema checks and audit logging.
    • helpers: Helper functions and types that support the rest of the codebase
  • local: Devtools and database seed files
    • table_*.json: Seed file for database. When running locally, the relevant database table is prepopulated with this data so you have something to work with. The data is reset to this each time the local database is restarted.
  • serverless.ts: Defines infrastructure and settings for the serverless framework and related plugins
  • package.json: Defines dependencies to use, and npm commands (more info)
  • package-lock.json: Edited automatically by npm, specifies exact versions of dependencies based on package.json (more info)
  • .eslintrc: Configuration for ESLint, the linter that prevents some bad coding practices and enforces consistent code formatting
  • .github: Defines GitHub configuration, most importantly CI and CD pipelines (more info)
  • .gitignore: Defines what files git should ignore (more info)
  • .gitpod.yml: Defines Gitpod configuration (more info)
  • webpack.config.js: Defines webpack settings (related to serverless-webpack) for bundling up our code before running or deploying it
  • .vscode: Configuration files for improving the code editing experience in VS Code

➕ Adding an entity

To add a new entity:

  1. Define its schema, and request schemas, in src/schemas/jsonSchema.ts
  2. Create a database table for it in src/helpers/tables.ts
  3. Create endpoints, probably like:
src
  api
    admin
      entity
        {entity_id}
          patch.ts
        get.ts
        post.ts
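
For step 1, a new schema might look roughly like the sketch below; the widget entity and its properties are purely illustrative, so follow the conventions of the existing schemas in that file:

```ts
// src/schemas/jsonSchema.ts (illustrative addition)
export const widgetSchema = {
  type: 'object',
  properties: {
    id: { type: 'string' },
    name: { type: 'string' },
    createdAt: { type: 'integer' }, // unix timestamp in seconds
  },
  required: ['id', 'name', 'createdAt'],
  additionalProperties: false,
} as const;
```

After changing schemas, run npm run schemas to regenerate the corresponding TypeScript types (see the request handling section below).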

🧅 Request handling

Our HTTP API is hosted on AWS API Gateway. This delegates to AWS Lambda, which receives a payload in a specific format from API Gateway.

We use a library called middy to transform this request into something more useful and to perform appropriate checks (e.g. JWT authentication, schema validation, parsing JSON) before passing it to the actual handler. Middy implements an onion-like middleware pattern, to which we register middlewares.

In general, you don't need to worry about the inner workings of this. However, you should know that:

  • You should wrap any API handlers in middyfy to attach the relevant middleware (see existing endpoints for examples)
  • Middy will ensure the body you are receiving satisfies the requestSchema, and that the body you are returning satisfies the responseSchema. These schemas should be JSON Schemas, probably defined in src/helpers/schemas.ts. To generate the corresponding TypeScript types for these JSON schemas (helpful for use in TypeScript code) you should run npm run schemas which will update the src/helpers/schemaTypes.ts file. You should keep the two files in sync by running this whenever the schemas change.
  • If the requiresAuth parameter is set to true, middy will perform authentication. This ensures the caller has provided a valid access token. However, this does not perform authorisation, i.e. checking whether the caller should be able to use the endpoint. You need to implement this yourself, maybe using the helper method assertHasGroup.
  • Your handler will be passed an event object - you can use the TypeScript types to explore what's attached to it, but briefly the key things you'll want are:
    • event.body: the parsed request sent to the endpoint
    • event.pathParameters: the path parameters (e.g. for PATCH /admin/fundraisers/ABCD you'd get { fundraiserId: "ABCD" })
    • event.auth: authentication details
  • You should create and throw detailed errors with createHttpError from the http-errors package, e.g. throw new createHttpError.BadRequest("You cannot change the donationAmount on a card payment")
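
Putting these pieces together, an endpoint handler might look roughly like the sketch below. The middyfy argument order and the import paths here are guesses based on the description above, so copy an existing endpoint rather than this sketch:

```ts
// src/api/admin/entity/{entity_id}/patch.ts (illustrative sketch)
import createHttpError from 'http-errors';
import { middyfy } from '../../../../helpers/wrapper'; // assumed location
import { entityEditsSchema } from '../../../../helpers/schemas'; // assumed schema name
import { assertHasGroup } from '../../../../helpers/auth'; // assumed location

export const main = middyfy(entityEditsSchema, null, true /* requiresAuth */, async (event) => {
  // Authorisation: middy has authenticated the caller, but we still need to
  // check they are allowed to use this endpoint (the group name is illustrative)
  assertHasGroup(event, 'National');

  // event.body has already been validated against entityEditsSchema
  const edits = event.body;

  // The path parameter name depends on the placeholder used in the route
  const { entityId } = event.pathParameters;

  // ...apply the edits using the helpers in src/helpers/db.ts
});
```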

💳 Payments

We use Stripe to process payments.

The one-off donation flow is:

  1. The front-end sends a request to POST /public/fundraisers/{fundraiserId}/donation to create a Stripe payment intent and a donation with a pending payment. We return the client id for this payment intent.
  2. The front-end uses this client id for the payment intent to set up and confirm a card payment.
  3. Stripe sends a payment_intent.succeeded webhook to POST /stripe/webhook, confirming the payment. We validate and cross-reference the details against our records and mark the payment as paid if everything looks good. We allocate match funding, and update the amounts on the donation and the fundraiser.
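
Step 1 amounts to creating a payment intent with the Stripe SDK, roughly as sketched below; the helper is illustrative, and the "client id" mentioned above corresponds to the payment intent's client_secret in Stripe's terminology:

```ts
import Stripe from 'stripe';

const stripe = new Stripe('sk_test_...', { apiVersion: '2020-08-27' });

// Illustrative helper: create the payment intent for a one-off donation
export const createDonationIntent = async (amountInPence: number): Promise<string> => {
  const intent = await stripe.paymentIntents.create({
    amount: amountInPence, // smallest currency unit, e.g. pence
    currency: 'gbp',
  });
  // a pending payment would also be stored against the donation here
  return intent.client_secret!; // returned to the front-end to confirm the payment
};
```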

The recurring donation flow is:

  1. The front-end sends a request to POST /public/fundraisers/{fundraiserId}/donation to create a Stripe payment intent and a donation with a pending payment. We tell Stripe we want to save this card for use in the future. We return the client id for this payment intent.
  2. The front-end uses this client id for the payment intent to set up and confirm a card payment, and to set up their card for future use.
  3. Stripe sends a payment_intent.succeeded webhook to POST /stripe/webhook, confirming the payment. We validate and cross-reference the details against our records and mark the payment as paid if everything looks good. We allocate match funding, and update the amounts on the donation and the fundraiser. We create a Stripe customer and save the payment method to this customer for future usage. We store the customer and payment method ids on the donation.
  4. Later on (e.g. weekly), a scheduled function runs which calls the POST /scheduler/collect-payments endpoint. This endpoint makes all the card payments due by creating payment intents with the customer and payment method id for immediate confirmation. If a payment fails, it will be retried later unless marked as cancelled by an admin.
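
For step 4, charging a saved card off-session follows Stripe's documented pattern of creating a payment intent with the stored customer and payment method ids and confirming it immediately. A rough sketch (the helper and error handling are illustrative):

```ts
import Stripe from 'stripe';

const stripe = new Stripe('sk_test_...', { apiVersion: '2020-08-27' });

// Illustrative helper: collect one due payment for a recurring donation
export const collectPayment = async (
  customerId: string,
  paymentMethodId: string,
  amountInPence: number,
): Promise<void> => {
  try {
    await stripe.paymentIntents.create({
      amount: amountInPence,
      currency: 'gbp',
      customer: customerId,
      payment_method: paymentMethodId,
      off_session: true, // the donor is not present to authenticate
      confirm: true, // attempt the charge immediately
    });
  } catch (err) {
    // mark the payment as failed so it is retried later,
    // unless an admin has cancelled it
  }
};
```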

🗃 Database

The database we use is AWS DynamoDB. It is a managed NoSQL database that helps us:

  • minimize database maintenance
  • minimize costs
  • integrate well with AWS Lambda and AWS IAM

While DynamoDB lacks many of the concepts of a SQL database, for simplicity and maintainability we map each entity to its own table and enforce a strict schema.

Tables are defined in src/helpers/tables.ts which then influences the serverless.ts file. Helper functions to access and modify the data are provided in src/helpers/db.ts - you should strongly avoid writing custom database logic elsewhere.

We use condition expressions and transactions liberally to ensure the integrity of our data and prevent concurrency-related errors. Note that because multiple functions may be editing the same item and reads are only eventually consistent by default, we need to be extra vigilant about concurrency bugs.
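
To illustrate the mechanism (real code should go through the helpers in src/helpers/db.ts rather than doing this directly), a conditional update with the v2 AWS SDK looks roughly like:

```ts
import { DynamoDB } from 'aws-sdk';

const client = new DynamoDB.DocumentClient();

// Illustrative: update a donation's amount only if nobody else has changed it
// since we read it. If another function edited it concurrently, the condition
// fails and the update is rejected rather than silently clobbering their write.
export const updateDonationAmount = async (
  donationId: string,
  oldAmount: number,
  newAmount: number,
): Promise<void> => {
  await client.update({
    TableName: 'donations', // table name is illustrative
    Key: { id: donationId },
    UpdateExpression: 'SET donationAmount = :new',
    ConditionExpression: 'donationAmount = :old',
    ExpressionAttributeValues: { ':new': newAmount, ':old': oldAmount },
  }).promise();
};
```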

🕵 Security, governance and auditing

If you think you have found a security issue, no matter how serious or whether you caused it, please immediately report it to the national team's tech person. If you're unsure whether something is a security problem, please report it.

🔒 JWT authentication and authorization

Where enabled, the API uses JWTs for authentication and authorization. Our implementation uses public/private key cryptography: when admins log in, we issue them an access token, which is a JWT signed with our private key. When they access resources, we use our public key to check that the access token they provide was signed with our private key. We also embed information like their email and groups in the token to make decisions about what they should be able to access.
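
A minimal sketch of the verification step, assuming the jsonwebtoken package and an RS256 key pair (the payload shape is illustrative):

```ts
import jwt from 'jsonwebtoken';

interface AccessTokenPayload {
  email: string;
  groups: string[]; // e.g. ['National'], used for authorisation decisions
}

// Throws if the token was not signed by the matching private key or has expired
export const verifyAccessToken = (token: string, publicKey: string): AccessTokenPayload =>
  jwt.verify(token, publicKey, { algorithms: ['RS256'] }) as AccessTokenPayload;
```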

In general:

  • the national team is granted access to everything
  • local teams are granted access to everything under their fundraisers
  • admins can see all fundraisers, but not the donations or payments under those fundraisers

✅ Checks

API endpoints must perform checks to ensure that executing a request maintains data integrity.

Any code changes to the server must be peer-reviewed to ensure the code is correct and free of security defects. Additionally, we use unit tests to ensure our code does what we expect/want it to.

Any admins must be given appropriate training in the correct usage of the system, as well as information security training, before being given access to the production system.

👀 Auditing

To review incidents and put things right if there is a problem, we store basic logs of API requests. For deeper investigations, we also store audit logs, which record suspicious activity (e.g. failed logins) and all database edits.

In production, we also store comprehensive database backups to ensure we can examine or roll back to a previous state in case of a problem.

🏗 Infrastructure

The server is hosted on Amazon Web Services (AWS). This is a cloud platform provided by Amazon that allows the server to scale to meet demand, minimizes the burden of running our own servers, and allows us to save on costs when there is no traffic.

The primary computation platform we use is AWS Lambda. This is what actually runs the code in the functions.

We store data in AWS DynamoDB, a managed NoSQL database. More details can be found in the database section.

Our HTTP API is served by AWS API Gateway, which then forwards on the requests to AWS Lambda for us.

Scheduled events are triggered by AWS CloudWatch.

We manage permissions between the different AWS services with AWS IAM (e.g. to allow the Lambda functions to talk to the DynamoDB database).

We use the Serverless framework to manage all this infrastructure, which uses AWS CloudFormation under the hood. We define the configuration for the framework (and therefore the AWS resources we want) in serverless.ts.
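
The general shape of serverless.ts is roughly as follows (the service name, runtime and example function are illustrative; the real file defines far more):

```ts
import type { AWS } from '@serverless/typescript';

const serverlessConfiguration: AWS = {
  service: 'raise-server', // illustrative name
  provider: {
    name: 'aws',
    runtime: 'nodejs14.x', // illustrative runtime
  },
  functions: {
    adminFundraisersGet: {
      handler: 'src/api/admin/fundraisers/get.main',
      events: [{ http: { method: 'get', path: 'admin/fundraisers' } }],
    },
  },
};

module.exports = serverlessConfiguration;
```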

Serverless also comes with plugins that help us:

  • serverless-webpack: Allows us to use webpack to bundle up our code in the way AWS Lambda expects before running or deploying it
  • serverless-offline: Allows us to mock many AWS services (Lambda, API Gateway, CloudWatch) locally so we can run the server ourselves
  • serverless-dynamodb: Allows us to mock AWS DynamoDB locally so we can run the server ourselves

👷 CI

We have CI pipelines that run in GitHub Actions. These check that our TypeScript compiles correctly, our code abides by lint rules and our tests pass. These checks are important, and we should only merge in branches when the pipeline succeeds. If the master branch has a failing pipeline, this should be investigated and fixed with high priority.

On the master branch, our CI pipeline deploys our changes to the dev environment in AWS.