From late 2012 to the present I have been writing backends (server-side code) for web applications. This document summarizes many aspects of how I write these pieces of code.
I'm writing this lore down for three purposes:
- Share it with you.
- Systematize it for future reference and improvement.
- Learn from your feedback.
Your questions and observations are very welcome!
If you must sting, please also be nice. But above all, please be accurate.
I'd like to thank everyone who joined in the HN discussion of this document and pointed out or stimulated many interesting points that I had missed. I'm humbled and grateful for the tremendously positive and constructive feedback I received. Y'all rock.
This is all public domain; take whatever you find useful.
My approach to backends (as with code in general) is to iteratively strive for simplicity. This approach - and a share of good luck - has enabled me to write lean, solid and maintainable server code with a relatively small time investment and minimal hardware resources.
To see how this approach works in practice, you can check the backend code of ac;pic, an open source service.
- The latest Ubuntu Long Term Support version as the underlying OS. I use it both on the cloud (see Architecture below) and locally.
- The latest Node.js Long Term Support version, although older (even much older) versions of node would suffice. The latest version is recommended because of security and performance improvements.
- The latest version of Redis that is available to the package manager of the current Ubuntu.
- The latest version of nginx that is available to the package manager of the current Ubuntu.
- Amazon Web Services, in particular S3 (files), SES (email) and optionally EC2.
The simplest version of the architecture is that for local development. It looks like this:
Architecture A
```
              ┌──╌ local ubuntu box ╌────────────────────────────┐
              │                                                  │
internet <────┼────────────────> node <──────> local filesystem  │
(localhost)   │                   ┬ ┬                            │
              │                   │ └────────> redis             │
              │                   │                              │
              │                   └──────────────────────────────┼───> AWS S3 & SES
              └──────────────────────────────────────────────────┘
```
The simplest version of the architecture on a remote environment (a server connected to the internet with a public IP address) looks like this:
Architecture B
```
              ┌──╌ remote ubuntu box ╌─────────────────────────────┐
              │                                                    │
internet <────┼──> nginx <─────> node <──────> local filesystem    │
              │                   ┬ ┬                              │
              │                   │ └────────> redis <─────────────┼────────┐
              │                   │                                │        v
              │                   └────────────────────────────────┼───> AWS S3 & SES
              └────────────────────────────────────────────────────┘
```
nginx works as a reverse proxy. Its main use is to provide HTTPS support - in particular, HTTPS with nginx is extremely easy to configure using Let's Encrypt's free, automated and open certificates. For more about HTTPS and nginx, please refer to the HTTPS section.
This architecture, which runs everything on a single Ubuntu instance (node, redis, and access to the local filesystem), can take you surprisingly far. It is definitely sufficient as a test environment. It can also serve as the production environment for an MVP, or even for an application with moderate use that is not mission critical.
redis can be replaced or complemented by another NoSQL database (such as MongoDB) or a relational database (such as Postgres). Throughout this section, where you see redis, feel free to replace it with the database you might want to use.
node should be stateless; or rather, store all its state in redis and the FS. This allows for easier scaling later (see below); more importantly, by storing state explicitly (either in redis or in the FS), the overall structure is much easier to understand and debug.
Notice that there's an arrow connecting redis to AWS. This reflects the (highly recommended) possibility of periodically and automatically uploading redis dumps to AWS S3, to restore the database from a snapshot in case of an issue.
Relying on the local FS is optional, since we're also using AWS S3. For a discussion, see the File section.
The performance of this simple setup can be outstanding. I have personally witnessed virtual instances (which have significantly less power than a comparable dedicated server) with 4-8 cores and 4-8GB of RAM handling over a thousand requests per second (100M requests per day) with sub-10 millisecond latencies. Unless your application logic is CPU intensive, performance should not be a reason for changing this architecture unless you expect your load to be closer to a billion requests per day.
The two main reasons for using an architecture involving more machines are:
- Resilience to outages. If we only use one machine, we have a single point of failure.
- Too much data: more than 16GB in redis and/or more than 100GB in files; at those sizes, the data may no longer fit in a single instance.
An architecture with several node instances running is of course possible, in case loads are very high and/or latency must be kept to a bare minimum. In this case, nginx is placed on its own machine, and the filesystem and redis are moved to a separate data instance.
Architecture C
```
                                        ┌─╌ api─1 ╌─┐
                                        │           │
                                   ┌────┼─> node <──┼────┬──> AWS S3 & SES <─────┐
                                   │    └───────────┘    │                       │
                                   │                     │                       │
              ┌╌ load balancer ╌┐  │    ┌─╌ api─2 ╌─┐    │  ┌─╌ data─server ╌─┐  │
              │                 │  │    │           │    │  │                 │  │
internet <────┼────> nginx <────┼──┴────┼─> node <──┼────┴──┼─┬─> FS server   │  │
              └─────────────────┘       └───────────┘       │ └─> redis <─────┼──┘
                                                            └─────────────────┘
```
Notice that we now have a data server, comprising both a database and files. This seems to reflect a pattern that has been repeated since the very beginnings of computing, where there are always two types of storage (one fast and small, another one larger and slower). redis and the FS serve as the particular incarnations of this pattern within our architecture.
Because all nodes refer to the data server for all their state (including sessions), any request can be served by any node; which node serves it is inconsequential. This sidesteps the need for sticky sessions.
In Architecture C, a further modification is necessary: instead of relying on local access to the filesystem, a node server must act as a filesystem server, implementing routes for reading, writing, listing and deleting files. In my experience, this logic fits in 100-200 lines, but the code must be written carefully. If the data server is accessible from the broader internet, redis must be secured through either a password or (better) through spiped, while the FS server will probably have to use either a password in a header or (better) an auth system with cookies to authorize/deny access. If you use AWS VPC (or any other means of restricting network access to the data server), these security provisions are unnecessary.
In either case, the FS server will also communicate with AWS S3 and will handle its contents.
It is highly recommended that redis should have a follower/slave replica on a separate server.
Architecture D
```
                                        ┌─╌ api─1 ╌─┐
                                        │           │
                                   ┌────┼─> node <──┼────┬──> AWS S3 & SES <───────┐
                                   │    └───────────┘    │                         │
                                   │                     │                         │
              ┌╌ load balancer ╌┐  │    ┌─╌ api─2 ╌─┐    │  ┌─╌ data─server-1 ╌─┐  │  ┌─╌ data─server-2 ╌─┐
              │                 │  │    │           │    │  │                   │  │  │                   │
internet <────┼────> nginx <────┼──┴────┼─> node <──┼────┴──┼─┬─> FS server     │  │  │       redis       │
              └─────────────────┘       └───────────┘       │ └─> redis <───────┼──┴──┼──> replica        │
                                                            └───────────────────┘     └───────────────────┘
```
With n node servers, it is possible to scale horizontally, improving performance and avoiding an outage when one of the node servers goes down.
This architecture, however, still has two single points of failure: the load balancer and the data server. Multiple load balancers are also possible, either through multiple A records (DNS load balancing) or through AWS ELB.
This leaves the problem of scaling the data server. The FS can be split among several servers, although this undertaking is nontrivial; in any case, if you have very large amounts of data, you can rely mostly on AWS S3 and use a FS server as a mere cache.
The truly difficult thing to scale (and where you should be very careful with vendor claims) is the database. Partitioning a database allows you to 1) split the data into sizes that fit a single server, no matter how much data you have; and 2) depending on the type of partitioning/sharding you use, avoid downtime or data loss in the event of a database node failing.
Here it is essential to be aware of the CAP theorem. When we partition a database into multiple nodes connected through the internet (and thus susceptible to delays or errors in their communication), we can only choose between consistency and availability: either respond to all requests and risk serving & storing inconsistent data, or remain fully consistent at the price of some downtime. This is a hard and interesting choice. Most of the solutions out there, including redis cluster, favor availability over consistency. Whatever solution you use, it should be one of the default ways in which your database of choice is meant to scale, unless you really know what you're doing.
Architecture E
```
                                        ┌─╌ api─1 ╌─┐
                                        │           │
                                   ┌────┼─> node <──┼────┬──> AWS S3 & SES <────────┐
                                   │    └───────────┘    │                          │
                                   │                     │                          │
              ┌╌ load balancer ╌┐  │    ┌─╌ api─2 ╌─┐    │  ┌─╌ data─server-1a ╌─┐  │  ┌─╌ data─server-1b ╌─┐
              │                 │  │    │           │    │  │                    │  │  │                    │
internet <────┼────> nginx <────┼──┴────┼─> node <──┼────┼──┼─┬─> FS server 1    │  │  │      redis 1       │
              └─────────────────┘       └───────────┘    │  │ └─> redis 1 <──────┼──┼──┼──> replica         │
                                                         │  └────────────────────┘  │  └────────────────────┘
                                                         │                          │
                                                         │  ┌─╌ data─server-2a ╌─┐  │  ┌─╌ data─server-2b ╌─┐
                                                         │  │                    │  │  │                    │
                                                         └──┼─┬─> FS server 2    │  │  │      redis 2       │
                                                            │ └─> redis 2 <──────┼──┴──┼──> replica         │
                                                            └────────────────────┘     └────────────────────┘
```
Below are a few more notes about database partitioning.
Providing HTTPS support directly from node is possible, but in my experience it is considerably more cumbersome. Using nginx seems to reduce the overall complexity of the architecture & setup.
Using nginx to receive all requests also has the advantage that our node server can run as a non-root user, which prevents an exploit in node from giving an attacker access to the OS. Without further configuration, the default ports for HTTP (80) and HTTPS (443) can only be bound by the root user (any port under 1024, actually); in this architecture, nginx runs as root and listens on those two ports, forwarding traffic to node, which listens on a port higher than 1024.
Questions for future research: does having node instead of nginx run as root reduce the attack surface of the application, or is it mere cargo culting? Are there concrete security advantages to not running nginx as root which would merit the trouble of setting up port forwarding?
To configure HTTPS with nginx: if you own a domain `DOMAIN` (which could be either a domain (`mydomain.com`) or a subdomain (`app.mydomain.com`)) and its main A record is pointing to the IP of an Ubuntu server under your control, here's how you can set up HTTPS (be sure to replace occurrences of `DOMAIN` with your actual domain :):
```
sudo apt-get install certbot python3-certbot-nginx -y
```

Note: in old versions of Ubuntu you might have to run these two commands before installing certbot:

```
sudo add-apt-repository ppa:certbot/certbot -y
sudo apt-get update
```
In the file `/etc/nginx/sites-available/default`, change `server_name` to `DOMAIN`.
```
sudo service nginx reload
sudo certbot --nginx -d DOMAIN
```
For forwarding traffic from nginx to a local node, I use this nginx configuration snippet within a `server` block. If you use it, please replace `PORT` with the port where your node server is listening.
```
location / {
   proxy_pass http://127.0.0.1:PORT/;
   proxy_set_header X-Forwarded-For $remote_addr;
}
```
The `proxy_set_header` line sends to node the IP address of whoever made the request, which is useful for security purposes (like detecting an abnormal geographic location for a login, or a repeated source of malicious requests).
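On the node side, retrieving that IP can look like this minimal sketch (node lowercases header names; the fallback covers running locally without nginx in front):

```javascript
// Return the originating IP of a request. Behind nginx, it comes in the
// X-Forwarded-For header set above; without a proxy, fall back to the socket.
var clientIp = function (req) {
   return req.headers ['x-forwarded-for'] || req.socket.remoteAddress;
}
```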
Redis is an amazing choice of database. While it may not be the best choice for your use case, I highly recommend that you take a look at it, particularly if you've never used it before.
Redis is much more than a key-value store: it is a server that implements data structures in memory. In practice, this means that you have access to fundamental constructs like lists, hashes, sets. These data structures are implemented with amazing quality, consistency and performance.
If you work with mission critical data (financial transactions, healthcare), I suggest working instead with a relational database that fulfills the ACID properties and has a very high probability of never losing data. As far as I know, redis can guarantee ACI, but cannot guarantee D (durability) to the same extent - at least not if you're running a single node.
As stated above in the Architecture section, it is highly recommended to turn on RDB persistence (and possibly AOF). I also highly recommend backing up redis dumps to AWS S3, instead of just locally - this can be done by the node instance itself.
Because redis is an in-memory database, if either redis (or the underlying instance) is restarted you'll almost certainly lose a few seconds of data (the data written to memory but not yet committed to the RDB/AOF files, which live on disk and survive a restart). It is critical that neither redis nor the instance where it runs be restarted at random. The deeper causes of accidental restarts should be analyzed and eliminated. In seven years of using redis, I've never experienced a restart coming from redis itself (or a redis bug, for that matter); but I have experienced Docker and the OS restarting redis because of memory limits.
I recommend writing down in the readme the key structure used by your application in redis. Here's an example.
I don't like the redis cluster (partitioning/sharding) model; for me, scaling redis to multiple nodes while maintaining consistency and understandability of the overall structure is an open problem.
Using AWS S3 is highly recommended: it is a secure, redundant and cheap way to store files. To use S3 from node, I recommend the official SDK (`aws-sdk`). Here's an example initialization.
```
var s3 = new (require ('aws-sdk')).S3 ({
   apiVersion: '2006-03-01',
   sslEnabled: true,
   credentials: {accessKeyId: ..., secretAccessKey: ...},
   params: {Bucket: ...}
});
```
The main methods I use are `s3.upload`, `s3.getObject`, `s3.headObject`, `s3.deleteObjects` and `s3.listObjects`. It's possible to write a ~60 line code layer that provides consistent access to these methods and which can be used by the rest of the server.
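As a sketch of what such a layer can look like - the `s3` client is assumed to be initialized as in the snippet above, and the wrapper's method names and error normalization are illustrative:

```javascript
// A thin layer over the S3 methods above: uniform (key, ..., cb) signatures
// for the rest of the server, and "no such key" normalized to a null result.
var S3Layer = function (s3) {
   return {
      put: function (key, body, cb) {
         s3.upload ({Key: key, Body: body}, cb);
      },
      get: function (key, cb) {
         s3.getObject ({Key: key}, function (error, data) {
            if (error && error.code === 'NoSuchKey') return cb (null, null);
            cb (error, data && data.Body);
         });
      },
      del: function (keys, cb) {
         s3.deleteObjects ({Delete: {Objects: keys.map (function (key) {
            return {Key: key};
         })}}, cb);
      }
   };
}
```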
I usually also use the local FS to serve files, relying on S3 only for recovery in case there's a server or disk issue. There are two main reasons for this:
- Cost - if your application potentially serves a lot of data, S3 cost can be very significant; downloading a GB once currently costs 3-4x the cost of storing it for an entire month. What makes this worse is that it is hard to predict how much data you may use on a given month, so you have an unlimited financial downside. An ideal service would not charge for downloads, just for storage.
- Speed - S3 is not very fast, even after accounting for network delays. Latencies of 20-50ms are par for the course, compared to sub-5ms for a local FS.
By serving the files from your own servers, you have much better speed and your cost is low and predictable.
The downside of using the local FS is that it requires extra code and management of the files themselves. It also requires a FS server in an architecture with multiple node instances; and it requires multiple servers with partitioning logic if the data is too large to fit in a single FS server.
Whether you go with using the local FS, a single FS server or a full-fledged scalable FS server, the idea is to use S3 as the ultimate storage of the files, with its guarantee of durability. When deleting files, it is recommended not to delete them from S3 directly, but to move them to a separate bucket with a 30-day deletion lifecycle, so they are automatically deleted later; this helps in case of unwanted deletions. The same pattern is useful when an object is overwritten.
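A sketch of that deletion pattern, assuming an initialized `s3` client; the bucket names are hypothetical, and the trash bucket is assumed to have a 30-day deletion lifecycle configured:

```javascript
var MAIN_BUCKET  = 'myapp-files';
var TRASH_BUCKET = 'myapp-trash';

// Instead of deleting outright, first copy the object into the trash bucket
// (where a lifecycle rule deletes it 30 days later), then remove the original.
var safeDelete = function (s3, key, cb) {
   s3.copyObject ({Bucket: TRASH_BUCKET, CopySource: MAIN_BUCKET + '/' + key, Key: key}, function (error) {
      if (error) return cb (error);
      s3.deleteObjects ({Bucket: MAIN_BUCKET, Delete: {Objects: [{Key: key}]}}, cb);
   });
}
```

Note the ordering: the copy happens first, so a failure midway leaves the file in both buckets rather than in neither.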
The application logic lives in node. Node serves all incoming requests through an HTTP API.
An HTTP API allows us to serve, with the same code, a user-facing UI (either web or native), an admin interface, and programmatic access by third parties. We get all three for the price of one, as long as the web clients (user-facing and admin) perform client-side rendering. If, however, your application does server-side rendering, adding an HTTP API might entail extra work. I personally embrace a 100% client-side rendering approach, but that's outside the scope of backend lore.
The data transmitted between client and server is of two types:
- Files: HTML, images. Clients can also upload files through requests of type `multipart/form-data`.
- JSON data. This is the preferred format for most interactions.
In my experience, a non-trivial application which contains its own application logic plus a set of functions for interacting with the FS, S3, redis and its own auth logic can be written in 2000-3000 lines of code. In my eyes, this warrants keeping the server as a single centralized concern. 2-3k lines of code can hardly be called a monolith.
Before you rush to embrace the microservices paradigm, I offer you the following rule of thumb: if two pieces of information are dependent on each other, they should belong to a single server. In other words, the natural boundaries for a service should be the natural boundaries of its data.
In projects where I own and maintain the code, I prefer çiçek - although the library is begging for a rewrite. In other cases, I use express with the addons body-parser, busboy-body-parser and cookie-parser.
I only use a single environment variable, to specify the name of the environment. Typical options are `dev` and `prod`. When no environment is provided, I default to `dev`. The environment can be passed as an argument when running the server (`node server ENV`) and then retrieved from the server code as follows: `var ENV = process.argv [2];`.
I store the configuration for the server in a js file, usually named `config.js`. In case the server is open source code, I create a separate file `secret.js` to store credentials and secrets. Both of these files will be referred to henceforth as `config`.

In case `secret.js` is present, be sure to add it to your `.gitignore` file!

Typical things that go in `config`:
- Port where node listens.
- Name of the cookie where the session is stored.
- Redis configuration.
- Cookie secret (for signing cookies).
- AWS credentials.
- Whitelisted users/admins.
The config is simply required as `var CONFIG = require ('./config.js');`.
To specify configuration items per environment, rather than have separate objects, it's better to split only those properties that change from environment to environment. For example:
```
var ENV = process.argv [2];

module.exports = {
   port: 1111,
   bucket: ENV === 'prod' ? 'prodbucket' : 'devbucket'
}
```
I find it useful to have a single function, `notify`, to report errors or warnings. This function acts as a funnel through which pass all the events that may concern us. It can then do a variety of things, like printing to a log, sending an email, or sending the data to a remote logging service.
I suggest invoking the notify function in the following cases:
- Master process fails.
- Worker process fails.
- Database fails or is unreachable.
- Server starts.
- Client transmission errors.
- A client receives a reply with an error code (>= 400).
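A minimal sketch of such a funnel - the `priority` values are illustrative, and `sendMail` is a hypothetical function standing in for whatever alerting channel you use:

```javascript
// Funnel every noteworthy event through one function. Everything is logged;
// only high-priority events would trigger an email or a remote logging call.
var notify = function (priority, message) {
   var entry = {t: new Date ().toISOString (), priority: priority, message: message};
   console.log (JSON.stringify (entry));
   // if (priority === 'error') sendMail ('ops@DOMAIN', 'server error', message);
   return entry;
}

// notify ('error', 'redis is unreachable');
// notify ('info',  'server started');
```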
Lately I'm not relying on local logs anymore; instead, I send all logs to a separate log server that stores them as permanent files and makes them accessible and searchable through a web admin. A saner person would probably use an established logging service. As an attentive reader pointed out, we're redirecting the standard output of node to `/tmp`, which means that the logs won't be preserved after a restart. This is by design: it removes the risk of logs filling up the disk (which happens way more often than usually expected - enough to require another moving part, log rotation). The other advantage of not having local logs is that you don't have to ssh into different servers to see what's going on. This approach, however, requires that all important data be logged, including (perhaps foremost) uncaught exceptions and error stacktraces.
- `admin.js`: admin client, optional.
- `client.js`: user UI.
- `config.js` & `secret.js`: configuration.
- `deploy.sh`: script for deploying the app.
- `mongroup.conf`: for keeping the app running.
- `package.json`: dependency list.
- `provision.sh`: for creating new instances.
- `server.js`: with all server code.
- `test.js`: the test suite.
Contained in a single markdown file, `readme.md`. It usually contains the following:
- Purpose of the project and general description.
- Todo list.
- Routes, including description of behavior and payloads.
- DB structure.
- FS structure, if any.
- Further configuration instructions, if any.
- License.
When provisioning a new server, upgrade all the packages first.
```
ssh $HOST apt-get update
ssh $HOST DEBIAN_FRONTEND=noninteractive apt-get upgrade -y --with-new-pkgs
```
I recommend installing `fail2ban`, which will rate-limit IPs that perform invalid SSH logins:

```
ssh $HOST apt-get install fail2ban -y
```
To provision a node server, we merely need to install node plus mon and mongroup.
```
# install node
ssh $HOST "curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -"
ssh $HOST apt-get install nodejs -y

# install mon & mongroup
ssh $HOST apt-get install build-essential -y
ssh $HOST '(mkdir /tmp/mon && cd /tmp/mon && curl -L# https://github.com/tj/mon/archive/master.tar.gz | tar zx --strip 1 && make install && rm -rf /tmp/mon)'
ssh $HOST npm install -g mongroup
```
To use mongroup to keep the server running, you need a file `mongroup.conf`, which can look like this:

```
logs = /tmp/MYAPP/logs
pids = /tmp/MYAPP/pids

node server ENV
```

where `ENV` is `dev` or `prod`.
You can place most of these commands in a single `provision.sh` file, or create different files for provisioning different machines. To me, the important thing is to have a complete and unambiguous set of commands that will run successfully and which represent the entire configuration needed on the instance. There should be no unspecified or unwritten steps for provisioning an instance.

If you want to store the logs of your application within the server itself, please change the log path in `mongroup.conf` to a location other than `/tmp`. If you do this, I highly recommend you set up log rotation. For more on logging, please see the Notification section.
As long as you fully control the remote instances/servers, I don't see a need for running the application within Docker or any other sort of virtualization. Since the full environment is replicable (starting with a given OS, you run a certain number of commands to reach a provisioned instance), there's no idempotence benefit to virtualization. And by running the service on the host itself, everything else is simpler. I can only recommend virtualization if you're deploying to environments you don't fully control.
If you are running an Architecture of type B and don't mind a couple of minutes of downtime per week, a way to keep your instances fresh is to run a script (which I call `refresh.sh`) every week, which will 1) upgrade software packages; 2) stop your app gracefully; 3) stop redis gracefully; 4) restart your instance.
Whether this is strictly necessary is debatable. At a superficial level, it's not inviting to ssh into an instance that greets you with the message *system restart required*. Even in contexts where software is written with quality in mind, running processes tend to become more fragile over time - and this includes the OS itself, particularly in the case of any Linux (I suspect that OpenBSD might be more likely to run forever without issues). Restarting the processes periodically might be a crude but effective way to reset the "aging" of a runtime. The internet itself is resilient because it expects failures instead of trying to prevent them; in this vein, having systems that can recover from reboots feels like a step in the right direction. I'm open to debate here, particularly if you have practical experience in this regard.
Warning: before showing you `refresh.sh`, please bear in mind that I don't do this on the instance(s) where I run the production database of an application with a significant amount of traffic & data - that is, I don't do this on the data servers of Architectures C and D. If you do this, you should also turn off all your nodes beforehand, to avoid serving requests with 500 errors or triggering alerts - ignoring alerts is a very dangerous practice. In general, a coordinated and automatic outage sounds daunting and I have never implemented one. I don't have a good answer to this problem; all I know is that relying on the production database server/instance never being updated or restarted is also a fragile approach - if only because you can never rely on your instance never failing. My approach so far is to perform these operations manually, every couple of months, with adrenaline pumping, after checking the backups; they always entail downtime. I hope to have a better solution in the future.
In larger setups with multiple nodes, you can use this approach on the instances running node without experiencing downtime, as long as you don't restart all the node instances at the same time.
OK, enough warnings; here goes `refresh.sh`:
```
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
apt-get update && DEBIAN_FRONTEND=noninteractive apt-get upgrade -y --with-new-pkgs && apt-get autoremove -y && apt-get clean
cd /path/to/yourapp && mg stop
service redis-server stop
shutdown -r now
```
Notice that the app is stopped before redis is stopped, to avoid serving requests with 500 errors and to not trigger your alerts indicating that the database is unreachable. If you're running this script on an instance that only runs node, you can omit the fourth line since there's no redis there.
For this to work, your instance needs to start your application automatically. For this, another script (which I call `start.sh`) is needed:

```
cd /path/to/yourapp && mg restart
```
Then, you can put two entries in the crontab. You must replace `M`, `H` and `D` with the minute, hour and day of the week when you want to perform the refresh, and also change the paths to indicate where the scripts are. Before leaving this in place, execute the script yourself to make sure it's working as intended!

```
M H * * D /path/to/refresh.sh
@reboot /path/to/start.sh
```
Splitting the data in a database into parts is called sharding. That name gives me chills; a shard is defined as "a piece of broken glass or pottery, especially one found in an archaeological dig", and in my mind's eye it evokes the sight of broken glass, which is pretty much the last thing I want to see when talking about the main repository of information of a large application with potentially thousands or millions of users. For this reason, I will use the term partitioning (which usually has a FS connotation); and instead of shards, I will speak of nodes. If you're offended by this, you very likely know way more about distributed databases than I do and should not pay attention to what I'm saying anyway.
Notice that partitioning is unrelated to whether you have read-only replicas of a database. For example, I wouldn't consider an architecture with a single redis master and a single redis replica attached to it to be an architecture with a partitioned database. If you had two redis master databases which both can process writes, then you'd be looking at a partitioned architecture.
My experience with partitioned databases is slim - I have only worked once in such a context. We did a manual type of partitioning, where we stored certain types of data on one database and other types of data on the other (this is called vertical partitioning). The reason for this was the sheer size of the data which, it being Redis, was not easy to fit in the memory of a single server.
Since I can only talk about my experience, I'm unable to suggest how to perform DB partitioning, especially if you're using relational databases.
Most of the time, there are standard solutions for partitioning that are intimately linked to the database you're using, and developed and promoted by the database vendors. Using these is probably much saner than rolling your own partitioning. In any case, before adopting any partitioning scheme, I do suggest understanding the answers to the following questions:
- Does the database partition itself automatically, or is it done manually? If manually, how easy is it to do?
- Is it as easy to reduce scale (merge partitions) as it is to create them?
- Is there downtime when scaling up/down? Or perhaps performance degradation? Is there any chance of losing consistency while this takes place?
- Is data distributed randomly across partitions? Can you decide where to store data?
- Does the partitioning compromise the usual guarantees that your database provides (atomicity, consistency, isolation, durability)? Does it render transactions impossible?
- In case of a node failing, is there a total write/read failure, a total write failure, a partial write failure, or possible inconsistencies later?
- How can you backup and recover your entire cluster of databases from scratch? Can you shut down the system cleanly and then recreate it somewhere else?
My experience with changing the structure of production databases is entirely within the NoSQL orbit (mostly Redis and a little bit of MongoDB). Whenever I worked with relational databases, I wasn't the one in charge of performing database migrations (although I have written a couple).
Relational databases have quite sturdy mechanisms for performing updates to the schema of a table. NoSQL databases generally do not, so things can go irreversibly bad much more easily. For this reason, I have a protocol that I follow when modifying the structure of a production database in a running system. Generally, this entails no downtime, unless the change I'm performing can affect the data's consistency - in which case, the API needs to be turned off and the changes made while the database is not receiving any other requests.
This protocol is probably overkill for relational databases, though I'm not sure; if in doubt, please go ask someone who has had to do this type of thing with relational databases. If you're running updates on a NoSQL database, this might come in handy.
- Write the script that performs the changes to the database. In my case, it is usually a single function that performs the necessary changes in the structure of the data.
- Test it against a local or dev environment with some data. Get it working reasonably well.
- Use a recent production backup and restore it either locally or in a dev environment that can hold all the data.
- Debug the script against the replica database with production data you created in #3. You might have to recreate the data several times, and this is fine; think that each time you do it on a dev environment, it is sparing you from committing that very mistake on a production database which thousands or millions depend on.
- Add automated sanity checks to the end of your script, and make sure that it all looks good after the script runs. Make a final clean run and check that the script does it all correctly in one go, without further intervention on your part. Eschew manual touch-ups, since those are very easy to forget later.
- Commit your script (if you haven't done it already) somewhere in your repo, so you know exactly what script you ran.
- If your script affects the consistency of the data, you might have to turn off all traffic.
- Run a full backup of the production database.
- Check that the backup you just created can be restored on the dev environment you created in #3. If it takes a few minutes, so be it. You want to trust this backup.
- Take a deep breath. Run the script against the production database. While the script runs, consider whether this adrenaline rush isn't entirely unrelated to your choice of working with backends. When the script is done, be slow, systematic and make sure to interpret the sanity checks correctly.
- Something failed? Restore the production database from the backup you created in #8. Don't freak out. This is exactly why you made the backup in the first place.
- Nothing failed! Hurray! Turn on the APIs again. Make sure the traffic is flowing smoothly. Why do you always have to be so paranoid?
This process is the single most stressful controlled backend task that you will probably encounter. I also find it exhilarating.
I normally deploy through a bash script that performs the following:
- Determines the IP depending on the environment.
- Asks for an extra parameter "confirm" if deploying to `prod`.
- Compresses the entire repo folder, ignoring the `node_modules` folder and the `.swp` temporary files generated by Vim (my text editor).
- Copies the compressed folder to the target server.
- Uncompresses the folder.
- Adds the environment to `mongroup.conf`.
- Runs `npm i --no-save` to install/update the dependencies.
- Restarts the server through the command `mg restart`.
- Deletes the compressed folder from the server and my local computer.
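The compression step above can be sketched like this (the paths and archive name are hypothetical; the real script also picks the IP, copies the archive with scp and restarts the server remotely):

```shell
# Sketch of the compression step of the deploy script. REPO stands in for the
# actual repo folder; here we build a throwaway one for illustration.
set -e
REPO=$(mktemp -d)
ARCHIVE=$(mktemp -u).tar.gz
mkdir -p "$REPO/node_modules/somedep" "$REPO/src"
echo "server code" > "$REPO/src/server.js"
echo "vim swap" > "$REPO/src/.server.js.swp"
# Compress the repo, excluding node_modules and Vim .swp temporary files
tar -czf "$ARCHIVE" -C "$REPO" --exclude=node_modules --exclude='*.swp' .
tar -tzf "$ARCHIVE"
```

Excluding `node_modules` keeps the archive small; the dependencies are reinstalled on the server with `npm i --no-save` anyway.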
To run the server locally, after installing the dependencies with `npm i --no-save`, you can merely run `node server`.
I eschew automatic code deployment triggered by a commit to a certain repository branch because:
- Deploying manually with a script takes only a few seconds.
- I like to see if the deployment is indeed successful at that very moment, instead of finding out through an email later.
- Committing and deploying can reasonably be considered separate processes.
Identity is managed by the server itself, without reliance on external services. User data is stored in the database.
It is absolutely critical to hash user passwords. I recommend using bcryptjs.
Side note: wherever possible, I use pure js modules and avoid those that have C++ bindings. The latter sometimes require native packages to be installed and in my experience tend to be more fickle. js code only needs the runtime and its dependencies to run, and no compilation.
Cookies are used to identify a session. The session itself is a quantity that is cryptographically hard to guess. The cookie name is predetermined in the configuration (for example, `myappname`); its value is the session itself. The session doesn't contain any user information - all user information is stored in the database.
Recently I have enabled the `HttpOnly` attribute on the cookie, so that it is not accessible to client-side js. The main reason for this is that if the app is ever the victim of XSS through malicious user content that I've failed to filter, users won't be fully vulnerable (otherwise the session, which is a temporary password-equivalent, would be immediately available to the attacker).
To enable CSRF prevention, I create a CSRF token bound to a particular session; this token is sent by the server on a successful login, and also through an endpoint where the client can request a CSRF token - this endpoint also serves as a way for the client to ask the server whether it's currently logged in. The CSRF token is sent along with every `POST` request (actually, any request that could perform changes). A CSRF token might not be necessary if you're not supporting older browsers - for a discussion of alternatives, see here and here.
Cookies are also signed. The session within the cookie should not be easily guessable, so signing it doesn't make it harder to guess. The reason for signing them, however, is that this allows us to distinguish a cookie that was valid but has now expired from an invalid cookie. In other words, we can distinguish an expired session from an attack without having to keep all expired sessions in the database.
Here's how I deal with the cookie/session lifecycle:
- I set cookies to expire in the distant future (through the `Expires` attribute), and then let the server decide when a cookie has expired; when an expired session is received (as determined by the server), the server replies with a 403 and orders the browser to delete the cookie. In this way, I don't have to guess when a cookie will expire and the server retains full control over its lifecycle.
- Sessions have an expiry period (it could be n hours or n days); after a session hasn't been used for that period of time, it expires and is removed. Since redis has a built-in mechanism for expiring keys after a period of time, this happens automatically without extra code.
- Every time a session is used, it is automatically renewed. This avoids a user being kicked out of the session while they are using it.
Besides `HttpOnly` and `Expires`, I set the `Path` attribute of the cookie, since otherwise the cookie doesn't seem to be preserved when all tabs of the browser are closed.
In both çiçek and express, routes are executed in the order in which they are defined. A request might be matched by one or more routes, executed in order. Any route has the power to either "pass" the request to the next matching route, or instead to reply to that request itself. It can't, however, do both.
A few routes are executed for every incoming request. They run in the following order:
- Avant route: runs at the beginning and places a `log` object into the `response`, including a timestamp, for profiling and debugging purposes.
- Public routes (those not requiring the user to be logged in).
- Auth routes.
- Gatekeeper route: this route checks for the presence of a session in the cookie; if no cookie is present, the request is replied with a 403 code and we possibly `notify` ourselves. If the session is invalid, checking its signature can determine whether it is an expired session or a malicious attempt. If the session is valid, the associated user data is retrieved from the database and placed within the request object; in this case, this route invokes `response.next` to pass the request to the following route.
- CSRF route: checks for the CSRF token in the body of every `POST` request.
- Private routes (those requiring the user to be logged in).
- Admin gatekeeper route: any request looking to match a route after this one must be an admin, otherwise a 403 is returned.
- Admin routes.
The following are the typical auth routes:
- `POST /auth/signup`
- `POST /auth/login`
- `POST /auth/logout`
- `POST /auth/destroy` (eliminates the account).
- `POST /auth/verifyEmail`. This route is optional and requires sending emails.
- `POST /auth/recover`. This route requires sending emails.
- `POST /auth/reset`.
- `POST /auth/changePassword`. This route possibly requires sending emails (to notify the user).
For systems with few and predefined users, it's easiest to put a list of valid email addresses in `CONFIG`; then, when the signup logic is triggered, the system only allows the whitelisted addresses to create an account. This pattern allows for reusing the code of a more normal application with open signup, without backdoors and without having to create the user accounts with a seed script.
Admin users can be implemented in two ways: either by setting a flag in the database manually (or through admin routes to this effect), or by reading a list of admin users from `config`. In either case, admin users still use sessions like normal users.
The master node uses cluster to fork child processes. It is recommended to fork one process per CPU - you can get the number of CPUs with `require ('os').cpus ().length`. By doing this, you'll have n node processes serving requests, one per CPU.
In case a worker node fails, it is recommended to use the `uncaughtException` handler to `notify` the error, like this:

```javascript
if (! cluster.isMaster) return process.on ('uncaughtException', function (error) {
   notify (error);
   process.exit (1);
});
```
It is important also to exit the worker process in case it suffered an uncaught exception, since the exception potentially renders the process unstable.
A few tasks, besides the API, can be done by the master process. These include:
- Uploading redis dumps to S3 periodically.
- `notify` if CPU or RAM usage is dangerously high.
- Performing DB consistency/cleanup checks, particularly in `dev`.
I usually write these functions at the end of the `server.js` file, after the routes.
To keep the code fluid and short, whenever I'm replying to a request with JSON data, I use a `reply` function (which comes already defined in çiçek - if I'm using express, I define it at the top of `server.js`) of the form `reply (response, CODE, body, [headers])`.
I consider it indispensable that every route that receives data should perform a deep validation of it. If a payload breaks a server, whether inadvertently or maliciously, it is the server's fault. This is embodied in the concept of auto-activation.
For performing validations, I use a combination of teishi and custom code. Checks are usually done first synchronously, and then sometimes asynchronously against data stored in the database (which necessarily has to be reached asynchronously).
A global type of validation that happens before the actual routes (as middleware) is to check that requests with an `application/json` `content-type` header contain a body that is valid JSON.
I usually just use the `GET` and `POST` methods when interchanging JSON. Often, `PUT` and `PATCH` can be dealt with by almost the same code as `POST`.

When retrieving data (`GET`), the query (request payload) cannot be larger than 2048 characters. In cases where this is not enough, I send the queries through `POST`. Since `POST` requests can be cached with the right response headers, this entails no performance issue.
With apologies to the Church of REST!
I use ETags; to compute this cache header, I use the actual response body if it is JSON, and the modification time and file size in the case of a static file. Avoiding dates in caching has made my code and debugging much, much simpler.
Since we're writing HTTP APIs, it makes sense to test them through HTTP requests. I use hitit, which is a tool that triggers HTTP requests.
My style of testing is end to end - the API is tested from the outside only. Any internals are tested through the outcomes that they provide to the routes that use them. If an internal is convoluted enough, it should be moved to its own module and have its own tests, but so far I haven't seen the need for this - code seems to become either an application or a separate tool.
Whenever a test fails, the entire suite fails. No errors or warnings are tolerated. The test suite either passes or it does not.
The tests use no backdoors and are indistinguishable from client requests. The only allowance I make for them in server code is to not send emails when the environment is local.
I further simplify things by not specifying a test database, and only running the tests locally. Since my development environment is very similar to the environment where the code runs, this seems to work well.
The tests should clean up after themselves using requests - i.e., by deleting the test user just created through a proper route.
Another advantage of testing through HTTP is that the test payloads can work as a complementary documentation of the payloads sent and received by the server for each route.
I recommend not using random quantities when sending payloads, to make the tests replicable.
If you want to break a given route, it is possible to write a function that given a schema, will generate invalid payloads. This is a sort of brute-force way of testing the validation of your route. Here's an example.
Tests are run manually with `node test`, while the server is already running.
First, I write the server routes along with their documentation.
Then, I write the tests.
For each route, first I try to break it. When I can't break it, I send different payloads that should cover all its main cases.
Once the tests are passing, the backend is ready. Time to write the client; and any bugs you'll find will very likely be in the client, since the server is debugged. In this way, errors don't have to be chased on both sides of the wire.
If you don't want to install node or redis on your host machine, or you're not running Ubuntu, you can use Vagrant to set up a local Ubuntu VM on which you can develop your application. Below is a sample `Vagrantfile` to set up an Ubuntu VM with redis & node installed.
Please note that this will be useful only for your local environment, not to provision remote instances.
```ruby
Vagrant.configure("2") do |config|
  # Use Ubuntu 18.04
  config.vm.box = "ubuntu/bionic64"
  # Use 4GB of RAM and 2 cores
  config.vm.provider "virtualbox" do |v|
    v.memory = 4096
    v.cpus = 2
  end
  # Map server port (if it's not 8000, replace it with the port on which your node server will listen)
  config.vm.network "forwarded_port", guest: 8000, host: 8000
  config.vm.provision "shell", inline: <<-SHELL
    # Always login as root
    grep -qxF 'sudo su -' /home/vagrant/.bashrc || echo 'sudo su -' >> /home/vagrant/.bashrc
    # Update & upgrade packages
    sudo apt-get update
    sudo DEBIAN_FRONTEND=noninteractive apt-get upgrade -y --with-new-pkgs
    # Install redis
    sudo apt-get install -y redis-server
    # Install node.js
    curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
    sudo apt-get install -y nodejs
    # Other provisioning commands you may want to run
  SHELL
end
```
The essential vagrant commands are:
- `vagrant up` to create the environment (if it doesn't exist) or to start it (if it already exists).
- `vagrant ssh` to enter the environment (once it's created).
- `vagrant halt` to shut down the environment.
- `vagrant destroy` to destroy the environment (WARNING: this will erase all the files within your environment!).
I have recently decided to take some common parts of applications that I've been building and provide them as services in the future. This should have the following advantages:
- Reduce code size by ~500 lines and tests by ~250 lines.
- Allow for unlimited scaling.
- Provide a unified admin to see different aspects of the application.
The services I plan to extract are:
- Identity: store identities and cookies; server code should only consist of calling these functions within the routes, plus sending emails on a couple of routes. Rate limiting can also be piggybacked here.
- Files: as a service, backed by S3, served from fast, node-powered servers. Will also allow for tailing and filtering files for fast querying.
- Redis: redis will be accessible as a service through HTTP, backed by a consistent, configurable and incremental partitioning model.
- Logging: `notify` will send data to a unified service. Will allow for sending emails in certain situations.
- Beat: track CPU and RAM usage in a dashboard. Will allow for sending emails in certain situations.
- Stats: unified statistics for measuring different things.
Once these are matured, they will be rolled out publicly. But the road ahead is long indeed!
This document is written by Federico Pereiro (fpereiro@gmail.com) and released into the public domain.