An easy way for anyone to get an overview of a large number of photos from different vendors and sources.
The repo now uses yarn workspaces (https://yarnpkg.com/en/docs/workspaces) to handle shared dependencies. This, in my opinion, works well for a microservice architecture where all apps and libs should be self-contained, but with the benefit of sharing `node_modules` to prevent "module explosion".
It works like this: the root `package.json` defines an array of workspaces, and each workspace folder has its own `package.json`. When you run `yarn` to install dependencies anywhere in the tree, yarn looks at all the `package.json` files, installs every dependency in a root `node_modules` folder to prevent duplicate copies, and then symlinks the needed modules between apps and libs. If a dependency is found in the repo, it is linked from the repo instead of installed from npm.
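For reference, a minimal root `package.json` using workspaces might look like this (the exact globs and fields in this repo may differ):

```json
{
  "private": true,
  "workspaces": ["apps/*", "libs/*"]
}
```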
This also has the benefit of providing one place where tools that affect the repo as a whole can be installed. These are installed as devDependencies in the root and don't affect any apps or libs. Tools such as `pre-commit` and `prettier` are installed like this.
From the root or inside apps or libs, run `yarn` to install all dependencies.

To add a dependency to the workspace root, run:

`yarn add <name> -W`

The `-W` flag tells yarn to install the dependency in the workspace root and not add it to anything but the root `package.json`.
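For example, adding a formatting tool like prettier as a root-level devDependency could look like this (the package choice is just illustrative):

```sh
# -D puts it in devDependencies, -W targets the workspace root
yarn add prettier -D -W
```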
To add a dependency to a specific app or lib, run the following in that app or lib's folder:

`yarn add <name>`

This installs the dependency in the root `node_modules` and adds it to the app or lib's `package.json`. `<name>` can be an npm module or a module found in the `libs` or `apps` folder.
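For instance, assuming the shared logging lib's package name is simply `logging` (package names are expected to match their folder names, see the Nix troubleshooting note further down), you could add it to an app like this:

```sh
# Run inside apps/<your-app>; if "logging" resolves to the local lib in libs/,
# yarn links it from the repo instead of fetching it from npm
yarn add logging
```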
The Docker images are now built using Nix.
The exact procedure depends on what OS you use, since the Docker images need to be built for a Linux target.
On Linux you're in luck! Just install Nix following the instructions at https://nixos.org/nix/. The short version: run
curl https://nixos.org/nix/install | sh
You also need to install Docker, but the exact procedure for that depends on your distro.
On macOS:
- Install Docker for Mac: https://www.docker.com/docker-mac
- Install Nix (just like on Linux): `curl https://nixos.org/nix/install | sh`
- Set up a [remote Linux builder](https://nixos.wiki/wiki/Distributed_build), which is easiest done using https://github.com/LnL7/nix-docker#running-as-a-remote-builder.
If you don't want to use Nix you can start external services such as the database and queue with docker-compose:
docker-compose -f docker-services-compose.yml up --build
If you are using VS Code you can then:
- Create the database by running the task `Migrate local database`
- Run all apps by selecting the debug configuration called `All`. With this setup it is also possible to set breakpoints and easily debug the code.
- Start the frontend by running `yarn start:dev` in the web-frontend folder
Nix doesn't run natively on Windows, but runs fine (aside from microsoft/WSL#2395) under the WSL. Note that Docker for Windows only runs on Windows 10 Pro and Enterprise.
- Install WSL:
  - Go to Control Panel\Programs\Programs and Features
  - Click "Turn Windows features on or off"
  - Enable "Windows Subsystem for Linux"
- Install your favorite WSL distro (I've only tested this using Ubuntu). For instance, install Ubuntu from Microsoft Store: https://www.microsoft.com/store/productId/9NBLGGH4MSV6
- Install Docker Toolbox (if using Windows 10 Home) or Docker for Windows (if using Windows 10 Pro or Enterprise)
- Install Nix by running this in WSL:
sudo mkdir -p /etc/nix
echo "use-sqlite-wal = false" | sudo tee -a /etc/nix/nix.conf
curl https://nixos.org/nix/install | sh
- Enable remote Docker access (if using Docker for Windows). See: https://medium.com/@sebagomez/installing-the-docker-client-on-ubuntus-windows-subsystem-for-linux-612b392a44c4
- Set DOCKER_HOST inside WSL to point to your Docker instance (localhost:2375 when using Docker for Windows); see the example below
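For example, you could add something like this to your shell profile inside WSL (the port assumes Docker for Windows with "Expose daemon on tcp://localhost:2375" enabled):

```sh
# Make the Docker CLI inside WSL talk to the Docker for Windows daemon
export DOCKER_HOST=tcp://localhost:2375
```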
Run `./docker-build.sh` to build and `docker load` the images, and then run `docker-compose up --detach db && docker-compose up` to start everything.
All our CI builds are cached, which you can use to dramatically speed up your first-time build. To enable the cache, download Cachix and then run `cachix use photogarden`.
Remote dependencies (from npm) are automatically picked up from `yarn.lock` and just require a rebuild. However, intra-workspace dependencies need to be specified in `workspace.nix`, in the `workspaceDependencies` field.
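As a purely illustrative sketch (the real `workspace.nix` in this repo may be structured differently), an entry for a hypothetical app depending on two of the local libs might look something like this:

```nix
# Hypothetical sketch: an app listing the workspace libs it depends on
{
  my-app = {
    workspaceDependencies = [ "logging" "db" ];
  };
}
```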
Projects in the Yarn workspace (`libs/*` and `apps/*`) are automatically picked up by the Nix build system. A Docker image is built for each subfolder in the `apps` folder.
By default Nix will store everything you have ever built, as well as all dependencies. As you can imagine, this will grow pretty quickly. To get rid of everything that isn't currently required, you can run `nix-collect-garbage -d`, which runs a mark-and-sweep garbage collection on the Nix store.

You can also run `nix optimise-store`, which will replace identical files with hardlinks. This is a bit slower than `nix-collect-garbage` and usually not quite as effective, but it leaves you with a still-populated cache, keeping the next build fast.
Keep in mind that on Mac you'll want to GC both your host Nix and your build slave regularly!
Typical error message:

`error: opening file '/home/teo/Documents/photo-garden/apps/blah/package.json': No such file or directory`

This happens when there are folders inside `apps` or `libs` that aren't packages, usually because the package was renamed or removed. To fix this, run `git clean -idx apps/blah` (in this example) to remove the remaining `node_modules`, and then rebuild.
Typical error message:

`error: attribute 'db-migrations' missing, at /path/to/docker.nix`

This happens when the Yarn package has a different name than its folder. Open its `package.json` file and verify that the `name` attribute matches the name of the directory it is located in.
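For the error above, for example, `apps/db-migrations/package.json` would need a matching `name` field (other fields omitted here):

```json
{
  "name": "db-migrations"
}
```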
Docker can be used to run a full dev environment. Just run (after following the setup procedure in the previous section):
./docker-build.sh # Only if there are new dependencies since the last rebuild
docker-compose up --detach db
docker-compose up
Then go to http://localhost:9000/ and mark the bucket public.
The following services will be exposed to your machine when docker-compose is running:
- http://localhost:3000 - The gateway app
- http://localhost:3001 - The photo app frontend (web-frontend)
- localhost:4222 - Nats streaming server
- http://localhost:8222/ - Nats streaming server monitoring
- http://localhost:9000/ - Minio file storage server (Amazon S3 clone)
  - Access key: `not-so-access`
  - Secret key: `not-so-secret`
- localhost:5433 - Postgres db. Exposed on your machine to allow easy inspection.
  - Connect by running `psql --host=localhost --port=5433 --user=postgres`
If you want to reset the db (to run new migrations) and queue (to clear saved state) you can run:
docker-compose down
Prettier is now applied automatically to all commits to keep code styling consistent. This also removes the need to care about formatting your code. Formatting is applied to `*.{js,json,md,css}` files when doing a `git commit`.
Will probably add automatic eslinting as well.
This is done using the npm module `pre-commit`. The module runs the commands specified in the `pre-commit` section of `package.json`.
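For illustration, the hook configuration in `package.json` generally looks something like this (the actual script names and globs used in this repo may differ):

```json
{
  "scripts": {
    "format": "prettier --write \"**/*.{js,json,md,css}\""
  },
  "pre-commit": ["format"]
}
```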
Libs are common dependencies that can be shared and used by all apps. They are implemented as npm modules so that yarn can install them from other `package.json` files; there is no need to publish them to be able to use them during development.
- app-name - README
- config - README
- logging - README
- communication - README
- db - README
- dropbox-api - README
These are the apps available right now. All apps have been refactored to share the same kind of structure. They have also been refactored to utilize all the common libs.
To create a new app, just copy one of the existing ones and modify its `package.json`. Then run `yarn` in the folder or in the root.
To include the app in the Docker setup you must also add a section to the `docker-compose.yml` file. Copy one of the other apps found there and tweak it to your needs, as in the sketch below.
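As a rough sketch only (copy a real service from `docker-compose.yml` for the actual fields), a new entry under `services:` might look something like this:

```yaml
# Hypothetical service entry for a new app
my-new-app:
  build: ./apps/my-new-app
  depends_on:
    - db
```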
The photos REST API like we had before, but with queue handling broken out into another app.
A backend app that downloads all images imported from Dropbox. For now it also normalizes the photos.
A backend app that downloads all images imported from Google Drive.
A backend app that normalizes all images from Google Drive before importing them to the db.
A backend app that consumes all newly imported photos and inserts them into the database.
The current gateway that serves the frontend. Basically the same as before, although a lot of refactoring has been done to utilize the new libs.
The PostCSS config was moved from `package.json` to `.postcssrc.json` for better compatibility; couldn't get it to work otherwise.
These are services needed in Docker that are neither a lib nor an app.
Postgres Docker container with the initial SQL schema.
Automatic migration handling for the db using SQL migration files. Just add a new migration file here using the specified format and recreate your Docker container to have the migration applied.
These are the current queues available:
All images imported from Google Drive are published to this queue with the following format:
{
"user": string,
"photo": GoogleDrivePhotoResource
}
All photos that have been downloaded are published to this queue to indicate that further handling of the file is now possible.
Message format:
{
"id": string,
"extension": string
}
All photos that have been normalized will be published to this queue.
Message format:
{
"owner": string, // user guid
"url": string, // Url to thumbnail/base64 thumbnail
"mimeType": string,
"provider": string, // E.g. "Google" for google drive
"providerId": string,
"original": Object
}
`original` is the same structure as the raw item from the provider, e.g. `GoogleDrivePhotoResource` if imported from Google Drive.
- Deployment not fixed yet
- Refactor unleash to an app