First things first: I'm glad you're reading this! Join our Discord to chat with other people in the Atomic Data community. If you encounter any issues, add them to the GitHub issue tracker; the same goes for feature requests. PRs are welcome, too! Note that opening a PR means agreeing that your code will be distributed under the MIT license.
If you want to share some thoughts on the Atomic Data specification, please drop an issue in the Atomic Data docs repo. Check out the Roadmap if you want to learn more about our plans and the history of the project.
- Table of contents
- Running & compiling
- Git policy
- Testing
- Performance monitoring / benchmarks
- Responsible disclosure / Coordinated Vulnerability Disclosure
- Releases, Versioning and Tagging
TL;DR: Clone the repo and run `cargo run` from each folder (e.g. `cli` or `server`).
- Run `cargo run` to start the server.
- Go to `browser`, run `pnpm install` (if you haven't already), and run `pnpm dev` to start the browser.
- Visit your `localhost` in your locally running `atomic-data-browser` instance (e.g. `http://localhost:5173/app/show?subject=http%3A%2F%2Flocalhost`).
- Use `cargo watch -- cargo run` to automatically recompile `atomic-server` when you update JS assets in `browser`.
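Putting the steps above together, a minimal sketch for local development. It assumes two terminal windows, the repository root as the working directory, and (for the optional last step) that `cargo-watch` is installed:

```sh
# Terminal 1: build and start atomic-server
cargo run

# Terminal 2: start the data browser
cd browser
pnpm install   # only needed the first time
pnpm dev       # serves the browser at http://localhost:5173

# Optional, instead of the plain `cargo run` above:
# recompile the server automatically on changes (requires `cargo install cargo-watch`)
cargo watch -- cargo run
```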
This project is primarily being developed in VSCode. That doesn't mean you have to use it too, but if you do, you're less likely to run into issues.
- Tasks: the `/.vscode` directory contains various `tasks` (open the command palette => search "run task").
- Debugging: install the `CodeLLDB` plugin and press F5 to start debugging. Breakpoints, inspect... the good stuff.
- Extensions: that same directory will give a couple of suggestions for extensions to install.
There are `Earthfile`s in `browser` and in `atomic-server`. These can be used by Earthly to build all steps, including a full docker image.
- Make sure `earthly` is installed.

```sh
earthly --org ontola -P --satellite henk --artifact +e2e/test-results +pipeline
earthly --org ontola -P --satellite henk --artifact +build-server/atomic-server ./output/atomicserver
```
- Use the `mold` linker: create a `.cargo/config.toml` and add `[build] rustflags = ["-C", "link-arg=-fuse-ld=mold"]` (see the sketch below).
- Note: this is primarily for development on Linux systems, as mold for macOS requires a paid license.
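As a concrete sketch, the config above can be written like this. It assumes a Linux system with `mold` installed and a toolchain recent enough to accept `-fuse-ld=mold`; adjust the flag if your setup differs:

```sh
# creates .cargo/config.toml in the repository root
# (merge by hand if the file already exists)
mkdir -p .cargo
cat > .cargo/config.toml <<'EOF'
# use the mold linker to speed up incremental builds (Linux only)
[build]
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
EOF
```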
If you want to build `atomic-server` for some other target (e.g. building for Linux from macOS), you can use the `cross` crate, which requires `docker`.

```sh
cargo install cross
# make sure docker is running!
cross build --target x86_64-unknown-linux-musl --bin atomic-server --release
```

Note that this is also done in the `Earthfile`.
- Make sure your branch is up to date with `develop`.
- Open a PR against `develop`.
- Make sure all relevant tests / lint pass.

Create new branches off `develop`. When an issue is ready for a PR, open the PR against `develop`.
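A typical flow, as a sketch (the branch name is hypothetical; adjust to your issue):

```sh
# start from an up-to-date develop
git checkout develop
git pull

# create a feature branch off develop (hypothetical name)
git checkout -b fix/some-issue

# ...commit your work, then bring the branch up to date before opening the PR
git fetch origin
git merge origin/develop
git push -u origin fix/some-issue
# open the PR against develop on GitHub
```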
```sh
# Make sure nextest is installed
cargo install cargo-nextest
# Runs all tests
# NOTE: run this from the root of the workspace, or else feature flags may be excluded
cargo nextest run
# Run specific test(s)
cargo nextest run test_name_substring
# End-to-end tests, powered by Playwright and Atomic-Data-Browser
# First, run the server
cargo run
# now, open a new terminal window
cd server/e2e_tests/ && npm i && npm run test
# if things go wrong, debug!
pnpm run test-query {testname}
```
We want to make Atomic Server as fast as possible. To do this, we have at least three tools: tracing, criterion and drill. There are two ways you can use `tracing` to get insights into performance.
- Run the server with `--trace opentelemetry` and add `--log-level trace` to inspect more events (see the example after the `docker run` command below).
- Run an OpenTelemetry-compatible service, such as Jaeger. See the `docker run` command below, or use the VSCode task.
- Visit Jaeger at `http://localhost:16686`.
```sh
docker run -d --platform linux/amd64 --name jaeger \
  -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
  -p 5775:5775/udp \
  -p 6831:6831/udp \
  -p 6832:6832/udp \
  -p 5778:5778 \
  -p 16686:16686 \
  -p 14268:14268 \
  -p 9411:9411 \
  jaegertracing/all-in-one:1.6
```
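With Jaeger running, both flags from the first step above can be passed to the server; starting it via `cargo run` from the `server` folder is one way to do that (a sketch, adjust to however you normally run the binary):

```sh
# send traces to the local OpenTelemetry collector and log verbosely
cargo run -- --trace opentelemetry --log-level trace
# then inspect the traces in the Jaeger UI at http://localhost:16686
```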
- Use the `tracing::instrument` macro to make functions traceable. Check out the tracing docs for more info.
- Run the server with the `--trace chrome` flag.
- Close the server. A `trace-{unix-timestamp}.json` file will be generated in the current directory.
- Open this file with https://ui.perfetto.dev/ or `chrome://tracing`. This will show you a flamegraph that you can zoom into.
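The whole loop, as a sketch (the flag comes from the list above; run it from wherever you normally start the server):

```sh
# run the server with Chrome tracing enabled
cargo run -- --trace chrome

# ...exercise the endpoints you want to profile, then stop the server (Ctrl+C)

# a trace-{unix-timestamp}.json file is written to the current directory;
# open it in https://ui.perfetto.dev/ or chrome://tracing
ls trace-*.json
```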
We have benchmarks in the `/lib/benchmarks` folder. Make sure there's a benchmark for the thing you're trying to optimize, run it, make your changes to the code, and run it again. You should be able to see the difference in performance.
```sh
# install criterion
cargo install cargo-criterion
# go to the atomic-server root folder - don't run benchmarks in `./lib`
cd ..
# run benchmark
cargo criterion
# or, if that does not work
cargo bench --all-features
```
Drill is an HTTP-level benchmarking tool: it sends a ton of requests and measures how long they take.

```sh
cargo install drill
drill -b benchmark.yml --stats
```
If you encounter serious security risks, please refrain from posting them publicly in the issue tracker. We can minimize the impact by first patching the issue, publishing the patch, and then (after 30 days) disclosing the bug. So please first send an e-mail to joep@ontola.io describing the issue, and we will work on fixing it as soon as possible.
- Commit changes.
- Make sure all tests run properly.
- Test, build and update the `/browser` versions (`package.json` files, see `./browser/contributing.md`).
- Use `cargo workspaces version patch --no-git-commit` (and maybe replace `patch` with `minor`) to update all `Cargo.toml` files in one command. You'll need to `cargo install cargo-workspaces` if this command is not available.
- Publish to cargo: `cargo publish`. First `lib`, then `cli` and `server` (see the consolidated commands below).
- Publish to `npm` (see `browser/contribute.md`).
- Update the `CHANGELOG.md` files (browser and root).
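The cargo side of the list above, consolidated into one sketch (assumes `cargo-workspaces` is installed and you run this from the repository root):

```sh
# bump all Cargo.toml versions in one go
# (use `minor` instead of `patch` when appropriate)
cargo workspaces version patch --no-git-commit

# publish the crates in dependency order: lib first, then cli and server
(cd lib && cargo publish)
(cd cli && cargo publish)
(cd server && cargo publish)
```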
The following should be triggered automatically:
- Push the `v*` tag and a Release will automatically be created on GitHub with the binaries. This reads `CHANGELOG.md`, so make sure to add the changes there.
- The main action required on this repo is to update the changelog and tag releases. The tags trigger the build and publish processes in the CI.
Note:
- We use semver, and are still quite far from 1.0.0.
- The version for `atomic-lib` is the most important, and dictates the versions of `cli` and `server`. When `lib` changes minor version, `cli` and `server` should follow.
- GitHub Action for `push`: builds + tests + docker (using `earthly`, see `Earthfile`).
- GitHub Action for `tag`: create release + publish binaries.
If the CI scripts for some reason do not do their job (building releases, docker file, publishing to cargo), you can follow these instructions:

- `cargo build --release`
- Create a release on GitHub, add the binaries.
- Update the versions in the `Cargo.toml` files using Semantic Versioning.
- Run `cargo publish` in `lib`, then run the same in `cli` and `server`.

OR

- Install `cargo-release` (`cargo install cargo-release`) and run `cargo release patch`.
DockerHub has been set up to track the `master` branch, but it does not tag builds other than `latest`.

- Build: `docker build . -t joepmeneer/atomic-server:v0.20.4 -t joepmeneer/atomic-server:latest`
- Run, and make sure it works: `docker run joepmeneer/atomic-server:latest`
- Publish: `docker push -a joepmeneer/atomic-server`

or:

- Build and publish the various builds in one step (warning: building for ARM takes long!): `docker buildx build --platform linux/amd64,linux/arm64 . -t joepmeneer/atomic-server:v0.20.4 -t joepmeneer/atomic-server:latest --push`. Note that including the armv7 platform (`linux/arm/v7`) currently fails.
- Run the `deploy` GitHub Action, or do it manually:
  - `cd server`
  - `cargo build --release --target x86_64-unknown-linux-musl --bin atomic-server` (if it fails, use cross, see above)
  - `scp ../target/x86_64-unknown-linux-musl/release/atomic-server atomic:~/atomic/server/atomic-server-v0.{version}`
  - `ssh atomic` (@joepio manages the server)
  - `service atomic restart`
```sh
# logs
journalctl -u atomic.service
# logs, since one hour ago, follow
journalctl -u atomic.service --since "1 hour ago" -f
```
- Install `wasmer` and `cargo-wasi`.
- `cd cli`
- Run `cargo wasi build --release --no-default-features` (note: this currently fails, as `ring` does not compile to WASI at this moment).
- `wapm publish`