This repository is a reference implementation of the OpenTDF protocol and attribute-based access control (ABAC) architecture, along with the tooling and tests needed to develop it.
We store several services combined in a single git repository for ease of development. These include:
- Key Access Service - An ABAC access policy enforcement point (PEP) and policy decision point (PDP).
- Attributes - An ABAC attribute authority.
- Entitlements - An ABAC entitlements policy administration point (PAP).
- Entitlements Store - An ABAC entitlements policy information point (PIP).
- Entitlements PDP - An ABAC entitlements policy decision point (PDP).
- Postgres as the backing database.
- Keycloak as an example OIDC authentication provider, and sample configurations for it.
- Tools and shared libraries
- Helm charts for deploying to Kubernetes
- Integration tests
- The `containers` folder contains individual containerized services in folders, each of which should have a `Dockerfile`.
- The build context for each individual containerized service should be restricted to the folder of that service - shared dependencies should either live in a shared base image, or be installable via package management.
- The `charts` folder contains Helm charts for every individual service, as well as an umbrella `backend` Helm chart that installs all backend services.
- Integration test configs and helper scripts are stored in the `tests` folder. Notably, a useful integration test (x86 only) is available by running `tilt ci -f xtest.Tiltfile` from the repo root.
- A simple local stack can be brought up by running `tilt up` from the repo root. This simply uses Tilt to install the backend Helm chart and deploy it with locally-built Docker images and Helm charts, rather than pulling tagged and released artifacts.
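As an illustration of the per-service build-context rule above, a service Dockerfile might look like the following sketch (the service name, base image, and file layout here are assumptions for illustration, not the repo's actual files):

```dockerfile
# containers/example-service/Dockerfile (illustrative sketch)
FROM python:3.10-slim

WORKDIR /app

# Shared dependencies are installed via package management rather than
# copied from sibling folders, so this folder alone suffices as context.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "-m", "example_service"]
```

An image like this builds with the context restricted to the service's own folder, e.g. `docker build -t example-service containers/example-service`.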
This quick start guide is primarily for standing up a local, Docker-based Kube cluster for development and testing of the OpenTDF backend stack. See Existing Cluster Installation for details on using a traditional Helm deployment to an operational cluster.
- Install Docker
- Install kubectl
  - On macOS via Homebrew: `brew install kubectl`
  - Others see https://kubernetes.io/docs/tasks/tools/
- Install a local Kubernetes manager. Options include minikube and kind. We suggest using `ctlptl` (see below) for managing several local clusters.
  - minikube
    - On macOS via Homebrew: `brew install minikube`
    - Others see https://minikube.sigs.k8s.io/docs/start/
  - kind
    - On macOS via Homebrew: `brew install kind`
    - On Linux or WSL2 for Windows: `curl -Lo kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64 && chmod +x kind && sudo mv kind /usr/local/bin/kind`
    - Others see https://kind.sigs.k8s.io/docs/user/quick-start/#installation
- Install helm
  - On macOS via Homebrew: `brew install helm`
  - Others see https://helm.sh/docs/intro/install/
- Install Tilt
  - On macOS via Homebrew: `brew install tilt-dev/tap/tilt`
  - Others see https://docs.tilt.dev/install.html
- Install ctlptl
  - On macOS via Homebrew: `brew install tilt-dev/tap/ctlptl`
  - Others see https://github.com/tilt-dev/ctlptl#homebrew-maclinux
```shell
# Install pre-requisites (drop what you've already got)
./scripts/pre-reqs docker helm tilt kind
```
- Generate local certs in the `certs/` directory (you may need to manually clean the `certs` folder occasionally):

  ```shell
  ./scripts/genkeys-if-needed
  ```
- Create the cluster:

  ```shell
  ctlptl create cluster kind --registry=ctlptl-registry --name kind-opentdf
  ```
- Start the cluster:

  ```shell
  tilt up
  ```

  This runs the main Tiltfile in the repo root (`./Tiltfile`). Tilt will watch the local disk for changes, and rebuild/redeploy images on local changes.
- Hit spacebar to open the web UI.
- (Optional) Run `octant` - this will open a browser window giving you a more structured and complete overview of your local Kubernetes cluster.
To tear everything down:

```shell
tilt down
ctlptl delete cluster kind-opentdf
helm repo remove keycloak
```
- Install kubectl
  - On macOS via Homebrew: `brew install kubectl`
  - Others see https://kubernetes.io/docs/tasks/tools/
- Install helm
  - On macOS via Homebrew: `brew install helm`
  - Others see https://helm.sh/docs/intro/install/
- Officially tagged and released container images and Helm charts are stored in GitHub's ghcr.io OCI registry.
- You must follow GitHub's instructions to log into that registry, and your cluster must have a valid pull secret for it.
- You must override the `backend` chart's `global.opentdf.common.imagePullSecrets` property and supply it with the name of your cluster's existing, valid pull secret.
- Ensure your `kubectl` tool is configured to point at the desired existing cluster.
- TODO/FIX: Inspect the Tiltfile for the required, preexisting Kube secrets, and create them manually.
- Inspect the backend Helm values file for available install flags/options.
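To make the pull-secret override concrete, here is a sketch; the secret name `ghcr-pull-secret` is hypothetical, and the exact shape of `imagePullSecrets` (list of names vs. list of objects) should be checked against the chart's values file before use:

```shell
# Create a pull secret in the target cluster (requires a ghcr.io personal access token):
#   kubectl create secret docker-registry ghcr-pull-secret \
#     --docker-server=ghcr.io --docker-username=<user> --docker-password=<token>

# Then reference the secret name in a values override file:
cat > any-desired-values-overrides.yaml <<'EOF'
global:
  opentdf:
    common:
      imagePullSecrets:
        - name: ghcr-pull-secret
EOF
cat any-desired-values-overrides.yaml
```

The override file is then passed to `helm install` with `-f`, as shown below.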
```shell
helm install otdf-backend oci://ghcr.io/opentdf/charts/backend -f any-desired-values-overrides.yaml
helm uninstall otdf-backend
```
The microservices support OpenAPI, which provides documentation and easier interaction for the REST APIs. Append `/api/[service name]/docs/` to the base URL of the appropriate server - for example, http://127.0.0.1:65432/api/kas/docs/.
KAS and EAS each have separate REST APIs that, together with the SDK, support the full TDF3 process for encryption, authorization, and decryption. Swagger-UI can be disabled through the `SWAGGER_UI` environment variable. See the configuration section of the KAS README for more detail.
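For instance, the docs URL for a service can be composed from the base URL and service name used in the example above (this snippet only prints the URL; actually fetching it requires a running stack):

```shell
# Compose the OpenAPI docs URL for a given service
BASE_URL="http://127.0.0.1:65432"
SERVICE="kas"
DOCS_URL="${BASE_URL}/api/${SERVICE}/docs/"
echo "${DOCS_URL}"
# With the stack up, fetch it with: curl -fsS "${DOCS_URL}"
```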
Please use the autoformatters included in the `scripts` directory. To run them in git as a pre-commit hook, use the following:

```shell
scripts/black --install
scripts/prettier --install
scripts/shfmt --install
```

These commands will autoformat Python and bash scripts after you run `git commit` but before the commit is written to the tree. Then mail a PR and follow the advice on the PR template.
Our unit tests use pytest, and should integrate with your favorite environment. For continuous integration, we use `monotest`, which runs all the unit tests in a Python virtual environment.

To run all the unit tests in the repo:

```shell
scripts/monotest
```

To run a subset of unit tests (e.g. just the `kas_core` tests from the `containers/kas/kas_core` subfolder):

```shell
scripts/monotest containers/kas/kas_core
```
Once a cluster is running, run:

```shell
tests/security-test/helm-test.sh
```
Once a cluster is running, in another terminal run:

```shell
tilt up --port 10351 -f xtest.Tiltfile
```
Any deployments are controlled by downstream repositories.
TODO Reference opentdf.us deployment?
To assist in quickly starting, use `./scripts/genkeys-if-needed` to build all the keys. The hostname will be assigned `opentdf.local`. Make sure to add `127.0.0.1 opentdf.local` to your `/etc/hosts` or `c:\windows\system32\drivers\etc\hosts`.

Additionally, you can set a custom hostname (`BACKEND_SERVICES_HOSTNAME=myhost.com ./scripts/genkeys-if-needed`), but you might have to update the Tiltfile and various Kubernetes files or Helm chart values. If you need customization, please see the Advanced Usage guide alongside the Genkey Tools.
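A guard-then-append pattern keeps the hosts entry idempotent. The sketch below runs against a scratch file so it is safe to try as-is; the same echo line can be applied to the real `/etc/hosts` with `sudo tee -a`:

```shell
# Demonstrate the hosts entry against a scratch copy
# (use sudo when editing the real /etc/hosts)
HOSTS_FILE="$(mktemp)"
grep -q 'opentdf.local' "$HOSTS_FILE" || echo '127.0.0.1 opentdf.local' >> "$HOSTS_FILE"
cat "$HOSTS_FILE"
```

Running it twice leaves only one entry, since the `grep -q` guard skips the append when the alias is already present.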
- Decide what the hostname for the reverse proxy will be (e.g. example.com).
- Generate TLS certs for ingress:

  ```shell
  ./scripts/genkey-reverse-proxy $HOSTNAME_OF_REVERSE_PROXY
  ```

- Generate service-level certs:

  ```shell
  ./scripts/genkey-apps
  ```

- (Optional) Generate client certificates for PKI support:

  ```shell
  ./scripts/genkey-client
  ```

Each genkey script has a brief help, which you can access like so:

```shell
./scripts/genkey-apps --help
./scripts/genkey-client --help
./scripts/genkey-reverse-proxy --help
```
If you face a CORS issue when running Abacus locally, Abacus is probably running on a different port; you can set the allowed origin via a Tilt argument. The argument is optional and defaults to http://localhost:3000:

```shell
tilt up -- --allow-origin http://localhost:3000
```