(this is a design document for #52 and the PRs linked to it)
Goal
Allow users to use hephy with any S3-compatible object storage (self-hosted or otherwise).
Do that by letting users specify the S3 endpoint URL via a Helm chart value. That is the most flexible way to go, but it means users need to know that URL (except for AWS, where they can just leave it empty). It might be a good idea to pre-define some common S3-compatible storage providers later on (so that, say, storage: "DigitalOcean" would set the appropriate endpoint), but that is out of scope for this design doc.
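A sketch of what the chart value could look like. Only the `endpoint` key is the proposal here; the surrounding key names are assumptions meant to mirror the existing `s3` section of the workflow chart and may differ from the actual chart:

```yaml
# values.yaml (sketch) -- key names other than `endpoint` are illustrative
s3:
  accesskey: "AKIA..."
  secretkey: "..."
  region: "us-east-1"
  registry_bucket: "my-registry-bucket"
  database_bucket: "my-database-bucket"
  builder_bucket: "my-builder-bucket"
  # leave empty for AWS; set it for any other S3-compatible store
  endpoint: "http://minio.example.com:9000"
```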
Code Changes
- Add an `endpoint` option to the `s3` storage section of the main (workflow) Helm chart
- Use that `endpoint` in all components (i.e. pass it through to the respective S3 client in use):
  - builder: pass the `endpoint` into the `regionendpoint` of the client used here (docker/distribution/storage/s3)
  - registry: set the `REGISTRY_STORAGE_S3_REGIONENDPOINT` env var (as per the docs)
  - postgres: use the endpoint in the initial bucket creation on startup, and set `WALE_S3_ENDPOINT` appropriately for the backup
  - slugbuilder, dockerbuilder, slugrunner: support for that comes in with object-storage-cli, which will need the endpoint parameter passed on to its s3-client
- docs: at least this document should probably mention that non-Amazon S3 also works
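For the postgres item, WAL-E expects `WALE_S3_ENDPOINT` in its own `proto+convention://host:port` form rather than a plain URL, so the chart value would need a small conversion somewhere. A minimal sketch of that conversion; the function name is hypothetical and not part of any linked PR, and it assumes path-style addressing (the usual choice for non-AWS stores):

```python
from urllib.parse import urlparse

def wale_s3_endpoint(endpoint_url: str) -> str:
    """Turn a plain endpoint URL (e.g. "http://minio.example.com:9000")
    into the "proto+convention://host:port" form WAL-E expects.
    Hypothetical helper for illustration only."""
    parsed = urlparse(endpoint_url)
    scheme = parsed.scheme or "https"
    # default ports when the URL does not name one explicitly
    port = parsed.port or (443 if scheme == "https" else 80)
    # "path" = path-style addressing, typical for self-hosted S3 stores
    return f"{scheme}+path://{parsed.hostname}:{port}"

print(wale_s3_endpoint("http://minio.example.com:9000"))
# -> http+path://minio.example.com:9000
```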
Tests
Not sure yet. I guess these would mostly have to be integration/end-to-end tests, but we'd need to start some sort of S3 server (Minio?) for them. Input welcome :)
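If Minio is the way to go, a throwaway test instance could be declared along these lines. This is just a sketch of one possible setup; the credentials and port are placeholders:

```yaml
# docker-compose fragment (sketch) for an S3 server to test against
services:
  minio:
    image: minio/minio
    command: server /data
    ports:
      - "9000:9000"
    environment:
      MINIO_ACCESS_KEY: testkey        # placeholder credentials
      MINIO_SECRET_KEY: testsecret
```

The test suite would then point the `endpoint` value at `http://localhost:9000` and run the usual build/deploy flow against it.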