
Design Doc: Support non-AWS S3-compatible object storage #72

@rwos


(this is a design document for #52 and the PRs linked to it)

Goal

Allow users to use Hephy with any S3-compatible object storage (self-hosted or otherwise).

Do that by letting users specify the S3 endpoint URL via a Helm chart value. That is the most flexible approach, but it means users need to know that URL (except on AWS, where they can simply leave it empty). It might be a good idea to pre-define some common S3-compatible storage providers later on (so that, say, storage: "DigitalOcean" would set the appropriate endpoint), but that is out of scope for this design doc.
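A minimal sketch of the endpoint resolution this implies (function and parameter names are illustrative, not the actual Hephy code): an empty endpoint means "use AWS", anything else is passed through to the S3 client verbatim.

```python
def resolve_s3_endpoint(endpoint: str, region: str) -> str:
    """Return the S3 endpoint URL a component should talk to.

    Illustrative only: an empty endpoint falls back to the standard
    AWS regional endpoint; any other value is used as-is.
    """
    if endpoint:
        # Non-AWS S3-compatible storage: use the user-supplied URL verbatim.
        return endpoint
    # AWS: default to the standard regional endpoint.
    return f"https://s3.{region}.amazonaws.com"
```

For example, `resolve_s3_endpoint("", "us-east-1")` yields the AWS endpoint, while `resolve_s3_endpoint("https://minio.example.com:9000", "us-east-1")` returns the user-supplied URL unchanged.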

Code Changes

  • Add an endpoint option to the s3 storage section of the main (workflow) helm chart
  • Use that endpoint in all components (i.e. pass it through to the respective S3 client in use)
    • builder: pass the endpoint through as the regionendpoint of the S3 client used there (docker/distribution/storage/s3)
    • registry: set the REGISTRY_STORAGE_S3_REGIONENDPOINT env var (as per the docs)
    • postgres: use the endpoint in the initial bucket creation on startup, and set WALE_S3_ENDPOINT appropriately for the backup
    • slugbuilder, dockerbuilder, slugrunner: support for that comes in with object-storage-cli, which will need the endpoint parameter passed on to its s3-client
    • docs: at least this document should probably mention that non-Amazon S3 also works
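As a rough sketch of what the chart value and the registry wiring could look like (the endpoint key is the one proposed here; the surrounding key names and values are hypothetical and may not match the actual chart):

```yaml
# values.yaml (sketch; "endpoint" is the new value proposed in this doc)
s3:
  endpoint: "https://minio.example.com:9000"   # leave empty for AWS
  region: "us-east-1"

# registry deployment (sketch): the endpoint surfaces as the env var
# documented by docker/distribution
env:
  - name: REGISTRY_STORAGE_S3_REGIONENDPOINT
    value: "https://minio.example.com:9000"
```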

Tests

Not sure yet. These would mostly have to be integration/end-to-end tests, and we'd need to start some sort of S3-compatible server (Minio?) for them. Input welcome :)
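For the integration tests, a throwaway Minio server could be started via Docker, e.g. (a setup sketch; the credentials are obviously just for the test run):

```shell
# Start a local S3-compatible server for the test suite.
docker run -d --name test-minio -p 9000:9000 \
  -e MINIO_ACCESS_KEY=testkey \
  -e MINIO_SECRET_KEY=testsecret \
  minio/minio server /data

# Components under test would then be configured with
# endpoint=http://localhost:9000 and the credentials above.
```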
