feat(storage): support for configuring the S3 endpoint #71
Conversation
This is a good start – so, we support some other storage endpoints already, and they don't use the "s3" configuration section. Does this change work by itself (together with the other updates you placed in teamhephy/distribution#1, of course)? I'm trying to decide whether we should actually add a section for non-s3 storage systems that aren't supported by name, and whether we need to go down the road of adding a config section for each new storage system we want to support, so that users aren't manually setting endpoint knobs on systems that are (or should be) defined in an upstream s3-supporting library somewhere...
@kingdonb well, this works in the sense that it doesn't break anything, but by itself it doesn't do anything either – it still needs a PR for each component to actually get that config option down to the respective s3 client (which I'll open now-ish :)).
I think (apart from minio) these are not s3-compatible, though? But I don't know. I agree that the whole storage config stuff is a bit messy – each component has to do the same things again to support all the options. It would be a bit cleaner to just always run, say, a minio (as storage by default; as a gateway to GCS/AWS/s3-compatibles if requested). I'm not sure that's appropriate for every use case though, and it won't exactly make things faster either. In general, I don't see through Hephy well enough yet to say what direction would be best, so for now I just added the s3.endpoint everywhere.
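For reference, a minimal sketch of what "getting that config option down to the respective s3 client" could look like inside a component, assuming the component uses aws-sdk-go and reads a hypothetical `S3_ENDPOINT` environment variable (both the variable name and the wiring are illustrative, not the actual component code):

```go
package storage

import (
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// newS3Client builds an S3 client that talks to a custom endpoint
// (minio, DigitalOcean Spaces, another s3-compatible service) when
// S3_ENDPOINT is set, and falls back to the regular AWS endpoint otherwise.
func newS3Client(region string) (*s3.S3, error) {
	cfg := aws.NewConfig().WithRegion(region)

	if endpoint := os.Getenv("S3_ENDPOINT"); endpoint != "" {
		cfg = cfg.
			WithEndpoint(endpoint).
			// Many s3-compatible services expect path-style requests
			// (https://endpoint/bucket/key) rather than virtual-hosted style.
			WithS3ForcePathStyle(true)
	}

	sess, err := session.NewSession(cfg)
	if err != nil {
		return nil, err
	}
	return s3.New(sess), nil
}
```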
I am pretty sure they are all basically using the S3 API, I just don't know how well the components all support this inside-and-out. The teamhephy/s3-uploader repo, for one, would I think have to support all three... as AIUI this is the component that handles WAL backups from the postgres component, and the capability of running on-cluster postgres is a requirement of every supported platform. I haven't looked deeply enough at this to say for sure, but I have every intention of digging deeper within the next 7 days or thereabouts! We are excited to add support for DigitalOcean Spaces, at least, in the near future. We want to do this for Spaces so that we can add them to the list of supported platforms quickly after their managed Kubernetes goes into general availability. (And IBM Cloud has had most of these, and all of this kind of supporting infrastructure, available in GA already for at least 6 months.)
Well, if it's all s3 anyway, we can simplify a lot there 👍
From a quick glance at the docs, that should indeed just work by setting that new s3.endpoint option.
Definitely seems like there are some opportunities to simplify the config, but I think for first-class support like we have with the other storage APIs, it would be better to save the user from needing to know about the endpoint at all.

In other words, if it's true that there's only one region, and only one possible setting for the endpoint, in the configuration of a DigitalOcean Spaces-backed Workflow, then a user who selects Spaces shouldn't have to supply it. It should be just the same as someone who is configuring one of the storage backends we already support by name. Regardless, if there are no regions in Spaces, then we can make the config just as easy.

Anyway, if your patch enables some users who know what they're doing to use these other storage backends by setting the endpoint, without any "first-class support," then I don't see any reason why it shouldn't be merged first, ahead of all that.
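To illustrate the kind of "first-class support" being discussed, here is a hypothetical sketch (nothing that exists in the charts or components today) of deriving the Spaces endpoint from a region the user picks, so the user never types an endpoint by hand; the helper name and the per-provider mapping are assumptions for illustration:

```go
package storage

import "fmt"

// spacesEndpoint derives the DigitalOcean Spaces endpoint from a region
// slug chosen by the user, e.g. "nyc3" -> "https://nyc3.digitaloceanspaces.com".
// With a rule like this per named provider, the chart could expose only a
// region (or nothing at all) and fill in s3.endpoint behind the scenes.
func spacesEndpoint(region string) string {
	return fmt.Sprintf("https://%s.digitaloceanspaces.com", region)
}
```

This is essentially the "config section per storage system" direction from the first comment, collapsed down to an endpoint rule per provider.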
Ah, that makes sense – yeah, that should be easy then, just some Helm chart stuff to make that work.
I think for now it would be good to merge this in and allow users to set the endpoint of S3 to whatever they need it to be. After all, the S3 API is what these other backends are speaking anyway. Eventually, I agree with @kingdonb that separating the configurations out would be a good idea.
Can you submit all the PRs and then give me links in Slack to what needs to be merged all at once? I would want to do some commenting and testing before we merge everything.
Yeah we can merge this in for v2.20 as far as I'm concerned |
@Cryptophobia yes, they're all linked to #52 |
@rwos, great news! I am going to need some time to review design docs and PRs, and may ask you some questions while testing.
part of #52
Guess I'll start here – whether you want to keep this open until everything else is done, or merge right away, your call 😄