support the s3 endpoint config option #3
Conversation
(force-pushed from b6f2495 to d7c5632)
So this looks kind of ham-handed, but it will get the job done for anyone who is trying to use AWS_ENDPOINT instead of AWS_REGION. I would like to see a design document to go with this change, per https://docs.teamhephy.info/contributing/design-documents/. (We should nonetheless maintain a merged branch for those with S3 endpoints that are not in AWS regions, IMHO.)
@kingdonb I put a design doc here: teamhephy/workflow#72 - let me know if that makes sense, or if anything is missing!
feat(postgres): use docker's official postgres:11 image
(force-pushed from a4a5a49 to c614d1f)
@Cryptophobia rebased!

Thanks, I hope to get to this soon.

I will be happy to test this on DigitalOcean tonight, if I can get the cluster to deploy this time.

Let us know how it goes @kingdonb 🍿 🎥
What @duanhongyi said, otherwise, thanks! 👍
If I don't get around to the testing this evening, then definitely this weekend. This is the only thing stopping me from declaring DigitalOcean K8s support as "basically production-ready," in an experimental sense at least.
`conn = False` has no real effect here. It should be deleted.
rootfs/bin/create_bucket (outdated):

```python
@@ -23,25 +24,23 @@ bucket_name = os.getenv('BUCKET_NAME')
region = os.getenv('S3_REGION')

if os.getenv('DATABASE_STORAGE') == "s3":
    conn = boto.s3.connect_to_region(region)
conn = False
```
`conn = False` has no practical significance.
too many programming languages, too many scoping rules 😄 - removed
🍻, good to merge
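For reference, the region-based call under review could be made endpoint-aware along these lines. This is a minimal sketch, not the PR's actual code: the `S3_ENDPOINT` variable and the `s3_connection_kwargs` helper name are assumptions for illustration, mapped onto boto 2's `connect_s3()` keyword arguments.

```python
# Sketch (assumed names, not the PR's code): translate an endpoint URL like
# 'https://minio.example.com:9000' into keyword arguments suitable for
# boto.connect_s3(), falling back to connect_to_region() when unset.
from urllib.parse import urlparse

def s3_connection_kwargs(endpoint):
    parsed = urlparse(endpoint)
    if not parsed.scheme:
        # A bare hostname ('minio.example.com') parses with an empty scheme;
        # treat it as https, matching boto's default of is_secure=True.
        parsed = urlparse('https://' + endpoint)
    return {
        'host': parsed.hostname,
        'port': parsed.port,               # None means the scheme's default
        'is_secure': parsed.scheme == 'https',
    }

# Hypothetical usage inside create_bucket:
#   import os, boto, boto.s3
#   endpoint = os.getenv('S3_ENDPOINT')
#   if endpoint:
#       conn = boto.connect_s3(**s3_connection_kwargs(endpoint))
#   else:
#       conn = boto.s3.connect_to_region(region)
```

Parsing the scheme up front would also sidestep the "do I include https:// or not" confusion reported later in this thread.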
(force-pushed from c614d1f to fe60cb4)
I am excited to test this! Can anyone confirm that this is already working in a cluster? I'm assuming from the discussion that it is, but I haven't seen it for myself yet. There are a number of components that depend on the S3 API, and they do not all work in the same way; I'd like to test them all together, if possible. But this is the big one.

@kingdonb FWIW, I did have all the parts running in a cluster, and I did run the end-to-end tests against them, with an off-cluster S3 (Virtuozzo Storage at first, currently Minio). Non-production setup though, so it's entirely possible that there's still stuff I have overlooked.
I'm building it now... I just noticed

It is possible that I built it wrong, or made some other configuration error... I still don't have CI in place to test a change like this. I will try your prebuilt images, @rwos.
I tried again after reading the patch and realizing I was probably meant to include "https://" in my S3_ENDPOINT value. This time I get (after some time elapses):
At this point I believe some fine-tuning is needed, at least, or documentation for the feature... if I remove the "https" and use an unencrypted S3 endpoint, it does appear to reach the bucket. I exec'd into the container to see if I could diagnose it, and since we've used the upstream image, wget and curl are missing. Fortunately the failure takes some time to time out, so I had enough time to install them... and to find that the ca-certificates package is missing as well. Without the encryption, I get:

I'd say the client is trying to enforce SSL, which is probably good. So chances are, simply adding the ca-certificates package to this image will resolve the situation.
That was the issue, now I have:
It's going to be hard to ensure the database backup is KMS-encrypted when I'm using DigitalOcean Spaces... there is a line in rootfs/patcher-script.d/patch_wal_e_s3.py which requests this encryption if you asked for "s3" in DATABASE_STORAGE, which I assume is set from global.storage in values.yaml. I've just converted it to a blanket "False" for my own testing, and we'll see if that makes it usable. I wonder if this has changed at all in newer versions of boto?
It's unencrypted, but it works! The code is not mergeable though; we need a way to detect that we're using something other than AWS, so that KMS encryption is not requested. I'm afraid of the idea of making a setting for
part of teamhephy/workflow#52