Fetching logs does not work when using Quobyte s3 object storage #811
I wonder if the leading … @bashofmann, can you run …
The result is:
With the correct AWSAccessKeyId and Signature, the URL also works and the file can be downloaded.
For reference, our ark config is:
Hmm, I'm not seeing anything in the code that would put a leading …
I debugged this, and the path of the download URL the ark server returns to the CLI is correct:
But the query parameters are completely different from what …
If I replace these query parameters with AWSAccessKeyId, Signature, and Expires, the URL works.
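For illustration, the query-string scheme that produces those three parameters can be sketched roughly as follows. This mirrors botocore's HMAC-SHA1 query auth; the function name, endpoint, and credentials here are made up for the example and are not Ark's actual code.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote, urlencode

def presign_v1(access_key, secret_key, method, bucket, key, expires):
    # expires is an absolute Unix timestamp, not a duration.
    resource = f"/{bucket}/{key}"
    # String to sign: method, content-md5, content-type, expires, resource
    # (content-md5 and content-type are empty for a plain GET).
    string_to_sign = f"{method}\n\n\n{expires}\n{resource}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    query = urlencode({
        "AWSAccessKeyId": access_key,
        "Expires": str(expires),
        "Signature": signature,
    })
    return f"https://s3.example.com{quote(resource)}?{query}"

url = presign_v1("AKIDEXAMPLE", "secret", "GET",
                 "ark-backups", "backup1/logs.gz", 1735689600)
print(url)
```

A SigV4 presigned URL instead carries `X-Amz-Algorithm`, `X-Amz-Credential`, `X-Amz-Date`, and `X-Amz-Signature` parameters, which is why a server that only understands the older scheme rejects it.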
The same problem occurs when trying to download a backup with the CLI, by the way.
What OpenStack version are you running?
We are running OpenStack Mitaka and use Quobyte 2.5 for S3 storage.
So S3 is provided directly by Quobyte? OpenStack isn't really relevant/involved?
Sorry for the late reply. I checked with our OpenStack team, and yes, S3 is provided directly by Quobyte. It seems that Quobyte only supports the V2 signer API, but ark uses V4.
@bashofmann we can look into this. I checked the AWS SDK, and the v2 SignRequestHandler is in a private package. We would need to define a new key for our AWS ObjectStore's config to specify the signature version. Would you be interested in trying to put together a PR for this?
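Such a key might look roughly like this in the Ark Config's backup storage provider section. This is only a sketch: the `signatureVersion` key name and its placement are assumptions, not a documented option.

```yaml
# Hypothetical Ark config fragment; signatureVersion is an assumed key.
backupStorageProvider:
  name: aws
  bucket: ark-backups
  config:
    region: us-east-1
    s3Url: https://quobyte.example.com
    s3ForcePathStyle: "true"
    signatureVersion: "1"
```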
I looked into it a bit further, and according to the AWS Python CLI, S3 uses either V4 of the signer or V1, which does not even seem to be implemented in the Go library. Relevant Python implementation: https://github.com/oNestLab/botocore/blob/d6c1be296e8cfe0706cb0c8bbcad9c095d0f4d09/botocore/auth.py#L860-L862
@bashofmann the AWS Go SDK does have a v2 signer, if that's what you need (with the caveat from the repo that it's a private API and may change or go away). You could also do a separate Quobyte plugin if you wanted to. We have #193 as a possible way to make it easier to share common AWS behavior with differences here and there (e.g. Quobyte signatures), but we haven't implemented it yet.
The v2 signer unfortunately does not work; I already tried it. And judging from the Python code, there are some differences between v1 and v2.
Ah, that's unfortunate. I'll talk with the team about whether or not we want to own a v1 signing algorithm in the Ark code base and get back to you. |
FYI: with the V1 algorithm, downloading logs and backups works correctly with Quobyte S3.
cc @skriss |
Some AWS implementations, for example the Quobyte object storage, do not support the v4 signing algorithm, only v1. This change makes it possible to configure the signatureVersion. The algorithm implementation was ported from https://github.com/oNestLab/botocore/blob/d6c1be296e8cfe0706cb0c8bbcad9c095d0f4d09/botocore/auth.py#L860-L862, which is used by the AWS CLI client. Fixes vmware-tanzu#811. Signed-off-by: Bastian Hofmann <bashofmann@gmail.com>
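The selection the commit describes could be plumbed roughly like this. This is a hypothetical Python sketch of the dispatch (Ark itself is written in Go); the function and key names are assumptions, not Ark's actual API.

```python
# Hypothetical: pick a signing scheme based on a signatureVersion config key.
def make_presigner(config):
    version = config.get("signatureVersion", "4")
    if version == "1":
        # Would call the ported HMAC-SHA1 query signer here.
        return lambda req: "v1-signed"
    # Default: AWS Signature Version 4.
    return lambda req: "v4-signed"

signer = make_presigner({"signatureVersion": "1"})
print(signer(None))  # → v1-signed
```

Defaulting to v4 keeps existing AWS deployments unchanged, while setting the key to "1" opts a Quobyte-style endpoint into the older scheme.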
What steps did you take and what happened:
When creating a backup using OpenStack S3 object storage, backup and restore work just fine, but fetching the logs fails:
The files exist though and also have the correct valid content:
What did you expect to happen:
Logs are displayed correctly
The output of the following commands will help us better understand what's going on:
(Pasting long output into a GitHub gist or other pastebin is fine.)
kubectl logs deployment/ark -n heptio-ark
No errors.
ark backup describe <backupname>
or kubectl get backup/<backupname> -n heptio-ark -o yaml
No errors.
ark backup logs <backupname>
Doesn't work, see above. No errors when manually downloading and viewing them.
ark restore describe <restorename>
or kubectl get restore/<restorename> -n heptio-ark -o yaml
No errors.
ark restore logs <restorename>
Doesn't work, see above. No errors when manually downloading and viewing them.
Environment:
ark version: 0.9.3
kubectl version: 1.11.2 on server and client
/etc/os-release: Ubuntu Xenial