Add s3:// as possible package URLs #960
One open question would be: do we want to use the rust-s3 crate (which might introduce additional dependencies) that also provides some sort of native credential reading (through env vars, ...), or should we rely only on our own implementation where we store access keys, etc. in pixi's credential stores? I would personally like pixi's credential stores to be able to handle S3 credentials as well. For reproducibility's sake we should IMO definitely add the AWS region and endpoint (if not the default) to pixi.lock (and pixi.toml).
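As a rough sketch of that last point (all key names below are my own assumptions, not a settled schema), the region and a non-default endpoint could be recorded next to the channel in pixi.toml and duplicated into pixi.lock when resolving:

```toml
# pixi.toml (hypothetical schema)
[s3.my-bucket]
region = "eu-central-1"
# only needed for non-AWS / non-default endpoints (e.g. MinIO)
endpoint-url = "https://minio.example.com"
```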
AFAICT the design above should be able to handle other providers like MinIO, Cloudflare, and GCS (for which we already have a middleware; maybe deprecate it afterwards?) as well.
@pavelzw, any reason for not using the same approach as for GCS? I reckon using existing crates for authentication is the best path: these things have a lot of corner cases...
IMO S3 is a protocol with a much larger scope than AWS itself, since it's used by almost all other cloud storage providers and not only AWS. Also, from what I've seen, AWS S3 is not really a hard protocol to implement. But I'm open to including it via, for example, rust-s3. I'm not sure about pulling the whole AWS SDK in as a dependency; that might result in unnecessarily large binary sizes for pixi.
@pavelzw currently, the GCS middleware relies on a small external crate for authentication, but then directly maps the gcs URL to an HTTPS endpoint, rather than depending on any Google Cloud Storage SDK. Moving to rust-s3 would only replace that URL mapping, and may be valuable to handle uploads, for instance. However, I think including middlewares with a specific authentication mechanism is something orthogonal to this, and quite valuable (I am not loving keyring Python packages...). What do you think?
Roping in @ruben-arts since this also concerns pixi. @pavelzw, do you have an idea of how you would implement this in rattler? We currently don't really have a way to parameterize channels.
Shouldn't we be able to parametrize them similarly to how we do mirror configuration, and feed this information into rattler from pixi and rattler-build?
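For context, pixi's existing mirror configuration maps a channel URL to one or more alternative locations in the global config; something similar could carry the S3 parameters (the s3:// mapping below is purely hypothetical):

```toml
# config.toml
[mirrors]
# existing style: redirect a channel to a mirror
"https://conda.anaconda.org/conda-forge" = ["https://prefix.dev/conda-forge"]
# hypothetical: redirect a channel to an S3 bucket
"https://conda.anaconda.org/private-channel" = ["s3://my-bucket/private-channel"]
```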
That would mean we'd have configuration for channels living both in the "channels" list and in the middleware, coupling them. You also want to store the information in the lock file, which then also requires specific configuration. It would be nice if we could encapsulate that some more.
Would there be configuration like this we want to be able to set for
I don't think so, @ruben-arts, but maybe I am missing something.
theoretically GCS is S3-compatible as well, so we would be able to use GCS like this:

```toml
# pixi.toml
[s3]
region = "us-east1"
endpoint = "https://storage.googleapis.com/"
url-style = "path"
```

you would need to specify your credentials manually using
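Assuming such an `[s3]` table existed (the schema above is only a sketch), the bucket would then be usable like a regular channel, e.g.:

```toml
# pixi.toml (hypothetical)
[project]
name = "my-project"
channels = ["s3://my-gcs-bucket/channel", "conda-forge"]
platforms = ["linux-64"]
```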
@pavelzw, I agree this would work. But I feel this would be much less practical than the current solution in place, where one only has to specify the gcs URL and use the usual GCP authentication mechanisms. To summarize,
Also, I wonder if, in terms of pixi config, it wouldn't be more natural to allow s3 config as inline tables in the channels array?
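To illustrate the inline-table idea (the field names here are made up, not an agreed design), a channel entry could carry its S3 parameters directly:

```toml
# pixi.toml (hypothetical inline-table channels)
channels = [
    "conda-forge",
    { url = "s3://my-bucket/channel", region = "eu-central-1", endpoint = "https://s3.eu-central-1.amazonaws.com" },
]
```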
How about making it possible to configure it as described above (or your suggestion with inline tables), and if nothing is provided, we fall back to the aws-creds crate?
@pavelzw, I am not sure I understand what you mean. I think there are two orthogonal aspects to consider:
@delsner and I are planning to add S3 support to pixi/rattler.
Here are some ideas for modifying the config files (a rough, hypothetical sketch follows the list):
- pixi.toml
- config.toml
- credentials.json
- pixi.lock
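Purely for illustration, and not the actual proposal details (every key below is hypothetical), the per-project piece could look roughly like this:

```toml
# pixi.toml (hypothetical, per project)
[project]
channels = ["s3://my-bucket/channel"]

[s3.my-bucket]
endpoint-url = "https://s3.eu-central-1.amazonaws.com"
region = "eu-central-1"
force-path-style = false
```

config.toml could hold machine-wide defaults for the same keys, credentials.json the access key pair per bucket, and pixi.lock the resolved s3:// URLs together with the endpoint and region used.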
Any thoughts on this proposal?