Cargo authenticating users without sending secrets over the network #3231
Conversation
On macOS, this could be done more securely using its Keychain APIs. In macOS there's a chain of trust established from the developer of the application that uses credentials all the way to the kernel, which checks which codesigned processes can get credentials. Certificates are never written to disk unprotected, and 3rd-party applications can never get other apps' credentials. So I suggest prefixing the whole scheme with "on platforms other than macOS," and on macOS using the Keychain APIs. They're a few simple C calls for getting certificates and passwords. It does depend on Cargo being codesigned, but this is already an important thing Cargo needs to do.
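For illustration, a minimal sketch of the kind of Keychain call involved, assuming the `security-framework` crate's generic-password helpers; the service and account names are made up for the example, and this is a sketch of the idea rather than part of the proposal:

```rust
// Minimal sketch (macOS only): store and fetch a registry credential via the
// Keychain instead of writing it to credentials.toml. Assumes the
// `security-framework` crate; the service/account names are invented here.
#[cfg(target_os = "macos")]
fn main() -> Result<(), Box<dyn std::error::Error>> {
    use security_framework::passwords::{get_generic_password, set_generic_password};

    // Store the secret once; the Keychain (not cargo) protects it at rest.
    set_generic_password("cargo-registry:example", "my-user", b"s3cret-token")?;

    // Later, fetch it back; the OS can gate access on codesigning / user approval.
    let secret = get_generic_password("cargo-registry:example", "my-user")?;
    assert_eq!(secret, b"s3cret-token".to_vec());
    Ok(())
}

#[cfg(not(target_os = "macos"))]
fn main() {}
```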
The PASETO token returned by the credential-helper can only be used for uploading a single package if I understand the RFC correctly. This makes an attacker running
Note also that for hardware tokens the "private-key-path" would probably point to some information that needs to be passed to the hardware token and would be useless without the hardware token, rather than the actual private key.
Another point on macOS (which would probably be addressed by what @kornelski suggests) is that it would be great to leverage the TouchID support that many modern macOS machines have. Personally, I find the use of a token in this case a little strange. Based on the reference explanation, it seems like signing requests would be simpler. For example, using Amazon's SigV4 or the JOSE-based method outlined in the ACME RFC.
One of the benefits of using an expressive bearer token format is being able to set policies around / attenuate those tokens, which is useful in a delegation context like allowing a bot to publish a specific release of a specific crate. See: #2947
TouchID functions as a policy specified via an AuthZ ACL you can configure for a particular keychain item as stored by Keychain Services (possibly in the SEP, if that's desired). FWIW, I wrote a crate with bindings to the relevant API surface which would be nice to upstream. The tricky part @kornelski mentioned is that this kind of feature would need to be directly added to the cargo binary itself, as there is currently no way to e.g. use X.509 codesigning certificate chains to delegate authority to use TouchID to a signed external process. Another problem is that if you do want to store keys in the Secure Enclave Processor (SEP), they must be 256-bit. For ECDSA this means using the NIST P-256 elliptic curve, but PASETO only supports NIST P-384.
text/0000-cargo-asymmetric-tokens.md (Outdated)
In practice, I suspect many registries will not, leading to an ecosystem where most registries use less secure authentication, and creating more hazards for users. Some security properties (e.g. not supporting tokens from one registry on another) work better when all registries support them.

We could use PASETO `v4.public` instead of `v3.public`. The `v4` standard uses more modern crypto, based on Ed25519.
Unfortunately, most existing hardware tokens do not support Ed25519. By using `v3.public` based on P-384 we allow a `credential-process` to keep keys on the hardware.
The issue with NIST curves is that they are smelly. Hardware is improving as we speak, and I think 99% of users will use these keys without using hardware features. Thus I believe Ed25519 is way better, and v4 should be at least the default. Regarding the format discussion below, with RFC 8410 there is a simple, ASN.1-based, self-describing way to encode the keys (which is preferable to "just" dumping the keys, due to algorithm agility).
> The issue with NIST curves is that they are smelly
The main concerns around the NIST curves (and short Weierstrass curves in general), namely invalid curve attacks due to the use of incomplete formulas, were eliminated by the discovery of complete addition formulas for prime order curves in 2015 which grew out of the Edwards formulas used by Ed25519.
While Ed25519 is a nice modern signature algorithm and one definitely worth supporting, the formulas from the aforementioned paper provide many of the same benefits Edwards formulas originally brought, namely exception-free implementations which operate in constant-time.
I found a comparison on HN which says that Ed25519 is generally better: https://news.ycombinator.com/item?id=19749452
Furthermore, Ed25519 has less red in the table on SafeCurves compared to the NIST curves.
Generally it might be a good idea to support multiple schemes from the start, to ensure that there is algorithm agility.
> Furthermore, Ed25519 has less red in the table on SafeCurves compared to the NIST curves.

The table on the SafeCurves site is critiquing the obsolete and incomplete formulas I was describing earlier, and is specifically addressed in the paper I linked (p. 5), where citation [14] is the SafeCurves site:
> Prime order curves can be safe. Several of the standardized prime order curves mentioned above have recently been critiqued in [14], where they were deemed not to meet (some or all of) the four "ECC security" requirements: (i) Ladder, (ii) Twists, (iii) Completeness, and (iv) Indistinguishability.
>
> On the contrary, this paper shows that prime order curves have complete formulas that are comparably efficient.
Why the SafeCurves site has not been updated to reflect the developments of this paper is uncertain; however, it contains obsolete information.
The security properties of implementations of Edwards curves as used by Ed25519, or of short Weierstrass curves like NIST P-256 and P-384 implemented using the complete projective formulas, are effectively the same, and algorithmically quite similar. As it were, the RustCrypto `p256` crate is implemented using these formulas, and we are working on implementing `p384` with them as well.
Where Ed25519 benefits is areas like performance (Edwards curves are more amenable to parallel implementations) and more exotic applications like batch verification, multisignatures, and signature aggregation, which are outside of the scope of this particular use case.
> Generally it might be a good idea to support multiple schemes from the start, to ensure that there is algorithm agility.
I would suggest supporting Ed25519 along with ECDSA/P-{256,384} (possibly only choosing one of P-256 or P-384). While Ed25519 is great, support for it in things like hardware tokens, OS keychains, HSMs, and cloud KMS services is still somewhat spotty, but slowly improving.
To make sure I understand correctly: the RFC as-is currently would not support Ed25519, and supporting both v3 and v4 (and thus both P-384 and Ed25519) is not part of the RFC? But the RFC is written in a way that would make that a possible extension in the future?
I am not a cryptographer and am new to all the details here. So I'm going to explain as carefully as I can, so that someone can tell me if I have something wrong.
PASETO V4 is an encoding that requires the use of the algorithm Ed25519. PASETO V3 is an encoding that requires the use of the algorithm ECDSA/P-384. Also, all the PASETO variations clearly identify what version they came from.

The RFC as currently written only supports PASETO V3. That means that cargo will only understand how to generate and read keys for PASETO V3 and use the algorithm ECDSA/P-384. Furthermore, it will insist that credential processes return something that appears to be correctly formatted as a PASETO V3.
It is not hard to imagine how this could be lifted in a future RFC. This would allow for a new format, say PASETO V4 or Biscuit.
That RFC would have to specify:
- How `--generate-keypair` decides `V3`/`V4`/`Biscuit`? Presumably, a new flag to specify format and a timeline for the default to change.
- When reading a config file how cargo decides which one it has found? Almost certainly, read the version number encoded in the key and use its matching format.
- When receiving from a credential process which style to check? Presumably, we allow anything that looks like any of the supported formats. (Note that this could be done before and independently of the other changes.)
- Whether to deprecate and remove the old format? This would require extreme justification, but maybe worth discussion depending on the urgency of switching.
- Whether and how a registry can request a particular format? Presumably, Cargo will send whatever format is specified in the config, if it is not a format supported by this registry the registry will return an error code to show the problem. (How does this work with credential providers?)
Does that answer your questions? Did I miss something important?
Yes, that all makes sense, thanks. I am not clear on how 'format detection' would be done (are all these formats clearly distinct from each other in their encoded form, do we use filenames, whatever), but as long as that option exists I do not care about the technical details. :)
> are all these formats clearly distinct from each other in their encoded form?

Basically yes. Tokens encoded in PASETO V3 start with `v3.public.`. PASETO V3 keys encoded in PASERK start with `k3.secret.` and key IDs start with `k3.pid.`. (That sentence is also true if you replace all the 3s with 4s.) If we are switching to a different family of formats, which do not have a clear prefix to identify them, we can define that we are sticking some arbitrary string in front for disambiguation purposes.
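Building on those prefixes, here is a small illustrative sketch of what prefix-based detection could look like; the enum and function names are hypothetical, not Cargo's:

```rust
// Illustrative only: classify a credential string by the prefixes described above.
#[derive(Debug, PartialEq)]
enum CredentialKind {
    PasetoV3Token,     // starts with "v3.public."
    PasetoV4Token,     // starts with "v4.public."
    PaserkV3SecretKey, // starts with "k3.secret."
    PaserkV4SecretKey, // starts with "k4.secret."
    Unknown,
}

fn detect(s: &str) -> CredentialKind {
    if s.starts_with("v3.public.") {
        CredentialKind::PasetoV3Token
    } else if s.starts_with("v4.public.") {
        CredentialKind::PasetoV4Token
    } else if s.starts_with("k3.secret.") {
        CredentialKind::PaserkV3SecretKey
    } else if s.starts_with("k4.secret.") {
        CredentialKind::PaserkV4SecretKey
    } else {
        CredentialKind::Unknown
    }
}

fn main() {
    assert_eq!(detect("k3.secret.AAAA"), CredentialKind::PaserkV3SecretKey);
    assert_eq!(detect("v4.public.BBBB"), CredentialKind::PasetoV4Token);
    assert_eq!(detect("not-a-recognized-format"), CredentialKind::Unknown);
}
```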
text/0000-cargo-asymmetric-tokens.md (Outdated)
# Reference-level explanation

In [`config.toml`](https://doc.rust-lang.org/cargo/reference/config.html) and `credentials.toml` files there is a field called `private-key-path`, which provides a path to a `PKCS#12` formatted file containing a private key.
I think storing the key directly in credentials.toml makes the most sense, in base64 format or similar. Most users won't benefit from having it in a separate file. Instead, there is the burden of having two files that need to be protected, and it also means more complexity. Custom use cases can use a credential process, which is superior I think.
I would suggest using PKCS#8 instead of PKCS#12. The latter is a container format for both PKCS#8 private keys (which are themselves asymmetric keypairs) and X.509 certificates, and X.509 certificates don't seem relevant to this particular application.
One nice thing about using PKCS#8 as opposed to something like credentials.toml is support for password-protected encrypted private keys, which can be used to protect these keys at rest. PKCS#8v2 supports deriving an encryption key using PBKDF2 or scrypt and using that to encrypt the original private key using AES.
You can use PKCS#8 or whatever other format, but ideally you don't store things in yet another file. That annoys users. TOML supports multiline strings, so you can even have PEM encoding if you want.
I was already convinced that cargo should only support one way of getting the key, and that all other ways of storing keys should require a credential process. I am now convinced that cargo should accept the (encoded) key directly as opposed to a path to the key.
The argument that convinced me is that most CI/CD systems allow you to store secrets in environment variables, but not particularly in files. If cargo accepts the key directly, then we already have an environment variable that will go straight from the CI's secrets system to cargo, the same way it does for the existing secret tokens.

I did not have time to edit this into the RFC today, but I will do so at my next opportunity.
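As a sketch of that CI-friendly direction, the following accepts the key directly, with an environment variable taking precedence over a config value. The variable name `CARGO_REGISTRIES_<NAME>_PRIVATE_KEY` is a hypothetical one chosen to mirror Cargo's existing config-to-environment mapping; it is not something this RFC specifies.

```rust
use std::env;

// Hypothetical lookup order: environment variable first (so CI secret stores
// can inject the key directly), then whatever was stored in credentials.toml.
fn private_key_for_registry(name: &str, from_config: Option<String>) -> Option<String> {
    let var = format!(
        "CARGO_REGISTRIES_{}_PRIVATE_KEY",
        name.to_uppercase().replace('-', "_")
    );
    env::var(&var).ok().or(from_config)
}

fn main() {
    // In CI the secret would be injected as an env var; locally it would
    // typically come from credentials.toml instead.
    let key = private_key_for_registry("my-registry", None);
    println!("private key configured: {}", key.is_some());
}
```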
Of the motivations for this feature, this seems by far the strongest to me. Package managers like cargo are particularly vulnerable to this kind of attack because malicious build scripts can easily exfiltrate such files. The other possible attacks seem much more theoretical, or else can be protected against without user-facing changes. For that reason, I don't think that an overall push to use private keys (stored at rest) instead of tokens (stored at rest) is justified. For example, if I'm setting up a publish CI pipeline, a token is more convenient than a private key, yet a private key does not protect against the primary risk (CI secret being leaked), and given how frequent setting up a CI pipeline is for me, any additional complexity for no benefit is just going to waste a lot of my time.
Even with tokens, there's no reason for a registry to store tokens in a recoverable format. The registry can store token hashes instead, and avoid this problem without inconveniencing the user. Since tokens are already random, you can avoid a lot of the complexities of storing passwords. However, specifically for accounts used by actual humans, I would support a push to any solution that avoids storing keys at rest (i.e. using an agent or hardware token), because in that case there is a much greater potential benefit.
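To make the "store token hashes instead" point concrete, here is a small registry-side sketch, assuming the `sha2` crate; it illustrates the idea only and is not how crates.io is actually implemented:

```rust
// Sketch: store only a hash of each issued token. Because tokens are
// high-entropy random strings, a plain SHA-256 fingerprint is enough for this
// illustration (unlike passwords, which would need salting and stretching).
use sha2::{Digest, Sha256};

fn token_fingerprint(token: &str) -> Vec<u8> {
    Sha256::digest(token.as_bytes()).to_vec()
}

fn verify(presented: &str, stored_fingerprint: &[u8]) -> bool {
    // A constant-time comparison would be preferable in a real registry.
    token_fingerprint(presented) == stored_fingerprint
}

fn main() {
    // The registry keeps only `stored`; a database leak does not reveal the token.
    let stored = token_fingerprint("cio_EXAMPLE_random_token");
    assert!(verify("cio_EXAMPLE_random_token", &stored));
    assert!(!verify("cio_wrong_token", &stored));
}
```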
I added some language addressing UX concerns. When rust-lang/rust#27694 has been completed I would strongly approve of an RFC moving to using the keychain for storage on macOS. Given that rust-lang/rust#27694 is not going to be closed imminently, I think discussion of what we could do after it is out of scope for this RFC. Especially with the explicit section "Note on stability" warning people that if more secure things come along this RFC may be superseded.

I added text discussing Amazon's SigV4. Thank you for reminding me to discuss it. I'm not clear why JOSE is simpler than PASETO; they are theoretically very similar formats for connecting some data to signatures of that data. JOSE can describe many more algorithms and methods, requiring documents like the one you linked to spend their time describing which parts they are comfortable using. Each version of PASETO is only compatible with one algorithm. I don't know anything about ACME. @djc, can you explain what aspect of what they're doing you find simpler?
We do need to be cognizant of the user experience! For use cases where a token would be just as secure, we need to keep overhead down. Generating a private key locally and putting it in the CI's list of secrets can hopefully be only one small additional step compared to putting the token in the secrets.

You're absolutely correct. This is only a problem in practice, not in theory. However, the sentence you are responding to links to an example of crates.io getting this wrong. I can only assume that if we have seen one registry get this wrong, then there are others and there will be more in the future.

We have to make it possible if we want it to become widely adopted. I think this massively increases the ability to store the critical secrets on dedicated hardware.
It's mostly the notion of tokens vs signing that I was objecting to. I think "token" is loaded with a different semantic meaning than signatures. But this now makes more sense given this comment from @tarcieri:
I did a major round of editing on Thursday but forgot to write a summary.
text/0000-cargo-asymmetric-tokens.md (Outdated)
Both fields can be set with `cargo login --registry=name --private-key="key" --private-key-subject="subject"`.

A registry can have at most one of `private-key`, `token`, or `credential-process` set.
Should the private key have some kind of prefix so we can identify it as PASETO v3 in case we ever want to change to a different key format?
v3 private keys encoded in the PASERK secret format start with `k3.secret.`. So I think it will be easy to tell that the `private-key` was intended for this RFC.
text/0000-cargo-asymmetric-tokens.md (Outdated)
[TUF](https://theupdateframework.io/) exclusively deals with how a client downloading packages through a mirror can be assured they came from a non-compromised copy of the registry. Which is not the problem this RFC is addressing.

# Unresolved questions
How do we handle tokens passed on the command line via `--token`? Will we also support passing a private key via the command line?
It is possible to set it using the environment variable. But we have a shorthand for setting `--token`, so maybe we should have a shorthand for `private-key` and `private-key-subject`. Do you think it's reasonable to consider this a part of the unresolved question about CLI design, or should we resolve it in the text of this RFC?
I think we should make a decision about it in the RFC.
I'd rather avoid passing private keys on the command line since it increases the chances of them being exposed.
The RFC no longer mentions passing keys on the command line.
For the existing commands that take a `--token`: if it is set, it will override any `private-key`/`private-key-subject` from the config.toml or environment variables. That's the only thing that maintains backwards compatibility.
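A small sketch of that precedence rule; the type and function names are illustrative, not Cargo's internals:

```rust
// Illustrative only: how the backwards-compatibility rule above could look.
#[derive(Debug)]
enum RegistryCredential {
    Token(String),
    AsymmetricKey {
        private_key: String,
        private_key_subject: Option<String>,
    },
}

fn resolve_credential(
    cli_token: Option<String>,
    from_config: Option<RegistryCredential>,
) -> Option<RegistryCredential> {
    match cli_token {
        // --token overrides any private-key/private-key-subject from config.
        Some(token) => Some(RegistryCredential::Token(token)),
        None => from_config,
    }
}

fn main() {
    let cred = resolve_credential(
        Some("old-style-secret".to_string()),
        Some(RegistryCredential::AsymmetricKey {
            private_key: "k3.secret.XXXX".to_string(),
            private_key_subject: None,
        }),
    );
    println!("{cred:?}"); // Token("old-style-secret") — the CLI flag wins.
}
```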
text/0000-cargo-asymmetric-tokens.md (Outdated)
Some registries prioritize user experience over strictest security. They can simplify the process by providing key generation on the server. If your registry works this way, the workflow will be:
1. Log into the registry's website
2. Go to the "register generate a key pair" page, and copy the command it generated for you. It will disappear when you leave the page; the server will not keep a copy of the private key!
3. Run it on the command line. It will look like `cargo login --registry=name --private-key="key"`
This will leak the private key into the shell history. Maybe prompt for the private key instead?
This is an existing problem with the login command. On the other hand, this RFC might be a good opportunity to fix it. Do you think it's reasonable to consider this a part of the unresolved question about CLI design?
Sure
I believe if you do "cargo login" without specifying the token, cargo will prompt for it. However, if there are now two ways (token and private key) cargo would need to identify the pasted data as one or the other appropriately.
I changed this to no longer have `--private-key="key"` but just a `--private-key` that signifies that the prompt will be asking you for a key and not a token.
text/0000-cargo-asymmetric-tokens.md (Outdated)
Figuring out how and when crates.io should support these kinds of tokens is left to a follow-up discussion/RFC. The motivation section describes some of the things that will need to be figured out.

Only after crates.io is not using secret tokens should we remove the support for them in private registries (and the code in cargo to support it).
I really like the idea of using public key crypto to improve cargo security; however, I'm somewhat concerned about deprecating any other type of authentication. It may make it harder to integrate cargo into existing authentication systems.
Using a `credential-process` to issue a short-lived token seems like a flexible and secure option. It could work similarly to how GitHub does HTTP authentication after installing the credential provider. Cargo would call the credential provider with the existing information (registry URL, etc.), the credential provider could verify identity by any means it wanted (OAuth popup, 2FA key entry, etc.), then produce a short-lived bearer token for cargo to send.
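For illustration, a hypothetical credential process along those lines. The environment variable name and the "print the token on stdout" convention are assumptions for this sketch, not the finalized (and still unstable) credential-process protocol:

```rust
use std::env;

// Hypothetical short-lived-token helper for a registry.
fn authenticate_and_mint_token(index_url: &str) -> String {
    // A real helper would pop an OAuth flow, prompt for a 2FA code, talk to a
    // hardware key, etc., and receive a token that expires within minutes.
    format!("short-lived-token-for-{index_url}")
}

fn main() {
    // Assumed input: the registry's index URL, passed via the environment.
    let index_url = env::var("CARGO_REGISTRY_INDEX_URL")
        .unwrap_or_else(|_| "sparse+https://registry.example.com/index/".to_string());
    let token = authenticate_and_mint_token(&index_url);
    println!("{token}"); // Cargo would read the short-lived token from stdout.
}
```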
I think you're reminding us that it is possible to use secret tokens with `credential-process` in a secure way. Which is definitely an important thing to keep in mind.
With some ingenuity, you can use a credential-process to smuggle a secret token into the format of an asymmetric token. The credential process creates a brand-new key pair and puts the secret token and the public key in "custom". The server can validate that the asymmetric token is valid according to this RFC, but then makes all AuthN decisions based on the embedded secret token. The decision about removal is about how much work is involved and for whom, not what is possible.

Fundamentally, this is in the future possibilities section. This RFC does not suggest deprecation or removal. The time to discuss whether deprecation or removal is a good idea is in a future RFC that suggests making that change.
Team member @ehuss has proposed to merge this. The next step is review by the rest of the tagged team members: No concerns currently listed. Once a majority of reviewers approve (and at most 2 approvals are outstanding), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See this document for info about what commands tagged team members can give me.

🔔 This is now entering its final comment period, as per the review above. 🔔
My main outstanding concern with this is interactions with RFC #2947 (token scopes), and how that might impact the choice of a bearer credential format. I think selecting a new bearer token format is a pretty big decision with cross-cutting concerns, and it'd be good to get input from the stakeholders of other RFCs on the pros/cons of particular formats for their applications.

Specifically, my concern is that while PASETO is nice for this particular use case, an alternative covered in the "Rationale and alternatives" section of this RFC is the Biscuit credential format. Biscuits are overkill for this specific use case; however, they would be quite nice and extremely flexible for RFC #2947 token scopes, particularly for use cases like delegating one-time authority to perform a single action to an automated process/bot, which I think is one of the main applications people have in mind for that particular RFC.

Biscuits would also simplify implementations of RFC #2947's scoping features, both in general and for 3rd-party crate registries implemented in languages other than Rust, as implementations of that format include a policy language which can be used to make authorization decisions. So implementations of that feature can lean on the policy engine of Biscuit libraries in their language to make AuthZ decisions, instead of having to implement something from scratch on top of PASETO.

Alternatively, it might make sense to do what was done in RFC #3139 and make the specific choices of key and bearer credential formats opaque. Though I would say there are more concerns in this RFC than in RFC #3139 where concrete descriptions in terms of a specific format are helpful, particularly for reviewing the security design at a high level.
No, as currently defined they do not provide the guarantees required by this use case. I know that the Biscuit specification is actively being worked on, and that previous specifications (Macaroons) had many good ideas that might work for this use case. But Biscuits as they exist now do not provide the properties described here. This is based on my understanding after spending three full days trying to draft an alternative to this RFC based on Biscuits, and confirmation by talking with you and chatting on the Biscuits Matrix server. If I've misunderstood the situation, please feel free to show how current Biscuits can already implement all of those properties, not just that they may be able to in the future.
At some point we do need a specification of what cargo will send and what registries should accept. The point of this RFC is to specify an opaque format for RFC #3139. Until we pick something, implementation cannot start.

Algorithm agility is discussed in the future possibilities section. A server is not required to support all the formats Cargo can produce. As such, supporting a new format is work, but not particularly difficult. Your other points about the advantages of Biscuits are correct. But I think they have been adequately responded to or acknowledged in the alternatives and drawbacks sections.
Can you highlight which specific concerns you have with Biscuits? You linked to a large list, and I'm not seeing anything in that list which Biscuits don't provide, IMO.
Biscuits as they are now only have one secret key: the secret key used to create the root biscuit. From there on out, if you have the biscuit you have all that is required to use it. (Yes, internally secret keys are involved, but they're always included in the message. So the point stands.) The recommended way to use biscuits is for this secret key to be kept server-side. The server attenuates down this token before sending it to the user. Anyone who has this token can use it for anything it is attenuated to do. This directly falls afoul of the first two "Key additional design properties" I linked to.

The server could add an attenuation that requires some other form of AuthN in order to use the token before sending it to the user. At which point this RFC can be that "other form of AuthN". Against recommendations, we could have the client mint the token, signing with its secret key. But that inverts the entire design of Biscuit. And we would need to specify how and when a server is supposed to trust an arbitrary biscuit.

Again, I would not be surprised if I missed something important. But the devil is in the details; can you fill out the details of the proposal you would like to see?
From what I understand, you're going towards a design where the client holds a secret key and uses it to sign something so that no secret info goes over the wire and is shared with the server? This is compatible with Biscuit's design:
One of the goals of this RFC is to pick a specific authentication scheme and token format. We set the requirement early on that we wanted to pick one format and use versioning, and future format changes would occur via new versions. This seems to be the direction that cryptographic work is taking in general: selecting specific algorithms and formats rather than being arbitrarily extensible.

As a result, our goal is to pick a specific format that meets all the requirements. Our goal is also to pick a format that we can get a security expert to take a look at and confirm that it meets those requirements. And all else being equal, we're looking for the simplest solution that meets those requirements. We've currently done all of those things for the current selection of PASETO.

If another format exists that may potentially meet the requirements, that alone isn't enough to argue that we should switch to that format. To consider switching to a different format at this point, we would need 1) to be confident that it met all the requirements, in detail, 2) to have security expertise available that can review that format in detail, and 3) to have a specific valuable reason to switch above and beyond both of those (since 1 and 2 would just confirm that another format is viable, not that another format is better). And the explanation for (3) would need to show that that feature is something we actively want now (not just in the hypothetical future) and that it's worth the complexity added.

I should also mention that this choice is not irrevocable; if there's an issue in the future, or if there's a feature we need and need to switch formats to get, we can always switch using a new version number.
The final comment period, with a disposition to merge, as per the review above, is now complete. As the automated representative of the governance process, I would like to thank the author for their work and everyone else who contributed. This will be merged soon.

Huzzah! The @rust-lang/cargo team has decided to accept this RFC. To track further discussion, subscribe to the tracking issue here:
- RFC PR: [rust-lang/rfcs#0000](https://github.com/rust-lang/rfcs/pull/0000)
- Rust Issue: [rust-lang/rust#0000](https://github.com/rust-lang/rust/issues/0000)
- RFC PR: [rust-lang/rfcs#3231](https://github.com/rust-lang/rfcs/pull/3231)
- Cargo Issue: [rust-lang/rust#10519](https://github.com/rust-lang/cargo/issues/10519)
- Cargo Issue: [rust-lang/rust#10519](https://github.com/rust-lang/cargo/issues/10519)
- Cargo Issue: [rust-lang/cargo#10519](https://github.com/rust-lang/cargo/issues/10519)
Also, the rendered link is broken.

Thanks, I have fixed both issues.
Implement RFC 3139: alternative registry authentication support

Allows registries to request Cargo to send the authentication token for all requests, rather than just publish/yank, implementing [RFC 3139](#10474).

### Items from the [tracking issue](#10474)

> Do registries need a more fine-grained switch for which API commands require authentication?

This PR uses the `auth_required` boolean as described in the RFC.

> The RFC mentions adding --token to additional commands like install and search

These flags are not added by this PR.

> Consider changing the name and form of the X- header

Changed to the `www-authenticate` header as suggested by the comments.

> Will there be any concerns with the interaction with rust-lang/rfcs#3231

Not that I know of.

-------------

Adds a new field `"auth-required": true` to `config.json` that indicates Cargo should include the token in all requests to a registry. For HTTP registries, Cargo first attempts an un-authenticated request; then, if that fails with HTTP 401, an authenticated request is attempted. The registry server may include a `www-authenticate` header with the HTTP 401 to tell Cargo which URL the user can visit to acquire a token (crates.io/me).

Since the API URL is not known (because it's stored in the index), the unstable credential provider feature is modified to key off the index URL, and the registry name is no longer provided. To handle the case where an alternative registry's name is not known (such as coming from a lock file, or via `--index`), Cargo can now look up the token in the configuration by matching on the index URL. This introduces a new error if two alternative registries are configured with the same index URL.

Several operations, such as `cargo install`, could have had a `--token` argument added; however, it appears that Cargo would like to move away from passing the token on the command line for security reasons. In this case, users would need to configure the registry via the config file (or environment variables) when using `cargo install --index ...` or similar.
This is spun out of #3139. This provides a way to use asymmetric cryptography to authenticate to registries in a manner flexible enough to be used for all communication with a registry.
This could allow a registry to require interaction with a YubiKey to publish. This could allow a registry to use client certificates as a proof of identity. The details of how registries choose to trust their users are out of scope for this RFC, as is figuring out how crates.io could use this system. If you're excited to find ways to use the system, then let's stay focused here and get it merged so you can start experimenting!
rendered