Cargo publish times out without failure #11616
Comments
somewhat off-topic, but if you use something like https://github.com/glide-rs/flarmnet-rs/blob/1aa7363ace21096fa5fd3107207d0c03c0be984f/.github/workflows/release.yml#L17-L19 you don't have to use |
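For context, a minimal sketch of that kind of setup, assuming the point is that cargo picks up the crates.io token from the CARGO_REGISTRY_TOKEN environment variable so no --token argument is needed on the command line (the workflow name, trigger, and secret name below are illustrative, not copied from the linked workflow):

```yaml
# Illustrative GitHub Actions workflow; names and trigger are assumptions,
# not taken from the linked flarmnet-rs workflow.
name: release
on:
  push:
    tags: ["v*"]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Publish to crates.io
        run: cargo publish
        env:
          # cargo reads the crates.io token from this environment variable,
          # so no --token argument is needed on the command line.
          CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
```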
Huh, I didn't document this behavior in #11062. This was an intentional choice. We were transitioning from no timeout to a timeout and were concerned about the impact on existing users (i.e. not dependent on blocking, single crate) when sporadic issues, like GitHub being down, occur, and (1) didn't want the timeout to be too long and (2) didn't want it to be an error when it wasn't before. Now, whether we have a path to later turning this into an error is a different question. We were especially worried about the impact of this behavior change on alternative registries and had #11222 as an escape hatch, but we haven't yet seen a need for or even interest in it, or a problem with blocking.
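For readers unfamiliar with that escape hatch: #11222 is about making the publish wait configurable. As a rough illustration only (the key name and whether this requires an unstable -Z flag are assumptions here, not confirmed by this thread), such a knob in .cargo/config.toml might look like:

```toml
# .cargo/config.toml (illustrative sketch only; check the cargo documentation
# for the exact key name and its stabilization status)
[publish]
timeout = 300  # seconds to wait for the published crate to become available
```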
The situation has improved since then. We now have more HTTP info and a better block-wait notice when publishing.
To resolve this issue, the missing piece should be adding a simple sentence mentioning in |
@weihanglo while documenting helps, I think the core of this issue is about making it an error, or am I missing something?
I was thinking that it may break somebody's workflow, but since it already timed out, turning this into a hard error makes more sense than no error. What do people think?
It makes more sense to me.
More so, I meant that we need to make sure we decide on the merit and timing of a hard error as the basis for closing this, rather than closing it because of the documentation alone.
I am admittedly not familiar with all of the implications of erroring out in this situation, but it seems appropriate given that the command's explicit intent (publish the crate) didn't succeed. I can't think of a use case where I would issue the publish command, manually or in automation, and find a failure acceptable. @epage, I'd love your thoughts before digging deeper into this.
@after-ephemera the publish endpoint reported success. However, the publish process is asynchronous to that. So the reasons someone will see a timeout are (1) there is some kind of service outage blip, or (2) the rest of the publish pipeline failed. I'm assuming a failure in the publish pipeline would generally be considered a bug, be rare, and be actively monitored for by the crates.io team. I and others have seen service blips. So a timeout doesn't mean the publish failed; more than likely it succeeded.
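In practice that means a timeout is worth double-checking rather than assuming failure. For example, one way to see whether the new version actually landed is to query the public crates.io API (the crate name below is a placeholder, and jq is used only for readability):

```sh
# Check whether the publish landed despite the client-side timeout.
# "my-crate" is a placeholder; substitute the real crate name.
curl -s https://crates.io/api/v1/crates/my-crate | jq -r '.crate.max_version'
```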
Ah, so there's a bit more nuance here; thank you for explaining it. When you say "service blips", are you referring to intermittent network connectivity issues? If so, I would venture a guess that it's fine to leave as is, since either that or pipeline issues are expected to be rare and intermittent.
Basically, yes. |
Then should we close this issue?
Problem
When trying to publish a crate, we recently experienced an issue reported here. The command `cargo publish` timed out without returning an error.

Steps
This may not be repeatable without hacking the `cargo publish` network communication. Our guess is that the crates.io server is currently experiencing some issue. There is a GitHub CI recipe that failed (the problem occurs on the `cargo publish --token $CRATES_IO` step).

Below is the screenshot of a local attempt to run `cargo publish --allow-dirty`.

Possible Solution(s)
Return with an error upon timeout.
Notes
No response
Version