cargo publish "memory allocation of .. bytes failed" with large, ignored test assets #7543
Comments
Thanks for the detailed bug report! I think, yeah, the right solution here is to just avoid reading everything into memory and hash the file incrementally instead. It'll still probably take forever to read and hash a 1 TB file, though, so processing exclude rules sensibly would also be reasonable here!
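A minimal sketch of that idea: hash the file in fixed-size chunks so memory use stays bounded by the buffer size. `DefaultHasher` and the function name `hash_file_streaming` are stand-ins for illustration, not cargo's actual hashing code.

```rust
use std::collections::hash_map::DefaultHasher;
use std::fs::File;
use std::hash::Hasher;
use std::io::{self, Read};
use std::path::Path;

// Hash a file without ever holding more than `CHUNK` bytes of it in memory.
fn hash_file_streaming(path: &Path) -> io::Result<u64> {
    const CHUNK: usize = 64 * 1024;
    let mut file = File::open(path)?;
    let mut buf = [0u8; CHUNK];
    let mut hasher = DefaultHasher::new();
    loop {
        let n = file.read(&mut buf)?;
        if n == 0 {
            break; // end of file
        }
        hasher.write(&buf[..n]); // feed only the bytes actually read
    }
    Ok(hasher.finish())
}

fn main() -> io::Result<()> {
    // Example: hash a file in the current directory.
    println!("{:016x}", hash_file_streaming(Path::new("Cargo.toml"))?);
    Ok(())
}
```

Peak memory stays at the 64 KiB buffer regardless of file size, though reading a sparse 1 TB image would still take time proportional to its apparent size.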
Hi, since this has been marked easy I thought I could help with the issue. I have read through the description, but I fail to come up with a solution without knowing some facts about the application. Does [...]? So, any ideas?
@hbina Thanks for taking a look! I'm not sure I understand the question about [...]. The solution is to stop using [...] (i.e. to stop reading the whole file into memory before hashing it).
I see, I will take a look again after school today. Edit: having skimmed through it a bit, it seems like [...], but this seems like it will break other code. Edit: never mind...
Added a helper function for util::hex::hash_64 that streams the content instead of reading the entire file into memory in one go.
Update cargo, clippy

Closes #69601

## cargo

16 commits in e57bd02999c9f40d52116e0beca7d1dccb0643de..bda50510d1daf6e9c53ad6ccf603da6e0fa8103f, 2020-02-21 20:20:10 +0000 to 2020-03-02 18:05:34 +0000

- Fix rare failure in collision_export test. (rust-lang/cargo#7956)
- Ignore broken Cargo.toml in git sources (rust-lang/cargo#7947)
- Add more fingerprint mtime debug logging. (rust-lang/cargo#7952)
- Fix plugin tests for latest nightly. (rust-lang/cargo#7955)
- Simplified usage code of SipHasher (rust-lang/cargo#7945)
- Add a special case for git config discovery inside tests (rust-lang/cargo#7944)
- Fixes issue rust-lang/cargo#7543 (rust-lang/cargo#7946)
- Filter out cfgs which should not be used during build (rust-lang/cargo#7943)
- Provide extra context on a query failure. (rust-lang/cargo#7934)
- Try to clarify `cargo metadata`'s relationship with the workspace. (rust-lang/cargo#7927)
- Update libgit2 dependency (rust-lang/cargo#7939)
- Fix link in comment (rust-lang/cargo#7936)
- Enable `cargo doc --open` tests on macos. (rust-lang/cargo#7932)
- build-std: remove sysroot probe (rust-lang/cargo#7931)
- Try to clarify how feature flags work on the "current" package. (rust-lang/cargo#7928)
- Add extra details in the new feature resolver doc comment. (rust-lang/cargo#7918)

## clippy

6 commits in fc5d0cc..8b7f7e6, 2020-02-24 05:58:17 +0000 to 2020-03-02 20:00:31 +0000

- Rustup to #69469 (rust-lang/rust-clippy#5254)
- Some rustups (rust-lang/rust-clippy#5247)
- Update git2 to 0.12 (rust-lang/rust-clippy#5232)
- Rustup to #61812 (rust-lang/rust-clippy#5231)
- Add lint to improve floating-point expressions (rust-lang/rust-clippy#4897)
- Do not run deploy action on other repos (rust-lang/rust-clippy#5222)
This should be fixed by #7946.
I can't believe this is the trend with software management. It shouldn't take large amounts of memory to compile a simple program. Once modules are compiled, they should sit in a cache until all the pieces are ready to be put together. (I wanted to say "assembled", but I didn't want someone to think I was implying assembly language.)

It shouldn't have taken an $80 purchase at the local tech shop to figure out that cargo's way of doing things is to hold all the compiled pieces in RAM until things are built. I know this because I used a 2GB Pi 4 to try to build a cargo app from source and was getting kernel warnings out of left field, while on the 4GB Pi 4 that wasn't an issue, though I was worried about things slowing down like they did on the 2GB device.

The point being: when cargo compiles components and sets them aside, they should be stored on the hard drive (where space is plentiful and there's room for virtual memory, if that's how cargo wants to do things), not in memory, which the rest of the system still needs to function.

I really like the Rust programming language, and I am excited to try out some of its features when I write code for my own project in the future. But if Rust relies on consuming large amounts of the computer's more precious resources (especially RAM), then Rust might not be the ideal programming language if its package manager is flawed.

(I know this issue has been closed for a few months now. I just wanted to get these thoughts off my chest, considering the amount of effort I spent this week trying to understand why even something with minimal resources would have trouble handling an otherwise simple task that an older computer with lesser capabilities could handle.)

OS: Manjaro Linux ARM 20.06
Problem

`cargo publish` fails with a "memory allocation of .. bytes failed" error if you have large files in your build directory, even if they are excluded.

I expect this to complete in all cases, i.e. a crate can be published regardless of how much memory I have (within reason!). But I especially expect it to pass in this case, where the file causing the error is ignored and will not be published.
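For context, exclusion is expressed with the `exclude` key in `Cargo.toml`; the crate name and path below are placeholders, assuming the test data lives under `tests/data`:

```toml
[package]
name = "example-crate"   # placeholder
version = "0.1.0"
edition = "2018"
# Files matching these patterns are left out of the published .crate,
# yet cargo package/publish still tried to read and hash them.
exclude = ["tests/data/*"]
```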
Steps

- A `build.rs` which creates a large file in this test data directory (a sketch follows below).
- `cargo package` or `cargo publish`.
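A hypothetical `build.rs` that reproduces this cheaply by creating a sparse file, which looks huge without using real disk space (path and size are arbitrary):

```rust
// build.rs — writes a large sparse file into an (excluded) test data directory.
// Writing into the source tree from build.rs is generally discouraged; this is
// only to reproduce the report above.
use std::fs::{self, OpenOptions};
use std::io::{Seek, SeekFrom, Write};

fn main() {
    fs::create_dir_all("tests/data").unwrap();
    let mut f = OpenOptions::new()
        .create(true)
        .write(true)
        .open("tests/data/huge.img")
        .unwrap();
    // Seek far past the end and write a single byte: the file reports a
    // multi-gigabyte length but occupies almost no space on most filesystems.
    f.seek(SeekFrom::Start(4u64 * 1024 * 1024 * 1024)).unwrap();
    f.write_all(&[0]).unwrap();
}
```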
Possible Solution(s)
I believe this was introduced by 78a60bc, which intentionally hashes all of `build.rs`'s output, to detect changes it has made. I believe that change is maybe correct in this case: if `build.rs` changed the test data, then maybe the tests will perform differently, so it's a different test?

The allocation is failing here, which you can see with `gdb`, but not with logging or backtraces:

cargo/src/cargo/ops/cargo_package.rs, lines 693 to 694 in 759431f

i.e. it reads the entire file into memory, then hashes the contents. Chunked hashing should not require unbounded memory?
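For illustration only (this is not the code at the permalink above), the shape of the failing pattern is roughly the following, in contrast to the chunked sketch after the first comment:

```rust
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};

// Full-read pattern: the whole file is materialised in a Vec before hashing,
// so the allocation grows with the apparent file size — a sparse 1 TB test
// image means a ~1 TB allocation, which aborts with
// "memory allocation of .. bytes failed".
fn hash_whole_file(path: &str) -> std::io::Result<u64> {
    let bytes = fs::read(path)?; // allocates a buffer as large as the file
    let mut hasher = DefaultHasher::new();
    bytes.hash(&mut hasher);
    Ok(hasher.finish())
}

fn main() -> std::io::Result<()> {
    println!("{:016x}", hash_whole_file("Cargo.toml")?);
    Ok(())
}
```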
Maybe:
I saw this in a real project, https://github.com/FauxFaux/ext4-rs, which extracts some sparse (i.e. tiny but apparently large) disc images from a tiny tar file during its tests. It publishes fine on e.g. `1.34.2`, but not on stable.

Notes
Output of `cargo version`:

I'm on amd64 Ubuntu with ~30GB of memory available. If you have over 1TB of virtual memory available, then the above testcase might pass (lucky you).