ICE from running out of memory #2571
I'll try to narrow this down, since I have no idea of the specifics under which this is reproducible for others.
By checking out d3c6416 and manually reverting changes one at a time, I've determined that the following change is responsible for the ICE:
I have no idea what this is doing, only that reverting it successfully compiles (though I didn't run the test suite). Unfortunately, applying this same change to today's HEAD just gets me another ICE. I'll be bisecting that next.
@brson have you seen this? Maybe this is a bug in unique ptr unification related to 32-bit somehow?
The first commit where reverting that single line does not solve the ICE is 704a5a8, which is a commit that registers new snapshots. I've tried compiling HEAD on master using the old snapshots, but (unsurprisingly) this doesn't work. Even if I could get it to compile, that wouldn't determine or address the underlying cause of this bug. It could be useful to see if this is a problem on 32-bit hardware in general, if anyone has any lying around. Let me know if there's anything more you'd like me to try.
(you don't need 32-bit hardware, most likely, just a 32-bit build)
@nikomatsakis I haven't been able to reproduce it so far. I'm setting up a 32-bit Debian VM now.
I was not able to reproduce this on a 32-bit Debian 6.0.5 EC2 instance.
Hm. The last time this system had a mysterious unreproducible ICE, it resolved itself after a month or so. I suppose that's the best I can hope for, unless someone has any specific instructions for me (and if anyone wants a login to this server to toy around with it themselves, I'd be happy to oblige). I'll leave this bug open for now just in case someone else manages to hit the same thing.
I investigated on this box and noticed that it has very little RAM (512MB maybe). I reproduced the error once, and rustc also got killed by the OOM killer once. My current theory is that the addrspace change forced LLVM to use more RAM, so when we go to spawn the linker process it ends up failing, but I haven't actually debugged the call to rust_run_process to see what happens. The real problem here may be that we don't report better errors when fork fails.
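As a rough illustration of what better reporting could look like when spawning the linker fails under memory pressure, here's a minimal sketch in modern Rust (the `spawn_linker` helper, the `cc` linker name, and the diagnostic wording are all assumptions for illustration, not the actual `rust_run_process` code):

```rust
use std::io::ErrorKind;
use std::process::{exit, Child, Command};

// Hypothetical helper: spawn the linker and surface the OS-level failure
// (e.g. fork/exec failing with ENOMEM) instead of a generic ICE.
fn spawn_linker(args: &[&str]) -> Child {
    match Command::new("cc").args(args).spawn() {
        Ok(child) => child,
        Err(e) if e.kind() == ErrorKind::OutOfMemory => {
            eprintln!("error: could not spawn linker: out of memory");
            exit(1);
        }
        Err(e) => {
            eprintln!("error: could not spawn linker: {e}");
            exit(1);
        }
    }
}
```

The point is simply that the OS error gets preserved and shown to the user, rather than surfacing later as an unexplained compiler failure.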
Re: reporting better errors when running out of memory, here's the result of compiling on this same server nowadays:
Is that LLVM output? I suppose it's entirely possible that the compiler has shifted in the past few months such that it no longer fails at the exact spot that caused the ICE that inspired this issue. |
I'm seeing a very similar error when trying to build Rust 0.4 on a Linux VM with 1GB of memory:
Hard to say what the error actually is because upcall_call_shim_on_c_stack looks like this:
I tried changing the exception handler so that it printed the exception message to stderr, but didn't see the message, presumably because the compiler being executed is a pre-built snapshot. I did note that memory usage hit 83% as I was compiling, so it's quite possible that it spiked to 100% and ran out.
FWIW, I noticed Blub\w on IRC said that he can't compile Rust with 1.5GB of memory when Firefox is running.
This is still relevant, but it's... not really a bug that can be fixed, short of drastically reducing rustc's memory usage.
It's not really possible to report a better error than what we already have:
For me, the bug was not so much that rustc ran out of memory, but that it died without saying that it ran out of memory.
Well, it silences the failure reasons by default and reports them as an ICE. That would be a separate issue.
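To illustrate that separate issue: the pattern below is a minimal sketch in modern Rust (not rustc's actual ICE machinery; the hook suppression and message wording are assumptions) of how a driver can print the original panic payload, e.g. an out-of-memory message, instead of swallowing it into a generic ICE report:

```rust
use std::panic;
use std::process::exit;

fn main() {
    // Suppress the default panic hook; we report the failure ourselves.
    panic::set_hook(Box::new(|_| {}));

    // Run the real work under catch_unwind so the panic payload survives.
    let result = panic::catch_unwind(|| {
        // Stand-in for the compiler's actual entry point.
        panic!("fork failed: Cannot allocate memory");
    });

    if let Err(payload) = result {
        // Recover the human-readable message from the panic payload
        // and include it in the ICE report instead of hiding it.
        let msg = payload
            .downcast_ref::<&str>()
            .map(|s| s.to_string())
            .or_else(|| payload.downcast_ref::<String>().cloned())
            .unwrap_or_else(|| "unknown failure".to_string());
        eprintln!("error: internal compiler error: {msg}");
        exit(101);
    }
}
```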
I've been getting an ICE for about a week on one of my machines. 32-bit Debian. It's a Linode VPS, so I can't give the exact hardware specs. Bisecting the bug puts the first bad commit at d3c6416. Here's the RUST_LOG backtrace:
https://gist.github.com/2918045
EDIT: Title changed to reflect findings