Jemalloc related segfault in thread local access on ARM #34248
@alexcrichton The benchmark binary runs and crashes just the same under qemu:
$ qemu-arm -b 10000 target/release/bigint-fd35ca520d2bc759 --bench
$ gdb
so cross-compilation should be enough to reproduce it. Or I can upload the binary :-) (it probably won't break out of qemu and kill the dinosaur, but you never know)
@petevine I'd personally recommend titling issues a bit differently; "jemalloc hates me" isn't too indicative of what's going on. I unfortunately was unable to reproduce this cross-compiling to arm-unknown-linux-gnueabihf; do you have a small sequence of steps which reproduces it? I'd also recommend trying to minimize this as much as possible. As-is, this is very unlikely to be fixed by someone else, unfortunately :(
@alexcrichton The title was my immediate reaction to, seemingly, another jemalloc issue on ARM, but it turned out to be exactly the same as the old one. Do you want me to reopen #30919 instead? A quick recap of the issue(s) at hand:
Here's a current reproduction of the rand example crash:
#0 0x7f55e4f8 in core::cell::{{impl}}::get<usize> (self=0x1)
    at ..src/libcore/cell.rs:195
#1 0x7f55e4d0 in alloc::rc::RcBoxPtr::strong<core::cell::RefCell<rand::reseeding::ReseedingRng<rand::StdRng, rand::ThreadRngReseeder>>,alloc::rc::Rc<core::cell::RefCell<rand::reseeding::ReseedingRng<rand::StdRng, rand::ThreadRngReseeder>>>> (self=0xb6ff84c8)
at ..src/liballoc/rc.rs:875
#2 0x7f55e1f8 in alloc::rc::RcBoxPtr::inc_strong<core::cell::RefCell<rand::reseeding::ReseedingRng<rand::StdRng, rand::ThreadRngReseeder>>,alloc::rc::Rc<core::cell::RefCell<rand::reseeding::ReseedingRng<rand::StdRng, rand::ThreadRngReseeder>>>> (self=0xb6ff84c8)
at ..src/liballoc/rc.rs:880
#3 0x7f55e160 in alloc::rc::{{impl}}::clone<core::cell::RefCell<rand::reseeding::ReseedingRng<rand::StdRng, rand::ThreadRngReseeder>>> (
self=0xb6ff84c8) at ..src/liballoc/rc.rs:490
#4 0x7f55e6a8 in rand::thread_rng::{{closure}} (t=0xb6ff84c8)
at .cargo/registry/src/github.com-1ecc6299db9ec823/rand-0.3.14/src/lib.rs:884
#5 0x7f55e5e4 in std::thread::local::{{impl}}::with<alloc::rc::Rc<core::cell::RefCell<rand::reseeding::ReseedingRng<rand::StdRng, rand::ThreadRngReseeder>>>,closure,alloc::rc::Rc<core::cell::RefCell<rand::reseeding::ReseedingRng<rand::StdRng, rand::ThreadRngReseeder>>>> (
self=0x7f598dfc <rand::thread_rng::THREAD_RNG_KEY::h8711901663c7347a>, f=...)
at ..src/libstd/thread/local.rs:211
#6 0x7f55e51c in rand::thread_rng () at .cargo/registry/src/github.com-1ecc6299db9ec823/rand-0.3.14/src/lib.rs:884
#7 0x7f559048 in rand_test::main () at src/main.rs:5
#8 0x7f56f03c in std::panicking::try::call::h94b205a774fa265c ()
#9 0x7f573e4c in __rust_try ()
#10 0x7f573de0 in __rust_maybe_catch_panic ()
#11 0x7f56ea90 in std::rt::lang_start::ha9a21237c3a23329 ()
#12 0x7f5598f4 in main ()
The more informative bit is the Cell::get from frame #0 (note self=0x1):
pub fn get(&self) -> T {
    unsafe { *self.value.get() }
}
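For context, the backtrace walks rand 0.3's thread_rng() path: a thread_local! slot holding an Rc<RefCell<...>> is lazily initialized on first access and cloned out, and it is that first thread-local access which faults. Below is a minimal sketch of the same shape; RngState, its seed field, and this thread_rng wrapper are hypothetical stand-ins rather than rand's actual code, but it should exercise the identical TLS path without pulling in the crate:

use std::cell::RefCell;
use std::rc::Rc;

// Stand-in for rand 0.3's ReseedingRng<StdRng, ThreadRngReseeder>; the contents
// are irrelevant to the crash, only the Rc<RefCell<_>> shape matters.
struct RngState {
    seed: u64,
}

thread_local! {
    // Same shape as rand's THREAD_RNG_KEY: lazily initialized once per thread.
    static THREAD_RNG_KEY: Rc<RefCell<RngState>> =
        Rc::new(RefCell::new(RngState { seed: 0 }));
}

// Mirrors rand::thread_rng(): clone the Rc out of the thread-local slot.
// The clone bumps the strong count, which is the Cell::get / inc_strong
// pair seen in frames #0-#3 above.
fn thread_rng() -> Rc<RefCell<RngState>> {
    THREAD_RNG_KEY.with(|t| t.clone())
}

fn main() {
    // First access initializes the thread-local; on the affected ARM builds
    // this is where the segfault shows up.
    let rng = thread_rng();
    println!("seed = {}", rng.borrow().seed);
}

If this standalone version crashes the same way under qemu, that would point at the thread-local machinery of the jemalloc-enabled build rather than at rand itself.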
@alexcrichton Any idea of a better title yet? :)
"segfault in thread local access on ARM" seems infinitely more descriptive of the actual problem than "jemalloc hates me". |
@sfackler As the issue is not new, I meant a title reflecting the underlying problem, but here you go.
Hey, where did the crash go? (@glandium's patches look OSX-specific, but maybe they helped too)
The moment I downloaded an official ARM rustc build (jemalloc enabled, 0554abac6 2016-06-10) and tried running a benchmark, I got a segfault reminiscent of another past issue. (I'll try finding the link.)
gdb --args parity-master/util/target/release/bigint-ea02333b5a5f7bfd --bench
EDIT: I meant this issue: #30919. If the jemalloc downgrade was reverted later, maybe it's the same issue that was never fixed?