As soon as I downloaded an official ARM rustc build (jemalloc enabled, 0554abac6 2016-06-10) and tried running a benchmark, I got a segfault reminiscent of a past issue (I'll try to find the link):
```
gdb --args parity-master/util/target/release/bigint-ea02333b5a5f7bfd --bench

running 7 tests
test u128_mul ... bench: 502,505 ns/iter (+/- 6,315)
test u256_add ...

Program received signal SIGSEGV, Segmentation fault.
std::thread::local::{{impl}}::with<alloc::rc::Rc<core::cell::RefCell<rand::reseeding::ReseedingRng<rand::StdRng, rand::ThreadRngReseeder>>>,closure,alloc::rc::Rc<core::cell::RefCell<rand::reseeding::ReseedingRng<rand::StdRng, rand::ThreadRngReseeder>>>> (self=<optimized out>, f=...)
    at /buildslave/rust-buildbot/slave/nightly-dist-rustc-cross-host-linux/build/src/libstd/thread/local.rs:211
211     /buildslave/rust-buildbot/slave/nightly-dist-rustc-cross-host-linux/build/src/libstd/thread/local.rs: No such file or directory.
(gdb) bt
#0  std::thread::local::{{impl}}::with<alloc::rc::Rc<core::cell::RefCell<rand::reseeding::ReseedingRng<rand::StdRng, rand::ThreadRngReseeder>>>,closure,alloc::rc::Rc<core::cell::RefCell<rand::reseeding::ReseedingRng<rand::StdRng, rand::ThreadRngReseeder>>>> (self=<optimized out>, f=...)
    at /buildslave/rust-buildbot/slave/nightly-dist-rustc-cross-host-linux/build/src/libstd/thread/local.rs:211
#1  rand::thread_rng ()
    at .cargo/registry/src/github.com-1ecc6299db9ec823/rand-0.3.14/src/lib.rs:884
#2  0x7f55c044 in rand::random<u64> ()
    at .cargo/registry/src/github.com-1ecc6299db9ec823/rand-0.3.14/src/lib.rs:946
#3  0x7f55bdf8 in bigint::u256_add::{{closure}} () at benches/bigint.rs:37
#4  test::{{impl}}::iter<bigint::uint::U256,closure> (self=<optimized out>, inner=...)
    at /buildslave/rust-buildbot/slave/nightly-dist-rustc-cross-host-linux/build/src/libtest/lib.rs:1225
#5  bigint::u256_add (b=<optimized out>) at benches/bigint.rs:35
#6  0x7f56d7b0 in test::run_test::h6c9837483a43f30b ()
```
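Judging by the backtrace, no bigint code is involved in the crash itself: frames #0-#2 are rand's thread-local RNG initialization. A minimal sketch of a bench that should hit the same path (the `tls_rand` name is mine; assumes nightly rustc and rand 0.3.14 as above, untested on the failing target):

```rust
// Minimal sketch, not verified on ARM: exercises the same
// rand::thread_rng() thread-local path as benches/bigint.rs:37.
#![feature(test)]
extern crate rand;
extern crate test;

use test::Bencher;

#[bench]
fn tls_rand(b: &mut Bencher) {
    // rand::random::<u64>() goes through rand::thread_rng(), which
    // lazily initializes a thread-local ReseedingRng (frames #0/#1).
    b.iter(|| rand::random::<u64>());
}
```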
EDIT:
I meant this issue: #30919
If the jemalloc downgrade was later reverted, could this be the same issue, never actually fixed?
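If it helps triage: one way to rule jemalloc in or out on a nightly of that vintage (an assumption on my part, I haven't run this on ARM) is to rebuild the bench crate with the system allocator and see whether the segfault persists:

```rust
// Sketch only: opts the crate out of jemalloc via the (then
// nightly-only) alloc_system crate; if the crash disappears,
// the jemalloc build is implicated.
#![feature(alloc_system)]
extern crate alloc_system;
```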