
cargo 1.56 beta hang when run inside Gentoo's sandbox #89522

Closed
12101111 opened this issue Oct 4, 2021 · 35 comments
Labels
C-bug Category: This is a bug. P-medium Medium priority regression-from-stable-to-beta Performance or correctness regression from stable to beta. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue.
Milestone

Comments


12101111 commented Oct 4, 2021

Not sure if this is considered a bug/regression in Rust. If not, just close this issue.

#81825 breaks running cargo inside Gentoo's sandbox.

How to Reproduce

  1. Run sandbox cargo build on any Rust project that has some dependencies (e.g. rustc, Firefox, or https://github.com/ogham/exa).
  2. cargo hangs after rustc has compiled a few crates.
  3. Some zombie rustc or cargo processes never exit:
 16.7m   0.0   0.1   0:01.05 S  `- /home/han/.rustup/toolchains/nightly-x86_64-unknown-linux-musl/bin/cargo build --offline
  0.0m   0.0   0.0   0:00.00 Z      `- [cargo] <defunct>
  7.2m   0.0   0.0   0:00.00 S      `- /home/han/.rustup/toolchains/nightly-x86_64-unknown-linux-musl/bin/cargo build --offline
  0.0m   0.0   0.0   0:00.00 Z      `- [cargo] <defunct>
  7.3m   0.0   0.0   0:00.00 S      `- /home/han/.rustup/toolchains/nightly-x86_64-unknown-linux-musl/bin/cargo build --offline
  7.3m   0.0   0.0   0:00.00 S      `- /home/han/.rustup/toolchains/nightly-x86_64-unknown-linux-musl/bin/cargo build --offline

(screenshot of the hung process tree)

The bisect process:

  1. Use cargo bisect-rustc with the timeout support added in cargo-bisect-rustc#135 (add timeout functionality for bisecting hangs).
  2. test.sh:
#!/bin/sh
export LIBGIT2_SYS_USE_PKG_CONFIG=1
export PKG_CONFIG_ALLOW_CROSS=1
sandbox cargo build --offline
  3. Run cargo bisect-rustc --start=2021-07-30 --end=2021-09-30 --script=./test.sh -t 120
  4. Result: regression in 4e21ef2

Meta

rustc --version --verbose:

rustc 1.57.0-nightly (c02371c44 2021-10-01)
binary: rustc
commit-hash: c02371c442f811878ab3a0f5a813402b6dfd45d2
commit-date: 2021-10-01
host: x86_64-unknown-linux-musl
release: 1.57.0-nightly
LLVM version: 13.0.0

and

rustc 1.56.0-nightly (gentoo)
binary: rustc
commit-hash: unknown
commit-date: unknown
host: x86_64-unknown-linux-musl
release: 1.56.0-nightly
LLVM version: 13.0.0

(my custom build of rustc 1.56 beta3)
Backtrace of cargo

* thread #1, name = 'cargo', stop reason = signal SIGSTOP
  * frame #0: 0x00007f133a7fc005 ld-musl-x86_64.so.1`__cp_end
    frame #1: 0x00007f133a7f7d4f ld-musl-x86_64.so.1`__syscall_cp_c(nr=202, u=<unavailable>, v=<unavailable>, w=<unavailable>, x=<unavailable>, y=<unavailable>, z=<unavailable>) at pthread_cancel.c:33:6
    frame #2: 0x00007f133a7f6d9d ld-musl-x86_64.so.1`__timedwait_cp [inlined] __futex4_cp(addr=0x00007ffc7fd9fe34, op=<unavailable>, val=2, to=0x00007ffc7fd9fdb0) at __timedwait.c:24:6
    frame #3: 0x00007f133a7f6d73 ld-musl-x86_64.so.1`__timedwait_cp(addr=<unavailable>, val=2, clk=<unavailable>, at=<unavailable>, priv=<unavailable>) at __timedwait.c:52:7
    frame #4: 0x00007f133a7f8364 ld-musl-x86_64.so.1`__pthread_cond_timedwait(c=<unavailable>, m=<unavailable>, ts=0x00007ffc7fd9fef8) at pthread_cond_timedwait.c:100:9
    frame #5: 0x000055e1fdfe0f69 cargo`std::sys::unix::condvar::Condvar::wait_timeout::h23e2f7508abac320(self=0x00007f1339e367f0, mutex=0x00007f1339e01fe0, dur=<unavailable>) at condvar.rs:114:17
    frame #6: 0x000055e1fdab6ddd cargo`cargo::core::compiler::job_queue::DrainState::drain_the_queue::h8617a92b08e36d7f [inlined] std::sys_common::condvar::Condvar::wait_timeout::h66918be3196c044a(self=0x00007f1339e39930, mutex=0x00007f1339e39900, dur=<unavailable>) at condvar.rs:56:9
    frame #7: 0x000055e1fdab6db3 cargo`cargo::core::compiler::job_queue::DrainState::drain_the_queue::h8617a92b08e36d7f at condvar.rs:383:27
    frame #8: 0x000055e1fdab6db3 cargo`cargo::core::compiler::job_queue::DrainState::drain_the_queue::h8617a92b08e36d7f at condvar.rs:460:21
    frame #9: 0x000055e1fdab6d1b cargo`cargo::core::compiler::job_queue::DrainState::drain_the_queue::h8617a92b08e36d7f [inlined] cargo::util::queue::Queue$LT$T$GT$::pop::h2ea0ee746f980829(self=0x00007f1339e39900, timeout=(secs = 0, nanos = 500000000)) at queue.rs:53:35
    frame #10: 0x000055e1fdab6cff cargo`cargo::core::compiler::job_queue::DrainState::drain_the_queue::h8617a92b08e36d7f at job_queue.rs:753:23
    frame #11: 0x000055e1fdab6196 cargo`cargo::core::compiler::job_queue::DrainState::drain_the_queue::h8617a92b08e36d7f(self=DrainState @ 0x00007ffc7fda1860, cx=<unavailable>, plan=0x00007ffc7fda06e0, scope=0x00007ffc7fda0538, jobserver_helper=0x00007ffc7fda1828) at job_queue.rs:815:26
    frame #12: 0x000055e1fda72921 cargo`cargo::core::compiler::context::Context::compile::h9f54fb813dbf7813 [inlined] cargo::core::compiler::job_queue::JobQueue::execute::_$u7b$$u7b$closure$u7d$$u7d$::hf90730b55ba59c90(scope=0x00007ffc7fda0538) at job_queue.rs:523:19
    frame #13: 0x000055e1fda728ce cargo`cargo::core::compiler::context::Context::compile::h9f54fb813dbf7813 [inlined] crossbeam_utils::thread::scope::_$u7b$$u7b$closure$u7d$$u7d$::hf13fb80fde516676 at thread.rs:160:65
    frame #14: 0x000055e1fda728a4 cargo`cargo::core::compiler::context::Context::compile::h9f54fb813dbf7813 [inlined] _$LT$core..panic..unwind_safe..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::h88df4760e96f8590(self=AssertUnwindSafe<crossbeam_utils::thread::scope::{closure#0}> @ 0x00007f555674c930) at unwind_safe.rs:271:9
    frame #15: 0x000055e1fda728a4 cargo`cargo::core::compiler::context::Context::compile::h9f54fb813dbf7813 at panicking.rs:403:40
    frame #16: 0x000055e1fda728a4 cargo`cargo::core::compiler::context::Context::compile::h9f54fb813dbf7813 at panicking.rs:367:19
    frame #17: 0x000055e1fda728a4 cargo`cargo::core::compiler::context::Context::compile::h9f54fb813dbf7813 [inlined] std::panic::catch_unwind::h0b5c675c73e34c87(f=AssertUnwindSafe<crossbeam_utils::thread::scope::{closure#0}> @ 0x00007f555674beb0) at panic.rs:129:14
    frame #18: 0x000055e1fda728a4 cargo`cargo::core::compiler::context::Context::compile::h9f54fb813dbf7813 at thread.rs:160:18
    frame #19: 0x000055e1fda727b4 cargo`cargo::core::compiler::context::Context::compile::h9f54fb813dbf7813 at job_queue.rs:522:9
    frame #20: 0x000055e1fda7246c cargo`cargo::core::compiler::context::Context::compile::h9f54fb813dbf7813(self=Context @ 0x00007ffc7fda1ff8, exec=0x00007ffc7fda3338) at mod.rs:172:9
    frame #21: 0x000055e1fdb8c003 cargo`cargo::ops::cargo_compile::compile_ws::h6ab50db5e4205ff8(ws=<unavailable>, options=<unavailable>, exec=0x00007ffc7fda3338) at cargo_compile.rs:289:5
    frame #22: 0x000055e1fdb8bd78 cargo`cargo::ops::cargo_compile::compile::hb9e83d2d2ee350b1 [inlined] cargo::ops::cargo_compile::compile_with_exec::hdf76cca5b8825788(ws=0x00007ffc7fda3880, options=0x00007ffc7fda3398, exec=0x00007ffc7fda3338) at cargo_compile.rs:273:5
    frame #23: 0x000055e1fdb8bd49 cargo`cargo::ops::cargo_compile::compile::hb9e83d2d2ee350b1(ws=0x00007ffc7fda3880, options=0x00007ffc7fda3398) at cargo_compile.rs:262:5
    frame #24: 0x000055e1fd7b9821 cargo`cargo::commands::build::exec::hdb65e5bc1890c5dd(config=0x00007ffc7fda43c0, args=0x00007f133a8096f8) at build.rs:70:5
    frame #25: 0x000055e1fd7a7b4b cargo`cargo::cli::main::h020e3208cf5c699b [inlined] cargo::cli::execute_subcommand::hafd2d73dcffb661a(config=0x00007ffc7fda43c0, cmd=(data_ptr = "build", length = 5), subcommand_args=0x00007f133a8096f8) at cli.rs:289:16
    frame #26: 0x000055e1fd7a7b2c cargo`cargo::cli::main::h020e3208cf5c699b(config=<unavailable>) at cli.rs:158:5
    frame #27: 0x000055e1fd7cc721 cargo`cargo::main::he0c7f80e60940d9a at main.rs:39:13
    frame #28: 0x000055e1fd7625b3 cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::hc64d3bba25859aa2 [inlined] core::ops::function::FnOnce::call_once::ha569b56495e86f01((null)=<unavailable>) at function.rs:227:5
    frame #29: 0x000055e1fd7625b1 cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::hc64d3bba25859aa2(f=<unavailable>) at backtrace.rs:125:18
    frame #30: 0x000055e1fd7642f9 cargo`std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::hb8b16518e98ed065 at rt.rs:63:18
    frame #31: 0x000055e1fdfdf36d cargo`std::rt::lang_start_internal::hd97130f1945ced1b [inlined] core::ops::function::impls::_$LT$impl$u20$core..ops..function..FnOnce$LT$A$GT$$u20$for$u20$$RF$F$GT$::call_once::h908ffaeb6451df95(self=<unavailable>) at function.rs:259:13
    frame #32: 0x000055e1fdfdf36a cargo`std::rt::lang_start_internal::hd97130f1945ced1b at panicking.rs:403:40
    frame #33: 0x000055e1fdfdf36a cargo`std::rt::lang_start_internal::hd97130f1945ced1b at panicking.rs:367:19
    frame #34: 0x000055e1fdfdf36a cargo`std::rt::lang_start_internal::hd97130f1945ced1b [inlined] std::panic::catch_unwind::h5113c0636a0e0138(f=<unavailable>) at panic.rs:129:14
    frame #35: 0x000055e1fdfdf36a cargo`std::rt::lang_start_internal::hd97130f1945ced1b [inlined] std::rt::lang_start_internal::_$u7b$$u7b$closure$u7d$$u7d$::h562d3655e61ef3f4 at rt.rs:45:48
    frame #36: 0x000055e1fdfdf36a cargo`std::rt::lang_start_internal::hd97130f1945ced1b at panicking.rs:403:40
    frame #37: 0x000055e1fdfdf36a cargo`std::rt::lang_start_internal::hd97130f1945ced1b at panicking.rs:367:19
    frame #38: 0x000055e1fdfdf36a cargo`std::rt::lang_start_internal::hd97130f1945ced1b [inlined] std::panic::catch_unwind::h94128c5cd6630768(f=(main = &(dyn core::ops::function::Fn<> @ 0x00007f555927e3d0)) at panic.rs:129:14
    frame #39: 0x000055e1fdfdf36a cargo`std::rt::lang_start_internal::hd97130f1945ced1b(main=&(dyn core::ops::function::Fn<(), Output=i32> + core::marker::Sync + core::panic::unwind_safe::RefUnwindSafe) @ 0x00007f555927e3d0, argc=3, argv=0x00007ffc7fda52f8) at rt.rs:45:20
    frame #40: 0x000055e1fd7d3a3b cargo`main + 43
    frame #41: 0x00007f133a79f589 ld-musl-x86_64.so.1`libc_start_main_stage2(main=(cargo`main), argc=<unavailable>, argv=0x00007ffc7fda52f8) at __libc_start_main.c:94:7
    frame #42: 0x000055e1fd761396 cargo`_start + 22
  thread #2, name = 'cargo', stop reason = signal SIGSTOP
    frame #0: 0x00007f133a7fc005 ld-musl-x86_64.so.1`__cp_end
    frame #1: 0x00007f133a7f7d4f ld-musl-x86_64.so.1`__syscall_cp_c(nr=202, u=<unavailable>, v=<unavailable>, w=<unavailable>, x=<unavailable>, y=<unavailable>, z=<unavailable>) at pthread_cancel.c:33:6
    frame #2: 0x00007f133a7f6d9d ld-musl-x86_64.so.1`__timedwait_cp [inlined] __futex4_cp(addr=0x00007f1339ddc6f4, op=<unavailable>, val=2, to=0x0000000000000000) at __timedwait.c:24:6
    frame #3: 0x00007f133a7f6d73 ld-musl-x86_64.so.1`__timedwait_cp(addr=<unavailable>, val=2, clk=<unavailable>, at=<unavailable>, priv=<unavailable>) at __timedwait.c:52:7
    frame #4: 0x00007f133a7f8364 ld-musl-x86_64.so.1`__pthread_cond_timedwait(c=<unavailable>, m=<unavailable>, ts=0x0000000000000000) at pthread_cond_timedwait.c:100:9
    frame #5: 0x000055e1fdf9d558 cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::heca2c6c8146f2d88 [inlined] std::sys::unix::condvar::Condvar::wait::h97cb58c028df68c7(self=0x00007f1339e368f0, mutex=<unavailable>) at condvar.rs:82:17
    frame #6: 0x000055e1fdf9d54c cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::heca2c6c8146f2d88 [inlined] std::sys_common::condvar::Condvar::wait::hace72b24a7515a0c(self=0x00007f1339e3ac00, mutex=0x00007f1339e3abe0) at condvar.rs:44:9
    frame #7: 0x000055e1fdf9d532 cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::heca2c6c8146f2d88 at condvar.rs:187:13
    frame #8: 0x000055e1fdf9d532 cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::heca2c6c8146f2d88 at lib.rs:473:24
    frame #9: 0x000055e1fdf9d3c2 cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::heca2c6c8146f2d88 [inlined] jobserver::imp::spawn_helper::_$u7b$$u7b$closure$u7d$$u7d$::h3ced2b0a48026e49 at unix.rs:240:9
    frame #10: 0x000055e1fdf9d3c2 cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::heca2c6c8146f2d88(f=<unavailable>) at backtrace.rs:125:18
    frame #11: 0x000055e1fdf9da19 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h4d887db3b5e0c4e4 [inlined] std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h84be27402cbd26b3 at mod.rs:481:17
    frame #12: 0x000055e1fdf9da00 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h4d887db3b5e0c4e4 [inlined] _$LT$core..panic..unwind_safe..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::h1755315c5044ae49 at unwind_safe.rs:271:9
    frame #13: 0x000055e1fdf9da00 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h4d887db3b5e0c4e4 [inlined] std::panicking::try::do_call::h278e56399fe643c7 at panicking.rs:403:40
    frame #14: 0x000055e1fdf9da00 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h4d887db3b5e0c4e4 [inlined] std::panicking::try::hca1de3f42269fb3b at panicking.rs:367:19
    frame #15: 0x000055e1fdf9da00 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h4d887db3b5e0c4e4 [inlined] std::panic::catch_unwind::h746387e2e744592d at panic.rs:129:14
    frame #16: 0x000055e1fdf9da00 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h4d887db3b5e0c4e4 [inlined] std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::hac2facaf9a6e368c at mod.rs:480:30
    frame #17: 0x000055e1fdf9d9b0 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h4d887db3b5e0c4e4((null)=0x00007f1339e369b0, (null)=<unavailable>) at function.rs:227:5
    frame #18: 0x000055e1fdfe7cb3 cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h8ffa971d6f77ff65(self=Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> @ 0x00007f5559234cf0) at boxed.rs:1636:9
    frame #19: 0x000055e1fdfe7cad cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hf48067d9dbade9e9(self=0x00007f1339e02d50) at boxed.rs:1636:9
    frame #20: 0x000055e1fdfe7ca6 cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8(main=0x00007f1339e02d50) at thread.rs:106:17
    frame #21: 0x00007f133a7f955c ld-musl-x86_64.so.1`start(p=0x00007f1339ddc900) at pthread_create.c:203:17
    frame #22: 0x00007f133a7fbfcb ld-musl-x86_64.so.1`__clone + 47
  thread #3, name = 'cargo', stop reason = signal SIGSTOP
    frame #0: 0x00007f133a7fc005 ld-musl-x86_64.so.1`__cp_end
    frame #1: 0x00007f133a7f7d4f ld-musl-x86_64.so.1`__syscall_cp_c(nr=7, u=<unavailable>, v=<unavailable>, w=<unavailable>, x=<unavailable>, y=<unavailable>, z=<unavailable>) at pthread_cancel.c:33:6
    frame #2: 0x00007f133a7e36a5 ld-musl-x86_64.so.1`poll(fds=<unavailable>, n=<unavailable>, timeout=<unavailable>) at poll.c:9:9
    frame #3: 0x000055e1fdf94203 cargo`cargo_util::read2::imp::read2::h405e7286ee17620f(out_pipe=<unavailable>, err_pipe=<unavailable>, data=&mut dyn core::ops::function::FnMut<(bool, &mut alloc::vec::Vec<u8, alloc::alloc::Global>, bool), Output=()> @ 0x00007f55590e8610) at read2.rs:36:30
    frame #4: 0x000055e1fdf92284 cargo`cargo_util::process_builder::ProcessBuilder::exec_with_streaming::h35d634369e7c6306 at process_builder.rs:252:13
    frame #5: 0x000055e1fdf92188 cargo`cargo_util::process_builder::ProcessBuilder::exec_with_streaming::h35d634369e7c6306(self=<unavailable>, on_stdout_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f13399be140, on_stderr_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f13399be150, capture_output=<unavailable>) at process_builder.rs:248:22
    frame #6: 0x000055e1fdad7fed cargo`_$LT$cargo..core..compiler..DefaultExecutor$u20$as$u20$cargo..core..compiler..Executor$GT$::exec::hc2b9e38618027b47(self=<unavailable>, cmd=<unavailable>, _id=<unavailable>, _target=<unavailable>, _mode=<unavailable>, on_stdout_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f55590ea490, on_stderr_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f55590ea590) at mod.rs:133:9
    frame #7: 0x000055e1fdadd7c2 cargo`cargo::core::compiler::rustc::_$u7b$$u7b$closure$u7d$$u7d$::hd08ce3b824e17e69(state=<unavailable>) at mod.rs:318:13
    frame #8: 0x000055e1fd83759c cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h909d17ba2dd3a07c(self=Box<(dyn core::ops::function::FnOnce<(&cargo::core::compiler::job_queue::JobState), Output=core::result::Result<(), anyhow::Error>> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558fe79e0, args=(&cargo::core::compiler::job_queue::JobState) @ r15) at boxed.rs:1636:9
    frame #9: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::call::he5314677cd002415(self=(inner = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558fe7a60), tx=0x00007f13399be7c8) at job.rs:31:9
    frame #10: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::then::_$u7b$$u7b$closure$u7d$$u7d$::hee839d6bf751337f(state=0x00007f13399be7c8) at job.rs:36:13
    frame #11: 0x000055e1fd83758f cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f((null)=0x00007f1339e253e0, (null)=(&cargo::core::compiler::job_queue::JobState) @ r15) at function.rs:227:5
    frame #12: 0x000055e1fd83759c cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h909d17ba2dd3a07c(self=Box<(dyn core::ops::function::FnOnce<(&cargo::core::compiler::job_queue::JobState), Output=core::result::Result<(), anyhow::Error>> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558fe7a20, args=(&cargo::core::compiler::job_queue::JobState) @ r15) at boxed.rs:1636:9
    frame #13: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::call::he5314677cd002415(self=(inner = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558fe7720), tx=0x00007f13399be7c8) at job.rs:31:9
    frame #14: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::then::_$u7b$$u7b$closure$u7d$$u7d$::hee839d6bf751337f(state=0x00007f13399be7c8) at job.rs:36:13
    frame #15: 0x000055e1fd83758f cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f((null)=0x00007f1339e25410, (null)=(&cargo::core::compiler::job_queue::JobState) @ r15) at function.rs:227:5
    frame #16: 0x000055e1fdabb099 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h909d17ba2dd3a07c(self=Box<(dyn core::ops::function::FnOnce<(&cargo::core::compiler::job_queue::JobState), Output=core::result::Result<(), anyhow::Error>> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558fe8330, args=(&cargo::core::compiler::job_queue::JobState) @ r15) at boxed.rs:1636:9
    frame #17: 0x000055e1fdabb093 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b [inlined] cargo::core::compiler::job::Work::call::he5314677cd002415(self=(inner = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558fe83b0), tx=0x00007f13399be7c8) at job.rs:31:9
    frame #18: 0x000055e1fdabb093 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b [inlined] cargo::core::compiler::job::Job::run::heb80c1616b1f170f(state=0x00007f13399be7c8) at job.rs:62:9
    frame #19: 0x000055e1fdabb093 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b(state=JobState @ 0x00007f13399be7c8) at job_queue.rs:1040:34
    frame #20: 0x000055e1fd83904f cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hda7669d323da0eaf [inlined] cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::hfe68ac7b45ac03ab at job_queue.rs:1098:21
    frame #21: 0x000055e1fd838ffd cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hda7669d323da0eaf at thread.rs:437:31
    frame #22: 0x000055e1fd838fd0 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hda7669d323da0eaf((null)=0x00007f1339e3a630, (null)=<unavailable>) at function.rs:227:5
    frame #23: 0x000055e1fd81863e cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::h24259d003811a78b [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::ha48699e8e8187cbd(self=Box<(dyn core::ops::function::FnOnce<(), Output=()> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558fe8b30) at boxed.rs:1636:9
    frame #24: 0x000055e1fd81863b cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::h24259d003811a78b [inlined] crossbeam_utils::thread::ScopedThreadBuilder::spawn::_$u7b$$u7b$closure$u7d$$u7d$::hbf07fd2ee35affe3 at thread.rs:449:44
    frame #25: 0x000055e1fd81863b cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::h24259d003811a78b(f=(closure = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558fe8df0)) at backtrace.rs:125:18
    frame #26: 0x000055e1fd8393ab cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h77f48923b3b16330 at mod.rs:481:17
    frame #27: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] _$LT$core..panic..unwind_safe..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::hcf85bf374106e3c6(self=<unavailable>) at unwind_safe.rs:271:9
    frame #28: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 at panicking.rs:403:40
    frame #29: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 at panicking.rs:367:19
    frame #30: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] std::panic::catch_unwind::hfac1c3b1661d60a6(f=<unavailable>) at panic.rs:129:14
    frame #31: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::hdfc489316504bc38 at mod.rs:480:30
    frame #32: 0x000055e1fd839350 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3((null)=0x00007f1339e024f0, (null)=<unavailable>) at function.rs:227:5
    frame #33: 0x000055e1fdfe7cb3 cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h8ffa971d6f77ff65(self=Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> @ 0x00007f5558fe8af0) at boxed.rs:1636:9
    frame #34: 0x000055e1fdfe7cad cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hf48067d9dbade9e9(self=0x00007f1339e02df0) at boxed.rs:1636:9
    frame #35: 0x000055e1fdfe7ca6 cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8(main=0x00007f1339e02df0) at thread.rs:106:17
    frame #36: 0x00007f133a7f955c ld-musl-x86_64.so.1`start(p=0x00007f13399be900) at pthread_create.c:203:17
    frame #37: 0x00007f133a7fbfcb ld-musl-x86_64.so.1`__clone + 47
  thread #4, name = 'cargo', stop reason = signal SIGSTOP
    frame #0: 0x00007f133a7fc005 ld-musl-x86_64.so.1`__cp_end
    frame #1: 0x00007f133a7f7d4f ld-musl-x86_64.so.1`__syscall_cp_c(nr=7, u=<unavailable>, v=<unavailable>, w=<unavailable>, x=<unavailable>, y=<unavailable>, z=<unavailable>) at pthread_cancel.c:33:6
    frame #2: 0x00007f133a7e36a5 ld-musl-x86_64.so.1`poll(fds=<unavailable>, n=<unavailable>, timeout=<unavailable>) at poll.c:9:9
    frame #3: 0x000055e1fdf94203 cargo`cargo_util::read2::imp::read2::h405e7286ee17620f(out_pipe=<unavailable>, err_pipe=<unavailable>, data=&mut dyn core::ops::function::FnMut<(bool, &mut alloc::vec::Vec<u8, alloc::alloc::Global>, bool), Output=()> @ 0x00007f5558fe8e50) at read2.rs:36:30
    frame #4: 0x000055e1fdf92284 cargo`cargo_util::process_builder::ProcessBuilder::exec_with_streaming::h35d634369e7c6306 at process_builder.rs:252:13
    frame #5: 0x000055e1fdf92188 cargo`cargo_util::process_builder::ProcessBuilder::exec_with_streaming::h35d634369e7c6306(self=<unavailable>, on_stdout_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f13397bb140, on_stderr_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f13397bb150, capture_output=<unavailable>) at process_builder.rs:248:22
    frame #6: 0x000055e1fdad7fed cargo`_$LT$cargo..core..compiler..DefaultExecutor$u20$as$u20$cargo..core..compiler..Executor$GT$::exec::hc2b9e38618027b47(self=<unavailable>, cmd=<unavailable>, _id=<unavailable>, _target=<unavailable>, _mode=<unavailable>, on_stdout_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f5558fe8d10, on_stderr_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f5558fe8db0) at mod.rs:133:9
    frame #7: 0x000055e1fdadd7c2 cargo`cargo::core::compiler::rustc::_$u7b$$u7b$closure$u7d$$u7d$::hd08ce3b824e17e69(state=<unavailable>) at mod.rs:318:13
    frame #8: 0x000055e1fd83759c cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h909d17ba2dd3a07c(self=Box<(dyn core::ops::function::FnOnce<(&cargo::core::compiler::job_queue::JobState), Output=core::result::Result<(), anyhow::Error>> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558fe8e70, args=(&cargo::core::compiler::job_queue::JobState) @ r15) at boxed.rs:1636:9
    frame #9: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::call::he5314677cd002415(self=(inner = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558fe8d10), tx=0x00007f13397bb7c8) at job.rs:31:9
    frame #10: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::then::_$u7b$$u7b$closure$u7d$$u7d$::hee839d6bf751337f(state=0x00007f13397bb7c8) at job.rs:36:13
    frame #11: 0x000055e1fd83758f cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f((null)=0x00007f1339dfece0, (null)=(&cargo::core::compiler::job_queue::JobState) @ r15) at function.rs:227:5
    frame #12: 0x000055e1fd83759c cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h909d17ba2dd3a07c(self=Box<(dyn core::ops::function::FnOnce<(&cargo::core::compiler::job_queue::JobState), Output=core::result::Result<(), anyhow::Error>> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558fe8e50, args=(&cargo::core::compiler::job_queue::JobState) @ r15) at boxed.rs:1636:9
    frame #13: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::call::he5314677cd002415(self=(inner = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558fe8b10), tx=0x00007f13397bb7c8) at job.rs:31:9
    frame #14: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::then::_$u7b$$u7b$closure$u7d$$u7d$::hee839d6bf751337f(state=0x00007f13397bb7c8) at job.rs:36:13
    frame #15: 0x000055e1fd83758f cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f((null)=0x00007f1339dfed10, (null)=(&cargo::core::compiler::job_queue::JobState) @ r15) at function.rs:227:5
    frame #16: 0x000055e1fdabb099 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h909d17ba2dd3a07c(self=Box<(dyn core::ops::function::FnOnce<(&cargo::core::compiler::job_queue::JobState), Output=core::result::Result<(), anyhow::Error>> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558fe8e30, args=(&cargo::core::compiler::job_queue::JobState) @ r15) at boxed.rs:1636:9
    frame #17: 0x000055e1fdabb093 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b [inlined] cargo::core::compiler::job::Work::call::he5314677cd002415(self=(inner = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558fe8af0), tx=0x00007f13397bb7c8) at job.rs:31:9
    frame #18: 0x000055e1fdabb093 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b [inlined] cargo::core::compiler::job::Job::run::heb80c1616b1f170f(state=0x00007f13397bb7c8) at job.rs:62:9
    frame #19: 0x000055e1fdabb093 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b(state=JobState @ 0x00007f13397bb7c8) at job_queue.rs:1040:34
    frame #20: 0x000055e1fd83904f cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hda7669d323da0eaf [inlined] cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::hfe68ac7b45ac03ab at job_queue.rs:1098:21
    frame #21: 0x000055e1fd838ffd cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hda7669d323da0eaf at thread.rs:437:31
    frame #22: 0x000055e1fd838fd0 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hda7669d323da0eaf((null)=0x00007f1339e3a7c0, (null)=<unavailable>) at function.rs:227:5
    frame #23: 0x000055e1fd81863e cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::h24259d003811a78b [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::ha48699e8e8187cbd(self=Box<(dyn core::ops::function::FnOnce<(), Output=()> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558fe8e30) at boxed.rs:1636:9
    frame #24: 0x000055e1fd81863b cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::h24259d003811a78b [inlined] crossbeam_utils::thread::ScopedThreadBuilder::spawn::_$u7b$$u7b$closure$u7d$$u7d$::hbf07fd2ee35affe3 at thread.rs:449:44
    frame #25: 0x000055e1fd81863b cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::h24259d003811a78b(f=(closure = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558fe8af0)) at backtrace.rs:125:18
    frame #26: 0x000055e1fd8393ab cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h77f48923b3b16330 at mod.rs:481:17
    frame #27: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] _$LT$core..panic..unwind_safe..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::hcf85bf374106e3c6(self=<unavailable>) at unwind_safe.rs:271:9
    frame #28: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 at panicking.rs:403:40
    frame #29: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 at panicking.rs:367:19
    frame #30: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] std::panic::catch_unwind::hfac1c3b1661d60a6(f=<unavailable>) at panic.rs:129:14
    frame #31: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::hdfc489316504bc38 at mod.rs:480:30
    frame #32: 0x000055e1fd839350 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3((null)=0x00007f1339e02430, (null)=<unavailable>) at function.rs:227:5
    frame #33: 0x000055e1fdfe7cb3 cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h8ffa971d6f77ff65(self=Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> @ 0x00007f5558fe8e30) at boxed.rs:1636:9
    frame #34: 0x000055e1fdfe7cad cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hf48067d9dbade9e9(self=0x00007f1339e02ad0) at boxed.rs:1636:9
    frame #35: 0x000055e1fdfe7ca6 cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8(main=0x00007f1339e02ad0) at thread.rs:106:17
    frame #36: 0x00007f133a7f955c ld-musl-x86_64.so.1`start(p=0x00007f13397bb900) at pthread_create.c:203:17
    frame #37: 0x00007f133a7fbfcb ld-musl-x86_64.so.1`__clone + 47
  thread #5, name = 'cargo', stop reason = signal SIGSTOP
    frame #0: 0x00007f133a7fc005 ld-musl-x86_64.so.1`__cp_end
    frame #1: 0x00007f133a7f7d4f ld-musl-x86_64.so.1`__syscall_cp_c(nr=0, u=<unavailable>, v=<unavailable>, w=<unavailable>, x=<unavailable>, y=<unavailable>, z=<unavailable>) at pthread_cancel.c:33:6
    frame #2: 0x00007f133a800dff ld-musl-x86_64.so.1`read(fd=<unavailable>, buf=<unavailable>, count=<unavailable>) at read.c:6:9
    frame #3: 0x000055e1fdfe883c cargo`std::sys::unix::process::process_inner::_$LT$impl$u20$std..sys..unix..process..process_common..Command$GT$::spawn::h1d6f77b3e0d6cc67 [inlined] std::sys::unix::fd::FileDesc::read::ha60ab583a50f0955(buf=(data_ptr = "", length = 8)) at fd.rs:65:13
    frame #4: 0x000055e1fdfe882a cargo`std::sys::unix::process::process_inner::_$LT$impl$u20$std..sys..unix..process..process_common..Command$GT$::spawn::h1d6f77b3e0d6cc67 [inlined] std::sys::unix::pipe::AnonPipe::read::he8533efcd2d11395(buf=(data_ptr = "", length = 8)) at pipe.rs:49:9
    frame #5: 0x000055e1fdfe882a cargo`std::sys::unix::process::process_inner::_$LT$impl$u20$std..sys..unix..process..process_common..Command$GT$::spawn::h1d6f77b3e0d6cc67(self=0x00007f13395a4180, default=Stdio @ 0x00007f13395a3c60, needs_stdin=<unavailable>) at process_unix.rs:107:19
    frame #6: 0x000055e1fdfd988c cargo`std::process::Command::spawn::ha9bec88ec3740324(self=<unavailable>) at process.rs:879:9
    frame #7: 0x000055e1fdf9218e cargo`cargo_util::process_builder::ProcessBuilder::exec_with_streaming::h35d634369e7c6306 [inlined] cargo_util::process_builder::ProcessBuilder::exec_with_streaming::_$u7b$$u7b$closure$u7d$$u7d$::hee5747160a64ba2c at process_builder.rs:249:29
    frame #8: 0x000055e1fdf92188 cargo`cargo_util::process_builder::ProcessBuilder::exec_with_streaming::h35d634369e7c6306(self=<unavailable>, on_stdout_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f13395a4140, on_stderr_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f13395a4150, capture_output=<unavailable>) at process_builder.rs:248:22
    frame #9: 0x000055e1fdad7fed cargo`_$LT$cargo..core..compiler..DefaultExecutor$u20$as$u20$cargo..core..compiler..Executor$GT$::exec::hc2b9e38618027b47(self=<unavailable>, cmd=<unavailable>, _id=<unavailable>, _target=<unavailable>, _mode=<unavailable>, on_stdout_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f5558e536f0, on_stderr_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f5558e536f0) at mod.rs:133:9
    frame #10: 0x000055e1fdadd7c2 cargo`cargo::core::compiler::rustc::_$u7b$$u7b$closure$u7d$$u7d$::hd08ce3b824e17e69(state=<unavailable>) at mod.rs:318:13
    frame #11: 0x000055e1fd83759c cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h909d17ba2dd3a07c(self=Box<(dyn core::ops::function::FnOnce<(&cargo::core::compiler::job_queue::JobState), Output=core::result::Result<(), anyhow::Error>> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558e536f0, args=(&cargo::core::compiler::job_queue::JobState) @ r15) at boxed.rs:1636:9
    frame #12: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::call::he5314677cd002415(self=(inner = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558e53650), tx=0x00007f13395a47c8) at job.rs:31:9
    frame #13: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::then::_$u7b$$u7b$closure$u7d$$u7d$::hee839d6bf751337f(state=0x00007f13395a47c8) at job.rs:36:13
    frame #14: 0x000055e1fd83758f cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f((null)=0x00007f1339dfc1f0, (null)=(&cargo::core::compiler::job_queue::JobState) @ r15) at function.rs:227:5
    frame #15: 0x000055e1fd83759c cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h909d17ba2dd3a07c(self=Box<(dyn core::ops::function::FnOnce<(&cargo::core::compiler::job_queue::JobState), Output=core::result::Result<(), anyhow::Error>> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558e53650, args=(&cargo::core::compiler::job_queue::JobState) @ r15) at boxed.rs:1636:9
    frame #16: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::call::he5314677cd002415(self=(inner = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558e53650), tx=0x00007f13395a47c8) at job.rs:31:9
    frame #17: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::then::_$u7b$$u7b$closure$u7d$$u7d$::hee839d6bf751337f(state=0x00007f13395a47c8) at job.rs:36:13
    frame #18: 0x000055e1fd83758f cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f((null)=0x00007f1339dfb110, (null)=(&cargo::core::compiler::job_queue::JobState) @ r15) at function.rs:227:5
    frame #19: 0x000055e1fdabb099 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h909d17ba2dd3a07c(self=Box<(dyn core::ops::function::FnOnce<(&cargo::core::compiler::job_queue::JobState), Output=core::result::Result<(), anyhow::Error>> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558e53650, args=(&cargo::core::compiler::job_queue::JobState) @ r15) at boxed.rs:1636:9
    frame #20: 0x000055e1fdabb093 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b [inlined] cargo::core::compiler::job::Work::call::he5314677cd002415(self=(inner = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558e53650), tx=0x00007f13395a47c8) at job.rs:31:9
    frame #21: 0x000055e1fdabb093 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b [inlined] cargo::core::compiler::job::Job::run::heb80c1616b1f170f(state=0x00007f13395a47c8) at job.rs:62:9
    frame #22: 0x000055e1fdabb093 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b(state=JobState @ 0x00007f13395a47c8) at job_queue.rs:1040:34
    frame #23: 0x000055e1fd83904f cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hda7669d323da0eaf [inlined] cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::hfe68ac7b45ac03ab at job_queue.rs:1098:21
    frame #24: 0x000055e1fd838ffd cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hda7669d323da0eaf at thread.rs:437:31
    frame #25: 0x000055e1fd838fd0 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hda7669d323da0eaf((null)=0x00007f1339e3ab80, (null)=<unavailable>) at function.rs:227:5
    frame #26: 0x000055e1fd81863e cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::h24259d003811a78b [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::ha48699e8e8187cbd(self=Box<(dyn core::ops::function::FnOnce<(), Output=()> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558e543d0) at boxed.rs:1636:9
    frame #27: 0x000055e1fd81863b cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::h24259d003811a78b [inlined] crossbeam_utils::thread::ScopedThreadBuilder::spawn::_$u7b$$u7b$closure$u7d$$u7d$::hbf07fd2ee35affe3 at thread.rs:449:44
    frame #28: 0x000055e1fd81863b cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::h24259d003811a78b(f=(closure = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558e53bf0)) at backtrace.rs:125:18
    frame #29: 0x000055e1fd8393ab cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h77f48923b3b16330 at mod.rs:481:17
    frame #30: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] _$LT$core..panic..unwind_safe..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::hcf85bf374106e3c6(self=<unavailable>) at unwind_safe.rs:271:9
    frame #31: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 at panicking.rs:403:40
    frame #32: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 at panicking.rs:367:19
    frame #33: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] std::panic::catch_unwind::hfac1c3b1661d60a6(f=<unavailable>) at panic.rs:129:14
    frame #34: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::hdfc489316504bc38 at mod.rs:480:30
    frame #35: 0x000055e1fd839350 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3((null)=0x00007f13395a8110, (null)=<unavailable>) at function.rs:227:5
    frame #36: 0x000055e1fdfe7cb3 cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h8ffa971d6f77ff65(self=Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> @ 0x00007f5558e54bb0) at boxed.rs:1636:9
    frame #37: 0x000055e1fdfe7cad cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hf48067d9dbade9e9(self=0x00007f1339e02b50) at boxed.rs:1636:9
    frame #38: 0x000055e1fdfe7ca6 cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8(main=0x00007f1339e02b50) at thread.rs:106:17
    frame #39: 0x00007f133a7f955c ld-musl-x86_64.so.1`start(p=0x00007f13395a4900) at pthread_create.c:203:17
    frame #40: 0x00007f133a7fbfcb ld-musl-x86_64.so.1`__clone + 47
  thread #6, name = 'cargo', stop reason = signal SIGSTOP
    frame #0: 0x00007f133a7fc005 ld-musl-x86_64.so.1`__cp_end
    frame #1: 0x00007f133a7f7d4f ld-musl-x86_64.so.1`__syscall_cp_c(nr=0, u=<unavailable>, v=<unavailable>, w=<unavailable>, x=<unavailable>, y=<unavailable>, z=<unavailable>) at pthread_cancel.c:33:6
    frame #2: 0x00007f133a800dff ld-musl-x86_64.so.1`read(fd=<unavailable>, buf=<unavailable>, count=<unavailable>) at read.c:6:9
    frame #3: 0x000055e1fdfe883c cargo`std::sys::unix::process::process_inner::_$LT$impl$u20$std..sys..unix..process..process_common..Command$GT$::spawn::h1d6f77b3e0d6cc67 [inlined] std::sys::unix::fd::FileDesc::read::ha60ab583a50f0955(buf=(data_ptr = "", length = 8)) at fd.rs:65:13
    frame #4: 0x000055e1fdfe882a cargo`std::sys::unix::process::process_inner::_$LT$impl$u20$std..sys..unix..process..process_common..Command$GT$::spawn::h1d6f77b3e0d6cc67 [inlined] std::sys::unix::pipe::AnonPipe::read::he8533efcd2d11395(buf=(data_ptr = "", length = 8)) at pipe.rs:49:9
    frame #5: 0x000055e1fdfe882a cargo`std::sys::unix::process::process_inner::_$LT$impl$u20$std..sys..unix..process..process_common..Command$GT$::spawn::h1d6f77b3e0d6cc67(self=0x00007f133939f180, default=Stdio @ 0x00007f133939ec60, needs_stdin=<unavailable>) at process_unix.rs:107:19
    frame #6: 0x000055e1fdfd988c cargo`std::process::Command::spawn::ha9bec88ec3740324(self=<unavailable>) at process.rs:879:9
    frame #7: 0x000055e1fdf9218e cargo`cargo_util::process_builder::ProcessBuilder::exec_with_streaming::h35d634369e7c6306 [inlined] cargo_util::process_builder::ProcessBuilder::exec_with_streaming::_$u7b$$u7b$closure$u7d$$u7d$::hee5747160a64ba2c at process_builder.rs:249:29
    frame #8: 0x000055e1fdf92188 cargo`cargo_util::process_builder::ProcessBuilder::exec_with_streaming::h35d634369e7c6306(self=<unavailable>, on_stdout_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f133939f140, on_stderr_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f133939f150, capture_output=<unavailable>) at process_builder.rs:248:22
    frame #9: 0x000055e1fdad7fed cargo`_$LT$cargo..core..compiler..DefaultExecutor$u20$as$u20$cargo..core..compiler..Executor$GT$::exec::hc2b9e38618027b47(self=<unavailable>, cmd=<unavailable>, _id=<unavailable>, _target=<unavailable>, _mode=<unavailable>, on_stdout_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f5558e536f0, on_stderr_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f5558e536f0) at mod.rs:133:9
    frame #10: 0x000055e1fdadd7c2 cargo`cargo::core::compiler::rustc::_$u7b$$u7b$closure$u7d$$u7d$::hd08ce3b824e17e69(state=<unavailable>) at mod.rs:318:13
    frame #11: 0x000055e1fd83759c cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h909d17ba2dd3a07c(self=Box<(dyn core::ops::function::FnOnce<(&cargo::core::compiler::job_queue::JobState), Output=core::result::Result<(), anyhow::Error>> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558e53650, args=(&cargo::core::compiler::job_queue::JobState) @ r15) at boxed.rs:1636:9
    frame #12: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::call::he5314677cd002415(self=(inner = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558e536f0), tx=0x00007f133939f7c8) at job.rs:31:9
    frame #13: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::then::_$u7b$$u7b$closure$u7d$$u7d$::hee839d6bf751337f(state=0x00007f133939f7c8) at job.rs:36:13
    frame #14: 0x000055e1fd83758f cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f((null)=0x00007f1339f75e90, (null)=(&cargo::core::compiler::job_queue::JobState) @ r15) at function.rs:227:5
    frame #15: 0x000055e1fd83759c cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h909d17ba2dd3a07c(self=Box<(dyn core::ops::function::FnOnce<(&cargo::core::compiler::job_queue::JobState), Output=core::result::Result<(), anyhow::Error>> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558e53650, args=(&cargo::core::compiler::job_queue::JobState) @ r15) at boxed.rs:1636:9
    frame #16: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::call::he5314677cd002415(self=(inner = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558e536f0), tx=0x00007f133939f7c8) at job.rs:31:9
    frame #17: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::then::_$u7b$$u7b$closure$u7d$$u7d$::hee839d6bf751337f(state=0x00007f133939f7c8) at job.rs:36:13
    frame #18: 0x000055e1fd83758f cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f((null)=0x00007f1339f75f20, (null)=(&cargo::core::compiler::job_queue::JobState) @ r15) at function.rs:227:5
    frame #19: 0x000055e1fdabb099 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h909d17ba2dd3a07c(self=Box<(dyn core::ops::function::FnOnce<(&cargo::core::compiler::job_queue::JobState), Output=core::result::Result<(), anyhow::Error>> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558e53650, args=(&cargo::core::compiler::job_queue::JobState) @ r15) at boxed.rs:1636:9
    frame #20: 0x000055e1fdabb093 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b [inlined] cargo::core::compiler::job::Work::call::he5314677cd002415(self=(inner = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558e536f0), tx=0x00007f133939f7c8) at job.rs:31:9
    frame #21: 0x000055e1fdabb093 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b [inlined] cargo::core::compiler::job::Job::run::heb80c1616b1f170f(state=0x00007f133939f7c8) at job.rs:62:9
    frame #22: 0x000055e1fdabb093 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b(state=JobState @ 0x00007f133939f7c8) at job_queue.rs:1040:34
    frame #23: 0x000055e1fd83904f cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hda7669d323da0eaf [inlined] cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::hfe68ac7b45ac03ab at job_queue.rs:1098:21
    frame #24: 0x000055e1fd838ffd cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hda7669d323da0eaf at thread.rs:437:31
    frame #25: 0x000055e1fd838fd0 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hda7669d323da0eaf((null)=0x00007f1339e3aa40, (null)=<unavailable>) at function.rs:227:5
    frame #26: 0x000055e1fd81863e cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::h24259d003811a78b [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::ha48699e8e8187cbd(self=Box<(dyn core::ops::function::FnOnce<(), Output=()> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558e536f0) at boxed.rs:1636:9
    frame #27: 0x000055e1fd81863b cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::h24259d003811a78b [inlined] crossbeam_utils::thread::ScopedThreadBuilder::spawn::_$u7b$$u7b$closure$u7d$$u7d$::hbf07fd2ee35affe3 at thread.rs:449:44
    frame #28: 0x000055e1fd81863b cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::h24259d003811a78b(f=(closure = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558e536f0)) at backtrace.rs:125:18
    frame #29: 0x000055e1fd8393ab cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h77f48923b3b16330 at mod.rs:481:17
    frame #30: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] _$LT$core..panic..unwind_safe..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::hcf85bf374106e3c6(self=<unavailable>) at unwind_safe.rs:271:9
    frame #31: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 at panicking.rs:403:40
    frame #32: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 at panicking.rs:367:19
    frame #33: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] std::panic::catch_unwind::hfac1c3b1661d60a6(f=<unavailable>) at panic.rs:129:14
    frame #34: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::hdfc489316504bc38 at mod.rs:480:30
    frame #35: 0x000055e1fd839350 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3((null)=0x00007f13395a8260, (null)=<unavailable>) at function.rs:227:5
    frame #36: 0x000055e1fdfe7cb3 cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h8ffa971d6f77ff65(self=Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> @ 0x00007f5558e536f0) at boxed.rs:1636:9
    frame #37: 0x000055e1fdfe7cad cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hf48067d9dbade9e9(self=0x00007f1339e02cd0) at boxed.rs:1636:9
    frame #38: 0x000055e1fdfe7ca6 cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8(main=0x00007f1339e02cd0) at thread.rs:106:17
    frame #39: 0x00007f133a7f955c ld-musl-x86_64.so.1`start(p=0x00007f133939f900) at pthread_create.c:203:17
    frame #40: 0x00007f133a7fbfcb ld-musl-x86_64.so.1`__clone + 47
  thread #7, name = 'cargo', stop reason = signal SIGSTOP
    frame #0: 0x00007f133a7fc005 ld-musl-x86_64.so.1`__cp_end
    frame #1: 0x00007f133a7f7d4f ld-musl-x86_64.so.1`__syscall_cp_c(nr=0, u=<unavailable>, v=<unavailable>, w=<unavailable>, x=<unavailable>, y=<unavailable>, z=<unavailable>) at pthread_cancel.c:33:6
    frame #2: 0x00007f133a800dff ld-musl-x86_64.so.1`read(fd=<unavailable>, buf=<unavailable>, count=<unavailable>) at read.c:6:9
    frame #3: 0x000055e1fdfe883c cargo`std::sys::unix::process::process_inner::_$LT$impl$u20$std..sys..unix..process..process_common..Command$GT$::spawn::h1d6f77b3e0d6cc67 [inlined] std::sys::unix::fd::FileDesc::read::ha60ab583a50f0955(buf=(data_ptr = "", length = 8)) at fd.rs:65:13
    frame #4: 0x000055e1fdfe882a cargo`std::sys::unix::process::process_inner::_$LT$impl$u20$std..sys..unix..process..process_common..Command$GT$::spawn::h1d6f77b3e0d6cc67 [inlined] std::sys::unix::pipe::AnonPipe::read::he8533efcd2d11395(buf=(data_ptr = "", length = 8)) at pipe.rs:49:9
    frame #5: 0x000055e1fdfe882a cargo`std::sys::unix::process::process_inner::_$LT$impl$u20$std..sys..unix..process..process_common..Command$GT$::spawn::h1d6f77b3e0d6cc67(self=0x00007f1338f90180, default=Stdio @ 0x00007f1338f8fc60, needs_stdin=<unavailable>) at process_unix.rs:107:19
    frame #6: 0x000055e1fdfd988c cargo`std::process::Command::spawn::ha9bec88ec3740324(self=<unavailable>) at process.rs:879:9
    frame #7: 0x000055e1fdf9218e cargo`cargo_util::process_builder::ProcessBuilder::exec_with_streaming::h35d634369e7c6306 [inlined] cargo_util::process_builder::ProcessBuilder::exec_with_streaming::_$u7b$$u7b$closure$u7d$$u7d$::hee5747160a64ba2c at process_builder.rs:249:29
    frame #8: 0x000055e1fdf92188 cargo`cargo_util::process_builder::ProcessBuilder::exec_with_streaming::h35d634369e7c6306(self=<unavailable>, on_stdout_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f1338f90140, on_stderr_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f1338f90150, capture_output=<unavailable>) at process_builder.rs:248:22
    frame #9: 0x000055e1fdad7fed cargo`_$LT$cargo..core..compiler..DefaultExecutor$u20$as$u20$cargo..core..compiler..Executor$GT$::exec::hc2b9e38618027b47(self=<unavailable>, cmd=<unavailable>, _id=<unavailable>, _target=<unavailable>, _mode=<unavailable>, on_stdout_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f5558e536f0, on_stderr_line=&mut dyn core::ops::function::FnMut<(&str), Output=core::result::Result<(), anyhow::Error>> @ 0x00007f5558e53650) at mod.rs:133:9
    frame #10: 0x000055e1fdadd7c2 cargo`cargo::core::compiler::rustc::_$u7b$$u7b$closure$u7d$$u7d$::hd08ce3b824e17e69(state=<unavailable>) at mod.rs:318:13
    frame #11: 0x000055e1fd83759c cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h909d17ba2dd3a07c(self=Box<(dyn core::ops::function::FnOnce<(&cargo::core::compiler::job_queue::JobState), Output=core::result::Result<(), anyhow::Error>> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558e536f0, args=(&cargo::core::compiler::job_queue::JobState) @ r15) at boxed.rs:1636:9
    frame #12: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::call::he5314677cd002415(self=(inner = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558e536f0), tx=0x00007f1338f907c8) at job.rs:31:9
    frame #13: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::then::_$u7b$$u7b$closure$u7d$$u7d$::hee839d6bf751337f(state=0x00007f1338f907c8) at job.rs:36:13
    frame #14: 0x000055e1fd83758f cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f((null)=0x00007f1339e20be0, (null)=(&cargo::core::compiler::job_queue::JobState) @ r15) at function.rs:227:5
    frame #15: 0x000055e1fd83759c cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h909d17ba2dd3a07c(self=Box<(dyn core::ops::function::FnOnce<(&cargo::core::compiler::job_queue::JobState), Output=core::result::Result<(), anyhow::Error>> + core::marker::Send), alloc::alloc::Global> @ 0x00007f5558e53650, args=(&cargo::core::compiler::job_queue::JobState) @ r15) at boxed.rs:1636:9
    frame #16: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::call::he5314677cd002415(self=(inner = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558e53650), tx=0x00007f1338f907c8) at job.rs:31:9
    frame #17: 0x000055e1fd837596 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f [inlined] cargo::core::compiler::job::Work::then::_$u7b$$u7b$closure$u7d$$u7d$::hee839d6bf751337f(state=0x00007f1338f907c8) at job.rs:36:13
    frame #18: 0x000055e1fd83758f cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hc40448ed66974f1f((null)=0x00007f1339e20c70, (null)=(&cargo::core::compiler::job_queue::JobState) @ r15) at function.rs:227:5
    frame #19: 0x000055e1fdabb099 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h909d17ba2dd3a07c(self=Box<(dyn core::ops::function::FnOnce<(&cargo::core::compiler::job_queue::JobState), Output=core::result::Result<(), anyhow::Error>> + core::marker::Send), alloc::alloc::Global> @ 0x00007f555ce452c0, args=(&cargo::core::compiler::job_queue::JobState) @ r15) at boxed.rs:1636:9
    frame #20: 0x000055e1fdabb093 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b [inlined] cargo::core::compiler::job::Work::call::he5314677cd002415(self=(inner = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f555ce452c0), tx=0x00007f1338f907c8) at job.rs:31:9
    frame #21: 0x000055e1fdabb093 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b [inlined] cargo::core::compiler::job::Job::run::heb80c1616b1f170f(state=0x00007f1338f907c8) at job.rs:62:9
    frame #22: 0x000055e1fdabb093 cargo`cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::he62fa79974c9987b(state=JobState @ 0x00007f1338f907c8) at job_queue.rs:1040:34
    frame #23: 0x000055e1fd83904f cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hda7669d323da0eaf [inlined] cargo::core::compiler::job_queue::DrainState::run::_$u7b$$u7b$closure$u7d$$u7d$::hfe68ac7b45ac03ab at job_queue.rs:1098:21
    frame #24: 0x000055e1fd838ffd cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hda7669d323da0eaf at thread.rs:437:31
    frame #25: 0x000055e1fd838fd0 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hda7669d323da0eaf((null)=0x00007f1339bcab80, (null)=<unavailable>) at function.rs:227:5
    frame #26: 0x000055e1fd81863e cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::h24259d003811a78b [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::ha48699e8e8187cbd(self=Box<(dyn core::ops::function::FnOnce<(), Output=()> + core::marker::Send), alloc::alloc::Global> @ 0x00007f555ce452c0) at boxed.rs:1636:9
    frame #27: 0x000055e1fd81863b cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::h24259d003811a78b [inlined] crossbeam_utils::thread::ScopedThreadBuilder::spawn::_$u7b$$u7b$closure$u7d$$u7d$::hbf07fd2ee35affe3 at thread.rs:449:44
    frame #28: 0x000055e1fd81863b cargo`std::sys_common::backtrace::__rust_begin_short_backtrace::h24259d003811a78b(f=(closure = alloc::boxed::Box<(dyn core::ops::function::FnOnce<>, alloc::alloc::Global> @ 0x00007f5558e53650)) at backtrace.rs:125:18
    frame #29: 0x000055e1fd8393ab cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h77f48923b3b16330 at mod.rs:481:17
    frame #30: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] _$LT$core..panic..unwind_safe..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::hcf85bf374106e3c6(self=<unavailable>) at unwind_safe.rs:271:9
    frame #31: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 at panicking.rs:403:40
    frame #32: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 at panicking.rs:367:19
    frame #33: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] std::panic::catch_unwind::hfac1c3b1661d60a6(f=<unavailable>) at panic.rs:129:14
    frame #34: 0x000055e1fd8393a6 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3 [inlined] std::thread::Builder::spawn_unchecked::_$u7b$$u7b$closure$u7d$$u7d$::hdfc489316504bc38 at mod.rs:480:30
    frame #35: 0x000055e1fd839350 cargo`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::hf941baa3d65828b3((null)=0x00007f13395a8fe0, (null)=<unavailable>) at function.rs:227:5
    frame #36: 0x000055e1fdfe7cb3 cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h8ffa971d6f77ff65(self=Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> @ 0x00007f5558e536f0) at boxed.rs:1636:9
    frame #37: 0x000055e1fdfe7cad cargo`std::sys::unix::thread::Thread::new::thread_start::he0eb13837439c3a8 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call

@12101111 12101111 added the C-bug Category: This is a bug. label Oct 4, 2021
@Mark-Simulacrum Mark-Simulacrum added this to the 1.56.0 milestone Oct 4, 2021
@Mark-Simulacrum Mark-Simulacrum added the regression-from-stable-to-beta Performance or correctness regression from stable to beta. label Oct 4, 2021
@rustbot rustbot added the I-prioritize Issue: Indicates that prioritization has been requested for this issue. label Oct 4, 2021
@Mark-Simulacrum
Member

cc @joshtriplett

Seems ... a little surprising? I wouldn't expect that PR to change cargo behavior.

@joshtriplett
Member

The most likely reason I can think of would be if the Gentoo sandbox tool doesn't understand the clone3 syscall.

@cuviper
Member

cuviper commented Oct 15, 2021

I can reproduce this in a gentoo container. On a RHEL 8 host (kernel 4.18, without clone3), it's fine. On Fedora 34 (kernel 5.14, with clone3) it's fine with -j1, sometimes hangs with -j2, and consistently hangs with more jobs (default 16 logical threads here). The child process is stuck here:

#0  0x00007f193f3bdf8b in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007f193f3b6fd3 in pthread_mutex_lock () from /lib64/libpthread.so.0
#2  0x00007f193f3f65d0 in ?? () from /usr/lib64/libsandbox.so
#3  0x00007f193f3fa2f2 in access () from /usr/lib64/libsandbox.so
#4  0x00007f193f3fb168 in execvp () from /usr/lib64/libsandbox.so
#5  0x0000560a24303bdf in std::sys::unix::process::process_common::Command::do_exec ()
    at library/std/src/sys/unix/process/process_unix.rs:367
#6  0x0000560a24303577 in std::sys::unix::process::process_common::Command::spawn ()
    at library/std/src/sys/unix/process/process_unix.rs:77
#7  0x0000560a242f5bdc in std::process::Command::spawn () at library/std/src/process.rs:879
#8  0x0000560a2413aafe in cargo_util::process_builder::ProcessBuilder::exec_with_streaming ()

@cuviper
Member

cuviper commented Oct 15, 2021

I don't know anything about libsandbox.so, but I am suspicious of the fact that we're bypassing the system fork without the same clone flags and other internal bookkeeping. The glibc implementation usually clones this way:

/* Call the clone syscall with fork semantic.  The CTID address is used
   to store the child thread ID at its locationm, to erase it in child memory
   when the child exits, and do a wakeup on the futex at that address.

   The architecture with non-default kernel abi semantic should correctlly
   override it with one of the supported calling convention (check generic
   kernel-features.h for the clone abi variants).  */
static inline pid_t
arch_fork (void *ctid)
{
  const int flags = CLONE_CHILD_SETTID | CLONE_CHILD_CLEARTID | SIGCHLD;

... where ctid comes from _Fork, arch_fork (&THREAD_SELF->tid), and afterward that also initializes the child's robust mutex list. There's further fork handling that's supposed to happen after that in __libc_fork too.

In musl they have similar cleanups in the child, manually calling gettid and clearing lock state.

I suppose it's generally not safe to call pthread_mutex_lock from a forked child, but libsandbox adds some low-level hooks to make it work -- I see it has its own fork and exec* functions. But we've half-way bypassed it here, with a manual clone3 syscall and then a library-level execvp.

I think it's also possible in general that some fork-safe libc functions will be unsafe if you've bypassed the library fork.

@cuviper
Member

cuviper commented Oct 15, 2021

Perhaps we should initialize HAS_CLONE3 to false while we work this out -- it's only useful for the unstable pidfd stuff anyway. Or we could only use clone3 when directly needed for pidfd, which will be ~never for now.

@Mark-Simulacrum
Member

I agree that it seems like a good idea to either revert the patch wholesale on 1.56 (current beta) or apply some minimal fix that avoids this problem while we discuss -- @cuviper would you be up for posting a direct-to-beta branch patch making that change?

Once that lands we can bump this to track 1.57 rather than the 1.56 milestone to make sure we still keep it tracked, but we probably want to apply the partial "revert" on master as well to avoid a continuous cherry pick dance.

@joshtriplett
Member

joshtriplett commented Oct 15, 2021

I just read through the source of the Gentoo sandbox tool, and I don't think this is an issue with bypassing libc at all; this seems like a bug in the libsandbox library. And as far as I can tell, it isn't specific to clone3; this would happen with any program calling clone instead of fork to spawn a new process, as well. I think it might also happen when calling posix_spawn on glibc, since glibc will use clone rather than fork for that.

libsandbox hooks fork, and uses that hook as what it describes as "a poor man's pthread_atfork()"; it acquires its lock, calls the function, then releases its lock, to ensure that the fork doesn't happen while another thread holds the lock. However, libsandbox does not hook either clone or clone3. I would guess that libsandbox assumes that clone is only used for threads. I think the correct fix here would be for libsandbox to hook both clone and clone3, and perform the same logic whenever the flags do not include CLONE_VM.

I would propose that we temporarily disable HAS_CLONE3 to give more time to work around this issue, but I don't think we should permanently work around this issue. I think it needs fixing in libsandbox.

@joshtriplett
Member

joshtriplett commented Oct 15, 2021

There's a similar issue at https://bugs.gentoo.org/807832 involving Java, and it appears to also involve clone3. Though, oddly, the strace there shows CLONE_VM, so that may be a different issue.

cuviper added a commit to cuviper/rust that referenced this issue Oct 15, 2021
In rust-lang#89522 we learned that `clone3` is interacting poorly with Gentoo's
`sandbox` tool. We only need that for the unstable pidfd extensions, so
otherwise avoid that and use a normal `fork`.
@cuviper
Member

cuviper commented Oct 15, 2021

See #89924 for restricting clone3 to pidfd needs.

I think the correct fix here would be for libsandbox to hook both clone and clone3, and perform the same logic whenever the flags do not include CLONE_VM.

I suppose they could hook clone, but they can't hook the raw clone3 syscall the same way, since it has no libc wrapper to interpose. I haven't looked at whether they're doing syscall filtering at all, but if so, is there anything they could do besides outright disallowing it?

But I'm still concerned about bypassing libc in general. Instead of "a poor man's pthread_atfork()", they could have used the real thing, and any other library might likewise be trying to do the right thing here. Even libc itself has internal state it's trying to maintain, like the TID and robust mutex list, so I fear we can't really call libc at all after this.

@joshtriplett
Member

That's true, clone3 would be harder to interpose. As far as I can tell, sandbox has support for tracing binaries from the outside by using ptrace, which seems substantially safer in any case. (I can think of other approaches to that kind of sandboxing, as well, which would be much more robust; a mount namespace and a filesystem, for instance, or an overlayfs and a "commit" operation.)

@codonell

The clone3 syscall can do many different semantically distinct operations depending on the flags passed to the syscall. I'm here as a libc author at @cuviper's request.

If a language runtime or library interposes fork it will have a very difficult time complying with all of the standards requirements and libc internals required to bring the forked process back to a consistent state once the fork is complete. Simply calling clone3 as an emulation of fork is possible, but once you do this you may never call a libc function again.

My understanding here is that Rust started using clone3-as-fork, and the sandbox library interposes execvp. In this situation the bug is in Rust for calling the C library execvp after clone3-as-fork. Once Rust calls clone3-as-fork it has diverged from the underlying C runtime and may not call another C library function.

In general once a process calls clone to emulate pthread_create, fork, or vfork and create a process or thread and bypasses libc in doing so, it may never call a libc function again. There are instances of this in the wild, including the resource-recapturing pseudo-thread that valgrind creates while it waits for all the system threads to exit. You must be extremely careful what you do in the new pseudo-thread/process, and the official answer is that you may never call back into libc once you've done that. The unofficial answer is that anything that doesn't touch the thread-register, or attempt to use resources from the parent, is likely to work, but it's not safe.

In this case it looks like libsandbox's wrapper calls pthread_mutex_lock, which is a libc function that will attempt to access the thread register and possibly look at global state, and this is in an inconsistent state because the libc atfork handlers have not run to return the runtime to a consistent state (worse if it's a multithreaded fork).

To answer @joshtriplett's question, calling 'posix_spawn' should not cause any problems because we call internal hidden versions of exec* family functions that cannot be interposed at the ELF level (we do this to provide a consistent implementation of posix_spawn and hide the internal detail that it calls exec* syscalls at some point to create the process).

Does that explanation help?

@cuviper
Member

cuviper commented Oct 15, 2021

Thanks @codonell!

Now looking forward, I think we're going to need a different way to get the pidfd before we can stabilize that API. That's just too big of a footgun, especially combined with the pre_exec callback that's only documented to require async-signal-safety. I'll add this to the tracking issue for that feature.

bors added a commit to rust-lang-ci/rust that referenced this issue Oct 16, 2021
Only use `clone3` when needed for pidfd

In rust-lang#89522 we learned that `clone3` is interacting poorly with Gentoo's
`sandbox` tool. We only need that for the unstable pidfd extensions, so
otherwise avoid that and use a normal `fork`.

r? `@Mark-Simulacrum`
@cuviper
Member

cuviper commented Oct 16, 2021

#89924 merged for beta, so I'm moving the milestone to 1.57. #89930 is for master, but if that doesn't land before the new branch, we'll need a new beta backport.

@cuviper cuviper modified the milestones: 1.56.0, 1.57.0 Oct 16, 2021
@vapier

vapier commented Oct 18, 2021

In this case it looks like libsandbox's wrapper calls pthread_mutex_lock, which is a libc function that will attempt to access the thread register and possibly look at global state, and this is in an inconsistent state because the libc atfork handlers have not run to return the runtime to a consistent state (worse if it's a multithreaded fork).

I think this analysis is misunderstanding what libsandbox is doing, which is not unreasonable when looking at a ~20-year-old code base with many maintainers.

The interposed fork() simply forces the ordering:

  • sandbox's pthread_mutex_lock
  • glibc's fork
  • sandbox's pthread_mutex_unlock

It doesn't implement fork itself or call such syscalls directly, so glibc's internal consistency is maintained.

@joshtriplett
Member

@vapier I think the concern here is that if Rust uses clone3 in place of fork, and the libc implementation doesn't have a wrapper for clone3, then even if libsandbox could interpose the syscall (e.g. via BPF or similar) it would potentially be an issue to call libc's pthread_mutex_unlock after clone3.

Do you have any thoughts on how the Gentoo sandbox tool could handle sandboxed applications that want to call clone3 to spawn a new process?

@richfelker

richfelker commented Oct 18, 2021

the interposed fork() simply forces the ordering:

  • sandbox's pthread_mutex_lock
  • glibc's fork
  • sandbox's pthread_mutex_unlock

This is undefined behavior unless the unlock is skipped in the child. You cannot call unlock on a mutex the calling thread does not own.

Do you have any thoughts on how the Gentoo sandbox tool could handle sandboxed applications that want to call clone3 to spawn a new process?

Block it with seccomp (in particular, force it to fail with ENOSYS) so that the application has to fall back to something that won't break things.

@joshtriplett
Member

joshtriplett commented Oct 18, 2021

@richfelker Which Linux libc implementations will this actually cause problems with in practice? It works with glibc's "fast" pthread_mutex implementation, and will certainly fail with the "error-checking" and "recursive" types. Does musl have a mutex type that acts like glibc's "fast" type and doesn't care where the unlock happens?

Would calling pthread_mutex_init unconditionally in the child work, to "unlock" the mutex by reinitializing it to an unlocked state without actually calling pthread_mutex_unlock?

Block it with seccomp (in particular, force it to fail with ENOSYS) so that the application has to fallback to something that won't break things.

That will not work for applications that are designed to run on recent kernels and expect clone3 to actually function as designed. Over time, some applications will simply require newer syscalls and fail without them, even if Rust has fallbacks for applications that don't need a pidfd.

I'm asking after a way to handle such applications, not break such applications.

(I'm wondering if the ideal solution would be for applications using clone3 to just be handled via the external ptrace mechanism.)

@richfelker

Which Linux libc implementations will this actually cause problems with in practice?

"Which...will this actually cause problems" is the wrong question to ask about UB. It will (or at least should) trap with an appropriate sanitizer. It will "happen to work" with current performance-optimized implementations simply because not recording the owner at all is currently the fastest behavior, but if that ever changed, trapping would be the preferred behavior for musl (generally we prefer for UB to immediately crash, but don't go out of our way making things slower just to achieve that, especially for UB that's not inherently a memory-safety issue).

That will not work for applications that are designed to run on recent kernels and expect clone3 to actually function as designed.

I guess we have different philosophies about application design. I believe they should have portable fallbacks, even at runtime, not to depend on non-baseline functionality, and should not hard-code "version" assumptions that might be invalid in a context that's not even Linux but another system providing a Linux-syscall-compatible interface -- or a sandbox that has legitimate sandboxing reason not to offer the new interface.

(I'm wondering if the ideal solution would be for applications using clone3 to just be handled via the external ptrace mechanism.)

I'm not sure how this works, so it might be viable, but it would break internal ptrace use or other external ptrace use by a debugger. I would consider breaking that to be a much worse sin than breaking clone3, which cannot be used by applications in any reasonable way to begin with unless they're written in asm.

@joshtriplett
Member

joshtriplett commented Oct 18, 2021

@richfelker I'm aware of the implications of UB. Rust's use of clone3 isn't itself directly invoking UB; libsandbox as an LD_PRELOADed library is invoking UB already in its handling of fork, and isn't handling clone3 at all, which makes the combination fail, so we're trying to find solutions that will work in practice. libsandbox, as far as I can tell, isn't designed to be a complete sandbox, just a best-effort sandbox for detecting issues in code run as part of gentoo build processes.

LD_PRELOAD libraries are already going to rely rather extensively on the specifics and internals of particular software and libraries, in order to do their job; I don't think it's at all unreasonable to ask the practical question of what the best path would be for that LD_PRELOAD library to break less in this circumstance. As I understand it, libsandbox is not necessarily expected to be 100% portable to all possible environments.

At the moment, we've gone from "any application spawning another process from Rust on Linux will use clone3" to "any application spawning another process from Rust on Linux and wanting a pidfd will use clone3", but that's still a legitimate thing for an application to expect to work. That isn't inherently broken; libsandbox's non-handling of clone3 is leading to a deadlock on libsandbox's internal lock.

I guess we have different philosophies about application design.

I'm not saying all applications should be written that way. I'm saying some applications will be written that way, and those applications are not broken. (If you believe that no applications should ever be written to require current kernel features, then yes, we have fundamentally different perspectives on applications and portability.) It's perfectly acceptable to write an application that says "requires Linux >= 5.x with xyz syscalls available", and such an application can simply exit with an error if a syscall provided by that kernel doesn't work. (Along similar lines, some applications will at some point start requiring io_uring, and may not necessarily wish to provide fallbacks.)

I'm very much hoping we can avoid straying into the specifics of libc design here, and just look for concrete solutions for making libsandbox and applications using clone/clone3 compatible with each other.

and should not hard-code "version" assumptions that might be invalid

I'm not suggesting that an application should detect Linux 5.x and assume that clone3 will work. I'm suggesting that an application may legitimately document that it requires Linux 5.x, and then call clone3 and simply fail on ENOSYS.

it would break internal ptrace use or other external ptrace use by a debugger

That seems less likely to come up inside of a gentoo build process being run under libsandbox, outside of an unusual test suite.

clone3, which cannot be used by applications in any reasonable way to begin with unless they're written in asm.

Using clone3 with CLONE_VM to spawn a thread would require some assembly or potentially very carefully written higher-level code. But using clone3 to spawn a new process, in the style of fork, does not require assembly. And if you need to spawn a process and obtain a pidfd, without knowing what other code in the same address space might be doing with SIGCHLD handling, you need CLONE_PIDFD and thus either clone or clone3.

@richfelker

At the moment, we've gone from "any application spawning another process from Rust on Linux will use clone3" to "any application spawning another process from Rust on Linux and wanting a pidfd will use clone3", but that's still a legitimate thing for an application to expect to work. That isn't inherently broken; libsandbox's non-handling of clone3 is leading to a deadlock on libsandbox's internal lock.

From my perspective, libsandbox is pretty much a red herring. It's just the way you happened to observe the problem. clone3 (or any raw syscall that makes a new process or thread behind the runtime's back) is inherently unsafe and there's basically nothing (except rolling your own syscalls in asm) you can do from the child context after doing so.

But using clone3 to spawn a new process, in the style of fork, does not require assembly.

It does. Not because of the stack or memory sharing issue, but because the contents of the TCB are no longer valid in the child process, and because you cannot call libc (or anything that might cause code in the dynamic linker to run) without a valid TCB.

@richfelker

And if you need to spawn a process and obtain a pidfd, without knowing what other code in the same address space might be doing with SIGCHLD handling, you need CLONE_PIDFD and thus either clone or clone3.

As explained above, you can't use clone or clone3 here. You can, however, avoid the "SIGCHLD handling" issue you're concerned about. Set up a communication channel between parent and child with a unix socketpair and have the child open its own pidfd and send it back via SCM_RIGHTS before performing execve. Alternatively, just synchronize over the communication channel and have the parent actually open the pidfd.

@joshtriplett
Member

joshtriplett commented Oct 18, 2021

As far as I can tell, clone and clone3 are both non-portable syscalls with no associated standard, which would tend to make their behavior more of a practical consideration than a theoretical one. Could you please point to what standard or similar that you're using to treat them as having undefined behavior in this context? It seems like you're referencing implementation-specific behavior of specific libc implementations (insofar as whether any given function expects to access TLS/TCB at any specific time), rather than referencing a standard or similar. That's perfectly fine, but then it seems reasonable to further reference the actual behavior of specific libc implementations regarding whether code works in practice.

When it comes to something specified in POSIX or SUS or similar, I can understand carefully scrutinizing what the standard defines and doesn't define, and then hesitating before delving further into implementation-specific additional functionality and permissiveness. But in this case, as far as I can tell, I'm not aware of any specific standard making this behavior verboten, I know that real applications beyond just Rust already actually use this behavior in practice and are likely to continue to do so, and I know that this behavior does in fact work in libc implementations. In practice, it seems like what arose here is not an issue with either Rust's behavior or any given libc implementation, but with a particular LD_PRELOAD interposer library whose behavior is itself relying on internals of libc implementations in a way that happens to be incompatible.

Rather than explore more elaborate ways that this could theoretically fail, I think it'd be helpful to look at the practical consequences in C libraries. In practice, it appears to be a non-issue to call (for instance) execv after clone. For instance, the following code appears to compile and run correctly in both glibc and musl:

#define _GNU_SOURCE

#include <err.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int child_func(void *ptr_param)
{
    unsigned long param = (unsigned long)ptr_param;
    fprintf(stderr, "child: param=%lu; calling execv\n", param);
    char *const argv[] = { "echo", "hello from echo executed by the child process", NULL };
    execv("/bin/echo", argv);
    exit(1);
}

unsigned char stack[16384];

int main(void)
{
    int pidfd = -1;
    int pid = clone(child_func, stack + 16384 - 16, CLONE_PIDFD, (void *)42UL, &pidfd);
    if (pid == -1)
        err(1, "clone");
    printf("parent: pid=%i pidfd=%i\n", pid, pidfd);
    return 0;
}

On both musl and glibc, this produces output like the following (other than varying PIDs):

parent: pid=732327 pidfd=3
child: param=42; calling execv
hello from echo executed by the child process

In practice, what is wrong with the above program, other than the fact that a library using LD_PRELOAD to interpose fork and not clone may have issues with it?

For completeness, here's a similar program using clone3:

#define _GNU_SOURCE

#include <err.h>
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <syscall.h>
#include <unistd.h>

struct clone_args {
    uint64_t flags;
    uint64_t pidfd;
    uint64_t child_tid;
    uint64_t parent_tid;
    uint64_t exit_signal;
    uint64_t stack;
    uint64_t stack_size;
    uint64_t tls;
    uint64_t set_tid;
    uint64_t set_tid_size;
    uint64_t cgroup;
};

int main(void)
{
    int pidfd = -1;
    struct clone_args clone_args = {
        .flags = CLONE_PIDFD,
        .pidfd = (uint64_t)&pidfd,
    };
    long ret = syscall(SYS_clone3, &clone_args, sizeof(clone_args));
    if (ret == -1)
        err(1, "clone3");
    else if (ret == 0) {
        fprintf(stderr, "child: calling execv\n");
        char *const argv[] = { "echo", "hello from echo executed by the child process", NULL };
        execv("/bin/echo", argv);
        exit(1);
    }
    printf("parent: pid=%ld pidfd=%i\n", ret, pidfd);
    return 0;
}

@cuviper
Member

cuviper commented Oct 18, 2021

there's basically nothing (except rolling your own syscalls in asm) you can do from the child context after doing so.

To wit, note that even our SYS_clone3 call goes through the libc syscall(2) wrapper. In order to follow up with a libc-free SYS_exec*, we would really need a raw sysenter/whatever for each target.

Rather than explore more elaborate ways that this could theoretically fail, I think it'd be helpful to look at the practical consequences in C libraries. In practice, it appears to be a non-issue to call (for instance) execv after clone. For instance, the following code appears to compile and run correctly in both glibc and musl:

@joshtriplett -- we argue against this kind of Rust UB analysis all the time! Even in testing this reported issue, I found that -j2 only locked up some of the time. If it's not specified, then we should at least look at the libc implementation to convince ourselves there's no data race in a particular approach, in line with Carlos's "unofficial answer", but even then it's harder to predict whether that's likely to remain true.

@joshtriplett
Member

@cuviper I absolutely agree! I'm not proposing that we ignore a real but rare race. I'm proposing that we consider whether there exists a problem at all with a given function or combination of functions.

@codonell

As far as I can tell, clone and clone3 are both non-portable syscalls with no associated standard, which would tend to make their behavior more of a practical consideration than a theoretical one. Could you please point to what standard or similar that you're using to treat them as having undefined behavior in this context? It seems like you're referencing implementation-specific behavior of specific libc implementations (insofar as whether any given function expects to access TLS/TCB at any specific time), rather than referencing a standard or similar. That's perfectly fine, but then it seems reasonable to further reference the actual behavior of specific libc implementations regarding whether code works in practice.

The only reference to UB in this discussion is the unlock of the lock via pthread_mutex_unlock in a thread or process that did not own the lock.

The specific code in question comes from sandbox libsandbox/wrapper-funcs/fork.c:

 16 #define WRAPPER_PRE_CHECKS() \
 17 ({ \
 18         /* pthread_atfork(sb_lock, sb_unlock, sb_unlock); */ \
 19         sb_lock(); \
 20         result = SB_HIDDEN_FUNC(WRAPPER_NAME)(WRAPPER_ARGS_FULL); \
 21         sb_unlock(); \
 22         false; \
 23 })

The emulated pthread_atfork as defined in the comment is UB because, as defined by POSIX, you may not unlock in the child a lock that was held by the parent.

When it comes to something specified in POSIX or SUS or similar, I can understand carefully scrutinizing what the standard defines and doesn't define, and then hesitating before delving further into implementation-specific additional functionality and permissiveness. But in this case, as far as I can tell, I'm not aware of any specific standard making this behavior verboten, I know that real applications beyond just Rust already actually use this behavior in practice and are likely to continue to do so, and I know that this behavior does in fact work in libc implementations. In practice, it seems like what arose here is not an issue with either Rust's behavior or any given libc implementation, but with a particular LD_PRELOAD interoposer library whose behavior is itself relying on internals of libc implementations in a way that happens to be incompatible.

I am telling you it is verboten.

The design and usage of kernel syscalls as the building blocks for a language runtime are fuzzy at best.

You appear to be asking, as a Rust core language developer, where the contract between the C runtime and Rust runtime is defined in such a way that it appears clear you cannot do what you want to do.

There is no such definition. We haven't written it down anywhere, but as two libc developers are telling you: there are syscalls that, if you call them, irrevocably leave the C runtime behind, and this is a concrete issue.

POSIX, SUS, ISO C, ISO C++... they will all avoid saying anything about this issue, because they do not want to get involved in the discrete semantics that exist between a given kernel and the runtime implementation.

For example, Rust uses __pthread_get_minstack@@GLIBC_PRIVATE and never negotiated the interface. We haven't removed it because we respect the Rust developers, and want to design something open and future-proof before moving forward.

The definition of what is allowed exists between us as developers.

Rather than explore more elaborate ways that this could theoretically fail, I think it'd be helpful to look at the practical consequences in C libraries. In practice, it appears to be a non-issue to call (for instance) execv after clone. For instance, the following code appears to compile and run correctly in both glibc and musl:

This is not how language implementations are designed.

Design is done at a higher level to allow room for implementations to change and grow.

#define _GNU_SOURCE

#include <err.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int child_func(void *ptr_param)
{
    unsigned long param = (unsigned long)ptr_param;
    fprintf(stderr, "child: param=%lu; calling execv\n", param);
    char *const argv[] = { "echo", "hello from echo executed by the child process", NULL };
    execv("/bin/echo", argv);
    exit(1);
}

At this point you have created a child process on your own.

The child doesn't exist from the perspective of the C runtime.

When execv fails, and it will fail, where does the thread local error value for errno get written?

This will immediately crash or corrupt state if the child execv fails because the write to errno (required by POSIX for the failing execv) is required to store the state of errno per-thread, and at this point you have failed to create a new C runtime thread (because you bypassed the normal process for doing this).

If you want I can follow up with the Linux Man Pages project and add subsequent sections to man2/clone.2 to indicate how calling these functions may leave the C runtime behind.

Generally we have used the Linux Man Pages project, carefully stewarded by Michael Kerrisk, as the place to iron out such discussions about ABI and API (we had a particularly long discussion about allowable futex behaviour there).

@cuviper
Member

cuviper commented Oct 18, 2021

If you want I can follow up with the Linux Man Pages project and add subsequent sections to man2/clone.2 to indicate how calling these functions may leave the C runtime behind.

I think that would be valuable! If there are a number of syscalls like this, perhaps it would also be good to have a note in syscalls(2).

(we had a particularly very very long discussion about allowable futex behaviour there)

Hmm, Rust is using bare futex for thread parking, so I wonder what "allowable" behavior you're referring to...

@vapier

vapier commented Oct 18, 2021

Do you have any thoughts on how the Gentoo sandbox tool could handle sandboxed applications that want to call clone3 to spawn a new process?

that's being discussed in the Gentoo bug tracker. i don't think it needs to be hashed out here too when the project in question (rust) isn't specific to Gentoo. let rust worry about the systems/standards they're supporting, and forget about Gentoo's sandbox entirely.

the interposed fork() simply forces the ordering:

  • sandbox's pthread_mutex_lock
  • glibc's fork
  • sandbox's pthread_mutex_unlock

This is undefined behavior unless the unlock is skipped in the child. You cannot call unlock on a mutex the calling thread does not own.

let's assume you're right for the sake of argument. i could not care less. libsandbox does not target POSIX compliant or standard abstract machines -- it is full of gnarly low level checks & API assumptions that go far below the standard. it only needs to run on the subset of real world configs that Gentoo supports. so we can look into this further if such a real world configuration actually materializes.

@joshtriplett
Member

@codonell I apologize, that was not at all what I was going for, and I didn't intend to come across as though I was trying to give no future wiggle-room in the implementation of libc interfaces. Quite the opposite: I was trying to understand what would go wrong, in order to understand what the issue was and how to work around it. I would very much like to find ways to make use of recent Linux kernel functionality without causing problems within the implementation of C libraries, without having to implement elaborate or suboptimal workarounds, and with as much reasonable future-proofing as we can to allow for the flexible evolution of both Rust and C libraries.

My intent was to better understand what the issue was, not to declare it a non-issue. I was not attempting to be sarcastic when I asked about specifications or about what was wrong with the code I posted, but re-reading my message I feel like I came across that way. Again, my apologies.

I'm used to seeing two different kinds of issues in interfaces like this:

  1. Undefined behavior by spec: even though something works in practice with specific implementations, it's reserved and some implementations may well have different behavior that will cause breakage. This case can sometimes be worth poking at if there's an important functionality gap and in practice all implementations (or all relevant implementations) make stronger guarantees and are unlikely to ever do otherwise, but even then we try to be cautious about such assumptions.
  2. Undefined behavior by implementation: there's no specification, but in practice there's a practical problem that can arise with certain usage, and that problem may not be immediately obvious. There may also be issues related to future flexibility or evolution of interfaces, which fall closer to case (1). Either way, this case is certainly broader than just what appears to work at first; just because it appears to work doesn't mean it will work in the future or in all corner cases.

I had previously been assuming that we were in case (2), and was trying to understand what the practical issue and considerations were; that was what led me to post code that appeared to me to work, to understand what I was missing.

Before your most recent message, it didn't seem clear to me what the practical issue with calls to clone/clone3 was, and in previous messages the proscription of clone/clone3 had seemed less like an explanation and more like a pronouncement ("this is not allowed" rather than "here's what will actually go wrong"), which led me to wonder whether I was misunderstanding and we were in case (1), with a spec violation I was not aware of. Because of that, I had been wondering what specification might apply, what the boundaries of that spec were, and whether we might be able to work around the issue for practical purposes, since we have targets specific to particular C libraries rather than to any arbitrary C library (hence my attempts to seek practical implementation-specific issues).

FWIW, Rust also runs into this kind of issue regularly from the perspective of being the interface-provider as well, and I tend to be one of the folks pushing for Rust to be more permissive about what it allows and more minimal about what it leaves undefined, in order to support unusual low-level code. So, I'm not trying to apply a different approach to the interfaces we consume than the interfaces we produce. ;)

Your message clarified exactly the practical issues I was wondering about (e.g. where does errno get written to). Failures in the error case make perfect sense, and explain how code can appear to work at first but still have non-obvious problems.

In the case of a call to clone or clone3 that does not pass CLONE_VM, shouldn't the answer be that the forked child process writes to the same errno location that the parent would have used? After a failed call to execv, the child appears to have a correct errno value, and &errno has the same value in the child that it does in the parent. (I modified the code to execv a binary that doesn't exist, and errno was ENOENT.) Would it potentially be feasible to ensure that the TLS state of the thread that called clone/clone3 (and only that thread, since other threads will not exist in the new process) remains valid in the new process? (That wouldn't allow creating a thread without going through the C library, which would be a separate issue, but it could allow safely creating a new process without going through the C library.)


I've seen some discussions going around (such as at LPC last year) about how Linux C libraries could expose clone/clone3 wrappers that would allow for this kind of usage. What's the current state of those discussions? Is there a potential approach to a clone3 wrapper we could use in the future, and would help implementing such a wrapper be useful? Or, is there some potential interface that could allow notifying the C library of a new thread after having created it? Or, more minimally, an interface to provide a location for errno, which might allow calling a subset of the C library that doesn't need other thread-local values?

(Not for the first time I find myself wishing there was an alternate C library interface that did away with errno entirely, in favor of something more like the underlying syscalls: returning an error value.)

@richfelker

Could you please point to what standard or similar that you're using to treat them as having undefined behavior in this context? It seems like you're referencing implementation-specific behavior of specific libc implementations (insofar as whether any given function expects to access TLS/TCB at any specific time), rather than referencing a standard or similar.

It's undefined because we don't define the behavior of going behind the implementation's back and breaking invariants it depends on. It's as simple as that. It does not need to be spelled out explicitly that "X is UB". Yes "the TCB" is an implementation detail. That's not the point. The point is that you're creating a process where you intend to use part of the implementation, which depends on its own implementation details, with some of those details having been undermined via making a syscall that produces an inconsistent state.

@richfelker

I've seen some discussions going around (such as at LPC last year) about how Linux C libraries could expose clone/clone3 wrappers that would allow for this kind of usage. What's the current state of those discussions? Is there a potential approach to a clone3 wrapper we could use in the future, and would help implementing such a wrapper be useful?

Yes, a clone3 function would solve the problem. However, right now both glibc and musl lack any clear model for how the existing clone function should work, and the current implementation (in at least musl, and I think in both) is missing locking that really should be present to give even minimal consistency analogous to _Fork. Once that's solved, I don't see a good reason the same couldn't be done for clone3.

@brauner

brauner commented Oct 20, 2021

I've seen some discussions going around (such as at LPC last year) about how Linux C libraries could expose clone/clone3 wrappers that would allow for this kind of usage. What's the current state of those discussions? Is there a potential approach to a clone3 wrapper we could use in the future, and would help implementing such a wrapper be useful? Or, is there some potential interface that could allow notifying the C library of a new thread after having created it? Or, more minimally, an interface to provide a location for errno, which might allow calling a subset of the C library that doesn't need other thread-local values?

Hey, I'm the original author of clone3() and I gave a session about this at last year's LPC. So far, two things have happened. There was a brief discussion after an RFE filed by Lennart to make use of clone3() in systemd: https://sourceware.org/bugzilla/show_bug.cgi?id=26371
and glibc did gain an internal clone3() wrapper: https://public-inbox.org/libc-alpha/20210214224505.4448-1-hjl.tools@gmail.com
This work is merged, but it is not exposed as public API.

@gyakovlev

gyakovlev commented Oct 21, 2021

looks like commits from #89924 made it to 1.56.0

I've added 1.56.0 to gentoo today, tested with sandbox-2.25, seems to be ok so far.
I could build firefox (which does not use cargo, I think), ripgrep, and a couple of other apps.

@cuviper
Member

cuviper commented Oct 21, 2021

Yes, but master and beta-1.57 do not have any fix yet, so we need to keep tracking this.

@apiraino
Contributor

Assigning priority as discussed in the Zulip thread of the Prioritization Working Group.

@rustbot label -I-prioritize +P-medium +T-compiler

@rustbot rustbot added P-medium Medium priority T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. and removed I-prioritize Issue: Indicates that prioritization has been requested for this issue. labels Oct 28, 2021
cuviper added a commit to cuviper/rust that referenced this issue Nov 5, 2021
In rust-lang#89522 we learned that `clone3` is interacting poorly with Gentoo's
`sandbox` tool. We only need that for the unstable pidfd extensions, so
otherwise avoid that and use a normal `fork`.
matthiaskrgr added a commit to matthiaskrgr/rust that referenced this issue Nov 10, 2021
Only use `clone3` when needed for pidfd

In rust-lang#89522 we learned that `clone3` is interacting poorly with Gentoo's
`sandbox` tool. We only need that for the unstable pidfd extensions, so
otherwise avoid that and use a normal `fork`.

This is a re-application of beta rust-lang#89924, now that we're aware that we need
more than just a temporary release fix. I also reverted 12fbabd, as
that was just fallout from using `clone3` instead of `fork`.

r? `@Mark-Simulacrum`
cc `@joshtriplett`
cuviper added a commit to cuviper/rust that referenced this issue Nov 16, 2021
In rust-lang#89522 we learned that `clone3` is interacting poorly with Gentoo's
`sandbox` tool. We only need that for the unstable pidfd extensions, so
otherwise avoid that and use a normal `fork`.

(cherry picked from commit 85b55ce)
@cuviper
Member

cuviper commented Nov 19, 2021

Backported in #90938.
