
Data race detected on a very early allocation (via bevy_asset v0.6.0) #2020


Closed
saethlin opened this issue Mar 12, 2022 · 3 comments · Fixed by #2142

Comments

@saethlin
Member

saethlin commented Mar 12, 2022

I've run cargo miri test on a lot of published crates, and exactly once I saw a data race, when testing bevy_asset version 0.6.0. Here's the output I collected:

[INFO] [stderr]      Running unittests (/opt/rustwide/target/miri/x86_64-unknown-linux-gnu/debug/deps/bevy_asset-ed299afda737c4dc)
[INFO] [stdout] 
[INFO] [stdout] running 12 tests
[INFO] [stderr] warning: thread support is experimental and incomplete: weak memory effects are not emulated.
[INFO] [stderr] 
[INFO] [stderr] thread 'rustc' panicked at 'called `Result::unwrap()` on an `Err` value: InterpErrorInfo(InterpErrorInfoInner { kind: Data race detected between Deallocate on Thread(id = 1) and Read on Thread(id = 0, name = "main") at alloc245+0x13 (current vector clock = VClock([10, 2]), conflicting timestamp = VClock([15])), backtrace: None })', src/tools/miri/src/eval.rs:321:32
[INFO] [stderr] note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
[INFO] [stderr] 
[INFO] [stderr] error: internal compiler error: unexpected panic
[INFO] [stderr] 
[INFO] [stderr] note: the compiler unexpectedly panicked. this is a bug.
[INFO] [stderr] 
[INFO] [stderr] note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
[INFO] [stderr] 
[INFO] [stderr] note: rustc 1.60.0-nightly (6abb6385b 2022-01-26) running on x86_64-unknown-linux-gnu
[INFO] [stderr] 
[INFO] [stderr] note: compiler flags: -Z randomize-layout -Z miri-disable-isolation -Z miri-ignore-leaks -Z miri-check-number-validity -Z miri-tag-raw-pointers -C embed-bitcode=no -C debuginfo=2 -C incremental
[INFO] [stderr] 
[INFO] [stderr] note: some of the compiler flags provided by cargo are hidden
[INFO] [stderr] 
[INFO] [stderr] query stack during panic:
[INFO] [stderr] end of query stack
[INFO] [stderr] warning: 1 warning emitted
[INFO] [stderr] 
[INFO] [stderr] error: test failed, to rerun pass '--lib'
[INFO] [stdout] test asset_server::test::case_insensitive_extensions ... 

This strikes me as suspicious, because that allocation ID is too low to be part of this crate's tests. It's somehow part of the Rust runtime.

This was from a run with commit deb9bfd, so this panic comes from

EnvVars::cleanup(&mut ecx).unwrap();

From @RalfJung on the Zulip:

and it says some thread read an env var without being synchronized with program termination, which I guess can happen if there is a background thread still lingering around that has not been joined?

This seems plausible to me, because a lot of programs and test suites leak threads (hence the -Z miri-ignore-leaks above). But it's still hard to understand how this could cause a data race. Judging from the output, this is not some odd state left behind by a panic, because the only panic here is the data race report itself.
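To make the suspected scenario concrete, here is a minimal sketch (my own construction, not the bevy_asset test) of the pattern described: a background thread reads an environment variable and is never joined, so it may still be lingering when the interpreter tears down the env var allocations at program exit.

use std::thread;
use std::time::Duration;

fn main() {
    // Deliberately leaked thread: it is never joined (hence -Zmiri-ignore-leaks).
    thread::spawn(|| {
        // Unsynchronized read of the allocation backing the environment.
        let _ = std::env::var("HOME");
        thread::sleep(Duration::from_millis(10));
    });
    // main returns while the spawned thread may still be running; at shutdown,
    // EnvVars::cleanup deallocates the env var memory without synchronizing
    // with that thread, which is what the race report would point at.
}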


I've tried and failed to reproduce this. Unfortunately I don't have the Cargo.lock that I used for this run, so it may or may not be relevant. The packaged crate does ship a Cargo.lock, but when testing crates I delete it after the download.

@RalfJung
Member

But it's still hard to understand how this could cause a data race.

The actions the machine takes when it cleans up and deallocates the env vars do not acquire the env var lock. Thus they are, technically, racing with some other thread that recently accessed those same locations.

It would probably make most sense to just disable the race checker for these "administrative" actions -- the machine has halted at this point and no threads are running, so there cannot actually be any races.
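For illustration, a hypothetical sketch of that idea (the names below are invented for this sketch and are not Miri's actual API): suppress the race detector for the duration of the shutdown-time cleanup, then restore its previous state.

// Hypothetical types/functions, invented for this sketch only.
struct RaceDetector {
    enabled: bool,
}

impl RaceDetector {
    // Run `f` with race checking disabled, restoring the previous state after.
    fn with_races_allowed<R>(&mut self, f: impl FnOnce(&mut Self) -> R) -> R {
        let previous = std::mem::replace(&mut self.enabled, false);
        let result = f(self);
        self.enabled = previous;
        result
    }
}

// At machine shutdown, after all threads have stopped, the env var cleanup
// would run inside such a scope so its deallocations are never reported:
// detector.with_races_allowed(|_| env_vars_cleanup());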

@saethlin
Member Author

saethlin commented Apr 8, 2022

I have now seen this again!

I cloned https://github.com/tokio-rs/tokio (current commit 83477c725acbb6a0da54afc26c67a8bd57e3e0b9) and ran

RUST_BACKTRACE=0 MIRIFLAGS="-Zmiri-strict-provenance -Zmiri-check-number-validity -Zmiri-disable-isolation -Zmiri-panic-on-unsupported" cargo +miri miri test --no-fail-fast --all-features
     Running tests/context.rs (/tmp/tokio/target/miri/x86_64-unknown-linux-gnu/debug/deps/context-d7c401ad407e7aa3)

running 1 test
test tokio_context_with_another_runtime ... warning: thread support is experimental and incomplete: weak memory effects are not emulated.

thread 'rustc' panicked at 'called `Result::unwrap()` on an `Err` value: InterpErrorInfo(InterpErrorInfoInner { kind: Data race detected between Deallocate on Thread(id = 1) and Read on Thread(id = 0, name = "main") at alloc65+0xf (current vector clock = VClock([12, 2]), conflicting timestamp = VClock([32])), backtrace: None })', src/eval.rs:362:32
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

error: internal compiler error: unexpected panic

note: the compiler unexpectedly panicked. this is a bug.

note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md

note: rustc 1.61.0-nightly (6af09d250 2022-04-03) running on x86_64-unknown-linux-gnu

note: compiler flags: -C embed-bitcode=no -C debuginfo=2 -C incremental -Z miri-strict-provenance -Z miri-check-number-validity -Z miri-disable-isolation -Z miri-panic-on-unsupported

note: some of the compiler flags provided by cargo are hidden

query stack during panic:
end of query stack

@saethlin
Member Author

saethlin commented Apr 8, 2022

Also this one reproduces, which is nice.
