
Rust nightly 2018-08-17 or later causes random segmentation faults or panics #53529

Closed
yorickpeterse opened this issue Aug 20, 2018 · 22 comments · Fixed by #53571

@yorickpeterse

yorickpeterse commented Aug 20, 2018

https://gitlab.com/inko-lang/inko is a programming language that I am working on, and its VM is written in Rust. Everything works fine up to and including the 2018-08-16 nightly. Starting with the nightly from the 17th, I'm observing various crashes and changes in program behaviour. For example:

  • On Windows it will either fail with a memory allocation error, or an error in the runtime test library (more on this in a moment).
  • On Linux it will segfault. Note that the funny segfault output is because the command is started with Ruby, and Ruby installs its own segmentation fault handler.
  • Locally it will usually fail with the same runtime error as observed on Windows above, but sometimes it will segfault. Sometimes it will panic because certain operations are performed on NULL pointers where none are expected.

The last nightly that did not suffer from these problems was Rust 2018-08-16. Stable Rust also works fine. When the segmentation faults happen, they are usually in different places. For example, one segmentation fault has the following backtrace:

#0  0x00007ffff7e12763 in _int_malloc () from /usr/lib/libc.so.6
#1  0x00007ffff7e13ada in malloc () from /usr/lib/libc.so.6
#2  0x0000555555568e6b in alloc::alloc::alloc (layout=...) at /checkout/src/liballoc/alloc.rs:78
#3  <libinko::chunk::Chunk<T>>::new (capacity=3) at src/chunk.rs:29
#4  libinko::register::Register::new (amount=3) at src/register.rs:23
#5  libinko::execution_context::ExecutionContext::from_block (block=0x7fffdc0971e0, return_register=Some = {...}) at src/execution_context.rs:60
#6  libinko::vm::machine::Machine::run (self=<optimized out>, process=<optimized out>) at src/vm/machine.rs:2350
#7  0x0000555555568b7f in libinko::vm::machine::Machine::run_with_error_handling (self=0x55555567d8c0, process=0x7ffff75fcbd0) at src/vm/machine.rs:351
#8  0x00005555555c88a4 in libinko::vm::machine::Machine::start_primary_threads::{{closure}} (process=...) at src/vm/machine.rs:260
#9  <libinko::pool::PoolInner<T>>::process (self=<optimized out>, index=0, closure=0x7ffff75fcc60) at src/pool.rs:186
#10 0x00005555555b739d in <libinko::pool::Pool<T>>::run::{{closure}} () at src/pool.rs:126
#11 std::sys_common::backtrace::__rust_begin_short_backtrace (f=...) at /checkout/src/libstd/sys_common/backtrace.rs:136
#12 0x00005555555cb0dc in std::thread::Builder::spawn::{{closure}}::{{closure}} () at /checkout/src/libstd/thread/mod.rs:409
#13 <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=<optimized out>) at /checkout/src/libstd/panic.rs:313
#14 std::panicking::try::do_call (data=<optimized out>) at /checkout/src/libstd/panicking.rs:310
#15 0x0000555555618a3a in __rust_maybe_catch_panic () at libpanic_unwind/lib.rs:102
#16 0x00005555555ba39b in std::panicking::try (f=...) at /checkout/src/libstd/panicking.rs:289
#17 std::panic::catch_unwind (f=...) at /checkout/src/libstd/panic.rs:392
#18 std::thread::Builder::spawn::{{closure}} () at /checkout/src/libstd/thread/mod.rs:408
#19 <F as alloc::boxed::FnBox<A>>::call_box (self=0x55555567db50, args=<optimized out>) at /checkout/src/liballoc/boxed.rs:642
#20 0x00005555556090db in _$LT$alloc..boxed..Box$LT$$LP$dyn$u20$alloc..boxed..FnBox$LT$A$C$$u20$Output$u3d$R$GT$$u20$$u2b$$u20$$u27$a$RP$$GT$$u20$as$u20$core..ops..function..FnOnce$LT$A$GT$$GT$::call_once::h904fcd0dbdc71d4f () at /checkout/src/liballoc/boxed.rs:652
#21 std::sys_common::thread::start_thread () at libstd/sys_common/thread.rs:24
#22 0x00005555555f83b6 in std::sys::unix::thread::Thread::new::thread_start () at libstd/sys/unix/thread.rs:90
#23 0x00007ffff7f73a9d in start_thread () from /usr/lib/libpthread.so.0
#24 0x00007ffff7e89a43 in clone () from /usr/lib/libc.so.6

While for another segfault the backtrace is instead:

#0  libinko::runtime_panic::display_panic (process=0x7f, message="ObjectValue::as_block() called on a non block object") at src/runtime_panic.rs:11
#1  0x0000555555568baa in libinko::vm::machine::Machine::panic (self=0x55555567d8c0, process=0x7ffff71f8bd0, message="") at src/vm/machine.rs:3750
#2  libinko::vm::machine::Machine::run_with_error_handling (self=0x55555567d8c0, process=0x7ffff71f8bd0) at src/vm/machine.rs:352
#3  0x00005555555c88a4 in libinko::vm::machine::Machine::start_primary_threads::{{closure}} (process=...) at src/vm/machine.rs:260
#4  <libinko::pool::PoolInner<T>>::process (self=<optimized out>, index=4, closure=0x7ffff71f8c60) at src/pool.rs:186
#5  0x00005555555b739d in <libinko::pool::Pool<T>>::run::{{closure}} () at src/pool.rs:126
#6  std::sys_common::backtrace::__rust_begin_short_backtrace (f=...) at /checkout/src/libstd/sys_common/backtrace.rs:136
#7  0x00005555555cb0dc in std::thread::Builder::spawn::{{closure}}::{{closure}} () at /checkout/src/libstd/thread/mod.rs:409
#8  <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=<optimized out>) at /checkout/src/libstd/panic.rs:313
#9  std::panicking::try::do_call (data=<optimized out>) at /checkout/src/libstd/panicking.rs:310
#10 0x0000555555618a3a in __rust_maybe_catch_panic () at libpanic_unwind/lib.rs:102
#11 0x00005555555ba39b in std::panicking::try (f=...) at /checkout/src/libstd/panicking.rs:289
#12 std::panic::catch_unwind (f=...) at /checkout/src/libstd/panic.rs:392
#13 std::thread::Builder::spawn::{{closure}} () at /checkout/src/libstd/thread/mod.rs:408
#14 <F as alloc::boxed::FnBox<A>>::call_box (self=0x55555567e5d0, args=<optimized out>) at /checkout/src/liballoc/boxed.rs:642
#15 0x00005555556090db in _$LT$alloc..boxed..Box$LT$$LP$dyn$u20$alloc..boxed..FnBox$LT$A$C$$u20$Output$u3d$R$GT$$u20$$u2b$$u20$$u27$a$RP$$GT$$u20$as$u20$core..ops..function..FnOnce$LT$A$GT$$GT$::call_once::h904fcd0dbdc71d4f () at /checkout/src/liballoc/boxed.rs:652
#16 std::sys_common::thread::start_thread () at libstd/sys_common/thread.rs:24
#17 0x00005555555f83b6 in std::sys::unix::thread::Thread::new::thread_start () at libstd/sys/unix/thread.rs:90
#18 0x00007ffff7f73a9d in start_thread () from /usr/lib/libpthread.so.0
#19 0x00007ffff7e89a43 in clone () from /usr/lib/libc.so.6

And a third segfault:

#0  0x0000555555568d84 in libinko::vm::machine::Machine::run (self=<optimized out>, process=<optimized out>) at src/vm/machine.rs:388
#1  0x0000555555568b7f in libinko::vm::machine::Machine::run_with_error_handling (self=0x55555567d8c0, process=0x7ffff6cf3bd0) at src/vm/machine.rs:351
#2  0x00005555555c88a4 in libinko::vm::machine::Machine::start_primary_threads::{{closure}} (process=...) at src/vm/machine.rs:260
#3  <libinko::pool::PoolInner<T>>::process (self=<optimized out>, index=10, closure=0x7ffff6cf3c60) at src/pool.rs:186
#4  0x00005555555b739d in <libinko::pool::Pool<T>>::run::{{closure}} () at src/pool.rs:126
#5  std::sys_common::backtrace::__rust_begin_short_backtrace (f=...) at /checkout/src/libstd/sys_common/backtrace.rs:136
#6  0x00005555555cb0dc in std::thread::Builder::spawn::{{closure}}::{{closure}} () at /checkout/src/libstd/thread/mod.rs:409
#7  <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once (self=..., _args=<optimized out>) at /checkout/src/libstd/panic.rs:313
#8  std::panicking::try::do_call (data=<optimized out>) at /checkout/src/libstd/panicking.rs:310
#9  0x0000555555618a3a in __rust_maybe_catch_panic () at libpanic_unwind/lib.rs:102
#10 0x00005555555ba39b in std::panicking::try (f=...) at /checkout/src/libstd/panicking.rs:289
#11 std::panic::catch_unwind (f=...) at /checkout/src/libstd/panic.rs:392
#12 std::thread::Builder::spawn::{{closure}} () at /checkout/src/libstd/thread/mod.rs:408
#13 <F as alloc::boxed::FnBox<A>>::call_box (self=0x55555567f590, args=<optimized out>) at /checkout/src/liballoc/boxed.rs:642
#14 0x00005555556090db in _$LT$alloc..boxed..Box$LT$$LP$dyn$u20$alloc..boxed..FnBox$LT$A$C$$u20$Output$u3d$R$GT$$u20$$u2b$$u20$$u27$a$RP$$GT$$u20$as$u20$core..ops..function..FnOnce$LT$A$GT$$GT$::call_once::h904fcd0dbdc71d4f () at /checkout/src/liballoc/boxed.rs:652
#15 std::sys_common::thread::start_thread () at libstd/sys_common/thread.rs:24
#16 0x00005555555f83b6 in std::sys::unix::thread::Thread::new::thread_start () at libstd/sys/unix/thread.rs:90
#17 0x00007ffff7f73a9d in start_thread () from /usr/lib/libpthread.so.0
#18 0x00007ffff7e89a43 in clone () from /usr/lib/libc.so.6

In the case of the last segfault, it seems certain local variables are NULL pointers when this should be impossible. Debugging this in GDB proves quite difficult, as a variety of variables are reported as <optimized out> even when debugging symbols are included. For example, for the last backtrace the output of "info locals" is:

(gdb) info locals
instruction = 0x0
index = 1
code = <optimized out>
context = 0x7fffb4000c30
reductions = 984

The VM test suite passes, even when running cargo test --release. I'm wondering if perhaps the code is optimised in the wrong way and this is somehow not triggered by the test suite (certainly possible, as code coverage is not 100%).

Reproducing this is a bit weird. If we leave the code as-is, the segmentation faults rarely occur; instead, the VM panics with the following:

Stack trace (the most recent call comes last):
  0: "/home/yorickpeterse/Projects/inko/inko/runtime/src/std/process.inko", line 324, in "<block>"
  1: "/home/yorickpeterse/Projects/inko/inko/runtime/src/std/test/runner.inko", line 281, in "<lambda>"
  2: "/home/yorickpeterse/Projects/inko/inko/runtime/src/std/test/runner.inko", line 220, in "run"
Process 1 panicked: ObjectValue::as_block() called on a non block object

However, if we apply the following patch, things start to segfault very quickly:

diff --git a/runtime/src/std/test/runner.inko b/runtime/src/std/test/runner.inko
index 8175e2e..45fa998 100644
--- a/runtime/src/std/test/runner.inko
+++ b/runtime/src/std/test/runner.inko
@@ -217,6 +217,8 @@ object Runner {
   def run {
     let command = @receiver.receive

+    _INKOC.stdout_write(command.inspect + "\n")
+
     command.run(@state)

     @state.terminate?.if_true {

To reproduce:

  1. git clone https://gitlab.com/inko-lang/inko.git
  2. cd inko
  3. make -C vm profile
  4. curl https://gist.githubusercontent.com/YorickPeterse/2be478ab617ad02e9e2495130e8f32f0/raw/38ca8bcab963d5b9fc4d192e126f546bff0f6aa9/crash.patch | patch -p1 -N
  5. env RUBYLIB=./compiler/lib ./compiler/bin/inko-test -d runtime --vm vm/target/release/ivm

Note that the last command requires Ruby 2.3 or newer. It runs the standard library's test suite, which is where the crashes happen rather frequently (probably because it exercises much more code than the VM's own test suite).

@jonas-schievink
Contributor

There's a lot of questionable unsafe code in that project. For example, this function should really be marked as unsafe (the same goes for CompiledCode::instruction and many other functions), many uses of unsafe in the bytecode parser are really odd and can probably be replaced with safe alternatives, and the use of unsafe in the VM is an invitation for use-after-frees. It's very likely that the cause of the issue is pre-existing UB due to misuse of unsafe.

@yorickpeterse
Author

yorickpeterse commented Aug 20, 2018

@jonas-schievink Simply marking functions as unsafe doesn't, to my knowledge, influence the compiler's code generation. Many of the unsafe operations are safe in practice, as they will only fail if (for example) the input bytecode is invalid. Tagging them as unsafe does nothing but make the code significantly more verbose.

The various unsafe operations in the bytecode parser are there to convert sequences of bytes into particular types (sketched below). To the best of my knowledge there is no stable way of converting a [u8] into, say, a u64, which forces you to use mem::transmute instead. The closest is u64::from_bytes, but this:

  1. Requires nightly
  2. Assumes the platform's endianness, instead of allowing you to specify which to use.
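
For concreteness, here is a minimal sketch of the transmute-based conversion described above (an editor's illustration, not the project's actual parser code):

use std::mem;

fn read_u64_native(bytes: &[u8]) -> u64 {
    // Panics if the slice holds fewer than 8 bytes.
    let mut buf = [0u8; 8];
    buf.copy_from_slice(&bytes[..8]);
    // Reinterprets the bytes in the platform's native byte order, which
    // is exactly the endianness problem mentioned in point 2 above.
    unsafe { mem::transmute::<[u8; 8], u64>(buf) }
}

fn main() {
    // Prints 1 on a little-endian machine.
    println!("{}", read_u64_native(&[1, 0, 0, 0, 0, 0, 0, 0]));
}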

@jonas-schievink
Contributor

Simply marking functions as unsafe to my knowledge doesn't influence the compiler's code generation.

Yes, but marking a function as unsafe requires all call sites to opt into that unsafety, and the functions I mentioned all assume that their arguments are valid (and cause UB if not). This additional constraint on the arguments is expressed by marking the function unsafe and documenting the safety constraints.
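
As a hypothetical illustration of that convention (the type and method names here are invented for this example, not the project's real API):

pub struct Instruction;

pub struct CompiledCode {
    instructions: Vec<Instruction>,
}

impl CompiledCode {
    /// Returns the instruction at `index` without bounds checking.
    ///
    /// # Safety
    ///
    /// `index` must be in bounds for `self.instructions`; anything else
    /// is undefined behaviour.
    pub unsafe fn instruction_unchecked(&self, index: usize) -> &Instruction {
        self.instructions.get_unchecked(index)
    }
}

fn main() {
    let code = CompiledCode { instructions: vec![Instruction] };
    // The caller now has to opt in to the unsafety explicitly:
    let _first = unsafe { code.instruction_unchecked(0) };
}

Marking the function unsafe does not change code generation; it moves the proof obligation to the caller, which is the point being made above.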

The various unsafe operations in the bytecode parser are there to convert sequences of bytes into particular types. To the best of my knowledge there is no stable way of converting a [u8] into say a u64, requiring you to use mem::transmute instead.

Check out the byteorder crate.
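
For example (an editor's sketch assuming the byteorder crate as a dependency, not code from either project):

use byteorder::{BigEndian, ByteOrder, LittleEndian};

fn main() {
    let bytes = [1, 0, 0, 0, 0, 0, 0, 0];
    // No transmute, and the byte order is chosen by the caller instead
    // of being inherited from the platform.
    assert_eq!(LittleEndian::read_u64(&bytes), 1);
    assert_eq!(BigEndian::read_u64(&bytes), 1 << 56);
}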

@yorickpeterse
Author

Yes, but marking a function as unsafe requires all call sites to opt into that unsafety, and the functions I mentioned all assume that their arguments are valid (and cause UB if not). This additional constraint on the arguments is expressed by marking the function unsafe and documenting the safety constraints.

This wouldn't achieve much, as a lot would have to be marked unsafe: VMs and GCs inherently perform many operations Rust considers unsafe. There are a few cases where I want to get rid of riskier code (e.g. the borrowing of instruction you highlighted), but all of this feels rather off-topic since it's very project specific.

Check out the byteorder crate.

I avoided it in the past because it also just used mem::transmute, so it didn't feel like much of an improvement. It appears this is no longer the case, so I'll take another look.

@yorickpeterse
Author

To better understand what is going on I dumped the MIR output using nightly-2018-08-16-x86_64-unknown-linux-gnu and nightly-x86_64-unknown-linux-gnu (2018-08-18). After replacing a bunch of SHA hashes with placeholders (to reduce diff noise), I ended up with the exact same MIR before and after. Next I'll try LLVM IR.

@yorickpeterse
Author

It appears the LLVM IR orders things differently between the two nightly versions. This generates a lot of diff noise, so I'm not sure how useful this information will be. I tried to clean it up a bit by replacing various SHA hashes with placeholders, but even then there is a lot of noise.

@yorickpeterse
Author

The ASM output was a bit easier to clean up, but there are still lots of differences. To be exact, git diff --stat reports:

 /tmp/{before.s => after.s} | 97799 ++++++++++++++++++++++---------------------
 1 file changed, 48949 insertions(+), 48850 deletions(-)

Again, this is probably because of ordering. The new version has 99 more lines, but I'm really not sure whether that's expected.

@yorickpeterse
Author

yorickpeterse commented Aug 20, 2018

Using https://github.com/rust-lang-nursery/cargo-bisect-rustc I found that nightly-2018-08-19 is the first version that introduces the regression, while nightly-2018-08-18 works fine. The commits that landed between those two nightlies can be found at 1fa9449...33b923f (if I got things right, at least).
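
For reference, the invocation would look roughly like this (flag names taken from cargo-bisect-rustc's documentation; the exact interface in 2018 may have differed, and test-crash.sh stands in for a hypothetical script that exits non-zero when the crash reproduces):

cargo bisect-rustc --start=2018-08-16 --end=2018-08-19 --script=./test-crash.sh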

@jonas-schievink
Contributor

This hints at #53286, which can change the behaviour of any latent UB, or #52553, since your project uses VecDeque in a few places.

@yorickpeterse
Author

@jonas-schievink I don't fully understand the first mentioned PR. What would I need to do to verify if that has any impact?

For the second one, I'll see if I can somehow get my code to use a version of VecDeque from before that PR, to see if it affects anything.

@yorickpeterse
Author

A custom version of VecDeque probably isn't going to work. For example, I'm using Rayon which only implements its traits for the default VecDeque :<

@yorickpeterse
Author

I ended up replacing all my uses of VecDeque with LinkedList to test whether the VecDeque changes are to blame. I used LinkedList because its API is closest to VecDeque's, making it easier to test things. When using LinkedList, my code runs fine. This suggests that the changes in #52553 may have somehow broken things.

cc @Pazzaz

@jonas-schievink
Contributor

@yorickpeterse The first PR changes details about the generated code that is passed to LLVM. In fact, it just removes unneeded information. The easiest way to check if it's at fault is to manually build the compiler from nightly-2018-08-19 with the PR reverted. If the problem stops happening, it's at fault (or more likely, UB in your code is).

It would also be helpful if you could reduce the code as much as possible so that we can get a few hundred line example that shows the issue.

@yorickpeterse
Author

Specifically, the patch I applied is: https://gist.github.com/YorickPeterse/940db2613c77bec9738124e6df273320

@yorickpeterse
Author

yorickpeterse commented Aug 20, 2018

I narrowed this down to the following file being broken by the changes: https://gitlab.com/inko-lang/inko/blob/master/vm/src/mailbox.rs. If I switch back to VecDeque for this file, my code breaks.

There are two possibilities here:

  1. There was a pre-existing bug in my code that somehow wasn't triggered before
  2. The VecDeque changes themselves break things

The Mailbox data structure has multiple threads accessing it concurrently. This is synchronised using a mutex of type Mutex<()> (provided by parking_lot). The structure is broken up into essentially two parts: a synchronised external part, and an unsynchronised internal part. The internal part is only used by a single thread, while multiple threads can compete for the external part. For this reason the mutex is separate, so we don't need to synchronise both.

The more I look at this code, the more I'm suspecting option 1 is the problem. In particular, there's a piece of code that doesn't synchronise access to the external part, when in theory other threads could still write to it, although I don't think this is being triggered at the moment. Edit: I forgot, this is synchronised earlier on, preventing concurrent writes.

I'll see if I can refactor this code a bit so it's safer; that way we can at least rule out my own code as the problem (or not).
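
For readers following along, here is a minimal sketch of the layout described above (an editor's illustration with hypothetical names, not the actual vm/src/mailbox.rs; it also wraps the queue itself in std's Mutex rather than using a separate parking_lot Mutex<()> as the real code does):

use std::collections::VecDeque;
use std::sync::Mutex;

struct Mailbox<T> {
    // Many sender threads compete for this part, so it is locked.
    external: Mutex<VecDeque<T>>,
    // Only the single owning thread touches this part, so it is not.
    internal: VecDeque<T>,
}

impl<T> Mailbox<T> {
    fn send(&self, message: T) {
        self.external.lock().unwrap().push_back(message);
    }

    fn receive(&mut self) -> Option<T> {
        if self.internal.is_empty() {
            // Move all queued messages into the private part. This is
            // the VecDeque::append call discussed below.
            let mut external = self.external.lock().unwrap();
            self.internal.append(&mut external);
        }
        self.internal.pop_front()
    }
}

fn main() {
    let mut mailbox = Mailbox {
        external: Mutex::new(VecDeque::new()),
        internal: VecDeque::new(),
    };
    mailbox.send("hello");
    assert_eq!(mailbox.receive(), Some("hello"));
}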

@yorickpeterse
Author

Upon closer inspection, the VecDeque changes specifically affect VecDeque::append, and I happen to use that in my Mailbox structure. Specifically, I have the following:

self.internal.append(&mut self.external.drain(0..).collect());

I don't remember why this is using self.external.drain(0..) since self.external is also a VecDeque; I probably wrote this when not paying attention. Regardless, changing this to self.internal.append(&mut self.external); makes no difference to the errors.

Next I changed this code to the following:

self.internal.extend(self.external.drain(..));

This is based on the old implementation of VecDeque::append, which was:

self.extend(other.drain(..));

This then produces output like:

................................................................................................fish: “env RUBYLIB=./compiler/lib ./co…” terminated by signal SIGSEGV (Address boundary error)

This is a bit odd considering the implementation is the same as before, but at least part of the program now runs.
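
In self-contained form, the workaround pattern looks like this (a minimal sketch with hypothetical variable names; as the next comment notes, the remaining crash came from a second append call site):

use std::collections::VecDeque;

fn main() {
    let mut internal: VecDeque<u32> = VecDeque::new();
    let mut external: VecDeque<u32> = vec![1, 2, 3].into_iter().collect();

    // The old VecDeque::append body, spelled out by hand; this bypasses
    // the new (broken) append implementation entirely.
    internal.extend(external.drain(..));

    assert!(external.is_empty());
    assert_eq!(internal, VecDeque::from(vec![1, 2, 3]));
}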

@yorickpeterse
Author

There was another place where I relied on VecDeque::append. After also replacing that with the self.extend(other.drain(..)) pattern my code now runs fine again, both on nightly and stable Rust.

This strongly suggests the failure is due to the changes made to VecDeque::append. Now to see if I can reproduce it in a standalone example.

@MaloJaffre
Contributor

MaloJaffre commented Aug 20, 2018

Good news: thanks to the great investigation of @yorickpeterse and @jonas-schievink, I think I found the reason for the segfault: the new VecDeque::append calls unused_as_mut_slices, which unsoundly creates mutable slices to uninitialized memory.
I will try to send a PR with a fix tomorrow.

@yorickpeterse
Author

@MaloJaffre Aha, thanks! I was looking at the code to see how I might be able to trigger this problem, but I hadn't made my way to unused_as_mut_slices just yet. From a first look, I'm not sure exactly how to trigger it reliably.

@jonas-schievink
Contributor

@MaloJaffre That could be it, nice find!

@MaloJaffre
Contributor

The problem, AFAIK, is that this function triggers undefined behaviour, which can cause the program to segfault randomly, so it may be difficult to reproduce reliably.

@frewsxcv added the I-unsound and regression-from-stable-to-nightly labels Aug 21, 2018
@alexcrichton added this to the 1.30 milestone Aug 21, 2018
@Pazzaz
Contributor

Pazzaz commented Aug 21, 2018

I can verify that this is because of my PR. Minimal example that segfaults for me:

use std::collections::VecDeque;

fn main() {
    // Offset the ring buffer's head so `dst` is in the state that
    // triggers the edge case in the new `append`.
    let mut dst = VecDeque::new();
    dst.push_front(Box::new(1));
    dst.push_front(Box::new(2));
    dst.pop_back();

    let mut src = VecDeque::new();
    src.push_front(Box::new(0));
    dst.append(&mut src);
    // Iterating by value drops each Box; with the corrupted buffer
    // this frees invalid pointers and segfaults.
    for a in dst {
    }
}

And the reason for the segfault becomes apparent when run with simpler types:

use std::collections::VecDeque;

fn main() {
    let mut dst = VecDeque::new();
    dst.push_front(12);
    dst.push_front(1234);
    dst.pop_back();

    let mut src = VecDeque::new();
    src.push_front(1);
    dst.append(&mut src);
    for (i, a) in dst.iter().enumerate() {
        println!("{}, {}", i, a);
    }
}

This will print

0, 1234
1, 1
2, 0
3, 0
4, 0
5, 0
6, 0
7, 0
8, 1234
9, 1
10, 0
11, 0
12, 0
13, 0
14, 0
15, 0
16, 1234
17, 1
18, 0
...

and continue in such a loop forever, printing the buffer over and over again. This happens because line 1911 hits an edge case when src.len() == dst_high.len(), which causes head to be placed outside the buffer. This can be fixed by changing that line to:

if dst_high.len() == src_total {
    0
} else {
    original_head + src_total
}

I think that is the only thing that is causing these segfaults.

MaloJaffre added a commit to MaloJaffre/rust that referenced this issue Aug 22, 2018
…onSapin"

This partially reverts commit d5b6b95,
reversing changes made to 6b1ff19.

Fixes rust-lang#53529.
Cc: rust-lang#53564.
MaloJaffre added a commit to MaloJaffre/rust that referenced this issue Aug 22, 2018
bors added a commit that referenced this issue Aug 23, 2018
Fix unsoundness for VecDeque

 See individual commit for more details.

r? @RalfJung.

Fixes #53566, fixes #53529
bors added a commit that referenced this issue Aug 29, 2018
Reoptimize VecDeque::append

~Unfortunately, I don't know if these changes fix the unsoundness mentioned in #53529, so it is still a WIP.
This is also completely untested.
The VecDeque code contains other unsound code, for example [reading uninitialized memory](https://play.rust-lang.org/?gist=6ff47551769af61fd8adc45c44010887&version=nightly&mode=release&edition=2015) (detected by Miri), so I think this code will need a bigger refactor to make it clearer and safer.~

Note: this is based on #53571.
r? @SimonSapin
Cc: #53529 #52553 @yorickpeterse @jonas-schievink @Pazzaz @shepmaster.
yorickpeterse pushed a commit to inko-lang/inko that referenced this issue Feb 19, 2019
This should stop CI from failing until
rust-lang/rust#53529 is resolved in Rust.