Using `Option` to get undefined behaviour in a const #95332
Comments
We do not guarantee that all Undefined Behavior is detected, not at run-time and not at compile-time. We do detect some UB, e.g. here: `pub const UNDEFINED: NonZeroU8 = unsafe { NonZeroU8::new_unchecked(0) };`. But exhaustively checking for all UB is super expensive at best. Even Miri, which does a lot more work than CTFE, does not detect all UB. So I consider this not a bug.
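For reference, here is that detected case as a complete snippet (the exact diagnostic wording varies across compiler versions, so take the error text as approximate):

```rust
use std::num::NonZeroU8;

// CTFE validates the *final* value of a const, so an invalid NonZeroU8
// stored directly in a const is rejected with an E0080-style
// "it is undefined behavior to use this value" error.
pub const UNDEFINED: NonZeroU8 = unsafe { NonZeroU8::new_unchecked(0) };

fn main() {}
```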
Miri detects this if the code is shifted to normal runtime eval. Is it worth considering moving this check into CTFE, or does that just open up too many questions?
On the one hand I would like CTFE to catch it; on the other, I'm worried that people might think it'll then catch the runtime version as well (which it won't).
I'm not so sure about that. C++ has set a precedent that const-eval has extra UB-checking powers that normal runtime eval does not have.
As Ralf noted, this can be very expensive. We can benchmark it to see how expensive and make a decision based on that. We can also consider using a lint for these extra checks and only turning them on if the lint is warn/deny (and set it to allow by default for speed). Oh, even funnier idea: clippy can override the const eval query to make it run with validation. This way the extra validation would only run under clippy.
Specifically in the narrow case of `new_unchecked`?
I am suspicious of claims that this check is expensive. Miri is poorly benchmarked; does CTFE have any benchmarks?
Yes, CTFE is benchmarked with the rustc benchmarks. Trying it out is as simple as opening a PR removing the `false` from `enforce_validity` (see the diff below).
It feels like we could just have `debug_assert!` inside the unchecked functions, or would that not be ideal?
@fee1-dead I've had a PR up for the past few months that does just that :) #92686 |
That wouldn't make any difference here though, since the stdlib we ship is compiled without debug assertions.
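For concreteness, here is a sketch of the kind of check being discussed, on a stand-in type rather than the real `NonZeroU8` internals (which differ):

```rust
/// Stand-in type for this sketch; the real stdlib implementation differs.
#[derive(Clone, Copy)]
pub struct MyNonZeroU8(u8);

impl MyNonZeroU8 {
    /// # Safety
    /// `n` must be non-zero.
    pub const unsafe fn new_unchecked(n: u8) -> Self {
        // Compiled in only when debug assertions are enabled; the shipped
        // stdlib is built without them, hence the caveat above.
        debug_assert!(n != 0, "new_unchecked called with zero");
        Self(n)
    }
}

fn main() {
    // Fine: the safety contract is upheld.
    let one = unsafe { MyNonZeroU8::new_unchecked(1) };
    let _ = one;
}
```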
*sigh* Of course the bigger problem is that all those checks are disabled in const eval. And I did that because I assumed, while writing it, that const eval would catch all these things. Around and around we go...
diff --git a/compiler/rustc_const_eval/src/interpret/machine.rs b/compiler/rustc_const_eval/src/interpret/machine.rs
index 7d75c84d108..121121cdab2 100644
--- a/compiler/rustc_const_eval/src/interpret/machine.rs
+++ b/compiler/rustc_const_eval/src/interpret/machine.rs
@@ -422,7 +422,7 @@ fn force_int_for_alignment_check(_memory_extra: &Self::MemoryExtra) -> bool {
#[inline(always)]
fn enforce_validity(_ecx: &InterpCx<$mir, $tcx, Self>) -> bool {
- false // for now, we don't enforce validity
+ true
}
#[inline(always)]
Then an unconditional check using `const_eval_select` would suffice?
Yes, that would be an option.
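A sketch of that idea; `const_eval_select` is a nightly-only intrinsic whose exact signature and feature gates have shifted over time, so treat the details as illustrative:

```rust
#![feature(core_intrinsics, const_eval_select)]
#![allow(internal_features)]

use std::intrinsics::const_eval_select;

// Hypothetical helper (not an actual stdlib function): the precondition
// is checked unconditionally during const evaluation, while the runtime
// path stays unchecked.
const fn precondition_check(n: u8) {
    const fn compile_time(n: u8) {
        assert!(n != 0, "new_unchecked requires a non-zero value");
    }
    fn run_time(_n: u8) {
        // Deliberately empty: the runtime fast path stays unchecked.
    }
    const_eval_select((n,), compile_time, run_time);
}

const _OK: () = precondition_check(1); // evaluates fine
// const _BAD: () = precondition_check(0); // would fail to compile

fn main() {}
```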
The compiler contains two instances of the interpreter: the constant evaluator and the constant propagator. You'll want to only enable validation for the constant evaluator, not for constant propagation; that's what @oli-obk described. But I think even that is going to be way too slow. If we truly want to do something here, we should have a type-based heuristic to only perform validation on certain 'interesting' types, like scalars with a non-total validity range. But that might mean even more people expect us to actually catch all UB...
I know nothing about the const evaluator, but can a single extra check against zero, in limited situations, really be that much of a slowdown? And if so, could it be gated on some Cargo key, like only doing the extra checks if debug_assertions are enabled?
The check we are talking about -- the check Miri performs -- is to verify, on every single assignment, that the value being assigned indeed satisfies the validity invariant of the type the assignment happens at. That's why I mentioned a type-based heuristic to only check in certain situations, e.g. only assignments at types like `NonZeroU8`.
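To illustrate what such a type-based filter could look like, here is a toy model; the layout representation and names are invented for this sketch and are not rustc internals:

```rust
// Toy model of the type-based heuristic discussed above.
#[derive(Clone, Copy)]
struct ScalarLayout {
    size_bits: u32,
    valid_start: u128, // first valid bit pattern (inclusive)
    valid_end: u128,   // last valid bit pattern (inclusive)
}

/// A scalar is "interesting" for validation if its valid range does not
/// cover every bit pattern, e.g. NonZeroU8 (1..=255) or bool (0..=1).
fn has_non_total_validity(s: ScalarLayout) -> bool {
    let max = if s.size_bits >= 128 {
        u128::MAX
    } else {
        (1u128 << s.size_bits) - 1
    };
    !(s.valid_start == 0 && s.valid_end == max)
}

fn main() {
    let non_zero_u8 = ScalarLayout { size_bits: 8, valid_start: 1, valid_end: 255 };
    let plain_u8 = ScalarLayout { size_bits: 8, valid_start: 0, valid_end: 255 };
    assert!(has_non_total_validity(non_zero_u8)); // would be validated
    assert!(!has_non_total_validity(plain_u8)); // would be skipped
}
```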
Just so everyone knows, I'm working on the suggested PR above, but this breaks a lot of UI tests that
I've implemented my ideas in #95377. The general idea is to try to detect when we are executing unsafe code, and toggle on checking for that code. The implementation is not clean; I'm fiddling with another iteration locally that will make things a lot more readable. That PR detects the UB that was reported here, along with the test for #64506, which I think is pretty cool. The naive approach reports:
My best work so far is:
My biggest problem now is that I can't figure out where those ~1.9% of instructions are coming from. My best guess at the moment is that simply having the validation code present in the CTFE code, instead of compiled out because it's behind an always-false check, is perturbing inlining decisions. So on the one hand, I'm tempted to make some compensatory optimizations to bring the regression down. But I don't know if that's cheating. We are turning on more checks; it would be kind of crazy if that didn't cause a slowdown. Are compensatory optimizations of interest here, such as to get the inlining back to what it was? What is considered an acceptable performance regression for merging a PR that turns on some validation in CTFE?
Could you describe at a high level when the extra check kicks in? EDIT: Oh, I can't read, you said that: inside unsafe code. I am not sure if we want to do something ad-hoc that catches this UB but still leaves many similar cases of UB undetected.
Unfortunately, I don't think we have any benchmarks that actually execute unsafe code in CTFE. We also agree that this check is not water-tight, right? There are ways to 'delay' the UB until after the unsafe block.
Just to make sure we understand, the point of this is not to catch all UB in const eval, or all invalid data. The idea is to selectively turn on validation where it's likely to be particularly helpful.
The validation that CTFE does is already flaky with respect to how code is factored, so I am not convinced that this patch is any worse in this particular regard.
Sure we do. Basically all serious software executes unsafe code.
While building the
I picked
Hm, fair. Our stress test benchmark doesn't do this though, I think.
The worst-case scenario appears to be this sort of shenanigans, which produces a ~58% regression (at a guess the 8% is related to deciding to enable the checks):

diff --git a/collector/benchmarks/ctfe-stress-4/src/lib.rs b/collector/benchmarks/ctfe-stress-4/src/lib.rs
index 074a18c5..1119b3b1 100644
--- a/collector/benchmarks/ctfe-stress-4/src/lib.rs
+++ b/collector/benchmarks/ctfe-stress-4/src/lib.rs
@@ -11,10 +11,10 @@ use std::mem::MaybeUninit;
macro_rules! const_repeat {
// Base case: Use 16 at the end to avoid function calls at the leafs as much as possible.
([16] $e: expr, $T: ty) => {{
- $e; $e; $e; $e;
- $e; $e; $e; $e;
- $e; $e; $e; $e;
- $e; $e; $e; $e
+ unsafe{$e}; unsafe{$e}; unsafe{$e}; unsafe{$e};
+ unsafe{$e}; unsafe{$e}; unsafe{$e}; unsafe{$e};
+ unsafe{$e}; unsafe{$e}; unsafe{$e}; unsafe{$e};
+ unsafe{$e}; unsafe{$e}; unsafe{$e}; unsafe{$e}
}};
([1] $e: expr, $T: ty) => {{
$e

Simply putting the whole benchmark in an unsafe block
So the check interacts badly with macros, or so? Because that should still all be unsafe code then, right? |
I suspect the check interacts badly with functions. It's extremely local and sloppy, in the name of doing little work when there is no unsafe code. It could definitely be tightened up if you or Oli have suggestions on a more principled but 🤔 appropriately thrifty way to do this. Whenever we call a function, we check whether its body contains unsafe code; if we call a function that does, validation is switched on. We also defensively assume that MIR from outside the current crate contains unsafe code.
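A toy model of that heuristic as described (hypothetical code, not the PR's actual implementation):

```rust
// Each interpreter frame records whether its body contains unsafe code
// and whether its MIR comes from the crate currently being compiled.
struct Frame {
    contains_unsafe: bool,
    is_local_mir: bool,
}

/// Validation is on while any frame on the call stack either contains
/// unsafe code or comes from another crate (whose MIR we defensively
/// assume may contain unsafe code).
fn validation_enabled(stack: &[Frame]) -> bool {
    stack.iter().any(|f| f.contains_unsafe || !f.is_local_mir)
}

fn main() {
    let all_safe = [Frame { contains_unsafe: false, is_local_mir: true }];
    let calls_foreign = [
        Frame { contains_unsafe: false, is_local_mir: true },
        Frame { contains_unsafe: false, is_local_mir: false },
    ];
    assert!(!validation_enabled(&all_safe));
    assert!(validation_enabled(&calls_foreign));
}
```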
I tried this code:
And assumed the compiler would reject it, but there were no compilation errors.
Note that the following also compiled:
I am assuming this is caused by layout optimizations?
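The original snippets were lost in extraction; based on the issue title and the layout-optimization remark, a reproduction in the same spirit might look like this (hypothetical reconstruction, not the reporter's exact code):

```rust
use std::num::NonZeroU8;

// Option<NonZeroU8> uses the niche at 0, so None is represented by the
// byte 0. Transmuting None therefore materializes a NonZeroU8 with the
// value 0, which is undefined behaviour; the const then launders it into
// a perfectly valid u8, so validation of the final value sees nothing.
const LAUNDERED: u8 = {
    let none: Option<NonZeroU8> = None;
    let zero: NonZeroU8 = unsafe { std::mem::transmute(none) };
    zero.get()
};

fn main() {
    println!("{}", LAUNDERED); // prints 0, smuggled through a NonZeroU8
}
```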
Meta
rustc --version --verbose:

rustc +nightly --version --verbose: