Add support to `simd_bitmask` intrinsic #2131
Here is the definition from the intrinsics documentation:

```rust
// truncate integer vector to bitmask
// `fn simd_bitmask(vector) -> unsigned integer` takes a vector of integers and
// returns either an unsigned integer or array of `u8`.
// Every element in the vector becomes a single bit in the returned bitmask.
// If the vector has less than 8 lanes, a u8 is returned with zeroed trailing bits.
// The bit order of the result depends on the byte endianness. LSB-first for little
// endian and MSB-first for big endian.
//
// UB if called on a vector with values other than 0 and -1.
```

From the actual Masks specification, the integer to mask operation is documented as:
Given this specification, I believe that Kani should generate a "concatenation" expression where each operand is an "if" with the condition "lane notequal 0" and "1" and "0" as the two cases. (I still don't understand under which conditions an array is to be produced as the result.)
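For illustration, here is a minimal sketch of that per-lane "if lane != 0 then 1 else 0" idea in plain Rust, assuming a fixed 4-lane vector represented as an ordinary array (the real intrinsic operates on a `repr(simd)` vector type, and the function name is made up):

```rust
fn bitmask_4_lanes(lanes: [i32; 4]) -> u8 {
    let mut mask = 0u8;
    for (i, lane) in lanes.iter().enumerate() {
        // The intrinsic is UB for lane values other than 0 and -1; assert instead.
        assert!(*lane == 0 || *lane == -1);
        if *lane != 0 {
            mask |= 1u8 << i; // LSB-first on little-endian targets
        }
    }
    mask // fewer than 8 lanes, so the trailing bits stay zeroed
}

fn main() {
    assert_eq!(bitmask_4_lanes([-1, 0, -1, 0]), 0b0101);
}
```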
The array will be produced if the input vector has more than 128 lanes. You basically need one bit per lane.
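As a rough sketch of that size rule, one bit per lane rounded up to whole bytes (the helper below is hypothetical, not part of any API):

```rust
// Up to 128 lanes the result fits in an unsigned integer (u8..u128);
// beyond that it has to be an array of u8.
fn bitmask_len_in_bytes(lanes: usize) -> usize {
    (lanes + 7) / 8
}

fn main() {
    assert_eq!(bitmask_len_in_bytes(4), 1);    // u8, trailing bits zeroed
    assert_eq!(bitmask_len_in_bytes(128), 16); // u128
    assert_eq!(bitmask_len_in_bytes(256), 32); // [u8; 32]
}
```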
Kani can do this, but it will be inefficient. CBMC can probably encode operations that result in a single bit, while Kani will have to generate the results for each lane first, then join them into different bytes.
This intrinsic seems to be used by string comparison and hash map / hash set functions such as `insert`.
After talking to @remi-delmas-3000, we decided to implement this on the Kani side since CBMC modeling happens before bitblasting. So we would still need to do this operation byte-wise.
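A minimal sketch of what that byte-wise construction could look like, assuming the lane values have already been read out as plain integers; the helper is illustrative only and is not Kani's actual codegen:

```rust
// Compute one bit per lane, then OR the bits into the right byte. This mirrors
// the per-lane "if lane != 0 then 1 else 0" expressions joined byte by byte.
fn bitmask_bytes(lanes: &[i64]) -> Vec<u8> {
    let mut bytes = vec![0u8; (lanes.len() + 7) / 8];
    for (i, lane) in lanes.iter().enumerate() {
        let bit = u8::from(*lane != 0);  // each lane is expected to be 0 or -1
        bytes[i / 8] |= bit << (i % 8);  // LSB-first within each byte
    }
    bytes
}

fn main() {
    // 9 lanes -> 2 bytes; lanes 0 and 8 are set.
    let lanes = [-1i64, 0, 0, 0, 0, 0, 0, 0, -1];
    assert_eq!(bitmask_bytes(&lanes), vec![0b0000_0001, 0b0000_0001]);
}
```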
I created a basic Rust implementation of simd_bitmask that we could potentially use as a stub for this. However, I bumped into an issue while trying to transmute a SIMD structure. I created rust-lang/rust#113465 to capture the issue. It's not clear if this issue will affect the verification use flow since we will hook the implementation inside the
This work is also blocked by #2590
You can't just use tuple access. You instead have to replicate our insane pattern for a legal read of a Simd type, sorry:

```rust
/// Converts a SIMD vector to an array.
#[inline]
pub const fn to_array(self) -> [T; N] {
    let mut tmp = core::mem::MaybeUninit::uninit();
    // SAFETY: writing to `tmp` is safe and initializes it.
    //
    // FIXME: We currently use a pointer store instead of `transmute_copy` because `repr(simd)`
    // results in padding for non-power-of-2 vectors (so vectors are larger than arrays).
    //
    // NOTE: This deliberately doesn't just use `self.0`, see the comment
    // on the struct definition for details.
    unsafe {
        self.store(tmp.as_mut_ptr());
        tmp.assume_init()
    }
}
```
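For reference, a small usage sketch of that pattern (nightly-only, behind the `portable_simd` feature; the lane values are arbitrary):

```rust
#![feature(portable_simd)]
use core::simd::Simd;

fn main() {
    // Convert the SIMD value to a plain array and read lanes from that,
    // instead of touching the inner field of the repr(simd) struct.
    let v = Simd::<i32, 4>::from_array([0, -1, 0, -1]);
    let lanes: [i32; 4] = v.to_array();
    assert_eq!(lanes[1], -1);
}
```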
Thanks @workingjubilee! I think rust-lang/portable-simd#339 also has another workaround that seems to work:

```rust
pub const fn as_array(&self) -> &[T; N] {
    // SAFETY: `Simd<T, N>` is just an overaligned `[T; N]` with
    // potential padding at the end, so pointer casting to a
    // `&[T; N]` is safe.
    //
    // NOTE: This deliberately doesn't just use `&self.0`, see the comment
    // on the struct definition for details.
    unsafe { &*(self as *const Self as *const [T; N]) }
}
```
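For what it's worth, a hypothetical sketch of how a Rust-level model of the intrinsic could sit on top of `as_array`, here hard-coded to an 8-lane `i8` mask (nightly-only, `portable_simd` feature; the function name is made up and this is not the actual Kani hook):

```rust
#![feature(portable_simd)]
use core::simd::Simd;

fn simd_bitmask_u8(v: &Simd<i8, 8>) -> u8 {
    let mut mask = 0u8;
    for (i, lane) in v.as_array().iter().enumerate() {
        // Each lane is expected to be 0 (false) or -1 (true).
        if *lane != 0 {
            mask |= 1u8 << i; // LSB-first on little-endian targets
        }
    }
    mask
}

fn main() {
    let v = Simd::from_array([-1, 0, 0, -1, 0, 0, 0, -1]);
    assert_eq!(simd_bitmask_u8(&v), 0b1000_1001);
}
```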
Requested feature: Add support to the `simd_bitmask` intrinsic.
Use case: This intrinsic is commonly found during code generation of the top 100 crates. Harnesses that exercise this intrinsic will fail due to the unimplemented feature.
Link to relevant documentation (Rust reference, Nomicon, RFC):