
Commit d69265b

ojeda authored and committed
rust: upgrade to Rust 1.77.0
This is the next upgrade to the Rust toolchain, from 1.76.0 to 1.77.0 (i.e. the latest) [1]. See the upgrade policy [2] and the comments on the first upgrade in commit 3ed03f4 ("rust: upgrade to Rust 1.68.2").

The `offset_of` feature (single-field `offset_of!`) that we were using got stabilized in Rust 1.77.0 [3]. Therefore, the only unstable feature still allowed to be used outside the `kernel` crate is now `new_uninit`, though other code to be upstreamed may increase the list. Please see [4] for details.

Rust 1.77.0 merged the `unused_tuple_struct_fields` lint into `dead_code`, thus upgrading it from `allow` to `warn` [5]. In turn, this makes `rustc` complain about `ThisModule`'s pointer field never being read. Thus locally `allow` it for the moment, since we will have users later on (e.g. Binder needs an `as_ptr` method [6]).

Rust 1.77.0 introduces the `--check-cfg` feature [7], for which there is a Call for Testing going on [8]. We were requested to test it and we found it useful [9] -- we will likely enable it in the future.

The vast majority of changes are due to our `alloc` fork being upgraded at once. There are two kinds of changes to be aware of: the ones coming from upstream, which we should follow as closely as possible, and the updates needed in our added fallible APIs to keep them matching the newer infallible APIs coming from upstream.

Instead of taking a look at the diff of this patch, an alternative approach is reviewing a diff of the changes between upstream `alloc` and the kernel's. This allows one to easily inspect the kernel additions only, especially to check whether the fallible methods we already have still match the infallible ones in the new version coming from upstream. Another approach is reviewing the changes introduced in the additions in the kernel fork between the two versions. This is useful to spot potentially unintended changes to our additions.
To apply these approaches, one may follow steps similar to the following to generate a pair of patches that show the differences between upstream Rust and the kernel (for the subset of `alloc` we use) before and after applying this patch:

    # Get the difference with respect to the old version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > old.patch
    git -C linux restore rust/alloc

    # Apply this patch.
    git -C linux am rust-upgrade.patch

    # Get the difference with respect to the new version.
    git -C rust checkout $(linux/scripts/min-tool-version.sh rustc)
    git -C linux ls-tree -r --name-only HEAD -- rust/alloc |
        cut -d/ -f3- |
        grep -Fv README.md |
        xargs -IPATH cp rust/library/alloc/src/PATH linux/rust/alloc/PATH
    git -C linux diff --patch-with-stat --summary -R > new.patch
    git -C linux restore rust/alloc

Now one may check the `new.patch` to take a look at the additions (first approach) or at the difference between those two patches (second approach). For the latter, a side-by-side tool is recommended.

Link: https://github.com/rust-lang/rust/blob/stable/RELEASES.md#version-1770-2024-03-21 [1]
Link: https://rust-for-linux.com/rust-version-policy [2]
Link: rust-lang/rust#118799 [3]
Link: #2 [4]
Link: rust-lang/rust#118297 [5]
Link: https://lore.kernel.org/rust-for-linux/20231101-rust-binder-v1-2-08ba9197f637@google.com/#Z31rust:kernel:lib.rs [6]
Link: https://doc.rust-lang.org/nightly/unstable-book/compiler-flags/check-cfg.html [7]
Link: rust-lang/rfcs#3013 (comment) [8]
Link: rust-lang/rust#82450 (comment) [9]
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
Link: https://lore.kernel.org/r/20240217002717.57507-1-ojeda@kernel.org
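The newly stabilized single-field `offset_of!` mentioned above can be exercised on stable Rust >= 1.77.0 like this (a minimal standalone sketch, not kernel code; the `Packet` struct is hypothetical):

```rust
use core::mem::offset_of;

// With a C-compatible layout, field offsets are deterministic.
#[repr(C)]
struct Packet {
    id: u32,        // bytes 0..4
    len: u16,       // bytes 4..6
    payload: [u8; 4], // bytes 6..10 (no padding needed before a u8 array)
}

fn main() {
    assert_eq!(offset_of!(Packet, id), 0);
    assert_eq!(offset_of!(Packet, len), 4);
    assert_eq!(offset_of!(Packet, payload), 6);
    println!("payload starts at byte {}", offset_of!(Packet, payload));
}
```

Before 1.77.0 this required the unstable `offset_of` feature gate, which is why its stabilization shortens the kernel's unstable-feature list.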
1 parent 0201093 commit d69265b

File tree

11 files changed: +161, -91 lines


Documentation/process/changes.rst

+1-1
@@ -31,7 +31,7 @@ you probably needn't concern yourself with pcmciautils.
 ====================== ===============  ========================================
 GNU C                  5.1              gcc --version
 Clang/LLVM (optional)  13.0.1           clang --version
-Rust (optional)        1.76.0           rustc --version
+Rust (optional)        1.77.0           rustc --version
 bindgen (optional)     0.65.1           bindgen --version
 GNU make               3.82             make --version
 bash                   4.2              bash --version

rust/alloc/alloc.rs

+3-3
@@ -5,7 +5,7 @@
 #![stable(feature = "alloc_module", since = "1.28.0")]
 
 #[cfg(not(test))]
-use core::intrinsics;
+use core::hint;
 
 #[cfg(not(test))]
 use core::ptr::{self, NonNull};
@@ -210,7 +210,7 @@ impl Global {
         let new_size = new_layout.size();
 
         // `realloc` probably checks for `new_size >= old_layout.size()` or something similar.
-        intrinsics::assume(new_size >= old_layout.size());
+        hint::assert_unchecked(new_size >= old_layout.size());
 
         let raw_ptr = realloc(ptr.as_ptr(), old_layout, new_size);
         let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?;
@@ -301,7 +301,7 @@ unsafe impl Allocator for Global {
             // SAFETY: `new_size` is non-zero. Other conditions must be upheld by the caller
             new_size if old_layout.align() == new_layout.align() => unsafe {
                 // `realloc` probably checks for `new_size <= old_layout.size()` or something similar.
-                intrinsics::assume(new_size <= old_layout.size());
+                hint::assert_unchecked(new_size <= old_layout.size());
 
                 let raw_ptr = realloc(ptr.as_ptr(), old_layout, new_size);
                 let ptr = NonNull::new(raw_ptr).ok_or(AllocError)?;
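The `intrinsics::assume` to `hint::assert_unchecked` migration above can be sketched in isolation. Note that `core::hint::assert_unchecked` was still unstable in 1.77 (behind the `hint_assert_unchecked` gate added in `lib.rs`) and was stabilized in a later release; this hypothetical helper assumes a toolchain where it is available:

```rust
use core::hint;

/// Returns the first half of `v`.
fn first_half(v: &[u8]) -> &[u8] {
    let mid = v.len() / 2;
    // SAFETY: `mid <= v.len()` holds by construction; the hint lets the
    // optimizer elide the bounds check without changing behavior.
    unsafe {
        hint::assert_unchecked(mid <= v.len());
    }
    &v[..mid]
}

fn main() {
    assert_eq!(first_half(&[1, 2, 3, 4]), &[1, 2]);
    assert_eq!(first_half(&[9]), &[] as &[u8]);
}
```

Like the old intrinsic, asserting a condition that is actually false is undefined behavior, so the surrounding `// SAFETY:` discipline carries over unchanged.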

rust/alloc/boxed.rs

+2-2
@@ -26,6 +26,7 @@
 //! Creating a recursive data structure:
 //!
 //! ```
+//! # #[allow(dead_code)]
 //! #[derive(Debug)]
 //! enum List<T> {
 //!     Cons(T, Box<List<T>>),
@@ -194,8 +195,7 @@ mod thin;
 #[fundamental]
 #[stable(feature = "rust1", since = "1.0.0")]
 // The declaration of the `Box` struct must be kept in sync with the
-// `alloc::alloc::box_free` function or ICEs will happen. See the comment
-// on `box_free` for more details.
+// compiler or ICEs will happen.
 pub struct Box<
     T: ?Sized,
     #[unstable(feature = "allocator_api", issue = "32838")] A: Allocator = Global,
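The `dead_code` situation described in the commit message can be reproduced in miniature. The struct below is a hypothetical stand-in for the kernel's `ThisModule`: a tuple struct whose pointer field is stored but never read, which Rust >= 1.77 reports under `dead_code` now that it absorbed `unused_tuple_struct_fields`:

```rust
/// Hypothetical stand-in for `ThisModule`: the raw pointer is kept for
/// future users, so the never-read field warning is silenced locally
/// (the kernel does the same until e.g. an `as_ptr` accessor lands).
#[allow(dead_code)]
struct ThisModule(*mut core::ffi::c_void);

fn main() {
    // The value can still be constructed and passed around as usual.
    let m = ThisModule(core::ptr::null_mut());
    let _ = &m;
    println!("ok");
}
```

Removing the `#[allow(dead_code)]` makes `rustc` emit "field `0` is never read" on 1.77+, which is exactly the new warning the commit suppresses.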

rust/alloc/lib.rs

+4-3
@@ -105,7 +105,6 @@
 #![feature(allocator_api)]
 #![feature(array_chunks)]
 #![feature(array_into_iter_constructors)]
-#![feature(array_methods)]
 #![feature(array_windows)]
 #![feature(ascii_char)]
 #![feature(assert_matches)]
@@ -122,7 +121,6 @@
 #![feature(const_size_of_val)]
 #![feature(const_waker)]
 #![feature(core_intrinsics)]
-#![feature(core_panic)]
 #![feature(deprecated_suggestion)]
 #![feature(dispatch_from_dyn)]
 #![feature(error_generic_member_access)]
@@ -132,6 +130,7 @@
 #![feature(fmt_internals)]
 #![feature(fn_traits)]
 #![feature(hasher_prefixfree_extras)]
+#![feature(hint_assert_unchecked)]
 #![feature(inline_const)]
 #![feature(inplace_iteration)]
 #![feature(iter_advance_by)]
@@ -141,6 +140,8 @@
 #![feature(maybe_uninit_slice)]
 #![feature(maybe_uninit_uninit_array)]
 #![feature(maybe_uninit_uninit_array_transpose)]
+#![feature(non_null_convenience)]
+#![feature(panic_internals)]
 #![feature(pattern)]
 #![feature(ptr_internals)]
 #![feature(ptr_metadata)]
@@ -149,7 +150,6 @@
 #![feature(set_ptr_value)]
 #![feature(sized_type_properties)]
 #![feature(slice_from_ptr_range)]
-#![feature(slice_group_by)]
 #![feature(slice_ptr_get)]
 #![feature(slice_ptr_len)]
 #![feature(slice_range)]
@@ -182,6 +182,7 @@
 #![feature(const_ptr_write)]
 #![feature(const_trait_impl)]
 #![feature(const_try)]
+#![feature(decl_macro)]
 #![feature(dropck_eyepatch)]
 #![feature(exclusive_range_pattern)]
 #![feature(fundamental)]

rust/alloc/raw_vec.rs

+6-7
@@ -4,7 +4,7 @@
 
 use core::alloc::LayoutError;
 use core::cmp;
-use core::intrinsics;
+use core::hint;
 use core::mem::{self, ManuallyDrop, MaybeUninit, SizedTypeProperties};
 use core::ptr::{self, NonNull, Unique};
 use core::slice;
@@ -317,7 +317,7 @@ impl<T, A: Allocator> RawVec<T, A> {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Aborts
     ///
@@ -358,7 +358,7 @@ impl<T, A: Allocator> RawVec<T, A> {
        }
        unsafe {
            // Inform the optimizer that the reservation has succeeded or wasn't needed
-            core::intrinsics::assume(!self.needs_to_grow(len, additional));
+            hint::assert_unchecked(!self.needs_to_grow(len, additional));
        }
        Ok(())
    }
@@ -381,7 +381,7 @@ impl<T, A: Allocator> RawVec<T, A> {
     ///
     /// # Panics
     ///
-    /// Panics if the new capacity exceeds `isize::MAX` bytes.
+    /// Panics if the new capacity exceeds `isize::MAX` _bytes_.
     ///
     /// # Aborts
     ///
@@ -402,7 +402,7 @@ impl<T, A: Allocator> RawVec<T, A> {
        }
        unsafe {
            // Inform the optimizer that the reservation has succeeded or wasn't needed
-            core::intrinsics::assume(!self.needs_to_grow(len, additional));
+            hint::assert_unchecked(!self.needs_to_grow(len, additional));
        }
        Ok(())
    }
@@ -553,7 +553,7 @@ where
        debug_assert_eq!(old_layout.align(), new_layout.align());
        unsafe {
            // The allocator checks for alignment equality
-            intrinsics::assume(old_layout.align() == new_layout.align());
+            hint::assert_unchecked(old_layout.align() == new_layout.align());
            alloc.grow(ptr, old_layout, new_layout)
        }
    } else {
@@ -591,7 +591,6 @@ fn handle_reserve(result: Result<(), TryReserveError>) {
 // `> isize::MAX` bytes will surely fail. On 32-bit and 16-bit we need to add
 // an extra guard for this in case we're running on a platform which can use
 // all 4GB in user-space, e.g., PAE or x32.
-
 #[inline]
 fn alloc_guard(alloc_size: usize) -> Result<(), TryReserveError> {
     if usize::BITS < 64 && alloc_size > isize::MAX as usize {
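The `alloc_guard` logic visible in the last hunk can be sketched on its own. This simplified version returns a `&str` error instead of `alloc`'s `TryReserveError`, but the size check is the same: on sub-64-bit targets, allocations larger than `isize::MAX` bytes are rejected up front:

```rust
/// Simplified sketch of `raw_vec::alloc_guard`: reject allocation sizes
/// over `isize::MAX` bytes on narrow platforms. On 64-bit targets the
/// `usize::BITS < 64` test is false and the branch compiles away.
fn alloc_guard(alloc_size: usize) -> Result<(), &'static str> {
    if usize::BITS < 64 && alloc_size > isize::MAX as usize {
        Err("capacity overflow")
    } else {
        Ok(())
    }
}

fn main() {
    assert!(alloc_guard(4096).is_ok());
}
```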

rust/alloc/slice.rs

+2-2
@@ -53,14 +53,14 @@ pub use core::slice::{from_mut, from_ref};
 pub use core::slice::{from_mut_ptr_range, from_ptr_range};
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use core::slice::{from_raw_parts, from_raw_parts_mut};
+#[stable(feature = "slice_group_by", since = "1.77.0")]
+pub use core::slice::{ChunkBy, ChunkByMut};
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use core::slice::{Chunks, Windows};
 #[stable(feature = "chunks_exact", since = "1.31.0")]
 pub use core::slice::{ChunksExact, ChunksExactMut};
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use core::slice::{ChunksMut, Split, SplitMut};
-#[unstable(feature = "slice_group_by", issue = "80552")]
-pub use core::slice::{GroupBy, GroupByMut};
 #[stable(feature = "rust1", since = "1.0.0")]
 pub use core::slice::{Iter, IterMut};
 #[stable(feature = "rchunks", since = "1.31.0")]
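The `GroupBy`/`GroupByMut` to `ChunkBy`/`ChunkByMut` re-export swap above reflects the 1.77.0 stabilization of `slice_group_by` under its new name. The stable method itself can be used like this (standalone sketch):

```rust
fn main() {
    // `chunk_by` splits a slice into maximal runs of adjacent elements
    // for which the predicate holds pairwise.
    let data = [1, 1, 2, 2, 2, 3, 1];
    let mut it = data.chunk_by(|a, b| a == b);
    assert_eq!(it.next(), Some(&[1, 1][..]));
    assert_eq!(it.next(), Some(&[2, 2, 2][..]));
    assert_eq!(it.next(), Some(&[3][..]));
    assert_eq!(it.next(), Some(&[1][..]));
    assert_eq!(it.next(), None);
    println!("done");
}
```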

rust/alloc/vec/into_iter.rs

+69-39
@@ -20,6 +20,17 @@ use core::ops::Deref;
 use core::ptr::{self, NonNull};
 use core::slice::{self};
 
+macro non_null {
+    (mut $place:expr, $t:ident) => {{
+        #![allow(unused_unsafe)] // we're sometimes used within an unsafe block
+        unsafe { &mut *(ptr::addr_of_mut!($place) as *mut NonNull<$t>) }
+    }},
+    ($place:expr, $t:ident) => {{
+        #![allow(unused_unsafe)] // we're sometimes used within an unsafe block
+        unsafe { *(ptr::addr_of!($place) as *const NonNull<$t>) }
+    }},
+}
+
 /// An iterator that moves out of a vector.
 ///
 /// This `struct` is created by the `into_iter` method on [`Vec`](super::Vec)
@@ -43,10 +54,12 @@ pub struct IntoIter<
     // the drop impl reconstructs a RawVec from buf, cap and alloc
     // to avoid dropping the allocator twice we need to wrap it into ManuallyDrop
     pub(super) alloc: ManuallyDrop<A>,
-    pub(super) ptr: *const T,
-    pub(super) end: *const T, // If T is a ZST, this is actually ptr+len. This encoding is picked so that
-    // ptr == end is a quick test for the Iterator being empty, that works
-    // for both ZST and non-ZST.
+    pub(super) ptr: NonNull<T>,
+    /// If T is a ZST, this is actually ptr+len. This encoding is picked so that
+    /// ptr == end is a quick test for the Iterator being empty, that works
+    /// for both ZST and non-ZST.
+    /// For non-ZSTs the pointer is treated as `NonNull<T>`
+    pub(super) end: *const T,
 }
 
 #[stable(feature = "vec_intoiter_debug", since = "1.13.0")]
@@ -70,7 +83,7 @@ impl<T, A: Allocator> IntoIter<T, A> {
     /// ```
     #[stable(feature = "vec_into_iter_as_slice", since = "1.15.0")]
     pub fn as_slice(&self) -> &[T] {
-        unsafe { slice::from_raw_parts(self.ptr, self.len()) }
+        unsafe { slice::from_raw_parts(self.ptr.as_ptr(), self.len()) }
     }
 
     /// Returns the remaining items of this iterator as a mutable slice.
@@ -99,7 +112,7 @@ impl<T, A: Allocator> IntoIter<T, A> {
     }
 
     fn as_raw_mut_slice(&mut self) -> *mut [T] {
-        ptr::slice_from_raw_parts_mut(self.ptr as *mut T, self.len())
+        ptr::slice_from_raw_parts_mut(self.ptr.as_ptr(), self.len())
     }
 
     /// Drops remaining elements and relinquishes the backing allocation.
@@ -126,7 +139,7 @@ impl<T, A: Allocator> IntoIter<T, A> {
         // this creates less assembly
         self.cap = 0;
         self.buf = unsafe { NonNull::new_unchecked(RawVec::NEW.ptr()) };
-        self.ptr = self.buf.as_ptr();
+        self.ptr = self.buf;
         self.end = self.buf.as_ptr();
 
         // Dropping the remaining elements can panic, so this needs to be
@@ -138,9 +151,9 @@ impl<T, A: Allocator> IntoIter<T, A> {
 
     /// Forgets to Drop the remaining elements while still allowing the backing allocation to be freed.
     pub(crate) fn forget_remaining_elements(&mut self) {
-        // For th ZST case, it is crucial that we mutate `end` here, not `ptr`.
+        // For the ZST case, it is crucial that we mutate `end` here, not `ptr`.
         // `ptr` must stay aligned, while `end` may be unaligned.
-        self.end = self.ptr;
+        self.end = self.ptr.as_ptr();
     }
 
     #[cfg(not(no_global_oom_handling))]
@@ -162,7 +175,7 @@ impl<T, A: Allocator> IntoIter<T, A> {
             // say that they're all at the beginning of the "allocation".
             0..this.len()
         } else {
-            this.ptr.sub_ptr(buf)..this.end.sub_ptr(buf)
+            this.ptr.sub_ptr(this.buf)..this.end.sub_ptr(buf)
         };
         let cap = this.cap;
         let alloc = ManuallyDrop::take(&mut this.alloc);
@@ -189,37 +202,43 @@ impl<T, A: Allocator> Iterator for IntoIter<T, A> {
 
     #[inline]
     fn next(&mut self) -> Option<T> {
-        if self.ptr == self.end {
-            None
-        } else if T::IS_ZST {
-            // `ptr` has to stay where it is to remain aligned, so we reduce the length by 1 by
-            // reducing the `end`.
-            self.end = self.end.wrapping_byte_sub(1);
-
-            // Make up a value of this ZST.
-            Some(unsafe { mem::zeroed() })
+        if T::IS_ZST {
+            if self.ptr.as_ptr() == self.end as *mut _ {
+                None
+            } else {
+                // `ptr` has to stay where it is to remain aligned, so we reduce the length by 1 by
+                // reducing the `end`.
+                self.end = self.end.wrapping_byte_sub(1);
+
+                // Make up a value of this ZST.
+                Some(unsafe { mem::zeroed() })
+            }
         } else {
-            let old = self.ptr;
-            self.ptr = unsafe { self.ptr.add(1) };
+            if self.ptr == non_null!(self.end, T) {
+                None
+            } else {
+                let old = self.ptr;
+                self.ptr = unsafe { old.add(1) };
 
-            Some(unsafe { ptr::read(old) })
+                Some(unsafe { ptr::read(old.as_ptr()) })
+            }
         }
     }
 
     #[inline]
     fn size_hint(&self) -> (usize, Option<usize>) {
         let exact = if T::IS_ZST {
-            self.end.addr().wrapping_sub(self.ptr.addr())
+            self.end.addr().wrapping_sub(self.ptr.as_ptr().addr())
         } else {
-            unsafe { self.end.sub_ptr(self.ptr) }
+            unsafe { non_null!(self.end, T).sub_ptr(self.ptr) }
         };
         (exact, Some(exact))
     }
 
     #[inline]
     fn advance_by(&mut self, n: usize) -> Result<(), NonZeroUsize> {
         let step_size = self.len().min(n);
-        let to_drop = ptr::slice_from_raw_parts_mut(self.ptr as *mut T, step_size);
+        let to_drop = ptr::slice_from_raw_parts_mut(self.ptr.as_ptr(), step_size);
         if T::IS_ZST {
             // See `next` for why we sub `end` here.
             self.end = self.end.wrapping_byte_sub(step_size);
@@ -261,7 +280,7 @@ impl<T, A: Allocator> Iterator for IntoIter<T, A> {
         // Safety: `len` indicates that this many elements are available and we just checked that
         // it fits into the array.
         unsafe {
-            ptr::copy_nonoverlapping(self.ptr, raw_ary.as_mut_ptr() as *mut T, len);
+            ptr::copy_nonoverlapping(self.ptr.as_ptr(), raw_ary.as_mut_ptr() as *mut T, len);
             self.forget_remaining_elements();
            return Err(array::IntoIter::new_unchecked(raw_ary, 0..len));
         }
@@ -270,7 +289,7 @@ impl<T, A: Allocator> Iterator for IntoIter<T, A> {
         // Safety: `len` is larger than the array size. Copy a fixed amount here to fully initialize
         // the array.
         return unsafe {
-            ptr::copy_nonoverlapping(self.ptr, raw_ary.as_mut_ptr() as *mut T, N);
+            ptr::copy_nonoverlapping(self.ptr.as_ptr(), raw_ary.as_mut_ptr() as *mut T, N);
             self.ptr = self.ptr.add(N);
             Ok(raw_ary.transpose().assume_init())
         };
@@ -288,26 +307,33 @@ impl<T, A: Allocator> Iterator for IntoIter<T, A> {
         // Also note the implementation of `Self: TrustedRandomAccess` requires
         // that `T: Copy` so reading elements from the buffer doesn't invalidate
         // them for `Drop`.
-        unsafe { if T::IS_ZST { mem::zeroed() } else { ptr::read(self.ptr.add(i)) } }
+        unsafe { if T::IS_ZST { mem::zeroed() } else { self.ptr.add(i).read() } }
     }
 }
 
 #[stable(feature = "rust1", since = "1.0.0")]
 impl<T, A: Allocator> DoubleEndedIterator for IntoIter<T, A> {
     #[inline]
     fn next_back(&mut self) -> Option<T> {
-        if self.end == self.ptr {
-            None
-        } else if T::IS_ZST {
-            // See above for why 'ptr.offset' isn't used
-            self.end = self.end.wrapping_byte_sub(1);
-
-            // Make up a value of this ZST.
-            Some(unsafe { mem::zeroed() })
+        if T::IS_ZST {
+            if self.end as *mut _ == self.ptr.as_ptr() {
+                None
+            } else {
+                // See above for why 'ptr.offset' isn't used
+                self.end = self.end.wrapping_byte_sub(1);
+
+                // Make up a value of this ZST.
+                Some(unsafe { mem::zeroed() })
+            }
         } else {
-            self.end = unsafe { self.end.sub(1) };
+            if non_null!(self.end, T) == self.ptr {
+                None
+            } else {
+                let new_end = unsafe { non_null!(self.end, T).sub(1) };
+                *non_null!(mut self.end, T) = new_end;
 
-            Some(unsafe { ptr::read(self.end) })
+                Some(unsafe { ptr::read(new_end.as_ptr()) })
+            }
         }
     }
@@ -333,7 +359,11 @@ impl<T, A: Allocator> DoubleEndedIterator for IntoIter<T, A> {
 #[stable(feature = "rust1", since = "1.0.0")]
 impl<T, A: Allocator> ExactSizeIterator for IntoIter<T, A> {
     fn is_empty(&self) -> bool {
-        self.ptr == self.end
+        if T::IS_ZST {
+            self.ptr.as_ptr() == self.end as *mut _
+        } else {
+            self.ptr == non_null!(self.end, T)
+        }
     }
 }
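The `IntoIter` rework above stores `ptr` as `NonNull<T>` and calls pointer helpers such as `add` and `read` on it directly. A minimal sketch of those helpers follows; note they sat behind the `non_null_convenience` gate (added in `lib.rs` above) in 1.77 and were stabilized in a later release, so this assumes a toolchain where they are stable:

```rust
use core::ptr::NonNull;

fn main() {
    let mut data = [10u32, 20, 30];
    // A pointer to a live array element is never null.
    let base = NonNull::new(data.as_mut_ptr()).expect("array pointer is non-null");
    // NonNull now carries the arithmetic/read helpers the iterator uses,
    // avoiding round-trips through raw `*const T`.
    unsafe {
        let second = base.add(1); // SAFETY: index 1 is in bounds of `data`
        assert_eq!(second.read(), 20);
    }
    println!("{}", data.len());
}
```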
