Tracking Issue for Iterator::next_chunk
#98326
Comments
Should this method support …
I'm wondering about the intended uses of this API. If it is for performance (autovectorization), then it seems fairly brittle. I just tried this benchmark:

```rust
// Requires nightly: #![feature(test, iter_next_chunk)], extern crate test,
// and use test::{black_box, Bencher};
#[bench]
fn bench_next_chunk(b: &mut Bencher) {
    let v = vec![13u8; 2048];
    b.iter(|| {
        const CHUNK: usize = 8;
        let mut sum = [0u32; CHUNK];
        let mut iter = black_box(v.clone()).into_iter();
        // Any trailing partial chunk (the Err case) is dropped here.
        while let Ok(chunk) = iter.next_chunk::<CHUNK>() {
            for i in 0..CHUNK {
                sum[i] += chunk[i] as u32;
            }
        }
        sum
    })
}
```

and it took …. And that's with direct, linear memory access on a ….

Another limitation is that it can't be used in a for-in loop or chained further with other iterator adapters, an issue #92393 didn't have.
This is probably an example of a case where you should first use …. And the change into and out of SIMD form will have a cost, so you'll want to ensure you're doing enough work inside each loop pass, or it might exceed the cost of the scalar-only code.
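As a point of comparison (my own sketch, not from the thread), the same byte-summing workload over linear memory can use the stable `slice::chunks_exact`, which the compiler typically autovectorizes well; `chunks_exact` and the lane-sum layout here are illustrative choices, not part of the API under discussion:

```rust
fn main() {
    let v = vec![13u8; 2048];
    const CHUNK: usize = 8;
    let mut sum = [0u32; CHUNK];
    // chunks_exact yields &[u8] slices of exactly CHUNK elements;
    // any shorter remainder is available via .remainder().
    for chunk in v.chunks_exact(CHUNK) {
        for i in 0..CHUNK {
            sum[i] += chunk[i] as u32;
        }
    }
    // 2048 / 8 = 256 full chunks, so each lane sums 256 * 13 = 3328.
    println!("{:?}", sum); // [3328, 3328, 3328, 3328, 3328, 3328, 3328, 3328]
}
```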
Yeah, that was just an example to illustrate that the default implementation is suboptimal and that we'll need optimized implementations on most linear-memory sources and length-preserving adapters.
Mm, yeah. Myself, I would imagine using this mostly on iterators that aren't linear memory sources, where "the point" would be to have a clean and unified way to turn some crazy data source into simple SIMD chunks.
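To illustrate that use case, here is a sketch of my own (the filtered-and-mapped source and the lane-sum loop are hypothetical examples): gathering fixed-size chunks from a non-linear iterator. Since `next_chunk` is nightly-only, this emulates it on stable with `by_ref().take(N)` plus the `TryFrom<Vec<T>> for [T; N]` conversion:

```rust
fn main() {
    // Hypothetical stand-in for a "crazy" non-linear source:
    // filtered, then transformed values.
    let mut src = (0u32..100).filter(|x| x % 3 != 0).map(|x| x * x);
    let mut lane_sums = [0u32; 4];
    // Gather fixed-size chunks by hand; note that this sketch
    // silently drops any trailing partial chunk.
    loop {
        let chunk: Vec<u32> = src.by_ref().take(4).collect();
        // The conversion fails (and we stop) unless exactly 4 items arrived.
        let Ok(arr) = <[u32; 4]>::try_from(chunk) else { break };
        for i in 0..4 {
            lane_sums[i] += arr[i];
        }
    }
    println!("{:?}", lane_sums);
}
```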
Might be good to align the naming of this function with …. Or ….
So I'm finding myself caring less and less about the actual naming. I've wanted this feature at least three times in the last year and had to write my own version to work around its absence. Can we just pick something and get the feature stabilized?
I believe the current implementation also suffers from this compiler optimization issue: #110734. Godbolt: https://godbolt.org/z/Te5Wo9eh7
I'm not sure what that issue has to do with ….
Let's get this stabilized. Here are my answers to the unresolved questions:

On adding .next_chunk_back to DoubleEndedIterator: probably, but that should be under a separate feature and not block this one.

On N = 0: it should be allowed and return a zero-length array without advancing the iterator. There's no good reason to forbid it.
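To make the proposed N = 0 behavior concrete, here is a tiny stand-alone sketch (`take_zero` is a hypothetical helper, not the std API): a zero-length chunk never needs to call `next`, so the iterator is left untouched:

```rust
// Hypothetical helper modeling "N = 0 returns an empty array
// without advancing the iterator".
fn take_zero<I: Iterator>(_iter: &mut I) -> [I::Item; 0] {
    [] // no elements are needed, so the iterator is never touched
}

fn main() {
    let mut it = [1, 2, 3].into_iter();
    let chunk: [i32; 0] = take_zero(&mut it);
    assert_eq!(chunk.len(), 0);
    assert_eq!(it.next(), Some(1)); // iterator was not advanced
}
```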
For ….
I disagree. For those, it's necessary by definition. There's no reason to apply the same logic here besides, in my opinion, a misguided desire for consistency. "Doing something about it" would pretty much require const expressions, just like all the other cases, needlessly delaying stabilization.
Well, it would prevent implementations of ….
I can see that maybe being the case down the line unless we add something like ….

Edit: actually, the above does NOT work and would need some kind of …. It's possible to achieve a similar thing via specialization, but it's not ideal.
@rust-lang/libs Is there anything blocking a stabilization pull request? I'm happy to open one.
I think for the name …. If things are otherwise blocked on the handling of ….
Whether a chunk is a slice or an array is already inconsistent. E.g., slices already have ….
That would block the option of adding a ….
Fair point, though I don't think any of these methods have been stabilized, unless I'm mistaken. Actually, upon reviewing this post, I see that …. In any case, the confusion I point to has to do with readability. With clear and distinct naming, you don't have to stop and think through a logical argument like the one you suggest when scanning the code. I just don't see any downside in using a few more characters to avoid the ambiguity. But I also don't think we need to bikeshed this further.
Thanks for pointing this out; that's what I was missing.
Before making this stable, it would be useful to improve the documentation so that it provides an example with a loop. Currently, the examples make a fixed number of calls to next_chunk; I would think the more common use case is operating on input of unknown length. https://doc.rust-lang.org/std/iter/trait.Iterator.html#method.next_chunk
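A loop-shaped example like the one requested might look as follows. This is my own sketch: since `next_chunk` is nightly-only, it uses a hand-rolled `next_chunk` stand-in with the same shape (Ok on a full chunk, Err carrying the shorter remainder):

```rust
// Hand-rolled stand-in for Iterator::next_chunk (stable Rust).
fn next_chunk<I: Iterator, const N: usize>(
    iter: &mut I,
) -> Result<[I::Item; N], Vec<I::Item>> {
    let mut buf = Vec::with_capacity(N);
    for _ in 0..N {
        match iter.next() {
            Some(x) => buf.push(x),
            None => return Err(buf), // fewer than N items were left
        }
    }
    // buf now holds exactly N items, so the conversion cannot fail.
    Ok(buf.try_into().unwrap_or_else(|_| unreachable!()))
}

fn main() {
    let mut iter = 0..10;
    let mut sums = Vec::new();
    loop {
        match next_chunk::<_, 4>(&mut iter) {
            Ok(chunk) => sums.push(chunk.iter().sum::<i32>()),
            Err(rest) => {
                // Handle the trailing partial chunk, then stop.
                // (A real caller might check rest.is_empty() first.)
                sums.push(rest.iter().sum::<i32>());
                break;
            }
        }
    }
    println!("{:?}", sums); // [6, 22, 17]
}
```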
…cuviper — fix Drop items getting leaked in Filter::next_chunk. The optimization only makes sense for non-Drop elements anyway; use the default implementation for items that are Drop instead. It also simplifies the implementation. Fixes rust-lang#126872; tracking issue rust-lang#98326.
Feature gate: #![feature(iter_next_chunk)]

This is a tracking issue for the .next_chunk() method on iterators, which allows you to advance the iterator N elements and return an array [T; N].

Public API

```rust
// On Iterator (nightly, under iter_next_chunk):
fn next_chunk<const N: usize>(
    &mut self,
) -> Result<[Self::Item; N], std::array::IntoIter<Self::Item, N>>
where
    Self: Sized;
```

Steps / History
- Iterator::next_chunk: #93700
- slice_iter.copied().next_chunk(): #103166

Unresolved Questions
- Naming: next_array() or next_array_chunk()?
- Should we add .next_chunk_back to DoubleEndedIterator?
- Should N = 0 be allowed?