Rollup of 8 pull requests #39735

Closed · wants to merge 21 commits

Conversation

Stjepan Glavina and others added 21 commits February 8, 2017 12:19
This way we can call `cmp` instead of `partial_cmp` in the loop,
sparing the compiler some of the work of optimizing the `Option`s away.

PR rust-lang#39538 introduced a regression where sorting slices suddenly became
slower, since `slice1.lt(slice2)` was much slower than
`slice1.cmp(slice2) == Less`. This problem is now fixed.

To verify, I benchmarked this simple program:
```rust
fn main() {
    let mut v = (0..2_000_000).map(|x| x * x * x * 18913515181).map(|x| vec![x, x ^ 3137831591]).collect::<Vec<_>>();
    v.sort();
}
```

Before this PR, it would take 0.95 sec, and now it takes 0.58 sec.
I also tried changing the `is_less` lambda to use `cmp` and
`partial_cmp`. Now all three versions (`lt`, `cmp`, `partial_cmp`) are
equally performant for sorting slices: all of them take 0.58 sec on the
benchmark.

Right now we just run `shasum` on an absolute path, but the shasum files
should only contain file names, so let's use `current_dir` and pass just the
file name so that only the file name is emitted.

The previous fix contained an error where `toml::encode` returned a runtime
error, so this version just constructs a literal `toml::Value`.

I spent a good chunk of time tracking down a buffer overrun bug that
resulted from me mistakenly thinking that `reserve` was based on the
current capacity rather than the current length. It would be helpful if this
were called out explicitly in the docs.

The `Iterator.nth()` documentation says "Note that all preceding elements will be consumed". From that I assumed that the preceding elements would be the *only* ones consumed, but in fact the returned element is consumed as well.

The way I read the documentation, I assumed that `nth(0)` would not discard anything (as there are 0 preceding elements), so I added a sentence clarifying that it does. I also rephrased it to avoid the stunted "i.e." phrasing.

Fix a misleading statement in `Iterator.nth()`

The `Iterator.nth()` documentation says "Note that all preceding elements will be consumed". From that I assumed that the preceding elements would be the *only* ones consumed, but in fact the returned element is consumed as well.

The way I read the documentation, I assumed that `nth(0)` would not discard anything (there are 0 preceding elements, and maybe it just peeks at the start of the iterator somehow), so I added a sentence clarifying that it does. I also rephrased it to avoid the stunted "i.e." phrasing.
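
To make the behaviour concrete, here is a small illustrative snippet (not part of the patch) showing that `nth(0)` consumes the element it returns:

```rust
fn main() {
    let xs = [1, 2, 3, 4];

    let mut iter = xs.iter();
    // `nth(0)` returns the first element *and* consumes it...
    assert_eq!(iter.nth(0), Some(&1));
    // ...so the next call already yields the second element.
    assert_eq!(iter.next(), Some(&2));

    let mut iter = xs.iter();
    // `nth(1)` consumes the skipped element and the returned one.
    assert_eq!(iter.nth(1), Some(&2));
    assert_eq!(iter.next(), Some(&3));
}
```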

Specialize `PartialOrd<A> for [A] where A: Ord`

This way we can call `cmp` instead of `partial_cmp` in the loop, sparing the compiler some of the work of optimizing the `Option`s away.

PR rust-lang#39538 introduced a regression where sorting slices suddenly became slower, since `slice1.lt(slice2)` was much slower than `slice1.cmp(slice2) == Less`. This problem is now fixed.

To verify, I benchmarked this simple program:
```rust
fn main() {
    let mut v = (0..2_000_000).map(|x| x * x * x * 18913515181).map(|x| vec![x, x ^ 3137831591]).collect::<Vec<_>>();
    v.sort();
}
```

Before this PR, it would take 0.95 sec, and now it takes 0.58 sec.
I also tried changing the `is_less` lambda to use `cmp` and `partial_cmp`. Now all three versions (`lt`, `cmp`, `partial_cmp`) are equally performant for sorting slices: all of them take 0.58 sec on the benchmark.

Tangentially, as soon as we get `default impl`, it might be a good idea to implement a blanket default impl for `lt`, `gt`, `le`, `ge` in terms of `cmp` whenever possible. Today, those four functions by default are only implemented in terms of `partial_cmp`.
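
For illustration, here is a rough standalone sketch (not the actual libcore implementation) of the kind of comparison loop the specialization enables: with `A: Ord`, the loop works on `Ordering` directly and never touches `Option<Ordering>`.

```rust
use std::cmp::Ordering;

// Illustrative only: compare two slices element by element via `Ord::cmp`,
// so no `Option<Ordering>` appears in the hot loop.
fn cmp_slices<A: Ord>(a: &[A], b: &[A]) -> Ordering {
    let len = a.len().min(b.len());
    for (x, y) in a[..len].iter().zip(&b[..len]) {
        match x.cmp(y) {
            Ordering::Equal => continue,
            non_eq => return non_eq,
        }
    }
    // All compared elements were equal; the shorter slice sorts first.
    a.len().cmp(&b.len())
}

fn main() {
    assert_eq!(cmp_slices(&[1, 2, 3], &[1, 2, 4]), Ordering::Less);
    assert_eq!(cmp_slices(&[1, 2], &[1, 2, 0]), Ordering::Less);
    assert_eq!(cmp_slices(&[3, 1], &[2, 9, 9]), Ordering::Greater);
}
```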

r? @alexcrichton

Don't include directory names in shasums

Right now we just run `shasum` on an absolute path, but the shasum files
should only contain file names, so let's use `current_dir` and pass just the
file name so that only the file name is emitted.
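
A minimal sketch of the idea (a hypothetical helper, not the actual build code): run `shasum` with `current_dir` set to the file's parent directory and pass only the file name, so the emitted line contains just the file name.

```rust
use std::path::Path;
use std::process::{Command, Output};

// Hypothetical helper illustrating the approach.
fn shasum_of(file: &Path) -> std::io::Result<Output> {
    let dir = file.parent().expect("file has a parent directory");
    let name = file.file_name().expect("file has a file name");
    Command::new("shasum")
        .arg("-a")
        .arg("256")
        .arg(name)          // relative name only, so no directory in the output
        .current_dir(dir)   // run from the file's directory
        .output()
}

fn main() -> std::io::Result<()> {
    // Placeholder path for illustration.
    let out = shasum_of(Path::new("/tmp/rustc-nightly-x86_64.tar.gz"))?;
    print!("{}", String::from_utf8_lossy(&out.stdout));
    Ok(())
}
```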
Actually fix manifest generation

The previous fix contained an error where `toml::encode` returned a runtime
error, so this version just constructs a literal `toml::Value`.
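
Roughly, the idea is to build the manifest as a literal `toml::Value` tree by hand instead of relying on automatic encoding. A hedged sketch using today's `toml` crate API (not necessarily the API the tool used in 2017), with made-up field names:

```rust
// Requires the `toml` crate as a dependency. The keys below are invented for
// illustration; the real manifest schema is defined by the build tool.
use toml::value::Table;
use toml::Value;

fn manifest_fragment() -> Value {
    let mut pkg = Table::new();
    pkg.insert("version".to_string(), Value::String("1.15.1".to_string()));
    pkg.insert("available".to_string(), Value::Boolean(true));

    let mut root = Table::new();
    root.insert("rust".to_string(), Value::Table(pkg));
    Value::Table(root)
}

fn main() {
    // Serialize the hand-built value tree to TOML text.
    println!("{}", toml::to_string(&manifest_fragment()).unwrap());
}
```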
remove wrong packed struct test

This UB was found by running the test under [Miri](https://github.com/solson/miri) which rejects these unsafe unaligned loads. 😄
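
For context, a small sketch (not the removed test itself) of the kind of unaligned access involved: taking a reference to a field of a `#[repr(packed)]` struct whose type needs alignment greater than 1 is undefined behaviour, whereas copying the field out or doing a raw unaligned read is fine.

```rust
#[repr(packed)]
struct Packed {
    a: u8,
    b: u32, // only 1-byte aligned inside the packed struct
}

fn main() {
    let p = Packed { a: 1, b: 2 };

    // `let r = &p.b;` would create a misaligned `&u32`, which is UB
    // (and is exactly what Miri rejects).
    let b = p.b; // copying the field out by value is fine
    let b_raw = unsafe { std::ptr::addr_of!(p.b).read_unaligned() };
    assert_eq!(b, b_raw);

    let a = p.a;
    assert_eq!(a, 1);
}
```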

Explicitly mention that `Vec::reserve` is based on len not capacity

I spent a good chunk of time tracking down a buffer overrun bug that
resulted from me mistakenly thinking that `reserve` was based on the
current capacity rather than the current length. It would be helpful if this
were called out explicitly in the docs.
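
To illustrate the point (not part of the doc change itself): `Vec::reserve(additional)` guarantees capacity for at least `len() + additional` elements, measured from the current length, not the current capacity.

```rust
fn main() {
    let mut v = vec![1u8, 2, 3]; // len = 3

    // Guarantees capacity for len + 10 = 13 elements, regardless of the
    // capacity the vector already had.
    v.reserve(10);
    assert!(v.capacity() >= 13);

    // A second identical call is measured against the current *length*
    // (still 3), not the capacity obtained above, so it may well be a no-op.
    v.reserve(10);
    assert!(v.capacity() >= 13);

    println!("len = {}, capacity = {}", v.len(), v.capacity());
}
```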
Update 1.15.1 relnotes

Matching what is on stable.

Updated nightly book with installing nightly instructions
@rust-highfive (Collaborator)

Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @steveklabnik (or someone else) soon.

If any changes to this PR are deemed necessary, please add them as extra commits. This ensures that the reviewer can see what has changed since they last reviewed the code. Due to the way GitHub handles out-of-date commits, this should also make it reasonably obvious what issues have or haven't been addressed. Large or tricky changes may require several passes of review and changes.

Please see the contribution instructions for more information.

@frewsxcv deleted the rollup branch February 11, 2017 04:41
@Centril added the rollup (A PR which is a rollup) label Oct 24, 2019