
Conversation


@a1phyr a1phyr commented Sep 28, 2025

This moves a branch and more code into the cold method `finish_grow`, which means that less code is inlined at each `try_reserve` call site. Additionally, it reduces the number of parameters, so they can all be passed in registers.

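The change described above is an instance of a general pattern: keep the hot path down to a cheap capacity check and move the slow growth logic into a single `#[cold]`, non-inlined function, so every call site inlines only the check. The sketch below is a hypothetical toy (`TinyBuf` and its methods are stand-ins, not std's actual `RawVec` implementation), included only to illustrate the pattern.

```rust
/// Toy growable byte buffer illustrating cold-path outlining.
/// Not std's `RawVec`; the names and growth policy here are invented.
struct TinyBuf {
    data: Box<[u8]>, // capacity == data.len()
    len: usize,
}

impl TinyBuf {
    fn new() -> Self {
        TinyBuf { data: Vec::new().into_boxed_slice(), len: 0 }
    }

    /// Hot path: only a checked add and one comparison get inlined
    /// at each call site; everything else lives in `finish_grow`.
    #[inline]
    fn try_reserve(&mut self, additional: usize) -> Result<(), ()> {
        let needed = self.len.checked_add(additional).ok_or(())?;
        if needed <= self.data.len() {
            return Ok(());
        }
        self.finish_grow(needed)
    }

    /// Cold path: `#[cold]` + `#[inline(never)]` keep the growth
    /// policy, allocation, and copy out of line.
    #[cold]
    #[inline(never)]
    fn finish_grow(&mut self, needed: usize) -> Result<(), ()> {
        // Grow at least geometrically, with a small minimum capacity.
        let new_cap = needed.max(self.data.len() * 2).max(4);
        let mut new = vec![0u8; new_cap].into_boxed_slice();
        new[..self.len].copy_from_slice(&self.data[..self.len]);
        self.data = new;
        Ok(())
    }
}

fn main() {
    let mut buf = TinyBuf::new();
    assert!(buf.try_reserve(10).is_ok());
    assert!(buf.data.len() >= 10);
    buf.len = 10;
    assert!(buf.try_reserve(1).is_ok()); // triggers another grow
    assert!(buf.data.len() >= 11);
}
```

Fewer parameters on the cold function also matter: once the argument count fits the platform's register-passing convention, the call into the outlined path avoids spilling arguments to the stack.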
@rustbot rustbot added S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-libs Relevant to the library team, which will review and decide on the PR/issue. labels Sep 28, 2025

rustbot commented Sep 28, 2025

r? @Mark-Simulacrum

rustbot has assigned @Mark-Simulacrum.
They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.

Use r? to explicitly pick a reviewer


Kobzol commented Sep 28, 2025

@bors try @rust-timer queue


rust-bors bot added a commit that referenced this pull request Sep 28, 2025
Move more code to `RawVec::finish_grow`
@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Sep 28, 2025

rust-bors bot commented Sep 28, 2025

☀️ Try build successful (CI)
Build commit: 006e85f (006e85fa61b49abee68e8f45daa550ffd4a90c84, parent: 4ffeda10e10d4fa0c8edbd0dd9642d8ae7d3e66e)


@rust-timer
Collaborator

Finished benchmarking commit (006e85f): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request means it may be perf-sensitive – we'll automatically label it not fit for rolling up. You can override this, but we strongly advise not to, due to possible changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please do so in sufficient writing along with @rustbot label: +perf-regression-triaged. If not, please fix the regressions and do another perf run. If its results are neutral or positive, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

Our most reliable metric. Used to determine the overall result above. However, even this metric can be noisy.

| | mean | range | count |
|:--|--:|--:|--:|
| Regressions ❌ (primary) | 3.0% | [3.0%, 3.0%] | 1 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.2% | [-0.3%, -0.1%] | 3 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.6% | [-0.3%, 3.0%] | 4 |

Max RSS (memory usage)

Results (primary 0.1%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

| | mean | range | count |
|:--|--:|--:|--:|
| Regressions ❌ (primary) | 2.3% | [2.3%, 2.3%] | 1 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -2.0% | [-2.0%, -2.0%] | 1 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | 0.1% | [-2.0%, 2.3%] | 2 |

Cycles

Results (primary 2.4%, secondary -3.0%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

| | mean | range | count |
|:--|--:|--:|--:|
| Regressions ❌ (primary) | 2.4% | [2.4%, 2.4%] | 1 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -3.0% | [-3.0%, -3.0%] | 2 |
| All ❌✅ (primary) | 2.4% | [2.4%, 2.4%] | 1 |

Binary size

Results (primary -0.3%, secondary -0.2%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

| | mean | range | count |
|:--|--:|--:|--:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 0.0% | [0.0%, 0.0%] | 1 |
| Improvements ✅ (primary) | -0.3% | [-1.3%, -0.0%] | 19 |
| Improvements ✅ (secondary) | -0.2% | [-1.1%, -0.1%] | 42 |
| All ❌✅ (primary) | -0.3% | [-1.3%, -0.0%] | 19 |

Bootstrap: 469.714s -> 468.998s (-0.15%)
Artifact size: 387.67 MiB -> 387.45 MiB (-0.06%)

@rustbot rustbot added perf-regression Performance regression. and removed S-waiting-on-perf Status: Waiting on a perf run to be completed. labels Sep 28, 2025
@Mark-Simulacrum
Member

Seems like this is a small binary size win at the cost of a few instruction count/cycle regressions, but very small ones (including when looking at the aggregate across all our benchmarks).

@bors r+


bors commented Oct 11, 2025

📌 Commit e52fe65 has been approved by Mark-Simulacrum

It is now in the queue for this repository.

@bors bors added S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Oct 11, 2025

bors commented Oct 11, 2025

⌛ Testing commit e52fe65 with merge be0ade2...


bors commented Oct 11, 2025

☀️ Test successful - checks-actions
Approved by: Mark-Simulacrum
Pushing be0ade2 to master...

@bors bors added the merged-by-bors This PR was explicitly merged by bors. label Oct 11, 2025
@bors bors merged commit be0ade2 into rust-lang:master Oct 11, 2025
12 checks passed
@rustbot rustbot added this to the 1.92.0 milestone Oct 11, 2025
@github-actions
Contributor

What is this? This is an experimental post-merge analysis report that shows differences in test outcomes between the merged PR and its parent PR.

Comparing 360a3a4 (parent) -> be0ade2 (this PR)

Test differences

No test diffs found

Test dashboard

Run

cargo run --manifest-path src/ci/citool/Cargo.toml -- \
    test-dashboard be0ade2b602bdfe37a3cc259fcc79e8624dcba94 --output-dir test-dashboard

And then open test-dashboard/index.html in your browser to see an overview of all executed tests.

Job duration changes

  1. aarch64-apple: 9434.6s -> 7663.7s (-18.8%)
  2. x86_64-gnu-llvm-20-2: 4925.6s -> 5648.1s (14.7%)
  3. dist-apple-various: 3635.3s -> 4101.9s (12.8%)
  4. dist-s390x-linux: 5460.0s -> 4920.2s (-9.9%)
  5. x86_64-msvc-2: 7258.5s -> 6656.4s (-8.3%)
  6. x86_64-gnu-aux: 6605.1s -> 6079.8s (-8.0%)
  7. dist-sparcv9-solaris: 5256.9s -> 4847.2s (-7.8%)
  8. dist-x86_64-apple: 6654.0s -> 7150.9s (7.5%)
  9. x86_64-gnu-llvm-20-3: 6645.9s -> 6173.0s (-7.1%)
  10. test-various: 4664.1s -> 4332.9s (-7.1%)
How to interpret the job duration changes?

Job durations can vary a lot, based on the actual runner instance
that executed the job, system noise, invalidated caches, etc. The table above is provided
mostly for t-infra members, for simpler debugging of potential CI slow-downs.

@rust-timer
Collaborator

Finished benchmarking commit (be0ade2): comparison URL.

Overall result: ✅ improvements - no action needed

@rustbot label: -perf-regression

Instruction count

Our most reliable metric. Used to determine the overall result above. However, even this metric can be noisy.

| | mean | range | count |
|:--|--:|--:|--:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.2% | [-0.4%, -0.1%] | 5 |
| Improvements ✅ (secondary) | -0.1% | [-0.2%, -0.0%] | 7 |
| All ❌✅ (primary) | -0.2% | [-0.4%, -0.1%] | 5 |

Max RSS (memory usage)

Results (secondary -1.0%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

| | mean | range | count |
|:--|--:|--:|--:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -1.0% | [-1.0%, -1.0%] | 1 |
| All ❌✅ (primary) | - | - | 0 |

Cycles

Results (primary -2.2%, secondary 0.8%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

| | mean | range | count |
|:--|--:|--:|--:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 2.5% | [1.9%, 3.2%] | 2 |
| Improvements ✅ (primary) | -2.2% | [-2.2%, -2.2%] | 1 |
| Improvements ✅ (secondary) | -2.7% | [-2.7%, -2.7%] | 1 |
| All ❌✅ (primary) | -2.2% | [-2.2%, -2.2%] | 1 |

Binary size

Results (primary -0.3%, secondary -0.2%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

| | mean | range | count |
|:--|--:|--:|--:|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | - | - | 0 |
| Improvements ✅ (primary) | -0.3% | [-1.2%, -0.0%] | 19 |
| Improvements ✅ (secondary) | -0.2% | [-0.9%, -0.2%] | 39 |
| All ❌✅ (primary) | -0.3% | [-1.2%, -0.0%] | 19 |

Bootstrap: 473.775s -> 472.888s (-0.19%)
Artifact size: 388.16 MiB -> 388.10 MiB (-0.01%)


Labels

merged-by-bors This PR was explicitly merged by bors. S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. T-libs Relevant to the library team, which will review and decide on the PR/issue.

6 participants