cranelift-wasm: Only allocate if vectors need bitcasts #4543

Merged · 2 commits into bytecodealliance:main · Jul 27, 2022

Conversation

jameysharp (Contributor):

For wasm programs using SIMD vector types, the type known at function
entry or exit may not match the type used within the body of the
function, so we have to bitcast them. We have to check all calls and
returns for this condition, which involves comparing a subset of a
function's signature with the CLIF types we're trying to use. Currently,
this check heap-allocates a short-lived Vec for the appropriate subset
of the signature.

But most of the time none of the values need a bitcast. So this patch
avoids allocating unless at least one bitcast is actually required, and
only saves the pointers to values that need fixing up. I leaned heavily
on iterators to keep space usage constant until we discover allocation
is necessary after all.

I don't think it's possible to eliminate the allocation entirely,
because the function signature we're examining is borrowed from the
FuncBuilder, but we need to mutably borrow the FuncBuilder to insert the
bitcast instructions. Fortunately, the FromIterator implementation for
Vec doesn't reserve any heap space if the iterator is empty, so we can
unconditionally collect into a Vec and still avoid unnecessary
allocations.
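
The no-allocation property relied on here can be checked directly: collecting an iterator that yields nothing produces a `Vec` whose `capacity()` is 0, i.e. no heap allocation happens (an implementation detail of current std, though a dependable one). A minimal sketch:

```rust
fn main() {
    // A filter that never matches: the "no bitcasts needed" common case.
    let needs_fixup: Vec<u32> = (0u32..100).filter(|&ty| ty == u32::MAX).collect();

    // Vec's FromIterator only allocates once the iterator yields something,
    // so the empty result carries no heap allocation at all.
    assert!(needs_fixup.is_empty());
    assert_eq!(needs_fixup.capacity(), 0);

    // With at least one match, collect allocates as usual.
    let some: Vec<u32> = (0u32..10).filter(|&ty| ty % 2 == 0).collect();
    assert_eq!(some.len(), 5);
    assert!(some.capacity() >= 5);
}
```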

Since the relationship between a function signature and a list of CLIF
values is somewhat complicated, I refactored all the uses of
`bitcast_arguments` and `wasm_param_types`. Instead there are
`bitcast_wasm_params` and `bitcast_wasm_returns`, which share a helper
that combines the previous pair of functions into one.
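
The overall shape of that helper can be sketched with std only; the mock `Type` and the `bitcast_mismatches` name below are illustrative stand-ins, not Cranelift's actual API:

```rust
// Stand-in for cranelift's ir::Type, purely illustrative.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Type(u16);

// Collect (wanted type, &mut slot) pairs for mismatched values only.
// The filter runs before collect, so nothing is allocated unless at
// least one pair actually needs a bitcast.
fn bitcast_mismatches<'a>(
    actual: &'a mut [Type],  // types currently attached to the values
    expected: &[Type],       // types demanded by the signature
) -> Vec<(Type, &'a mut Type)> {
    assert_eq!(actual.len(), expected.len());
    actual
        .iter_mut()
        .zip(expected)
        .filter(|(a, e)| **a != **e)
        .map(|(a, e)| (*e, a))
        .collect()
}

fn main() {
    let mut vals = [Type(1), Type(2), Type(3)];
    let sig = [Type(1), Type(9), Type(3)];
    let fixups = bitcast_mismatches(&mut vals, &sig);
    assert_eq!(fixups.len(), 1);
    // "Insert the bitcast": rewrite each mismatched slot afterwards,
    // once no shared borrow of the signature is held any more.
    for (wanted, slot) in fixups {
        *slot = wanted;
    }
    assert_eq!(vals[1], Type(9));
}
```

Collecting first and mutating second is what resolves the borrow conflict described above: the shared borrow of the signature ends before the mutable fix-up pass begins.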

According to DHAT, when compiling the Sightglass Spidermonkey benchmark,
this avoids 52k allocations averaging about 9 bytes each, which are
freed on average within 2k instructions.

Most Sightglass benchmarks, including Spidermonkey, show no performance
difference with this change. Only one has a slowdown, and it's small:

compilation :: nanoseconds :: benchmarks/shootout-matrix/benchmark.wasm

  Δ = 689373.34 ± 593720.78 (confidence = 99%)

  lazy-bitcast.so is 0.94x to 1.00x faster than main-e121c209f.so!
  main-e121c209f.so is 1.00x to 1.06x faster than lazy-bitcast.so!

  [19741713 21375844.46 32976047] lazy-bitcast.so
  [19345471 20686471.12 30872267] main-e121c209f.so

But several Sightglass benchmarks have notable speed-ups, with smaller
improvements for shootout-ed25519, meshoptimizer, and pulldown-cmark,
and larger ones as follows:

compilation :: nanoseconds :: benchmarks/bz2/benchmark.wasm

  Δ = 20071545.47 ± 2950014.92 (confidence = 99%)

  lazy-bitcast.so is 1.26x to 1.36x faster than main-e121c209f.so!
  main-e121c209f.so is 0.73x to 0.80x faster than lazy-bitcast.so!

  [55995164 64849257.68 89083031] lazy-bitcast.so
  [79382460 84920803.15 98016185] main-e121c209f.so

compilation :: nanoseconds :: benchmarks/blake3-scalar/benchmark.wasm

  Δ = 16620780.61 ± 5395162.13 (confidence = 99%)

  lazy-bitcast.so is 1.14x to 1.28x faster than main-e121c209f.so!
  main-e121c209f.so is 0.77x to 0.88x faster than lazy-bitcast.so!

  [54604352 79877776.35 103666598] lazy-bitcast.so
  [94011835 96498556.96 106200091] main-e121c209f.so

compilation :: nanoseconds :: benchmarks/intgemm-simd/benchmark.wasm

  Δ = 36891254.34 ± 12403663.50 (confidence = 99%)

  lazy-bitcast.so is 1.12x to 1.24x faster than main-e121c209f.so!
  main-e121c209f.so is 0.79x to 0.90x faster than lazy-bitcast.so!

  [131610845 201289587.88 247341883] lazy-bitcast.so
  [232065032 238180842.22 250957563] main-e121c209f.so

cfallin (Member) left a comment:
Looks great, and these are fantastic speedups!

A few nits below (or rather, one clarity nit and one possible smallvec experiment) but otherwise LGTM.

```rust
arguments: &'a mut [Value],
params: &[ir::AbiParam],
is_wasm: impl Fn(usize) -> bool,
) -> Vec<(Type, &'a mut Value)> {
```
cfallin (Member):
Does it help at all to use a SmallVec here, tuned to some size that captures most cases?

jameysharp (Contributor, author):
I thought about using SmallVec but figured first I'd try the experiment that doesn't require any tuning. I'm not sure: how would you suggest picking a size? In the past I've usually picked however many elements will fit in two usize, because that's the minimum size the inline part of a SmallVec will consume anyway. But in this case that's 1, which hardly seems worth doing.
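
The "fits in two usizes" arithmetic can be checked with `std::mem::size_of`. Here `(Type, &mut Value)` is modelled as `(u16, &mut u32)`, assuming `Type` is a small integer newtype as in Cranelift, on a 64-bit target:

```rust
use std::mem::size_of;

fn main() {
    // Model (Type, &mut Value): a small integer type tag plus a pointer.
    // Alignment of the reference pads the pair out to 16 bytes on 64-bit.
    let elem = size_of::<(u16, &mut u32)>();
    assert_eq!(elem, 16);

    // A Vec's pointer and capacity fields occupy two usizes; SmallVec's
    // inline storage only wins once it holds more than fits in that space.
    let two_usize = 2 * size_of::<usize>();
    assert_eq!(two_usize / elem, 1); // i.e. inline capacity 1, as noted above
}
```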

cfallin (Member):
The SmallVec in this case will be on-stack, so we can be a bit looser with its size, I think (and `sub rsp, 128` is almost always faster than `malloc(128)`). For some cases like this I have just picked a reasonable value, but to be more objective about it we could try 1, 4, 16 and see what the difference is...

jameysharp (Contributor, author):
On the intgemm-simd benchmark, Sightglass says plain Vec is 53-73% faster than a size-16 SmallVec, and 8-22% faster than a size-4 SmallVec. I don't understand that result, do you?

Unless there's a simple explanation I'm not seeing, I'm inclined to not think further about SmallVec right now.

cfallin (Member):
That's somewhat surprising, but I can hypothesize at least one potential explanation: if the return value is almost always an empty list, then the empty SmallVec takes more stack space than an empty Vec and hence creates a little more spread in the memory usage (so bad cache effects). I'm stretching a bit with that though.

Anyway if it's rare enough then a simple Vec is totally fine here; it's certainly better than what we had before!

jameysharp (Contributor, author):
I decided to give size-1 SmallVec a shot too for completeness: if it's a matter of different stack frame sizes, then that should be no worse than Vec. But Vec was 44-58% faster on intgemm-simd than even size-1 SmallVec. So I remain thoroughly puzzled. 🤷

```rust
is_wasm: impl Fn(usize) -> bool,
) -> Vec<(Type, &'a mut Value)> {
// zip_eq is from the itertools::Itertools trait
params
```
cfallin (Member):
Can we split up this iterator chain a bit and give names to intermediates to make it easier to follow? I'm thinking perhaps `wasm_param_tys` for the first part, up to grabbing `value_types`; then another intermediate after `zip_eq`, with a comment that we're zipping with the given wasm arg values and asserting the lengths are equal; then do the last two filters on those, commenting that we're taking all vectors whose arg type is not equal to the param's value type.
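
One way the suggested split might look, as a std-only sketch with mock types (names like `collect_bitcasts` are illustrative, and a plain `zip` plus a length assert stands in for itertools' `zip_eq`):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Type(u16);

// Mock of ir::AbiParam; the is_wasm flag stands in for the
// is_wasm(index) predicate from the real code.
struct AbiParam {
    value_type: Type,
    is_wasm: bool,
}

fn collect_bitcasts<'a>(
    params: &[AbiParam],
    arguments: &'a mut [Type],
) -> Vec<(Type, &'a mut Type)> {
    // First part: just the value types of the wasm-level params.
    let wasm_param_tys = params
        .iter()
        .filter(|p| p.is_wasm)
        .map(|p| p.value_type);

    // Zip with the given wasm arg values; the lengths must agree
    // (itertools' zip_eq would assert this for us).
    assert_eq!(params.iter().filter(|p| p.is_wasm).count(), arguments.len());
    let pairs = wasm_param_tys.zip(arguments.iter_mut());

    // Keep only the values whose current type differs from the
    // signature's value type: these are the ones needing a bitcast.
    pairs.filter(|(ty, arg)| *ty != **arg).collect()
}

fn main() {
    let params = [
        AbiParam { value_type: Type(1), is_wasm: true },
        AbiParam { value_type: Type(7), is_wasm: true },
    ];
    let mut args = [Type(1), Type(2)];
    let fixups = collect_bitcasts(&params, &mut args);
    assert_eq!(fixups.len(), 1);
    assert_eq!(fixups[0].0, Type(7));
}
```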

cfallin (Member) left a comment:
err, I meant to set the r+ bit with that one.

jameysharp merged commit ad050e6 into bytecodealliance:main on Jul 27, 2022, and deleted the lazy-bitcast branch July 27, 2022 23:41.
Labels: cranelift, cranelift:wasm