cranelift-wasm: Only allocate if vectors need bitcasts (#4543)
For wasm programs using SIMD vector types, the type known at function entry or exit may not match the type used within the body of the function, so we have to bitcast them. We have to check all calls and returns for this condition, which involves comparing a subset of a function's signature with the CLIF types we're trying to use.

Currently, this check heap-allocates a short-lived Vec for the appropriate subset of the signature. But most of the time, none of the values need a bitcast. So this patch avoids allocating unless at least one bitcast is actually required, and only saves the pointers to values that need fixing up.

I leaned heavily on iterators to keep space usage constant until we discover allocation is necessary after all. I don't think it's possible to eliminate the allocation entirely, because the function signature we're examining is borrowed from the FunctionBuilder, but we need to mutably borrow the FunctionBuilder to insert the bitcast instructions. Fortunately, the FromIterator implementation for Vec doesn't reserve any heap space if the iterator is empty, so we can unconditionally collect into a Vec and still avoid unnecessary allocations (a sketch of this pattern follows the benchmark results below).

Since the relationship between a function signature and a list of CLIF values is somewhat complicated, I refactored all the uses of `bitcast_arguments` and `wasm_param_types`. Instead there are `bitcast_wasm_params` and `bitcast_wasm_returns`, which share a helper that combines the previous pair of functions into one.

According to DHAT, when compiling the Sightglass Spidermonkey benchmark, this avoids 52k allocations averaging about 9 bytes each, which are freed on average within 2k instructions.

Most Sightglass benchmarks, including Spidermonkey, show no performance difference with this change. Only one has a slowdown, and it's small:

compilation :: nanoseconds :: benchmarks/shootout-matrix/benchmark.wasm

  Δ = 689373.34 ± 593720.78 (confidence = 99%)

  lazy-bitcast.so is 0.94x to 1.00x faster than main-e121c209f.so!
  main-e121c209f.so is 1.00x to 1.06x faster than lazy-bitcast.so!

  [19741713 21375844.46 32976047] lazy-bitcast.so
  [19345471 20686471.12 30872267] main-e121c209f.so

But several Sightglass benchmarks have notable speed-ups, with smaller improvements for shootout-ed25519, meshoptimizer, and pulldown-cmark, and larger ones as follows:

compilation :: nanoseconds :: benchmarks/bz2/benchmark.wasm

  Δ = 20071545.47 ± 2950014.92 (confidence = 99%)

  lazy-bitcast.so is 1.26x to 1.36x faster than main-e121c209f.so!
  main-e121c209f.so is 0.73x to 0.80x faster than lazy-bitcast.so!

  [55995164 64849257.68 89083031] lazy-bitcast.so
  [79382460 84920803.15 98016185] main-e121c209f.so

compilation :: nanoseconds :: benchmarks/blake3-scalar/benchmark.wasm

  Δ = 16620780.61 ± 5395162.13 (confidence = 99%)

  lazy-bitcast.so is 1.14x to 1.28x faster than main-e121c209f.so!
  main-e121c209f.so is 0.77x to 0.88x faster than lazy-bitcast.so!

  [54604352 79877776.35 103666598] lazy-bitcast.so
  [94011835 96498556.96 106200091] main-e121c209f.so

compilation :: nanoseconds :: benchmarks/intgemm-simd/benchmark.wasm

  Δ = 36891254.34 ± 12403663.50 (confidence = 99%)

  lazy-bitcast.so is 1.12x to 1.24x faster than main-e121c209f.so!
  main-e121c209f.so is 0.79x to 0.90x faster than lazy-bitcast.so!

  [131610845 201289587.88 247341883] lazy-bitcast.so
  [232065032 238180842.22 250957563] main-e121c209f.so
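To illustrate the lazy-collection pattern described above, here is a minimal, self-contained sketch. It is not the actual cranelift-wasm code: `Type`, `Value`, `needs_bitcast`, and `values_needing_bitcast` are simplified stand-ins for the real CLIF types and the shared helper, and the real check compares the vector types demanded by a wasm signature against the CLIF values at a call or return site.

```rust
// Simplified stand-ins for the real CLIF types; purely illustrative.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Type {
    I32,
    I8X16,
    F32X4,
}

#[derive(Clone, Copy, Debug)]
struct Value(u32);

// Stand-in for the real condition, which is about SIMD vector types whose
// CLIF representation differs from the type in the wasm signature.
fn needs_bitcast(wanted: Type, actual: Type) -> bool {
    wanted != actual
}

/// Walk the signature and the live values in lockstep, keeping only the
/// values whose type disagrees with the signature. `collect` into a `Vec`
/// does not reserve heap space for an empty iterator, so the common case
/// (no mismatches) returns an empty Vec without allocating.
fn values_needing_bitcast(
    signature_types: &[Type],
    values: &[(Value, Type)],
) -> Vec<(Value, Type)> {
    signature_types
        .iter()
        .zip(values.iter())
        .filter(|(wanted, (_, actual))| needs_bitcast(**wanted, *actual))
        .map(|(wanted, (val, _))| (*val, *wanted))
        .collect()
}

fn main() {
    // The second value is an F32X4 where the signature wants I8X16, so only
    // that value ends up in the returned Vec of fixups.
    let sig = [Type::I32, Type::I8X16];
    let vals = [(Value(0), Type::I32), (Value(1), Type::F32X4)];
    let fixups = values_needing_bitcast(&sig, &vals);
    assert_eq!(fixups.len(), 1);
    println!("values needing a bitcast: {:?}", fixups);
}
```

The point of the sketch is that the iterator chain holds no storage of its own, and Vec's FromIterator implementation only allocates once the iterator yields its first item, so the no-bitcast case stays allocation-free while the caller is still free to mutably borrow the builder afterwards to insert the bitcasts for whatever the Vec contains.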