[red-knot] make large-union benchmark more challenging #17416
Merged
Summary
Now that we've fixed one large source of constant overhead in building large unions of literals, we can afford to make this benchmark a bit nastier by letting the union grow 8x as large (up to 2048 elements, as opposed to just 256 before). The motivation is that the benchmark can then more clearly show the distinction between constant-overhead optimizations and algorithmic-complexity improvements (the latter matter relatively more as the union in the benchmark grows).
We expect the largest unions we need to support (from large code-generated enums) may be around 4k or 5k elements, so ideally we'd increase the benchmark to that size; at the moment, though, that still makes the benchmark too slow.
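For illustration, here is a minimal sketch of how a benchmark input like this might be generated. The function name and the shape of the generated Python are hypothetical, not the PR's actual benchmark code; the idea is just to produce a module where one variable's inferred type is a union of `n` distinct literals.

```rust
use std::fmt::Write;

/// Hypothetical helper (illustrative only): build a Python module in which
/// `result` is inferred as `Literal[0] | Literal[1] | ... | Literal[n - 1] | None`.
fn generate_union_source(n: usize) -> String {
    let mut src = String::new();
    writeln!(src, "def f(cond: int) -> None:").unwrap();
    // Each branch assigns a distinct int literal, so the type checker must
    // build (and deduplicate) a union with `n` literal members.
    for i in 0..n {
        let keyword = if i == 0 { "if" } else { "elif" };
        writeln!(src, "    {keyword} cond == {i}:").unwrap();
        writeln!(src, "        result = {i}").unwrap();
    }
    writeln!(src, "    else:").unwrap();
    writeln!(src, "        result = None").unwrap();
    writeln!(src, "    reveal_type(result)").unwrap();
    src
}

fn main() {
    // 2048 elements, matching the new benchmark size described above.
    let source = generate_union_source(2048);
    println!("generated {} lines of Python", source.lines().count());
}
```

Scaling `n` from 256 to 2048 makes any superlinear cost in union construction dominate the runtime, which is what lets the benchmark separate algorithmic improvements from constant-factor ones.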
Test Plan
cargo bench --bench red_knot