selftests/bpf: Add benchmark for local_storage get
Add benchmarks to demonstrate the performance cliff for local_storage get as the number of local_storage maps increases beyond the current local_storage implementation's cache size.

"sequential get" and "interleaved get" benchmarks are added, both of which do many bpf_task_storage_get calls on sets of task local_storage maps of various counts, while considering a single specific map to be 'important' and counting task_storage_gets to the important map separately in addition to the normal 'hits' count of all gets. The goal here is to mimic a scenario where a particular program using one map - the important one - is running on a system where many other local_storage maps exist and are accessed often.

While the "sequential get" benchmark does bpf_task_storage_get for map 0, 1, ..., {9, 99, 999} in order, the "interleaved" benchmark interleaves 4 bpf_task_storage_gets for the important map for every 10 map gets. This is meant to highlight performance differences when the important map is accessed far more frequently than the non-important maps.

A "hashmap control" benchmark is also included for easy comparison of standard bpf hashmap lookup vs local_storage get. The benchmark is identical to "sequential get", but creates and uses BPF_MAP_TYPE_HASH instead of local storage.

Addition of this benchmark is inspired by a conversation with Alexei in a previous patchset's thread [0], which highlighted the need for such a benchmark to motivate and validate improvements to the local_storage implementation. My approach in that series focused on improving performance for explicitly-marked 'important' maps and was rejected, with feedback to make more generally-applicable improvements while avoiding explicitly marking maps as important. Thus the benchmark reports both general and important-map-focused metrics, so the effect of future work on both is clear.
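To make the access pattern concrete, here is a rough, hand-trimmed sketch of what the BPF side of such a benchmark can look like. This is illustrative only, not the actual selftest sources: map and program names are made up, only three maps are shown, and the real test programs spread their gets across several BPF programs (see the MAX_USED_MAPS note further down).

  /* Illustrative sketch only -- not the code added by this patch.
   * A few task local_storage maps are declared; map0 is treated as the
   * "important" map. The program does one bpf_task_storage_get per map
   * in order ("sequential get") and bumps global hit counters that the
   * userspace side of the benchmark reads to compute throughput.
   */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  char _license[] SEC("license") = "GPL";

  #define TASK_STORAGE_MAP(name)                                \
          struct {                                              \
                  __uint(type, BPF_MAP_TYPE_TASK_STORAGE);      \
                  __uint(map_flags, BPF_F_NO_PREALLOC);         \
                  __type(key, int);                             \
                  __type(value, long);                          \
          } name SEC(".maps")

  TASK_STORAGE_MAP(map0);   /* the 'important' map */
  TASK_STORAGE_MAP(map1);
  TASK_STORAGE_MAP(map2);

  long hits;                /* all successful gets */
  long important_hits;      /* successful gets of map0 only */

  static __always_inline void do_get(void *map, int important)
  {
          struct task_struct *task = bpf_get_current_task_btf();
          long *data;

          data = bpf_task_storage_get(map, task, 0,
                                      BPF_LOCAL_STORAGE_GET_F_CREATE);
          if (data) {
                  __sync_fetch_and_add(&hits, 1);
                  if (important)
                          __sync_fetch_and_add(&important_hits, 1);
          }
  }

  SEC("tp/syscalls/sys_enter_getpgid")
  int seq_get(void *ctx)
  {
          /* The "interleaved get" variant would additionally sprinkle
           * do_get(&map0, 1) calls between the others, 4 important gets
           * per 10 map gets.
           */
          do_get(&map0, 1);
          do_get(&map1, 0);
          do_get(&map2, 0);
          return 0;
  }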
Regarding the benchmark results, on a powerful system (Skylake, 20 cores, 256gb ram):

Local Storage
=============
Hashmap Control w/ 500 maps
hashmap (control) sequential    get:  hits throughput: 64.689 ± 2.806 M ops/s, hits latency: 15.459 ns/op, important_hits throughput: 0.129 ± 0.006 M ops/s

num_maps: 1
local_storage cache sequential  get:  hits throughput: 3.793 ± 0.101 M ops/s, hits latency: 263.623 ns/op, important_hits throughput: 3.793 ± 0.101 M ops/s
local_storage cache interleaved get:  hits throughput: 6.539 ± 0.209 M ops/s, hits latency: 152.938 ns/op, important_hits throughput: 6.539 ± 0.209 M ops/s

num_maps: 10
local_storage cache sequential  get:  hits throughput: 20.237 ± 0.439 M ops/s, hits latency: 49.415 ns/op, important_hits throughput: 2.024 ± 0.044 M ops/s
local_storage cache interleaved get:  hits throughput: 22.421 ± 0.874 M ops/s, hits latency: 44.601 ns/op, important_hits throughput: 8.007 ± 0.312 M ops/s

num_maps: 16
local_storage cache sequential  get:  hits throughput: 25.582 ± 0.346 M ops/s, hits latency: 39.090 ns/op, important_hits throughput: 1.599 ± 0.022 M ops/s
local_storage cache interleaved get:  hits throughput: 26.615 ± 0.601 M ops/s, hits latency: 37.573 ns/op, important_hits throughput: 8.468 ± 0.191 M ops/s

num_maps: 17
local_storage cache sequential  get:  hits throughput: 22.932 ± 0.436 M ops/s, hits latency: 43.606 ns/op, important_hits throughput: 1.349 ± 0.026 M ops/s
local_storage cache interleaved get:  hits throughput: 24.005 ± 0.462 M ops/s, hits latency: 41.658 ns/op, important_hits throughput: 7.306 ± 0.140 M ops/s

num_maps: 24
local_storage cache sequential  get:  hits throughput: 16.025 ± 0.821 M ops/s, hits latency: 62.402 ns/op, important_hits throughput: 0.668 ± 0.034 M ops/s
local_storage cache interleaved get:  hits throughput: 17.691 ± 0.744 M ops/s, hits latency: 56.526 ns/op, important_hits throughput: 4.976 ± 0.209 M ops/s

num_maps: 32
local_storage cache sequential  get:  hits throughput: 11.865 ± 0.180 M ops/s, hits latency: 84.279 ns/op, important_hits throughput: 0.371 ± 0.006 M ops/s
local_storage cache interleaved get:  hits throughput: 14.383 ± 0.108 M ops/s, hits latency: 69.525 ns/op, important_hits throughput: 4.014 ± 0.030 M ops/s

num_maps: 100
local_storage cache sequential  get:  hits throughput: 6.105 ± 0.190 M ops/s, hits latency: 163.798 ns/op, important_hits throughput: 0.061 ± 0.002 M ops/s
local_storage cache interleaved get:  hits throughput: 7.055 ± 0.129 M ops/s, hits latency: 141.746 ns/op, important_hits throughput: 1.843 ± 0.034 M ops/s

num_maps: 1000
local_storage cache sequential  get:  hits throughput: 0.433 ± 0.010 M ops/s, hits latency: 2309.469 ns/op, important_hits throughput: 0.000 ± 0.000 M ops/s
local_storage cache interleaved get:  hits throughput: 0.499 ± 0.026 M ops/s, hits latency: 2002.510 ns/op, important_hits throughput: 0.127 ± 0.007 M ops/s

Looking at the "sequential get" results, it's clear that as the number of task local_storage maps grows beyond the current cache size (16), there's a significant reduction in hits throughput. Note that the current local_storage implementation assigns a cache_idx to each map as it is created. Since "sequential get" creates maps 0..n in order and then does bpf_task_storage_get calls in the same order, the benchmark effectively ensures that a map will not be in cache when the program tries to access it.

For the "interleaved get" results, important-map hits throughput is greatly increased, as the important map is more likely to be in cache by virtue of being accessed far more frequently. Throughput still decreases as the number of maps increases, though.
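As a rough illustration of why the sequential pattern defeats the cache, here is a toy user-space model (my own sketch, not kernel code): treat the cache as 16 slots and assume each map is pinned to slot map_id % 16 at creation time, a simplification of the cache_idx assignment described above. Walking the maps in creation order then essentially never finds the wanted map in its slot once there are more than 16 maps:

  /* Toy model of the per-task local_storage cache: 16 slots, each map
   * pinned to a fixed slot at creation time (slot = map % 16 here is a
   * simplification of the real cache_idx assignment). Accessing maps
   * 0..n-1 repeatedly in creation order hits the cache ~100% of the
   * time for n <= 16 and ~0% of the time for n > 16.
   */
  #include <stdio.h>

  #define CACHE_SIZE 16

  int main(void)
  {
          int num_maps[] = { 1, 10, 16, 17, 24, 32, 100, 1000 };

          for (unsigned long i = 0; i < sizeof(num_maps) / sizeof(num_maps[0]); i++) {
                  int n = num_maps[i], hits = 0, total = 0;
                  int cache[CACHE_SIZE];

                  for (int slot = 0; slot < CACHE_SIZE; slot++)
                          cache[slot] = -1;

                  for (int pass = 0; pass < 100; pass++) {
                          for (int map = 0; map < n; map++) {
                                  int slot = map % CACHE_SIZE;

                                  if (cache[slot] == map)
                                          hits++;         /* fast path: map found in its slot */
                                  cache[slot] = map;      /* cache the most recently used map */
                                  total++;
                          }
                  }
                  printf("num_maps %4d: cache hit rate %3.0f%%\n",
                         n, 100.0 * hits / total);
          }
          return 0;
  }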
Note that the test programs need to split their task_storage_get calls across multiple programs to work around the verifier's MAX_USED_MAPS limitation. As evidenced by the unintuitive-looking results for the smaller num_maps runs, overhead which is amortized across larger num_maps in other runs dominates when there are fewer maps.

To get a sense of the overhead, I commented out bpf_task_storage_get/bpf_map_lookup_elem in local_storage_bench.h and ran the benchmark on the same host as the 'real' run. Results:

Local Storage
=============
Hashmap Control w/ 500 maps
hashmap (control) sequential    get:  hits throughput: 115.812 ± 2.513 M ops/s, hits latency: 8.635 ns/op, important_hits throughput: 0.232 ± 0.005 M ops/s

num_maps: 1
local_storage cache sequential  get:  hits throughput: 4.031 ± 0.033 M ops/s, hits latency: 248.094 ns/op, important_hits throughput: 4.031 ± 0.033 M ops/s
local_storage cache interleaved get:  hits throughput: 7.997 ± 0.088 M ops/s, hits latency: 125.046 ns/op, important_hits throughput: 7.997 ± 0.088 M ops/s

num_maps: 10
local_storage cache sequential  get:  hits throughput: 34.000 ± 0.077 M ops/s, hits latency: 29.412 ns/op, important_hits throughput: 3.400 ± 0.008 M ops/s
local_storage cache interleaved get:  hits throughput: 37.895 ± 0.670 M ops/s, hits latency: 26.389 ns/op, important_hits throughput: 13.534 ± 0.239 M ops/s

num_maps: 16
local_storage cache sequential  get:  hits throughput: 46.947 ± 0.283 M ops/s, hits latency: 21.300 ns/op, important_hits throughput: 2.934 ± 0.018 M ops/s
local_storage cache interleaved get:  hits throughput: 47.301 ± 1.027 M ops/s, hits latency: 21.141 ns/op, important_hits throughput: 15.050 ± 0.327 M ops/s

num_maps: 17
local_storage cache sequential  get:  hits throughput: 45.871 ± 0.414 M ops/s, hits latency: 21.800 ns/op, important_hits throughput: 2.698 ± 0.024 M ops/s
local_storage cache interleaved get:  hits throughput: 46.591 ± 1.969 M ops/s, hits latency: 21.463 ns/op, important_hits throughput: 14.180 ± 0.599 M ops/s

num_maps: 24
local_storage cache sequential  get:  hits throughput: 58.053 ± 1.043 M ops/s, hits latency: 17.226 ns/op, important_hits throughput: 2.419 ± 0.043 M ops/s
local_storage cache interleaved get:  hits throughput: 58.115 ± 0.377 M ops/s, hits latency: 17.207 ns/op, important_hits throughput: 16.345 ± 0.106 M ops/s

num_maps: 32
local_storage cache sequential  get:  hits throughput: 68.548 ± 0.820 M ops/s, hits latency: 14.588 ns/op, important_hits throughput: 2.142 ± 0.026 M ops/s
local_storage cache interleaved get:  hits throughput: 63.015 ± 0.963 M ops/s, hits latency: 15.869 ns/op, important_hits throughput: 17.586 ± 0.269 M ops/s

num_maps: 100
local_storage cache sequential  get:  hits throughput: 95.375 ± 1.286 M ops/s, hits latency: 10.485 ns/op, important_hits throughput: 0.954 ± 0.013 M ops/s
local_storage cache interleaved get:  hits throughput: 76.996 ± 2.614 M ops/s, hits latency: 12.988 ns/op, important_hits throughput: 20.111 ± 0.683 M ops/s

num_maps: 1000
local_storage cache sequential  get:  hits throughput: 119.965 ± 1.386 M ops/s, hits latency: 8.336 ns/op, important_hits throughput: 0.120 ± 0.001 M ops/s
local_storage cache interleaved get:  hits throughput: 92.665 ± 0.788 M ops/s, hits latency: 10.792 ns/op, important_hits throughput: 23.581 ± 0.200 M ops/s
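The overhead-adjusted numbers below are simply the 'real' run's per-op latency minus the no-op run's per-op latency for the same configuration. A quick sketch of the arithmetic, with values copied from the two tables above:

  /* Sketch of the overhead adjustment: subtract the no-op run's per-op
   * latency from the real run's per-op latency for the same config.
   */
  #include <stdio.h>

  int main(void)
  {
          struct { const char *name; double real_ns, noop_ns; } runs[] = {
                  { "hashmap_control",     15.459,     8.635 },
                  { "sequential_get_16",   39.090,    21.300 },
                  { "sequential_get_32",   84.279,    14.588 },
                  { "sequential_get_1000", 2309.469,   8.336 },
          };

          for (unsigned long i = 0; i < sizeof(runs) / sizeof(runs[0]); i++)
                  printf("%s: ~%.1f ns/op\n", runs[i].name,
                         runs[i].real_ns - runs[i].noop_ns);
          return 0;
  }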
Adjusting for overhead, the latency numbers for "hashmap control" and "sequential get" are:

hashmap_control:     ~6.8ns
sequential_get_1:    ~15.5ns
sequential_get_10:   ~20ns
sequential_get_16:   ~17.8ns
sequential_get_17:   ~21.8ns
sequential_get_24:   ~45.2ns
sequential_get_32:   ~69.7ns
sequential_get_100:  ~153.3ns
sequential_get_1000: ~2300ns

These clearly demonstrate the cliff once the number of maps exceeds the cache size.

When running the benchmarks it may be necessary to bump the 'open files' ulimit for a successful run.

  [0]: https://lore.kernel.org/all/20220420002143.1096548-1-davemarchevsky@fb.com

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>