selftests/bpf: Add benchmark for local_storage get
Add benchmarks to demonstrate the performance cliff for local_storage
get as the number of local_storage maps increases beyond the current
local_storage implementation's cache size.

"sequential get" and "interleaved get" benchmarks are added, both of
which do many bpf_task_storage_get calls on sets of task local_storage
maps of various counts, while considering a single specific map to be
'important' and counting task_storage_gets to the important map
separately in addition to normal 'hits' count of all gets. Goal here is
to mimic scenario where a particular program using one map - the
important one - is running on a system where many other local_storage
maps exist and are accessed often.
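
To make the shape of the benchmark concrete, below is a minimal sketch
of the BPF side of a single "sequential get" step. This is illustrative
only, not the benchmark's actual source (which lives in
local_storage_bench.h); the map definition, attach point, and counter
names are hypothetical stand-ins, and the real benchmark repeats the
get across all num_maps maps:

  /* Hypothetical sketch; the real benchmark unrolls this get across
   * num_maps task local_storage maps, all defined like map_0.
   */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
          __uint(map_flags, BPF_F_NO_PREALLOC);
          __type(key, int);
          __type(value, int);
  } map_0 SEC(".maps");

  long hits;            /* every successful get */
  long important_hits;  /* gets against the 'important' map only */

  SEC("tp/syscalls/sys_enter_getpgid")
  int get_seq(void *ctx)
  {
          struct task_struct *task = bpf_get_current_task_btf();
          int *data;

          data = bpf_task_storage_get(&map_0, task, 0,
                                      BPF_LOCAL_STORAGE_GET_F_CREATE);
          if (data) {
                  __sync_fetch_and_add(&hits, 1);
                  /* map_0 is the 'important' map in this sketch */
                  __sync_fetch_and_add(&important_hits, 1);
          }
          return 0;
  }

  char _license[] SEC("license") = "GPL";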

While "sequential get" benchmark does bpf_task_storage_get for map 0, 1,
..., {9, 99, 999} in order, "interleaved" benchmark interleaves 4
bpf_task_storage_gets for the important map for every 10 map gets. This
is meant to highlight performance differences when important map is
accessed far more frequently than non-important maps.
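
Concretely, the interleaved ordering can be sketched as below; do_get()
is a hypothetical stand-in for a bpf_task_storage_get against the map
at the given index, with map 0 as the important map:

  /* Hypothetical sketch of the interleaved access order only;
   * do_get(i) stands in for a bpf_task_storage_get on map i.
   */
  extern void do_get(int map_idx);

  static void interleaved_order(int num_maps)
  {
          int i, j;

          for (i = 0; i < num_maps; i++) {
                  do_get(i);      /* regular get, counted in 'hits' */
                  if (i % 10 == 0)
                          /* 4 important-map gets per 10 regular gets */
                          for (j = 0; j < 4; j++)
                                  do_get(0);
          }
  }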

A "hashmap control" benchmark is also included for easy comparison of
standard bpf hashmap lookup vs local_storage get. The benchmark is
identical to "sequential get", but creates and uses BPF_MAP_TYPE_HASH
instead of local storage.
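
Schematically, the only difference on the lookup path is the map type
and helper call; note the pid-based key below is an assumption about
the control's keying, not taken from the patch:

  /* local_storage benchmarks */
  data = bpf_task_storage_get(&map_n, task, 0,
                              BPF_LOCAL_STORAGE_GET_F_CREATE);

  /* hashmap control: same access pattern against BPF_MAP_TYPE_HASH;
   * keying by tgid here is an assumption */
  pid = bpf_get_current_pid_tgid() >> 32;
  data = bpf_map_lookup_elem(&hash_n, &pid);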

The addition of this benchmark is inspired by a conversation with Alexei
in a previous patchset's thread [0], which highlighted the need for such
a benchmark to motivate and validate improvements to the local_storage
implementation. My approach in that series focused on improving
performance for explicitly-marked 'important' maps and was rejected
with feedback to make more generally-applicable improvements while
avoiding explicitly marking maps as important. Thus the benchmark
reports both general and important-map-focused metrics, so the effect
of future work on both is clear.

Regarding the benchmark results: on a powerful system (Skylake, 20
cores, 256GB RAM):

Local Storage
=============
        Hashmap Control w/ 500 maps
hashmap (control) sequential    get:  hits throughput: 64.689 ± 2.806 M ops/s, hits latency: 15.459 ns/op, important_hits throughput: 0.129 ± 0.006 M ops/s

        num_maps: 1
local_storage cache sequential  get:  hits throughput: 3.793 ± 0.101 M ops/s, hits latency: 263.623 ns/op, important_hits throughput: 3.793 ± 0.101 M ops/s
local_storage cache interleaved get:  hits throughput: 6.539 ± 0.209 M ops/s, hits latency: 152.938 ns/op, important_hits throughput: 6.539 ± 0.209 M ops/s

        num_maps: 10
local_storage cache sequential  get:  hits throughput: 20.237 ± 0.439 M ops/s, hits latency: 49.415 ns/op, important_hits throughput: 2.024 ± 0.044 M ops/s
local_storage cache interleaved get:  hits throughput: 22.421 ± 0.874 M ops/s, hits latency: 44.601 ns/op, important_hits throughput: 8.007 ± 0.312 M ops/s

        num_maps: 16
local_storage cache sequential  get:  hits throughput: 25.582 ± 0.346 M ops/s, hits latency: 39.090 ns/op, important_hits throughput: 1.599 ± 0.022 M ops/s
local_storage cache interleaved get:  hits throughput: 26.615 ± 0.601 M ops/s, hits latency: 37.573 ns/op, important_hits throughput: 8.468 ± 0.191 M ops/s

        num_maps: 17
local_storage cache sequential  get:  hits throughput: 22.932 ± 0.436 M ops/s, hits latency: 43.606 ns/op, important_hits throughput: 1.349 ± 0.026 M ops/s
local_storage cache interleaved get:  hits throughput: 24.005 ± 0.462 M ops/s, hits latency: 41.658 ns/op, important_hits throughput: 7.306 ± 0.140 M ops/s

        num_maps: 24
local_storage cache sequential  get:  hits throughput: 16.025 ± 0.821 M ops/s, hits latency: 62.402 ns/op, important_hits throughput: 0.668 ± 0.034 M ops/s
local_storage cache interleaved get:  hits throughput: 17.691 ± 0.744 M ops/s, hits latency: 56.526 ns/op, important_hits throughput: 4.976 ± 0.209 M ops/s

        num_maps: 32
local_storage cache sequential  get:  hits throughput: 11.865 ± 0.180 M ops/s, hits latency: 84.279 ns/op, important_hits throughput: 0.371 ± 0.006 M ops/s
local_storage cache interleaved get:  hits throughput: 14.383 ± 0.108 M ops/s, hits latency: 69.525 ns/op, important_hits throughput: 4.014 ± 0.030 M ops/s

        num_maps: 100
local_storage cache sequential  get:  hits throughput: 6.105 ± 0.190 M ops/s, hits latency: 163.798 ns/op, important_hits throughput: 0.061 ± 0.002 M ops/s
local_storage cache interleaved get:  hits throughput: 7.055 ± 0.129 M ops/s, hits latency: 141.746 ns/op, important_hits throughput: 1.843 ± 0.034 M ops/s

        num_maps: 1000
local_storage cache sequential  get:  hits throughput: 0.433 ± 0.010 M ops/s, hits latency: 2309.469 ns/op, important_hits throughput: 0.000 ± 0.000 M ops/s
local_storage cache interleaved get:  hits throughput: 0.499 ± 0.026 M ops/s, hits latency: 2002.510 ns/op, important_hits throughput: 0.127 ± 0.007 M ops/s

Looking at the "sequential get" results, it's clear that as the
number of task local_storage maps grows beyond the current cache size
(16), there's a significant reduction in hits throughput. Note that the
current local_storage implementation assigns a cache_idx to maps as they
are created. Since "sequential get" is creating maps 0..n in order and
then doing bpf_task_storage_get calls in the same order, the benchmark
is effectively ensuring that a map will not be in cache when the program
tries to access it.
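
To see why, here is a simplified model of the lookup path (cf. the
kernel's bpf_local_storage_lookup(); RCU, locking, and the real struct
layouts are elided, and the names are abbreviations). With more than 16
maps accessed round-robin, every get takes the slow path and then
evicts some other map's cache slot, so the next get misses too:

  /* Simplified model of the current implementation; not kernel source. */
  #include <stddef.h>

  #define CACHE_SIZE 16   /* current local_storage cache size */

  struct smap { int cache_idx; /* assigned at map creation */ };
  struct sdata {
          struct smap *smap;      /* owning map */
          struct sdata *next;     /* next map's data for this task */
  };
  struct local_storage {
          struct sdata *cache[CACHE_SIZE]; /* one slot per cache_idx */
          struct sdata *list;     /* this task's data across all maps */
  };

  static struct sdata *lookup(struct local_storage *ls, struct smap *map)
  {
          struct sdata *s = ls->cache[map->cache_idx];

          if (s && s->smap == map)        /* fast path: cache hit */
                  return s;
          for (s = ls->list; s; s = s->next)  /* slow path: list walk */
                  if (s->smap == map) {
                          /* refresh slot, evicting whichever map held it */
                          ls->cache[map->cache_idx] = s;
                          return s;
                  }
          return NULL;
  }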

For "interleaved get" results, important-map hits throughput is greatly
increased as the important map is more likely to be in cache by virtue
of being accessed far more frequently. Overall hits throughput still
decreases as the number of maps increases, though.

Note that the test programs need to split task_storage_get calls across
multiple programs to work around the verifier's MAX_USED_MAPS limit. As
the unintuitive-looking results for the smaller num_maps runs show,
fixed overhead that is amortized across more map gets in the larger
runs dominates when there are fewer maps. To get a sense of this
overhead, I commented out bpf_task_storage_get/bpf_map_lookup_elem in
local_storage_bench.h and ran the benchmark on the same host as the
'real' run.
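
In sketch form, the overhead-only variant is the same loop body with
the helper call compiled out (the actual change was just commenting the
calls out; the guard macro below is hypothetical):

  #ifndef LOOKUP_DISABLED         /* hypothetical guard */
          data = bpf_task_storage_get(&map_n, task, 0,
                                      BPF_LOCAL_STORAGE_GET_F_CREATE);
  #endif
          /* loop structure and hit accounting left unchanged */

With the lookup calls stubbed out, the same benchmark run reports: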

Local Storage
=============
        Hashmap Control w/ 500 maps
hashmap (control) sequential    get:  hits throughput: 115.812 ± 2.513 M ops/s, hits latency: 8.635 ns/op, important_hits throughput: 0.232 ± 0.005 M ops/s

        num_maps: 1
local_storage cache sequential  get:  hits throughput: 4.031 ± 0.033 M ops/s, hits latency: 248.094 ns/op, important_hits throughput: 4.031 ± 0.033 M ops/s
local_storage cache interleaved get:  hits throughput: 7.997 ± 0.088 M ops/s, hits latency: 125.046 ns/op, important_hits throughput: 7.997 ± 0.088 M ops/s

        num_maps: 10
local_storage cache sequential  get:  hits throughput: 34.000 ± 0.077 M ops/s, hits latency: 29.412 ns/op, important_hits throughput: 3.400 ± 0.008 M ops/s
local_storage cache interleaved get:  hits throughput: 37.895 ± 0.670 M ops/s, hits latency: 26.389 ns/op, important_hits throughput: 13.534 ± 0.239 M ops/s

        num_maps: 16
local_storage cache sequential  get:  hits throughput: 46.947 ± 0.283 M ops/s, hits latency: 21.300 ns/op, important_hits throughput: 2.934 ± 0.018 M ops/s
local_storage cache interleaved get:  hits throughput: 47.301 ± 1.027 M ops/s, hits latency: 21.141 ns/op, important_hits throughput: 15.050 ± 0.327 M ops/s

        num_maps: 17
local_storage cache sequential  get:  hits throughput: 45.871 ± 0.414 M ops/s, hits latency: 21.800 ns/op, important_hits throughput: 2.698 ± 0.024 M ops/s
local_storage cache interleaved get:  hits throughput: 46.591 ± 1.969 M ops/s, hits latency: 21.463 ns/op, important_hits throughput: 14.180 ± 0.599 M ops/s

        num_maps: 24
local_storage cache sequential  get:  hits throughput: 58.053 ± 1.043 M ops/s, hits latency: 17.226 ns/op, important_hits throughput: 2.419 ± 0.043 M ops/s
local_storage cache interleaved get:  hits throughput: 58.115 ± 0.377 M ops/s, hits latency: 17.207 ns/op, important_hits throughput: 16.345 ± 0.106 M ops/s

        num_maps: 32
local_storage cache sequential  get:  hits throughput: 68.548 ± 0.820 M ops/s, hits latency: 14.588 ns/op, important_hits throughput: 2.142 ± 0.026 M ops/s
local_storage cache interleaved get:  hits throughput: 63.015 ± 0.963 M ops/s, hits latency: 15.869 ns/op, important_hits throughput: 17.586 ± 0.269 M ops/s

        num_maps: 100
local_storage cache sequential  get:  hits throughput: 95.375 ± 1.286 M ops/s, hits latency: 10.485 ns/op, important_hits throughput: 0.954 ± 0.013 M ops/s
local_storage cache interleaved get:  hits throughput: 76.996 ± 2.614 M ops/s, hits latency: 12.988 ns/op, important_hits throughput: 20.111 ± 0.683 M ops/s

        num_maps: 1000
local_storage cache sequential  get:  hits throughput: 119.965 ± 1.386 M ops/s, hits latency: 8.336 ns/op, important_hits throughput: 0.120 ± 0.001 M ops/s
local_storage cache interleaved get:  hits throughput: 92.665 ± 0.788 M ops/s, hits latency: 10.792 ns/op, important_hits throughput: 23.581 ± 0.200 M ops/s

Adjusting for overhead (subtracting each overhead-only run's latency
from the corresponding 'real' run's), latency numbers for "hashmap
control" and "sequential get" are:

hashmap_control:     ~6.8ns
sequential_get_1:    ~15.5ns
sequential_get_10:   ~20ns
sequential_get_16:   ~17.8ns
sequential_get_17:   ~21.8ns
sequential_get_24:   ~45.2ns
sequential_get_32:   ~69.7ns
sequential_get_100:  ~153.3ns
sequential_get_1000: ~2300ns

This clearly demonstrates a cliff once the number of maps exceeds the
cache size.

When running the benchmarks it may be necessary to bump the 'open files'
ulimit for a successful run, since each created map holds an open file
descriptor and the largest runs create 1000 maps.

  [0]: https://lore.kernel.org/all/20220420002143.1096548-1-davemarchevsky@fb.com

Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
davemarchevsky authored and Kernel Patches Daemon committed May 19, 2022
1 parent 7657bba commit d927b05
Showing 10 changed files with 613 additions and 1 deletion.
6 changes: 5 additions & 1 deletion tools/testing/selftests/bpf/Makefile
@@ -560,6 +560,9 @@ $(OUTPUT)/bench_ringbufs.o: $(OUTPUT)/ringbuf_bench.skel.h \
 $(OUTPUT)/bench_bloom_filter_map.o: $(OUTPUT)/bloom_filter_bench.skel.h
 $(OUTPUT)/bench_bpf_loop.o: $(OUTPUT)/bpf_loop_bench.skel.h
 $(OUTPUT)/bench_strncmp.o: $(OUTPUT)/strncmp_bench.skel.h
+$(OUTPUT)/bench_local_storage.o: $(OUTPUT)/local_storage_bench__get_seq.skel.h \
+                                $(OUTPUT)/local_storage_bench__get_int.skel.h \
+                                $(OUTPUT)/local_storage_bench__hashmap.skel.h
 $(OUTPUT)/bench.o: bench.h testing_helpers.h $(BPFOBJ)
 $(OUTPUT)/bench: LDLIBS += -lm
 $(OUTPUT)/bench: $(OUTPUT)/bench.o \
@@ -571,7 +574,8 @@ $(OUTPUT)/bench: $(OUTPUT)/bench.o \
         $(OUTPUT)/bench_ringbufs.o \
         $(OUTPUT)/bench_bloom_filter_map.o \
         $(OUTPUT)/bench_bpf_loop.o \
-        $(OUTPUT)/bench_strncmp.o
+        $(OUTPUT)/bench_strncmp.o \
+        $(OUTPUT)/bench_local_storage.o
         $(call msg,BINARY,,$@)
         $(Q)$(CC) $(CFLAGS) $(LDFLAGS) $(filter %.a %.o,$^) $(LDLIBS) -o $@
57 changes: 57 additions & 0 deletions tools/testing/selftests/bpf/bench.c
@@ -150,6 +150,53 @@ void ops_report_final(struct bench_res res[], int res_cnt)
         printf("latency %8.3lf ns/op\n", 1000.0 / hits_mean * env.producer_cnt);
 }
 
+void local_storage_report_progress(int iter, struct bench_res *res,
+                                   long delta_ns)
+{
+        double important_hits_per_sec, hits_per_sec;
+        double delta_sec = delta_ns / 1000000000.0;
+
+        hits_per_sec = res->hits / 1000000.0 / delta_sec;
+        important_hits_per_sec = res->important_hits / 1000000.0 / delta_sec;
+
+        printf("Iter %3d (%7.3lfus): ", iter, (delta_ns - 1000000000) / 1000.0);
+
+        printf("hits %8.3lfM/s ", hits_per_sec);
+        printf("important_hits %8.3lfM/s\n", important_hits_per_sec);
+}
+
+void local_storage_report_final(struct bench_res res[], int res_cnt)
+{
+        double important_hits_mean = 0.0, important_hits_stddev = 0.0;
+        double hits_mean = 0.0, hits_stddev = 0.0;
+        int i;
+
+        for (i = 0; i < res_cnt; i++) {
+                hits_mean += res[i].hits / 1000000.0 / (0.0 + res_cnt);
+                important_hits_mean += res[i].important_hits / 1000000.0 / (0.0 + res_cnt);
+        }
+
+        if (res_cnt > 1) {
+                for (i = 0; i < res_cnt; i++) {
+                        hits_stddev += (hits_mean - res[i].hits / 1000000.0) *
+                                       (hits_mean - res[i].hits / 1000000.0) /
+                                       (res_cnt - 1.0);
+                        important_hits_stddev +=
+                                (important_hits_mean - res[i].important_hits / 1000000.0) *
+                                (important_hits_mean - res[i].important_hits / 1000000.0) /
+                                (res_cnt - 1.0);
+                }
+
+                hits_stddev = sqrt(hits_stddev);
+                important_hits_stddev = sqrt(important_hits_stddev);
+        }
+        printf("Summary: hits throughput %8.3lf \u00B1 %5.3lf M ops/s, ",
+               hits_mean, hits_stddev);
+        printf("hits latency %8.3lf ns/op, ", 1000.0 / hits_mean);
+        printf("important_hits throughput %8.3lf \u00B1 %5.3lf M ops/s\n",
+               important_hits_mean, important_hits_stddev);
+}
+
 const char *argp_program_version = "benchmark";
 const char *argp_program_bug_address = "<bpf@vger.kernel.org>";
 const char argp_program_doc[] =
@@ -188,12 +235,14 @@ static const struct argp_option opts[] = {
 extern struct argp bench_ringbufs_argp;
 extern struct argp bench_bloom_map_argp;
 extern struct argp bench_bpf_loop_argp;
+extern struct argp bench_local_storage_argp;
 extern struct argp bench_strncmp_argp;
 
 static const struct argp_child bench_parsers[] = {
         { &bench_ringbufs_argp, 0, "Ring buffers benchmark", 0 },
         { &bench_bloom_map_argp, 0, "Bloom filter map benchmark", 0 },
         { &bench_bpf_loop_argp, 0, "bpf_loop helper benchmark", 0 },
+        { &bench_local_storage_argp, 0, "local_storage benchmark", 0 },
         { &bench_strncmp_argp, 0, "bpf_strncmp helper benchmark", 0 },
         {},
 };
@@ -396,6 +445,9 @@ extern const struct bench bench_hashmap_with_bloom;
 extern const struct bench bench_bpf_loop;
 extern const struct bench bench_strncmp_no_helper;
 extern const struct bench bench_strncmp_helper;
+extern const struct bench bench_local_storage_cache_seq_get;
+extern const struct bench bench_local_storage_cache_interleaved_get;
+extern const struct bench bench_local_storage_cache_hashmap_control;
 
 static const struct bench *benchs[] = {
         &bench_count_global,
@@ -430,6 +482,9 @@ static const struct bench *benchs[] = {
         &bench_bpf_loop,
         &bench_strncmp_no_helper,
         &bench_strncmp_helper,
+        &bench_local_storage_cache_seq_get,
+        &bench_local_storage_cache_interleaved_get,
+        &bench_local_storage_cache_hashmap_control,
 };
 
 static void setup_benchmark()
@@ -547,5 +602,7 @@ int main(int argc, char **argv)
         bench->report_final(state.results + env.warmup_sec,
                             state.res_cnt - env.warmup_sec);
 
+        if (bench->teardown)
+                bench->teardown();
         return 0;
 }
5 changes: 5 additions & 0 deletions tools/testing/selftests/bpf/bench.h
@@ -34,12 +34,14 @@ struct bench_res {
         long hits;
         long drops;
         long false_hits;
+        long important_hits;
 };
 
 struct bench {
         const char *name;
         void (*validate)(void);
         void (*setup)(void);
+        void (*teardown)(void);
         void *(*producer_thread)(void *ctx);
         void *(*consumer_thread)(void *ctx);
         void (*measure)(struct bench_res* res);
@@ -61,6 +63,9 @@ void false_hits_report_progress(int iter, struct bench_res *res, long delta_ns);
 void false_hits_report_final(struct bench_res res[], int res_cnt);
 void ops_report_progress(int iter, struct bench_res *res, long delta_ns);
 void ops_report_final(struct bench_res res[], int res_cnt);
+void local_storage_report_progress(int iter, struct bench_res *res,
+                                   long delta_ns);
+void local_storage_report_final(struct bench_res res[], int res_cnt);
 
 static inline __u64 get_time_ns(void)
 {
[diff for the remaining 7 changed files not shown]
