cleanup(engines): detach per-cpu kernel metrics from global kernel metrics #2031

Merged: 6 commits, Sep 5, 2024
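For context on the flag semantics this PR introduces: METRICS_V2_KERNEL_COUNTERS_PER_CPU is a superset request: asking for per-CPU counters also returns the general kernel counters, while asking only for METRICS_V2_KERNEL_COUNTERS no longer returns per-CPU entries. A minimal consumer sketch (hypothetical usage; `h` is assumed to come from a successful scap_open call, the include path is assumed, and error handling is trimmed):

```cpp
#include <cstdio>
#include <libscap/scap.h>  // assumed include path for scap_get_stats_v2/metrics_v2

void dump_kernel_metrics(scap_t* h)
{
	uint32_t nstats = 0;
	int32_t rc = 0;
	// Per-CPU counters; the general kernel counters are implied by this flag.
	uint32_t flags = METRICS_V2_KERNEL_COUNTERS_PER_CPU;
	const metrics_v2* stats_v2 = scap_get_stats_v2(h, flags, &nstats, &rc);
	if(rc != SCAP_SUCCESS || stats_v2 == NULL)
	{
		return;
	}
	for(uint32_t i = 0; i < nstats; i++)
	{
		if(stats_v2[i].type == METRIC_VALUE_TYPE_U64)
		{
			printf("%s = %llu\n", stats_v2[i].name, (unsigned long long)stats_v2[i].value.u64);
		}
	}
}
```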
25 changes: 23 additions & 2 deletions test/libscap/test_suites/engines/bpf/bpf.cpp
@@ -138,8 +138,8 @@ TEST(bpf, metrics_v2_check_per_CPU_stats)

ssize_t num_possible_CPUs = num_possible_cpus();

// We want to check our CPUs counters
uint32_t flags = METRICS_V2_KERNEL_COUNTERS;
// Enabling `METRICS_V2_KERNEL_COUNTERS_PER_CPU` also enables `METRICS_V2_KERNEL_COUNTERS`
uint32_t flags = METRICS_V2_KERNEL_COUNTERS_PER_CPU;
uint32_t nstats = 0;
int32_t rc = 0;
const metrics_v2* stats_v2 = scap_get_stats_v2(h, flags, &nstats, &rc);
@@ -151,9 +151,18 @@ TEST(bpf, metrics_v2_check_per_CPU_stats)
ssize_t found = 0;
char expected_name[METRIC_NAME_MAX] = "";
snprintf(expected_name, METRIC_NAME_MAX, N_EVENTS_PER_CPU_PREFIX"%ld", found);
bool check_general_kernel_counters_presence = false;

while(i < nstats)
{
// We check if `METRICS_V2_KERNEL_COUNTERS` are enabled as well
if(strncmp(stats_v2[i].name, N_EVENTS_PREFIX, sizeof(N_EVENTS_PREFIX)) == 0)
{
check_general_kernel_counters_presence = true;
i++;
continue;
}

// `sizeof(N_EVENTS_PER_CPU_PREFIX)-1` because we need to exclude the `\0`
if(strncmp(stats_v2[i].name, N_EVENTS_PER_CPU_PREFIX, sizeof(N_EVENTS_PER_CPU_PREFIX)-1) == 0)
{
@@ -176,6 +185,8 @@ TEST(bpf, metrics_v2_check_per_CPU_stats)
}
}

ASSERT_TRUE(check_general_kernel_counters_presence) << "per-CPU counters are enabled but general kernel counters are not";

// This test could fail in case of rare race conditions in which the number of available CPUs changes
// between the scap_open and the `num_possible_cpus` function. In CI we shouldn't have hot plugs so probably we
// can live with this.
@@ -220,6 +231,16 @@ TEST(bpf, metrics_v2_check_results)
FAIL() << "unable to find stat '" << stat_name << "' into the array";
}
}

// Check per-CPU stats are not enabled since we didn't provide the flag.
for(i = 0; i < nstats; i++)
{
if(strncmp(stats_v2[i].name, N_EVENTS_PER_CPU_PREFIX, sizeof(N_EVENTS_PER_CPU_PREFIX)-1) == 0)
{
FAIL() << "per-CPU counters are enabled but we didn't provide the flag!";
}
}

scap_close(h);
}

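A quick aside on the prefix matching these tests rely on: for a string literal, sizeof counts the trailing `\0`, so strncmp with the full sizeof demands an exact match, while sizeof-1 matches any name that merely starts with the prefix. That is why the general counter check uses sizeof(N_EVENTS_PREFIX) (exact match, so per-CPU names are skipped) while the per-CPU check uses sizeof(N_EVENTS_PER_CPU_PREFIX)-1. A standalone sketch (the per-CPU name below is made up for illustration):

```cpp
#include <cstdio>
#include <cstring>

#define N_EVENTS_PREFIX "n_evts"  // matches the value the engines use for this metric

int main()
{
	const char* name = "n_evts_cpu_3";  // hypothetical per-CPU metric name

	// sizeof counts the '\0' (7 bytes): byte 7 is '_' vs '\0', so no match.
	printf("exact:  %d\n", strncmp(name, N_EVENTS_PREFIX, sizeof(N_EVENTS_PREFIX)) == 0);     // 0

	// sizeof-1 compares only the 6 visible bytes: any "n_evts..." name matches.
	printf("prefix: %d\n", strncmp(name, N_EVENTS_PREFIX, sizeof(N_EVENTS_PREFIX) - 1) == 0); // 1
	return 0;
}
```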
25 changes: 23 additions & 2 deletions test/libscap/test_suites/engines/kmod/kmod.cpp
@@ -193,8 +193,8 @@ TEST(kmod, metrics_v2_check_per_CPU_stats)

ssize_t num_online_CPUs = sysconf(_SC_NPROCESSORS_ONLN);

// We want to check our CPUs counters
uint32_t flags = METRICS_V2_KERNEL_COUNTERS;
// Enabling `METRICS_V2_KERNEL_COUNTERS_PER_CPU` also enables `METRICS_V2_KERNEL_COUNTERS`

Member Author: we should definitely unify the test for the 3 engines in some way, because we are copying and pasting the same code 3 times for all the tests; in the end, the interface is the same... BTW I'm not doing it in this PR :/

Contributor: Agree!

uint32_t flags = METRICS_V2_KERNEL_COUNTERS_PER_CPU;
uint32_t nstats = 0;
int32_t rc = 0;
const metrics_v2* stats_v2 = scap_get_stats_v2(h, flags, &nstats, &rc);
@@ -206,9 +206,18 @@
ssize_t found = 0;
char expected_name[METRIC_NAME_MAX] = "";
snprintf(expected_name, METRIC_NAME_MAX, N_EVENTS_PER_DEVICE_PREFIX"%ld", found);
bool check_general_kernel_counters_presence = false;

while(i < nstats)
{
// We check if `METRICS_V2_KERNEL_COUNTERS` are enabled as well
if(strncmp(stats_v2[i].name, N_EVENTS_PREFIX, sizeof(N_EVENTS_PREFIX)) == 0)
{
check_general_kernel_counters_presence = true;
i++;
continue;
}

// `sizeof(N_EVENTS_PER_DEVICE_PREFIX)-1` because we need to exclude the `\0`
if(strncmp(stats_v2[i].name, N_EVENTS_PER_DEVICE_PREFIX, sizeof(N_EVENTS_PER_DEVICE_PREFIX)-1) == 0)
{
@@ -231,6 +240,8 @@ TEST(kmod, metrics_v2_check_per_CPU_stats)
}
}

ASSERT_TRUE(check_general_kernel_counters_presence) << "per-CPU counters are enabled but general kernel counters are not";

// This test could fail in case of rare race conditions in which the number of online CPUs changes
// between the scap_open and the `sysconf(_SC_NPROCESSORS_ONLN)` function. In CI we shouldn't have hot plugs so probably we
// can live with this.
@@ -271,6 +282,16 @@ TEST(kmod, metrics_v2_check_results)
FAIL() << "unable to find stat '" << stat_name << "' into the array";
}
}

// Check per-CPU stats are not enabled since we didn't provide the flag.
for(i = 0; i < nstats; i++)
{
if(strncmp(stats_v2[i].name, N_EVENTS_PER_DEVICE_PREFIX, sizeof(N_EVENTS_PER_DEVICE_PREFIX)-1) == 0)
{
FAIL() << "per-CPU counters are enabled but we didn't provide the flag!";
}
}

scap_close(h);
}

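Picking up the review thread above (the three engine tests duplicate the same body), here is a sketch of what a shared helper might look like, hypothetical and explicitly out of scope for this PR; it assumes gtest plus the usual libscap test includes, and takes the engine-specific prefix as a parameter:

```cpp
// Hypothetical shared helper for the bpf/kmod/modern_bpf suites.
// `h` is an open scap handle; `per_cpu_prefix` is the engine-specific
// prefix (N_EVENTS_PER_CPU_PREFIX or N_EVENTS_PER_DEVICE_PREFIX).
static void check_per_cpu_metrics(scap_t* h, const char* per_cpu_prefix)
{
	uint32_t nstats = 0;
	int32_t rc = 0;
	const metrics_v2* stats_v2 = scap_get_stats_v2(h, METRICS_V2_KERNEL_COUNTERS_PER_CPU, &nstats, &rc);
	ASSERT_EQ(rc, SCAP_SUCCESS);
	ASSERT_TRUE(stats_v2 != NULL);

	bool found_general = false;
	bool found_per_cpu = false;
	for(uint32_t i = 0; i < nstats; i++)
	{
		// Exact match for the general counter, prefix match for per-CPU entries.
		if(strncmp(stats_v2[i].name, N_EVENTS_PREFIX, sizeof(N_EVENTS_PREFIX)) == 0)
		{
			found_general = true;
		}
		else if(strncmp(stats_v2[i].name, per_cpu_prefix, strlen(per_cpu_prefix)) == 0)
		{
			found_per_cpu = true;
		}
	}
	ASSERT_TRUE(found_general) << "per-CPU counters are enabled but general kernel counters are not";
	ASSERT_TRUE(found_per_cpu) << "no per-CPU counter found for prefix " << per_cpu_prefix;
}
```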
25 changes: 23 additions & 2 deletions test/libscap/test_suites/engines/modern_bpf/modern_bpf.cpp
@@ -257,8 +257,8 @@ TEST(modern_bpf, metrics_v2_check_per_CPU_stats)

ssize_t num_possible_CPUs = num_possible_cpus();

// We want to check our CPUs counters
uint32_t flags = METRICS_V2_KERNEL_COUNTERS;
// Enabling `METRICS_V2_KERNEL_COUNTERS_PER_CPU` also enables `METRICS_V2_KERNEL_COUNTERS`
uint32_t flags = METRICS_V2_KERNEL_COUNTERS_PER_CPU;
uint32_t nstats = 0;
int32_t rc = 0;
const metrics_v2* stats_v2 = scap_get_stats_v2(h, flags, &nstats, &rc);
@@ -270,9 +270,18 @@ TEST(modern_bpf, metrics_v2_check_per_CPU_stats)
ssize_t found = 0;
char expected_name[METRIC_NAME_MAX] = "";
snprintf(expected_name, METRIC_NAME_MAX, N_EVENTS_PER_CPU_PREFIX"%ld", found);
bool check_general_kernel_counters_presence = false;

while(i < nstats)
{
// We check if `METRICS_V2_KERNEL_COUNTERS` are enabled as well
if(strncmp(stats_v2[i].name, N_EVENTS_PREFIX, sizeof(N_EVENTS_PREFIX)) == 0)
{
check_general_kernel_counters_presence = true;
i++;
continue;
}

// `sizeof(N_EVENTS_PER_CPU_PREFIX)-1` because we need to exclude the `\0`
if(strncmp(stats_v2[i].name, N_EVENTS_PER_CPU_PREFIX, sizeof(N_EVENTS_PER_CPU_PREFIX)-1) == 0)
{
@@ -295,6 +304,8 @@ TEST(modern_bpf, metrics_v2_check_per_CPU_stats)
}
}

ASSERT_TRUE(check_general_kernel_counters_presence) << "per-CPU counters are enabled but general kernel counters are not";

// This test could fail in case of rare race conditions in which the number of available CPUs changes
// between the scap_open and the `num_possible_cpus` function. In CI we shouldn't have hot plugs so probably we
// can live with this.
@@ -340,6 +351,16 @@ TEST(modern_bpf, metrics_v2_check_results)
FAIL() << "unable to find stat '" << stat_name << "' into the array";
}
}

// Check per-CPU stats are not enabled since we didn't provide the flag.
for(i = 0; i < nstats; i++)
{
if(strncmp(stats_v2[i].name, N_EVENTS_PER_CPU_PREFIX, sizeof(N_EVENTS_PER_CPU_PREFIX)-1) == 0)
{
FAIL() << "per-CPU counters are enabled but we didn't provide the flag!";
}
}

scap_close(h);
}

60 changes: 32 additions & 28 deletions userspace/libpman/src/stats.c
@@ -53,7 +53,7 @@ typedef enum modern_bpf_libbpf_stats
} modern_bpf_libbpf_stats;

const char *const modern_bpf_kernel_counters_stats_names[] = {
[MODERN_BPF_N_EVTS] = "n_evts",
[MODERN_BPF_N_EVTS] = N_EVENTS_PREFIX,
[MODERN_BPF_N_DROPS_BUFFER_TOTAL] = "n_drops_buffer_total",
[MODERN_BPF_N_DROPS_BUFFER_CLONE_FORK_ENTER] = "n_drops_buffer_clone_fork_enter",
[MODERN_BPF_N_DROPS_BUFFER_CLONE_FORK_EXIT] = "n_drops_buffer_clone_fork_exit",
@@ -140,10 +140,10 @@ int pman_get_scap_stats(struct scap_stats *stats)
return errno;
}

static void set_u64_monotonic_kernel_counter(uint32_t pos, uint64_t val)
static void set_u64_monotonic_kernel_counter(uint32_t pos, uint64_t val, uint32_t metric_flag)
{
g_state.stats[pos].type = METRIC_VALUE_TYPE_U64;
g_state.stats[pos].flags = METRICS_V2_KERNEL_COUNTERS;
g_state.stats[pos].flags = metric_flag;
g_state.stats[pos].unit = METRIC_VALUE_UNIT_COUNT;
g_state.stats[pos].metric_type = METRIC_VALUE_METRIC_TYPE_MONOTONIC;
g_state.stats[pos].value.u64 = val;
@@ -166,11 +166,15 @@ struct metrics_v2 *pman_get_metrics_v2(uint32_t flags, uint32_t *nstats, int32_t
}
}

// At the moment for each available CPU we want:
// - the number of events.
// - the number of drops.
uint32_t per_cpu_stats = g_state.n_possible_cpus* 2;

uint32_t per_cpu_stats = 0;
if(flags & METRICS_V2_KERNEL_COUNTERS_PER_CPU)
{
// At the moment for each available CPU we want:
// - the number of events.
// - the number of drops.
per_cpu_stats = g_state.n_possible_cpus* 2;
}

g_state.nstats = MODERN_BPF_MAX_KERNEL_COUNTERS_STATS + per_cpu_stats + (nprogs_attached * MODERN_BPF_MAX_LIBBPF_STATS);
g_state.stats = (metrics_v2 *)calloc(g_state.nstats, sizeof(metrics_v2));
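To make the sizing concrete with made-up numbers: on a box with 16 possible CPUs and 8 attached programs, passing METRICS_V2_KERNEL_COUNTERS_PER_CPU gives per_cpu_stats = 16 * 2 = 32 (one events entry plus one drops entry per CPU), so nstats = MODERN_BPF_MAX_KERNEL_COUNTERS_STATS + 32 + 8 * MODERN_BPF_MAX_LIBBPF_STATS; without the flag, those 32 per-CPU slots are simply never allocated.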
if(!g_state.stats)
Expand All @@ -197,7 +201,7 @@ struct metrics_v2 *pman_get_metrics_v2(uint32_t flags, uint32_t *nstats, int32_t

for(uint32_t stat = 0; stat < MODERN_BPF_MAX_KERNEL_COUNTERS_STATS; stat++)
{
set_u64_monotonic_kernel_counter(stat, 0);
set_u64_monotonic_kernel_counter(stat, 0, METRICS_V2_KERNEL_COUNTERS);
strlcpy(g_state.stats[stat].name, (char*)modern_bpf_kernel_counters_stats_names[stat], METRIC_NAME_MAX);
}

Expand Down Expand Up @@ -234,15 +238,18 @@ struct metrics_v2 *pman_get_metrics_v2(uint32_t flags, uint32_t *nstats, int32_t
g_state.stats[MODERN_BPF_N_DROPS_SCRATCH_MAP].value.u64 += cnt_map.n_drops_max_event_size;
g_state.stats[MODERN_BPF_N_DROPS].value.u64 += (cnt_map.n_drops_buffer + cnt_map.n_drops_max_event_size);

// We set the num events for that CPU.
set_u64_monotonic_kernel_counter(pos, cnt_map.n_evts);
snprintf(g_state.stats[pos].name, METRIC_NAME_MAX, N_EVENTS_PER_CPU_PREFIX"%d", index);
pos++;

// We set the drops for that CPU.
set_u64_monotonic_kernel_counter(pos, cnt_map.n_drops_buffer + cnt_map.n_drops_max_event_size);
snprintf(g_state.stats[pos].name, METRIC_NAME_MAX, N_DROPS_PER_CPU_PREFIX"%d", index);
pos++;
if((flags & METRICS_V2_KERNEL_COUNTERS_PER_CPU))
{
// We set the num events for that CPU.
set_u64_monotonic_kernel_counter(pos, cnt_map.n_evts, METRICS_V2_KERNEL_COUNTERS_PER_CPU);
snprintf(g_state.stats[pos].name, METRIC_NAME_MAX, N_EVENTS_PER_CPU_PREFIX"%d", index);
pos++;

// We set the drops for that CPU.
set_u64_monotonic_kernel_counter(pos, cnt_map.n_drops_buffer + cnt_map.n_drops_max_event_size, METRICS_V2_KERNEL_COUNTERS_PER_CPU);
snprintf(g_state.stats[pos].name, METRIC_NAME_MAX, N_DROPS_PER_CPU_PREFIX"%d", index);
pos++;
}
}
offset = pos;
}
@@ -285,25 +292,22 @@ struct metrics_v2 *pman_get_metrics_v2(uint32_t flags, uint32_t *nstats, int32_t
g_state.stats[offset].type = METRIC_VALUE_TYPE_U64;
g_state.stats[offset].flags = METRICS_V2_LIBBPF_STATS;
strlcpy(g_state.stats[offset].name, info.name, METRIC_NAME_MAX);
strlcat(g_state.stats[offset].name, modern_bpf_libbpf_stats_names[stat], sizeof(g_state.stats[offset].name));
switch(stat)
{
case RUN_CNT:
strlcat(g_state.stats[offset].name, modern_bpf_libbpf_stats_names[RUN_CNT], sizeof(g_state.stats[offset].name));
g_state.stats[stat].flags = METRICS_V2_KERNEL_COUNTERS;
g_state.stats[stat].unit = METRIC_VALUE_UNIT_COUNT;
g_state.stats[stat].metric_type = METRIC_VALUE_METRIC_TYPE_MONOTONIC;
g_state.stats[offset].unit = METRIC_VALUE_UNIT_COUNT;
g_state.stats[offset].metric_type = METRIC_VALUE_METRIC_TYPE_MONOTONIC;
g_state.stats[offset].value.u64 = info.run_cnt;
break;
case RUN_TIME_NS:
strlcat(g_state.stats[offset].name, modern_bpf_libbpf_stats_names[RUN_TIME_NS], sizeof(g_state.stats[offset].name));
g_state.stats[stat].unit = METRIC_VALUE_UNIT_TIME_NS_COUNT;
g_state.stats[stat].metric_type = METRIC_VALUE_METRIC_TYPE_MONOTONIC;
g_state.stats[offset].unit = METRIC_VALUE_UNIT_TIME_NS_COUNT;
g_state.stats[offset].metric_type = METRIC_VALUE_METRIC_TYPE_MONOTONIC;
g_state.stats[offset].value.u64 = info.run_time_ns;
break;
case AVG_TIME_NS:
strlcat(g_state.stats[offset].name, modern_bpf_libbpf_stats_names[AVG_TIME_NS], sizeof(g_state.stats[offset].name));
g_state.stats[stat].unit = METRIC_VALUE_UNIT_TIME_NS;
g_state.stats[stat].metric_type = METRIC_VALUE_METRIC_TYPE_NON_MONOTONIC_CURRENT;
g_state.stats[offset].unit = METRIC_VALUE_UNIT_TIME_NS;
g_state.stats[offset].metric_type = METRIC_VALUE_METRIC_TYPE_NON_MONOTONIC_CURRENT;
g_state.stats[offset].value.u64 = 0;
if(info.run_cnt > 0)
{
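Worth calling out in the hunk above: besides hoisting the strlcat out of the switch, the new code fixes an indexing slip. The old branches wrote flags/unit/metric_type through g_state.stats[stat], and since stat is the libbpf-stat enum index (0, 1, 2), those writes landed on the first three global kernel-counter slots rather than on the libbpf entry being filled at offset. Every field now goes through g_state.stats[offset].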
47 changes: 26 additions & 21 deletions userspace/libscap/engine/bpf/scap_bpf.c
@@ -50,7 +50,7 @@ limitations under the License.
#include <libscap/strerror.h>

static const char * const bpf_kernel_counters_stats_names[] = {
[BPF_N_EVTS] = "n_evts",
[BPF_N_EVTS] = N_EVENTS_PREFIX,
[BPF_N_DROPS_BUFFER_TOTAL] = "n_drops_buffer_total",
[BPF_N_DROPS_BUFFER_CLONE_FORK_ENTER] = "n_drops_buffer_clone_fork_enter",
[BPF_N_DROPS_BUFFER_CLONE_FORK_EXIT] = "n_drops_buffer_clone_fork_exit",
@@ -1688,10 +1688,10 @@ int32_t scap_bpf_get_stats(struct scap_engine_handle engine, scap_stats* stats)
return SCAP_SUCCESS;
}

static void set_u64_monotonic_kernel_counter(struct metrics_v2* m, uint64_t val)
static void set_u64_monotonic_kernel_counter(struct metrics_v2* m, uint64_t val, uint32_t metric_flag)
{
m->type = METRIC_VALUE_TYPE_U64;
m->flags = METRICS_V2_KERNEL_COUNTERS;
m->flags = metric_flag;
m->unit = METRIC_VALUE_UNIT_COUNT;
m->metric_type = METRIC_VALUE_METRIC_TYPE_MONOTONIC;
m->value.u64 = val;
@@ -1722,11 +1722,15 @@ const struct metrics_v2* scap_bpf_get_stats_v2(struct scap_engine_handle engine,
}
}

// At the moment for each available CPU we want:
// - the number of events.
// - the number of drops.
uint32_t per_cpu_stats = handle->m_ncpus* 2;

uint32_t per_cpu_stats = 0;
if(flags & METRICS_V2_KERNEL_COUNTERS_PER_CPU)
{
// At the moment for each available CPU we want:
// - the number of events.
// - the number of drops.
per_cpu_stats = handle->m_ncpus* 2;
}

handle->m_nstats = BPF_MAX_KERNEL_COUNTERS_STATS + per_cpu_stats + (nprogs_attached * BPF_MAX_LIBBPF_STATS);
handle->m_stats = (metrics_v2*)calloc(handle->m_nstats, sizeof(metrics_v2));
if(!handle->m_stats)
@@ -1746,7 +1750,7 @@
{
for(uint32_t stat = 0; stat < BPF_MAX_KERNEL_COUNTERS_STATS; stat++)
{
set_u64_monotonic_kernel_counter(&(stats[stat]), 0);
set_u64_monotonic_kernel_counter(&(stats[stat]), 0, METRICS_V2_KERNEL_COUNTERS);
strlcpy(stats[stat].name, (char*)bpf_kernel_counters_stats_names[stat], METRIC_NAME_MAX);
}

@@ -1783,15 +1787,18 @@ const struct metrics_v2* scap_bpf_get_stats_v2(struct scap_engine_handle engine,
v.n_drops_pf + \
v.n_drops_bug;

// We set the num events for that CPU.
set_u64_monotonic_kernel_counter(&(stats[pos]), v.n_evts);
snprintf(stats[pos].name, METRIC_NAME_MAX, N_EVENTS_PER_CPU_PREFIX"%d", cpu);
pos++;

// We set the drops for that CPU.
set_u64_monotonic_kernel_counter(&(stats[pos]), v.n_drops_buffer + v.n_drops_scratch_map + v.n_drops_pf + v.n_drops_bug);
snprintf(stats[pos].name, METRIC_NAME_MAX, N_DROPS_PER_CPU_PREFIX"%d", cpu);
pos++;
if((flags & METRICS_V2_KERNEL_COUNTERS_PER_CPU))
{
// We set the num events for that CPU.
set_u64_monotonic_kernel_counter(&(stats[pos]), v.n_evts, METRICS_V2_KERNEL_COUNTERS_PER_CPU);
snprintf(stats[pos].name, METRIC_NAME_MAX, N_EVENTS_PER_CPU_PREFIX"%d", cpu);
pos++;

// We set the drops for that CPU.
set_u64_monotonic_kernel_counter(&(stats[pos]), v.n_drops_buffer + v.n_drops_scratch_map + v.n_drops_pf + v.n_drops_bug, METRICS_V2_KERNEL_COUNTERS_PER_CPU);
snprintf(stats[pos].name, METRIC_NAME_MAX, N_DROPS_PER_CPU_PREFIX"%d", cpu);
pos++;
}
}
offset = pos;
}
@@ -1849,22 +1856,20 @@ const struct metrics_v2* scap_bpf_get_stats_v2(struct scap_engine_handle engine,
{
strlcpy(stats[offset].name, info.name, METRIC_NAME_MAX);
}
strlcat(stats[offset].name, bpf_libbpf_stats_names[stat], sizeof(stats[offset].name));

Contributor: [nit] re the comment here https://github.com/falcosecurity/libs/pull/2031/files#diff-12833abd4271488260dae0ba178c6ad3f0bc63642f793a20b06ab4eb10d02cf9L1839: libbpf stats were introduced with kernel 5.1, so folks with lower kernels can't reach this code since we check for libbpf stats being enabled.

Member Author: You are right, but since many bpf features are usually backported I'm not so confident in removing it... I found this commit 957ab1c; unfortunately, I don't remember why I added it, but I bet I had found an issue on some old machines...

Contributor: Fair, yes, the backports.

switch(stat)
{
case RUN_CNT:
strlcat(stats[offset].name, bpf_libbpf_stats_names[RUN_CNT], sizeof(stats[offset].name));

Contributor: Follow up here: shouldn't this stay here, because we concat the name according to the switch statement? Also, we seem to use stat for the loop and here for the switch statement. Perhaps let's use separate wording for clarity?

Member Author: The idea is to call strlcat just once with the generic variable stat

    strlcat(stats[offset].name, bpf_libbpf_stats_names[stat], sizeof(stats[offset].name));

instead of repeating the same line three times with an explicit enum value

    strlcat(stats[offset].name, bpf_libbpf_stats_names[RUN_CNT], sizeof(stats[offset].name));
    strlcat(stats[offset].name, bpf_libbpf_stats_names[RUN_TIME_NS], sizeof(stats[offset].name));
    strlcat(stats[offset].name, bpf_libbpf_stats_names[AVG_TIME_NS], sizeof(stats[offset].name));

> Also, we seem to use stat for the loop and here for the switch statement. Perhaps let's use separate wording for clarity?

I am not sure I got this; we are using stat (the index of the array) in the switch case to select the right metric.

Contributor: Looked again and yes, stat is the index into the bpf_libbpf_stats... I suppose a big confusion with stat and offset. Thanks for clarifying and also working on this.

stats[offset].value.u64 = info.run_cnt;
stats[offset].unit = METRIC_VALUE_UNIT_COUNT;
stats[offset].metric_type = METRIC_VALUE_METRIC_TYPE_MONOTONIC;
break;
case RUN_TIME_NS:
strlcat(stats[offset].name, bpf_libbpf_stats_names[RUN_TIME_NS], sizeof(stats[offset].name));
stats[offset].value.u64 = info.run_time_ns;
stats[offset].unit = METRIC_VALUE_UNIT_TIME_NS_COUNT;
stats[offset].metric_type = METRIC_VALUE_METRIC_TYPE_MONOTONIC;
break;
case AVG_TIME_NS:
strlcat(stats[offset].name, bpf_libbpf_stats_names[AVG_TIME_NS], sizeof(stats[offset].name));
stats[offset].value.u64 = 0;
stats[offset].unit = METRIC_VALUE_UNIT_TIME_NS;
stats[offset].metric_type = METRIC_VALUE_METRIC_TYPE_NON_MONOTONIC_CURRENT;