lib: fluent-otel-proto: upgrade to opentelemetry-proto v1.9.0 #11433
Conversation
📝 Walkthrough

This PR introduces OpenTelemetry Profiles v1development dictionary-based encoding and decoding support, updates multiple GitHub Actions workflows, bumps library versions, and extends protobuf-c message descriptors. The changes span the cmetrics, cprofiles, ctraces, and fluent-otel-proto libraries and the opentelemetry/stdout plugins, with significant additions to profiles dictionary handling, text-encoding render modes, and protobuf schema support.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Force-pushed: eca7021 → 5e9ccca, 5e9ccca → cf86fca, cf86fca → 9dc2747, 77dc901 → 9bce6c9

Signed-off-by: Eduardo Silva <eduardo@chronosphere.io>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 9bce6c9fc7
```diff
 /* Check if the payload is gRPC compressed */
-if (grpc_request && strcasecmp(request->content_type, "application/grpc") == 0) {
+if (grpc_request && is_grpc_content_type(request->content_type) == FLB_TRUE) {
```
Normalize gRPC content type before log/trace decoding
The new `is_grpc_content_type()` check means requests with `application/grpc+proto` (or `application/grpc+json`) now enter the gRPC framing path, but the raw `request->content_type` is still forwarded to `opentelemetry_process_traces`/`opentelemetry_process_logs`, which only accept exactly `application/grpc` or `application/*protobuf` and otherwise return "Unsupported content type" (see plugins/in_opentelemetry/opentelemetry_traces.c lines 168-175 and opentelemetry_logs.c lines 641-648). In practice, standard gRPC clients that send `application/grpc+proto` will still fail after the framing is stripped. Consider normalizing the content type (e.g., treat any `application/grpc*` as `application/grpc`) or updating the log/trace handlers to accept the same variants.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
plugins/out_stdout/stdout.c (1)

193-197: ⚠️ Potential issue | 🟡 Minor

Pre-existing leak: `profiles_context` not destroyed on encode failure.

When `cprof_encode_text_create` fails, the `continue` skips `cprof_decode_msgpack_destroy(profiles_context)` at line 200, leaking the decoded context.

Proposed fix:

```diff
 if (ret != CPROF_ENCODE_TEXT_SUCCESS) {
     flb_plg_debug(ins, "cprofiles text encoder returned : %d", ret);
+    cprof_decode_msgpack_destroy(profiles_context);
     continue;
 }
```

lib/cprofiles/src/cprof_encode_text.c (1)
1167-1176: ⚠️ Potential issue | 🔴 Critical

Bug: inverted NULL check causes allocation failures to be silently swallowed.

Line 1171 checks `sds_result != NULL` (success) and returns `result`, but when `sds_result == NULL` (allocation failure), the code falls through and returns `CPROF_ENCODE_TEXT_SUCCESS`. Every other analogous check in this file uses `== NULL` to detect failure. This means an OOM during suffix encoding goes unreported.

🐛 Proposed fix:

```diff
-    if (sds_result != NULL) {
-        return result;
+    if (sds_result == NULL) {
+        return CPROF_ENCODE_TEXT_ALLOCATION_ERROR;
     }

     return CPROF_ENCODE_TEXT_SUCCESS;
```
🤖 Fix all issues with AI agents
In `@lib/cprofiles/src/cprof_decode_opentelemetry.c`:
- Around line 113-171: The function decode_profile_sample_entry currently
accepts out-of-range stack, link and attribute indices when a dictionary is
provided; update it to validate input_sample->stack_index, each
input_sample->attribute_indices[i], and input_sample->link_index against the
corresponding dictionary tables and counts before using them: when dictionary !=
NULL ensure input_sample->stack_index is within 0..dictionary->n_stack_table-1
(return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR if not), when
iterating attribute indices check each attribute_index is >=0 and <
dictionary->n_attribute_table before calling cprof_sample_add_attribute, and if
input_sample->link_index >=0 validate it is < dictionary->n_stack_table (or the
appropriate dictionary table for link indices) before assigning sample->link;
keep using cprof_sample_add_location_index / cprof_sample_add_attribute /
cprof_sample_add_timestamp error returns as-is.
In `@lib/cprofiles/src/cprof_encode_opentelemetry.c`:
- Around line 983-1011: The realloc call in dict_add_string currently assigns
directly to dict->string_table which would lose the original pointer on failure;
change it to use a temporary pointer (e.g., char **tmp =
realloc(dict->string_table, (dict->n_string_table + 1) * sizeof(char *))); check
tmp for NULL, and on failure call cfl_sds_destroy(dup) and return -1 without
modifying dict->string_table or dict->n_string_table; on success assign
dict->string_table = tmp, set dict->string_table[dict->n_string_table] = dup,
and then increment dict->n_string_table (keep the existing post-increment
behavior or increment explicitly). Ensure dup is destroyed on any early error
paths.
- Around line 1674-1680: The fail cleanup currently frees states 0..state_idx-1
but misses freeing the partially-initialized current state at states[state_idx];
change the cleanup to free up to and including the current state (e.g., adjust
the loop in the fail block that calls free_profile_encoding_state to iterate i =
0; i <= state_idx; i++) so the partially-initialized &states[state_idx] is
released before free(states) and destroy_profiles_dictionary(dict).
- Around line 1544-1631: The attribute table is not remapped into the
dictionary: add int32_t *attribute_map and size_t attribute_map_count to struct
profile_encoding_state, allocate and populate attribute_map in
build_profiles_dictionary by iterating profile->attribute_table and adding
entries into dict->attribute_table (use the same pattern as
mapping_map/function_map/location_map and the dict_add_* helpers), and update
pack_cprof_sample to remap sample attribute indices through
encoding_state->attribute_map (writing remapped values into
otlp_sample->attribute_indices) so samples reference the dictionary's attribute
indices instead of raw profile indices.
In `@lib/cprofiles/src/cprof_encode_text.c`:
- Around line 2466-2481: Remove the duplicate unreachable success check after
calling decrement_indentation_level: keep the first validation of result
immediately after the call (the if (result != CPROF_ENCODE_TEXT_SUCCESS) return
result; that follows result = decrement_indentation_level(context);) and delete
the second identical if-block that appears later (the redundant check of result
!= CPROF_ENCODE_TEXT_SUCCESS). This removes dead code while preserving the
intended error handling around decrement_indentation_level in
cprof_encode_text.c.
In `@lib/cprofiles/tests/opentelemetry_transcoder.c`:
- Around line 98-106: Check the return value of cprof_sample_type_str_create and
handle NULL by cleaning up (call cprof_profile_destroy,
cprof_scope_profiles_destroy, cprof_resource_profiles_destroy, cprof_destroy)
and returning NULL; also change the error sentinel check for
cprof_profile_string_add from id == 0 to id == (size_t)-1 (or SIZE_MAX) so
allocation failures are detected and trigger the same cleanup path. Ensure both
failures follow the same cleanup sequence and return NULL.
- Around line 82-86: The error path frees scope_profiles while it is still
linked into resource_profiles->scope_profiles (created by
cprof_scope_profiles_create), causing use-after-free when
cprof_resource_profiles_destroy later iterates the list; to fix, after creating
scope_profiles in create_cprof_with_dictionary_tables, either remove it from the
parent list (call the appropriate list-remove operation on
resource_profiles->scope_profiles) before calling cprof_scope_profiles_destroy,
or stop calling cprof_scope_profiles_destroy in those error paths and let
cprof_resource_profiles_destroy perform the cascade cleanup for scope_profiles;
update all error exits after cprof_scope_profiles_create (and similar spots
noted) to follow one consistent approach.
In `@lib/fluent-otel-proto/README.md`:
- Line 114: Fix the typo and truncated wording in the FLUENT_PROTO_PROFILES
table entry: change "regeneartion" to "regeneration" and replace "C inter" with
"C interfaces" so the description reads something like "Include the regeneration
of C interfaces for `profiles.proto`"; update the FLUENT_PROTO_PROFILES cell in
the README table accordingly (look for the table row containing the
FLUENT_PROTO_PROFILES header to locate the text).
🧹 Nitpick comments (4)
lib/fluent-otel-proto/README.md (1)
72-72: Add language specifier to fenced code block.

Static analysis (markdownlint MD040) flags this code block as missing a language specifier.

Proposed fix:

~~~diff
-```
+```bash
 git clone https://github.com/fluent/protobuf-c
~~~

lib/cprofiles/src/cprof_encode_text.c (2)
577-601: `append_escaped_string`: the `*buf` update for the `cfl_sds_cat` path is correct but asymmetric and fragile.

For the non-quote path (line 591), `cfl_sds_printf(buf, …)` updates `*buf` through the pointer. For the quote path (line 588), `cfl_sds_cat(*buf, …)` does not update `*buf`; the deferred assignment at line 597 handles that. This works, but the two code paths use fundamentally different update mechanisms, making this easy to break during future edits. Consider using `cfl_sds_cat` consistently and updating `*buf` uniformly for both branches.

♻️ Proposed simplification:

```diff
 for (p = str; *p != '\0'; p++) {
     if (*p == '"') {
         result = cfl_sds_cat(*buf, "\\\"", 2);
     }
     else {
-        result = cfl_sds_printf(buf, "%c", *p);
+        result = cfl_sds_cat(*buf, p, 1);
     }

     if (result == NULL) {
         return CPROF_ENCODE_TEXT_ALLOCATION_ERROR;
     }

-    if (*p == '"') {
-        *buf = result;
-    }
+    *buf = result;
 }
```
477-575: O(n) linear scans in resolve helpers: acceptable for debug rendering but worth noting.

`resolve_mapping_index`, `resolve_location_index`, `resolve_function_index`, and `resolve_attribute_index` all iterate the linked list from the head on every call. In the resolved render mode, these are called per-sample/per-location, making the overall complexity O(n·m). For large profiles with thousands of samples/locations, this could be noticeable. Fine for debug/diagnostic output, but keep it in mind if this path is ever used on a hot path.

lib/cprofiles/tests/text_encoder.c (1)
652-653: Consider removing or guarding the `printf`/`fflush` debug output.

Unconditionally printing the entire encoded result to stdout can be noisy in CI logs, especially for large profiles. Consider wrapping this behind an environment variable or an `#ifdef DEBUG` guard, or removing it entirely since `TEST_CHECK(cfl_sds_len(text_encoder_result) > 1)` already validates the output.
```diff
 static int decode_profile_sample_entry(struct cprof_sample *sample,
-                                       Opentelemetry__Proto__Profiles__V1development__Sample *input_sample)
+                                       Opentelemetry__Proto__Profiles__V1development__Sample *input_sample,
+                                       Opentelemetry__Proto__Profiles__V1development__ProfilesDictionary *dictionary)
 {
+    int32_t location_index;
     int result;
     size_t index;
+    Opentelemetry__Proto__Profiles__V1development__Stack *stack;

-    for (index = 0 ;
-         index < input_sample->n_location_index;
-         index++) {
-        result = cprof_sample_add_location_index(sample, input_sample->location_index[index]);
-        if (result != 0) {
-            return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
+    /* Resolve stack_index to location indices from dictionary.stack_table */
+    if (dictionary != NULL && dictionary->stack_table != NULL &&
+        input_sample->stack_index >= 0 &&
+        (size_t) input_sample->stack_index < dictionary->n_stack_table) {
+        stack = dictionary->stack_table[input_sample->stack_index];
+        if (stack != NULL && stack->location_indices != NULL) {
+            for (index = 0; index < stack->n_location_indices; index++) {
+                location_index = stack->location_indices[index];
+
+                if (location_index < 0 ||
+                    (dictionary->location_table != NULL &&
+                     (size_t) location_index >= dictionary->n_location_table)) {
+                    return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
+                }
+
+                result = cprof_sample_add_location_index(sample,
+                                                         (uint64_t) location_index);
+                if (result != 0) {
+                    return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
+                }
+            }
         }
     }

-    for (index = 0 ;
-         index < input_sample->n_value;
-         index++) {
-        result = cprof_sample_add_value(sample, input_sample->value[index]);
+    for (index = 0; index < input_sample->n_values; index++) {
+        result = cprof_sample_add_value(sample, input_sample->values[index]);
         if (result != 0) {
             return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
         }
     }

-    for (index = 0 ;
-         index < input_sample->n_attributes;
-         index++) {
-        result = cprof_sample_add_attribute(sample, input_sample->attributes[index]);
+    for (index = 0; index < input_sample->n_attribute_indices; index++) {
+        result = cprof_sample_add_attribute(sample,
+                                            (uint64_t) input_sample->attribute_indices[index]);
         if (result != 0) {
             return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
         }
     }

-    for (index = 0 ;
-         index < input_sample->n_timestamps_unix_nano;
-         index++) {
-        result = cprof_sample_add_timestamp(sample, input_sample->timestamps_unix_nano[index]);
+    for (index = 0; index < input_sample->n_timestamps_unix_nano; index++) {
+        result = cprof_sample_add_timestamp(sample,
+                                            input_sample->timestamps_unix_nano[index]);
         if (result != 0) {
             return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
         }
     }

     sample->locations_start_index = input_sample->locations_start_index;
     sample->locations_length = input_sample->locations_length;
-    sample->link = input_sample->link;
+    sample->link = (uint64_t)(input_sample->link_index >= 0 ? input_sample->link_index : 0);

     return CPROF_DECODE_OPENTELEMETRY_SUCCESS;
```
Validate stack/link/attribute indices against dictionary bounds.
Out-of-range indices are currently accepted silently, which can corrupt downstream lookups.
🛡️ Suggested fix
```diff
-    if (dictionary != NULL && dictionary->stack_table != NULL &&
-        input_sample->stack_index >= 0 &&
-        (size_t)input_sample->stack_index < dictionary->n_stack_table) {
-        stack = dictionary->stack_table[input_sample->stack_index];
+    if (dictionary != NULL && dictionary->stack_table != NULL) {
+        if (input_sample->stack_index < 0 ||
+            (size_t)input_sample->stack_index >= dictionary->n_stack_table) {
+            return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
+        }
+        stack = dictionary->stack_table[input_sample->stack_index];
         if (stack != NULL && stack->location_indices != NULL) {
             for (index = 0; index < stack->n_location_indices; index++) {
                 location_index = stack->location_indices[index];
@@
     for (index = 0; index < input_sample->n_attribute_indices; index++) {
+        if (dictionary != NULL && dictionary->attribute_table != NULL &&
+            (size_t)input_sample->attribute_indices[index] >= dictionary->n_attribute_table) {
+            return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
+        }
         result = cprof_sample_add_attribute(sample,
                                             (uint64_t)input_sample->attribute_indices[index]);
         if (result != 0) {
             return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
         }
     }
@@
-    sample->link = (uint64_t)(input_sample->link_index >= 0 ? input_sample->link_index : 0);
+    if (dictionary != NULL && dictionary->link_table != NULL) {
+        if (input_sample->link_index < 0 ||
+            (size_t)input_sample->link_index >= dictionary->n_link_table) {
+            return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
+        }
+    }
+    sample->link = (uint64_t)(input_sample->link_index >= 0 ? input_sample->link_index : 0);
```
+ sample->link = (uint64_t)(input_sample->link_index >= 0 ? input_sample->link_index : 0);🤖 Prompt for AI Agents
In `@lib/cprofiles/src/cprof_decode_opentelemetry.c` around lines 113 - 171, The
function decode_profile_sample_entry currently accepts out-of-range stack, link
and attribute indices when a dictionary is provided; update it to validate
input_sample->stack_index, each input_sample->attribute_indices[i], and
input_sample->link_index against the corresponding dictionary tables and counts
before using them: when dictionary != NULL ensure input_sample->stack_index is
within 0..dictionary->n_stack_table-1 (return
CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR if not), when iterating
attribute indices check each attribute_index is >=0 and <
dictionary->n_attribute_table before calling cprof_sample_add_attribute, and if
input_sample->link_index >=0 validate it is < dictionary->n_stack_table (or the
appropriate dictionary table for link indices) before assigning sample->link;
keep using cprof_sample_add_location_index / cprof_sample_add_attribute /
cprof_sample_add_timestamp error returns as-is.
```c
/* Find or add string in dictionary; returns dict string index. */
static int32_t dict_add_string(
    Opentelemetry__Proto__Profiles__V1development__ProfilesDictionary *dict,
    const char *str)
{
    size_t i;
    char *dup;

    if (str == NULL) {
        str = "";
    }
    for (i = 0; i < dict->n_string_table; i++) {
        if (dict->string_table[i] != NULL && strcmp(dict->string_table[i], str) == 0) {
            return (int32_t) i;
        }
    }
    dup = cfl_sds_create(str);
    if (dup == NULL) {
        return -1;
    }
    dict->string_table = realloc(dict->string_table,
                                 (dict->n_string_table + 1) * sizeof(char *));
    if (dict->string_table == NULL) {
        cfl_sds_destroy(dup);
        return -1;
    }
    dict->string_table[dict->n_string_table] = dup;
    return (int32_t) dict->n_string_table++;
}
```
Avoid losing the existing string_table on realloc failure.
Assigning the `realloc` result directly to `dict->string_table` drops the old pointer if allocation fails, leaking the existing table and leaving the struct pointing at NULL.
🔧 Suggested fix:

```diff
-    dict->string_table = realloc(dict->string_table,
-                                 (dict->n_string_table + 1) * sizeof(char *));
-    if (dict->string_table == NULL) {
+    char **tmp = realloc(dict->string_table,
+                         (dict->n_string_table + 1) * sizeof(char *));
+    if (tmp == NULL) {
         cfl_sds_destroy(dup);
         return -1;
     }
-    dict->string_table[dict->n_string_table] = dup;
+    dict->string_table = tmp;
+    dict->string_table[dict->n_string_table] = dup;
```
In `@lib/cprofiles/src/cprof_encode_opentelemetry.c` around lines 983 - 1011, The
realloc call in dict_add_string currently assigns directly to dict->string_table
which would lose the original pointer on failure; change it to use a temporary
pointer (e.g., char **tmp = realloc(dict->string_table, (dict->n_string_table +
1) * sizeof(char *))); check tmp for NULL, and on failure call
cfl_sds_destroy(dup) and return -1 without modifying dict->string_table or
dict->n_string_table; on success assign dict->string_table = tmp, set
dict->string_table[dict->n_string_table] = dup, and then increment
dict->n_string_table (keep the existing post-increment behavior or increment
explicitly). Ensure dup is destroyed on any early error paths.
```c
        for (i = 0; i < st->string_map_count; i++) {
            si = dict_add_string(dict,
                                 profile->string_table[i] ? profile->string_table[i] : "");
            if (si < 0) {
                goto fail;
            }
            st->string_map[i] = si;
        }
    }

    /* stack_index_by_sample: for each sample, resolve location_index[] to dict stack */
    st->sample_count = cfl_list_size(&profile->samples);
    if (st->sample_count > 0) {
        st->stack_index_by_sample = malloc(st->sample_count * sizeof(int32_t));
        if (st->stack_index_by_sample == NULL) {
            goto fail;
        }
    }

    /* Build mapping_table, function_table, location_table, link_table entries and
     * per-profile maps (profile index -> dict index) so stacks reference real locations. */
    st->location_map_count = cfl_list_size(&profile->locations);
    st->mapping_map_count = cfl_list_size(&profile->mappings);
    st->function_map_count = cfl_list_size(&profile->functions);
    st->link_map_count = cfl_list_size(&profile->link_table);

    if (st->mapping_map_count > 0) {
        st->mapping_map = malloc(st->mapping_map_count * sizeof(int32_t));
        if (st->mapping_map == NULL) {
            goto fail;
        }
        i = 0;
        cfl_list_foreach(map_iter, &profile->mappings) {
            cprof_mapping = cfl_list_entry(map_iter, struct cprof_mapping, _head);
            si = dict_add_mapping(dict, cprof_mapping, st->string_map, st->string_map_count);
            if (si < 0) {
                goto fail;
            }
            st->mapping_map[i++] = si;
        }
    }
    if (st->function_map_count > 0) {
        st->function_map = malloc(st->function_map_count * sizeof(int32_t));
        if (st->function_map == NULL) {
            goto fail;
        }
        i = 0;
        cfl_list_foreach(func_iter, &profile->functions) {
            cprof_func = cfl_list_entry(func_iter, struct cprof_function, _head);
            si = dict_add_function(dict, cprof_func, st->string_map, st->string_map_count);
            if (si < 0) {
                goto fail;
            }
            st->function_map[i++] = si;
        }
    }
    if (st->location_map_count > 0) {
        st->location_map = malloc(st->location_map_count * sizeof(int32_t));
        if (st->location_map == NULL) {
            goto fail;
        }
        i = 0;
        cfl_list_foreach(loc_iter, &profile->locations) {
            cprof_loc = cfl_list_entry(loc_iter, struct cprof_location, _head);
            si = dict_add_location(dict, cprof_loc,
                                   st->mapping_map, st->mapping_map_count,
                                   st->function_map, st->function_map_count);
            if (si < 0) {
                goto fail;
            }
            st->location_map[i++] = si;
        }
    }
    if (st->link_map_count > 0) {
        st->link_map = malloc(st->link_map_count * sizeof(int32_t));
        if (st->link_map == NULL) {
            goto fail;
        }
        i = 0;
        cfl_list_foreach(link_iter, &profile->link_table) {
            cprof_link = cfl_list_entry(link_iter, struct cprof_link, _head);
            si = dict_add_link(dict, cprof_link);
            if (si < 0) {
                goto fail;
            }
            st->link_map[i++] = si;
        }
    }
```
🧩 Analysis chain
🏁 Scripts executed (repository: fluent/fluent-bit; outputs collapsed):

```bash
#!/bin/bash
# Inspect how profile/sample attribute indices are defined and populated.
rg -n "struct cprof_profile|attribute_table|attributes_count|cprof_sample_add_attribute|attribute_indices" lib/cprofiles

# Get the pack_cprof_sample function and surrounding context
sed -n '2120,2160p' lib/cprofiles/src/cprof_encode_opentelemetry.c

# Get the build_profiles_dictionary function including the range mentioned
sed -n '1544,1631p' lib/cprofiles/src/cprof_encode_opentelemetry.c

# Search for dict_add_attribute, attribute_map, and the "leave 0 for now" comment
rg -n "dict_add_attribute|attribute_map|leave 0 for now" lib/cprofiles/src/cprof_encode_opentelemetry.c

# Get context around the "leave 0 for now" comment
sed -n '1075,1095p' lib/cprofiles/src/cprof_encode_opentelemetry.c

# Search for dict_add_mapping function definition
rg -n "^static.*dict_add_mapping|^dict_add_mapping" -A 20 lib/cprofiles/src/cprof_encode_opentelemetry.c | head -50

# Search for the profile_encoding_state structure definition
rg -n "struct profile_encoding_state" -A 30 lib/cprofiles/src/cprof_encode_opentelemetry.c | head -60

# Check how samples store attributes and if profile->attribute_table exists
rg -n "struct cprof_sample" -A 10 lib/cprofiles/include/cprofiles/cprofiles.h

# Get the complete cprof_sample structure
sed -n '51,75p' lib/cprofiles/include/cprofiles/cprofiles.h

# Search for how attribute_table is populated in profiles and how attributes are added to samples
rg -n "cprof_sample_add_attribute|profile->attribute_table" -B 2 -A 2 lib/cprofiles/src/cprof_decode_opentelemetry.c | head -40

# Search for where dictionary->attribute_table is built
rg -n "dict->attribute_table|dict->n_attribute_table" lib/cprofiles/src/cprof_encode_opentelemetry.c

# Search for where attributes are actually added to samples in the codebase
rg -n "cprof_sample_add_attribute" -B 3 -A 3 lib/cprofiles/src/ lib/cprofiles/tests/ | head -60

# Search for where samples get their attributes from and how they should be encoded
rg -n "otlp_sample->attribute_indices" lib/cprofiles/src/cprof_encode_opentelemetry.c -B 5 -A 5

# Search for any attribute mapping or attribute dictionary building
rg -n "attribute_map|dict_add_attribute|n_attribute_table\s*=" lib/cprofiles/src/cprof_encode_opentelemetry.c

# Get context around the attribute_table handling in decode
sed -n '414,435p' lib/cprofiles/src/cprof_decode_opentelemetry.c
```
Add attribute table mapping to dictionary encoding.

Attributes are copied directly from the sample to `otlp_sample->attribute_indices` without remapping to dictionary indices, unlike strings, mappings, functions, locations, and links. The `profile_encoding_state` has no `attribute_map`, and `build_profiles_dictionary` contains no attribute mapping logic. The code explicitly notes this with a comment at line 1085: `/* attribute_indices reference dict attribute_table; leave 0 for now */`.

To fix: (1) add `int32_t *attribute_map` and `size_t attribute_map_count` to `profile_encoding_state`, (2) build the attribute map in `build_profiles_dictionary` by iterating over `profile->attribute_table` and populating `dict->attribute_table`, and (3) in `pack_cprof_sample`, remap indices using `encoding_state->attribute_map` before copying to `otlp_sample->attribute_indices`.
🤖 Prompt for AI Agents
In `@lib/cprofiles/src/cprof_encode_opentelemetry.c` around lines 1544 - 1631, The
attribute table is not remapped into the dictionary: add int32_t *attribute_map
and size_t attribute_map_count to struct profile_encoding_state, allocate and
populate attribute_map in build_profiles_dictionary by iterating
profile->attribute_table and adding entries into dict->attribute_table (use the
same pattern as mapping_map/function_map/location_map and the dict_add_*
helpers), and update pack_cprof_sample to remap sample attribute indices through
encoding_state->attribute_map (writing remapped values into
otlp_sample->attribute_indices) so samples reference the dictionary's attribute
indices instead of raw profile indices.
```c
fail:
    for (i = 0; i < state_idx; i++) {
        free_profile_encoding_state(&states[i]);
    }
    free(states);
    destroy_profiles_dictionary(dict);
    return CPROF_ENCODE_OPENTELEMETRY_ALLOCATION_ERROR;
```
Free the current profile state on failure.
If a failure happens before `state_idx++`, allocations in the current state are not released.
🧹 Suggested fix:

```diff
 fail:
-    for (i = 0; i < state_idx; i++) {
+    for (i = 0; i <= state_idx && i < state_count; i++) {
         free_profile_encoding_state(&states[i]);
     }
```

📝 Committable suggestion
‼️ IMPORTANT: carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test the code to ensure it meets the requirements.

```c
fail:
    for (i = 0; i <= state_idx && i < state_count; i++) {
        free_profile_encoding_state(&states[i]);
    }
    free(states);
    destroy_profiles_dictionary(dict);
    return CPROF_ENCODE_OPENTELEMETRY_ALLOCATION_ERROR;
```
🤖 Prompt for AI Agents
In `@lib/cprofiles/src/cprof_encode_opentelemetry.c` around lines 1674 - 1680, The
fail cleanup currently frees states 0..state_idx-1 but misses freeing the
partially-initialized current state at states[state_idx]; change the cleanup to
free up to and including the current state (e.g., adjust the loop in the fail
block that calls free_profile_encoding_state to iterate i = 0; i <= state_idx;
i++) so the partially-initialized &states[state_idx] is released before
free(states) and destroy_profiles_dictionary(dict).
```c
                instance->string_table_count);

        if (result != CPROF_ENCODE_TEXT_SUCCESS) {
            return result;
        }

        result = decrement_indentation_level(context);

        if (result != CPROF_ENCODE_TEXT_SUCCESS) {
            return result;
        }

        if (result != CPROF_ENCODE_TEXT_SUCCESS) {
            return result;
        }
    }

    result = encode_string_array(
                context,
                CFL_TRUE,
                "String table : [",
                ", ",
                " ]\n",
                (char **) instance->string_table,
                instance->string_table_count);

    if (result != CPROF_ENCODE_TEXT_SUCCESS) {
        return result;
    }
```
Dead code: duplicate success check at lines 2478-2480.

The `result` variable still holds the return value of `decrement_indentation_level`, which is already checked at lines 2472-2476. The second identical check at lines 2478-2480 is unreachable dead code.
🧹 Proposed fix – remove the dead check
```diff
     result = decrement_indentation_level(context);

     if (result != CPROF_ENCODE_TEXT_SUCCESS) {
         return result;
     }
-
-    if (result != CPROF_ENCODE_TEXT_SUCCESS) {
-        return result;
-    }
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```c
            instance->string_table_count);

        if (result != CPROF_ENCODE_TEXT_SUCCESS) {
            return result;
        }

        result = decrement_indentation_level(context);

        if (result != CPROF_ENCODE_TEXT_SUCCESS) {
            return result;
        }
    }
```
🤖 Prompt for AI Agents
In `@lib/cprofiles/src/cprof_encode_text.c` around lines 2466 - 2481, Remove the
duplicate unreachable success check after calling decrement_indentation_level:
keep the first validation of result immediately after the call (the if (result
!= CPROF_ENCODE_TEXT_SUCCESS) return result; that follows result =
decrement_indentation_level(context);) and delete the second identical if-block
that appears later (the redundant check of result != CPROF_ENCODE_TEXT_SUCCESS).
This removes dead code while preserving the intended error handling around
decrement_indentation_level in cprof_encode_text.c.
```c
        cprof_scope_profiles_destroy(scope_profiles);
        cprof_resource_profiles_destroy(resource_profiles);
        cprof_destroy(cprof);
        return NULL;
    }
```
Use-after-free in error path: scope_profiles is already linked into resource_profiles.
cprof_scope_profiles_create (line 74) adds scope_profiles to resource_profiles->scope_profiles via cfl_list_add. When the scope allocation fails here, cprof_scope_profiles_destroy frees scope_profiles but does not remove it from resource_profiles's list. The subsequent cprof_resource_profiles_destroy iterates that list and accesses the freed memory.
The same pattern repeats in every error path after line 74 (including lines 90–93, 101–105, 110–113, 118–122, 126–130) and throughout create_cprof_with_dictionary_tables.
Fix: in error paths after scope_profiles is created, remove it from the parent list before destroying it, or simply rely on cprof_resource_profiles_destroy to cascade the cleanup and don't destroy scope_profiles separately.
Proposed fix for error paths after scope_profiles creation
```diff
     scope_profiles->scope = cprof_instrumentation_scope_create("", "", NULL, 0);
     if (scope_profiles->scope == NULL) {
-        cprof_scope_profiles_destroy(scope_profiles);
+        cfl_list_del(&scope_profiles->_head);
+        cprof_scope_profiles_destroy(scope_profiles);
         cprof_resource_profiles_destroy(resource_profiles);
         cprof_destroy(cprof);
         return NULL;
     }
```

Or more simply, skip the individual destroy and let the cascade handle it:
```diff
     scope_profiles->scope = cprof_instrumentation_scope_create("", "", NULL, 0);
     if (scope_profiles->scope == NULL) {
-        cprof_scope_profiles_destroy(scope_profiles);
         cprof_resource_profiles_destroy(resource_profiles);
         cprof_destroy(cprof);
         return NULL;
     }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```c
        cfl_list_del(&scope_profiles->_head);
        cprof_scope_profiles_destroy(scope_profiles);
        cprof_resource_profiles_destroy(resource_profiles);
        cprof_destroy(cprof);
        return NULL;
    }
```
🤖 Prompt for AI Agents
In `@lib/cprofiles/tests/opentelemetry_transcoder.c` around lines 82 - 86, The
error path frees scope_profiles while it is still linked into
resource_profiles->scope_profiles (created by cprof_scope_profiles_create),
causing use-after-free when cprof_resource_profiles_destroy later iterates the
list; to fix, after creating scope_profiles in
create_cprof_with_dictionary_tables, either remove it from the parent list (call
the appropriate list-remove operation on resource_profiles->scope_profiles)
before calling cprof_scope_profiles_destroy, or stop calling
cprof_scope_profiles_destroy in those error paths and let
cprof_resource_profiles_destroy perform the cascade cleanup for scope_profiles;
update all error exits after cprof_scope_profiles_create (and similar spots
noted) to follow one consistent approach.
```c
    cprof_sample_type_str_create(profile, "count", "", CPROF_AGGREGATION_TEMPORALITY_CUMULATIVE);
    id = cprof_profile_string_add(profile, "main", -1);
    if (id == 0) {
        cprof_profile_destroy(profile);
        cprof_scope_profiles_destroy(scope_profiles);
        cprof_resource_profiles_destroy(resource_profiles);
        cprof_destroy(cprof);
        return NULL;
    }
```
Unchecked return from cprof_sample_type_str_create; wrong sentinel for cprof_profile_string_add error.
`cprof_sample_type_str_create` (line 98) returns `NULL` on failure but the return value is not checked. `cprof_profile_string_add` returns `(size_t)-1` on error (per the implementation in `cprof_profile.c`), but the check at line 100 tests `id == 0`. Since `id == 0` is the empty-string sentinel and will never be returned for the non-empty string `"main"`, this doesn't cause a false positive, but it will miss actual allocation failures (which return `SIZE_MAX`).
These are unlikely to matter in practice for test code but are worth noting.
🤖 Prompt for AI Agents
In `@lib/cprofiles/tests/opentelemetry_transcoder.c` around lines 98 - 106, Check
the return value of cprof_sample_type_str_create and handle NULL by cleaning up
(call cprof_profile_destroy, cprof_scope_profiles_destroy,
cprof_resource_profiles_destroy, cprof_destroy) and returning NULL; also change
the error sentinel check for cprof_profile_string_add from id == 0 to id ==
(size_t)-1 (or SIZE_MAX) so allocation failures are detected and trigger the
same cleanup path. Ensure both failures follow the same cleanup sequence and
return NULL.
```markdown
| FLUENT_PROTO_TRACE    | Include the regeneration of C interfaces for `trace.proto` and `trace_service.proto` files.     | On |
| FLUENT_PROTO_LOGS     | Include the regeneration of C interfaces for `logs.proto` and `logs_service.proto` files.       | On |
| FLUENT_PROTO_METRICS  | Include the regeneration of C interfaces for `metrics.proto` and `metrics_service.proto` files. | On |
| FLUENT_PROTO_PROFILES | Include the regeneartion of C inter for `profiles.proto`                                        | On |
```
Typo and truncated text in FLUENT_PROTO_PROFILES description.
"regeneartion" → "regeneration" and "C inter" → "C interfaces" (text appears truncated).
Proposed fix
```diff
-| FLUENT_PROTO_PROFILES | Include the regeneartion of C inter for `profiles.proto` | On |
+| FLUENT_PROTO_PROFILES | Include the regeneration of C interfaces for `profiles.proto` | On |
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```markdown
| FLUENT_PROTO_PROFILES | Include the regeneration of C interfaces for `profiles.proto` | On |
```
🧰 Tools
🪛 LanguageTool
[grammar] ~114-~114: Ensure spelling is correct
Context: ...| | FLUENT_PROTO_PROFILES | Include the regeneartion of C inter for profiles.proto | On | ...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🤖 Prompt for AI Agents
In `@lib/fluent-otel-proto/README.md` at line 114, Fix the typo and truncated
wording in the FLUENT_PROTO_PROFILES table entry: change "regeneartion" to
"regeneration" and replace "C inter" with "C interfaces" so the description
reads something like "Include the regeneration of C interfaces for
`profiles.proto`"; update the FLUENT_PROTO_PROFILES cell in the README table
accordingly (look for the table row containing the FLUENT_PROTO_PROFILES header
to locate the text).
This branch updates Fluent Bit’s OpenTelemetry dependency stack to OTel v1.9.0 and aligns profile ingestion/encoding behavior across libs and plugins.
What changed
User-visible impact
Fluent Bit is licensed under Apache 2.0, by submitting this pull request I understand that this code will be released under the terms of that license.
Summary by CodeRabbit
New Features
Bug Fixes
Chores