
Add gateway usage for otel-agent #34435

Merged: 15 commits merged into main on Mar 17, 2025

Conversation

Contributor

@ogaca-dd ogaca-dd commented Feb 25, 2025

What does this PR do?

This PR introduces gateway usage for the otel-agent. For more context, refer to open-telemetry/opentelemetry-collector-contrib#37499.

The `datadog.otel.gateway` metric is now sent by the otel-agent for OTLP metrics, traces, and logs. It is also sent for OTLP metrics when using OTLP ingestion, but not for traces or logs.

Motivation

Describe how you validated your changes

The following commands generate the metric `datadog.otel.gateway` with a value of 1:

```shell
telemetrygen metrics --otlp-insecure --otlp-attributes "datadog.host.name"=\"host1\"
telemetrygen metrics --otlp-insecure --otlp-attributes "datadog.host.name"=\"host2\"
```

The same tests were performed for traces and for logs.
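For illustration, here is a minimal, hypothetical Go sketch of the idea behind the metric, assuming the collector is treated as a gateway once it sees resources reporting more than one distinct `datadog.host.name` (the type and function names below are illustrative, not the actual datadog-agent code):

```go
package main

import (
	"fmt"
	"sync"
)

// gatewayUsage tracks the distinct host names seen on incoming OTLP data.
type gatewayUsage struct {
	mu    sync.Mutex
	hosts map[string]struct{}
}

func newGatewayUsage() *gatewayUsage {
	return &gatewayUsage{hosts: map[string]struct{}{}}
}

// recordHost registers a host name extracted from a resource attribute
// such as datadog.host.name.
func (g *gatewayUsage) recordHost(host string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.hosts[host] = struct{}{}
}

// gauge returns 1 once more than one distinct host has been seen
// (i.e. the collector appears to act as a gateway), otherwise 0.
func (g *gatewayUsage) gauge() float64 {
	g.mu.Lock()
	defer g.mu.Unlock()
	if len(g.hosts) > 1 {
		return 1
	}
	return 0
}

func main() {
	g := newGatewayUsage()
	g.recordHost("host1")  // first telemetrygen run above
	g.recordHost("host2")  // second telemetrygen run above
	fmt.Println(g.gauge()) // prints 1, matching the validation result
}
```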

Possible Drawbacks / Trade-offs

Additional Notes

@ogaca-dd ogaca-dd changed the title from "Olivierg/gateway usage otel agent" to "Add gateway usage for otel-agent" on Feb 25, 2025
@ogaca-dd ogaca-dd added the changelog/no-changelog and qa/done (QA done before merge and regressions are covered by tests) labels on Feb 25, 2025
@github-actions github-actions bot added the long review (PR is complex, plan time to review it) label on Feb 25, 2025

@ogaca-dd ogaca-dd force-pushed the olivierg/gateway-usage-otel-agent branch from e4956b4 to 1b798f2 on February 27, 2025 16:39

Contributor

agent-platform-auto-pr bot commented Feb 27, 2025

Uncompressed package size comparison

Comparison with ancestor d2566ecf5e7659b271be4de104b4690cf22756b1

Diff per package
| package | diff | status | size | ancestor | threshold |
|---|---|---|---|---|---|
| datadog-heroku-agent-amd64-deb | 0.01MB | ⚠️ | 440.63MB | 440.62MB | 0.50MB |
| datadog-agent-amd64-deb | 0.01MB | ⚠️ | 809.75MB | 809.74MB | 0.50MB |
| datadog-agent-x86_64-rpm | 0.01MB | ⚠️ | 819.54MB | 819.53MB | 0.50MB |
| datadog-agent-x86_64-suse | 0.01MB | ⚠️ | 819.54MB | 819.53MB | 0.50MB |
| datadog-iot-agent-aarch64-rpm | 0.01MB | ⚠️ | 59.47MB | 59.47MB | 0.50MB |
| datadog-iot-agent-arm64-deb | 0.01MB | ⚠️ | 59.40MB | 59.40MB | 0.50MB |
| datadog-agent-arm64-deb | 0.01MB | ⚠️ | 800.74MB | 800.73MB | 0.50MB |
| datadog-agent-aarch64-rpm | 0.00MB | | 810.51MB | 810.51MB | 0.50MB |
| datadog-iot-agent-x86_64-rpm | 0.00MB | | 62.25MB | 62.24MB | 0.50MB |
| datadog-iot-agent-x86_64-suse | 0.00MB | | 62.25MB | 62.24MB | 0.50MB |
| datadog-iot-agent-amd64-deb | 0.00MB | | 62.18MB | 62.17MB | 0.50MB |
| datadog-dogstatsd-amd64-deb | 0.00MB | | 39.33MB | 39.33MB | 0.50MB |
| datadog-dogstatsd-x86_64-rpm | 0.00MB | | 39.41MB | 39.41MB | 0.50MB |
| datadog-dogstatsd-x86_64-suse | 0.00MB | | 39.41MB | 39.41MB | 0.50MB |
| datadog-dogstatsd-arm64-deb | 0.00MB | | 37.86MB | 37.86MB | 0.50MB |

Decision

⚠️ Warning

Contributor

agent-platform-auto-pr bot commented Feb 27, 2025

Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

dda inv aws.create-vm --pipeline-id=58834378 --os-family=ubuntu

Note: This applies to commit 5e93a70

Contributor

agent-platform-auto-pr bot commented Feb 27, 2025

Static quality checks ✅

Please find below the results from static quality gates

Successful checks

Info

| Quality gate | On disk size | On disk size limit | On wire size | On wire size limit |
|---|---|---|---|---|
| static_quality_gate_agent_deb_amd64 | 783.64 MiB | 801.8 MiB | 191.03 MiB | 202.62 MiB |
| static_quality_gate_agent_deb_arm64 | 775.1 MiB | 793.14 MiB | 173.13 MiB | 184.51 MiB |
| static_quality_gate_agent_rpm_amd64 | 783.6 MiB | 801.79 MiB | 193.17 MiB | 205.03 MiB |
| static_quality_gate_agent_rpm_arm64 | 775.13 MiB | 793.09 MiB | 175.38 MiB | 186.44 MiB |
| static_quality_gate_agent_suse_amd64 | 783.72 MiB | 801.81 MiB | 193.17 MiB | 205.03 MiB |
| static_quality_gate_agent_suse_arm64 | 775.06 MiB | 793.14 MiB | 175.38 MiB | 186.44 MiB |
| static_quality_gate_dogstatsd_deb_amd64 | 37.58 MiB | 47.67 MiB | 9.74 MiB | 19.78 MiB |
| static_quality_gate_dogstatsd_deb_arm64 | 36.18 MiB | 46.27 MiB | 8.45 MiB | 18.49 MiB |
| static_quality_gate_dogstatsd_rpm_amd64 | 37.58 MiB | 47.67 MiB | 9.75 MiB | 19.79 MiB |
| static_quality_gate_dogstatsd_suse_amd64 | 37.58 MiB | 47.67 MiB | 9.75 MiB | 19.79 MiB |
| static_quality_gate_iot_agent_deb_amd64 | 59.37 MiB | 69.0 MiB | 14.91 MiB | 24.8 MiB |
| static_quality_gate_iot_agent_deb_arm64 | 56.73 MiB | 66.4 MiB | 12.88 MiB | 22.8 MiB |
| static_quality_gate_iot_agent_rpm_amd64 | 59.37 MiB | 69.0 MiB | 14.93 MiB | 24.8 MiB |
| static_quality_gate_iot_agent_rpm_arm64 | 56.73 MiB | 66.4 MiB | 12.87 MiB | 22.8 MiB |
| static_quality_gate_iot_agent_suse_amd64 | 59.37 MiB | 69.0 MiB | 14.93 MiB | 24.8 MiB |
| static_quality_gate_docker_agent_amd64 | 868.38 MiB | 886.12 MiB | 292.17 MiB | 304.21 MiB |
| static_quality_gate_docker_agent_arm64 | 883.06 MiB | 900.79 MiB | 278.49 MiB | 290.47 MiB |
| static_quality_gate_docker_agent_jmx_amd64 | 1.04 GiB | 1.06 GiB | 367.26 MiB | 379.33 MiB |
| static_quality_gate_docker_agent_jmx_arm64 | 1.04 GiB | 1.06 GiB | 349.58 MiB | 361.55 MiB |
| static_quality_gate_docker_dogstatsd_amd64 | 45.73 MiB | 55.78 MiB | 17.25 MiB | 27.28 MiB |
| static_quality_gate_docker_dogstatsd_arm64 | 44.36 MiB | 54.45 MiB | 16.12 MiB | 26.16 MiB |
| static_quality_gate_docker_cluster_agent_amd64 | 264.87 MiB | 274.78 MiB | 106.28 MiB | 116.28 MiB |
| static_quality_gate_docker_cluster_agent_arm64 | 280.8 MiB | 290.82 MiB | 101.11 MiB | 111.12 MiB |


cit-pr-commenter bot commented Feb 27, 2025

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: b19ee587-e2a8-42e4-8b1e-7180972a6f87

Baseline: d2566ec
Comparison: 5e93a70
Diff

Optimization Goals: ✅ No significant changes detected

Fine details of change detection per experiment

| experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|
| quality_gate_logs | % cpu utilization | +0.77 | [-1.99, +3.53] | 1 | Logs |
| uds_dogstatsd_to_api_cpu | % cpu utilization | +0.76 | [-0.06, +1.57] | 1 | Logs |
| quality_gate_idle_all_features | memory utilization | +0.15 | [+0.09, +0.21] | 1 | Logs, bounds checks dashboard |
| quality_gate_idle | memory utilization | +0.09 | [+0.02, +0.16] | 1 | Logs, bounds checks dashboard |
| file_to_blackhole_0ms_latency_http1 | egress throughput | +0.01 | [-0.83, +0.85] | 1 | Logs |
| file_to_blackhole_0ms_latency_http2 | egress throughput | +0.01 | [-0.89, +0.90] | 1 | Logs |
| tcp_dd_logs_filter_exclude | ingress throughput | +0.01 | [-0.01, +0.03] | 1 | Logs |
| file_to_blackhole_1000ms_latency | egress throughput | +0.00 | [-0.77, +0.78] | 1 | Logs |
| file_to_blackhole_300ms_latency | egress throughput | -0.00 | [-0.63, +0.62] | 1 | Logs |
| uds_dogstatsd_to_api | ingress throughput | -0.01 | [-0.30, +0.27] | 1 | Logs |
| file_to_blackhole_0ms_latency | egress throughput | -0.02 | [-0.81, +0.77] | 1 | Logs |
| file_to_blackhole_100ms_latency | egress throughput | -0.03 | [-0.66, +0.61] | 1 | Logs |
| tcp_syslog_to_blackhole | ingress throughput | -0.03 | [-0.09, +0.02] | 1 | Logs |
| file_to_blackhole_500ms_latency | egress throughput | -0.13 | [-0.92, +0.66] | 1 | Logs |
| file_tree | memory utilization | -0.21 | [-0.33, -0.10] | 1 | Logs |
| file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.23 | [-0.70, +0.25] | 1 | Logs |

Bounds Checks: ✅ Passed

| experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|
| file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
| file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
| file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
| file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
| file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
| file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
| file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| quality_gate_logs | intake_connections | 10/10 | |
| quality_gate_logs | lost_bytes | 10/10 | |
| quality_gate_logs | memory_usage | 10/10 | |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
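As a rough illustration (not the Regression Detector's actual implementation), the three criteria above combine into a decision rule along these lines:

```go
package main

import (
	"fmt"
	"math"
)

// experiment mirrors one row of the table above; the field names are
// illustrative, not the detector's real data model.
type experiment struct {
	deltaMeanPct  float64 // estimated Δ mean %
	ciLow, ciHigh float64 // 90% confidence interval for Δ mean %
	markedErratic bool    // configuration flag
}

// isRegression returns true only if the effect size is at least the
// tolerance, the confidence interval excludes zero, and the experiment
// is not marked erratic.
func isRegression(e experiment, tolerancePct float64) bool {
	bigEnough := math.Abs(e.deltaMeanPct) >= tolerancePct
	ciExcludesZero := e.ciLow > 0 || e.ciHigh < 0
	return bigEnough && ciExcludesZero && !e.markedErratic
}

func main() {
	// quality_gate_logs from the table above: +0.77 [-1.99, +3.53]
	fmt.Println(isRegression(experiment{0.77, -1.99, 3.53, false}, 5.0)) // false
}
```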

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.

@ogaca-dd ogaca-dd marked this pull request as ready for review March 3, 2025 10:05
@ogaca-dd ogaca-dd requested review from a team as code owners March 3, 2025 10:05
Contributor

@louis-cqrl louis-cqrl left a comment


LGTM for ARUN files

@louis-cqrl louis-cqrl removed the request for review from jeremy-hanna March 3, 2025 10:07
ogaca-dd added 3 commits March 4, 2025 18:44

Member

@truthbk truthbk left a comment


So, I see you've taken the time to create a component so the attribute could also be leveraged elsewhere in the codebase. The thing is, I think the metric only makes sense in the context of the otel-agent (aka the Collector), so I'm not sure it's worth componentizing it and having it injected everywhere. We should maybe discuss whether this is overkill and unneeded elsewhere.

```diff
@@ -128,7 +131,7 @@ func runAgent(tagger tagger.Component, compression logscompression.Component) {
 	wg.Add(3)

 	go startTraceAgent(&wg, lambdaSpanChan, coldStartSpanId, serverlessDaemon, tagger, rcService)
-	go startOtlpAgent(&wg, metricAgent, serverlessDaemon, tagger)
+	go startOtlpAgent(&wg, metricAgent, serverlessDaemon, tagger, gatewayUsage)
```
Member


Serverless could never be a gateway, so this isn't really required at all here.

In general I feel like this metric should only be coming from the OTel Collector (aka otel-agent) but not necessarily other processes with some OTel processing capabilities like serverless or the trace-agent, etc. We should maybe discuss.

Contributor Author


Good to know. My understanding was that it should be sent for serverless and OTLP.

Contributor Author

ogaca-dd commented Mar 7, 2025

> So, I see you've taken the time to create a component so the attribute could also be leveraged elsewhere in the codebase. The thing is, I think the metric only makes sense in the context of the otel-agent (aka the Collector), so I'm not sure it's worth componentizing it and having it injected everywhere. We should maybe discuss whether this is overkill and unneeded elsewhere.

I created a component for several reasons:

  • Ensuring a unique instance: Gateway usage requires a single instance to maintain correctness. Using a component guarantees uniqueness across different locations.
  • Multiple use cases: My understanding is that this should be used in multiple places—OTel agent, OTLP ingest, and serverless.
  • Ease of creation: It’s fast to create with components.new-component.

That said, if gateway usage is strictly limited to the OTel agent, there’s no strong reason to keep it as a component, and I should remove it.
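For context, a hypothetical sketch of what a component-style wrapper enforcing a single shared instance could look like (the package, interface, and constructor names are illustrative, not the actual datadog-agent component API generated by components.new-component):

```go
package gatewayusage

import "sync"

// Component is an illustrative interface; the real component
// definition may differ.
type Component interface {
	SetHostSeen(host string)
	Gauge() float64
}

type impl struct {
	mu    sync.Mutex
	hosts map[string]struct{}
}

var (
	once     sync.Once
	instance *impl
)

// NewComponent always returns the same instance, so the otel-agent
// exporters, OTLP ingest, and any other consumer share one state.
func NewComponent() Component {
	once.Do(func() { instance = &impl{hosts: map[string]struct{}{}} })
	return instance
}

func (i *impl) SetHostSeen(host string) {
	i.mu.Lock()
	defer i.mu.Unlock()
	i.hosts[host] = struct{}{}
}

func (i *impl) Gauge() float64 {
	i.mu.Lock()
	defer i.mu.Unlock()
	if len(i.hosts) > 1 {
		return 1
	}
	return 0
}
```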

@ogaca-dd ogaca-dd requested a review from truthbk March 7, 2025 11:43

Member

truthbk commented Mar 10, 2025

Just formal confirmation that the gateway usage metric is only to be expected in the Collectors (both embedded and OSS), but not in the OTLP ingests we see in the trace-agent or serverless. I'm not sure if the component is still needed in those cases, but indeed I would avoid making the change larger than necessary.

Contributor

Serverless Benchmark Results

BenchmarkStartEndInvocation comparison between 4107eeb and e307a28.

tl;dr

Use these benchmarks as an insight tool during development.

  1. Skim down the vs base column in each chart. If there is a ~, then there was no statistically significant change to the benchmark. Otherwise, ensure the estimated percent change is either negative or very small.

  2. The last row of each chart is the geomean. Ensure this percentage is either negative or very small.

What is this benchmarking?

The BenchmarkStartEndInvocation compares the amount of time it takes to call the start-invocation and end-invocation endpoints. For universal instrumentation languages (Dotnet, Golang, Java, Ruby), this represents the majority of the duration overhead added by our tracing layer.

The benchmark is run using a large variety of lambda request payloads. In the charts below, there is one row for each event payload type.

How do I interpret these charts?

The charts below come from benchstat. They represent the statistical change in duration (sec/op), memory overhead (B/op), and allocations (allocs/op).

The benchstat docs explain how to interpret these charts.

Before the comparison table, we see common file-level configuration. If there are benchmarks with different configuration (for example, from different packages), benchstat will print separate tables for each configuration.

The table then compares the two input files for each benchmark. It shows the median and 95% confidence interval summaries for each benchmark before and after the change, and an A/B comparison under "vs base". ... The p-value measures how likely it is that any differences were due to random chance (i.e., noise). The "~" means benchstat did not detect a statistically significant difference between the two inputs. ...

Note that "statistically significant" is not the same as "large": with enough low-noise data, even very small changes can be distinguished from noise and considered statistically significant. It is, of course, generally easier to distinguish large changes from noise.

Finally, the last row of the table shows the geometric mean of each column, giving an overall picture of how the benchmarks changed. Proportional changes in the geomean reflect proportional changes in the benchmarks. For example, given n benchmarks, if sec/op for one of them increases by a factor of 2, then the sec/op geomean will increase by a factor of ⁿ√2.
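As a worked example (illustrative numbers, not taken from the tables here): with n = 16 benchmarks, doubling sec/op in a single benchmark raises the sec/op geomean by a factor of 2^(1/16) ≈ 1.044, i.e. roughly +4.4%.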

I need more help

First off, do not worry if the benchmarks are failing. They are not tests. The intention is for them to be a tool for you to use during development.

If you would like a hand interpreting the results come chat with us in #serverless-agent in the internal DataDog slack or in #serverless in the public DataDog slack. We're happy to help!

Benchmark stats

Member

@truthbk truthbk left a comment


Had a short conversation to make sure it makes sense that gateway usage is indeed leveraged in all the DD exporters (traces, logs, metrics), and it does seem to make sense because we can't guarantee any single signal type will be exported. This looks good 👍

Contributor

@duncanista duncanista left a comment


Approving from Serverless, but in our files the update is just an empty line.

…package. Update references from attributes.GatewayUsage to otel.GatewayUsage for improved nil handling
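For illustration, the kind of nil handling that commit message refers to might look like the following sketch, where methods on a nil receiver are safe no-ops so callers that never wire up gateway usage need no nil checks (the type and method names are illustrative, not the actual pkg/util/otel API):

```go
package otel

// GatewayUsage is an illustrative sketch; the real type in
// pkg/util/otel may differ. A nil *GatewayUsage is safe to use.
type GatewayUsage struct {
	hosts map[string]struct{}
}

// SetHost records a host name; calling it on a nil receiver is a no-op.
func (g *GatewayUsage) SetHost(host string) {
	if g == nil {
		return // gateway usage not enabled for this process
	}
	if g.hosts == nil {
		g.hosts = map[string]struct{}{}
	}
	g.hosts[host] = struct{}{}
}

// Gauge reports 1 when more than one host has been seen, else 0;
// a nil receiver always reports 0.
func (g *GatewayUsage) Gauge() float64 {
	if g == nil || len(g.hosts) < 2 {
		return 0
	}
	return 1
}
```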


Go Package Import Differences

Baseline: d2566ec
Comparison: 5e93a70

| binary | os | arch | change |
|---|---|---|---|
| agent | linux | amd64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/util/otel |
| agent | linux | arm64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/util/otel |
| agent | windows | amd64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/util/otel |
| agent | darwin | amd64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/util/otel |
| agent | darwin | arm64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/util/otel |
| iot-agent | linux | amd64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/util/otel |
| iot-agent | linux | arm64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/util/otel |
| heroku-agent | linux | amd64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/util/otel |
| serverless | linux | amd64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/util/otel |
| serverless | linux | arm64 | +1, -0: +github.com/DataDog/datadog-agent/pkg/util/otel |

@ogaca-dd
Contributor Author

/merge


dd-devflow bot commented Mar 17, 2025

View all feedback in the Devflow UI.
2025-03-17 17:35:39 UTC ℹ️ Start processing command /merge


2025-03-17 17:35:46 UTC ℹ️ MergeQueue: pull request added to the queue

The expected merge time in main is approximately 40m (p90).


2025-03-17 18:28:19 UTC ℹ️ MergeQueue: This merge request was merged

@dd-mergequeue dd-mergequeue bot merged commit 7c1a2dd into main Mar 17, 2025
487 checks passed
@dd-mergequeue dd-mergequeue bot deleted the olivierg/gateway-usage-otel-agent branch March 17, 2025 18:28
@github-actions github-actions bot added this to the 7.65.0 milestone Mar 17, 2025
arbll pushed a commit that referenced this pull request Mar 19, 2025

Labels
changelog/no-changelog, component/system-probe, long review (PR is complex, plan time to review it), qa/done (QA done before merge and regressions are covered by tests)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

None yet

9 participants