[exporter/datadogexporter] Rely on http.Client's timeout instead of in exporterhelper's #6414
Conversation
…n exporterhelper's
Left a question, LGTM
 return &traceEdgeConnectionImpl{
 	traceURL:  rootURL + "/api/v0.2/traces",
 	statsURL:  rootURL + "/api/v0.2/stats",
 	buildInfo: buildInfo,
 	apiKey:    apiKey,
-	client:    utils.NewHTTPClient(traceEdgeTimeout),
+	client:    utils.NewHTTPClient(settings),
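For context, a minimal sketch of what the updated helper could look like, assuming utils.NewHTTPClient now takes the exporter's TimeoutSettings rather than a hard-coded duration (the real helper may also configure transports, TLS, proxies, and so on):

```go
package utils

import (
	"net/http"

	"go.opentelemetry.io/collector/exporter/exporterhelper"
)

// NewHTTPClient builds an *http.Client whose per-request timeout comes
// from the exporter configuration instead of a package-level constant.
// Sketch only: the actual datadogexporter helper may differ.
func NewHTTPClient(settings exporterhelper.TimeoutSettings) *http.Client {
	return &http.Client{
		Timeout: settings.Timeout,
	}
}
```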
This changes the timeout value from 10 seconds to 15 seconds (if I read the following correctly):

opentelemetry-collector-contrib/exporter/datadogexporter/factory.go
Lines 49 to 53 in 77d48a5

func defaulttimeoutSettings() exporterhelper.TimeoutSettings {
	return exporterhelper.TimeoutSettings{
		Timeout: 15 * time.Second,
	}
}

Do we want to change defaulttimeoutSettings to return 10 seconds instead?
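If the intent were to keep the previous 10-second default, the change being asked about would look roughly like this (a sketch of the suggestion only, not code from this PR):

```go
// Hypothetical: restore the earlier 10-second default instead of 15 seconds.
func defaulttimeoutSettings() exporterhelper.TimeoutSettings {
	return exporterhelper.TimeoutSettings{
		Timeout: 10 * time.Second,
	}
}
```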
Yes, this is fine (we care about the timeout of the batch processor for traces, but that's about it; the timeout here should be fine no matter the value).
@gbbr could you review?
/easycla
Overall the PR looks good, just one question before approving.
@@ -154,7 +154,8 @@ func createMetricsExporter(
 		cfg,
 		set,
 		pushMetricsFn,
-		exporterhelper.WithTimeout(cfg.TimeoutSettings),
+		// explicitly disable since we rely on http.Client timeout logic.
Will this be confusing if some exporters use this TimeoutSettings as the timeout setting for the entire operation, whereas other exporters use it per network call? This makes me think we should have a different configuration option for it. What do you think?
I have tried to use a pattern that is already used in the Collector in other exporters:
opentelemetry-collector-contrib/exporter/signalfxexporter/factory.go
Lines 121 to 122 in 243d642

// explicitly disable since we rely on http.Client timeout logic.
exporterhelper.WithTimeout(exporterhelper.TimeoutSettings{Timeout: 0}),

opentelemetry-collector-contrib/exporter/lokiexporter/factory.go
Lines 71 to 72 in 243d642

// explicitly disable since we rely on http.Client timeout logic.
exporterhelper.WithTimeout(exporterhelper.TimeoutSettings{Timeout: 0}),

opentelemetry-collector-contrib/exporter/splunkhecexporter/factory.go
Lines 105 to 106 in 243d642

// explicitly disable since we rely on http.Client timeout logic.
exporterhelper.WithTimeout(exporterhelper.TimeoutSettings{Timeout: 0}),

opentelemetry-collector-contrib/exporter/observiqexporter/exporter.go
Lines 42 to 43 in 243d642

// explicitly disable since we rely on http.Client timeout logic.
exporterhelper.WithTimeout(exporterhelper.TimeoutSettings{Timeout: 0}),
I would expect most users not to care about a global Consume[Metrics/Traces/Logs] function timeout (they don't even need to know that such a function exists to use the exporter). If we go with a different option, I think we should have a wider conversation to reach a consistent solution for all exporters that handle this differently today.
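Applied to the datadogexporter, that pattern corresponds to the fragment visible in the diff above, with the exporterhelper timeout explicitly zeroed out. A sketch mirroring the argument order shown in the diff (exact factory signatures vary between collector versions):

```go
return exporterhelper.NewMetricsExporter(
	cfg,
	set,
	pushMetricsFn,
	// explicitly disable since we rely on http.Client timeout logic.
	exporterhelper.WithTimeout(exporterhelper.TimeoutSettings{Timeout: 0}),
)
```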
Agreed, I'm happy to approve this PR. It would be good to have that discussion; if the pattern of disabling this timeout is already happening, maybe the global timeout is less important.
Opened open-telemetry/opentelemetry-collector#4497 to discuss this
…n exporterhelper's (open-telemetry#6414)
Signed-off-by: Bogdan <bogdandrutu@gmail.com>
Description:
Rely on http.Client timeout instead of exporterhelper's. Since exporterhelper's timeout applies to the whole push[Trace/Metric]Data call and we do several network requests per call, it is preferable to do it this way to prevent one network call from causing timeouts in the next one. This is also required for #6412, since retries inside the push functions increase the time taken.
Link to tracking Issue: n/a
Testing: Things to test manually
Documentation: none, this should be transparent to the user.
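To illustrate the reasoning in the description above (illustrative only, not collector code): with a per-client timeout, each of the several requests made during a single push gets its own budget, whereas a single exporterhelper timeout would bound the whole push and let a slow first request starve the ones after it. The host below is hypothetical; the paths are the ones used by the trace-edge connection in this exporter.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// pushData imitates a push[Trace/Metric]Data call that performs several
// network requests. The http.Client timeout applies to each request
// independently, so one slow request cannot eat into the budget of the next.
func pushData(client *http.Client, urls []string) error {
	for _, u := range urls {
		resp, err := client.Get(u)
		if err != nil {
			return fmt.Errorf("request to %s failed: %w", u, err)
		}
		resp.Body.Close()
	}
	return nil
}

func main() {
	// 10 seconds per request, not 10 seconds for the whole batch of requests.
	client := &http.Client{Timeout: 10 * time.Second}
	_ = pushData(client, []string{
		"https://example.invalid/api/v0.2/traces",
		"https://example.invalid/api/v0.2/stats",
	})
}
```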