
[exporter/datadogexporter] Rely on http.Client's timeout instead of in exporterhelper's #6414

Merged
merged 1 commit into from
Nov 30, 2021

Conversation

mx-psi
Member

@mx-psi mx-psi commented Nov 23, 2021

Description:

Rely on http.Client timeout instead of exporterhelper's.

Since exporterhelper's timeout applies to the whole push[Trace/Metric]Data call and we make several network requests per call, it is preferable to do it this way to prevent one network call from causing timeouts in the next one.

This is also required for #6412, since retries inside the push functions increase the time taken.

Link to tracking Issue: n/a

Testing: Things to test manually

  • Check that timeout does occur on network calls.

Documentation: none, this should be transparent to the user.

@mx-psi mx-psi marked this pull request as ready for review November 23, 2021 10:49
@mx-psi mx-psi requested review from a team, Aneurysm9 and KSerrania and removed request for Aneurysm9 November 23, 2021 10:49
Contributor

@KSerrania KSerrania left a comment


Left a question, LGTM


```diff
 return &traceEdgeConnectionImpl{
 	traceURL:  rootURL + "/api/v0.2/traces",
 	statsURL:  rootURL + "/api/v0.2/stats",
 	buildInfo: buildInfo,
 	apiKey:    apiKey,
-	client:    utils.NewHTTPClient(traceEdgeTimeout),
+	client:    utils.NewHTTPClient(settings),
```
Contributor

This changes the timeout value from 10 seconds to 15 seconds (if I read

```go
func defaulttimeoutSettings() exporterhelper.TimeoutSettings {
	return exporterhelper.TimeoutSettings{
		Timeout: 15 * time.Second,
	}
}
```

correctly). Is that fine? Should we change `defaulttimeoutSettings` to return 10 seconds instead?

Member Author

Yes, this is fine (we care about the timeout of the batch processor for traces, but that's about it; the timeout here should be fine no matter the value).

exporter/datadogexporter/metrics_exporter.go (resolved)
Member Author

mx-psi commented Nov 23, 2021

@gbbr could you review?

Member Author

mx-psi commented Nov 23, 2021

/easycla

@mx-psi mx-psi added the ready to merge Code review completed; ready to merge by maintainers label Nov 24, 2021
Contributor

@codeboten codeboten left a comment


Overall the PR looks good, just one question before approving.

```diff
@@ -154,7 +154,8 @@ func createMetricsExporter(
 	cfg,
 	set,
 	pushMetricsFn,
-	exporterhelper.WithTimeout(cfg.TimeoutSettings),
+	// explicitly disable since we rely on http.Client timeout logic.
```
Contributor

Will this be confusing if some exporters use this TimeoutSettings as the timeout setting for the entire operation where as other exporters use it per network call? This makes me think we should have a different configuration option for it. What do you think?

Member Author

I have tried to use a pattern that is already used in the Collector in other exporters:

```go
// explicitly disable since we rely on http.Client timeout logic.
exporterhelper.WithTimeout(exporterhelper.TimeoutSettings{Timeout: 0}),
// explicitly disable since we rely on http.Client timeout logic.
exporterhelper.WithTimeout(exporterhelper.TimeoutSettings{Timeout: 0}),
// explicitly disable since we rely on http.Client timeout logic.
exporterhelper.WithTimeout(exporterhelper.TimeoutSettings{Timeout: 0}),
// explicitly disable since we rely on http.Client timeout logic.
exporterhelper.WithTimeout(exporterhelper.TimeoutSettings{Timeout: 0}),
```

I would expect most users not to care about a global Consume[Metrics/Traces/Logs] function timeout (they don't even need to know that such a function exists to use the exporter). If we use different options I think we should have a wider conversation to have a consistent solution for all exporters that do this differently today.

Contributor

Agreed, I'm happy to approve this PR. It would be good to have that discussion; if the pattern of disabling this timeout is already common, maybe the global timeout is less important.

@codeboten codeboten merged commit 90abdb2 into open-telemetry:main Nov 30, 2021
@mx-psi mx-psi deleted the mx-psi/timeout-per-call branch December 1, 2021 13:16
jamesmoessis pushed a commit to atlassian-forks/opentelemetry-collector-contrib that referenced this pull request Dec 8, 2021
povilasv referenced this pull request in coralogix/opentelemetry-collector-contrib Dec 19, 2022