Data race in v0.48.0 #4895
Comments
+1, I've got it too in 0.48.0
There does seem to be a race issue with the counter of bytes read from the body, which we increment and read with no regard for concurrent access.
I'm looking into this and will open a PR soon.
So, no PR yet, as I can't create a reproduction test.
It seems the race is indeed on accessing |
Using |
Hm, I also cannot reproduce it. It happened only once in CI. Running it with |
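Since the race shows up only rarely, one common way to stress a flaky interleaving with the standard toolchain (test name taken from the issue description; the exact flags and repetition count are just an example invocation, not what CI ran):

```shell
# Hypothetical invocation: re-run the failing test many times under the
# race detector. -count also disables test result caching, so each
# repetition actually executes and a rare interleaving has a chance to fire.
go test -race -run 'TestProxy_UserAccessHappyPath' -count=200 ./...
```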
+1 frequency: |
+1 frequency: |
The OpenTelemetry Collector contrib-distro upgraded to |
Advance apologies for any perceived cynicism, but I think it's worth calling out: this sort of issue really degrades confidence in OTel in general. There really isn't a defense for not having caught this in OTel's own tests -- it was certainly caught by others' tests. I hope part of the solution here involves an improved release pipeline with tests that run with the -race flag enabled. To elaborate a bit more: regardless of whether the race causes any actual production issues, this sort of thing does cause developer pain and delays for repositories that do adhere to strict testing standards. This is especially true in light of Go modules' tendency to update packages on |
Tests do run with the -race flag enabled, and we do adhere to strong testing principles. But tests, as extensive as they may be, are not a sure way to prevent every bug. This is indeed a regrettable regression, but it wasn't caught by any other tests. You are more than welcome to help us review further PRs so this doesn't happen again.
Any chance of a |
Likely by the end of this week or beginning of next. |
Fix a data race on container run by updating otelhttp: open-telemetry/opentelemetry-go-contrib#4895 Co-authored-by: Manuel de la Peña <mdelapenya@gmail.com>
Description
Looks like the RoundTripper can return before it has finished sending the request. Perhaps this happens when the server responds faster than the request is fully sent? I think in HTTP/2 mode the stdlib HTTP client handles send and receive independently, while in HTTP/1 it only starts reading the response once the request has been fully sent.
https://pkg.go.dev/net/http#RoundTripper says:
Which is the case in the new code - it reads a field from the request body wrapper without waiting for `Close` to be called.

Environment

`otelhttp` version: v0.48.0

Steps To Reproduce

Run `TestProxy_UserAccessHappyPath` with `-race`. I don't have a small reproducer, sorry.

Expected behavior
No data race.