Test flake in OauthIntegrationTest.UnauthenticatedFlow #12960
Comments
I've been able to reproduce this only when running in dbg mode.
cc @lizan Saw this flake on a recent coverage run. I think the main difference between coverage and regular runs is related to log levels. The fact that it's reproducible in dbg mode is interesting.
I seem to have repro'ed this under ASAN in the 1.16 release branch:

[ RUN ] OauthIntegrationTest.UnauthenticatedFlow
==12==ERROR: AddressSanitizer: heap-use-after-free on address 0x6190000cc528 at pc 0x000017db0c94 bp 0x7ffc46b3b3b0 sp 0x7ffc46b3b3a8
0x6190000cc528 is located 936 bytes inside of 992-byte region [0x6190000cc180,0x6190000cc560)
previously allocated by thread T0 here:
SUMMARY: AddressSanitizer: heap-use-after-free oauth_integration_test.cc:? in Envoy::Extensions::HttpFilters::Oauth::(anonymous namespace)::OauthIntegrationTest_UnauthenticatedFlow_Test::TestBody()
Yeah, I can see that this is the flakiest test on the master branch (crash).
I think I understand the bug: the test body attempts to send end stream after the client has already received the full reply for that stream and deleted it.
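For reference, the racy pattern looks roughly like this (a simplified sketch using the usual HttpIntegrationTest helpers such as codec_client_, request_encoder_ and default_request_headers_, not the exact test code):

```cpp
// Simplified sketch, not the exact oauth_integration_test.cc body.
auto encoder_decoder = codec_client_->startRequest(default_request_headers_);
request_encoder_ = &encoder_decoder.first;
auto response = std::move(encoder_decoder.second);

// If the proxy answers while startRequest() is still on the stack, the client
// can process the complete response and delete the request stream before the
// next line runs; request_encoder_ then points at freed memory and sending
// end stream is a heap-use-after-free.
codec_client_->sendData(*request_encoder_, 0, true);
```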
The client request stream can be deleted under the call stack of Envoy::IntegrationCodecClient::startRequest if the proxy replies quickly enough. Attempts to send an end stream on that request result in use-after-free on the client stream in cases where the client processed the full reply inside startRequest.

Fixes #12960

Signed-off-by: Antonio Vicente <avd@google.com>
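The general shape of the guard is to touch the request encoder only if the client has not already processed the complete response; a minimal sketch of that idea (not necessarily the exact change that landed):

```cpp
// If the response is already complete, the client has torn down the request
// stream and request_encoder_ is dangling, so skip sending end stream.
if (!response->complete()) {
  codec_client_->sendData(*request_encoder_, 0, true);
}
response->waitForEndStream();
```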
In a CI coverage run for another PR, a flake was caught in OauthIntegrationTest.UnauthenticatedFlow,
which I haven't been able to reproduce locally with O(10k) repetitions. Possibly something
the coverage build does, such as putting pressure on certain system resources, is needed to tease
the issue out.
Relevant CI log: https://dev.azure.com/cncf/4684fb3d-0389-4e0b-8251-221942316e06/_apis/build/builds/50172/logs/98