dynamic fwd proxy: fix race condition in filter destructor #32692
Conversation
Signed-off-by: Christoph Pakulski <paker8848@gmail.com>
/retest
CC @envoyproxy/coverage-shephards: FYI only for changes made to
Couple of thoughts.
First, please don't publish crash fixes without checking with envoy-setec. I think this is hard enough to exploit that we don't have to treat it as a zero-day, but let's err on the side of caution.
Second, it looks like this may be a generic problem: having any "main thread" object register for thread-local callbacks seems like a recipe for deletion races. I'd lean towards a solution that works beyond the proxy filter - I'd be inclined to change the function signature for adding the callback to take a shared pointer (here, to a wrapper shim that we can detach from the DFP config), such that the shim will self-delete once no worker threads point to it.
Thoughts?
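The shared-pointer shim idea above can be sketched roughly as follows. This is a minimal illustration, not Envoy's actual API: `ClusterUpdateCallbacks` here is a simplified stand-in, and `CallbacksShim`, `detach`, and `demoShimLifetime` are invented names.

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical stand-in for the cluster-update callback interface.
class ClusterUpdateCallbacks {
public:
  virtual ~ClusterUpdateCallbacks() = default;
  virtual void onClusterAddOrUpdate(const std::string& cluster) = 0;
};

// Shim owned via shared_ptr. Workers keep shared_ptr copies, so the shim
// self-deletes only after the last worker drops its reference; the main
// thread detaches the real target first, making late invocations no-ops.
// (A production version would need to make detach() safe against
// concurrent invocation, e.g. an atomic pointer or per-thread posting.)
class CallbacksShim : public ClusterUpdateCallbacks {
public:
  explicit CallbacksShim(ClusterUpdateCallbacks* target) : target_(target) {}
  // Called from the filter/config destructor on the main thread.
  void detach() { target_ = nullptr; }
  void onClusterAddOrUpdate(const std::string& cluster) override {
    if (target_ != nullptr) {
      target_->onClusterAddOrUpdate(cluster);
    }
  }
private:
  ClusterUpdateCallbacks* target_;
};

// Demo: a worker still holding the shim after the filter detached.
int demoShimLifetime() {
  struct CountingTarget : ClusterUpdateCallbacks {
    int calls = 0;
    void onClusterAddOrUpdate(const std::string&) override { ++calls; }
  };
  CountingTarget target;
  auto shim = std::make_shared<CallbacksShim>(&target);
  std::vector<std::shared_ptr<ClusterUpdateCallbacks>> workers{shim, shim};

  workers[0]->onClusterAddOrUpdate("foo");  // reaches the target
  shim->detach();                           // "filter destructor" ran
  workers[1]->onClusterAddOrUpdate("bar");  // safe no-op, nothing dangles
  return target.calls;                      // only the first call landed
}
```

The key property is that the object the workers actually dereference (the shim) has shared ownership, while the object with a main-thread lifetime (the filter) is only ever reached through a pointer that can be cleared before destruction.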
/wait
@alyssawilk Thanks a lot for reviewing.
Unless I am missing something, I believe that the protocol for reporting crashes was followed. The crash was originally reported to the setec group and the reply was to log it as a public issue: #32427.
That sounds like a good idea. Let me experiment with this. The other option would be to modify
@alyssawilk I created another PR: #32798, which solves the problem at a much lower level and is probably more elegant.
This pull request has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in 7 days if no further activity occurs. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!
Closing. Fixed in #33303.
Commit Message:
dynamic fwd proxy: fix race condition in filter destructor
Additional Description:
Dynamic fwd proxy differs from other filters in that it controls the upstream cluster. The filter registers itself (its `this` pointer) in ClusterManager as cluster callbacks: https://github.com/envoyproxy/envoy/blob/release/v1.29/source/extensions/filters/http/dynamic_forward_proxy/proxy_filter.h#L37. In the filter's destructor, the tls slot destructor removes the registered callbacks. The problem is that tls slots are not destructed atomically: instructions to release the slots are posted to each worker thread while the main thread continues with the filter's destructor. This means there is a short period of time when the filter is gone (its memory has been released) but each worker thread still points to the deleted callbacks. If during that window a worker thread iterates over the callbacks, as here: https://github.com/envoyproxy/envoy/blob/release/v1.29/source/common/upstream/cluster_manager_impl.cc#L1314, it will access a callback pointer which has already been released.
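The window described above can be modeled deterministically in a single thread: the "worker" is just a task queue, and the slot release is a *posted* task, so the main thread finishes the destructor before the worker ever runs it. All names here (`demoRaceWindow`, the queue-as-dispatcher) are illustrative, not Envoy code.

```cpp
#include <functional>
#include <memory>
#include <queue>

// Returns 1 if the "worker" observed the filter already freed when its
// posted deregistration task finally ran -- i.e. the race window exists.
int demoRaceWindow() {
  std::queue<std::function<void()>> worker_queue;  // worker's dispatcher
  bool dangling_access = false;

  auto filter = std::make_shared<int>(42);   // stands in for the filter
  std::weak_ptr<int> worker_view = filter;   // worker's pointer to it

  // Filter destructor: the TLS slot release is only *posted* to the
  // worker's queue, it does not run synchronously...
  worker_queue.push([&worker_view, &dangling_access] {
    // Has the filter already been freed by the time the worker runs this?
    dangling_access = worker_view.expired();
  });
  // ...and the main thread continues: the filter memory is released now.
  filter.reset();

  // The worker drains its queue only later. In the real code, iterating
  // the callbacks inside this window dereferences freed memory.
  while (!worker_queue.empty()) {
    worker_queue.front()();
    worker_queue.pop();
  }
  return dangling_access ? 1 : 0;
}
```

With a `weak_ptr` the demo merely *observes* the expired object; the real crash comes from a raw pointer that cannot make that check.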
I observed a crash when I reconfigured a dynamic fwd proxy 2 times per second: the code crashed after ~72 hrs. When I reconfigured the filter 50 times per second, the code crashed within 10 minutes.
The fix is to introduce some ordering when the destructor is called. When the callbacks are added to each thread, a counter is incremented; the counter is decremented on each thread when the callbacks are deregistered. The destructor waits on a condition variable until the `this` pointer has been removed from all threads and the counter drops to zero.
Risk Level: Low
Testing: Manual. It is difficult to simulate thread switching in unit/integration tests.
Docs Changes: No
Release Notes: No
Platform Specific Features: No
Fixes #32427
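The counter-and-condition-variable scheme the commit message describes can be sketched as below. This is a minimal standalone illustration with invented names (`PendingDeregistrations`, `demoOrderedDestruction`), not the actual patch.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// Tracks on how many worker threads the callbacks are still registered.
class PendingDeregistrations {
public:
  // Called once per worker thread when the callbacks are installed.
  void onRegistered() {
    std::lock_guard<std::mutex> lock(mutex_);
    ++count_;
  }
  // Called on each worker thread when its callbacks are removed.
  void onDeregistered() {
    std::lock_guard<std::mutex> lock(mutex_);
    if (--count_ == 0) {
      cv_.notify_all();
    }
  }
  // Called in the filter destructor: block until no worker still holds
  // the `this` pointer, so the memory can be released safely.
  void waitUntilZero() {
    std::unique_lock<std::mutex> lock(mutex_);
    cv_.wait(lock, [this] { return count_ == 0; });
  }

private:
  std::mutex mutex_;
  std::condition_variable cv_;
  int count_ = 0;
};

// Demo: three "workers" deregister asynchronously; the "destructor"
// blocks until all of them have finished.
int demoOrderedDestruction() {
  PendingDeregistrations pending;
  const int workers = 3;
  for (int i = 0; i < workers; ++i) {
    pending.onRegistered();
  }
  std::vector<std::thread> threads;
  for (int i = 0; i < workers; ++i) {
    threads.emplace_back([&pending] { pending.onDeregistered(); });
  }
  pending.waitUntilZero();  // safe to free the filter after this returns
  for (auto& t : threads) {
    t.join();
  }
  return 0;  // reaching here means the wait completed
}
```

The predicate form of `cv_.wait` handles spurious wakeups, and the counter is only touched under the mutex, so the destructor cannot return while any worker has yet to deregister.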