listener: listen socket factory can be cloned from listener draining filter chain #18686
Conversation
Signed-off-by: Yuchen Dai <silentdai@gmail.com>
/assign @mattklein123
I would argue that #18677 is also needed. In the test case, we add the listener 0.0.0.0:8080. It is fortunate that the requests that arrived prior to the add are queued; however, the timing of the add-back is pretty much unpredictable.
/retest
Retrying Azure Pipelines:
Thanks, makes sense to me with a small comment.
/wait
Signed-off-by: Yuchen Dai <silentdai@gmail.com>
Thanks! We should backport this obviously.
There is no issue if we add a fresh new listener after we stop the listener on that address. Think about why the added-back fresh listener can duplicate the socket of the listener that is draining its filter chain. The state of the system is described here:
Thank you for the offline guidance and the quick approval!
/backport Only need to backport to 1.20
Sorry, I still don't understand. If you want to continue the discussion on the other PR we can talk there, but I don't see any scenario in which that PR would help. The real fix is cloning the socket if we want to reuse it and not drop anything.
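For context, a minimal sketch of the failure mode under discussion, assuming plain POSIX sockets on Linux rather than Envoy internals (`make_listener` is a hypothetical helper): a re-added listener that opens a fresh SO_REUSEPORT socket on the same address gets its own inode and its own accept queue, so connections the kernel already queued on the draining listener's fd never reach the new one.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

// Hypothetical helper: open a fresh listening TCP socket with SO_REUSEPORT,
// the way a brand-new passive socket (new inode) would be created.
int make_listener(uint16_t port) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  int one = 1;
  setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
  addr.sin_port = htons(port);
  bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
  listen(fd, 128);
  return fd;
}

int main() {
  int old_fd = make_listener(8080); // held by the draining listener
  int new_fd = make_listener(8080); // re-added listener: new inode, new queue
  // The kernel now load-balances incoming connections across the two accept
  // queues. Connections already queued on old_fd are dropped when the drain
  // timeout finally closes it; new_fd never sees them.
  close(old_fd);
  close(new_fd);
  return 0;
}
```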
Commit Message: listener: listen socket factory can be cloned from listener draining filter chain
Currently the in-place updated old listener holds the listen fd during the drain timeout.
If the corresponding active listener is removed and added back, the added-back listener creates fresh new passive sockets with new inodes.
The kernel spreads the accepted requests across the old listen fd and the new listen fd.
The sockets accepted on the old listen fd will be lost. Even if the kernel/ebpf somehow requeued these sockets,
they would be handled by Envoy only after the drain timeout is triggered.
This PR duplicates the listen fd from the old listener. The new listen fd shares the same inode.
The newly created listener will consume the accepted sockets, if any.
Signed-off-by: Yuchen Dai <silentdai@gmail.com>
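To make the mechanism concrete, here is a minimal POSIX sketch of the idea (an assumed illustration, not the actual Envoy change): `dup()` yields a second fd that refers to the same open file description and socket inode, so both fds share a single accept queue, and connections queued while the old listener drains can still be accepted through the duplicate.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
  // The old listener's socket, already bound and listening on 0.0.0.0:8080.
  int old_fd = socket(AF_INET, SOCK_STREAM, 0);
  int one = 1;
  setsockopt(old_fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(8080);
  bind(old_fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
  listen(old_fd, 128);

  // Duplicate the listen fd for the added-back listener: both fds refer to
  // the same socket (same inode), hence the same accept queue. A fresh
  // socket()/bind()/listen() would instead create a second, separate queue.
  int new_fd = dup(old_fd);

  // When the drain timeout fires, the old listener closes its fd...
  close(old_fd);

  // ...but connections queued beforehand are still accepted through new_fd.
  int conn = accept(new_fd, nullptr, nullptr);
  if (conn >= 0) {
    printf("accepted a connection queued on the shared listen socket\n");
    close(conn);
  }
  close(new_fd);
  return 0;
}
```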
Additional Description:
Risk Level:
Testing: Unit tests and a number of Istio e2e tests
Docs Changes:
Release Notes:
Platform Specific Features:
[Optional Runtime guard:]
Fix #18616
[Optional Fixes commit #PR or SHA]
[Optional Deprecated:]
[Optional API Considerations:]