fix: terminate goroutines gracefully #21
Signed-off-by: Zespre Chang <zespre.chang@suse.com>
If my understanding is correct, there might be an issue here; please take a look at my comments in sequence.
Just a nit; I'll approve it if you're not planning to change it.
Signed-off-by: Zespre Chang <zespre.chang@suse.com>
Co-authored-by: Jack Yu <jack.yu@suse.com>
LGTM, thanks for that.
LGTM, thanks.
Problem:
The context is not propagated appropriately, so the goroutines cannot coordinate correctly when some of them encounter a failure.
Solution:
Pass the context down and group the goroutines so that they clean up their resources after receiving an interrupt/terminate signal.
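A minimal sketch of this pattern, assuming `signal.NotifyContext` and `golang.org/x/sync/errgroup` (the actual wiring in the controller may differ; the HTTP server here is only a stand-in for the agent's long-running goroutines):

```go
package main

import (
	"context"
	"errors"
	"log"
	"net/http"
	"os/signal"
	"syscall"
	"time"

	"golang.org/x/sync/errgroup"
)

func main() {
	// Cancel the root context on SIGINT/SIGTERM.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	// Group the goroutines: if one returns an error, the shared
	// context is canceled and the siblings can clean up.
	eg, egCtx := errgroup.WithContext(ctx)

	srv := &http.Server{Addr: ":8080"}

	eg.Go(func() error {
		// Serve until Shutdown is called by the goroutine below.
		if err := srv.ListenAndServe(); !errors.Is(err, http.ErrServerClosed) {
			return err
		}
		return nil
	})

	eg.Go(func() error {
		// Block until a signal arrives or a sibling fails, then shut down.
		<-egCtx.Done()
		shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		return srv.Shutdown(shutdownCtx)
	})

	// Wait returns the first non-nil error from the group.
	if err := eg.Wait(); err != nil {
		log.Fatal(err)
	}
}
```

With `errgroup.WithContext`, the first goroutine to return an error cancels the shared context, so its siblings observe `ctx.Done()` and release their resources instead of leaking.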
Related Issue:
harvester/harvester#5072
Test plan:
1. Create an IPPool object (adjust the content according to your environment setup); see the command sketch after this list.
2. Wait a few seconds for the agent Pod to become ready, then monitor the log (keep it open and start a new shell for the next step).
3. Remove the IPPool object to trigger the agent Pod teardown.
4. Go back to the log monitor; the teardown logs should show the goroutines shutting down cleanly.
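The steps above roughly correspond to commands like the following; the manifest filename and the Pod label selector are assumptions, so adjust them to your setup:

```sh
# 1. Create the IPPool object (ippool.yaml is a placeholder manifest).
kubectl apply -f ippool.yaml

# 2. In a separate shell, follow the agent Pod log (the label selector is hypothetical).
kubectl logs -f -l app=vm-dhcp-agent

# 3. Remove the IPPool object to trigger the agent Pod teardown,
#    then watch the first shell for the teardown logs.
kubectl delete -f ippool.yaml
```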