Applier hangs if schedule request buffer is full #926
nightkr added several commits to nightkr/kube-rs referencing this issue on Jun 8, 2022, one of them noting: "This fixes kube-rs#926, since we already run multiple reconcilers in parallel."
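That commit message hints at the direction of the fix: since multiple reconcilers already run in parallel, the result write-back does not need to block the loop that drains the schedule request buffer. Below is a minimal, hypothetical sketch of that idea (not the actual kube-rs patch), using a tokio bounded channel as a stand-in for the scheduler buffer: the write-back runs on its own task, so the receiver keeps draining and every send can eventually complete.

```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<u32>(1);
    tx.send(0).await.unwrap();

    while let Some(obj) = rx.recv().await {
        if obj >= 10 {
            break; // end the demo; dropping `rx` lets pending sends fail fast
        }
        // Writing back on a separate task keeps this loop free to drain the
        // buffer, so a full buffer delays progress instead of deadlocking.
        let tx = tx.clone();
        tokio::spawn(async move {
            tx.send(obj + 1).await.ok();
            tx.send(obj + 2).await.ok();
        });
    }
    println!("drained without hanging");
}
```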
Current and expected behavior
See #925 for more details.
The applier currently stops processing new scheduling requests while it writes back the result of a reconciliation. If the schedule request buffer is already full, this deadlocks: the buffer is never drained, because the applier itself is blocked trying to add to it.
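To make the failure mode concrete, here is a minimal, self-contained sketch of the pattern (hypothetical code using a tokio bounded mpsc channel as a stand-in for the scheduler buffer, not the actual applier internals): a single task that is responsible for both draining a bounded queue and sending into it hangs as soon as the buffer fills.

```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // Tiny buffer so the hang reproduces immediately.
    let (tx, mut rx) = mpsc::channel::<u32>(1);
    tx.send(0).await.unwrap();

    // One task plays both roles: it drains the schedule queue and writes
    // reconciliation results back into it.
    while let Some(obj) = rx.recv().await {
        // Each "reconciliation" schedules follow-up work. The first send
        // fills the buffer; the second blocks forever, because the only
        // task that could drain the buffer (this one) is stuck right here.
        tx.send(obj + 1).await.unwrap();
        tx.send(obj + 2).await.unwrap();
        unreachable!("the second send above never completes");
    }
}
```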
Thanks to @moustafab for reporting the issue and contributing a workaround.
Possible solution
Additional context
No response
Environment
This is a runtime issue, independent of K8s version
Configuration and features
kube 0.73.1
Affected crates
kube-runtime
Would you like to work on fixing this bug?
yes