StatefulSet delete behavior changed in v1.11 #68627
/sig apps

It seems the default value of pod management policy changed. You can choose Parallel and see if that fixes it: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
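For reference, `podManagementPolicy` is part of the StatefulSet spec and has to be set at creation time. A minimal sketch of a manifest using it (the `web`/nginx names are illustrative, not taken from this issue):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                       # placeholder name
spec:
  serviceName: web                # assumes a headless Service named "web" exists
  replicas: 3
  podManagementPolicy: Parallel   # default is OrderedReady
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
```

As the discussion below establishes, this field governs how the controller launches and scales pods, not what happens on a cascading delete of the StatefulSet itself.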
Creation:

Deletion:
I don't see a default change between v1beta1 and v1.

/area stateful-apps

Maybe the reason is the default delete option for …

/sig cli

That change was made between 1.11.0 and 1.11.1.
I think this changed with the removal of reapers in #63979. There used to be a reaper for StatefulSet that took care of orderly and graceful termination from the client, but now termination of pods is done by the garbage collector.
If behavior is represented in the API, it should not depend on a specific client-side implementation for that behavior.
It is not clear from the documentation of that field that it applies to deletion of the statefulset, only to scale-down. If it is intended to apply to statefulset deletion as well, it must be implemented via finalizers/controller so that the field is honored when any client does a single cascading delete (as kubectl does in 1.11, but as any other client could always have done).
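For context, a cascading delete is just a DELETE request carrying a propagation policy in `DeleteOptions`, so any API client can trigger it; a sketch, with `<apiserver>` and the object name as placeholders:

```sh
# kubectl 1.11 performs a cascading delete by default:
kubectl delete statefulset web

# Roughly equivalent raw API call; the garbage collector then removes the
# pods with no client-side ordering. Auth flags omitted for brevity.
curl -X DELETE \
  "https://<apiserver>/apis/apps/v1/namespaces/default/statefulsets/web" \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}'
```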
/remove-sig cli

@smarterclayton @kubernetes/sig-apps-api-reviews for question about statefulset deletion behavior and the podManagementPolicy API field
Stateful set deletion never had any guarantees, like deployments or daemonsets. Consumers must scale to zero and wait for ack. It’s unfortunate that the removal of reapers exposed this to end users. I think it’s reasonable to consider changes to StatefulSets to offer a simpler path for controlled shutdown within the controller.
Ah, I missed that this was happening for deletion and not rolling update. What is the guarantee for deployments?
I have created a PR to update the documentation to reflect the current behavior and guarantees provided by StatefulSets: kubernetes/website#10380. I discussed this with @enisoc and @kow3ns, and there probably is a way to do this using a custom finalizer to prevent the garbage collector from deleting the pods until the StatefulSet controller can scale down to 0. It gets a little more complicated when considering upgrades and rollbacks, but we think it can be solved. However, it is not clear that the benefits of adding this outweigh the cost. It is pretty easy to work around by simply scaling down the StatefulSet before deleting it, as sketched below. We think that with updated documentation we can keep the current behavior and revisit it if there is demand for a better solution.
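A sketch of that workaround (the `web` name and `app=web` label are placeholders); scaling to zero first lets the controller terminate pods one at a time before the object is deleted:

```sh
# Scale to zero; with the default OrderedReady policy the controller
# terminates pods in reverse ordinal order, one at a time.
kubectl scale statefulset web --replicas=0

# Wait until no pods from the set remain.
while [ -n "$(kubectl get pods -l app=web -o name)" ]; do
  sleep 2
done

# Now the cascading delete has nothing left to tear down in parallel.
kubectl delete statefulset web
```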
/assign kow3ns
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale

/assign @krmayankk
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue.
/kind bug
What happened:
In k8s v1.11.1 all statefulset pods are deleted at the same time. This is what I observe:
What you expected to happen:
In k8s v1.9.3 StatefulSet deletion adheres to documentation and follows “Ordered, graceful deletion and termination”. In v1.9.3 when I delete the statefulset this is what I see:

How to reproduce it (as minimally and precisely as possible):
Create, then delete the following:
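The original manifest is not preserved in this copy. A repro sketch, assuming a StatefulSet like the earlier `web` example but with the default OrderedReady policy (i.e. `podManagementPolicy` omitted):

```sh
kubectl apply -f web-statefulset.yaml   # hypothetical file with the manifest
kubectl get pods -w -l app=web          # web-0, web-1, web-2 come up in order

kubectl delete statefulset web
kubectl get pods -w -l app=web          # v1.9.3 (reaper): pods removed one at a time
                                        # v1.11.1 (GC): all pods terminate at once
```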
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`):
- OS (e.g. from /etc/os-release): NAME="Ubuntu", VERSION="16.04.2 LTS (Xenial Xerus)"
- Kernel (e.g. `uname -a`):