Clarify when to use StatefulSet instead of Deployment for Charts with PVC #1863
@apeschel Thanks for the issue. Totally agree with you; I have been thinking about this recently as well. As part of Kubernetes 1.8 and 1.9, sig-apps is expecting more feedback from the community with regards to StatefulSet, and migrating stateful applications from Deployment to StatefulSet is one of the best ways to start getting feedback from users. We have already started reasoning with (new) chart contributors about their choice of workload controller. I think (apart from adding this to the best practices) we should start by migrating well-known DBs and K/V stores from Deployments to StatefulSets.
cc: @kow3ns
StatefulSets are not applicable in all cases. For example, some pods need to share a PVC, whereas StatefulSets are designed so that each pod is backed by its own storage. In that case a Deployment is more appropriate.
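A minimal sketch of the shared-storage case described above (illustrative names; assumes an RWX-capable volume): several Deployment replicas mounting one pre-existing claim, which a StatefulSet's per-pod volumeClaimTemplates cannot express.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: shared-data
              mountPath: /data
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: shared-data   # one PVC shared by all replicas
```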
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
This is most definitely still an issue.
That's a huge issue with RWO (e.g. block storage) PVCs like Longhorn: you cannot upgrade the chart, because the pod from the upgrade cannot mount the storage still in use by the old pod. Even with NFS & co. it is very dangerous: imagine the database pod needs to be upgraded, and a new pod is started that accesses the same storage and files as the still-running old pod.
StatefulSets allow you to use a volumeClaimTemplate, so each replica gets a PVC of its own.
Another advantage of StatefulSet is that you can delete a release and reinstall it without losing the data. There are indeed still cases where a single volume is used by multiple Pods. Those are more advanced: most volume types support only RWO, and the ones that support RWM tend to be slow(er). A chart may use a StatefulSet but switch to an RWM PVC when more than one replica is requested (or expose this through a value).
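A minimal sketch of that last idea, assuming hypothetical values keys `replicaCount` and `persistence.size` and a hypothetical `mychart.fullname` helper: the chart keeps a single shared claim and only requests `ReadWriteMany` when more than one replica is asked for.

```yaml
# Illustrative chart template, not from an actual chart: a single
# shared PVC whose access mode depends on the requested replicas.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "mychart.fullname" . }}-data
spec:
  accessModes:
    {{- if gt (int .Values.replicaCount) 1 }}
    - ReadWriteMany   # shared across pods; needs an RWX-capable provisioner
    {{- else }}
    - ReadWriteOnce
    {{- end }}
  resources:
    requests:
      storage: {{ .Values.persistence.size }}
```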
As an afterthought, I think switching to StatefulSet for DBs like postgres that don't natively scale is good for one thing and one thing only: volumeClaimTemplates, and with them the ability to delete a Release then reinstall it (without changing values to use a custom PVC) and still have the PVC.
@desaintmartin ah, that is less troublesome with StatefulSet?! Nice! I'm currently doing something quite troublesome whenever that needs to be done: https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/master/CHANGELOG.md
Actually, with Deployments, you need to declare the PVC yourself (AFAIK). So when you delete a release but set the PVC to stay (using annotations), Helm will complain on reinstall that the PVC already exists. You then need to change values.yaml (when possible) to point to the existing PVC manually instead of creating it automatically. With StatefulSet, this is automated.
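A minimal sketch of that Deployment-side pattern, assuming hypothetical values keys (`persistence.existingClaim`, `persistence.size`) and a hypothetical `mychart.fullname` helper; the `helm.sh/resource-policy: keep` annotation is the standard way to tell Helm to leave a resource behind on delete.

```yaml
# Illustrative PVC template shipped alongside a Deployment.
{{- if not .Values.persistence.existingClaim }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ include "mychart.fullname" . }}
  annotations:
    "helm.sh/resource-policy": keep   # PVC survives helm delete
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: {{ .Values.persistence.size }}
{{- end }}
```

On reinstall, `persistence.existingClaim` would then be set to the surviving claim's name so the chart mounts it instead of recreating it.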
Oh, so the PVC created from the StatefulSet's volumeClaimTemplate isn't managed by Helm, and will remain.

So the StatefulSet binds to the same PV again by requesting the same PVC, but if the PVC is deleted, one has to do extra work no matter what: a new PVC, created by the StatefulSet or by Helm, will get a new uid no matter what. To summarize, the benefit you see @desaintmartin is that StatefulSets' PVCs are not managed by Helm, and will be reused by StatefulSet pods coming and going. This differs from a Deployment + PVC managed by Helm, which comes and goes, as the PV is bound to a specific PVC with a certain uid.
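For reference, this is roughly what the binding looks like from the PV side (an illustrative hand-written PV; names and the uid are made up): the PV's `claimRef` records the exact PVC it is bound to, including its uid, so a recreated PVC with a fresh uid no longer matches.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mydb-pv-0              # example name
spec:
  capacity:
    storage: 8Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  claimRef:                    # the PV remembers exactly which PVC owns it
    kind: PersistentVolumeClaim
    namespace: default
    name: data-mydb-0
    uid: 8f4c9a2e-0000-0000-0000-000000000000  # uid of the bound PVC (example)
  hostPath:
    path: /data/mydb-0
```

Clearing `spec.claimRef.uid` (or the whole `claimRef`) on a Retained PV is the usual extra work needed before a recreated PVC can bind to it again.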
I was just bitten badly by this chart not following that pattern. I did a helm delete and a helm install but I lost all of my dashboards because the PVC vanished. The rest of my services that used persistence restarted as intended because they were StatefulSets.
#8004 proposes to switch to StatefulSet. It might take some time to get this done.
@scottrigby you say:
Can you expand on that? Are you for example making a distinction between transient state (caches for example) and persistent state (let's say minio or postgresql), or is it about something else?
* switching unifi chart to StatefulSet
* based on the persistent nature of this chart as well as [this discussion](helm#1863), migrating the chart to a StatefulSet instead of a Deployment; as a result, bumping the major version
* bumping unifi controller to the latest stable version (5.10.19)
* adding @mcronce to the OWNERS file
* using volumeClaimTemplates for the StatefulSet
* also updating label syntax to current helm standards (e.g. `app.kubernetes.io/name`)
* fixing indenting
* using Parallel podManagementPolicy
* revert to Deployment and leverage strategy types
* include readme entry for strategyType
* hard-code replica count and add mcronce to Chart maintainers
* fixing linting error

Signed-off-by: Jeff Billimek <jeff@billimek.com>
* switching node-red chart to StatefulSet
* based on the persistent nature of this chart as well as [this discussion](helm#1863), migrating the chart to a StatefulSet instead of a deployment; as a result, bumping the major version
* bumping node-red docker image to the latest stable version
* using volumeClaimTemplates for the StatefulSet
* using Parallel podManagementPolicy
* revert to Deployment and leverage strategy types
* hard-code replica count

Signed-off-by: Jeff Billimek <jeff@billimek.com>
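The `Parallel` podManagementPolicy these commits mention looks like this in a StatefulSet spec (a sketch with illustrative names, not the actual chart): it lets pods start and terminate together instead of one at a time in ordinal order.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: node-red
spec:
  serviceName: node-red
  replicas: 1
  podManagementPolicy: Parallel   # don't wait for ordinal-order startup
  selector:
    matchLabels: { app: node-red }
  template:
    metadata:
      labels: { app: node-red }
    spec:
      containers:
        - name: node-red
          image: nodered/node-red
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
```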
Coming in late to this discussion with an interesting question: what happens when you are using StatefulSets without a dynamic PV provisioning solution? I'll admit manually creating each PV to match a specific PVC is awful, but it needs to be done anyway in this case, and the PVCs are created slowly, as each pod in the StatefulSet becomes ready. Would it be possible to prepare the chart template to automatically assign a PV volume name to the PVC spec?
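That could look roughly like this inside the StatefulSet (a sketch; `persistence.volumeName` is a hypothetical values key): `spec.volumeName` pre-binds the claim to a manually created PV. Note that each replica's generated PVC needs its own PV, so a single fixed volumeName only makes sense with `replicas: 1`.

```yaml
# Fragment of a StatefulSet spec (illustrative):
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      volumeName: {{ .Values.persistence.volumeName }}  # manually created PV
      resources:
        requests:
          storage: 8Gi
```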
One person's feature is another person's bug :) I have a chart that uses postgres as a subchart. After reading all this I still don't get how to configure it in such a way that the data does get purged. I'm not even sure that it can be done at all. Can it, @wernight @desaintmartin?
Unfortunately, right now, it cannot, as the PVC has not been created by Helm. See helm/helm#5156.
This is why: I've actually seen a case where a new Jenkins master pod was unable to start because the other one was holding onto its PersistentVolumeClaim. Fundamentally, the Jenkins master is a stateful application and needs to be handled as such.
@dylanpiergies I am adding the same for Sonarqube, which shows the same behavior as the Jenkins master: the PVC required by the service is held by the existing pod, and updates fail.
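A common mitigation when a chart stays on a Deployment with an RWO claim (this matches the "revert to Deployment and leverage strategy types" commits referenced above, though the manifest here is only a sketch): use the `Recreate` strategy so the old pod releases the volume before the new one starts, at the cost of downtime during upgrades.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube            # illustrative name
spec:
  replicas: 1
  strategy:
    type: Recreate           # default RollingUpdate can deadlock on an RWO PVC
  selector:
    matchLabels: { app: sonarqube }
  template:
    metadata:
      labels: { app: sonarqube }
    spec:
      containers:
        - name: sonarqube
          image: sonarqube:lts
          volumeMounts:
            - name: data
              mountPath: /opt/sonarqube/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: sonarqube-data
```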
Thanks, all. I'll close the issue here as this repo is not active. If desired, please contribute to Helm docs for clarifications: https://github.com/helm/helm-www/
There seems to be a recurring bad practice among the charts in this repository: using a Deployment to manage pods using Persistent Volume Claims, rather than the proper StatefulSet.
To demonstrate just how pervasive the problem is, one can compare the list of charts using a StatefulSet vs a Deployment.
The list of stateful charts using a StatefulSet:
versus the stateful charts using a Deployment:
Hopefully I'm not completely missing something here -- please let me know if I overlooked a good reason why these charts are using Deployments instead of StatefulSets.
Assuming that I'm not completely off in the weeds, there are a few clear asks here: