This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

Clarify when to use StatefulSet instead of Deployment for Charts with PVC #1863

Closed
apeschel opened this issue Aug 25, 2017 · 39 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@apeschel
Contributor

apeschel commented Aug 25, 2017

There seems to be a recurring bad practice among the charts in this repository: using a Deployment to manage pods that use Persistent Volume Claims, rather than the more appropriate StatefulSet.

To demonstrate just how pervasive the problem is, one can compare the list of charts using a StatefulSet vs a Deployment.

The list of stateful charts using a StatefulSet:

$ git grep -li 'kind: *StatefulSet' |
    awk -F '/' '{print $1}'
cockroachdb
concourse
consul
ipfs
memcached
minio
mongodb-replicaset
rethinkdb

versus the stateful charts using a Deployment:

$ git grep -l -i 'kind: *Deployment' |
    xargs grep -i PersistentVolumeClaim |
    awk -F '/' '{print $1}' |
    sort -u
artifactory
chronograf
dokuwiki
drupal
factorio
ghost
gitlab-ce
gitlab-ee
grafana
influxdb
jasperreports
jenkins
joomla
kapacitor
magento
mariadb
mediawiki
minecraft
minio
mongodb
moodle
mysql
odoo
opencart
openvpn
orangehrm
osclass
owncloud
percona
phabricator
phpbb
postgresql
prestashop
prometheus
rabbitmq
redis
redmine
rocketchat
sentry
testlink
traefik
wordpress 

Hopefully I'm not completely missing something here -- please let me know if I overlooked a good reason why these charts are using Deployments instead of StatefulSets.

Assuming that I'm not completely off in the weeds, there are a few clear asks here:

  • Add a requirement to the contribution guidelines that stateful charts use a StatefulSet
  • Require new stateful charts to use a StatefulSet before they are accepted
  • Slowly convert the existing stateful charts to use StatefulSets instead of Deployments
@dhilipkumars
Contributor

@apeschel Thanks for the issue. I totally agree with you; I have been thinking about this recently as well. As part of Kubernetes 1.8 and 1.9, sig-apps is expecting more feedback from the community with regard to StatefulSet. Migrating stateful applications from Deployments to StatefulSets is one of the best ways to start getting feedback from users.

We have already started reasoning with (new) chart contributors about their choice of deployments over statefulsets for stateful applications.

I think (apart from adding it to the best practices) we should start by migrating well-known DBs and K/V stores from Deployments to StatefulSets:

redis
mysql
minio
mariadb
mongodb
postgresql

cc: @kow3ns

@jfelten
Contributor

jfelten commented Sep 5, 2017

StatefulSets are not applicable in all cases. For example, some pods need to share a PVC, whereas StatefulSets are designed so that each pod is backed by its own storage. In that case a Deployment is more appropriate.
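
A minimal sketch of that shared-storage case (all names here are hypothetical, not from any chart in this repository): several Deployment replicas mount one ReadWriteMany claim, something volumeClaimTemplates cannot express because it creates a separate PVC per pod.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: shared-data  # every replica mounts the same claim
```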

@etiennetremel
Contributor

#3005

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 16, 2018
@ghost

ghost commented Apr 14, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 14, 2018
@ghost

ghost commented Apr 14, 2018

This is most definitely still an issue.

@dnauck

dnauck commented Jun 20, 2018

That's a huge issue with RWO (e.g. block storage) PVCs like Longhorn. You cannot upgrade the chart, because the new pod cannot mount the storage still in use by the old pod.

Even with NFS and co. it is very dangerous: imagine the database pod needs to be upgraded, and a new pod is started that accesses the same storage and files as the still-running old pod.

@consideRatio
Contributor

@jfelten

Stateful sets are not applicable in all cases. For example some pods need to share a pvc, whereas stateful sets are designed so that each pod is backed by its own storage. In that case a Deployment is more appropriate.

StatefulSets allow you to use volumeClaimTemplates, but you can also declare volumes as you do within Deployments, and a volumeMount for a container in the pod. At least I've done so, though I did not use the volumeClaimTemplates field at the same time.
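
As a sketch of that combination (names are hypothetical), a StatefulSet can mix a per-pod claim from volumeClaimTemplates with an ordinary volume declared exactly as in a Deployment:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres
          volumeMounts:
            - name: data           # per-pod PVC from the template below
              mountPath: /var/lib/postgresql/data
            - name: shared-config
              mountPath: /etc/config
      volumes:
        - name: shared-config      # an ordinary volume, declared as in a Deployment
          configMap:
            name: db-config
  volumeClaimTemplates:
    - metadata:
        name: data                 # yields PVCs data-db-0, data-db-1, ...
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 8Gi
```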

@wernight

Another advantage of a StatefulSet is that you can helm delete --purge RELEASE-NAME and re-create it with the same name, and it will keep and reuse the data. There is a much lower risk of deleting data.

There are indeed still cases where a single volume is used by multiple pods. That case is more advanced, since most volume types support only RWO, and those that support RWM tend to be slower. A chart could still use a StatefulSet but switch to an RWM PVC when more than one replica is requested (or via a value).
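
A hypothetical Helm template fragment for that idea, assuming a `replicaCount` value (not taken from any existing chart):

```yaml
# In the PVC spec of a chart template: pick the access mode from the
# requested replica count (replicaCount is an assumed values.yaml key).
accessModes:
{{- if gt (int .Values.replicaCount) 1 }}
  - ReadWriteMany   # shared volume once several pods need it
{{- else }}
  - ReadWriteOnce   # single-pod volume, supported by far more provisioners
{{- end }}
```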

@stale

stale bot commented Sep 9, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale stale bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 9, 2018
@desaintmartin
Collaborator

desaintmartin commented Sep 14, 2018

Related to #809

As an afterthought, I think switching to a StatefulSet for DBs like postgres that don't natively scale is good for one thing and one thing only: volumeClaimTemplates, and the ability to delete a Release and then reinstall it (without changing values to use a custom PVC) while still keeping the PVC.
This would be a very helpful feature for my use cases (a lot of test Releases that are automatically created as needed and then deleted).

@stale stale bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 14, 2018
@consideRatio
Contributor

@desaintmartin ah, that is less troublesome with a StatefulSet?! Nice!

I'm currently doing something quite troublesome whenever that needs to be done: https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/master/CHANGELOG.md

@desaintmartin
Collaborator

desaintmartin commented Sep 14, 2018

Actually, with Deployments, you need to declare the PVC yourself (AFAIK). So when you delete a release but set the PVC to stay (using annotations), Helm will complain on reinstall that the PVC already exists. You then need to change values.yaml (when possible) to point at the manually managed PVC instead of creating it automatically.

With StatefulSet, it's automated.
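
For reference, the Deployment-side workaround being described looks roughly like this (the claim name is hypothetical). The `helm.sh/resource-policy: keep` annotation makes Helm leave the PVC behind on delete, which is exactly what then triggers the "already exists" complaint on reinstall:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-release-data              # hypothetical name
  annotations:
    "helm.sh/resource-policy": keep  # PVC survives helm delete...
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Gi
# ...but on reinstall Helm finds the PVC already exists and complains,
# so values must then point at the existing claim instead of creating one.
```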

@consideRatio
Contributor

consideRatio commented Sep 14, 2018

Oh, so the PVC created from the StatefulSet's template isn't managed by Helm, and will remain.
A PVC created for a Deployment by Helm, on the other hand, is managed by Helm and will be deleted. The underlying PV can be retained if the storageClass used has a reclaimPolicy of Retain rather than Delete. But the PV cannot be reused by a new PVC with a new uid until it has been made Available again, and that won't happen unless:

# makes a `Released` PV `Available` again
kubectl patch pv $pv --patch '{"spec":{"claimRef":{"uid":null}}}'

So the StatefulSet binds to the same PV again by requesting the same PVC, but if the PVC is deleted, one has to do extra work no matter what. A new PVC, whether created by the StatefulSet or by Helm, will get a new uid either way, I figure.


To summarize, the benefit you see @desaintmartin is that StatefulSets' PVCs are not managed by Helm, and will be reused by StatefulSet pods coming and going. This differs from a Deployment plus a Helm-managed PVC that comes and goes: the PV is bound to a specific PVC with a certain uid, and recreating that PVC forces you to make the PV Available again manually, if its reclaimPolicy was Retain at all; if not, it has simply been deleted.

@seanlaff

seanlaff commented Oct 4, 2018

I was just bitten badly by this chart not following that pattern. I did a helm delete and a helm install, but I lost all of my dashboards because the PVC vanished. The rest of my services that used persistence restarted as intended, because they were StatefulSets.

@desaintmartin
Collaborator

#8004 proposes to switch to StatefulSet. It might take some time to get this done.

@stale

stale bot commented Nov 3, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale stale bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 3, 2018
@victornoel
Collaborator

@scottrigby you say:

There are cases where that's not a good idea

Can you expand on that? Are you for example making a distinction between transient state (caches for example) and persistent state (let's say minio or postgresql), or is it about something else?

crackmac pushed a commit to crackmac/charts that referenced this issue Mar 29, 2019
* switching unifi chart to StatefulSet

* based on the persistent nature of this chart as well as [this
discussion](helm#1863), migrating the
chart to a StatefulSet instead of a deployment. As a result bumping the
major version
* bumping unifi controller to the latest stable version (5.10.19)
* adding @mcronce to the OWNERS file

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* using volumeClaimTemplates for statefulSet

* also updating label syntax to current helm standards (e.g.
`app.kubernetes.io/name`)

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* fixing indenting

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* using Parallel podManagementPolicy

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* revert to Deployment and leverage strategy types

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* include readme entry for strategyType

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* hard-code replica count and add mcronce to Chart maintainers

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* fixing linting error

Signed-off-by: Jeff Billimek <jeff@billimek.com>
crackmac pushed a commit to crackmac/charts that referenced this issue Mar 29, 2019
* switching node-red chart to StatefulSet

* based on the persistent nature of this chart as well as [this discussion](helm#1863), migrating the chart to a StatefulSet instead of a deployment. As a result bumping the major version
* bumping node-red docker image to the latest stable version
* using volumeClaimTemplates for statefulSet

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* using Parallel podManagementPolicy

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* revert to Deployment and leverage strategy types

Signed-off-by: Jeff Billimek <jeff@billimek.com>

* hard-code replica count

Signed-off-by: Jeff Billimek <jeff@billimek.com>
crackmac pushed a commit to crackmac/charts that referenced this issue Mar 29, 2019
crackmac pushed a commit to crackmac/charts that referenced this issue Mar 29, 2019
devnulled pushed a commit to devnulled/charts that referenced this issue Apr 25, 2019
devnulled pushed a commit to devnulled/charts that referenced this issue Apr 25, 2019
dermorz pushed a commit to dermorz/charts that referenced this issue Apr 26, 2019
dermorz pushed a commit to dermorz/charts that referenced this issue Apr 26, 2019
@juliohm1978
Contributor

Coming in late to this discussion with an interesting question...

What happens when you are using StatefulSets without a dynamic PV provisioning solution?

I'll admit manually creating each PV to match a specific PVC is awful, but in this case it needs to be done anyway. PVCs are created slowly, as each pod in the StatefulSet becomes ready.

Would it be possible to prepare the chart template to automatically assign a PV volumeName in the PVC spec?
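
One sketch of how static provisioning is usually wired up (hypothetical names, not from this thread's charts): since a single volumeName in the claim template would be stamped onto every templated PVC, the common pattern is instead to label the pre-created PVs and let each PVC select one via a label selector with dynamic provisioning disabled.

```yaml
# Pre-created PV, one per expected replica:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: db-pv-0
  labels:
    app: db                     # label matched by the claim template below
spec:
  capacity:
    storage: 8Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /mnt/db-pv-0
---
# In the StatefulSet spec:
# volumeClaimTemplates:
#   - metadata:
#       name: data
#     spec:
#       storageClassName: ""    # empty string disables dynamic provisioning
#       selector:
#         matchLabels:
#           app: db             # binds only to matching pre-created PVs
#       accessModes: ["ReadWriteOnce"]
#       resources:
#         requests:
#           storage: 8Gi
```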

edsoncsouza pushed a commit to socialbase/charts that referenced this issue May 14, 2019
edsoncsouza pushed a commit to socialbase/charts that referenced this issue May 14, 2019
goshlanguage pushed a commit to goshlanguage/charts that referenced this issue May 17, 2019
goshlanguage pushed a commit to goshlanguage/charts that referenced this issue May 17, 2019
eyenx pushed a commit to eyenx/charts that referenced this issue May 28, 2019
eyenx pushed a commit to eyenx/charts that referenced this issue May 28, 2019
@fdlk

fdlk commented May 31, 2019

Another advantage of StatefulSet is that you can helm delete --purge RELEASE-NAME and re-create it with the same name, and it'll keep&reuse the data. There is a lot lower risk of deleting data.

One person's feature is another person's bug :)
I come here wondering why my postgres deployments contain old data even though I purged the previous deployment.

I have a chart that uses postgres as a subchart. After reading all this I still don't get how to configure it in such a way that the data does get purged. I'm not even sure that it can be done at all. Can it @wernight @desaintmartin ?

@desaintmartin
Collaborator

Unfortunately, right now, it cannot, as it has not been created by Helm. See helm/helm#5156

@dylanpiergies

dylanpiergies commented Oct 13, 2019

This is why:

https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes#deployments_vs_statefulsets

I've actually seen the case where a new Jenkins master pod is unable to start because the other is holding onto its PersistentVolumeClaim. Fundamentally, the Jenkins master is a stateful application, and needs to be handled as such.

@shapeofarchitect

shapeofarchitect commented Dec 11, 2019

@dylanpiergies I am adding the same for Sonarqube, which shows the same behavior as the Jenkins master. The PVC required by the service is being held by the existing pod, and updates fail. See the logs below:

Warning FailedAttachVolume 42m attachdetach-controller Multi-Attach error for volume "pvc-02341115-174c-xxxx-xxxxxxx" Volume is already used by pod(s) sonarqube-sonarqube-xxxxxx-xxxxx

Warning FailedMount 90s (x18 over 40m) kubelet, aks-basepool-XXXXX Unable to mount volumes for pod "sonarqube-sonarqube-xxxxx-xxxxx_xxxxx(cd802a4d-1c02-11ea-847b-xxxxxxx)": timeout expired waiting for volumes to attach or mount for pod "xxxx-pods"/"sonarqube-sonarqube-xxxxxxxxx". list of unmounted volumes=[sonarqube]. list of unattached volumes=[config install-plugins copy-plugins sonarqube tmp-dir default-token-ztvcd]
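
For charts that stay on a Deployment, the mitigation several of the referenced commits settled on ("revert to Deployment and leverage strategy types") is the Recreate strategy, sketched here with hypothetical names: the old pod is terminated before the new one starts, so the RWO volume can detach first, at the cost of brief downtime during upgrades.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube
spec:
  strategy:
    type: Recreate   # old pod stops first, avoiding the Multi-Attach error
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
        - name: sonarqube
          image: sonarqube
          volumeMounts:
            - name: data
              mountPath: /opt/sonarqube/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: sonarqube-data   # hypothetical existing claim
```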

billimek added a commit to k8s-at-home/charts that referenced this issue Sep 5, 2020
billimek added a commit to k8s-at-home/charts that referenced this issue Sep 5, 2020
@bridgetkromhout
Member

Thanks, all. I'll close the issue here as this repo is not active. If desired, please contribute to Helm docs for clarifications: https://github.com/helm/helm-www/
