This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

[stable/mariadb] stateful sets break with Helm v3 #19231

Closed
jkirkham-ratehub opened this issue Nov 28, 2019 · 16 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@jkirkham-ratehub

Describe the bug
After using helm 2to3 to migrate a release to Helm v3 we encountered an issue with the StatefulSets in MariaDB. The labels added to the volumeClaimTemplates include a key-value pair for "heritage". In Helm v2 this resolved to "Tiller", but in Helm v3 it now resolves to "Helm". This blocks our ability to upgrade because some of these key fields in a StatefulSet are meant to be immutable.
e.g.

heritage: {{ .Release.Service | quote }}

Version of Helm and Kubernetes:
Kubernetes 1.14.8
Helm 3.0.0

Which chart:
https://github.com/helm/charts/tree/master/stable/mariadb

What happened:
Upgrades now fail because of the Heritage label in the volumeClaimTemplate.

What you expected to happen:
No release name changes occurred, so it is expected that the Helm release with the StatefulSets can be upgraded.

How to reproduce it (as minimally and precisely as possible):
See above. This may affect other charts using StatefulSets.

Anything else we need to know:
I don't know of a work-around apart from migrating to a new DB.

@juan131
Collaborator

juan131 commented Nov 29, 2019

Hi @jkirkham-ratehub

After using helm 2to3 to migrate a release to helm v3

I guess you're talking about this tool: https://github.com/helm/helm-2to3

This blocks our ability to upgrade because some of these key fields in a StatefulSet are meant to be immutable.

You're 100% right! We need to find a way to support upgrading from releases that were moved from helm2 to helm3.

I guess we could use something like this as a workaround:

$ kubectl patch statefulset my-release-mariadb-master --type=json -p='[{"op": "remove", "path": "/spec/selector/matchLabels/heritage"}]'
$ kubectl patch statefulset my-release-mariadb-slave  --type=json -p='[{"op": "remove", "path": "/spec/selector/matchLabels/heritage"}]'
...
$ helm upgrade my-release ...

@carrodher @javsalgar @tompizmor what do you think?

@floretan

floretan commented Dec 2, 2019

I was about to submit another ticket when I saw this one. Here are concrete steps to reproduce:

helm2 install --name test stable/mariadb
helm3 2to3 convert test
helm3 upgrade test stable/mariadb

The last step gives the error:

Error: UPGRADE FAILED: cannot patch "test-mariadb-master" with kind StatefulSet: StatefulSet.apps "test-mariadb-master" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden && cannot patch "test-mariadb-slave" with kind StatefulSet: StatefulSet.apps "test-mariadb-slave" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden

@juan131
Collaborator

juan131 commented Dec 4, 2019

Thanks for sharing the exact steps to reproduce the issue @floretan
Did you try the workaround I shared before? (patching the k8s objects removing the label)

@floretan

floretan commented Dec 4, 2019

I tried the patch above, but the problem is actually the label on the StatefulSet's volumeClaimTemplates, not the label of the StatefulSet itself. I updated the patch command to match, but the operation is not permitted:

 kubectl patch statefulset test-mariadb-master --type=json -p='[{"op": "remove", "path": "/spec/volumeClaimTemplates/0/metadata/labels"}]'
The StatefulSet "test-mariadb-master" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden

I see two options, both of which are not really ideal:

  • Delete and recreate the StatefulSet. The underlying persistent volume claims are preserved, but it's still not a pleasant thing to do.
  • Trick Helm 3 into setting .Release.Service to "Tiller". I haven't found a way to do that, though.
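The first option can be sketched as below. This is a hedged sketch, not a verified fix: it assumes that deleting only the StatefulSet object (without cascading to its pods) leaves the pods and PVCs untouched, and that the subsequent upgrade re-creates the StatefulSet with the new heritage=Helm labels everywhere, including the volumeClaimTemplates. The release name `test` and the chart follow the reproduction steps above.

```shell
# Delete only the StatefulSet objects; --cascade=false orphans the pods,
# so the pods, PVCs, and data stay in place.
# (Newer kubectl versions spell this flag --cascade=orphan.)
kubectl delete statefulset test-mariadb-master test-mariadb-slave --cascade=false

# Re-create the StatefulSets through Helm 3; the new objects carry
# heritage=Helm in every label, volumeClaimTemplates included.
helm3 upgrade test stable/mariadb
```

There is a brief window between the delete and the upgrade during which no controller manages the orphaned pods, so this is best done during a maintenance window.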

@skaji
Contributor

skaji commented Dec 16, 2019

FYI, this issue also affects the redis chart.

heritage: {{ .Release.Service }}

@juan131
Collaborator

juan131 commented Dec 19, 2019

Hi @floretan @skaji

I tried the patch above, but the problem is actually the label on the statefulset's volumeClaimTemplates, not the label of the statefulset itself. I updated the path command to match, but the operation is not permitted

Oh crap... You're right. You'd probably need to use new PVCs and clone the content of the old PVs (https://kubernetes.io/blog/2019/06/21/introducing-volume-cloning-alpha-for-kubernetes/) so you don't lose the data.
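A minimal sketch of what that cloning approach could look like, assuming a CSI storage class that supports volume cloning (an alpha feature at the time, per the linked post); the PVC names `data-test-mariadb-master-0` and `cloned-data-mariadb-master-0` and the 8Gi size are illustrative, not taken from the chart:

```shell
cat <<'EOF' | kubectl apply -f -
# Hypothetical clone of the existing MariaDB data PVC. Requires a CSI
# driver with volume-cloning support; the clone must use the same
# storage class and namespace as the source PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-data-mariadb-master-0
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: data-test-mariadb-master-0
EOF
```

The cloned PVC could then be renamed/rewired to match whatever the new StatefulSet's volumeClaimTemplates expect, which is why this is painful at scale.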

This issue also affects the redis chart.

This issue affects almost every chart in the stable repo (since they were meant to be installed with Helm 2 when they were created)

@stale

stale bot commented Jan 19, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale stale bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 19, 2020
@juan131
Collaborator

juan131 commented Jan 22, 2020

Do not stale

@stale stale bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2020
@stale

stale bot commented Feb 21, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale stale bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 21, 2020
@br0nwe

br0nwe commented Mar 3, 2020

We have the same problems in all our clusters. Isn't there a less painful way (than cloning hundreds of PVCs) to solve this problem?

@stale stale bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 3, 2020
@stale

stale bot commented Apr 2, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale stale bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 2, 2020
@carrodher
Collaborator

Hi,

Given the stable deprecation timeline, this Bitnami-maintained Helm chart is now located at bitnami/charts. Please visit the bitnami/charts GitHub repository to create issues or PRs. In this case, if the problem persists with the latest version of bitnami/mariadb, please don't hesitate to report it in the bitnami/charts GH repo.

In this issue, we tried to explain the reasons and motivations behind this transition in more detail; please don't hesitate to add a comment there if you have any questions related to the migration itself.

Regards,

@stale stale bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 3, 2020
@stale

stale bot commented May 3, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

@stale stale bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 3, 2020
@stale

stale bot commented May 17, 2020

This issue is being automatically closed due to inactivity.

@stale stale bot closed this as completed May 17, 2020
@dejan9393

Has anyone found a viable workaround for this issue?

@br0nwe

br0nwe commented Jul 1, 2021

Has anyone found a viable workaround for this issue?

I wrote a little script that replaces all StatefulSets without dropping the PV(C)s:

# ------------------- README ----------------------
# Fix Helm 2 StatefulSets for Helm 3 usage
# -------------------------------------------------

app='app-name'
chart='chart-name'
name='name'
repo='repo-name'

# Collect all namespace names, one per line, without the header row
mapfile -t namespaces < <(kubectl get namespaces --no-headers -o custom-columns=:metadata.name)

for ns in "${namespaces[@]}"
do
  if [[ $ns =~ ^.*customer ]]
  then
    echo -e "\n\n\n\n\nFixing Helm 3 upgrade issue for $app: $ns\n"
    # Delete only the StatefulSet object; the orphaned pods and PVCs survive
    echo "kubectl delete --cascade=false statefulset $ns-$name --namespace $ns"
    kubectl delete --cascade=false statefulset "$ns-$name" --namespace "$ns"
    # Re-create it via Helm 3 so the new heritage label is applied throughout
    echo "helm upgrade --install --wait $ns $repo/$chart --namespace $ns --timeout 10m0s"
    helm upgrade --install --wait "$ns" "$repo/$chart" --namespace "$ns" --timeout 10m0s
    echo "kubectl get statefulsets.apps -n $ns"
    kubectl get statefulsets.apps -n "$ns"
    echo -e "\n\n\nHeritage is now:\n\n"
    kubectl describe statefulset "$ns-$name" -n "$ns" | grep heritage
  fi
done
echo "Finished repairing the StatefulSets."


7 participants