
Allow modification of configs if the first pod of rack fails #583

Open
burmanm opened this issue Oct 11, 2023 · 3 comments · May be fixed by #711
Comments

burmanm (Contributor) commented Oct 11, 2023

What is missing?

Currently, if a user deploys something broken that prevents the first pod of a rack from starting, we require the user to manually set the "forceUpgradeRacks" property, which then overrides the current configuration even though the datacenter is not ready after the previous change.
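For reference, the manual workaround today looks roughly like this in the CassandraDatacenter spec (the rack name is just an example):

```yaml
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  # ... the corrected configuration ...
  forceUpgradeRacks:
    - rack1  # rack whose first pod is stuck on the broken change
```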

This process feels like something we could automate. If we're still processing the first pod (but the rest are up), we should probably allow the user to fix the configuration and then apply it to make the cluster healthy again. In other words, detect when "forceUpgradeRacks" would be required and simply apply it in our internal process.

Even better would be to roll back to the previous setting, but we don't want to downgrade user-set CRDs ourselves, as that could interfere with tools like Argo.

Why is this needed?

Automatic recovery is not easy if it requires modifying the CRD, after which cass-operator modifies the CRD again so the setting doesn't apply the next time. We want to simplify the user experience by automatically detecting this issue, as these incorrect settings do appear every now and then (user mistakes happen).

┆Issue is synchronized with this Jira Story by Unito
┆Fix Versions: 2024-10
┆Issue Number: CASS-17

burmanm added the enhancement label on Oct 11, 2023
burmanm moved this to Assess/Investigate in K8ssandra on Dec 19, 2023
adejanovski added the assess label on Dec 19, 2023
adejanovski (Contributor) commented:

> If we're still processing the first pod (but rest are up)

Why would we need the rest of the pods to be up?
I'm thinking that if all pods are down, it's OK to apply the upgrade as well. I ran into this case where the serverImage provided was wrong and the image couldn't be pulled; that can't be fixed unless you force the rack upgrade.

adejanovski (Contributor) commented:

We need to be able to detect what a failed state looks like for a pod in the StatefulSet (it could be pending scheduling, have wrong image coordinates, etc.).

If the change was applied to a single pod (the revision of the other pods is still the previous one) and the last pod isn't in the "Running" state (it's the first one updated), then we allow a new update to the StatefulSet in order to fix what was preventing the pod from starting.
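Roughly, the check could look something like the sketch below (just an illustration against the standard StatefulSet/Pod APIs, not how cass-operator does it today; the helper names are made up):

```go
// Hypothetical sketch: decide whether a StatefulSet that is stuck mid-update
// may accept a new revision. Assumes the standard controller-revision-hash
// pod label and that the StatefulSet updates pods from the highest ordinal
// down, so the first updated pod is the last one.
package recovery

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// podFailedToStart is a rough heuristic for "this pod will not become ready
// on its own": pending scheduling, an image that can't be pulled, or crash looping.
func podFailedToStart(pod *corev1.Pod) bool {
	if pod.Status.Phase == corev1.PodPending {
		return true
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if w := cs.State.Waiting; w != nil {
			switch w.Reason {
			case "ImagePullBackOff", "ErrImagePull", "CrashLoopBackOff", "CreateContainerConfigError":
				return true
			}
		}
	}
	return false
}

// allowNewUpdate returns true when exactly one pod carries the new revision,
// the remaining pods are still on the previous revision, and that updated pod
// is stuck; in that case a corrected config could safely be rolled out.
func allowNewUpdate(sts *appsv1.StatefulSet, pods []*corev1.Pod) bool {
	updated := 0
	stuck := false
	for _, pod := range pods {
		rev := pod.Labels["controller-revision-hash"]
		if rev == sts.Status.UpdateRevision && rev != sts.Status.CurrentRevision {
			updated++
			if podFailedToStart(pod) {
				stuck = true
			}
		}
	}
	return updated == 1 && stuck
}
```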

adejanovski moved this from Assess/Investigate to Ready in K8ssandra on Aug 13, 2024
adejanovski added the ready label and removed the assess label on Aug 13, 2024
sync-by-unito bot commented Dec 4, 2024
