What happened?
See k8ssandra/k8ssandra-operator#746 for more details.
In short, startOneNodePerRack cannot handle the case where a StatefulSet has 0 nodes: it still tries to find a pod to start. It should instead detect that the rack is already in its desired state, since the target node count is 0.
Without this check, it runs into a nil pointer and requeues forever. A sketch of the missing guard is shown below.
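The following is a minimal, self-contained sketch of the guard this issue asks for, not the actual cass-operator code; the type and function names (statefulSet, startOneNodePerRackSketch) are hypothetical stand-ins used only to illustrate the idea that a rack with a target size of 0 should be treated as already satisfied.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical, simplified stand-in for the operator's view of a rack's
// StatefulSet; the real cass-operator types are richer than this.
type statefulSet struct {
	Name     string
	Replicas int32    // desired node count for this rack
	Pods     []string // pods currently belonging to the rack
}

var errNoPodFound = errors.New("no pod found to start")

// startOneNodePerRackSketch illustrates the fix described in the issue:
// a rack whose StatefulSet has a target size of 0 (e.g. after decommission)
// should be skipped instead of searched for a pod to start, which otherwise
// yields a nil/"not found" result and endless requeues.
func startOneNodePerRackSketch(sts statefulSet) (string, error) {
	// Proposed guard: nothing to start when the target node count is 0.
	if sts.Replicas == 0 {
		return "", nil
	}

	// Existing behaviour (simplified): pick some pod in the rack to start.
	if len(sts.Pods) > 0 {
		return sts.Pods[0], nil
	}

	// Without the guard above, a decommissioned (0-node) StatefulSet falls
	// through to here and the caller keeps requeueing forever.
	return "", errNoPodFound
}

func main() {
	decommissioned := statefulSet{Name: "dc1-rack1-sts", Replicas: 0}
	pod, err := startOneNodePerRackSketch(decommissioned)
	fmt.Println(pod, err) // "" <nil> — the rack is simply skipped
}
```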
Did you expect to see something different?
How to reproduce it (as minimally and precisely as possible):
Environment
Cass Operator version:
Insert image tag or Git SHA here
* Kubernetes version information:
kubectl version
* Kubernetes cluster kind:
insert how you created your cluster: kops, bootkube, etc.
Manifests:
insert manifests relevant to the issue
Cass Operator Logs:
insert Cass Operator logs relevant to the issue here
Anything else we need to know?:
┆Issue is synchronized with this Jira Task by Unito
┆friendlyId: K8SSAND-1863
┆priority: Medium