Deleting or scaling down Cassandra StatefulSet #30
I am trying the C* deployment from k8s: https://github.com/kubernetes/examples/blob/master/cassandra/README.md. I could not use this C* because image
Deleting any pod worked.
Scaling up worked.
Scaling down did not work.
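A rough sketch of the commands behind these observations (assuming the StatefulSet from the k8s example is named `cassandra` and runs in the current namespace; pod names and replica counts are illustrative):

```shell
# Delete a single pod; the StatefulSet controller recreates it.
kubectl delete pod cassandra-1

# Scale up: new pods bootstrap and join the ring.
kubectl scale statefulset cassandra --replicas=4

# Scale down: the pod is killed, but the Cassandra node is never
# decommissioned, so the ring can still list it as DN (down).
kubectl scale statefulset cassandra --replicas=2

# Inspect ring state from any surviving pod.
kubectl exec cassandra-0 -- nodetool status
```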
Is this outcome different than what we have, or is it just a confirmation that we are doing the same as the reference is doing?
By reference, do you mean the cassandra deployment from the k8s examples? The outcome was to get more familiar with C* deployment on K8s and explore what works and does not work in our deployment (maybe for future improvements). I strongly agree with you that, for now, we should just say that our deployment has limited functionality.
Yes :) I'm just not sure what the relevant part of that comment is, as I wouldn't know how to compare it with the "expected" output, or with the output from our template.
If you delete a pod and it recovers as expected, it should be
Curious if you ever figured this out @pavolloffay; I'm experiencing the same and wondering how to scale down.
@hobbs The C* template provided in this repo is not production-ready; use other templates or Helm charts to create a scalable C* deployment.
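For reference, the usual way to scale a Cassandra StatefulSet down cleanly is to decommission the node before removing its pod (a hedged sketch, assuming a StatefulSet named `cassandra` going from 3 to 2 replicas; StatefulSets remove the highest-ordinal pod first):

```shell
# 1. Decommission the Cassandra node running on the highest-ordinal pod,
#    so it streams its data to the remaining replicas and leaves the ring.
kubectl exec cassandra-2 -- nodetool decommission

# 2. Only then reduce the replica count; the StatefulSet deletes cassandra-2.
kubectl scale statefulset cassandra --replicas=2

# 3. Confirm the node is gone from the ring.
kubectl exec cassandra-0 -- nodetool status
```

Without step 1, the pod disappears but the ring still considers the node a (down) replica, which is one reason naive scale-down misbehaves.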
I get

Cannot achieve consistency level LOCAL_ONE

after I have manually deleted a C* pod. Sometimes it recovered, sometimes it returned this error. C* logs show this:

Related issues:
kubernetes/kubernetes#24030 (comment)
kubernetes/kubernetes#34978 (comment)
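On the LOCAL_ONE error itself: that consistency level needs at least one live replica of the requested token range in the local datacenter, so it fails whenever every replica of that range is down, e.g. with a replication factor of 1 and the single replica's pod deleted. A way to check this from a running pod (the keyspace `jaeger`, table `traces`, and key `abc` below are placeholders, not names from this thread):

```shell
# Which nodes are up (UN) vs down (DN)?
kubectl exec cassandra-0 -- nodetool status

# Which nodes hold the replicas for a given partition key?
# (keyspace "jaeger", table "traces", key "abc" are hypothetical.)
kubectl exec cassandra-0 -- nodetool getendpoints jaeger traces abc
```

If every endpoint reported for the key is marked DN in `nodetool status`, LOCAL_ONE reads and writes for that key will fail until a replica comes back or is replaced.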