Upgrade path for the zookeeper persistence issue #228
Conversation
Fixes #89: the "logs", which are actually data, would end up outside the mount. Zookeeper's startup log is clearer than the property file entries:

`INFO Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2`
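For context, the data/log split is controlled by two ZooKeeper properties. A minimal sketch of a `zoo.cfg` consistent with the paths in the log line above (the actual configmap in this repo may name and mount things differently):

```
# Illustrative zoo.cfg fragment; paths taken from the startup log above.
# dataDir holds snapshots; dataLogDir holds the transaction log, i.e. the
# "logs" that are actually data and must live on the persistent mount.
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/log
tickTime=2000
```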
Edit: Got an idea for a less risky upgrade flow that doesn't require downtime and doesn't involve filesystem operations on running pods. See new comment below. First step of an upgrade is to take a healthy cluster and replace the ephemeral
Now take a pause and make sure your cluster is still working. When everything is stable do:
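One way to verify the ensemble is healthy before proceeding (a sketch; the namespace, labels, and pod names are assumptions, while `ruok` is ZooKeeper's standard four-letter health command):

```shell
# Check that all zookeeper pods are Running and Ready (names are illustrative)
kubectl -n kafka get pods -l app=zookeeper

# Ask each server whether it is serving requests; a healthy server replies "imok"
for i in 0 1 2; do
  kubectl -n kafka exec zoo-$i -- bash -c 'echo ruok | nc -w 2 localhost 2181'
done
```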
Check for example
Not recommended for production yet, but I upgraded to v5.0.1 after the above, like so:
An upgrade flow that can avoid downtime, I think, is to make use of the fact that Thus we can scale up

Note that for the configmap to have effect you must delete the old pods one by one after the two new pods have become ready. The suggested order is

After that comes the risky part: you can switch to the v4.3.1 release, apply 0ed261f there, apply the configmap again and then delete+apply as above the

Finally revert the +2 scale patch, apply the configmap, delete the

As you can see, ideas in #225 are most welcome. Note that
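The scale-up step of a flow like this might look roughly as follows (a sketch only; the statefulset name, replica counts, and manifest path are assumptions, not necessarily this repo's actual ones):

```shell
# Scale the ensemble up by two so quorum survives later pod deletions
kubectl -n kafka scale statefulset zoo --replicas=5

# Re-apply the configmap so the new servers are listed, then restart the
# old pods one at a time, waiting for readiness in between
kubectl -n kafka apply -f zookeeper/10zookeeper-config.yml
kubectl -n kafka delete pod zoo-0
kubectl -n kafka rollout status statefulset zoo
```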
I ran the above upgrade flow now using the commands below. Note that this is not a script: you need to run the commands one by one and make sure everything is up and healthy in between:
I had tests and eventrouter running, with no interruptions. I also tested deletion of kafka pods after the
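Running the commands one by one means waiting for readiness after every delete. A hedged sketch of one such step (pod name and namespace are illustrative; `kubectl wait` is available in reasonably recent kubectl versions):

```shell
# After each delete, wait until the replacement pod reports Ready
# before touching the next one
kubectl -n kafka delete pod pzoo-0
kubectl -n kafka wait --for=condition=Ready pod/pzoo-0 --timeout=300s
```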
I think it's actually possible to upgrade using

Edit: Which means #228 (comment), with other git tag names, is our upgrade flow to v5.0.3 :)
Confirmed. I've upgraded without scaling: Directly from v4.3.1, no additional patch:
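An upgrade along these lines would presumably be a sequence of applies against the new tag's manifests. A sketch, where the tag and directory names are assumptions about this repo's layout:

```shell
# Check out the release to upgrade to, then apply its manifests
git checkout v5.0.3
kubectl apply -f zookeeper/ -f kafka/
```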
#227 for a v4.3.1 release, fixing #89, with an upgrade path to migrate your zookeeper state. Recovery from snapshots is an alternative, but I haven't looked into that.
v5.0.0 hasn't been out long enough for anyone to run anything important on, I suppose. Thus we should focus on how to migrate a 4.3.0 installation, which might have been running since v2.0.0. After this upgrade I think it'll be a straightforward `kubectl replace` to go to v5.0.1.
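That replace step might look like this (a sketch; the manifest file names are assumptions about the repo layout, and `kubectl replace` keeps the statefulset's existing volumes):

```shell
# Swap in the v5.0.1 statefulset definitions over the migrated cluster
git checkout v5.0.1
kubectl replace -f zookeeper/50pzoo.yml -f zookeeper/51zoo.yml
```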