live app upgrade #229
Comments
Rough notes -
The Ampool guys had an interesting request here. When one of their workers goes down and comes back up, there's a subsequent window in which it would be bad for another worker to go down, before the system has finished the necessary rebalancing/repopulating. In general this idea of application-gated upgrade progress is probably a good thing to support (added value). The in-container app code knows when it is OK to proceed with the next pod upgrade, but how would we interface that with whatever code/controller is actually doing the pod upgrades? KD could control the upgrade progress in various ways.
The other side of this question is how KD knows when it is OK to proceed. In the case of Ampool we could e.g. run a script hook in the controller container to see what it says, but I'm wondering how the idea of "what to do to learn if it's OK to keep upgrading" can be generalized and expressed in the kdapp. Food for thought.
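One way KD could meter the upgrade is sketched below, assuming KD drives the rollout through the statefulset's RollingUpdate partition. The `okToProceed` callback is a placeholder for whatever app-specific check we end up with (e.g. exec'ing a script hook in the controller container, as above); nothing here is existing KubeDirector code.

```go
// Sketch: gate statefulset pod upgrades on an app-defined check.
// Assumes KD uses spec.updateStrategy.rollingUpdate.partition to control
// which ordinals are allowed to pick up the new pod template.
package upgrade

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// okToProceed is a placeholder for whatever app-specific probe we settle on.
type okToProceed func(ctx context.Context, nextOrdinal int32) (bool, error)

// advanceOnePod lowers the rolling-update partition by one, allowing the
// next-lowest ordinal pod to be upgraded, but only if the app says it is safe.
func advanceOnePod(ctx context.Context, clientset kubernetes.Interface,
	namespace, name string, check okToProceed) error {

	sts, err := clientset.AppsV1().StatefulSets(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ru := sts.Spec.UpdateStrategy.RollingUpdate
	if ru == nil || ru.Partition == nil || *ru.Partition == 0 {
		return nil // nothing left to upgrade
	}
	next := *ru.Partition - 1
	ok, err := check(ctx, next)
	if err != nil {
		return err
	}
	if !ok {
		return fmt.Errorf("app not ready to upgrade pod ordinal %d", next)
	}
	*ru.Partition = next
	_, err = clientset.AppsV1().StatefulSets(namespace).Update(ctx, sts, metav1.UpdateOptions{})
	return err
}
```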
I'll be filing a bunch of finer-grained issues for work on the live-app-upgrade branch.
This would be one of the most useful & impressive features we could add in the near term.
App upgrade could be driven by an edit to an existing KubeDirectorApp CR, preserving its name but changing its version string. Currently we block such edits altogether if any clusters are using the app, but if we support upgrade we can allow them.
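Roughly, the app admission logic would change from "reject any edit while in use" to "allow the edit if it amounts to a version bump". A minimal sketch of that decision, using simplified stand-in types rather than the real kdapp spec structs:

```go
// Sketch of the admission-time decision: if any kdcluster references the app,
// permit the edit only when it is a version bump (version string and image
// references change, everything else identical). Types and field names are
// simplified stand-ins, not the actual kdapp spec.
package validator

import (
	"fmt"
	"reflect"
)

type appSpec struct {
	Version string
	Images  map[string]string // role name -> container image
	Rest    interface{}       // stand-in for everything else in the spec
}

func validateAppEdit(name string, oldSpec, newSpec appSpec, inUse bool) error {
	if !inUse {
		return nil // no clusters use this app; allow any edit (current behavior)
	}
	// Compare the specs with the upgrade-related fields blanked out.
	oldStripped, newStripped := oldSpec, newSpec
	oldStripped.Version, newStripped.Version = "", ""
	oldStripped.Images, newStripped.Images = nil, nil
	if reflect.DeepEqual(oldStripped, newStripped) {
		return nil // pure version/image bump: allowed, and treated as an upgrade request
	}
	return fmt.Errorf("kdapp %s is in use; only version/image changes are allowed", name)
}
```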
If cluster reconciliation notices that the app version differs from the version the cluster's members are currently running, it can kick off a statefulset update.
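A sketch of that reconciliation step, assuming KD records the app version on the statefulset (the annotation name below is invented for illustration, not an existing convention) and pushes the new image into the pod template to start the rolling update:

```go
// Sketch: during kdcluster reconciliation, notice that the referenced kdapp's
// version differs from what the role's statefulset was built from, and push
// the new image into the pod template to start a rolling update.
package reconcile

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const appVersionAnnotation = "kubedirector/app-version" // hypothetical bookkeeping choice

func syncRoleVersion(ctx context.Context, clientset kubernetes.Interface,
	namespace, stsName, appVersion, newImage string) error {

	sts, err := clientset.AppsV1().StatefulSets(namespace).Get(ctx, stsName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sts.Annotations[appVersionAnnotation] == appVersion {
		return nil // already on the current app version
	}
	if sts.Annotations == nil {
		sts.Annotations = map[string]string{}
	}
	sts.Annotations[appVersionAnnotation] = appVersion
	// Assume one app container per member pod for this sketch.
	sts.Spec.Template.Spec.Containers[0].Image = newImage
	_, err = clientset.AppsV1().StatefulSets(namespace).Update(ctx, sts, metav1.UpdateOptions{})
	return err
}
```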
This is obviously a lifecycle event, and post-update we need to be able to trigger a script hook inside the containers to do app-specific stuff.
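A post-upgrade hook could presumably reuse the same sort of plumbing KD already uses to run app config scripts inside containers, i.e. the pod exec subresource. A sketch with client-go; the script path and flag are placeholders for whatever the kdapp would declare:

```go
// Sketch: run an app-specific hook inside a container after its pod has been
// updated. The hook path and argument are placeholders.
package hooks

import (
	"bytes"
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

func runUpgradeHook(ctx context.Context, config *rest.Config,
	clientset kubernetes.Interface, namespace, pod, container string) (string, error) {

	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace(namespace).
		Name(pod).
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   []string{"/usr/local/bin/upgrade-hook", "--post-update"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	executor, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return "", err
	}
	var stdout, stderr bytes.Buffer
	err = executor.StreamWithContext(ctx, remotecommand.StreamOptions{
		Stdout: &stdout,
		Stderr: &stderr,
	})
	return stdout.String() + stderr.String(), err
}
```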
The really tricky question here is what to do with any directories mounted from persistent volumes. Here's a strawman:
This has some big downsides. For one, you must ALWAYS implement an upgrade script hook, or you lose your persisted data. For two, you have to keep overhead around to potentially double your persistent data storage on update. So yeah it's just a strawman but perhaps it is pointing in the general direction of a solution.
One way we could tackle those drawbacks is that a KDApp could identify what specific app versions it supports upgrading from, and what to do with the persisted directories when coming from each of those various older versions. Ideally some or all of the persisted directories could be left alone on upgrade. We may also need to get more fine-grained about specifying which subdirectories (or even individual files?) should be on the persistent storage; current app CRs use a broad brush.
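For example, the kdapp types could grow something like the following. These fields are purely hypothetical and illustrative; nothing like this exists in the spec today:

```go
// Sketch of hypothetical additions to the kdapp spec types: the app declares
// which older versions it can upgrade from and what to do with each persisted
// directory in that case.
package v1types

// UpgradeSupport would be attached to the app spec (or to each role).
type UpgradeSupport struct {
	// FromVersions lists app versions this version knows how to upgrade from.
	FromVersions []UpgradePath `json:"fromVersions"`
}

type UpgradePath struct {
	Version string        `json:"version"`
	Dirs    []DirHandling `json:"persistDirs,omitempty"`
}

// DirHandling says what to do with one persisted directory (or file) when
// upgrading from the given older version.
type DirHandling struct {
	Path string `json:"path"`
	// Action: "keep" (leave in place), "migrate" (hand to the upgrade hook),
	// or "discard" (repopulate from the new image).
	Action string `json:"action"`
}
```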
Statefulset update is the trickiest part but not the only thing to manage; we'd also need to enhance handleMemberServiceConfig so that it can support reconciling a service object to change its ports.
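The port-reconciliation piece might look something like this standalone helper (not the real handleMemberServiceConfig signature), which compares the service's ports against what the upgraded app now declares and updates the service object if they differ:

```go
// Sketch of the port-reconciliation step that handleMemberServiceConfig
// would need to grow.
package reconcile

import (
	"context"
	"reflect"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func reconcileServicePorts(ctx context.Context, clientset kubernetes.Interface,
	namespace, svcName string, desired []corev1.ServicePort) error {

	svc, err := clientset.CoreV1().Services(namespace).Get(ctx, svcName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if reflect.DeepEqual(svc.Spec.Ports, desired) {
		return nil // ports already match the app definition
	}
	svc.Spec.Ports = desired
	_, err = clientset.CoreV1().Services(namespace).Update(ctx, svc, metav1.UpdateOptions{})
	return err
}
```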
==========
A related issue is how much we are concerned about detecting/preventing out-of-band fiddling with the statefulset spec, i.e. trying to do updates to a KDCluster without actually editing the KDApp. If we truly want/need to block this, we would need another admission controller watching statefulsets.
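That would be a validating webhook registered on statefulsets (scoped to the ones KD owns, e.g. via labels) that rejects updates not made by KubeDirector's own service account. A rough sketch; the service-account name is a placeholder:

```go
// Sketch of the extra admission controller: reject statefulset spec changes
// unless they come from KubeDirector itself.
package webhook

import (
	"fmt"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const kdServiceAccount = "system:serviceaccount:kubedirector:kubedirector" // placeholder

// validateStatefulSetUpdate assumes the webhook is already filtered (via
// labels/selectors) to statefulsets that KubeDirector owns.
func validateStatefulSetUpdate(req *admissionv1.AdmissionRequest) *admissionv1.AdmissionResponse {
	if req.Operation != admissionv1.Update || req.UserInfo.Username == kdServiceAccount {
		return &admissionv1.AdmissionResponse{UID: req.UID, Allowed: true}
	}
	return &admissionv1.AdmissionResponse{
		UID:     req.UID,
		Allowed: false,
		Result: &metav1.Status{
			Message: fmt.Sprintf("statefulset %s/%s is managed by KubeDirector; edit the kdcluster/kdapp instead",
				req.Namespace, req.Name),
		},
	}
}
```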