rolling-update heuristic needs improvement #489
Comments
Upgrades are on my list. Will ping you if I need more details.
Even if you have just one master, it starts killing workers before the new master is ready.
@dwradcliffe working on it this week ;) we're gonna make it a bunch better :P
Sweet! Happy to test it when you're ready.
I am wondering if launching a job in the cluster to upgrade itself has value currently. Probably phase two.
@chrislovecnm what did you decide on this? What is the alternative to launching a job in the cluster? Having the workstation poll for progress and take steps in sequence? That would be discarding everything we have learned about message/job queues.
This is phase 1 #1134 |
Still a work in progress and needs more TLC, but it's much, much better now:
Same pattern: upgrade the masters, then the nodes. It does not quite scale to hundreds of nodes with this pattern without a long run time, and we have ideas for that. But much better.
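The sequence described above (replace each master one at a time, wait for the replacement to come back Ready, then do the same for the nodes) can be sketched as a polling loop. This is a minimal illustration of the pattern, not kops's actual implementation; `replace` and `is_ready` are hypothetical callbacks.

```python
import time


def rolling_update(masters, nodes, replace, is_ready, timeout=600, interval=5):
    """Replace masters one at a time, blocking until each replacement
    reports Ready before moving on; then repeat for the nodes."""
    for group in (masters, nodes):
        for instance in group:
            replace(instance)  # terminate the old instance, launch a new one
            deadline = time.monotonic() + timeout
            while not is_ready(instance):
                if time.monotonic() > deadline:
                    raise TimeoutError(f"{instance} did not become Ready in time")
                time.sleep(interval)
```

Because each instance is gated on readiness, the serial run time grows linearly with cluster size, which is the scaling concern mentioned above for clusters with hundreds of nodes.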
More context please.
@nicolasbelanger can we close this now?
@chrislovecnm yep, tx.
This is a follow-up to issue #284.
The rolling-update for masters in different zones needs to wait for at least one master to be fully ready. I tested an upgrade from v1.3.7 to v1.4.0-beta.10.
Unfortunately, by the time `master-us-west-2c.masters.qa.k8s` is taken down, `master-us-west-2a.masters.qa.k8s` is not yet fully started. Then no pods can be scheduled, and the service goes down. Let me know if you need more details.
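The missing step the report describes is a readiness gate: before terminating the next master, the updater should block until the replacement is fully Ready. A generic sketch of such a wait, with a hypothetical `is_ready` predicate standing in for a real node-status check:

```python
import time


def wait_until_ready(is_ready, timeout=300, interval=5):
    """Poll is_ready() until it returns True or the timeout expires.

    Returns True if readiness was observed within the timeout,
    False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_ready():
            return True
        time.sleep(interval)
    return False
```

With a gate like this between master replacements, at least one fully started master is always available, so pods can still be scheduled during the upgrade.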