This repository has been archived by the owner on Jun 20, 2024. It is now read-only.
Rolling restart can leave ipam hung #894
squaremo added a commit to plugins-demo-2015/demo that referenced this issue on Jun 12, 2015:
If one calls weave reset when reprovisioning, the weave node gets a different identity. This can mean that it will not be able to allocate IPs, due to weaveworks/weave#894.
Mostly wanted to document the problem and current mitigations.
This got fixed in #1624, except for the case where the hosts are restarted without rmpeer, i.e. without invoking
rade changed the title from "Rolling restart can leave ipam hung" to "Rolling restart can leave ipam hung - not if you do it correctly; but need to document this" on Nov 9, 2015.
This was referenced Jan 10, 2016
That is no longer true, post #1866 → fixed.
rade changed the title from "Rolling restart can leave ipam hung - not if you do it correctly; but need to document this" back to "Rolling restart can leave ipam hung" on Jan 21, 2016.
If you have peers A, B, C running, and you kill A, start a new A', kill B, start a new B', etc., you will end up with a ring owned by the old A, B and C, while none of the running peers has any space. If they try to ask for space, they will only try to contact the now-gone peers.
(To trigger this symptom requires that the new peers have unique PeerName, i.e. the weave bridge has also been recreated)
This can be worked around by calling weave rmpeer on the new peers, or (somewhat less reliably) by calling weave reset when shutting down each peer, or by shutting down all the old peers before starting any new ones.
The weave logs are not very helpful; just a series of messages like:
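To make the failure mode concrete, here is a minimal sketch (not weave's actual IPAM code) of the ring-ownership logic described above. The `Ring`, `CanAllocate`, and `RmPeer` names are illustrative assumptions; the point is that after a rolling restart with fresh peer identities, no live peer owns any ring space, so allocation has nobody useful to ask until the dead peers' ranges are claimed (as weave rmpeer does).

```go
package main

import "fmt"

// Ring is an illustrative model of IPAM ring ownership:
// it maps address-range indices to the peer name that owns them.
type Ring struct {
	owners map[int]string // range index -> owning peer
}

// CanAllocate reports whether any currently-live peer owns ring space,
// i.e. whether an allocation request has anyone useful to contact.
func (r *Ring) CanAllocate(live map[string]bool) bool {
	for _, owner := range r.owners {
		if live[owner] {
			return true
		}
	}
	return false
}

// RmPeer models `weave rmpeer`: a live claimant takes over every
// range owned by the departed peer.
func (r *Ring) RmPeer(dead, claimant string) {
	for i, owner := range r.owners {
		if owner == dead {
			r.owners[i] = claimant
		}
	}
}

func main() {
	// Initially peers A, B, C each own part of the ring.
	ring := &Ring{owners: map[int]string{0: "A", 1: "B", 2: "C"}}

	// Rolling restart: each old peer replaced by one with a new identity.
	live := map[string]bool{"A'": true, "B'": true, "C'": true}

	// No live peer owns any space; requests can only target the
	// departed A, B, C, so IPAM appears hung.
	fmt.Println("can allocate after rolling restart:", ring.CanAllocate(live))

	// Workaround: rmpeer each dead peer so a live peer claims its space.
	for _, dead := range []string{"A", "B", "C"} {
		ring.RmPeer(dead, "A'")
	}
	fmt.Println("can allocate after rmpeer:", ring.CanAllocate(live))
}
```

Running this prints `false` then `true`, mirroring the hang and the rmpeer recovery. It also shows why the symptom needs unique PeerNames: if the new peers came back with the same identities, they would still match the ring's owner entries.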