This repository was archived by the owner on Jun 20, 2024. It is now read-only.
fix occasional failure of 870_weave_recovers_unreachable_ips_on_relaunch_3_test.sh in CI #3444
Looking at the test code, it checks on both remaining hosts, so it is possible that it checks before the update has been processed on the host that didn't do the rmpeer.
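If the failure is a propagation race, the usual fix in shell-based tests is to poll rather than assert once. A minimal sketch (this helper is hypothetical, not from the weave test suite):

```shell
#!/bin/sh
# Hypothetical helper: retry a check once per second until it passes or
# the given number of attempts is exhausted, so the test does not assert
# on a host before the rmpeer update has propagated to it.
retry() {
  tries=$1; shift
  until "$@"; do
    tries=$((tries - 1))
    [ "$tries" -le 0 ] && return 1
    sleep 1
  done
}
```

The test could then run, e.g., `retry 30 check_ready_ips $HOST2` (where `check_ready_ips` is a hypothetical predicate inspecting IPAM on that host) instead of checking both hosts immediately.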
Ok, let me try this and see if it works.
It appears occasionally.
After the merge of #3399, test 870_weave_recovers_unreachable_ips_on_relaunch_3_test.sh was modified to no longer restart the weave pod explicitly; instead it relies on the functionality in kube-utils, which performs
weave rmpeer
automatically. However, the test has been observed to fail occasionally. It appears that weave rmpeer
does not reclaim IPs (or at least not instantly), leading to test failure. Log:
DEBU: 2018/11/03 05:26:48.285669 registering for updates for node delete events
INFO: 2018/11/03 05:26:48.315171 Discovered remote MAC c2:db:37:0a:e3:3d at 0a:e0:a5:d1:8c:2d(test-11349-1-2)
INFO: 2018/11/03 05:26:48.698411 Discovered remote MAC 82:c9:fc:71:9b:41 at 0a:e0:a5:d1:8c:2d(test-11349-1-2)
INFO: 2018/11/03 05:26:49.455204 Discovered remote MAC f2:16:e3:fe:7b:79 at d6:93:32:ad:b8:6d(test-11349-1-1)
INFO: 2018/11/03 05:26:50.452105 Discovered remote MAC a6:05:62:7e:14:67 at d6:93:32:ad:b8:6d(test-11349-1-1)
INFO: 2018/11/03 05:26:54.887989 ->[10.128.0.13:6783|d6:93:32:ad:b8:6d(test-11349-1-1)]: connection shutting down due to error: read tcp4 10.128.0.10:48533->10.128.0.13:6783: read: connection reset by peer
INFO: 2018/11/03 05:26:54.889402 ->[10.128.0.13:6783|d6:93:32:ad:b8:6d(test-11349-1-1)]: connection deleted
INFO: 2018/11/03 05:26:54.895858 Removed unreachable peer d6:93:32:ad:b8:6d(test-11349-1-1)
DEBU: 2018/11/03 05:26:55.507406 [kube-peers] Nodes that have disappeared: map[d6:93:32:ad:b8:6d:{d6:93:32:ad:b8:6d test-11349-1-1}]
DEBU: 2018/11/03 05:26:55.509363 [kube-peers] Preparing to remove disappeared peer d6:93:32:ad:b8:6d
DEBU: 2018/11/03 05:26:55.509389 [kube-peers] Noting I plan to remove d6:93:32:ad:b8:6d
DEBU: 2018/11/03 05:26:55.534310 weave DELETE to http://127.0.0.1:6784/peer/d6:93:32:ad:b8:6d with map[]
INFO: 2018/11/03 05:26:55.549122 [kube-peers] rmpeer of d6:93:32:ad:b8:6d: 131072 IPs taken over from d6:93:32:ad:b8:6d
DEBU: 2018/11/03 05:26:55.569297 [kube-peers] Nodes that have disappeared: map[]
DEBU: 2018/11/03 05:26:55.574779 weave POST to http://127.0.0.1:6784/connect with map[replace:[true] peer:[10.128.0.10 10.128.0.8]]
INFO: 2018/11/03 05:26:55.575602 ->[10.128.0.10:6783] attempting connection
INFO: 2018/11/03 05:26:55.575989 ->[10.128.0.10:60701] connection accepted
INFO: 2018/11/03 05:26:55.576448 ->[10.128.0.10:60701|7e:e5:6d:a3:49:8f(test-11349-1-0)]: connection shutting down due to error: cannot connect to ourself
INFO: 2018/11/03 05:26:55.576769 ->[10.128.0.10:6783|7e:e5:6d:a3:49:8f(test-11349-1-0)]: connection shutting down due to error: cannot connect to ourself
INFO: 2018/11/03 05:26:55.737140 Discovered remote MAC c6:b1:c6:8b:6f:0d at 0a:e0:a5:d1:8c:2d(test-11349-1-2)
INFO: 2018/11/03 05:26:55.919062 Discovered remote MAC da:3f:74:7f:09:91 at 0a:e0:a5:d1:8c:2d(test-11349-1-2)
IPAM status
7e:e5:6d:a3:49:8f(test-11349-1-0) 524288 IPs (50.0% of total) (1 active)
0a:e0:a5:d1:8c:2d(test-11349-1-2) 393216 IPs (37.5% of total)
d6:93:32:ad:b8:6d(test-11349-1-1) 131072 IPs (12.5% of total) - unreachable!
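The numbers above are internally consistent with the log: the three allocations sum to the full 2^20-address pool, and the unreachable peer still owns exactly the "131072 IPs taken over" that rmpeer reported, i.e. the takeover is not yet reflected in IPAM. A quick shell check against the status pasted above (not live output):

```shell
#!/bin/sh
# The IPAM status pasted above, verbatim:
ipam='7e:e5:6d:a3:49:8f(test-11349-1-0) 524288 IPs (50.0% of total) (1 active)
0a:e0:a5:d1:8c:2d(test-11349-1-2) 393216 IPs (37.5% of total)
d6:93:32:ad:b8:6d(test-11349-1-1) 131072 IPs (12.5% of total) - unreachable!'

# Sum every peer's allocation: together they cover the whole pool.
total=$(echo "$ipam" | awk '{ sum += $2 } END { print sum }')
echo "$total"      # 1048576 = 2^20

# The unreachable peer's share matches the rmpeer log line exactly.
unreach=$(echo "$ipam" | awk '/unreachable/ { print $2 }')
echo "$unreach"    # 131072
```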