Intermittent failures in `test_open_tcp_listeners_backlog` on Travis MacOS #584
njsmith added a commit to njsmith/trio that referenced this issue on Jul 30, 2018:
While the Travis MacOS infrastructure is clearly *much* better than it used to be, doing these tests on Jenkins is still faster overall, and it avoids hitting python-triogh-584. This commit:
- adds a temporary workaround for MagicStack/immutables#7
- re-enables all MacOS builds on Jenkins, including 3.7, which was previously not enabled
- re-disables Travis MacOS builds
It would be nice to know what's happening here, but for now we want to switch back to Jenkins anyway to improve test turnaround time, plus that avoids this bug, so meh, we can re-open if we ever move back to Travis and start hitting it again.
njsmith added a commit to njsmith/trio that referenced this issue on Oct 31, 2018:
Since Jenkins is acting screwy and not letting people see the logs :-(. It's possible that this will cause python-triogh-584 to start happening again, so I guess we'll just have to watch out for that... Fixes python-triogh-749
We seem to be getting intermittent failures in `test_open_tcp_listeners_backlog`:
https://travis-ci.org/python-trio/trio/jobs/409715108
https://travis-ci.org/python-trio/trio/jobs/409715111
This is a test where we open a listening port with a specific backlog, then try to connect to it and make sure its backlog actually is at least as large as we requested. In those two logs we request a backlog of 11, and one time we manage 7 connections and the other time we manage 3.
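For context, here is a minimal sketch of that probing approach, using trio's public `open_tcp_listeners` / `open_tcp_stream` APIs and the 0.5-second stall timeout described under "Ideas" below. This is not the actual test from trio's test suite; `measure_backlog` and the specific numbers are made up for illustration:

```python
import trio

async def measure_backlog(requested_backlog=11, probe_timeout=0.5):
    # Open a loopback listener, asking the kernel for a specific backlog.
    listeners = await trio.open_tcp_listeners(
        0, host="127.0.0.1", backlog=requested_backlog
    )
    listener = listeners[0]
    port = listener.socket.getsockname()[1]

    completed = 0
    streams = []  # keep connections open so they stay queued in the backlog
    try:
        while True:
            with trio.move_on_after(probe_timeout) as cancel_scope:
                streams.append(await trio.open_tcp_stream("127.0.0.1", port))
            if cancel_scope.cancelled_caught:
                # The connect stalled for probe_timeout seconds; treat the
                # kernel's accept queue as full.
                break
            completed += 1
    finally:
        for stream in streams:
            await stream.aclose()
        await listener.aclose()
    return completed

async def main():
    measured = await measure_backlog()
    # The test's expectation: we can queue at least as many connections
    # as the backlog we asked for.
    assert measured >= 11, f"only {measured} connections before stalling"

trio.run(main)
```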
Ideas:

- The way we detect the backlog being full is that we attempt to connect, and if after 0.5 seconds the connection still hasn't succeeded, we treat the backlog as full. I guess the host could be so overloaded that connecting to a loopback socket sometimes actually takes 0.5 seconds? It seems unlikely that we're getting random 0.5-second glitches so often that they hit this test 2 times out of 3, though...
- Currently that test is actually opening a world-reachable port. Maybe someone else is randomly connecting? That seems pretty unlikely as well, though... 5+ connections in a fraction of a second to a newly opened random high port, on a VPS on a private network?
- I guess we could disable the test entirely... it's been a hassle before, and mostly it's just checking that the value we set as `backlog` does actually get passed down to the kernel. The logic for this isn't totally trivial (see `_compute_backlog`), but still, not very scary. Plus, setting small backlogs is probably never useful; we only support it because it's traditional.
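On the `_compute_backlog` point: the non-trivial part is mostly clamping the requested value to something the kernel can represent. The following is only an illustrative sketch of that kind of logic, not trio's actual implementation; the 0xffff cap and the treatment of `None` / `math.inf` as "unlimited" are assumptions here:

```python
import math

def compute_backlog_sketch(backlog):
    # "Unlimited": pick the largest value that is safe to pass to listen().
    # Several kernels store the backlog in a 16-bit field, so larger values
    # can silently wrap around to something tiny.
    if backlog is None or backlog == math.inf:
        return 0xffff
    if not isinstance(backlog, int) or backlog < 0:
        raise TypeError("backlog must be a non-negative int, None, or math.inf")
    # Clamp explicit requests for the same wraparound reason.
    return min(backlog, 0xffff)
```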