Kill all pending accepts when TCP listener is closed #1517
Conversation
Force-pushed from 368f758 to 0eb056e
Force-pushed from 0eb056e to beac39b
src/tokio_util.rs
Outdated
r.track_task(self.task_id);
return Ok(futures::prelude::Async::NotReady);
}
Err(e) => return Err(From::from(e)),
Probably should untrack here too?
Added.
err = e;
}
assert(!!err);
assertEqual(err.message, "Listener has been closed");
Cool - good test case
src/resources.rs
Outdated
// We need to track these tasks so that when the listener is closed,
// pending tasks can be notified and die.
// The tasks are indexed so they can be removed when an accept completes.
TcpListener(tokio::net::TcpListener, HashMap<usize, futures::task::Task>),
It seems like this is something TcpListener should take care of? Is this working around a bug in tokio? Maybe we ought to report and link here?
Not so sure about this... we could probably ask them...
(Also, I think our usage here is different from its most common public API usage, so there might be something missing...)
Would you mind opening an issue and just describing what we're experiencing at https://github.com/tokio-rs/tokio?
Ideally, link that issue in the source code here.
It's also unclear to me what the expected behavior is.
Submitted: tokio-rs/tokio#846
I'll add a comment linking to this once there is some basic feedback from the Tokio team (confirming that this is indeed a problem).
So, based on the discussion there, I believe we might just save the task and notify it ourselves...
There seem to be some problems with sccache on AppVeyor... On Travis it seems that it is rejecting more network listening requests (tried locally on my Mac, and it seems it is the firewall that rejects the listen call after reaching a threshold and instead shows a permission prompt that requires manual confirmation...)
Nvm, found the issue.
src/resources.rs
Outdated
// If TcpListener, we must kill all pending accepts!
if let Repr::TcpListener(l, m) = r.unwrap() {
// Drop first
std::mem::drop(l);
I don't think this is necessary - it will be dropped by the remove above.
Indeed... forgot to remove this (it was part of an initial experiment...)
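For illustration, a tiny self-contained example of the point above: a value returned by HashMap::remove is dropped automatically when it goes out of scope, so the explicit std::mem::drop is redundant (the names here are made up for the demo).

```rust
use std::collections::HashMap;

fn main() {
    let mut table: HashMap<u32, String> = HashMap::new();
    table.insert(1, String::from("listener"));

    // remove() moves the value out of the map and returns it...
    let removed = table.remove(&1);

    // ...and it is dropped automatically at the end of this scope;
    // no std::mem::drop(removed) is needed.
    let _ = removed;
}
```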
src/tokio_util.rs
Outdated
use std::sync::atomic::Ordering;
lazy_static! {
  // Keep ids unique so there are no collisions in the TcpListener accept task map
  static ref NEXT_ACCEPT_ID: AtomicUsize = AtomicUsize::new(0);
Can you use a HashSet instead and avoid this machinery?
It seems that futures::task::Task does not implement a few traits necessary for comparison in a HashSet...
Per tokio-rs/tokio#846 (comment), I'll go ahead and implement this if it sounds reasonable.
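For context, here is a minimal sketch (assuming futures 0.1 and the lazy_static crate, mirroring the snippet above; AcceptTasks and its methods are hypothetical names, not the PR's actual types) of how an atomic counter can stand in for the Eq/Hash impls that Task lacks: each pending accept gets a unique id, and its Task is stored in a HashMap keyed by that id.

```rust
#[macro_use]
extern crate lazy_static;
extern crate futures;

use std::collections::HashMap;
use std::sync::atomic::{AtomicUsize, Ordering};

lazy_static! {
    // Monotonically increasing ids: no collisions in the accept task map.
    static ref NEXT_ACCEPT_ID: AtomicUsize = AtomicUsize::new(0);
}

// Hypothetical container for the accept tasks parked on one listener.
struct AcceptTasks {
    tasks: HashMap<usize, futures::task::Task>,
}

impl AcceptTasks {
    // Remember a parked accept under a fresh id; the id is handed back
    // so the task can be untracked once the accept resolves.
    fn track(&mut self, task: futures::task::Task) -> usize {
        let id = NEXT_ACCEPT_ID.fetch_add(1, Ordering::SeqCst);
        self.tasks.insert(id, task);
        id
    }

    fn untrack(&mut self, id: usize) {
        self.tasks.remove(&id);
    }

    // On close: wake every parked accept so it can observe that the
    // listener is gone and return an error.
    fn notify_all(&mut self) {
        for (_, task) in self.tasks.drain() {
            task.notify();
        }
    }
}
```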
@kevinkassimo Thanks for investigating this. Yes, one accept at a time seems like a reasonable constraint for a single thread of JS. I can imagine that we might want to share servers among isolates in the future and accept in multiple workers... but let's do that later. You should mention in a comment something about how this may need to be improved for the multi-worker accept situation.
Force-pushed from ae51d18 to 07b575b
Force-pushed from 07b575b to 3e686e2
Should be ready for one more round of review. (Added a test for multiple concurrent accepts.)
Force-pushed from 3e686e2 to de7b233
Force-pushed from de7b233 to 9f405cc
LGTM - very technical fix - thanks for digging through it.
@kevinkassimo FYI I've now seen one failure of this test on Windows. Not sure what's happening - it's potentially flaky.
@ry It's a race condition, I believe: basically the second accept somehow gets polled by the scheduler earlier than the first one.
…ener is closed (denoland#2224)"
Crashes while running wrk against js/deps/https/deno.land/std/http/http_bench.ts
This reverts commit 972ac03.
…TCP listener is closed (denoland#2224)" (denoland#2239)"
This reverts commit 1af02b4.
Closes #1516.

Keep track of the pending accept task and notify it when the corresponding TcpListener is removed from the ResourceTable and dropped.

Before: hangs forever.
After: the pending accept errors and dies. This error can be caught.

This is a first-pass implementation (so it might not be the best solution). I'm slightly worried about the performance implications of adding 2 ResourceTable locks. (The benchmark from Travis shows no change, though.)
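To make the mechanism concrete, here is a minimal sketch of the accept future's poll() side, assuming futures 0.1 and tokio 0.1. The helpers lookup_listener, track_accept_task, and untrack_accept_task are hypothetical stand-ins for the real ResourceTable accessors in src/resources.rs and src/tokio_util.rs, not the actual API: if the listener resource is already gone, the pending accept fails with a catchable error instead of parking forever.

```rust
extern crate futures;
extern crate tokio;

use futures::{Async, Future, Poll};
use std::io;

// Hypothetical stand-ins for the real ResourceTable accessors:
fn lookup_listener(_rid: u32) -> Option<&'static mut tokio::net::TcpListener> {
    unimplemented!() // look the listener up in the ResourceTable
}
fn track_accept_task(_rid: u32, _t: futures::task::Task) -> usize {
    unimplemented!() // store the task; return its unique id
}
fn untrack_accept_task(_rid: u32, _id: usize) {
    unimplemented!() // remove the task once the accept resolves
}

struct Accept {
    rid: u32,             // resource id of the listener
    task_id: Option<usize>,
}

impl Future for Accept {
    type Item = tokio::net::TcpStream;
    type Error = io::Error;

    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        // If close() already removed the listener, fail the pending
        // accept instead of parking forever.
        let listener = match lookup_listener(self.rid) {
            Some(l) => l,
            None => {
                return Err(io::Error::new(
                    io::ErrorKind::Other,
                    "Listener has been closed",
                ))
            }
        };
        match listener.poll_accept() {
            Ok(Async::Ready((stream, _addr))) => {
                // Accept completed: stop tracking this task.
                if let Some(id) = self.task_id.take() {
                    untrack_accept_task(self.rid, id);
                }
                Ok(Async::Ready(stream))
            }
            Ok(Async::NotReady) => {
                // Park: save the current task so closing the listener
                // can notify() it awake.
                if self.task_id.is_none() {
                    self.task_id = Some(track_accept_task(
                        self.rid,
                        futures::task::current(),
                    ));
                }
                Ok(Async::NotReady)
            }
            Err(e) => {
                // Error path: untrack here too, then propagate.
                if let Some(id) = self.task_id.take() {
                    untrack_accept_task(self.rid, id);
                }
                Err(e)
            }
        }
    }
}
```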