Connection stuck when "keep alive" is used #1439
Ok, this seems to be the test case to reproduce this behavior:

    #[test]
    fn client_keep_alive_connreset() {
        use std::sync::mpsc;
        extern crate pretty_env_logger;
        let _ = pretty_env_logger::try_init();

        let server = TcpListener::bind("127.0.0.1:0").unwrap();
        let addr = server.local_addr().unwrap();
        let mut core = Core::new().unwrap();

        // This one seems to hang forever
        let client = client(&core.handle());
        // This one works as expected (fails, because the second connection is not handled by the server)
        //let client = Client::configure()
        //    .keep_alive(false)
        //    .build(&core.handle());

        let (tx1, rx1) = oneshot::channel();
        let (tx2, rx2) = mpsc::channel();
        thread::spawn(move || {
            let mut sock = server.accept().unwrap().0;
            //sock.set_read_timeout(Some(Duration::from_secs(5))).unwrap();
            //sock.set_write_timeout(Some(Duration::from_secs(5))).unwrap();
            let mut buf = [0; 4096];
            sock.read(&mut buf).expect("read 1");
            sock.write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n").expect("write 1");

            // Wait for the client to indicate it is done processing the first request.
            // This is what seems to trigger the race condition -- without it, the client
            // notices the connection is closed while still processing the first request.
            let _ = rx2.recv();
            let _ = sock.shutdown(std::net::Shutdown::Both);

            // Let the client know it can try to reuse the connection
            let _ = tx1.send(());
            println!("we are ready to receive the second connection");
            let _sock = server.accept().unwrap().0;
            println!("accepted second connection");
        });

        let res = client.get(format!("http://{}/a", addr).parse().unwrap());
        core.run(res).unwrap();
        let _ = tx2.send(());

        let rx = rx1.map_err(|_| hyper::Error::Io(io::Error::new(io::ErrorKind::Other, "thread panicked")));
        core.run(rx).unwrap();

        println!("connecting for the second time -- hangs");
        let res = client.get(format!("http://{}/b", addr).parse().unwrap());
        core.run(res).unwrap();
    }

P.S. It seems like Hyper tries to reuse a pooled connection that was closed on the remote end. For some reason, the write to the socket on the client side succeeds (it doesn't return EPIPE), so Hyper hangs forever (if the failure were reported, though, I would expect to get "pooled connection was not ready, this is a hyper bug"). I don't quite understand why the "event Readable | Hup" event received earlier does not cause the connection to be evicted from the pool.

P.P.S. I think there are two issues here:
Hey @idubrov, thanks so much for the detailed report, and a test case!
The first write can sometimes succeed, since it's just being put in the send buffer, and only then does the kernel try to send it on the wire. Networking is fun! Let's see what we can do to fix this:
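As a side note, that "the write lands in the send buffer" behavior can be seen without hyper at all. Below is a minimal std-only sketch, not hyper's fix: the address, sleep durations, and printed messages are arbitrary, and the exact errors are platform dependent. The server closes the connection without reading, yet the client's next write typically still reports Ok; the failure only becomes visible on a later read or write.

```rust
use std::io::{Read, Write};
use std::net::{Shutdown, TcpListener, TcpStream};
use std::thread;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    let server = TcpListener::bind("127.0.0.1:0")?;
    let addr = server.local_addr()?;

    let handle = thread::spawn(move || {
        let (sock, _) = server.accept().unwrap();
        // Close both directions without reading anything, like the server thread
        // in the test case above.
        let _ = sock.shutdown(Shutdown::Both);
        // Keep the socket around briefly so the FIN reaches the client first.
        thread::sleep(Duration::from_millis(200));
    });

    let mut client = TcpStream::connect(addr)?;
    thread::sleep(Duration::from_millis(100)); // give the FIN time to arrive

    // This write usually succeeds: the bytes only go as far as the local send buffer.
    println!("first write:  {:?}", client.write(b"GET /b HTTP/1.1\r\n\r\n"));
    thread::sleep(Duration::from_millis(100));

    // Only now does the problem surface: the read sees EOF (Ok(0)), and a later
    // write may fail with EPIPE/ECONNRESET depending on the platform.
    let mut buf = [0u8; 16];
    println!("read:         {:?}", client.read(&mut buf));
    println!("second write: {:?}", client.write(b"x"));

    handle.join().unwrap();
    Ok(())
}
```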
Yes, that's what I think. It looks to me like a new connection does a read as the very first thing ("try_empty_read"), whereas a reused one does not. On a related note, shouldn't the pool itself have a future associated with it to do the bookkeeping of the connections? Otherwise, closed connections will sit in the pool until they are reused?
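For illustration only, here is what that kind of "check before reuse" could look like in isolation. This is a hypothetical sketch against a plain std TcpStream, not hyper's pool or its actual try_empty_read, and the function name is invented. The idea is simply that an idle keep-alive connection can be probed with a non-blocking peek: EOF or a real error means the peer has gone away and the connection should be evicted instead of being handed out again.

```rust
use std::io::ErrorKind;
use std::net::TcpStream;

/// Returns true if an idle, pooled connection still looks usable. (Hypothetical helper.)
fn idle_conn_is_alive(conn: &TcpStream) -> bool {
    conn.set_nonblocking(true).ok();
    let mut probe = [0u8; 1];
    let alive = match conn.peek(&mut probe) {
        // FIN already received: the peer closed, evict the connection.
        Ok(0) => false,
        // Unexpected data on an idle keep-alive connection; treat it as unusable.
        Ok(_) => false,
        // Nothing to read: the connection is idle and still open.
        Err(ref e) if e.kind() == ErrorKind::WouldBlock => true,
        // Any real error (ECONNRESET, ...) means the connection is dead.
        Err(_) => false,
    };
    conn.set_nonblocking(false).ok();
    alive
}
```

In an async pool the equivalent signal would come from the Readable | Hup readiness mentioned in the first comment; the point is that the check has to run while the connection sits idle, not only at the moment it is picked for reuse.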
There should be a fix for this on master, including your test case passing!
Most likely. However, the lack of one doesn't mean closed sockets will be kept around: those are dropped and closed; it's just a sender handle that sits in the pool. Still, it'd be better to periodically clean those up...
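As a rough illustration of that "periodically clean those up" idea, here is a hypothetical sketch using std threads rather than a future on the event loop; Pool, is_alive, and the sweep interval are all invented, not hyper internals. It reuses the same non-blocking peek trick as the previous sketch to decide whether an idle connection is still usable.

```rust
use std::io::ErrorKind;
use std::net::TcpStream;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

type Pool = Arc<Mutex<Vec<TcpStream>>>;

fn is_alive(conn: &TcpStream) -> bool {
    conn.set_nonblocking(true).ok();
    let mut probe = [0u8; 1];
    // Only "nothing to read yet" counts as alive; EOF, stray data, or an error do not.
    let alive = matches!(conn.peek(&mut probe), Err(ref e) if e.kind() == ErrorKind::WouldBlock);
    conn.set_nonblocking(false).ok();
    alive
}

fn spawn_reaper(pool: Pool) {
    thread::spawn(move || loop {
        thread::sleep(Duration::from_secs(30));
        // Dropping a TcpStream closes it, so retain() both evicts and closes dead entries.
        pool.lock().unwrap().retain(is_alive);
    });
}
```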
I have a simple client that sends requests in chunks of 5, and after a while this client gets stuck.
The client looks like this:
The debugger shows that it waits for a mio event (it sits in mio::sys::unix::kqueue::Selector::select):
Analyzing the logs shows that the following events seem to be causing the issue:
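The client code and log excerpts referenced above are not included here. Purely as a hypothetical sketch of the usage pattern described (batches of five requests over keep-alive connections, hyper 0.11 style), it might look roughly like the following; the URL and overall structure are invented:

```rust
extern crate futures;
extern crate hyper;
extern crate tokio_core;

use futures::future::join_all;
use futures::{Future, Stream};
use tokio_core::reactor::Core;

fn main() {
    let mut core = Core::new().unwrap();
    let client = hyper::Client::new(&core.handle());
    let uri: hyper::Uri = "http://127.0.0.1:8080/ping".parse().unwrap();

    loop {
        // Five concurrent requests per batch; the bodies are drained so the keep-alive
        // connections go back into the pool and get reused for the next batch --
        // that reuse step is where the reported hang would show up.
        let batch = (0..5)
            .map(|_| client.get(uri.clone()).and_then(|res| res.body().concat2()))
            .collect::<Vec<_>>();
        core.run(join_all(batch)).unwrap();
    }
}
```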