
TcpListener::incoming fails to receive new connections #888

Closed
Darksonn opened this issue Sep 28, 2020 · 7 comments · Fixed by #889

Comments

Darksonn commented Sep 28, 2020

Based on this question, it appears that there is some issue with the incoming stream on TcpListener.

use async_std::net::TcpListener;
use futures::StreamExt;

#[async_std::main]
async fn main() {
    let listener = TcpListener::bind("127.0.0.1:4444").await.unwrap();
    println!("Server listening on {}", listener.local_addr().unwrap());

    listener
        .incoming()
        .for_each(|tcpstream| async move {
            let tcpstream = tcpstream.unwrap();
            println!("Accepting from: {}", tcpstream.peer_addr().unwrap());
        })
        .await;
}

[dependencies]
async-std = { version = "=1.6.4", features = ["attributes"] }
futures = "0.3"

This example does not print anything when new connections are made to that TcpListener, e.g. with nc 127.0.0.1 4444. My guess is that it is somehow failing to send notifications to the waker.

It works fine when rewritten to use accept in a loop, as sketched below.
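
For reference, a minimal sketch of the accept-in-a-loop version that does work (same Cargo.toml as above):

use async_std::net::TcpListener;

#[async_std::main]
async fn main() {
    let listener = TcpListener::bind("127.0.0.1:4444").await.unwrap();
    println!("Server listening on {}", listener.local_addr().unwrap());

    // accept() is awaited to completion here, so its future is never
    // dropped while still pending and the registered waker survives.
    loop {
        let (tcpstream, _) = listener.accept().await.unwrap();
        println!("Accepting from: {}", tcpstream.peer_addr().unwrap());
    }
}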

DavidBM commented Sep 28, 2020

I believe this affects Tide as well: we tried Tide a couple of days ago and it worked well, but today no one on our team (Mac & Linux) can make it work. It stays frozen while accepting the connection:

   Compiling port-http v0.1.0 ([[REDACTED]])
    Finished dev [unoptimized + debuginfo] target(s) in 3.34s
     Running `target/debug/port-http`
tide::log Logger started
    level TRACE
port_http::http::router Test1
tide::server Adding middleware tide::cookies::middleware::CookiesMiddleware
tide::server Adding middleware tide::log::middleware::LogMiddleware
async_io::driver main_loop: sleeping for 1000 us
port_http::http::router Test2
polling::epoll insert: epoll_fd=3, fd=7
tide::listener::tcp_listener Server listening on http://127.0.0.1:8080
polling::epoll interest: epoll_fd=3, fd=7, ev=Event { key: 1, readable: true, writable: false }
async_io::driver block_on: notified
async_io::reactor process_timers: 0 ready wakers
polling Poller::wait(_, Some(0ns))
polling::epoll wait: epoll_fd=3, timeout=Some(0ns)
polling::epoll interest: epoll_fd=3, fd=5, ev=Event { key: 18446744073709551615, readable: true, writable: false }
polling::epoll new events: epoll_fd=3, res=0
polling::epoll interest: epoll_fd=3, fd=4, ev=Event { key: 18446744073709551615, readable: true, writable: false }
async_io::reactor react: 0 ready wakers
polling::epoll interest: epoll_fd=3, fd=7, ev=Event { key: 1, readable: true, writable: false }
async_io::driver block_on: waiting on I/O
async_io::reactor process_timers: 0 ready wakers
polling Poller::wait(_, Some(599.990063236s))
polling::epoll wait: epoll_fd=3, timeout=Some(599.990063236s)
polling::epoll interest: epoll_fd=3, fd=5, ev=Event { key: 18446744073709551615, readable: true, writable: false }
async_io::driver main_loop: sleeping for 2500 us
async_io::driver main_loop: sleeping for 5000 us
async_io::driver main_loop: sleeping for 10000 us
polling::epoll new events: epoll_fd=3, res=1
polling::epoll interest: epoll_fd=3, fd=4, ev=Event { key: 18446744073709551615, readable: true, writable: false }
async_io::reactor react: 0 ready wakers
async_io::driver block_on: stops hogging the reactor
async_io::driver main_loop: waiting on I/O
async_io::reactor process_timers: 0 ready wakers
polling Poller::wait(_, Some(585.727612537s))
polling::epoll wait: epoll_fd=3, timeout=Some(585.727612537s)
polling::epoll interest: epoll_fd=3, fd=5, ev=Event { key: 18446744073709551615, readable: true, writable: false }

jannik4 commented Sep 28, 2020

Same for me. Reverting from async-std 1.6.4 to 1.6.3 fixed it.
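
For anyone who needs the same workaround until a fix lands, pinning the exact version in Cargo.toml avoids the broken patch release:

[dependencies]
async-std = { version = "=1.6.3", features = ["attributes"] }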

@yoshuawuyts (Contributor)
Hi, thanks for the report! There are a couple of possible causes for the issue here; we should probably dig deeper and find out what's going on.

Darksonn (Author) commented Sep 28, 2020

Taking a look at this impl, it seems like it's calling accept and then dropping the future on every poll. My guess is that the new async-io release makes accept's future unregister its waker in the future's destructor.

impl<'a> Stream for Incoming<'a> {
    type Item = io::Result<TcpStream>;

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        let future = self.0.accept();
        pin_utils::pin_mut!(future);

        let (socket, _) = futures_core::ready!(future.poll(cx))?;
        Poll::Ready(Some(Ok(socket)))
    }
}

In fact, since its latest release async-io has landed a commit with the message "Unregister wakers if readable() or writable() is canceled", so that explanation sounds likely to me.
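
If that is the cause, keeping the accept() future alive across calls to poll_next would avoid the problem. A rough sketch of that idea (names invented here, not necessarily what #889 does):

use std::future::Future;
use std::io;
use std::net::SocketAddr;
use std::pin::Pin;
use std::task::{Context, Poll};

use async_std::net::{TcpListener, TcpStream};
use futures_core::Stream;

struct Incoming<'a> {
    listener: &'a TcpListener,
    accept: Option<Pin<Box<dyn Future<Output = io::Result<(TcpStream, SocketAddr)>> + 'a>>>,
}

impl<'a> Stream for Incoming<'a> {
    type Item = io::Result<TcpStream>;

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        // All fields are Unpin (the future is boxed), so a plain &mut Self
        // can be taken out of the Pin.
        let this = self.get_mut();
        if this.accept.is_none() {
            // Create the accept() future once and keep it until it resolves.
            this.accept = Some(Box::pin(this.listener.accept()));
        }
        let fut = this.accept.as_mut().unwrap();
        match fut.as_mut().poll(cx) {
            // While pending, the stored future keeps its waker registered.
            Poll::Pending => Poll::Pending,
            Poll::Ready(res) => {
                // Drop the future only after it has resolved.
                this.accept = None;
                Poll::Ready(Some(res.map(|(socket, _)| socket)))
            }
        }
    }
}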

@yoshuawuyts (Contributor)
@Darksonn can you file an issue on async-io about this? This doesn't seem like it should have been a patch release.

Though we can probably duplicate the AsyncRead impl from async_net::TcpStream to fix async-std's impl. Unfortunate, but that way we don't need to wait for a fix in async-io.

@Darksonn (Author)
Sure! Done in smol-rs/async-io#30.

@yoshuawuyts (Contributor)
async-std@1.6.5 has been published with a fix!
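
That is, bumping the dependency picks up the fix:

[dependencies]
async-std = { version = "1.6.5", features = ["attributes"] }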

mxinden added a commit to mxinden/substrate that referenced this issue Oct 12, 2020
Prevent users from using v1.6.4, which has issues receiving incoming
TCP connections. See async-rs/async-std#888
for details.
ghost pushed a commit to paritytech/substrate that referenced this issue Oct 20, 2020
* *: Bump async-std to v1.6.5

Prevent users from using v1.6.4, which has issues receiving incoming
TCP connections. See async-rs/async-std#888
for details.

* client/network/src/gossip: Use channel instead of condvar

`async_std::sync::Condvar::wait_timeout` uses
`gloo_timers::callback::Timeout` when compiled for
`wasm32-unknown-unknown`. This timeout implementation does not fulfill
the requirement of being `Send`.

Instead of using a `Condvar`, use a `futures::channel::mpsc` channel to signal
progress from the `QueuedSender` to the background `Future` (see the sketch
after this commit message).

* client/network/Cargo.toml: Remove async-std unstable feature

* client/network/src/gossip: Forward all queued messages

* client/network/gossip: Have QueuedSender methods take &mut self

* client/network/gossip: Move queue_size_limit into QueuedSender

The `queue_size_limit` field is only accessed by `QueuedSender`, thus
there is no need to share it between the background future and the
`QueuedSender`.

* client/network/gossip: Rename background task to future

To be a bit picky, the background task is not a task in the sense of an
asynchronous task, but rather a background future in the sense of
`futures::future::Future`.
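
To illustrate the condvar-to-channel pattern described in that commit message (names below are hypothetical, not the actual substrate code; the shared message queue itself, e.g. behind a mutex, is elided):

use futures::channel::mpsc;
use futures::StreamExt;

struct QueuedSender {
    wake_tx: mpsc::Sender<()>,
}

impl QueuedSender {
    fn notify(&mut self) {
        // A full or disconnected channel is fine: the background future is
        // either already signalled or gone.
        let _ = self.wake_tx.try_send(());
    }
}

async fn background_future(mut wake_rx: mpsc::Receiver<()>) {
    // Instead of Condvar::wait_timeout, await the channel. The receiver
    // future is Send, so this also works on wasm32-unknown-unknown.
    while let Some(()) = wake_rx.next().await {
        // ... forward all queued messages here ...
    }
}

fn make_pair() -> (QueuedSender, mpsc::Receiver<()>) {
    // Capacity 0 still buffers one message per sender, which is enough
    // for a pure wakeup signal.
    let (wake_tx, wake_rx) = mpsc::channel(0);
    (QueuedSender { wake_tx }, wake_rx)
}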
jordy25519 pushed a commit to plugblockchain/plug-blockchain that referenced this issue Nov 10, 2020
aliXsed pushed a commit to plugblockchain/plug-blockchain that referenced this issue Nov 11, 2020