net_tcp read timeout #3599
I'm getting similar results (read blocking indefinitely, even with a short timeout) on 082d3d. This is also on OSX (10.7.4).
The same issue also occurs on Windows 7 with Rust 0.4.
I should have responded to this before. Preemptive apology: this interface is not so good.

The problem here may be in accepting and handling the connection in the `on_connect` callback. That callback runs directly on the I/O loop, so doing long-running operations in it will block all I/O entirely. The way it's intended to be used is to delegate the accepting and handling of the connection to another task, but there are some wrinkles. You can't just call […]. Additionally, a connection has to be 'accepted' before the `on_connect` callback returns, but calling […]. What you have to do is set up another task to handle the acceptance of connections.

Here's an example of a working TCP server and client, more readable than the TCP test cases: brson@fcfdd49#L0R789
I cannot tell if brson has left this issue open with the intent of using it to motivate development of a nicer interface. So I will not close this issue, but I will de-milestone it.
Not critical for 0.6; de-milestoning |
I'm still having problems with this, using a sample similar to issue #4296. My intention is to implement a simple TCP server to start implementing some higher-level protocols like FastCGI for Rust, so any efforts to move this forward would be highly appreciated :-)
@brson the link you provided, brson/rust@fcfdd49#L0R789, is not working for me. (I was going to ask AndresOsinski if he had already read that material and was still having problems, but since the link isn't working, the question seems silly.) Can you provide a fresh and perhaps more stable link?
I'm having the same issue: read times out, even though traffic captured with Wireshark shows the data was received before the timeout. Anyway, it seems to work with read_start()/read_stop().
I assume the updated example @brson provided refers to this test: I removed the flatpipes-specific code and some other code that seemed unneeded, to make as basic an example as I could. The Receiver task is needed because if that code is in the on_connect callback it blocks. I share @mneumann's concern that the idea behind libuv is to use an event loop and not threads. Two tasks/threads might be acceptable, but the only way I could get an echo server to work with multiple connections was spawning a task for each connection. Seems to me that […]
I'm going to close this since net_tcp is gone and newrt is reworking this area heavily. |
Not completely sure if this is a bug, because I'm finding it difficult to figure out how to use net_tcp from the unit tests. But the below code times out when both the client and the server try to read.
Logging shows this:
So it appears that the message is being read but the read function isn't returning for some reason (the first message is "dupe: hey\0", which is 10 bytes). This is on Mac with rust from Sep 22, 2012.
The seg faults are a bit disturbing too...