Sender::send(t) -> Result<(), SendError> #2
Good points. I'll explain my rationale. The first design decision I made was that I wanted to expose exactly one `send` method. The second design decision I made was, "sending a value that can never be received is either a bug or an intentional leak." This may be a bad decision, but it is one that has a large body of evidence suggesting it may not be a horrible idea. (e.g., Go.) The idea here is to encourage users of `chan` to treat a send that can never complete as a bug to fix rather than an error to handle.

OK, that was my thinking. It may be bad thinking, but there it is. Now I'll actually respond. :-)
This is interesting. Is your server built to be resistant against bugs in the code? My initial instinct here is, well, if it's a bug then you should fix that instead of ignoring it and pressing on.
I was actually thinking about this the other day, and I think this is a pretty compelling use case for figuring out how to do error handling in a concurrent program. I think you're suggesting the path of least resistance: if enough errors occur such that all receivers hang up, then the senders should return an error too, which gives the senders an opportunity to wind down gracefully.

Another way to approach this same problem is to add an error channel that inverts control. When your readers encounter an error, they send the error back to the sender. So your senders would have to look something like this:

```rust
loop {
    chan_select! {
        send.send(...) => {},
        recv_error.recv() -> err => { /* handle error */ },
    }
}
```

But this suffers from a couple of problems that I can't think of an immediate solution to:
I guess those are pretty damning reasons not to take that path. Hmm. In sum, my concerns:
I think my current position is to keep the current signature of `send`.
With respect to performance: no, almost no decisions were made based on performance. I'm glad you don't see it as too much of an issue. I would obviously like to optimize the channels (and select), but I think channels will only ever be good for coarse-grained concurrency. (I'm not sure that lock-free channels can provide the same semantics provided by the channels in this library.)
Thanks for the detailed design rationale!
It's somewhat of a weak argument, but being able to detect these bugs (and handle them gracefully or panic) seems better than deadlocking to me. Deadlocking leaks resources, whereas a panic or the ability to detect the bug via a returned `Result` does not.

Makes sense, though I personally disagree that it's a burden. Rust tends to take a very manual approach to error handling: if it's not a bug that the compiler can statically determine won't happen, providing a way to detect and handle it seems fair. A resilient program should use `try!` or `panic!`. I suppose I'd make my case like this:

This is also fair, but the complexity involved with that solution suggests that it may be worth supporting in some fashion rather than expecting users to implement their own. It does cover my use case, which is designed around SPSC (Unix pipe behaviour).

It is, but this message is already used the other way around: a channel is considered closed when all senders drop. As an MPMC library, I don't see any reason not to support channel closure the other way around too.
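A minimal sketch of the symmetry being described here, using `std::sync::mpsc` purely as an illustration (it already exposes both directions of closure):

```rust
use std::sync::mpsc::channel;

fn main() {
    // The direction chan already supports: once every Sender is dropped,
    // the Receiver can tell the channel is closed.
    let (tx, rx) = channel::<u8>();
    drop(tx);
    assert!(rx.recv().is_err());

    // The direction under discussion: once every Receiver is dropped,
    // the Sender can tell too, instead of blocking forever.
    let (tx, rx) = channel::<u8>();
    drop(rx);
    assert!(tx.send(42).is_err());
}
```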
Frankly, those are some pretty compelling points. I was hoping to find a way to justify the current function signature, but that seems like folly. I'll branch and release a new version with the changed signature.
Heh, sorry if it seems like I'm forcing my opinions on you... I'm just a big fan of giving the library user some control over situations like this (and am especially concerned due to working on resource-constrained systems). I look forward to trying that branch out!
@arcnmx Sorry I haven't got to this yet. I started thinking about an implementation, and I got a little stuck with `chan_select!`. We could do something similar to what `std::sync::mpsc` does. The other problem here is that the things we can do inside of `chan_select!` are limited.
No worries!
To be fair, is this any worse than the current behaviour of deadlocking? You'd presumably also get a lint warning about the unused result, no different than calling `std::sync::mpsc`'s `send` and ignoring its `Result`.

EDIT: I tackled that from the wrong angle, didn't I? How does it currently work?
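To make the lint point concrete, a small sketch using `std::sync::mpsc` names in place of the proposed `chan` signature; ignoring the `Result` already draws an `unused_must_use` warning, and the caller chooses how loud to be about a failed send:

```rust
use std::sync::mpsc::channel;

fn main() {
    let (tx, rx) = channel::<u32>();

    tx.send(1).unwrap();    // receiver alive: handle the Result explicitly
    drop(rx);

    tx.send(2);             // warning: unused `Result` that must be used
    let _ = tx.send(3);     // explicit opt-out: ignore the failure on purpose
    if tx.send(4).is_err() {
        eprintln!("all receivers are gone; winding down");
    }
}
```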
Breaking it down...

Current version:

Result version:

So... I'm actually not completely against just silently allowing the send to be ignored, as the send simply may not happen. I guess alternatives are:
One tricky part here is the `chan_select!` macro. I wonder if we could be more creative. With all that said, it is going to be difficult to get the syntax right in the macro. There's really no obvious "do this if it's a receive" and "do this if it's a send." It's all pretty implicit. Blech.
Hm, is the select macro not magical, treating both the same way (as in, it doesn't actually send unless the receiver is ready / it knows the send wouldn't block)?

EDIT: Or are you only referring to the explicit result-handling case, where you'd get Err(..) repeatedly? That is awkward, yeah...
The implementation isn't really the problem. The issue is the syntax in the macro. At the macro level, there's really no distinction between a send and a receive. I don't have much more to say than that because the macro is effectively write-once code. I'll have to dig into it, play with it, and re-discover its limitations. :-)
Aha, gotcha! It's still important to determine the semantics we want, though, since that drives what's required from the macro syntax.
Finally getting around to writing this. I think that panicking is the right way to go, or exposing a `try_send()`. The reason to deadlock is that it exposes a bug, but I feel that panicking or exposing the Result does the same thing. In a deadlock I have no way, in any part of my program, to handle the problem. I can catch panics, restart the service, or handle it explicitly in the case of a Result. Even in the case of Go, my understanding (from our conversation on IRC) is that the runtime will try to panic on deadlock, so really it seems that the desired behavior is already panic.

I think the most dangerous thing a program can do is deadlock/hang: it eats up a thread's resources, a single bug can lead to a completely exhausted resource pool, and the entire service can 'die' without crashing, leading to a very confusing time when everything's on fire but all of your services are up and running.

I personally think that the solution is to panic in `send`, or to panic in `send` and also expose a `Result`-returning variant. Panicking seems like the natural thing to do because, as you said, if there's no one to receive a message there is a bug. I think that by choosing to panic by default the macro issue is also moot, correct?
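A rough sketch of that shape of API, with `std::sync::mpsc` standing in for `chan` and the `LoudSender` wrapper being entirely hypothetical: `send` panics when every receiver is gone, while a `Result`-returning variant stays available for callers that want to wind down gracefully.

```rust
use std::sync::mpsc::{channel, SendError, Sender};

// Hypothetical wrapper, not part of any crate: panic-by-default send plus a
// non-panicking escape hatch.
struct LoudSender<T>(Sender<T>);

impl<T> LoudSender<T> {
    /// Panics if every receiver has been dropped (treat it as a bug).
    fn send(&self, t: T) {
        self.0.send(t).expect("send on a channel with no receivers");
    }

    /// Hands the value back in the error if every receiver has been dropped.
    fn try_send(&self, t: T) -> Result<(), SendError<T>> {
        self.0.send(t)
    }
}

fn main() {
    let (tx, rx) = channel::<u32>();
    let tx = LoudSender(tx);
    tx.send(1);                        // fine: `rx` is still alive
    drop(rx);
    assert!(tx.try_send(2).is_err());  // detectable without panicking
    // tx.send(3);                     // would panic: no receivers left
}
```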
I guess so. But what happens if you want to use a send inside `chan_select!`?
Yeah, that's pretty much what I was thinking.
This crate is now deprecated. See: https://users.rust-lang.org/t/ann-chan-is-deprecated-use-crossbeam-channel-instead/19251 |
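For reference, crossbeam-channel settles this question the way the thread converged on: `send` returns a `Result` and fails once every receiver has been dropped, handing the value back instead of deadlocking. A small example (assuming crossbeam-channel as a dependency):

```rust
use crossbeam_channel::unbounded;

fn main() {
    let (tx, rx) = unbounded::<String>();
    tx.send("hello".to_string()).unwrap(); // receiver still alive

    drop(rx);
    // All receivers are gone: the send fails and the value comes back
    // inside the error rather than the call blocking forever.
    match tx.send("world".to_string()) {
        Ok(()) => unreachable!(),
        Err(e) => eprintln!("disconnected; got back {:?}", e.into_inner()),
    }
}
```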
Is there a performance reason that this functionality is not exposed? It'd give more power to the user to decide exactly how they want to handle it if the dropped receiver case were exposed. I suppose I have a few reasons...

- It would more closely mimic the design of `std::sync::mpsc`.
- My use case is passing `Vec<u8>` buffers across threads. Imagine a case where a Read consumer doesn't finish reading to EOF (an error is encountered when processing the data), and drops the Read/Receiver end. The writer would normally encounter a broken pipe error, but deadlocking would occur with `chan`, leaking resources from the writing thread.

(Also, side note: the pipe thing is why I'm somewhat concerned with throughput. It's not a terribly huge issue though, especially when dealing with large enough buffers.)
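A sketch of that pipe-like scenario, using `std::sync::mpsc`'s bounded channel purely to illustrate the behaviour being asked for: when the reader gives up before EOF and drops its end, the writer sees an error, much like a broken pipe, instead of blocking forever.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

fn main() {
    // Capacity 0 gives rendezvous semantics, i.e. pipe-like backpressure.
    let (tx, rx) = sync_channel::<Vec<u8>>(0);

    let writer = thread::spawn(move || {
        for chunk in 0u8..100 {
            if tx.send(vec![chunk; 4096]).is_err() {
                // Every receiver is gone: treat it like EPIPE and wind down.
                return Err("reader hung up before EOF");
            }
        }
        Ok(())
    });

    // The "reader" hits a processing error after one buffer and drops its end.
    let _first = rx.recv().unwrap();
    drop(rx);

    assert!(writer.join().unwrap().is_err());
}
```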