Thread safety of Datagram send and receive #444
I'm not sure that these APIs for UDP are thread-safe because they would involve the IO manager.
Can you elaborate on this? AFAIK the GHC IO manager is built around non-blocking I/O and epoll/kqueue. The reason I'm bringing this up is that I'd like to handle many UDP requests concurrently.
@kazu-yamamoto I found a discussion on that topic that speculates about the reason behind the non-thread-safety remark in the documentation.
While digging through some Haskell project code dealing with UDP, I've found several code bases where multiple threads use the same Datagram socket concurrently. All of them are from earlier than 2018 and thus predate the commit introducing that note into the docs.
@schmittlauch First of all, if you would like to use multiple workers after accepting on a so-called listen socket for UDP, please take a look at my blog article: https://kazu-yamamoto.hatenablog.jp/entry/2020/02/18/145038 The "New connections" section describes how to use UDP like TCP.
Second, I don't worry about the atomicity of send/receive for datagram sockets.
Third, there is the IO manager to consider.
@vdukhovni might be interested in this topic.
@schmittlauch Note that the technique described in my blog is NAT-friendly.
NAT-friendly code is available here: https://github.com/kazu-yamamoto/quic/blob/master/Network/QUIC/Socket.hs#L29 Of course, this approach is fragile to NAT rebinding.
https://hackage.haskell.org/package/network-run-0.2.2/docs/Network-Run-UDP.html |
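To make the "new connections" idea above concrete, here is a minimal sketch of it, under my own assumptions; it is not the code from the blog post or from the quic package, the `worker` function and the hard-coded address are made up for illustration, and it relies on the kernel delivering a peer's subsequent datagrams to a connected `ReusePort` socket rather than to the shared one. The shared socket only ever sees the first datagram from a new peer; a fresh socket, bound to the same local address and then `connect`ed to the peer, takes over that peer's traffic.

```haskell
module Main where

import Control.Concurrent (forkIO)
import Control.Monad (forever, void)
import Data.ByteString (ByteString)
import Network.Socket
import Network.Socket.ByteString (recv, recvFrom, sendAll)

main :: IO ()
main = do
    let hints = defaultHints { addrSocketType = Datagram }
    ai <- head <$> getAddrInfo (Just hints) (Just "127.0.0.1") (Just "12345")
    -- The "listen" socket: it only ever handles the first datagram of a peer.
    lsock <- socket (addrFamily ai) Datagram (addrProtocol ai)
    setSocketOption lsock ReusePort 1
    bind lsock (addrAddress ai)
    forever $ do
        (firstMsg, peer) <- recvFrom lsock 2048
        -- A fresh socket, bound to the same local address (ReusePort) and
        -- connected to the peer; further datagrams from this peer should
        -- now arrive on this socket instead of the listen socket.
        csock <- socket (addrFamily ai) Datagram (addrProtocol ai)
        setSocketOption csock ReusePort 1
        bind csock (addrAddress ai)
        connect csock peer
        void $ forkIO (worker csock firstMsg)

-- Hypothetical per-peer worker: echo everything back on the connected socket.
worker :: Socket -> ByteString -> IO ()
worker s firstMsg = do
    sendAll s firstMsg
    forever $ recv s 2048 >>= sendAll s
```

As noted above, the connected socket is tied to the peer's current address, so this pattern is fragile to NAT rebinding.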
I'm willing to comment. Indeed, for UDP sockets with multiple polling threads, without some sort of locking (and the consequent overhead) you get a thundering herd if you just naïvely share the UDP server socket. Nothing breaks, but it scales very poorly when there are many idle threads waiting for a message: all the threads get woken up, one reads the message, and the rest go back to epoll()/kqueue() when they discover that there's no message to read. The solution is to give each thread its own socket, all bound to the same address with the ReusePort (SO_REUSEPORT) socket option:

```haskell
{-# LANGUAGE BlockArguments #-}
{-# LANGUAGE OverloadedStrings #-}
module Main where

import Control.Concurrent
import Control.Monad
import qualified Data.ByteString.Char8 as B
import qualified Data.ByteString.Builder as B
import qualified Data.ByteString.Lazy as LB
import Network.Socket
import Network.Socket.ByteString

main :: IO ()
main = do
    let hints = defaultHints { addrFlags = [AI_NUMERICHOST, AI_NUMERICSERV]
                             , addrSocketType = Datagram }
    ais <- getAddrInfo (Just hints) (Just "127.0.0.1") (Just "12345")
    -- 64 server sockets, each with its own thread, all bound to the same
    -- address via the ReusePort option.
    forM_ ais \ai -> do
        forM_ [1..64] \i -> do
            socket (addrFamily ai) (addrSocketType ai) (addrProtocol ai) >>= \s -> do
                setSocketOption s ReusePort 1
                bind s $ addrAddress ai
                forkIO (pong s i)
    ping ais
    threadDelay 1000000

ping :: [AddrInfo] -> IO ()
ping ais = do
    forM_ ais \ai -> do
        socket (addrFamily ai) (addrSocketType ai) (addrProtocol ai) >>= \s -> do
            forM_ [1..64] \i -> do
                threadDelay 5000
                let addr = addrAddress ai
                    obuf = LB.toStrict $ B.toLazyByteString $ B.byteString "ping " <> B.intDec i
                void $ sendTo s obuf addr
                recvFrom s 32 >>= B.putStrLn . fst

-- Each server thread handles exactly one datagram and then closes its socket.
pong :: Socket -> Int -> IO ()
pong s i = do
    (ibuf, addr) <- recvFrom s 32
    void $ sendTo s (reply ibuf) addr
    close s
  where
    reply ibuf =
        LB.toStrict $ B.toLazyByteString
            $ B.byteString ibuf
           <> B.byteString ", pong "
           <> B.intDec i
```
Bottom line, I think this issue could be closed. I don't know of a good reason for UDP clients to share a socket: you generally want each client to see just the responses to the query it sent, so a dedicated socket for each client thread also makes more sense. Perhaps the documentation could be changed to discuss performance rather than safety; maybe that'd be less confusing?
Finally, if the processing for each client request is expensive enough (high latency), it may make sense to fork a new thread per request rather than handling each one inline. The right design may depend on the application; there may not be a single right answer. At C10M loads you may be writing C code or assembly to run on bare metal (no OS, no Haskell RTS, ...) and polling the interface for packets rather than taking interrupts and context switches.
Thanks to both of you for your helpful comments.
Yes, that'd be helpful.
Forking a thread for each received message (maybe with a semaphore to limit the maximum number) allows for more dynamic scaling. Instead of having a fixed number of worker threads running independently of the actual application load, threads are spawned according to the actual demand.
With this approach, though, the sending of a response can be an issue.
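A minimal sketch of that bounded thread-per-datagram idea, under assumptions of mine rather than code from this thread: `serveBounded` and the echo handler are made-up names, a `QSem` caps how many handlers are alive at once, and the handlers reply through the shared socket on the assumption that each datagram `sendTo` goes out as one atomic message (only the loop ever calls `recvFrom`, so there is still a single reader).

```haskell
module Main where

import Control.Concurrent (forkFinally)
import Control.Concurrent.QSem (newQSem, signalQSem, waitQSem)
import Control.Monad (forever, void)
import Data.ByteString (ByteString)
import Network.Socket
import Network.Socket.ByteString (recvFrom, sendTo)

-- Receive loop: the only caller of recvFrom on this socket; each datagram is
-- handed to a short-lived thread, and a QSem bounds how many run at once.
serveBounded :: Int -> Socket -> (ByteString -> IO ByteString) -> IO ()
serveBounded maxWorkers sock handle = do
    sem <- newQSem maxWorkers
    forever $ do
        (msg, peer) <- recvFrom sock 2048
        waitQSem sem                           -- block while all worker slots are busy
        void $ forkFinally
            (handle msg >>= \reply -> void (sendTo sock reply peer))
            (\_ -> signalQSem sem)             -- release the slot even on exceptions

main :: IO ()
main = do
    let hints = defaultHints { addrSocketType = Datagram, addrFlags = [AI_PASSIVE] }
    ai <- head <$> getAddrInfo (Just hints) Nothing (Just "12345")
    sock <- socket (addrFamily ai) Datagram (addrProtocol ai)
    bind sock (addrAddress ai)
    serveBounded 64 sock pure                  -- trivial handler: echo the request back
```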
Would you like to suggest specific text in a pull request? Perhaps Yamamoto-san (@kazu-yamamoto) would be willing to review and merge less confusing text. You don't need to explain every detail; just take care of anything that's outright confusing or misleading, keeping the explanations reasonably concise. Basically, racing multiple readers for the same socket should be avoided for efficiency reasons, even with datagram transports, where the atomicity of messages generally means that you don't sacrifice safety. You could briefly mention the SO_REUSEPORT approach.
@schmittlauch Would you like to improve the doc or just close this issue?
@kazu-yamamoto I am still tinkering with sockets and networking, so I still need some time to gather experience.
@vdukhovni One more question: is there any reason against sending and receiving in parallel on the same socket from two different threads, i.e. receiving (and blocking) in one thread while sending in exactly one other thread at the same time? As far as I understood, the main reason against sharing a socket across multiple threads is that performance degrades when several threads block on recvFrom on the same socket. Otherwise I'd need to abort the blocking recvFrom whenever I want to send something.
This approach is recommended, as written in the manual.
We can use two threads to separate sending and receiving safely. :-)
So long as neither thread closes the socket while the other is still active, separate reader/writer threads are fine, one of each per socket.
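A minimal sketch of that one-reader/one-writer split, under assumptions of mine: the address is made up, an echo server is assumed to be listening there, the main thread is the only sender, a forked thread is the only receiver, and the socket is closed only after both are done.

```haskell
module Main where

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM_, replicateM_)
import qualified Data.ByteString.Char8 as C
import Network.Socket
import Network.Socket.ByteString (recv, sendAll)

main :: IO ()
main = do
    let hints = defaultHints { addrSocketType = Datagram }
    ai <- head <$> getAddrInfo (Just hints) (Just "127.0.0.1") (Just "12345")
    s <- socket (addrFamily ai) Datagram (addrProtocol ai)
    connect s (addrAddress ai)
    done <- newEmptyMVar
    -- Reader thread: the only thread that ever receives on this socket.
    _ <- forkIO $ do
        replicateM_ 3 (recv s 2048 >>= C.putStrLn)
        putMVar done ()
    -- Writer (the main thread): the only thread that ever sends on it.
    forM_ ["one", "two", "three"] $ \m -> sendAll s (C.pack m)
    takeMVar done    -- neither side closes the socket while the other is active
    close s
```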
Just to give an estimate for when I'm going to do this: I hope to find time in October.
The package documentation claims that `send` and `receive` operations on the same socket are not thread safe. In a discussion on StackOverflow, though, the opinion was brought up that in POSIX, `send`, `sendTo`, `receive`, and `receiveFrom` are defined as atomic operations, making them thread safe at least for Datagram sockets. With this library mostly being a thin wrapper around the Berkeley sockets API: are operations on the same Datagram socket atomic? If not, why?