High-level networking interface #73
When it comes to unix domain socket handling, some references: […]
Summary: regular AF_UNIX sockets have an annoying issue where the filesystem entry is automatically created when you `bind`, but is not automatically removed when the socket is closed – so stale entries have to be cleaned up by hand.

Twisted is a little fancier: they also have the option to use a pid-based lockfile to track whether the socket is currently in use. By default they do not do this (everything defaults to off). If we do want locking then it might make sense to use […].

Twisted also has the option to check for the lockfile when connecting to an AF_UNIX socket. I'm not sure what the point of this is – if the socket is stale and gone, then won't `connect` just fail anyway?

NB: for zero-downtime upgrades you might want to blow away an existing socket with an active listener and replace it. To facilitate this use case, blowing away listening sockets should be done as […].

In any case, you want to unlink the socket when closing it when possible – after verifying that the file hasn't been replaced under you! (i.e. check fs/inode information and compare to the still-open socket). But unfortunately this has an inherent race, since there's no way to do the stat and the unlink atomically! Maybe this is worth doing some kind of locking, like taking an fcntl lock on some sidecar file just while we're creating or cleaning up a socket? I guess the algorithm here would be:

To acquire the lock: […]

To release a lock: […]

To close a socket: […]

To bind the socket: […]

To acquire an flock:

```python
import fcntl

import trio

SLEEP_TIME = 0.01

async def acquire_flock(fd):
    while True:
        try:
            # LOCK_NB makes this call non-blocking, so it's safe to run
            # directly on the event loop instead of in a worker thread.
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            await trio.sleep(SLEEP_TIME)
        else:
            return
```

This is essentially a busy-wait, but that's OK – we never hold the lock for more than a few tens of milliseconds at worst, so sleeping for 10 ms in between checks is probably fine.

Permissions: apparently some-but-not-all systems enforce file permissions on unix domain sockets. Linux does. (Do any others we care about? Probably not many.) Tornado defaults to 0o600, Twisted defaults to 0o666, and asyncio just accepts the system default. In any case, sockets are always created according to the process's umask.

Biggest question: how on earth do we test all this?
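As a sketch of the "verify before unlink" cleanup just described – the names here are mine and hypothetical, and it assumes we record the filesystem entry's identity at `bind` time:

```python
import os
import socket

def bind_unix_socket(path: str):
    # bind() creates the filesystem entry; remember which device/inode we
    # created so that cleanup can tell whether the path was later replaced
    # under us by some other process.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.bind(path)
    st = os.stat(path)
    return sock, (st.st_dev, st.st_ino)

def cleanup_unix_socket(path: str, created_id):
    # Only unlink the entry if it's still the one we created. Note the
    # inherent race mentioned above: nothing makes the stat + unlink pair
    # atomic on its own.
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return  # already gone
    if (st.st_dev, st.st_ino) == created_id:
        os.unlink(path)
```

The sidecar fcntl lock discussed above would then wrap both `bind_unix_socket` and `cleanup_unix_socket`, closing the stat-to-unlink race.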
A survey of various HTTP libraries to tease out the commonality among their request and response APIs: https://drive.google.com/open?id=1HKHvyeEnB_y4rvLEOraE0JNCcpkVCTJ_jVdKs8KFXYw
libuv has a very cute trick for handling EMFILE/ENFILE: https://github.com/libuv/libuv/blob/26daa99e2c93db2692933e8d141e44661a444e68/src/unix/stream.c#L470-L480
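Transliterated into a rough Python sketch (the real thing is C inside libuv; this is just to illustrate the idea): hold a spare fd in reserve, and when `accept` fails with EMFILE, close the spare, accept-and-drop the pending connection so it doesn't sit in the queue forever, then restock the reserve.

```python
import os
import socket

# Keep one fd in reserve so there's always room to accept-and-close.
reserve_fd = os.open("/dev/null", os.O_RDONLY)

def emfile_trick(listen_sock: socket.socket) -> None:
    global reserve_fd
    os.close(reserve_fd)  # free up exactly one fd slot
    try:
        conn, _ = listen_sock.accept()  # pull the pending connection off the queue...
        conn.close()                    # ...and immediately refuse it
    finally:
        reserve_fd = os.open("/dev/null", os.O_RDONLY)  # restock the reserve
```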
[moving this comment from #8 because it's more relevant here] Poorly organized thoughts on server API:
Maybe […]. It should also optionally let you set the listener nursery and the connection nursery separately (see the sketch below).

Implementations: […]

Convenience functions: […]
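For concreteness, a sketch of what "separate listener and connection nurseries" might look like – the names and signature here are made up for illustration, not a settled API, and it assumes a trio-style listener with an async `accept` method:

```python
import trio

async def serve(handler, listener, *, handler_nursery=None):
    # The accept loop lives in its own nursery; connection handlers can
    # optionally be parented somewhere else, so that cancelling the accept
    # loop doesn't necessarily kill in-flight connections (or vice versa).
    async with trio.open_nursery() as accept_nursery:
        if handler_nursery is None:
            handler_nursery = accept_nursery
        while True:
            conn = await listener.accept()
            handler_nursery.start_soon(handler, conn)
```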
On further thought, I'm wondering if perhaps we should not try to handle every weird error that can happen in `accept`.

There are some errors that are "normal" – mostly around cases where something goes wrong on the new connection before we accept it. So e.g. the Twisted and Tornado comments say BSD can give ECONNABORTED when the client does a handshake and then sends FIN or RST before we call `accept`.

Then there are errors that indicate a system problem, like running out of file descriptors (EMFILE/ENFILE).

The alternative is: if we hit […]
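To make the first option concrete, a sketch of swallowing the "normal" per-connection errors while letting system problems escape. This assumes a trio-style async socket; the errno list would need per-platform auditing (ECONNABORTED is from the Twisted/Tornado comments above; EPROTO and EPERM are other accept-time failures documented on Linux/BSD):

```python
import errno

# Errors that just mean "this particular connection died before we could
# accept it" -- retry the accept instead of taking down the whole server.
ACCEPT_RETRY_ERRNOS = {errno.ECONNABORTED, errno.EPROTO, errno.EPERM}

async def accept_one(listen_sock):
    while True:
        try:
            return await listen_sock.accept()
        except OSError as exc:
            if exc.errno not in ACCEPT_RETRY_ERRNOS:
                raise  # EMFILE and friends propagate to the caller
```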
Okay, more refined thoughts on the issue of how to handle resource exhaustion errors (EMFILE etc.) in servers:

For the […]

For our […]
Reviewed some erlang libraries to see how they handle capacity errors: […]
Also, I'm not 100% sure, but I think mochiweb and yaws actually open the accept socket outside of the acceptor process, so crashing the process just causes a new process to be spawned using the same socket.
Is there a library named "actually"? Or did you mean to say "actually opens" instead of "and actually open"? Or is there a word missing?
Whoops, I meant mochiweb and yaws, the two libraries that do crash their acceptor process. Thanks for the catch.
@njsmith I went ahead and edited your comment to add yaws. |
I ended up copying the "log and wait 100 ms" thing that the erlang servers do; we'll see how it goes. It's just using the stdlib logging, on the theory that, well, it kinda sucks, but as methods of dumping to stderr go, at least it's reconfigurable, and good logging packages will have some easy way to capture it.

Follow-up bugs: […]
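For reference, a condensed sketch of the resulting strategy, assuming a trio-style listener and nursery – the constants and names here are illustrative, not necessarily the exact shipped code:

```python
import errno
import logging

import trio

LOGGER = logging.getLogger("accept_loop")

# "The system is out of some resource" -- accept will keep failing for a
# while, so log it and back off instead of spinning or crashing.
ACCEPT_CAPACITY_ERRNOS = {errno.EMFILE, errno.ENFILE, errno.ENOMEM, errno.ENOBUFS}
SLEEP_TIME = 0.100  # the 100 ms pause copied from the erlang servers

async def accept_loop(listener, handler, nursery):
    while True:
        try:
            stream = await listener.accept()
        except OSError as exc:
            if exc.errno in ACCEPT_CAPACITY_ERRNOS:
                LOGGER.error(
                    "accept returned %s; retrying in %s seconds",
                    errno.errorcode[exc.errno], SLEEP_TIME,
                )
                await trio.sleep(SLEEP_TIME)
            else:
                raise
        else:
            nursery.start_soon(handler, stream)
```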
We should have something that's just a smidge higher level than `trio.socket`. This is something that could go into an external library, but there are reasons it belongs in `trio` itself: […]

The goal wouldn't be to compete with, like, all of netty or twisted or something, but just to provide some minimal extensible abstractions for endpoints, connecting, listening. Maybe UDP too, though that can wait.
Things that should be easy: […]
Maybe: […]
A source of inspiration is twisted's "endpoints": tutorial, list of client endpoints, list of server endpoints, a much fancier server endpoint. I'm not a big fan of the plugins + string parsing thing, just because of the whole injecting-arbitrary-code-into-people's-processes thing that plugins do ... though I see the advantage when it comes to exposing this stuff as part of a public interface and wanting it to be extensible there.
We probably want:

- A basic `Stream` API. There's a sketch in `trio._streams`. This is useful in its own right; the idea would be to implement TLS as a stream wrapper, e.g., and we can use a `SendStream` and `RecvStream` for subprocess pipes, etc.
- Some sort of connect and listen factories for streams.
Lots more TBD; for now this is a placeholder to think more about.
Related: #9 (tls support), #13 (more ergonomic server quick-start), #14 (batched accept), #72 (socket option defaults – some might make more sense to set as part of the high-level interface)