More lightweight waiting for signals #7
Comments
Thanks for the suggestion!
I haven't investigated how much the compile time changes depending on whether the feature is enabled, but this seems to make sense.
cc @stjepang What do you think about this? (I'm not familiar enough to answer/review this.)
Not sure if this would impact this issue, but I recently had my PR merged that made signal-hook-async-std only use async-io, thereby making it usable with smol with no extra async dependencies. Edit: and futures-lite. That change is in version 0.2.1.
Another alternative to signals might be pidfds. There already exists a crate which uses them with async-io: async-pidfd
signal-hook-async-std would be more heavyweight than what's there right now (at least in the number of dependencies). pidfd is Linux-specific and therefore non-portable.
I know I said that I wouldn't add any more features, but I think this is important enough.

Right now, a thread called "async-process" is responsible for listening for SIGCHLD and reaping zombie processes. It listens for the SIGCHLD signal on Unix and uses a channel connected to the waitable handle on Windows. While this works, we can do better. Through async-signal, the signal was already asynchronous on Unix; we were just using async_io::block_on to wait on it. After swapping out the channel used on Windows for async-channel, the process-reaping function "reap" can be reimplemented as a fully asynchronous future.

From here we must make sure this future is being polled at all times. To facilitate this, a function named "driver()" is added to the public API. This future acquires a lock on the reaper structure and calls the "reap()" future indefinitely. Multiple drivers can be created at once; they will simply wait forever on this lock. The driver future is intended to be spawned onto an executor and left to run forever, making sure all child processes are signalled whenever necessary. If no tasks are running the driver future, the "async-process" thread is spawned and runs the "reap()" future itself.

I've added the following controls to make sure that this system is robust:

- If a "driver" task is dropped, another "driver" task will acquire the lock and keep the reaper active.
- Before being dropped, the task checks whether it is the last driver. If it is, it spawns the "async-process" thread to be the driver.
- When a Child is being created, it checks whether there are any active drivers. If there are none, it spawns the "async-process" thread itself.
- One concern is that the driver future will try to spawn the "async-process" thread as the application exits and the task is being dropped, which would be unnecessary and lead to slower shutdowns. To prevent this, the future checks whether there are any extant `Child` instances (a new refcount is added to Reaper to facilitate this). If there are none, and there are no zombie processes, it does not spawn the additional thread.
- Someone can still `mem::forget()` the driver future. This does not lead to undefined behavior; it just leaves processes dangling. At that point they're asking for wacky behavior.

This strategy might also be viable for `async-io`, if we want to avoid needing to spawn the additional thread there as well.

Closes #7
cc smol-rs/async-io#40

Signed-off-by: John Nunley <dev@notgull.net>
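For illustration, here is a minimal usage sketch of the proposed API, assuming `driver()` ends up exported as `async_process::driver()` and returns a future that never resolves; the executor setup and the `echo` command are only for the example.

```rust
use async_executor::Executor;
use async_process::Command;
use futures_lite::future;

fn main() -> std::io::Result<()> {
    let ex = Executor::new();

    // Drive the reaper from the executor instead of the fallback
    // "async-process" thread. The future never completes, so detach it.
    ex.spawn(async_process::driver()).detach();

    future::block_on(ex.run(async {
        let mut child = Command::new("echo").arg("hello").spawn()?;
        // The driver task keeps the reaper polled, so this wait completes
        // without a dedicated background thread.
        let status = child.status().await?;
        println!("child exited with {status}");
        Ok(())
    }))
}
```

If nothing spawns the driver, the fallback "async-process" thread still takes over, so existing programs keep working without this extra step.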
I've noticed in #4 that the waiting for signals on unix is more heavy-weight than it probably deserves. The `signal-hook::iterator::Signals` interface is the most comfortable one for application usage, as it allows conveniently blocking on an iterator and can handle multiple distinct signals at once, but it is also quite a bit of code that needs to be compiled (it can be turned off by a feature) and it needs a separate thread for the blocking. It doesn't seem this crate would need any kind of multiple-signals support, and blocking is probably even an anti-feature.

I would send a pull request doing something more lightweight, but I want to discuss the design first. One lighter option is to have the signal handler write into a `pipe` and watch its read end asynchronously. This would allow removing the feature flag and cutting down on the amount of code being compiled.

What do you think would be the best way forward?
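For reference, a rough sketch of the self-pipe idea under discussion, assuming signal-hook's `low_level::pipe::register` API is available: the signal handler writes a byte into one end of a socket pair, and async-io watches the other end, so no dedicated signal-listening thread is required.

```rust
use std::os::unix::net::UnixStream;

use async_io::Async;
use futures_lite::{future, io::AsyncReadExt};
use signal_hook::consts::SIGCHLD;

fn main() -> std::io::Result<()> {
    future::block_on(async {
        // A socket pair serves as the self-pipe on every unix platform.
        let (read_end, write_end) = UnixStream::pair()?;

        // Ask signal-hook to write a byte into `write_end` whenever SIGCHLD
        // arrives. (Exact API assumed; see signal_hook::low_level::pipe.)
        signal_hook::low_level::pipe::register(SIGCHLD, write_end)?;

        // Spawn a short-lived child so a SIGCHLD actually arrives.
        std::process::Command::new("true").spawn()?;

        // Wait for the signal asynchronously by reading from the other end.
        let mut read_end = Async::new(read_end)?;
        let mut buf = [0u8; 1];
        read_end.read_exact(&mut buf).await?;
        println!("got SIGCHLD, time to reap children");
        Ok(())
    })
}
```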