Implement the task tree and task killing using messages #1189
Comments
I don't anticipate this happening in the 0.3 timeframe.
I've implemented linked failure using an exclusive ARC per task group (soon to be several per group), rather than messages; killing is done with a call into the runtime. A problem with implementing this using messages is that, to be killable at any time, a task must always be implicitly reading from some sort of failure port. That could maybe work for kicking tasks awake while they are blocked on ports, by making every receive implicitly select2(port, failure_port), but it gets fuzzier once compiler-inserted yield checks come into play. So I think the current model of the exclusive ARC, the kill_other builtin, and checking for killed at yield points is the way to go, instead of messages. Feel free to reopen if you disagree.
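Below is a minimal modern-Rust sketch of that flag-based model, assuming an Arc<AtomicBool> as a stand-in for the runtime's exclusive ARC and OS threads in place of tasks; the TaskGroup struct and the kill_other / check_killed helpers are illustrative names, not the actual runtime API.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Hypothetical stand-in for the per-task-group exclusive ARC: a shared
// kill flag that kill_other sets and the victim polls at yield points.
struct TaskGroup {
    killed: AtomicBool,
}

// Analogue of the kill_other builtin: flip the group's kill flag.
fn kill_other(group: &TaskGroup) {
    group.killed.store(true, Ordering::SeqCst);
}

// Analogue of the check performed at compiler-inserted yield points.
fn check_killed(group: &TaskGroup) {
    if group.killed.load(Ordering::SeqCst) {
        panic!("task killed by linked failure");
    }
}

fn main() {
    let group = Arc::new(TaskGroup { killed: AtomicBool::new(false) });

    let worker = {
        let group = Arc::clone(&group);
        thread::spawn(move || loop {
            // Simulated yield point: poll the kill flag before doing more work.
            check_killed(&group);
            thread::sleep(Duration::from_millis(10)); // stand-in for real work
        })
    };

    thread::sleep(Duration::from_millis(50));
    kill_other(&group); // failure propagates by flipping the shared flag
    assert!(worker.join().is_err()); // the worker panicked at a yield point
}
```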
Right now killing a task means setting a flag on it in the runtime and checking that flag before and after yielding. There is at least one race condition that could result in the task not being killed and never waking up.
Seems like we should be able to maintain the relationships between tasks just using Rust messages. This is desirable because every bit of synchronization we eliminate from the runtime makes the runtime easier to understand. There are some obstacles to implementing this, though, primarily that we need to be able to select on multiple ports. Additionally, I don't really envision how reparenting would work.
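Here is a minimal sketch of how the message-based design might look, assuming the present-day crossbeam-channel crate's select! as a stand-in for the multi-port select the runtime lacked at the time; the channel names and kill protocol are invented for illustration, and OS threads stand in for tasks.

```rust
// Assumes Cargo.toml: crossbeam-channel = "0.5"
use crossbeam_channel::{select, unbounded};
use std::thread;

fn main() {
    // One port for ordinary work, one for "you have been killed" messages.
    let (work_tx, work_rx) = unbounded::<u32>();
    let (kill_tx, kill_rx) = unbounded::<()>();

    let child = thread::spawn(move || loop {
        // Every blocking receive is conceptually a select over the work port
        // and the failure port, so a kill message can wake the task even
        // while it is blocked waiting for work.
        select! {
            recv(work_rx) -> msg => match msg {
                Ok(job) => println!("processing job {job}"),
                Err(_) => break, // all work senders dropped
            },
            recv(kill_rx) -> _ => {
                println!("kill message received, unwinding");
                break;
            }
        }
    });

    work_tx.send(1).unwrap();
    kill_tx.send(()).unwrap(); // the parent "kills" the child with a message
    child.join().unwrap();
}
```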