Unstick parent if all child tasks are done #41393
base: master
Conversation
src/julia.h
@@ -1829,8 +1829,8 @@ typedef struct _jl_task_t {
    jl_value_t *result;
    jl_value_t *logstate;
    jl_function_t *start;
    uint16_t sticky; // 0 means this Task can be migrated to a new thread
uint16 here is somewhat arbitrary. It works with uint8 too (see the saturation check), but I thought 255 child tasks is not an unreasonably large number to hit, so I went with the wider type.
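For concreteness, here is a minimal sketch of what such saturating updates could look like. This is not the PR's code: the helper names are made up, the real bookkeeping would live in the C runtime, and it assumes a UInt16 sticky_count field on Task as discussed in this thread.

```julia
# Hypothetical sketch: the counter saturates at typemax(UInt16), so a task with
# more live children than that simply stays sticky forever instead of wrapping
# around and becoming migratable by accident.
function incr_sticky_count!(t::Task)          # hypothetical helper
    c = getfield(t, :sticky_count)
    c < typemax(UInt16) && setfield!(t, :sticky_count, c + 0x0001)
    return t
end

function decr_sticky_count!(t::Task)          # hypothetical helper
    c = getfield(t, :sticky_count)
    # once saturated, stay saturated: the true number of children is no longer known
    0x0000 < c < typemax(UInt16) && setfield!(t, :sticky_count, c - 0x0001)
    return t
end
```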
This looks like it could have a lot of overhead. Is a feature like this truly needed long term, or should we just encourage making more things thread-safe?
Oh yes, I'd love to go this direction :) But I thought we'd need to keep living with code that isn't thread-safe for a while.
What's the main overhead you are thinking of? There are a few costs I can think of, but I thought they might not be so prominent compared to the task creation + scheduling overhead.
Right, the overhead is currently way way too high, so I don't want to add any more to it :)
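To make the cost comparison concrete, here is a rough sketch of where the bookkeeping would sit, reusing the hypothetical helpers above (the real change would be in the C task-creation and finalization paths, not a Julia wrapper like this): one increment when a child task is created and one paired decrement when it finishes, both small next to allocating and scheduling the child Task itself.

```julia
# Hypothetical sketch of the increment/decrement pairing, not the PR's implementation.
function spawn_child(f, parent::Task = current_task())
    incr_sticky_count!(parent)            # pin the parent while the child is alive
    body = () -> begin
        try
            f()
        finally
            decr_sticky_count!(parent)    # paired decrement once the child is done
        end
    end
    child = Task(body)
    schedule(child)
    return child
end
```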
Co-authored-by: Valentin Churavy <vchuravy@users.noreply.github.com>
Yes, it's an overhead and some code complexity. That said, I don't think #41334 is too bad either, since it gives us another motivation to recommend Threads.@spawn over @async.
I suppose the risk is that people might write algorithms (e.g. spinlocks) that depend, for deadlock-free correctness, on the ability of tasks to migrate across threads, and then adding an @async child (which pins the parent while the child is alive) would break that assumption.
#39773 restricts the use of communication APIs like channels, condition variables, and synchronization barriers, though. So, if we want to enable task migration for such "concurrent" tasks as much as possible, I think this PR is better than #41334.
A bit of an aside, but writing something like this hasn't been possible so far since there was no migration at all, right? How about documenting that such a pattern is unsupported? (Though Hyrum's law may be triggered anyway...)
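To illustrate what this buys for such "concurrent" tasks, here is a hedged behavior sketch of what this PR aims for, assuming the .sticky compat property reads as sticky_count > 0. Under #41334 the final assertion would fail, because the @async child would pin the parent permanently.

```julia
# Behavior sketch (hedged): the parent is pinned only while child tasks are alive.
t = Threads.@spawn begin
    parent = current_task()
    @assert !parent.sticky        # Threads.@spawn tasks start out migratable
    @sync begin
        @async sleep(0.01)        # an @async child makes the parent sticky...
        @assert parent.sticky     # ...while the child is still alive
    end
    @assert !parent.sticky        # ...and migratable again once all children are done
end
wait(t)
```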
    else
        return getfield(t, field)
    end
end

function setproperty!(t::Task, field::Symbol, x)
    if field === :sticky
        t.sticky_count = convert(Bool, x)
This is what prevents an @async-scheduled task from being marked as unsticky, since its sticky_count will always be >= 1, right? Maybe it would be good to add a comment here. Or a test.
This is rather a compat layer for supporting the .sticky property. An @async task is never marked as unsticky because (1) its sticky_count is initialized to 1 (in the C function jl_new_task) and (2) a decrement (if any) is always paired with a preceding increment.
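For reference, a minimal sketch of what that compat layer could look like; the hunk above is truncated, so the getproperty side and the exact conversion here are guesses rather than the PR's code.

```julia
# Hedged sketch of the .sticky compat mapping in base/task.jl.
function getproperty(t::Task, field::Symbol)
    if field === :sticky
        # reading .sticky reports whether anything currently pins the task
        return getfield(t, :sticky_count) > 0
    else
        return getfield(t, field)
    end
end

function setproperty!(t::Task, field::Symbol, x)
    if field === :sticky
        # writing .sticky maps the Bool API onto the underlying UInt16 counter
        return setfield!(t, :sticky_count, UInt16(convert(Bool, x)))
    else
        return setfield!(t, field, convert(fieldtype(Task, field), x))
    end
end
```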
Co-authored-by: Jonas Schulze <jonas.schulze7@t-online.de>
This is a possible improvement built upon #41334, implementing the idea I mentioned in #41324 (comment).
fixes #41324