
[Distributed] Remove (now) unnecessary copy of the Future in its serializer #43465

Merged 1 commit into JuliaLang:master on Dec 18, 2021
Conversation

@krynju (Contributor) commented on Dec 18, 2021

This was introduced in #42339 as a workaround to safely serialize a Future, which now contains a lock. Now that #43325 is merged, the copy is no longer necessary.

@krynju (Contributor, Author) commented on Dec 18, 2021

@tkf @vtjnash

@vchuravy vchuravy merged commit 6817551 into JuliaLang:master Dec 18, 2021
@vtjnash (Member) left a comment

This now violates thread safety, since it is not holding a lock

@krynju (Contributor, Author) commented on Dec 19, 2021

I checked this lock placement and it works fine: krynju@fb605fa

But now I'm wondering: maybe we should only take the lock when f.v is nothing? Once the future is set, no further changes to it are possible.

The lock in serialize mainly protects against serializing a half-written value while the Future is being set (or other consequences of serializing and setting at the same time), so I guess it's fine to skip it when the value is already set.
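As a sketch, the serialize-under-lock idea might look like the following. The `ToyFuture` type and its `v`/`lock` fields are illustrative assumptions, not the actual `Distributed.Future` layout:

```julia
using Serialization

# Illustrative stand-in for Distributed.Future; field names are assumptions.
mutable struct ToyFuture
    v::Union{Some{Any}, Nothing}   # nothing until the future is set
    lock::ReentrantLock            # taken by both the setter and the serializer
end
ToyFuture() = ToyFuture(nothing, ReentrantLock())

# Setting the value happens under the lock.
function set_value!(f::ToyFuture, x)
    lock(f.lock) do
        f.v === nothing || error("future already set")
        f.v = Some{Any}(x)
    end
end

# Serializing under the same lock means a concurrent set_value! can never
# mutate f.v halfway through writing it out.
function safe_serialize(io::IO, f::ToyFuture)
    lock(f.lock) do
        serialize(io, f.v)
    end
end
```

Here the lock is taken unconditionally; the discussion below is about whether it can be skipped once `f.v` is known to be set.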

@vtjnash (Member) commented on Dec 19, 2021

You cannot know that it is non-nothing until you are holding the lock, though.

@krynju (Contributor, Author) commented on Dec 19, 2021

I was thinking of checking for nothing first and then deciding whether to lock based on that observation. If by the time the lock is obtained the value has already been set, it's not a problem that we took the lock unnecessarily.

We already check for nothing at the beginning of the serializer, so this could be a slight improvement that usually skips taking the lock at the end of the function.

@vtjnash (Member) commented on Jan 12, 2022

That requires the double-checked locking pattern, and the extra complexity is probably not worth it here.
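For reference, the double-checked locking shape being discussed would look roughly like this. `DCLFuture` and its fields are hypothetical, not the real Distributed code:

```julia
using Serialization

# Hypothetical type; the `v`/`lock` field names are assumptions.
mutable struct DCLFuture
    v::Union{Some{Any}, Nothing}
    lock::ReentrantLock
end

function dcl_serialize(io::IO, f::DCLFuture)
    v = f.v
    if v !== nothing
        # Fast path: a set future is never unset again, so once we observe
        # a value it cannot change out from under the serializer.
        serialize(io, v)
    else
        lock(f.lock) do
            # Slow path: must re-read f.v under the lock, because another
            # task may have set it between the check above and lock().
            serialize(io, f.v)
        end
    end
end
```

The unsynchronized fast-path read is also where memory-ordering questions creep in, which is part of why the extra complexity may not pay off.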

@krynju (Contributor, Author) commented on Jan 26, 2022

I'll take care of this one in about a month, once I have some free time. I have an idea in mind; I'll prepare the code, and then it will be easier to discuss the overall solution.

@vtjnash (Member) commented on Mar 3, 2022

Actually, looking back, I don't know what send_add_client does here. It seems we are not concerned about race conditions: if the value gets serialized here and send_add_client also gets called, is that handled by deserialize and send_add_client, so that we neither leave a dangling reference nor get an incorrect value?

@krynju (Contributor, Author) commented on May 4, 2022

I think send_add_client just sends the remote ref to the worker, so I guess it doesn't resolve value serialization.

I added what I was talking about here: https://github.com/krynju/julia/blob/b2edc754ed6215e27e282e390ac031c4ddca756f/stdlib/Distributed/src/remotecall.jl#L367-L386

Basically, I think there is one gap: serializing an unset Future. An unset Future can be set during serialization, so we should probably protect only that case with the lock (the lock is always taken when setting the Future). But I think this is more about consistency; there shouldn't be an actual race here, since the value is set atomically, which is why I'm not sure the lock is even necessary.
