
threads: avoid deadlock from recursive lock acquire #38487

Merged Dec 4, 2020 (2 commits, into master from jn/lock-finalizers)

Conversation

vtjnash (Member) commented Nov 18, 2020

Finalizers can't safely acquire many essential locks (such as the
iolock, needed to clean up libuv objects) if they are run while another
lock is held. Therefore, inhibit all finalizers on the thread until all
locks are released (previously, this was done only for our internal locks).
@@ -61,10 +61,12 @@ Base.assert_havelock(l::SpinLock) = islocked(l) ? nothing : Base.concurrency_vio
 function lock(l::SpinLock)
     while true
         if _get(l) == 0
+            GC.enable_finalizers(false)
Reviewer (Member):

Why can't this be done just before returning?

vtjnash (Member, Author) replied:

The entire point of this bug fix is that it must be done before lock acquisition

Reviewer (Member):

For comparison, `jl_mutex_lock` only disables finalizers after `jl_mutex_wait` returns. I thought that would correspond to disabling them just after `if p == 0` here.

vtjnash (Member, Author) replied:

jl_mutex_lock can get away with doing potentially suspect things because it can ensure they are indistinguishable to the user.

maleadt added a commit to JuliaGPU/CUDA.jl that referenced this pull request Nov 24, 2020
maleadt (Member) left a comment:

Works fine with CUDA.jl (which had its own version of this).

maleadt added a commit to JuliaGPU/CUDA.jl that referenced this pull request Nov 24, 2020
vchuravy (Member) commented:

The failure on Linux 32-bit is related:

Error in testset compiler/codegen:
Test Failed at /buildworker/worker/tester_linux32/build/share/julia/test/compiler/codegen.jl:251
  Expression: was_gced
Error in testset misc:
Test Failed at /buildworker/worker/tester_linux32/build/share/julia/test/misc.jl:130
  Expression: enable_finalizers_count() == 0
   Evaluated: 1 == 0
Error in testset misc:
Test Failed at /buildworker/worker/tester_linux32/build/share/julia/test/misc.jl:131
  Expression: lock(enable_finalizers_count, l) == 1
   Evaluated: 2 == 1
Error in testset misc:
Test Failed at /buildworker/worker/tester_linux32/build/share/julia/test/misc.jl:132
  Expression: enable_finalizers_count() == 0
   Evaluated: 1 == 0
Error in testset misc:
Test Failed at /buildworker/worker/tester_linux32/build/share/julia/test/misc.jl:136
  Expression: enable_finalizers_count() == 2
   Evaluated: 3 == 2
Error in testset misc:
Test Failed at /buildworker/worker/tester_linux32/build/share/julia/test/misc.jl:138
  Expression: enable_finalizers_count() == 1
   Evaluated: 2 == 1
Error in testset misc:
Test Failed at /buildworker/worker/tester_linux32/build/share/julia/test/misc.jl:140
  Expression: enable_finalizers_count() == 0
   Evaluated: 1 == 0
Error in testset misc:
Test Failed at /buildworker/worker/tester_linux32/build/share/julia/test/misc.jl:142
  Expression: enable_finalizers_count() == 1
   Evaluated: 2 == 1
Error in testset misc:
Test Failed at /buildworker/worker/tester_linux32/build/share/julia/test/misc.jl:144
  Expression: enable_finalizers_count() == 1
   Evaluated: 2 == 1
Error in testset misc:
Test Failed at /buildworker/worker/tester_linux32/build/share/julia/test/misc.jl:146
  Expression: enable_finalizers_count() == 0
   Evaluated: 1 == 0
Error in testset misc:
Test Failed at /buildworker/worker/package_linux32/build/usr/share/julia/stdlib/v1.6/Test/src/Test.jl:667
  Expression: contains_warn(read(fname, String), $(Expr(:escape, "WARNING: GC finalizers already enabled on this thread.")))
Error in testset misc:
Test Failed at /buildworker/worker/tester_linux32/build/share/julia/test/misc.jl:165
  Expression: c[] == 1
   Evaluated: 0 == 1
Error in testset misc:
Test Failed at /buildworker/worker/tester_linux32/build/share/julia/test/misc.jl:167
  Expression: c[] == 1
   Evaluated: 0 == 1
Error in testset misc:
Test Failed at /buildworker/worker/tester_linux32/build/share/julia/test/misc.jl:169
  Expression: c[] == 100
   Evaluated: 99 == 100
Error in testset misc:
Test Failed at /buildworker/worker/tester_linux32/build/share/julia/test/misc.jl:505
  Expression: finalized

vtjnash (Member, Author) commented Nov 30, 2020

Okay, I changed the implementation, as adding an error check for that revealed a number of violators. Should be happy now?

vtjnash (Member, Author) commented Dec 1, 2020

Okay, I changed the implementation, as that was not sufficient. Should be happy now?

Unrelated, I also reduced the size of jl_task_t here by 2 pointers, to be consistent in the implementation.

vchuravy (Member) commented Dec 1, 2020

> Should be happy now?

Linux 64bit is still not happy.

vtjnash (Member, Author) commented Dec 1, 2020

Okay, I changed the implementation, as I missed a line of code, and added some assertions for it. Should be happy now?

Will squash now, as CI is green.

src/gc.c: review thread (outdated, resolved)
@vchuravy vchuravy merged commit 0253ebf into master Dec 4, 2020
@vchuravy vchuravy deleted the jn/lock-finalizers branch December 4, 2020 18:14
maleadt added a commit to JuliaGPU/CUDA.jl that referenced this pull request Dec 7, 2020
vtjnash added a commit that referenced this pull request Dec 7, 2020
(cherry-picked from 59aedd1)
vtjnash added a commit that referenced this pull request Dec 7, 2020
(cherry-picked from 59aedd1)
vtjnash added a commit that referenced this pull request Dec 8, 2020
(cherry-picked from 59aedd1)
vtjnash added a commit that referenced this pull request Dec 10, 2020
(cherry-picked from 59aedd1)
maleadt added a commit to JuliaGPU/CUDA.jl that referenced this pull request Jan 4, 2021
@vchuravy vchuravy mentioned this pull request Jan 21, 2021
vchuravy pushed a commit to JuliaLang/Distributed.jl that referenced this pull request Oct 6, 2023
…ia#38487) (JuliaLang/julia#38753)


(cherry-picked from ec8466b)
Labels: multithreading (Base.Threads and related functionality)
5 participants