doc multi-threading: fix indentation (#36560)
(cherry picked from commit 737d232)
rfourquet authored and JeffBezanson committed Jul 25, 2020
1 parent 76c232b commit e0555db
doc/src/manual/multi-threading.md (40 additions, 40 deletions)
@@ -304,47 +304,47 @@
rather pointless as a finalizer). This leads us to a bit of a conundrum.
There are a few approaches to dealing with this problem:

1. When single-threaded, code could call the internal `jl_gc_enable_finalizers`
   C function to prevent finalizers from being scheduled
   inside a critical region. Internally, this is used inside some functions (such
   as our C locks) to prevent recursion when doing certain operations (incremental
   package loading, codegen, etc.). The combination of a lock and this flag
   can be used to make finalizers safe.
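
   For instance, a minimal sketch of that pattern (the `state` and `update!`
   names are hypothetical, and it assumes the undocumented
   `GC.enable_finalizers(::Bool)` wrapper around `jl_gc_enable_finalizers`,
   whose availability and calling convention may change between Julia versions):

   ```
   const state_lock = ReentrantLock()

   function update!(state)
       # Disable finalizers *before* taking the lock, so that a finalizer
       # running on this task cannot try to re-acquire `state_lock`.
       GC.enable_finalizers(false)
       lock(state_lock)
       try
           # mutate `state` here; no finalizer can interleave with this region
       finally
           unlock(state_lock)
           GC.enable_finalizers(true)
       end
   end
   ```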

2. A second strategy, employed by Base in a couple places, is to explicitly
   delay a finalizer until it may be able to acquire its lock non-recursively.
   The following example demonstrates how this strategy could be applied to
   `Distributed.finalize_ref`:

   ```
   function finalize_ref(r::AbstractRemoteRef)
       if r.where > 0 # Check if the finalizer is already run
           if islocked(client_refs) || !trylock(client_refs)
               # delay finalizer for later if we aren't free to acquire the lock
               finalizer(finalize_ref, r)
               return nothing
           end
           try # `lock` should always be followed by `try`
               if r.where > 0 # Must check again here
                   # Do actual cleanup here
                   r.where = 0
               end
           finally
               unlock(client_refs)
           end
       end
       nothing
   end
   ```

3. A related third strategy is to use a yield-free queue. We don't currently
   have a lock-free queue implemented in Base, but
   `Base.InvasiveLinkedListSynchronized{T}` is suitable. This can frequently be a
   good strategy to use for code with event loops. For example, this strategy is
   employed by `Gtk.jl` to manage lifetime ref-counting. In this approach, we
   don't do any explicit work inside the `finalizer`, and instead add it to a queue
   to run at a safer time. In fact, Julia's task scheduler already uses this, so
   defining the finalizer as `x -> @spawn do_cleanup(x)` is one example of this
   approach (sketched below). Note however that this doesn't control which thread
   `do_cleanup` runs on, so `do_cleanup` would still need to acquire a lock. That
   doesn't need to be true if you implement your own queue, as you can explicitly
   only drain that queue from your thread.
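
   For instance, a minimal sketch of the `@spawn` variant (the `Handle`,
   `do_cleanup`, and `registry_lock` names are all hypothetical):

   ```
   using Base.Threads

   const registry_lock = ReentrantLock()

   function do_cleanup(h)
       # By the time this runs we are an ordinary scheduled task, so
       # locking (and yielding) is legal here.
       lock(registry_lock) do
           if h.open
               h.open = false
               # release the underlying resource here
           end
       end
   end

   mutable struct Handle
       open::Bool
       function Handle()
           h = new(true)
           # The finalizer only schedules the cleanup task; it never
           # locks or yields itself.
           finalizer(x -> @spawn(do_cleanup(x)), h)
           return h
       end
   end
   ```

   Because the spawned task may run on any thread, `do_cleanup` still takes a
   lock; a queue drained from only one thread of your own would remove that
   requirement.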
