
gh-115999: Enable specialization of CALL instructions in free-threaded builds #127123

Merged
29 commits merged into python:main on Dec 3, 2024

Conversation

mpage (Contributor) commented Nov 22, 2024

The CALL family of instructions was already mostly thread-safe and only required a small number of changes, which are documented below.

A few changes were needed to make CALL_ALLOC_AND_ENTER_INIT thread-safe:

  • Added _PyType_LookupRefAndVersion, which returns the type version corresponding to the returned ref.
  • Added _PyType_CacheInitForSpecialization, which takes an init method and the corresponding type version and only populates the specialization cache if the current type version matches the supplied version. This prevents caching a stale value in free-threaded builds if we race with an update to __init__ (a sketch of this check-then-cache step follows this list).
  • Only cache __init__ functions that are deferred in free-threaded builds. This ensures that the reference to __init__ that is stored in the specialization cache is valid if the type version guard in _CHECK_AND_ALLOCATE_OBJECT passes.
  • Fix a bug in _CREATE_INIT_FRAME where the frame is pushed to the stack on failure.
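
A minimal model of the check-then-cache step referenced above, written with plain C11 atomics rather than CPython's internal APIs; toy_type and cache_init_if_current are illustrative names, and a real implementation additionally has to serialize the check with concurrent type modification (noted in the comment):

#include <stdatomic.h>

typedef struct {
    _Atomic unsigned int version;    /* bumped whenever the type is mutated */
    _Atomic(void *) cached_init;     /* stands in for the __init__ cache slot */
} toy_type;

/* Publish `init` into the cache only if the type version still matches the
 * version observed when __init__ was looked up.  If another thread mutated
 * the type in the meantime, the versions differ and nothing is cached, so a
 * stale __init__ is never published.  A real implementation must also make
 * this check-then-store atomic with respect to type modification. */
static int
cache_init_if_current(toy_type *t, void *init, unsigned int seen_version)
{
    if (atomic_load_explicit(&t->version, memory_order_acquire) != seen_version) {
        return 0;    /* raced with a type update; do not cache */
    }
    atomic_store_explicit(&t->cached_init, init, memory_order_release);
    return 1;
}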

A few other miscellaneous changes were also needed:

  • Use {LOCK,UNLOCK}_OBJECT in LIST_APPEND. This ensures that the list's per-object lock is held while we are appending to it.
  • Add missing co_tlbc for _Py_InitCleanup.
  • Stop/start the world around setting the eval frame hook. This allows us to read interp->eval_frame non-atomically and preserves the behavior of _CHECK_PEP_523 documented below.

Single-threaded performance

  • Performance is improved by 3-4% on free-threaded builds.
  • Performance is neutral on default builds.

Scaling

The scaling benchmark looks about the same for this PR vs its base:

Benchmark           Base         This PR
object_cfunction    1.5x slower  1.3x slower
cmodule_function    1.5x slower  1.5x slower
mult_constant      12.5x faster  12.2x faster
generator          12.1x faster  12.1x faster
pymethod            1.8x slower  1.9x slower
pyfunction         13.6x faster  14.1x faster
module_function     1.7x slower  2.0x slower
load_string_const  13.1x faster  13.8x faster
load_tuple_const   13.0x faster  13.0x faster
create_pyobject    11.7x faster  14.1x faster
create_closure     13.4x faster  13.4x faster
create_dict        12.7x faster  12.0x faster
thread_local_read   3.6x slower  3.7x slower

Thread safety

Thread safety of each instruction in the CALL family is documented below, starting with the uops that are composed to form instructions in the family.

UOPS

The more interesting uops that warrant closer inspection are:

_CHECK_AND_ALLOCATE_OBJECT
This uop loads an __init__ method from the specialization cache of the operand (a type) if the operand's type version matches the type version stored in the inline cache. The loaded method is guaranteed to be valid because we only store deferred objects in the specialization cache and there are no escaping calls following the load (a reader-side sketch of the guard appears after the list below):

  1. The type version is cleared before the reference in the MRO to __init__ is destroyed.
  2. If the reference in (1) was the last reference then the __init__ method will be queued for deletion the next time GC runs.
  3. GC requires stopping the world, which forces a synchronizes-with operation between all threads.
  4. If the GC collects the cached __init__, then the type's version will have been updated and the update will be visible to all threads, so the guard cannot pass.
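
A reader-side model of this guard, reusing the illustrative toy_type from the earlier sketch (again plain C11 atomics, not the generated uop code): if the version check fails the uop deoptimizes, and if it passes the cached pointer can be used without taking a new reference for the reasons given in the list above.

static void *
load_cached_init_or_deopt(toy_type *t, unsigned int cached_version)
{
    if (atomic_load_explicit(&t->version, memory_order_acquire) != cached_version) {
        return NULL;    /* guard failed: deoptimize to the generic call path */
    }
    /* Safe to use without taking a new reference: only deferred objects are
     * cached, and freeing one requires a stop-the-world GC that would also
     * have invalidated the version checked above. */
    return atomic_load_explicit(&t->cached_init, memory_order_acquire);
}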

_CHECK_FUNCTION_VERSION
This uop guards that the top of the stack is a function and that its version matches the version stored in the inline cache. Instructions assume that if the guard passes, the version, and any properties verified by the version, will not change for the remainder of the instruction's execution, provided there are no escaping calls between the guard and the code that relies on it. This property is preserved in free-threaded builds: the world is stopped whenever a function's version changes.

_CHECK_PEP_523
This uop guards that a custom eval frame function is not in use. Instructions assume that if the guard passes, an eval frame function will not be set for the remainder of the instruction's execution, provided there are no escaping calls between the guard and the code that relies on it. This property is preserved in free-threaded builds: the world is stopped whenever the eval frame function is set.

The instructions are also composed of uops whose thread safety properties are easier to reason about and require less scrutiny. These are:

  • _CALL_NON_PY_GENERAL - Uses existing thread-safe APIs.
  • _CHECK_CALL_BOUND_METHOD_EXACT_ARGS - Only performs exact type checks, which are thread-safe: changing an instance's type stops the world.
  • _CHECK_FUNCTION_EXACT_ARGS - All the loads in the uop are safe to perform non-atomically: setting func->func_code stops the world, and the co_argcount attribute of code objects is immutable.
  • _CHECK_IS_NOT_PY_CALLABLE - Only performs exact type checks.
  • _CHECK_METHOD_VERSION - This loads a function from a PyMethodObject and guards that its version matches what is stored in the cache. PyMethodObjects are immutable; their fields can be accessed non-atomically. The thread safety of function version guards was already documented above.
  • _CHECK_PERIODIC - Thread safety was previously addressed as part of the 3.13 release.
  • _CHECK_STACK_SPACE - All the loads in this uop are safe to perform non-atomically: setting func->func_code stops the world, the co_framesize attribute of code objects is immutable, and tstate->py_recursion_remaining should only be mutated by the current thread.
  • _CREATE_INIT_FRAME - Uses existing thread-safe APIs.
  • _EXPAND_METHOD - Only loads from PyMethodObjects.
  • _INIT_CALL_BOUND_METHOD_EXACT_ARGS - Only loads from PyMethodObjects.
  • _INIT_CALL_PY_EXACT_ARGS - Only operates on data that isn't yet visible to other threads.
  • _PUSH_FRAME - Only manipulates tstate->current_frame and fields that are not read by other threads.
  • _PY_FRAME_GENERAL - Reads from fields that are either immutable (co_flags) or require stopping the world to change (func_code).
  • _SAVE_RETURN_OFFSET - Stores only to the frame's return_offset, which is not read by other threads.

Instructions

These instructions perform exact type checks and loads from immutable fields of PyCFunction objects:

  • CALL_BUILTIN_FAST
  • CALL_BUILTIN_FAST_WITH_KEYWORDS
  • CALL_BUILTIN_O

These instructions perform exact type checks and loads from immutable fields of PyMethodDescrObjects:

  • CALL_METHOD_DESCRIPTOR_FAST
  • CALL_METHOD_DESCRIPTOR_FAST_WITH_KEYWORDS
  • CALL_METHOD_DESCRIPTOR_NOARGS
  • CALL_METHOD_DESCRIPTOR_O

These instructions are composed of the uops documented above, and are thread-safe transitively:

  • CALL_ALLOC_AND_ENTER_INIT
  • CALL_BOUND_METHOD_EXACT_ARGS
  • CALL_BOUND_METHOD_GENERAL
  • CALL_NON_PY_GENERAL
  • CALL_PY_EXACT_ARGS
  • CALL_PY_GENERAL

These instructions load from the callable cache, which is immutable, perform exact type checks, and use existing thread-safe APIs:

  • CALL_ISINSTANCE
  • CALL_LEN
  • CALL_LIST_APPEND

These instructions use existing thread-safe APIs:

  • CALL_STR_1
  • CALL_TUPLE_1
  • CALL_TYPE_1

Finally, this instruction does not fit neatly into the categories above:

  • CALL_BUILTIN_CLASS - Performs exact type checks and loads from immutable types.

Specialization

Apart from the changes discussed earlier, specialization is already thread-safe. It inspects immutable properties (i.e. those of code objects, method descriptors, or PyCFunctions) or properties that require stopping the world to mutate (i.e. properties checked by function version guards).

_CALL_ALLOC_AND_ENTER_INIT will be addressed in a separate PR

This needs to acquire a critical section on the list.

- Modify `get_init_for_simple_managed_python_class` to return both init
  as well as the type version at the time of lookup.
- Modify caching logic to verify that the current version of the type
  matches the version at the time of lookup. This prevents potentially
  caching a stale value if we race with an update to __init__.
- Only cache __init__ functions that are deferred in free-threaded builds.
  This ensures that the borrowed reference to __init__ that is stored in
  the cache is valid if the type version guard in _CHECK_AND_ALLOCATE_OBJECT
  passes:
  1. The type version is cleared before the reference in the MRO to __init__
     is destroyed.
  2. If the reference in (1) was the last reference then the __init__ method
     will be queued for deletion the next time GC runs.
  3. GC requires stopping the world, which forces a synchronizes-with operation
     between all threads.
  4. If the GC collects the cached __init__, then the type's version will have been
     updated *and* the update will be visible to all threads, so the guard
     cannot pass.
- There are no escaping calls in between loading from the specialization cache
  and pushing the frame. This is a requirement for the default build.
@mpage changed the title from "gh-115999: Enable specialization of CALL in free-threaded builds" to "gh-115999: Enable specialization of CALL instructions in free-threaded builds" on Nov 22, 2024
@@ -850,6 +850,13 @@ def __init__(self, events):
    def __call__(self, code, offset, val):
        self.events.append(("return", code.co_name, val))

# CALL_ALLOC_AND_ENTER_INIT will only cache __init__ methods that are
# deferred. We only defer functions defined at the top-level.
A reviewer (Contributor) commented on this hunk:

Hmm... I think we should be deferring functions defined in classes, even if the classes are nested. They already require GC for collection because classes are full of cycles.

mpage (Contributor, Author) replied:

Filed #127274

@@ -484,11 +494,11 @@ _PyPerfTrampoline_Init(int activate)
         return -1;
     }
     if (!activate) {
-        tstate->interp->eval_frame = NULL;
+        set_eval_frame(tstate, NULL);
A reviewer (Contributor) commented:

Can these go through _PyInterpreterState_SetEvalFrameFunc()? It'd be nice to have just one function that modifies interp->eval_frame.

mpage (Contributor, Author) replied:

I think that should be fine (the tests pass using it). @pablogsal - do you know if we need to set tstate->eval_frame explicitly when setting / clearing the perf trampoline, or is it fine to go through _PyInterpreterState_SetEvalFrameFunc()?

A Member replied:

I don't think there is any difference here so it should be safe to just go through _PyInterpreterState_SetEvalFrameFunc().

Just remember to run the buildbots before landing because these tests only run there with all the options.

@mpage added the 🔨 test-with-buildbots label (Test PR w/ buildbots; report in status section) on Nov 26, 2024
@bedevere-bot

🤖 New build scheduled with the buildbot fleet by @mpage for commit 3e8d85e 🤖

If you want to schedule another build, you need to add the 🔨 test-with-buildbots label again.

@bedevere-bot removed the 🔨 test-with-buildbots label (Test PR w/ buildbots; report in status section) on Nov 26, 2024
@mpage mpage requested a review from colesbury November 26, 2024 00:49
mpage (Contributor, Author) commented Nov 26, 2024

Looks like the nogil refleaks buildbots are surfacing a real issue; digging into that. The other buildbot failures look like they're unrelated to this PR:

Fix a bug in `_CREATE_INIT_FRAME` where the frame is pushed to the stack on failure.
`_CREATE_INIT_FRAME` pushes a pointer to the new frame onto the stack for consumption
by the next uop. When pushing the frame fails, we do not want to push the result (NULL)
to the stack because it is not a valid stackref and will be exposed to the generic
error handling code in the interpreter loop. This worked in default builds because
`PyStackRef_NULL` is `NULL` in default builds, which is not the case in free-threaded
builds.
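
As an aside, the reason pushing the NULL result is harmless in default builds but not in free-threaded builds is that the null stack-reference sentinel is the all-zero pointer only in the former. A toy model of that distinction (the tag value and names below are illustrative, not CPython's actual encoding):

#include <stdint.h>
#include <stdio.h>

typedef struct { uintptr_t bits; } toy_stackref;

/* In the toy free-threaded encoding the null sentinel carries a tag bit, so
 * its bit pattern is not 0 and it must never leak into code that treats the
 * all-zero pattern as the only invalid value. */
#define TOY_TAG            ((uintptr_t)1)
#define TOY_STACKREF_NULL  ((toy_stackref){ .bits = TOY_TAG })

int main(void)
{
    toy_stackref r = TOY_STACKREF_NULL;
    printf("null sentinel is all-zero? %s\n", r.bits == 0 ? "yes" : "no");
    return 0;
}
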
mpage (Contributor, Author) commented Nov 26, 2024

!buildbot nogil refleak

@bedevere-bot

🤖 New build scheduled with the buildbot fleet by @mpage for commit de0e2ee 🤖

The command will test the builders whose names match following regular expression: nogil refleak

The builders matched are:

  • AMD64 Fedora Rawhide NoGIL refleaks PR
  • PPC64LE Fedora Rawhide NoGIL refleaks PR
  • aarch64 Fedora Rawhide NoGIL refleaks PR
  • AMD64 CentOS9 NoGIL Refleaks PR

colesbury (Contributor) left a comment:

Looks good to me. Two small comments below.

    can_cache = can_cache && _PyObject_HasDeferredRefcount(init);
#endif
    if (can_cache) {
        FT_ATOMIC_STORE_PTR_RELAXED(type->_spec_cache.init, init);
colesbury (Contributor) commented on this hunk:

I think this should be "release" so that the contents of init are visible before init is written to type->_spec_cache.init.

(There might be other synchronization of the thread-local specialization process that makes relaxed okay, but it seems easier to reason about to me if it's at least "release")
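
For context, the difference between the two orderings, shown with plain C11 atomics rather than the FT_ATOMIC_* macros (the names below are illustrative):

#include <stdatomic.h>

typedef struct { int flags; } init_info;

static init_info global_init;                 /* contents written before publication */
static _Atomic(init_info *) cache_slot;       /* stands in for type->_spec_cache.init */

void publish_init(void)
{
    global_init.flags = 42;                   /* initialize the object's contents */
    /* memory_order_release guarantees the write to .flags above is visible to
     * any thread that later reads the pointer with an acquire load.  A relaxed
     * store would publish the pointer without ordering the earlier writes, so
     * a reader could observe "uninitialized" contents unless some other
     * synchronization already provides the ordering. */
    atomic_store_explicit(&cache_slot, &global_init, memory_order_release);
}

init_info *read_init(void)
{
    /* Pairs with the release store above. */
    return atomic_load_explicit(&cache_slot, memory_order_acquire);
}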

@@ -3997,7 +3999,11 @@ dummy_func(
assert(self_o != NULL);
DEOPT_IF(!PyList_Check(self_o));
STAT_INC(CALL, hit);
#ifdef Py_GIL_DISABLED
int err = _PyList_AppendTakeRefAndLock((PyListObject *)self_o, PyStackRef_AsPyObjectSteal(arg));
colesbury (Contributor) commented on this hunk:

I think we should use the DEOPT_IF(!LOCK_OBJECT(...)) macros instead of adding the new _PyList_AppendTakeRefAndLock function.

I'm a bit uneasy about the critical section after the DEOPT_IF guards. The critical section may block and allow a stop-the-world operation to occur in between the check of the guards and the list append. I'm concerned that it might allow some weird things to occur (like swapping the type of objects) that would invalidate the guards.

In other words, this adds a potentially escaping call in a place where we previously didn't have one.

The DEOPT_IF(!LOCK_OBJECT(...)) avoids this because it doesn't block (and doesn't detach the thread state).
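
A standalone sketch of the non-blocking shape being suggested, using a C11 try-lock where CPython would use the per-object LOCK_OBJECT macro (toy_list and append_specialized are illustrative names):

#include <stdbool.h>
#include <threads.h>

typedef struct {
    mtx_t lock;          /* stands in for the per-object lock */
    /* ... list storage would live here ... */
} toy_list;

/* Try to take the per-object lock without blocking.  If another thread holds
 * it, fall back ("deopt") to the generic path instead of waiting; because the
 * specialized path never blocks, there is no window for a stop-the-world
 * pause between the guards and the append, mirroring the concern above. */
static bool
append_specialized(toy_list *list)
{
    if (mtx_trylock(&list->lock) != thrd_success) {
        return false;    /* DEOPT: take the unspecialized path */
    }
    /* ... perform the append while the per-object lock is held ... */
    mtx_unlock(&list->lock);
    return true;
}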

mpage (Contributor, Author) commented Dec 3, 2024

Test failure is #127421

@mpage mpage merged commit dabcecf into python:main Dec 3, 2024
55 checks passed
@mpage mpage deleted the gh-115999-tlbc-call branch December 3, 2024 19:23