Avoid deadlock caused by generational GC and object finalizers #720

Merged
Changes from 3 commits
5 changes: 3 additions & 2 deletions loguru/_handler.py
@@ -5,7 +5,7 @@
from threading import Thread

from ._colorizer import Colorizer
from ._locks_machinery import create_handler_lock
from ._locks_machinery import HandlerLockNotReentrant, create_handler_lock


def prepare_colored_format(format_, ansi_level):
@@ -175,7 +175,8 @@ def emit(self, record, level_id, from_decorator, is_raw, colored_message):
                    self._queue.put(str_record)
                else:
                    self._sink.write(str_record)

        except HandlerLockNotReentrant:
Contributor Author:

I think this shouldn't be re-raised.

If we swallow it here, we don't emit the log, but the object finalizer can complete whatever else it needs to do. If we raise it, it will likely still be swallowed (as is the case with finalizers), and the finalizer will stop.

Owner:

So, I've been wondering about this, and I think we should not catch HandlerLockNotReentrant specifically. Instead, it can be handled by the except Exception: block below, with conditional re-raising depending on the catch argument.

It's true that it can cause the __del__ implementation to fail if the error is re-raised (because it won't be able to complete), but that's the expected behavior. Using the logger with catch=False means it can fail for various reasons: because the lock is already in use, or because the custom sink contains a bug, for example.

Although the __del__ documentation specifies that raised exceptions are ignored by CPython, they do cause a warning to be displayed, so I don't think they will go unnoticed or be difficult to debug.

Finally, I think that this new safety measure will be triggered mostly when a user reuses the logger in a sink. In such a case, the application should fail if the user specified catch=False.
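As a quick standalone illustration of the CPython behaviour mentioned above (this snippet is not part of the diff): an exception raised inside __del__ is not propagated, but a warning is printed to stderr, so the failure remains visible.

class Noisy:
    def __del__(self):
        raise RuntimeError("boom")


Noisy()  # Refcount drops to zero immediately, so __del__ runs and raises.
# CPython reports "Exception ignored in: <function Noisy.__del__ at 0x...>"
# together with the traceback on stderr, but execution continues.
print("still running")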

Contributor Author:

👍

            self._error_interceptor.print(record)
        except Exception:
            if not self._error_interceptor.should_catch():
                raise
37 changes: 35 additions & 2 deletions loguru/_locks_machinery.py
@@ -2,13 +2,46 @@
import threading
import weakref


class HandlerLockNotReentrant(RuntimeError):
    pass


class _HandlerLockWrapper:
Owner:

Can we use @contextlib.contextmanager for simplification?

Contributor Author (@jacksmith15, Oct 10, 2022):

Yes we can, but only if we separate the Lock from this logic. E.g.

with _acquire_protected(self._lock):
    ...

This works given your suggestion to avoid this behaviour on the queued writer.

An alternative is to keep the wrapper object and expose a contextlib-style interface like this:

with self._lock.acquire(raise_on_reenter=True):
    ...

That said, acquire is a poor choice of method name given its meaning on threading.Lock, and I can't think of a better one.
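For illustration, here is a minimal sketch of the first option, with the re-entrancy bookkeeping kept in a module-level thread-local; the names _acquire_protected and _local are hypothetical and not taken from the final diff.

import contextlib
import threading

# Per-thread record of which handler locks are already held by the current
# thread, keyed by the id of the lock object.
_local = threading.local()


@contextlib.contextmanager
def _acquire_protected(lock):
    held = getattr(_local, "held", None)
    if held is None:
        held = _local.held = set()
    if id(lock) in held:
        raise RuntimeError(
            "Could not acquire internal lock because it was already in use (deadlock avoided)."
        )
    held.add(id(lock))
    lock.acquire()
    try:
        yield
    finally:
        lock.release()
        held.discard(id(lock))

Call sites would then read with _acquire_protected(self._lock): as in the snippet above, and the wrapper class would no longer be needed.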

"""Lock wrapper for the handler.

This avoids thread-local deadlock by propagating an exception rather than waiting forever.
See: https://github.com/Delgan/loguru/issues/712
"""

def __init__(self, lock):
self._lock = lock
self._local = threading.local()
self._local.acquired = False

def __enter__(self):
if getattr(self._local, "acquired", False):
raise HandlerLockNotReentrant(
Owner:

Given my other comment about not handling HandlerLockNotReentrant specifically, I think we can just raise a RuntimeError here.

Also, given that you suggested protecting all methods, I would make the message a bit more generic (because the error won't necessarily be raised when we "tried to emit a log"). For example:

raise RuntimeError(
    "Could not acquire internal lock because it was already in use (deadlock avoided). "
    "This likely happened because the logger was re-used inside a sink, a signal handler "
    "or a '__del__' method. This is not permitted because the logger and its handlers are "
    "not re-entrant."
)

Contributor Author:

👍

"Tried to emit a log in the process of handling a log. Log handler is not "
"re-entrant."
)
self._local.acquired = True
self._lock.acquire()
return

def __exit__(self, exc_type, exc_value, traceback):
self._lock.release()
self._local.acquired = False
return


if not hasattr(os, "register_at_fork"):

    def create_logger_lock():
        return threading.Lock()

    def create_handler_lock():
        return threading.Lock()
        return _HandlerLockWrapper(threading.Lock())
Contributor Author:

The approach here changes the behaviour for the handler lock in general. So this applies not only to Handler.emit, but also to Handler.stop and Handler.complete_async.

I'm not sure whether that is correct. If it is, do we need additional handling in those methods?

If this behaviour should only apply to emit, I think the solution shouldn't touch this file at all, and I will move it to _handler.py.

Owner (@Delgan, Oct 9, 2022):

Indeed, we should also protect the other methods that use the lock; this is a very good idea!

However, I'm thinking it might be better to explicitly wrap the lock on use instead of on creation. I prefer to keep the context manager separate from the lock itself because in some cases the protection is not required. In particular, I'm thinking of the lock inside the _queued_writer thread, which does not need to be protected.

I'm thinking of replacing "with self._lock:" with "with self._protected_lock():", for example. There is certainly a risk that we forget to use it somewhere later, but I'll keep an eye on it and I think it should be fine.

Contributor Author:

👍
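A rough sketch of the on-use wrapping described above, where each handler method opts in via a context-manager method; the skeleton Handler class and the _protected_lock name are illustrative simplifications, not the final implementation.

from contextlib import contextmanager
import threading


class Handler:
    def __init__(self):
        self._lock = threading.Lock()
        self._local = threading.local()

    @contextmanager
    def _protected_lock(self):
        # Detect re-entrant use from the same thread and fail fast instead of deadlocking.
        if getattr(self._local, "lock_acquired", False):
            raise RuntimeError(
                "Could not acquire internal lock because it was already in use (deadlock avoided)."
            )
        self._local.lock_acquired = True
        try:
            with self._lock:
                yield
        finally:
            self._local.lock_acquired = False

    def emit(self, record):
        with self._protected_lock():
            ...  # format the record and write it to the sink

    def stop(self):
        with self._protected_lock():
            ...  # close the sink

The lock used by the internal _queued_writer thread would keep using the plain "with self._lock:" form, since that code path never re-enters the handler.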


else:
    # While forking, we need to sanitize all locks to make sure the child process doesn't run into
@@ -47,4 +80,4 @@ def create_logger_lock():
    def create_handler_lock():
        lock = threading.Lock()
        handler_locks.add(lock)
        return lock
        return _HandlerLockWrapper(lock)
65 changes: 65 additions & 0 deletions tests/test_locks.py
@@ -0,0 +1,65 @@
import gc

import pytest

from loguru import logger


class CyclicReference:
"""A minimal cyclic reference.

Cyclical references are garbage collected using the generational collector rather than
via reference counting. This is important here, because the generational collector runs
periodically, meaning that it is hard to predict when the stack will be overtaken by a
garbage collection process - but it will almost always be when allocating memory of some
kind.

When this object is garbage-collected, a log will be emitted.
"""

def __init__(self, _other: "CyclicReference" = None):
self.other = _other or CyclicReference(_other=self)

def __del__(self):
logger.info("tearing down")


@pytest.fixture()
def _remove_cyclic_references():
"""Prevent cyclic isolate finalizers bleeding into other tests."""
try:
yield
finally:
for generation in range(3):
gc.collect(generation=generation)


def test_no_deadlock_on_generational_garbage_collection(_remove_cyclic_references):
"""Regression test for https://github.com/Delgan/loguru/issues/712

Assert that deadlocks do not occur when a cyclic isolate containing log output in
finalizers is collected by generational GC, during the output of another log message.
"""

# GIVEN a sink which assigns some memory
output = []

def sink(message):
# This simulates assigning some memory as part of log handling. Eventually this will
# trigger the generational garbage collector to take over here.
# You could replace it with `json.dumps(message.record)` to get the same effect.
_ = [dict() for _ in range(20)]
Owner (@Delgan, Oct 9, 2022):

I actually don't understand why we need to trigger the garbage collector this way instead of calling gc.collect().

Contributor Author (@jacksmith15, Oct 10, 2022):

Functionally that works!

I based this on the example in the issue, and I avoided explicitly calling gc.collect in order to demonstrate how this can be triggered reliably by seemingly innocuous operations.

In short, I think this demonstrates how easy it would be to trigger GC at this point, for readers who are not familiar with how the generational GC works.

Happy to switch to gc.collect if you'd prefer?

Owner:

Given the unpredictable nature of the garbage collector, I would indeed prefer to call gc.collect() to guarantee the test behaves as expected regardless of the gc configuration and possible changes to the gc implementation in the future.

It's a good idea to give a concrete example, though. I would suggest leaving it as a comment and/or linking to the related ticket you created.


        # Actually write the message somewhere
        output.append(message)

    logger.add(sink, colorize=False)

    # WHEN there are cyclic isolates in memory which log on GC
    # AND logs are produced long enough to trigger generational GC
    for _ in range(1000):
        CyclicReference()
        logger.info("test")

    # THEN deadlock should not be reached
    assert True
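For reference, a sketch of the variant suggested in the review above: triggering collection explicitly with gc.collect() inside the sink instead of relying on allocations, which makes the test deterministic. This is illustrative only and not part of this diff.

import gc

output = []


def sink(message):
    # Collect explicitly so that finalizers of any pending cyclic isolates run
    # while the handler lock is held by this emit call.
    gc.collect()
    output.append(message)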