Worker can no longer detect faulty dependencies causing infinite loop #4439

Open
fjetter opened this issue Jan 19, 2021 · 3 comments
Labels
bug (Something is broken), deadlock (The cluster appears to not make any progress), stability (Issue or feature related to cluster stability, e.g. deadlock)

Comments

@fjetter (Member) commented Jan 19, 2021

What happened:

When a dependency is broken, e.g. its deserialization fails, workers can no longer detect this failure and keep retrying indefinitely instead of raising an exception.

What you expected to happen:

The computation terminates with an appropriate exception.

Minimal Complete Verifiable Example:

import pytest

from distributed import Nanny
from distributed.utils_test import gen_cluster


@gen_cluster(client=True, Worker=Nanny, timeout=4600)
async def test_get_data_faulty_dep(c, s, a, b):
    """This test creates a broken dependency and forces serialization by
    requiring it to be submitted to another worker. The computation should
    eventually finish by flagging the dep as bad and raising an appropriate
    exception.
    """

    class BrokenDeserialization:
        def __setstate__(self, *state):
            # Deserializing this object always fails
            raise AttributeError()

        def __getstate__(self, *args):
            return ""

    def create():
        return BrokenDeserialization()

    def collect(*args):
        return args

    fut1 = c.submit(create, workers=[a.name])

    fut2 = c.submit(collect, fut1, workers=[b.name])

    with pytest.raises(RuntimeError, match="Could not find dependencies for collect-"):
        await fut2.result()
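
Assuming the snippet above is saved in a test module inside a distributed checkout (the path below is just a placeholder), it can be run with pytest as usual; with the bug present, fut2.result() never raises and the test hangs until the gen_cluster timeout expires:

python -m pytest path/to/test_module.py::test_get_data_faulty_dep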

Anything else we need to know?:

I tried addressing this issue in #4360 but I believe this is difficult to resolve without a more fundamental rework of the way task states are managed.

I explained the cause of this infinite loop in #4360 (comment): it is a race condition between the worker-internal missing-dependency handler and the scheduler escalation. The scheduler escalation triggers a release and the worker forgets the dependency. TaskState.suspicious_count is therefore reset and the exception is never raised.
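
A minimal, self-contained model of that interleaving (the class, handler names, and threshold below are made up for illustration; this is not distributed's actual worker code):

# Minimal model of the race (hypothetical names and threshold, not
# distributed's actual classes or handlers).

MAX_SUSPICIOUS = 3  # after this many failed fetches the worker should give up


class WorkerTaskModel:
    def __init__(self, key):
        self.key = key
        self.suspicious_count = 0

    def dependency_missing(self):
        # Worker-internal missing-dep handler: count the failure and raise
        # once the threshold is exceeded.
        self.suspicious_count += 1
        if self.suspicious_count > MAX_SUSPICIOUS:
            raise RuntimeError(f"Could not find dependencies for {self.key}")


tasks = {}


def handle_missing_dep(key):
    task = tasks.setdefault(key, WorkerTaskModel(key))
    task.dependency_missing()


def handle_release(key):
    # Scheduler escalation: the worker forgets the task entirely, including
    # its suspicious_count, so all progress towards the threshold is lost.
    tasks.pop(key, None)


# This interleaving never raises, because every release resets the counter:
for _ in range(100):
    handle_missing_dep("collect-123")  # suspicious_count -> 1
    handle_release("collect-123")      # task forgotten, counter discarded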

I believe resolving this requires introducing additional task states on the worker side (xref #4413; similar to released/forgotten on the scheduler) so that a task's meta information is only forgotten once the scheduler allows this to happen. Alternatively, the scheduler could keep track of these suspicious tasks itself, relieving the worker of the responsibility to track them.
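
As a rough sketch of the first option, purely for illustration and not the design from #4413: the worker would carry a task's metadata, including suspicious_count, through a released state and only drop it once the scheduler confirms the task may be forgotten.

# Purely illustrative, not distributed's actual worker state machine.
from enum import Enum, auto


class WorkerTaskStateKind(Enum):
    WAITING = auto()
    FETCHING = auto()
    MEMORY = auto()
    RELEASED = auto()   # scheduler requested a release; metadata is retained
    FORGOTTEN = auto()  # scheduler confirmed; metadata may now be dropped


class WorkerTask:
    def __init__(self, key):
        self.key = key
        self.state = WorkerTaskStateKind.WAITING
        self.suspicious_count = 0

    def release(self):
        # Keep suspicious_count across the release so repeated fetch failures
        # still accumulate towards the threshold.
        self.state = WorkerTaskStateKind.RELEASED

    def forget(self, scheduler_confirmed):
        # Only drop the task's metadata once the scheduler allows it.
        if scheduler_confirmed:
            self.state = WorkerTaskStateKind.FORGOTTEN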

cc @mrocklin @gforsyth

fjetter added the bug label on Jan 19, 2021
@mrocklin (Member) commented Jan 19, 2021 via email

@mrocklin (Member) commented Jan 19, 2021 via email

@fjetter (Member, Author) commented Jan 19, 2021

No, the nanny is not necessary; it is a relic of my debugging sessions. When using a debugger it was often quite helpful, since I could pause individual worker processes to observe the system in a more natural state and/or provoke timing-sensitive race conditions. (Disclaimer: my brain almost exploded while doing so, but it was really helpful for tracking down some of the issues in the past weeks :) )

Same goes for the timeout: a timeout of 4600 seconds is not otherwise sensible and should be removed.
