
Fix deadlock caused by an erred task (executing->cancelled->error) #5503

Merged (2 commits) on Nov 18, 2021

Conversation

@fjetter (Member) commented Nov 8, 2021

This fixes a deadlock caused by an erred task not freeing its slot on the threadpool.

Supersedes #5501 and #5500

Closes #5497

The most notable change compared to the other proposed solutions is that this one includes tests. Writing the tests also revealed problems with how _put_key_in_memory is implemented, which led me to factor the exception handling out. Mid- to long-term, I believe this should be addressed by changing the zict buffer to not raise on spill/serialization errors; it could instead simply keep the item in memory, even if it is too large. However, that's well beyond the scope of this fix.
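
For illustration, a rough sketch of the executing -> cancelled -> error sequence from the title (hypothetical and timing-dependent; not the PR's actual test, which lives in the diff):

    import time
    from distributed import Client

    def slow_then_err():
        time.sleep(2)
        raise RuntimeError("boom")  # raises after its future was released

    if __name__ == "__main__":
        with Client(n_workers=1, threads_per_worker=1) as client:
            fut = client.submit(slow_then_err)
            time.sleep(0.5)  # let the task start executing
            del fut          # release the future: executing -> cancelled
            # The task then finishes by raising (cancelled -> error).
            # Before this fix, that path could leak the threadpool slot,
            # so this follow-up task would hang on a one-thread worker.
            assert client.submit(sum, [1, 2, 3]).result() == 6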

Comment on lines +2854 to 2858
    msg = error_message(e)
    for k in self.in_flight_workers[worker]:
        ts = self.tasks[k]
        recommendations[ts] = tuple(msg.values())
    raise
@fjetter (Member, Author):

This unhandled exception was also capable of causing a deadlock; that is fixed now as well.
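
As a standalone illustration of that pattern (the class and the simulated failure are hypothetical; only the except-block body mirrors the snippet above):

    from distributed.core import error_message

    class MiniWorkerState:
        # Hypothetical stand-in for the worker: two keys in flight from one peer.
        def __init__(self):
            self.in_flight_workers = {"tcp://peer:1234": ["x", "y"]}
            self.tasks = {"x": "ts-x", "y": "ts-y"}

        def gather(self, worker):
            recommendations = {}
            try:
                raise OSError("simulated comm failure while gathering")
            except Exception as e:
                msg = error_message(e)
                # Recommend an "error" transition for every task in flight
                # from this worker, so none of them stays stuck in-flight.
                for k in self.in_flight_workers[worker]:
                    ts = self.tasks[k]
                    recommendations[ts] = tuple(msg.values())
                raise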

Comment on lines -2547 to +2624
    - try:
    -     self.data[ts.key] = value
    - except Exception as e:
    -     msg = error_message(e)
    -     ts.exception = msg["exception"]
    -     ts.traceback = msg["traceback"]
    -     recommendations[ts] = ("error", msg["exception"], msg["traceback"])
    -     return recommendations, []
    + self.data[ts.key] = value
@fjetter (Member, Author):

Exception handling is now done by the caller. That's ugly, but everything else would likely end up even uglier. The reasoning is explained in the docstring above.
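
A sketch of the caller-side shape this implies (illustrative names; not the exact PR code):

    from distributed.core import error_message

    def put_key_in_memory_guarded(worker, ts, value, recommendations):
        # _put_key_in_memory may now raise (e.g. when spilling or
        # serialization fails); the caller maps that failure to an
        # "error" recommendation instead of handling it in place.
        try:
            worker._put_key_in_memory(ts, value)
        except Exception as e:
            msg = error_message(e)
            ts.exception = msg["exception"]
            ts.traceback = msg["traceback"]
            recommendations[ts] = ("error", msg["exception"], msg["traceback"])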

@fjetter (Member, Author):

IMHO, the buffer should never raise; it should handle this gracefully by not spilling. Apart from being more work, that's also a change of behaviour and an API question, which is why I didn't feel comfortable mixing such a change into this PR.
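
A minimal sketch of that suggested behaviour (not implemented in this PR; zict's real Buffer has a different interface and eviction policy):

    from collections.abc import MutableMapping

    class NonRaisingBuffer(MutableMapping):
        # A zict.Buffer-like mapping that keeps a value in fast (in-memory)
        # storage when spilling it to slow (on-disk) storage fails, instead
        # of raising out of __setitem__.
        def __init__(self, fast, slow, should_spill):
            self.fast = fast                  # in-memory dict-like
            self.slow = slow                  # on-disk dict-like
            self.should_spill = should_spill  # predicate: spill this value?

        def __setitem__(self, key, value):
            if self.should_spill(value):
                try:
                    self.slow[key] = value
                    return
                except Exception:
                    pass  # spill/serialization failed: keep it in memory
            self.fast[key] = value

        def __getitem__(self, key):
            try:
                return self.fast[key]
            except KeyError:
                return self.slow[key]

        def __delitem__(self, key):
            deleted = False
            for mapping in (self.fast, self.slow):
                if key in mapping:
                    del mapping[key]
                    deleted = True
            if not deleted:
                raise KeyError(key)

        def __iter__(self):
            yield from self.fast
            yield from self.slow

        def __len__(self):
            return len(self.fast) + len(self.slow)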

@@ -2488,70 +2559,69 @@ def ensure_communicating(self):
            for el in skipped_worker_in_flight:
                heapq.heappush(self.data_needed, el)

    def get_task_state_for_scheduler(self, ts):
@fjetter (Member, Author):

This function was awkward after the refactoring. Instead, there is now a single place where the error message is generated, and the task-finished message is factored into a dedicated private function. Ultimately, we should also have a single place where that message is generated, but that's currently not possible without more work.
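
Hypothetical shapes of the two builders described above (method names and fields are illustrative; the real messages carry more metadata):

    def _get_task_erred_msg(self, ts):
        # single place where the error payload for the scheduler is built
        return {
            "op": "task-erred",
            "key": ts.key,
            "exception": ts.exception,
            "traceback": ts.traceback,
        }

    def _get_task_finished_msg(self, ts):
        # task-finished factored into a dedicated private function
        return {
            "op": "task-finished",
            "key": ts.key,
            "nbytes": ts.nbytes,
            "type": ts.type,
        }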

@fjetter fjetter self-assigned this Nov 12, 2021
@jrbourbeau jrbourbeau mentioned this pull request Nov 18, 2021
@jrbourbeau (Member) left a comment

Offline, @fjetter mentioned that he is confident in the changes here and that it would be good to include them in the upcoming release (xref dask/community#206). To that end, I'm planning to merge this PR shortly if there are no further comments.

@crusaderky (Collaborator):

The two new tests are failing in main
e.g. https://github.com/dask/distributed/runs/4257423724?check_suite_focus=true

@jrbourbeau (Member):

Thanks for reporting @crusaderky -- tracking over in #5527

Development

Successfully merging this pull request may close this issue: "Deadlock: subsequent computations deadlock after a task has errored" (#5497)

3 participants