Integration tests for spilling #229
Conversation
"DASK_DISTRIBUTED__WORKER__CONNECTIONS__INCOMING": "1", | ||
"DASK_DISTRIBUTED__WORKER__CONNECTIONS__OUTGOING": "1", |
We need to reduce the number of incoming and outgoing connections to avoid getting the worker oom-killed. With the default chunk size of 100 MiB from dask.array, having 4 incoming and 4 outgoing connections at the same time means that we might use up to 800 MiB for communications. With the (previous) default of a t3.medium machine, this appears to be ~25% of the actual memory that we can use. Together with the misconfigured Worker.memory_limit (coiled/feedback#185), we reliably trigger the oom-killer when running the workload without the adjusted config. See dask/distributed#6208 for an existing upstream issue.
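For reference, a minimal sketch of what these overrides amount to, assuming the standard mapping from DASK_* environment variables to dask config keys (double underscores become dots). The environment variables are presumably used here because they can be injected into the remote worker processes at startup, whereas dask.config.set only changes configuration in the process where it runs:

```python
# Sketch, not taken from this PR: the two environment variables above map to
# these dask config keys under the standard DASK_* naming convention.
import dask

dask.config.set({
    "distributed.worker.connections.incoming": 1,
    "distributed.worker.connections.outgoing": 1,
})

# Back-of-the-envelope estimate from the comment above: 100 MiB chunks with
# 4 incoming plus 4 outgoing transfers in flight can tie up ~800 MiB in
# communication buffers alone.
chunk_size_mib = 100
concurrent_transfers = 4 + 4
peak_comm_mib = chunk_size_mib * concurrent_transfers  # ~800 MiB
```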
@hendrikmakait is this safe to remove now that coiled/feedback#185 is resolved?
No, there is an ongoing effort in dask/distributed#6208 to resolve the underlying issue. coiled/feedback#185 made this even worse.
Thanks. I think we may need to skip this test for a while until this is a bit more stable (cf. #280).
Thanks @hendrikmakait!
Closes #136