Deadlock behavior seen with Lustre during memory allocation #15786
Comments
KM_NOSLEEP makes sense only if the caller is prepared to handle allocation failures, as with packet loss in a network stack. ZFS cannot simply return ENOMEM at random, so it would not be a fix but a permanent disaster.
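A minimal sketch of what "ready to handle allocation errors" means in practice (a hypothetical caller for illustration, not ZFS source):

```c
/*
 * Hypothetical illustration, not ZFS source: KM_NOSLEEP is only
 * safe when the caller has a recovery path for a NULL return.
 */
#include <sys/kmem.h>
#include <sys/errno.h>

/* Network-style caller: failure is tolerable, just drop the packet. */
static int
rx_one_packet(size_t len)
{
	void *pkt = kmem_zalloc(len, KM_NOSLEEP);

	if (pkt == NULL)
		return (ENOMEM);	/* drop; the peer will retransmit */

	/* ... process packet ... */
	kmem_free(pkt, len);
	return (0);
}

/*
 * A transactional caller like sa_update() has no such out: it cannot
 * "drop" a dirty record, so it must use KM_SLEEP and block until the
 * allocation succeeds.
 */
```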
This same issue comes up in quite a few places in the ZFS code. To handle it we added two functions, spl_fstrans_mark() and spl_fstrans_unmark(). It looks to me like the best fix here will be to update the Lustre code to use these wrappers in any area where a deadlock like this is possible.
Thanks, will check out spl_fstrans_mark()/spl_fstrans_unmark().
Agreed. Modifying Lustre to wrap its calls into ZFS with spl_fstrans_mark()/spl_fstrans_unmark(), at the points where Lustre cannot safely recurse into itself, seems like the right approach.
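A minimal sketch of such a wrapper; the helper name osd_sa_update_nofs is hypothetical, while spl_fstrans_mark()/spl_fstrans_unmark() are the SPL functions named above:

```c
/* Sketch of a Lustre-side wrapper; the function name is hypothetical. */
#include <sys/kmem.h>	/* fstrans_cookie_t, spl_fstrans_mark() */
#include <sys/dmu.h>	/* dmu_tx_t */
#include <sys/sa.h>	/* sa_update() */

static int
osd_sa_update_nofs(sa_handle_t *hdl, sa_attr_type_t attr,
    void *buf, uint32_t len, dmu_tx_t *tx)
{
	fstrans_cookie_t cookie;
	int rc;

	/*
	 * Mark this task as inside a filesystem transaction, so any
	 * KM_SLEEP allocation under sa_update() drops __GFP_FS and
	 * direct reclaim cannot re-enter Lustre/ZFS on this thread.
	 */
	cookie = spl_fstrans_mark();
	rc = sa_update(hdl, attr, buf, len, tx);
	spl_fstrans_unmark(cookie);

	return (rc);
}
```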
Submitted https://review.whamcloud.com/c/fs/lustre-release/+/56442 which implements the needed Lustre changes.
System information
Describe the problem you're observing
A deadlock is seen with Lustre using ZFS on low-memory systems. The issue occurs when Lustre calls ZFS APIs to write or update data: a memory allocation inside the API enters direct reclaim to free pages inline, and that reclaim in turn calls back into Lustre, producing the deadlock. The stack trace posted below shows the situation: a Lustre RPC thread calls sa_update(), which calls spl_kmem_zalloc(); spl_kmem_zalloc() tries to free memory inline, re-enters Lustre, and deadlocks.
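For context, a minimal sketch of the generic Linux mechanism that breaks this kind of reclaim cycle; memalloc_nofs_save()/memalloc_nofs_restore() are real kernel APIs, and SPL's spl_fstrans_mark() is built on the same idea:

```c
/*
 * Illustration of the kernel-side mechanism, not Lustre/ZFS source.
 * memalloc_nofs_save() flags the current task so that any allocation
 * in the scoped section behaves as GFP_NOFS: reclaim may still run,
 * but it will not call back into filesystem code on this thread.
 */
#include <linux/slab.h>
#include <linux/sched/mm.h>

static void
nofs_scope_example(void)
{
	unsigned int nofs;
	void *buf;

	nofs = memalloc_nofs_save();

	/* GFP_KERNEL is implicitly degraded to GFP_NOFS here. */
	buf = kmalloc(4096, GFP_KERNEL);
	kfree(buf);

	memalloc_nofs_restore(nofs);
}
```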
Describe how to reproduce the problem
The issue happens on systems with low memory. Another trigger condition is having a Lustre client mounted on the Lustre metadata server. The issue does not occur if all memory allocations from ZFS are modified to use KM_NOSLEEP, which avoids the inline reclaim. Would there be implications to using KM_NOSLEEP for all ZFS allocations, or is there a better way to avoid running into this scenario?
Include any warning/errors/backtraces from the system logs