qBittorrent does not respect cgroup memory limits, resulting in non-OOM-related kernel panic #15463
Comments
I'm currently testing disabling the OOM killer for the container. I'm not sure how qBittorrent will handle malloc failing, but I'll reply here with the result.
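For anyone following along, a minimal sketch of that experiment using Docker's documented flags; the image name and limit value are placeholders, not taken from this report:

```sh
# Cap the container at 4 GiB and ask the kernel not to OOM-kill it.
# --oom-kill-disable only takes effect on cgroups v1 and should only be
# used together with a memory limit, otherwise the host can be starved.
docker run -d \
  --memory=4g \
  --oom-kill-disable \
  --name qbittorrent \
  your-qbittorrent-image    # placeholder image name
```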
A random question from a non-developer:
My understanding of out-of-memory conditions on Linux is that a process requesting memory aggressively enough can trigger a kernel panic even in the presence of the OOM killer, if there is also I/O activity (disk/network): I/O paths often make kmalloc calls that panic on failure, and the OOM killer can take a few milliseconds to respond to an oversized process. The trick is to ensure that the kernel always has enough free memory that kmalloc is unlikely to fail.

This can be done by setting hard limits on individual process size using rlimits, i.e. RLIMIT_AS. Setting RLIMIT_AS to some value lower than the total memory available to the VM will cause qBittorrent's calls to malloc to fail, triggering whatever error handling qBittorrent has in place for that (best case, it would likely exit immediately). With a reasonable amount of elbow room (say 1 GB), the kernel should be able to clean up whatever I/O operations result without panicking.

With respect to the issue: how should qBittorrent handle approaching a cgroup limit? Refusing to add more torrents? Removing existing torrents? Exiting immediately? Ultimately, to do work, qBittorrent needs to allocate memory. It is possible to configure it to use less memory, but it is very hard to predict up front how much memory a given action will require. At some point qBittorrent would have to say "I'm too close to a cgroup limit — I refuse to do that," even though the action was probably safe. To reduce the expected behavior to absurdity, one solution is for qBittorrent to refuse to start in the presence of cgroup limits.
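As a concrete illustration of the RLIMIT_AS suggestion above, a sketch assuming a 3 GiB cap and the qbittorrent-nox binary (both arbitrary choices):

```sh
# Option A: shell built-in. ulimit -v sets RLIMIT_AS in KiB units.
ulimit -v 3145728            # 3 GiB = 3 * 1024 * 1024 KiB
qbittorrent-nox

# Option B: util-linux prlimit. --as takes bytes.
prlimit --as=3221225472 qbittorrent-nox
```

With either form, an allocation that would push the address space past the cap makes malloc return NULL inside qBittorrent, instead of driving the whole machine into reclaim.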
@CordySmith In Docker, cgroups v2 limits are set via the systemd driver, so they govern all processes within a systemd slice under one set of parameters. There is no kernel memory controller in cgroups v2, so it is not possible to limit kernel memory separately. It is possible to set memory.high in cgroups v2 without memory.max, which avoids the OOM killer: when usage goes over the high boundary, the processes are throttled and put under heavy reclaim pressure. That could be another possible solution, with memory.max set to a high value as a failsafe. Currently we're moving back to cgroups v1 and disabling the OOM killer, which will hopefully solve the issue until we or someone else can make a PR against Docker.
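For anyone who wants to try the memory.high route by hand, a rough sketch on the v2 unified hierarchy; the scope path is a placeholder pattern and the values are arbitrary:

```sh
# Locate the container's cgroup (the exact path varies by setup).
CG=/sys/fs/cgroup/system.slice/docker-CONTAINER_ID.scope   # placeholder

# Above 6 GiB, throttle and reclaim instead of OOM-killing...
echo 6G > "$CG/memory.high"
# ...but keep a hard ceiling as a failsafe.
echo 8G > "$CG/memory.max"
```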
OOM kill was disabled and the kernel panic still persists:
It looks like qBittorrent is triggering a bug in the kernel; I'm not sure whether this is ZFS-specific or not.
Still relevant?
I'm still having these issues. Yes, still a problem.
Do you have "first and last piece priority" enabled for these torrents?
Potentially, but through the Web UI I am unable to see that setting on completed torrents.
Bug report
Checklist
Description
qBittorrent info and operating system(s)
If on Linux, libtorrent-rasterbar and Qt versions
What is the problem
qBittorrent does not respect cgroup memory limits, resulting in the process constantly being OOM-killed.
Detailed steps to reproduce the problem
What is the expected behavior
qBittorrent should respect cgroup memory limits so that it is not stuck in an endless OOM-kill loop.
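Illustratively, "respecting the limit" would start with reading the effective ceiling for the process's own cgroup; a sketch of where those values live, assuming a standard /sys/fs/cgroup mount:

```sh
# cgroups v2 (unified hierarchy): the third field of /proc/self/cgroup
# is the cgroup path relative to the mount point.
cat "/sys/fs/cgroup$(cut -d: -f3 /proc/self/cgroup)/memory.max"

# cgroups v1: legacy memory controller.
cat /sys/fs/cgroup/memory/memory.limit_in_bytes
```

An application could then size its caches against that value rather than against total system RAM.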
Extra info (if any)
Kernel logs showing the issue:
Attachments