Bug: using USB switch causes a memory spiral #1490
Comments
It's almost certainly in this file, within the kbdin-related functions. I don't spot any obvious infinite loops, and I only use Linux in VMs these days, so I'll need help with reproduction and investigation. It might help to add some extra log lines and compile kanata yourself.
Also, running kanata with trace-level logging enabled may show more detail.
I was able to get it to crash again with trace logging enabled.
Hm, the trace logs unfortunately don't show any smoking gun to my eyes. Valgrind might be of help. Ensure you have debug symbols; the binaries downloaded from GitHub releases are stripped, so they don't have them. Then run kanata under valgrind. Sample command:
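(A sketch, assuming valgrind's default memcheck tool and kanata's --cfg option; adjust paths and privileges to your setup.)

```sh
# Run kanata under valgrind's memcheck with full leak checking so the final
# report attributes memory to allocation sites. kanata typically needs root
# (or input/uinput group membership) to grab the keyboard devices.
sudo valgrind --leak-check=full --track-origins=yes \
    ./kanata --cfg /path/to/config.kbd
```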
I'm not sure valgrind will output useful logs when the process is terminated with SIGKILL, though. If SIGKILL causes problems, you'd need to find a way to send a different signal when the issue reproduces and memory starts climbing, e.g. https://unix.stackexchange.com/questions/172559/receive-signal-before-process-is-being-killed-by-oom-killer-cgroups.
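(A cruder alternative to the cgroup notification approach, offered here only as a sketch: poll the process's resident memory and send SIGTERM yourself once it starts climbing. The script name and threshold below are arbitrary.)

```sh
#!/bin/sh
# Watch a PID's VmRSS and send SIGTERM once it crosses a threshold, so the
# process (and valgrind) can shut down cleanly before the OOM killer's SIGKILL.
# Usage: ./watch-rss.sh <pid>    (watch-rss.sh is a hypothetical name)
PID=$1
THRESHOLD_KB=$((4 * 1024 * 1024))   # 4 GiB; tune to your machine
while kill -0 "$PID" 2>/dev/null; do
    RSS_KB=$(awk '/VmRSS/ {print $2}' "/proc/$PID/status")
    if [ "${RSS_KB:-0}" -gt "$THRESHOLD_KB" ]; then
        kill -TERM "$PID"
        break
    fi
    sleep 1
done
```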
I read the Stack Exchange post you linked, but I'm not really a kernel expert and got lost reading all of that documentation. I did capture an OOM SIGKILL with valgrind, so I figured I'd share that first to see if it is helpful before continuing down the rabbit hole of trying to send another signal: valgrind-sigkill.log
As I guessed, there's no useful output; there's no memcheck output at the end.
I was able to get a crash again and send it a SIGTERM before the OOM SIGKILL, so hopefully this log is more helpful: valgrind.log
Indeed, there is useful info; the memory is mostly in a single allocation. Unfortunately, you didn't use a build with debug symbols. I wonder if the addresses can be mapped to symbols retroactively...
Is there a build I could use with debug symbols, or a guide on how to build one myself?
The command …
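(A sketch of a common cargo recipe, not necessarily the exact command given here: keep the release profile's performance and just turn on debug info for it.)

```sh
# Build kanata from a checkout of the repository with debug symbols included.
# CARGO_PROFILE_RELEASE_DEBUG=true enables debug info for the release profile
# without editing Cargo.toml; a plain `cargo build` (dev profile) also carries
# symbols but runs noticeably slower.
CARGO_PROFILE_RELEASE_DEBUG=true cargo build --release
```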
Here's another log with debug symbols now: valgrind.log
Nice! This is helpful.
Looks like this call needs to have a limiter, e.g.
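(A minimal sketch of the idea, using made-up names — drain_events, InputEvent, MAX_PENDING_EVENTS — rather than the actual kanata code or the specific call referenced above: cap how far the pending-event buffer can grow while draining a device, and drop or otherwise handle the excess instead of allocating without bound.)

```rust
struct InputEvent; // stand-in for the real evdev event type

const MAX_PENDING_EVENTS: usize = 10_000;

/// Drain events from a source into `buf`, but never let `buf` grow past the cap.
fn drain_events<F>(mut read_event: F, buf: &mut Vec<InputEvent>)
where
    F: FnMut() -> Option<InputEvent>,
{
    while let Some(ev) = read_event() {
        if buf.len() >= MAX_PENDING_EVENTS {
            // Stop growing: drop the event instead of allocating forever.
            eprintln!("event buffer full; dropping event");
            continue;
        }
        buf.push(ev);
    }
}

fn main() {
    // Simulate a device that keeps producing events long past what we want to buffer.
    let mut produced = 0usize;
    let mut buf = Vec::new();
    drain_events(
        || {
            produced += 1;
            if produced <= 50_000 { Some(InputEvent) } else { None }
        },
        &mut buf,
    );
    assert!(buf.len() <= MAX_PENDING_EVENTS);
    println!("buffered {} of {} events produced", buf.len(), produced);
}
```

Whatever the real call turns out to be, the point is the same: bound the growth and drop or batch the excess rather than allocating indefinitely.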
At your convenience, if you can test that the branch with the fix resolves the issue for you, that would be much appreciated.
I haven't been able to cause another crash since switching to that branch, so I think your commit has fixed my issue! Once the PR is merged, what does the release cadence look like? I'd like to go back to a stable release as soon as possible instead of continuing to use a local debug build.
Requirements
Describe the bug
I have my keyboard and primary mouse connected to a USB switch so that I can quickly switch them between my desktop and my work laptop (a secondary mouse always stays connected to the desktop). Sometimes, when switching from one computer to the other (roughly 1 in 10 times), Kanata goes into a memory spiral until it consumes so much memory (~12 GB of RAM and ~25 GB of swap) that my OS kills it with the message "Device memory is nearly full. An application was using a lot of memory and was forced to stop".
This has happened both when connecting and disconnecting the devices.
Relevant kanata config
config.kbd
To Reproduce
Expected behavior
I expect to be able to connect and/or disconnect devices as many times as I'd like without Kanata consuming all available memory.
Kanata version
kanata 1.7.0
Debug logs
journalctl -u kanata.service outputs nothing. systemctl does provide some info (note the mem peak):
I looked around for other log files and couldn't find any, but I'm happy to provide more logs if someone can point me to them.
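(For reference, these figures come from systemd's resource accounting; a sketch of how to read them back, assuming the unit is named kanata.service and a reasonably new systemd for the peak value:)

```sh
# Show the unit's status; newer systemd prints the Memory: line with a peak figure.
systemctl status kanata.service

# Query the accounting properties directly (MemoryPeak requires a recent systemd).
systemctl show kanata.service -p MemoryCurrent -p MemoryPeak
```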
Operating system
Linux (EndeavourOS, kernel 6.12.8-arch1-1)
Additional context
No response