Bisq using up too much memory on Linux even after closing #3918
Comments
I think this is due to one or several JavaFX bugs related to JDK-8188094. During profiling, I noticed that the JavaFX Application Thread had been allocated 66.6 GiB. I have not found a solution. I'm running Ubuntu 18 & 19 (VM), using an Nvidia GPU.
What I think is happening on my Linux machine is that the JVM thinks it has 128 GB of RAM to work with. My OpenJDK 10 and 11 VMs have a very large default -XX:MaxRAM setting of 128 GB. It looks like, because of this (I need to be careful with that word), there are too many minor page faults, and libc malloc()s are creating a lot of extra memory segments via mmap() syscalls instead of using heap memory for allocations via brk() calls. During tracing sessions, I saw mmap() calls, but no brk() calls. The MaxRAM setting is described in the OpenJDK source.
You can check your own VM's default MaxRAM setting:
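For example, something like this prints the defaults (flag names can differ slightly between JDK builds):

```
# Print the JVM's default MaxRAM value without starting an application
java -XX:+PrintFlagsFinal -version | grep -i maxram
```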
I can set MaxRAM=2GiB in JAVA_OPTS
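For example (a sketch; MaxRAM is normally given in bytes, and the size suffix here is an assumption about the JDK accepting it):

```
# Cap how much RAM the JVM believes the machine has (2 GiB here)
export JAVA_OPTS="-XX:MaxRAM=2g"
```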
and Bisq's RES (see htop) stays below ~1.2 GB, instead of growing to ~4.8 GB. (But I haven't let it run for days...) After libc malloc()s expand the extra virtual memory via mmap() syscalls, the app causes MMU violations when it tries to use the allocated memory range(s) and finds no mapping for an address or addresses. I saw MMU errors in GC logs last night, when using -XX:+ExtendedDTraceProbes plus GC logging, but only today realized there may be a connection to my VM's default MaxRAM=128GB setting. I had assumed it was due to the extra overhead from using -XX:+DebugNonSafepoints -XX:+PreserveFramePointer, and especially -XX:+ExtendedDTraceProbes. I believe these MMU violations in turn result in page faults, and cause consumption of extra pages of physical memory -- the RES, or resident set size, you can see in htop. Using $ pmap -x $(pgrep -f BisqAppMain) | more ... I find a memory-mapped file with an RSS of ~3.8 GB.
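A sketch of one way to spot the largest mappings with pmap (this pipeline is illustrative, not the exact command from my session):

```
# List Bisq's memory mappings and sort by resident size (RSS is column 3 of `pmap -x`)
pmap -x $(pgrep -f BisqAppMain) | sort -n -k3 | tail
```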
I don't know what this anonymous mapped file is, but setting the VM's MaxRAM to 2 GB in JAVA_OPTS reduces this mmapped file's size to ~0.5 GB. Corresponding to this huge 3.8 GB memory-mapped file is htop's RES value of ~4.8 GB, and VIRT of ~10.4 GB (when running Bisq with no MaxRAM option). Here are measurements after starting Bisq with an empty data.dir, using MaxRAM=2GB, and letting it run for 30 minutes (not touching the GUI).
About 7-8 hours after starting Bisq with a 3 GB MaxRAM setting, I looked at the page fault counts again.
I unset JAVA_OPTS, restarted Bisq (with an empty data.dir), and checked the minor/major page fault counts after an uptime of ~2.75 hrs.
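One way to read those counters (a sketch, not the exact command I used):

```
# Minor (min_flt) and major (maj_flt) page fault counts for the Bisq process
ps -o pid,min_flt,maj_flt,etime,comm -p $(pgrep -f BisqAppMain)
```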
The Bisq instance running with the default 128 GB MaxRAM setting accumulated almost double the number of minor page faults in less than half the time of the instance running with a 3 GB MaxRAM setting. Attached below are two page-fault flame graphs showing very large differences in the number of faults while tracing page_fault_user events for 120s, begun 30s after Bisq was started with an empty data.dir.
I had to zip them up to upload them here because you can't upload SVG files to comments, and the flamegraph tool does not produce PNG files. page-fault-flame-graphs.zip Open them in a browser, hover your mouse over the boxes, and drill down to the levels representing "do_page_fault" event counts for libjvm.so and Java method code paths. To return to the original view, click "Reset Zoom" at the top left of the page.
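For reference, graphs like these are typically produced with perf plus Brendan Gregg's FlameGraph scripts; the exact command line below is an assumption, not the one used for the attached graphs:

```
# Trace user-space page faults system-wide for 120 seconds, keeping call stacks
sudo perf record -e exceptions:page_fault_user -a -g -- sleep 120
# Fold the stacks and render an SVG flame graph (scripts from the FlameGraph repo)
sudo perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > page-fault-flame-graph.svg
```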
There is a glibc setting on Linux that can be used to define a hard limit on the number of arenas -- the thread-safe memory pools used by malloc() -- that can be created. The trade-off for using this is more thread contention among Bisq's allocating threads versus less memory usage.
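This presumably refers to glibc's MALLOC_ARENA_MAX environment variable; the value below is illustrative rather than the one tested:

```
# Hard limit on the number of malloc arenas glibc may create for this process
export MALLOC_ARENA_MAX=1
```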
Using this on my Ubuntu 18 laptop reduces RES memory by ~300 MB, and VIRT by ~3.7 GB. In addition, configuring the JVM's starting and max heap size reduces both RES and VIRT by another ~200 MB.
Used together (see the sketch below), RES memory is reduced by ~500 MB, and VIRT memory by ~3.9 GB. I will test these on VMs next (Arch Linux, Debian, Fedora). There are further settings, any of which disable dynamic adjustment of glibc's mmap threshold, that save another ~100 MB in RES and VIRT, but I haven't experimented enough to suggest anyone else try them yet.
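A sketch of how the settings might be combined; the heap sizes and arena limit shown are assumptions, not the exact values measured above:

```
# Combined sketch: limit malloc arenas and cap the JVM's RAM view and heap
export MALLOC_ARENA_MAX=1
export JAVA_OPTS="-XX:MaxRAM=2g -Xms512m -Xmx1g"
```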
By the way, I am looking for leaks too...
On my Ubuntu 18 hosted VMs (Arch Linux with kernel 5.5.8, Fedora 31, and Debian 10.3), the same settings reduced resident memory use by ~400 MB, and virtual memory by ~1.9 GB. I assume the overall smaller memory footprint on the VMs is because they don't have access to the GPU.
Tracing Linux glibc memory_arena_new calls (a sketch of one way to do this follows below) shows Bisq malloc() calls creating 20 new arenas between startup (with an empty data.dir) and the time the BlockParser is done. When I started Bisq after setting the arena limit described above, the same trace shows Bisq does not create any new arenas during the same interval; it is using existing arenas.
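glibc ships SystemTap/USDT probes, including memory_arena_new, so one possible way to watch arena creation is with bpftrace; the libc path and the process-name filter below are assumptions and will vary by distro and setup:

```
# Fire whenever a java process (Bisq runs on the JVM, usually named "java") creates a new malloc arena
sudo bpftrace -e 'usdt:/lib/x86_64-linux-gnu/libc.so.6:libc:memory_arena_new
  /comm == "java"/ { printf("pid %d created a new arena\n", pid); }'
```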
So, all in all, it seems that our memory issues on Linux are mainly caused by the JVM, its implementation and (default) configuration? What do you suggest as an action plan? Get the malloc stuff into our code? Do more tests?
To sum up, I agree the biggest problem on Linux is off-heap memory growth. Your adding the MaxRAM JVM option reduced resident memory by more than 50%. The next experiment should be adding these settings (sketched below) to the bisq-desktop and bisq-daemon startup scripts; this requires no changes to the binaries.
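A sketch of what those scripts might export, assuming the 820 MB figure below refers to the maximum Java heap; the exact flags are illustrative, not a quote from the scripts:

```
# Candidate environment for the bisq-desktop / bisq-daemon startup scripts
export MALLOC_ARENA_MAX=1
export JAVA_OPTS="-XX:MaxRAM=2g -Xmx820m"
```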
820 MB should be sufficient, and I think Linux Bisq users would be happy to see a ~1.0 GB resident memory size after starting Bisq with an empty data dir. (When we started investigating, Bisq's resident memory size was growing to ~4.8 GB. So the problem is not solved, but this is a significant improvement.) Regarding the Java heap: I am certain JavaFX 11 has leaks, and their dev team has been fixing them in a version we cannot use yet. We can't do much about that for now. There are probably other on-heap leaks, but I have not been able to identify them with certainty.
Update the Gradle dependency to JavaFX 14. This brings to Bisq the latest JavaFX fixes and improvements, especially in the areas of UI performance, memory management and security. JavaFX can be upgraded independently of the JDK used to build the application, so this change is modular and does not affect other parts of the build process. Related / likely related to: bisq-network#350 bisq-network#2135 bisq-network#2509 bisq-network#3128 bisq-network#3307 bisq-network#3308 bisq-network#3343 bisq-network#3430 bisq-network#3657 bisq-network#3677 bisq-network#3683 bisq-network#3686 bisq-network#3786 bisq-network#3787 bisq-network#3892 bisq-network#3917 bisq-network#3918 bisq-network#3936
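After such an upgrade, the resolved JavaFX version can be checked from Gradle's dependency report (the :desktop subproject name here is an assumption):

```
# Show which org.openjfx artifacts the desktop module resolves at runtime
./gradlew :desktop:dependencies --configuration runtimeClasspath | grep -i openjfx
```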
Should be fixed in the most recent release (v1.6.4), which brings several UI performance improvements and generally reduces system resource consumption (especially RAM). Please try it out and let us know if it's still an issue for you.
Description
Bisq takes more and more memory until the system freezes when none remains.
Additionally, if Bisq is shut down before all memory is exhausted, not all of it is released. About 4-7 GB are shown to be still occupied, but no process seems to hold it. Logging out and in again does not fix it; a restart does.
Version
1.2.5
Steps to reproduce
Unknown; for me it happens just by starting the application.
Expected behaviour
I expect Bisq to take maybe 3 GB of RAM and release it after closing the application.
Actual behaviour
More and more RAM is occupied until none is left and the system starts freezing. Sometimes Bisq gets terminated by the system; other times I just force a reboot.
Device or machine
Ubuntu 18.04.3
Ryzen 5 2600X
G.Skill Aegis DIMM Kit 16GB
Additional info
The log contains a case where, after a reboot, I started Bisq and got lucky. The application got terminated after the RAM was full (I reacted too slowly and could no longer move the mouse). About 5.5 GB of RAM were shown to still be occupied after the termination.
bisq.log