
Memory leak in opensnitch-ui #1030

Closed
GeekZJJ opened this issue Sep 1, 2023 · 15 comments

Comments

@GeekZJJ

GeekZJJ commented Sep 1, 2023

Describe the bug
Possible memory leak in opensnitch-ui. Memory usage is abnormal, rising to 1.2 GB after 8 days of uptime.

$ uptime
09:34:23 up 8 days, 29 min,  1 user,  load average: 1.08, 1.14, 1.12

$  cat /proc/3280/status
Name:	opensnitch-ui
Umask:	0002
State:	S (sleeping)
Tgid:	3280
Ngid:	0
Pid:	3280
PPid:	3272
TracerPid:	0
Uid:	1000	1000	1000	1000
Gid:	1000	1000	1000	1000
FDSize:	64
Groups:	4 5 24 27 30 46 122 135 136 137 138 141 144 1000 
NStgid:	3280
NSpid:	3280
NSpgid:	2517
NSsid:	2517
VmPeak:	 3235100 kB
VmSize:	 3235100 kB
VmLck:	       0 kB
VmPin:	       0 kB
VmHWM:	 1219384 kB
VmRSS:	 1219384 kB
RssAnon:	 1141112 kB
RssFile:	   70224 kB
RssShmem:	    8048 kB
VmData:	 1324328 kB
VmStk:	     140 kB
VmExe:	    2764 kB
VmLib:	  192776 kB
VmPTE:	    3036 kB
VmSwap:	       0 kB
HugetlbPages:	       0 kB
CoreDumping:	0
THP_enabled:	1
Threads:	20
SigQ:	1/95024
SigPnd:	0000000000000000
ShdPnd:	0000000000000000
SigBlk:	0000000000000000
SigIgn:	0000000001001000
SigCgt:	0000000100000000
CapInh:	0000000000000000
CapPrm:	0000000000000000
CapEff:	0000000000000000
CapBnd:	000001ffffffffff
CapAmb:	0000000000000000
NoNewPrivs:	0
Seccomp:	0
Seccomp_filters:	0
Speculation_Store_Bypass:	thread vulnerable
SpeculationIndirectBranch:	conditional enabled
Cpus_allowed:	f
Cpus_allowed_list:	0-3
Mems_allowed:	00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list:	0
voluntary_ctxt_switches:	2299744
nonvoluntary_ctxt_switches:	176235
  • OpenSnitch version: python3-opensnitch-ui: 1.6.0.1-1 opensnitch: 1.6.0.1-1
  • OS: Ubuntu
  • Version: 22.04.2 LTS
  • Window Manager: GNOME Shell
  • Kernel version: Linux Ubuntu 5.19.0-45-generic #46~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Jun 7 15:06:04 UTC 20 x86_64 x86_64 x86_64 GNU/Linux

To Reproduce
Install opensnitch and setup some custom rules.

Steps to reproduce the behavior:

  1. The system boots up, and the opensnitch-ui process starts automatically after user login.
  2. Wait and check the memory usage of opensnitch-ui.

Expected behavior (optional)
A reasonable memory usage after a long uptime.

@lainedfles
Contributor

@GeekZJJ Are you using the default in-memory database configuration? If yes, this may be expected behavior. Presently, due to technical limitations, the database cleaner function is disabled for this mode. See #857 & 9a75102.

Of course, memory usage also depends on your usage patterns and the type of network traffic events logged. The best solution may be to use a database file configuration.
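
For reference, a rough sketch of what the file-backed mode amounts to underneath (PyQt5, with an illustrative path and connection name; not OpenSnitch's actual code or config keys):

# Sketch only: file-backed SQLite instead of ":memory:". With a file, the
# periodic cleaner can purge old events, which keeps memory bounded.
from PyQt5.QtCore import QCoreApplication
from PyQt5.QtSql import QSqlDatabase

app = QCoreApplication([])  # Qt's SQL driver plugins expect an application instance
db = QSqlDatabase.addDatabase("QSQLITE", "opensnitch-ui")        # named connection
db.setDatabaseName("/home/user/.config/opensnitch/events.db")    # file path instead of ":memory:"
if not db.open():
    print("could not open DB:", db.lastError().text())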

@GeekZJJ
Author

GeekZJJ commented Sep 4, 2023

@lainedfles Got it. I tried putting the database file under a tmpfs mount point, and that seems to work fine. Thanks for developing such an amazing application.

@lainedfles
Contributor

@GeekZJJ That's a creative compromise. Nice!

Here is my shameless plug. I recently contributed changes that allow enabling SQLite Write-Ahead Logging (WAL), which aims to provide the performance of in-memory mode while keeping a high level of on-disk persistence. This option is available in 1.6.3+. My motivation was similar to yours: high memory usage over long uptimes, and constant disk writes when using file mode.
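
In case it helps to picture what WAL changes, a minimal sketch with Python's standard sqlite3 module (not OpenSnitch code; the file path is made up):

# Sketch only: enabling SQLite write-ahead logging on a file-backed database.
import sqlite3

con = sqlite3.connect("/tmp/example-events.db")  # hypothetical db file
con.execute("PRAGMA journal_mode=WAL")           # changes are appended to a WAL file; readers don't block writers
con.execute("PRAGMA synchronous=NORMAL")         # common pairing with WAL to reduce fsync pressure
con.execute("CREATE TABLE IF NOT EXISTS events (time TEXT, dst_host TEXT)")
con.execute("INSERT INTO events VALUES ('2023-09-04 09:32:52', 'example.org')")
con.commit()
con.close()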

I too love this project and feel gratitude to all contributors, especially the original author evilsocket and the present maintainer/collaborator gustavo-iniguez-goya (although I don't want to @ them simply for gratitude LoL) 🍷

@GeekZJJ
Author

GeekZJJ commented Sep 4, 2023

@lainedfles Nice! I will update to 1.6.3. 👍

@TriMoon

TriMoon commented Sep 4, 2023

An easier way to check memory usage is to run:

systemctl --user status app-opensnitch_ui@autostart.service
  • You can find the unit name by issuing:
    systemctl --user status | grep -B1 -m1 opensnitch-ui
    

Example run on my machine at the moment (opensnitch-ui 1.6.3-1):

● app-opensnitch_ui@autostart.service - OpenSnitch
     Loaded: loaded (<$HOME>/.config/autostart/opensnitch_ui.desktop; generated)
     Active: active (running) since Mon 2023-09-04 09:32:52 +03; 5h 10min ago
       Docs: man:systemd-xdg-autostart-generator(8)
   Main PID: 2067 (opensnitch-ui)
      Tasks: 26 (limit: 38145)
     Memory: 223.3M
        CPU: 52.204s
     CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/app-opensnitch_ui@autostart.service
             └─2067 /usr/bin/python3 /usr/bin/opensnitch-ui

Sep 04 14:30:14 kubuntu opensnitch-ui[2067]:         ~~~~~~~~~~~~~~~~~~~~~~^^^^^
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]: KeyError: 1
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]: ERROR:dbus.connection:Exception in handler for D-Bus signal:
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]: Traceback (most recent call last):
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]:   File "/usr/lib/python3/dist-packages/dbus/connection.py", line 218, in maybe_handle_message
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]:     self._handler(*args, **kwargs)
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]:   File "/usr/lib/python3/dist-packages/notify2.py", line 154, in _closed_callback
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]:     n = notifications_registry[nid]
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]:         ~~~~~~~~~~~~~~~~~~~~~~^^^^^
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]: KeyError: 1
  • (Yeah, I'm also wondering about those repeated log messages...) 🤷‍♀️

@gustavo-iniguez-goya
Collaborator

Hi @GeekZJJ ,

I'd say that this is the expected behaviour for in-memory databases. I can't find the link, but if I remember correctly, once you hit the DB size limit, old records are discarded.

I've been running the GUI for months (years even, with v1.4.x), and it has never exceeded ~300MB. Maybe, besides a mem leak of course, there's something that makes it allocate more RAM than expected.

How many rules do you have configured? Is there a lot of net traffic being logged?

@GeekZJJ
Author

GeekZJJ commented Sep 21, 2023

Hi @gustavo-iniguez-goya,
I have updated to 1.6.3, and as I mentioned above, I changed the database setting from in-memory to a file under a tmpfs mount point.
After that, the memory leak issue seems to be gone. The statistics at the bottom of opensnitch-ui show: "Connections 638885 Dropped 137793 Uptime 6 days, 9:02:33 Rules 106". Memory usage is about 78 MB, and the database file is about 74 MB.

@GeekZJJ
Author

GeekZJJ commented Sep 21, 2023

@TriMoon Hi, I think you may have the same issue as me. From what I can see, the process has only been running for about 5 hours but already consumes about 223 MB of memory. You could give my solution a try: change the database setting from memory to a file.

@TriMoon

TriMoon commented Sep 22, 2023

@GeekZJJ, no, it was just a comment to show an easier way to see the memory usage...

FYI, nothing much has changed; here is a fresh new one:

● app-opensnitch_ui@autostart.service - OpenSnitch
     Loaded: loaded (~/.config/autostart/opensnitch_ui.desktop; generated)
     Active: active (running) since Wed 2023-09-20 11:16:16 +03; 2 days ago
       Docs: man:systemd-xdg-autostart-generator(8)
   Main PID: 2289 (opensnitch-ui)
      Tasks: 27 (limit: 38144)
     Memory: 222.2M
        CPU: 12min 23.654s
     CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/app-opensnitch_ui@autostart.service
             └─2289 /usr/bin/python3 /usr/bin/opensnitch-ui

Sep 23 01:04:05 kubuntu opensnitch-ui[2289]:         ~~~~~~~~~~~~~~~~~~~~~~^^^^^
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]: KeyError: 2
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]: ERROR:dbus.connection:Exception in handler for D-Bus signal:
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]: Traceback (most recent call last):
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]:   File "/usr/lib/python3/dist-packages/dbus/connection.py", line 218, in maybe_handle_message
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]:     self._handler(*args, **kwargs)
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]:   File "/usr/lib/python3/dist-packages/notify2.py", line 154, in _closed_callback
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]:     n = notifications_registry[nid]
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]:         ~~~~~~~~~~~~~~~~~~~~~~^^^^^
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]: KeyError: 2

@GeekZJJ
Author

GeekZJJ commented Sep 23, 2023

@TriMoon Got it. 👍

@gustavo-iniguez-goya
Collaborator

Oops, sorry @GeekZJJ, lainedfles was absolutely right. The OP of issue #844 didn't report high mem consumption, but it seems to be the same case. Related change: 5b5e271

So this is a known problem, and the workaround for now is what you did: save events to disk.

I'm exploring a new way of deleting events.

@gustavo-iniguez-goya
Collaborator

Just an update about this: I haven't managed to delete old events from the in-memory db yet. We'll have to keep investigating how to delete old events from the in-memory db from a different thread.

@hbednar

hbednar commented Oct 13, 2023

@gustavo-iniguez-goya
For the moment it may be easier to just save the db to /dev/shm/, as I believe this directory is stored in memory.

This will hopefully reduce or fix the problem and give the devs time to work on a better solution at their own pace.

What Is /dev/shm
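
In practice that would just mean pointing the file-backed setup at a path under /dev/shm, roughly (illustrative sketch; path and connection name made up):

# Sketch only: the db file lives on the /dev/shm tmpfs, so it stays in RAM
# and is discarded on reboot, while the cleaner still works as in file mode.
from PyQt5.QtCore import QCoreApplication
from PyQt5.QtSql import QSqlDatabase

app = QCoreApplication([])
db = QSqlDatabase.addDatabase("QSQLITE", "opensnitch-ui")
db.setDatabaseName("/dev/shm/opensnitch-events.db")
db.open()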

@gustavo-iniguez-goya
Collaborator

gustavo-iniguez-goya commented Oct 19, 2023

Can anyone test this change?:

From:

        DB_IN_MEMORY   = ":memory:"
---
        self.db = QSqlDatabase.addDatabase("QSQLITE", self.db_name)
        self.db.setDatabaseName(self.db_file)
        if not self.db.open():

to:

        DB_IN_MEMORY   = "file::memory:"
---
        self.db = QSqlDatabase.addDatabase("QSQLITE", self.db_name)
        self.db.setDatabaseName(self.db_file)
        if dbtype == Database.DB_TYPE_MEMORY:                                   
            self.db.setConnectOptions("QSQLITE_OPEN_URI;QSQLITE_ENABLE_SHARED_CACHE")
        if not self.db.open():

https://github.com/evilsocket/opensnitch/blob/master/ui/opensnitch/database/__init__.py#L7-L10
https://github.com/evilsocket/opensnitch/blob/master/ui/opensnitch/database/__init__.py#L51-L53

And:

if self._cfg.getBool(Config.DEFAULT_DB_PURGE_OLDEST):
    self._start_db_cleaner()

to:

#if self._cfg.getBool(Config.DEFAULT_DB_PURGE_OLDEST):
self._start_db_cleaner()

Some time ago we changed how we used in-memory databases to allow deleting old events: 5b5e271

but we realized that it created a file (a regular sqlite db) on disk, so in the end it was the same as saving events to disk: 9a75102

As far as I can tell, with the changes posted above, it doesn't save events to disk and it deletes old events when using the in-memory db.
Tested on Mint 20.3 and Debian Sid; I'll monitor it a bit more.
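
To illustrate the mechanism with Python's standard sqlite3 module (not the project's code): a plain ":memory:" database is private to the connection that opened it, while a "file::memory:?cache=shared" URI lets several connections in the same process share one in-memory database, so a cleaner running on another connection can see and delete old rows:

# Sketch only: shared-cache in-memory database visible to two connections.
import sqlite3

uri = "file::memory:?cache=shared"
writer = sqlite3.connect(uri, uri=True)
cleaner = sqlite3.connect(uri, uri=True)   # second connection, same in-memory db

writer.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, dst TEXT)")
writer.execute("INSERT INTO events (dst) VALUES ('example.org')")
writer.commit()

cleaner.execute("DELETE FROM events WHERE id <= 1")  # with a plain ":memory:" db this table wouldn't be visible here
cleaner.commit()
print(writer.execute("SELECT COUNT(*) FROM events").fetchone()[0])  # -> 0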

@gustavo-iniguez-goya
Collaborator

I've finally added this change. Hopefully it'll solve this issue; otherwise, the solution will be to save events to a file on disk or to /dev/shm.

gustavo-iniguez-goya added a commit that referenced this issue Nov 6, 2023
Not deleting events from in-memory db can lead to a high mem usage under
certain scenarios.

A previous attempt to solve this issue wrote events to disk in a temporary
file (when using file::memory:?cache=shared).

Related issues: #844 #857

Closes: #1030
gustavo-iniguez-goya added a commit that referenced this issue Nov 6, 2023
Instead of :memory:, use file::memory:

Related: #1030
gustavo-iniguez-goya added a commit that referenced this issue Nov 6, 2023
Instead of :memory:, use file::memory:

Related: #1030