
memory leak on nginx reload #2381

Closed
amorozkin opened this issue Aug 7, 2020 · 41 comments
Assignees
Labels
3.x Related to ModSecurity version 3.x

Comments

@amorozkin

RAM usage constantly grows on nginx -s reload

Having modsecurity rules loaded (even with modsecurity off) causes RAM usage to grow with each nginx -s reload, and ultimately leaves nginx stuck with messages like:

Logs and dumps (/var/log/nginx/error.log)

Output of:
2020/08/06 20:00:20 [alert] 1962#1962: fork() failed while spawning "worker process" (12: Cannot allocate memory)
2020/08/06 20:00:20 [alert] 1962#1962: sendmsg() failed (9: Bad file descriptor)
2020/08/06 20:00:20 [alert] 1962#1962: fork() failed while spawning "worker process" (12: Cannot allocate memory)
2020/08/06 20:00:20 [alert] 1962#1962: sendmsg() failed (9: Bad file descriptor)
2020/08/06 20:00:20 [alert] 1962#1962: fork() failed while spawning "cache manager process" (12: Cannot allocate memory)
2020/08/06 20:00:20 [alert] 1962#1962: sendmsg() failed (9: Bad file descriptor)

To Reproduce

  1. Configure nginx to load rules:
    /etc/nginx/nginx.conf
http {
...
   modsecurity off;
   modsecurity_rules_file /etc/nginx/modsec/main.conf;
...
}
  2. Restart nginx and check that the rules were loaded (/var/log/nginx/error.log):
2020/08/06 08:57:13 [notice] 13627#13627: ModSecurity-nginx v1.0.1 (rules loaded inline/local/remote: 0/911/0)
  3. Generate load:
./nikto.pl -h https://your-site-name
  4. Run several 'nginx -s reload' commands (with a 3-4 minute interval) and check RAM consumption with the free -m command before and after each reload:
# free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         433        2122          30        1395        3136
Swap:          2043          49        1994

# nginx -s reload

# free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         451        2103          30        1395        3117
Swap:          2043          49        1994

# nginx -s reload

# free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         464        2083          30        1404        3104
Swap:          2043          49        1994

# nginx -s reload

# free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         481        2051          30        1417        3086
Swap:          2043          49        1994

.....

# free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         901        1534          30        1515        2666
Swap:  
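The reload-and-measure procedure above can be scripted. A minimal sketch, assuming a stock nginx install; the iteration count and interval are arbitrary choices, not from the report:

```shell
#!/bin/sh
# Reload nginx several times and record system memory after each reload.
for i in 1 2 3 4 5; do
    nginx -s reload
    sleep 180                                  # let the old workers exit
    # Column 3 of the "Mem:" row of `free -m` is used memory in MB
    used=$(free -m | awk '/^Mem:/ {print $3}')
    echo "after reload $i: used=${used} MB"
done
```

If "used" climbs monotonically across iterations while traffic is constant, the master process is retaining memory across reloads.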

Expected behavior

'RAM used' should not steadily grow; it should stay at around the same level, as it does without ModSecurity rules loaded (in which case used RAM stays at about 300 MB).

Server

  • ModSecurity v3 master - 51d06d7 with nginx-connector v1.0.1
  • WebServer: nginx-1.18.0
  • OS : Ubuntu 16.04

Rule Set:

Additional context

The same happens with modsecurity on in the server context.
We are using SecResponseBodyAccess Off in modsecurity.conf.

@albgen

albgen commented Aug 7, 2020

Same issue here. I have disabled modsec, but I doubt this will be fixed...

@zimmerle zimmerle self-assigned this Aug 7, 2020
@zimmerle zimmerle added the 3.x Related to ModSecurity version 3.x label Aug 7, 2020
@zimmerle
Contributor

zimmerle commented Aug 7, 2020

Hi,

We are doing a lot of work on rules reload at v3.1-experimental. Does this issue happen on v3.0.4, or only on v3/master?

@amorozkin
Author

Hi @zimmerle, you are right - I can't reproduce this with v3.0.4.

@hazardousmonk

hazardousmonk commented Oct 19, 2020

I'm still not able to resolve this. As nginx runs with ModSecurity + the CRS ruleset, memory usage grows and grows until nginx crashes. I've hacked around it by forcing nginx to restart as soon as it crashes, but that's a really bad fix. I tried to use the experimental version, however that didn't work out: the vhost on which ModSecurity is running failed with the following error: [alert] 7955#0: worker process 9835 exited on signal 11 (core dumped) ...but that's for another topic. Any idea how I can get nginx and ModSecurity to behave?

nginx version: nginx/1.18.0
built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)
ModSecurity-nginx v1.0.1
ModSecurity v3/master

@albgen

albgen commented Oct 19, 2020

> I'm still not able to resolve this. As nginx works with the ModSecurity + CRS ruleset, the memory size grows and grows until Nginx crashes. I've hacked around it by forcing nginx to restart as soon as it crashes, but that's a really bad fix. I tried to use the experimental version, however that didn't work out (the vhost on which ModSecurity was not working with the following error)[alert] 7955#0: worker process 9835 exited on signal 11 (core dumped) ...but thats for another topic. Any idea how I can get nginx and ModSecurity to behave?
>
> nginx version: nginx/1.18.0
> built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)
> ModSecurity-nginx v1.0.1
> ModSecurity v3/master

hi,
in my case, I think the issue was that I was loading the rules in each server directive, which means they were loaded as many times as there are server directives. Moving the loading of the rules into the main nginx.conf file fixed two issues: 1) the memory leak, which is the current issue, and 2) the slow start of nginx.
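For illustration, the difference being described is roughly the following (a sketch, not the poster's actual configuration; the rules-file path is an assumption):

```nginx
# Rules parsed once at http level and inherited by every virtual host:
http {
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;

    server { ... }   # vhost 1
    server { ... }   # vhost 2
}

# ...instead of parsed again for each server block:
http {
    server {
        modsecurity on;
        modsecurity_rules_file /etc/nginx/modsec/main.conf;   # parsed per vhost
    }
    server {
        modsecurity on;
        modsecurity_rules_file /etc/nginx/modsec/main.conf;   # parsed again
    }
}
```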

@hazardousmonk

> I'm still not able to resolve this. As nginx works with the ModSecurity + CRS ruleset, the memory size grows and grows until Nginx crashes. I've hacked around it by forcing nginx to restart as soon as it crashes, but that's a really bad fix. I tried to use the experimental version, however that didn't work out (the vhost on which ModSecurity was not working with the following error)[alert] 7955#0: worker process 9835 exited on signal 11 (core dumped) ...but thats for another topic. Any idea how I can get nginx and ModSecurity to behave?
> nginx version: nginx/1.18.0
> built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)
> ModSecurity-nginx v1.0.1
> ModSecurity v3/master
>
> hi,
> in my case, the issue, i think it was because i was loading the rules on each server directive which means many times as the number of server directives. Moving the loding of the rules on the main nginx.conf file fixed 2 issues: 1)The memory leak one which is the current issue and also 2)the slow start of nginx

Unfortunately each vhost is running a different application, and the ruleset is fine-tuned for each one, so loading the rules in nginx.conf isn't possible. Is 3.0.4 or v3/master the latest stable version?

@albgen

albgen commented Oct 20, 2020

oh i see. That's a big issue.
If i'm not wrong, i read some times ago that this issue with the loading of the rules is being fixed but it will take a lot of time...
btw i always use v3/master

@zimmerle
Contributor

> I'm still not able to resolve this. As nginx works with the ModSecurity + CRS ruleset, the memory size grows and grows until Nginx crashes. I've hacked around it by forcing nginx to restart as soon as it crashes, but that's a really bad fix. I tried to use the experimental version, however that didn't work out (the vhost on which ModSecurity was not working with the following error)[alert] 7955#0: worker process 9835 exited on signal 11 (core dumped) ...but thats for another topic. Any idea how I can get nginx and ModSecurity to behave?
> nginx version: nginx/1.18.0
> built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)
> ModSecurity-nginx v1.0.1
> ModSecurity v3/master
>
> hi,
> in my case, the issue, i think it was because i was loading the rules on each server directive which means many times as the number of server directives. Moving the loding of the rules on the main nginx.conf file fixed 2 issues: 1)The memory leak one which is the current issue and also 2)the slow start of nginx
>
> Unfortunately each vhost is running a different application, and the ruleset is fine tuned for each one. So loading the rules in the nginx.conf isn't something that's possible. Is the 3.0.4 or v3/master the latest stable version?

The last released version is 3.0.4.

The experimental branch is 3.1-experimental.

@zimmerle
Contributor

> oh i see. That's a big issue.
> If i'm not wrong, i read some times ago that this issue with the loading of the rules is being fixed but it will take a lot of time...
> btw i always use v3/master

Did you have a chance to test this particular issue on 3.1-experimental?

@albgen

albgen commented Oct 20, 2020

> oh i see. That's a big issue.
> If i'm not wrong, i read some times ago that this issue with the loading of the rules is being fixed but it will take a lot of time...
> btw i always use v3/master
>
> did you had the chance to test this particular issue in 3.1-experimental?

Nope. I can test it, but it will take some time, because the appearance of the memory leak bug is not very deterministic.

The slow start, on the other hand, can be tested. I will try tomorrow and let you know.

@zimmerle
Contributor

> oh i see. That's a big issue.
> If i'm not wrong, i read some times ago that this issue with the loading of the rules is being fixed but it will take a lot of time...
> btw i always use v3/master
>
> did you had the chance to test this particular issue in 3.1-experimental?
>
> Nope. I can test it but it will take same take because it is not very deterministic the appeareance of the memory leak bug.
>
> The slow start on the other hand can be tested. Will try tomorrow and let you know.

Thank you.

@albgen

albgen commented Oct 21, 2020

Hi @zimmerle,

I cannot compile that branch...

cd /root
rm -rdf /root/ModSecurity/
sudo git clone --depth 1 -b v3/dev/3.1-experimental --single-branch https://github.com/SpiderLabs/ModSecurity
cd ModSecurity
sudo git submodule init
sudo git submodule update
sudo ./build.sh
sudo ./configure --with-maxmind=no
sudo make   # the errors appear here

(screenshot of build errors)

@zimmerle
Contributor

Can you confirm the last commit hash in your branch?

@albgen

albgen commented Oct 21, 2020

The following is what git log says, so it seems to be the latest one:

[root@ReverseProxy ModSecurity]# git log
commit baf189938e9edf6e2f71142f69d58e021d49cac1
Author: Felipe Zimmerle felipe@zimmerle.org
Date: Mon Oct 19 12:33:59 2020 -0300

Having RuleWithActionsProperties()

[root@ReverseProxy ModSecurity]#

This is another nginx installation with ModSecurity, and the same issue occurs even here. I just tried it on another machine to be sure it is not an environment issue.

@zimmerle
Contributor

I am going to verify and get back to you. Thanks!

@hazardousmonk

I've tried everything, even boosting the RAM on my instance; however, it still crashes at times. Would using another operating system help? I have a similar setup running on Debian 10 (though with fewer vhosts) that has not yet crashed.

@zimmerle
Contributor

@hazardousmonk Have you had a chance to test v3/dev/3.1-experimental ?

@zimmerle
Contributor

zimmerle commented Dec 1, 2020

ping.

@albgen

albgen commented Dec 1, 2020

By the way, it does not compile again:
(screenshot of build errors)

git log:
(screenshot of git log output)

@tomasdeml

tomasdeml commented Dec 8, 2020

We are also experiencing a memory leak on the reload signal. It seems the leak started manifesting when we enabled SecAuditLog /dev/stdout in our rule set (however, we have not confirmed this yet). On reload, the memory of the master process consistently grows by about 6 MB.

The leak is definitely present in the latest v3/master branch (afefda5), but it seems that version v3.0.4 is fine. By bisecting the changes I identified commit 7a48245 as the culprit. I tried to track the leak down to a specific call stack with the memleak script (top 10 outstanding allocations after 60 s); however, it was not very helpful, even for debug builds with optimizations disabled (this may be a limitation of the script):

1153930 bytes in 16780 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
1268410 bytes in 18320 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
1536770 bytes in 22510 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
1914850 bytes in 30490 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
2177110 bytes in 28440 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
3771110 bytes in 54810 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
4531060 bytes in 62420 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
4897110 bytes in 72520 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
		[unknown]
5016560 bytes in 156793 allocations from stack
		calloc+0x35 [ld-musl-x86_64.so.1]
5531980 bytes in 31644 allocations from stack
		operator new(unsigned long)+0x15 [libstdc++.so.6.0.28]

I am going to run this through Valgrind and gdb to find out more soon. HTH.

@hazardousmonk

> @hazardousmonk Have you had a chance to test v3/dev/3.1-experimental ?

Sorry for the delayed response. I built it, however it crashed on launch; it was a production server down for maintenance, so I had to revert to safer territory. I will try this again and keep you posted, probably with nginx 1.19 and the dev version. I'm just curious how everyone else is dealing with this so far. I've hacked together a quick script to restart nginx the moment it sees those evil lines in the error log.
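Such a stopgap might look like the following (a hedged sketch; the log path, poll interval, restart command, and the exact lines matched are assumptions, since the author's actual script is not shown in the thread):

```shell
#!/bin/sh
# Watchdog: restart nginx when the error log shows fork() failures,
# which is the symptom reported above when memory is exhausted.
LOG=/var/log/nginx/error.log
PATTERN='fork() failed while spawning'

while sleep 60; do
    # Only inspect recent lines so one old alert does not trigger forever
    if tail -n 20 "$LOG" | grep -Fq "$PATTERN"; then
        echo "$(date): fork() failures in $LOG, restarting nginx" >&2
        systemctl restart nginx
    fi
done
```

Restarting (rather than reloading) replaces the master process, which is also workaround (a) suggested later in this thread.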

@zimmerle
Contributor

zimmerle commented Dec 9, 2020

Hi @hazardousmonk, thank you for testing it. You could have compiled it and just swapped the library file, or exported a different LD_LIBRARY_PATH together with the nginx .so addon.

At this phase of development I would prefer that you not have it running headless, especially if it is not starting properly. Do you happen to have the logs related to the event?

@hazardousmonk

I actually compiled it into nginx rather than building a dynamic module; I will try to load it as a dynamic module. I'm not quite sure how to run it in non-headless mode. If you could point me in the right direction, I'll be more than happy to help out on this and send you the full verbose logs.

I can also confirm that v3.0.4 is stable and does not have this condition.

@zimmerle
Contributor

As mentioned on #2376 we no longer have leaks on the reloads. That is valid for 3.1-experimental.

@Hello-Linux

Hello-Linux commented Jan 22, 2021

I have the same problem. How can I fix it? The impact of this problem is very serious. I hope the maintainers will treat this as urgent and patch version 3.0.4 as soon as possible.

@Hello-Linux

Is anyone there?

@zimmerle
Contributor

> I have the same problem. How can I fix it? The impact of this problem is very serious. I hope the maintainers will treat this as urgent and patch version 3.0.4 as soon as possible

(a) Try restarting instead of reloading; (b) experiment with the master branch and provide feedback; (c) experiment with 3.1 and provide feedback.

@zimmerle
Contributor

> Is anyone there?

?

@Hello-Linux

@zimmerle Has the memory leak been fixed on the master branch?

@vncloudsco

@zimmerle Hello admin. I used the branch you specified (3.1-experimental), but I noticed the problem still occurs. If you need an environment to test with, I'll give it to you. This issue is not fixed in 3.1-experimental. Thank you!

@Hello-Linux

@zimmerle Branch 3.1-experimental still often leads to a memory leak on nginx reload.

Describe the bug

We publish more frequently every day and often reload nginx; every few days nginx has a memory leak. How can we fix it?

Logs and dumps

Output of:

2021/01/21 11:04:08 [alert] 25264#25264: fork() failed while spawning "worker process" (12: Cannot allocate memory)
2021/01/21 11:04:08 [alert] 25264#25264: sendmsg() failed (9: Bad file descriptor)
2021/01/21 11:04:08 [alert] 25264#25264: sendmsg() failed (9: Bad file descriptor)
2021/01/21 11:04:08 [alert] 25264#25264: sendmsg() failed (9: Bad file descriptor)

@Hello-Linux

@zimmerle Reloading nginx often leads to a memory leak with ModSecurity branches 3.1-experimental and master. I hope the maintainers will fix this problem as soon as possible; it has a serious impact on production environments.

Also, why was this issue (#2381) closed without being resolved?!

I suggest that the maintainers run a test themselves: first create a lot of virtual hosts, each referencing the ModSecurity rules separately, then reload repeatedly; the memory leak should reproduce.

@Hello-Linux
Copy link

I feel that this kind of problem has been going on for a long time, and I hope the maintainers will find time to fix it. This is a fatal problem, and its priority should be the highest.

@GutHib

GutHib commented May 22, 2021

For the past six months, I've been waiting for 3.0.10 to become an official release so it comes down the regular update channels on my server (while patiently restarting a crashed nginx about every other week).

Should I still hold my breath, or go ahead and compile the experimental version?

@zimmerle
Contributor

@GutHib what version are you currently running? Did you ever try v3/master?

@GutHib

GutHib commented May 24, 2021

I am running 3.0.4 on a server with DirectAdmin installed. Like with most control panels, by default you get to install all stable releases that come down through the regular update channels. I'm aware that I can go ahead and slap 3.1 on my server manually, but I don't want to make a mess of things. (DirectAdmin has its own install paths, and I don't want to end up with two competing versions or break the regular updates.) I understand that my concerns may be unfounded, but weirder things have happened.

So basically I'm just trying to figure out my options. If manual installation is my only choice for the foreseeable future, I will give it a try. Thanks!

@GutHib

GutHib commented Jun 13, 2021

Anyone?

@kudrom

kudrom commented Jun 17, 2021

Hi there,
First of all, thanks a lot for the project; we use it to protect several resources in our company and we really appreciate the work you are doing, keep it up!
I've compiled the library on v3/master, v3/dev/3.1 and v3/dev/3.1-experimental, and I see memory leaks on v3/master and v3/dev/3.1 once I reload nginx. I don't see them on v3/dev/3.1-experimental, as expected, since the last commit there is the one with the fix. The size of the leaks is directly proportional to the number of rules you have loaded.
We would use the experimental branch; however, we are also using the OWASP CRS, and its rules depend on ctl:forceRequestBodyVariable, which is not supported on the experimental branch. So I was wondering how much effort it would take to port the fix from the experimental branch to a more up-to-date branch like master or dev/3.1.
Kind regards

@mmelo-yottaa

mmelo-yottaa commented Sep 27, 2021

I'm not seeing the 'nginx -s reload' memory leak fixed using the v3/dev/3.1-experimental branch, checking either /proc/<nginx-pid>/smaps directly or using valgrind. I am running nginx as a single process in the foreground:

master_process off;
daemon off;
worker_processes 1;

The memory leak is in ModSecurity code called from the parser; the two worst offenders are:

nginx version 1.21.1
modsec nginx-connector version 1.0.2
modsecurity version v3/dev/3.1-experimental

==16116== 1,200,064 bytes in 41,681 blocks are possibly lost in loss record 1,114 of 1,115

==16116== at 0x4837B65: calloc (vg_replace_malloc.c:752)

==16116== by 0x4C25E58: acmp_add_pattern (acmp.cc:517)

==16116== by 0x4C17C27: modsecurity::operators::PmFromFile::init(std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::__cxx11::basic_string<char, std::char_traits, std::allocator >*) (pm_from_file.cc:71)

==16116== by 0x4B034E6: yy::seclang_parser::parse() (seclang-parser.yy:889)

==16116== by 0x4B7B68A: modsecurity::Parser::Driver::parse(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) (driver.cc:210)

==16116== by 0x4B7B8C1: modsecurity::Parser::Driver::parseFile(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) (driver.cc:254)

==16116== by 0x4BA4973: modsecurity::RulesSet::loadFromUri(char const*) (rules_set.cc:55)

==16116== by 0x4BA6F1A: msc_rules_add_file (rules_set.cc:368)

==16116== by 0x283F21: ngx_conf_set_rules_file (ngx_http_modsecurity_module.c:363)

==16116== by 0x159693: ngx_conf_handler (ngx_conf_file.c:463)

==16116== by 0x1591A0: ngx_conf_parse (ngx_conf_file.c:319)

==16116== by 0x197EC3: ngx_http_core_server (ngx_http_core_module.c:2892)

==16116==
==16116== 4,668,272 bytes in 41,681 blocks are possibly lost in loss record 1,115 of 1,115
==16116== at 0x4837B65: calloc (vg_replace_malloc.c:752)

==16116== by 0x4C25E10: acmp_add_pattern (acmp.cc:512)

==16116== by 0x4C17C27: modsecurity::operators::PmFromFile::init(std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >, std::__cxx11::basic_string<char, std::char_traits, std::allocator >*) (pm_from_file.cc:71)

==16116== by 0x4B034E6: yy::seclang_parser::parse() (seclang-parser.yy:889)

==16116== by 0x4B7B68A: modsecurity::Parser::Driver::parse(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) (driver.cc:210)

==16116== by 0x4B7B8C1: modsecurity::Parser::Driver::parseFile(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) (driver.cc:254)

==16116== by 0x4BA4973: modsecurity::RulesSet::loadFromUri(char const*) (rules_set.cc:55)

==16116== by 0x4BA6F1A: msc_rules_add_file (rules_set.cc:368)

==16116== by 0x283F21: ngx_conf_set_rules_file (ngx_http_modsecurity_module.c:363)

==16116== by 0x159693: ngx_conf_handler (ngx_conf_file.c:463)

==16116== by 0x1591A0: ngx_conf_parse (ngx_conf_file.c:319)

==16116== by 0x197EC3: ngx_http_core_server (ngx_http_core_module.c:2892)

nginx version 1.21.1
modsec nginx-connector version 1.0.2
modsecurity version v3.0.4

==25494== 1,200,064 bytes in 41,681 blocks are possibly lost in loss record 785 of 786

==25494== at 0x4837B65: calloc (vg_replace_malloc.c:752)

==25494== by 0x4B63038: acmp_add_pattern (acmp.cc:517)

==25494== by 0x4B55FCB: modsecurity::operators::PmFromFile::init(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator >*) (pm_from_file.cc:71)

==25494== by 0x4A6E267: yy::seclang_parser::parse() (seclang-parser.yy:871)

==25494== by 0x4AD3162: modsecurity::Parser::Driver::parse(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) (driver.cc:154)

==25494== by 0x4AD3393: modsecurity::Parser::Driver::parseFile(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) (driver.cc:185)

==25494== by 0x4AF5117: modsecurity::Rules::loadFromUri(char const*) (rules.cc:98)

==25494== by 0x4AF720C: msc_rules_add_file (rules.cc:370)

==25494== by 0x283F21: ngx_conf_set_rules_file (ngx_http_modsecurity_module.c:363)

==25494== by 0x159693: ngx_conf_handler (ngx_conf_file.c:463)

==25494== by 0x1591A0: ngx_conf_parse (ngx_conf_file.c:319)

==25494== by 0x197EC3: ngx_http_core_server (ngx_http_core_module.c:2892)

==25494==
==25494== 4,668,272 bytes in 41,681 blocks are possibly lost in loss record 786 of 786

==25494== at 0x4837B65: calloc (vg_replace_malloc.c:752)

==25494== by 0x4B62FF0: acmp_add_pattern (acmp.cc:512)

==25494== by 0x4B55FCB: modsecurity::operators::PmFromFile::init(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator >*) (pm_from_file.cc:71)

==25494== by 0x4A6E267: yy::seclang_parser::parse() (seclang-parser.yy:871)

==25494== by 0x4AD3162: modsecurity::Parser::Driver::parse(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) (driver.cc:154)

==25494== by 0x4AD3393: modsecurity::Parser::Driver::parseFile(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&) (driver.cc:185)

==25494== by 0x4AF5117: modsecurity::Rules::loadFromUri(char const*) (rules.cc:98)

==25494== by 0x4AF720C: msc_rules_add_file (rules.cc:370)

==25494== by 0x283F21: ngx_conf_set_rules_file (ngx_http_modsecurity_module.c:363)

==25494== by 0x159693: ngx_conf_handler (ngx_conf_file.c:463)

==25494== by 0x1591A0: ngx_conf_parse (ngx_conf_file.c:319)

==25494== by 0x197EC3: ngx_http_core_server (ngx_http_core_module.c:2892)

The total reported RSS change (which is almost entirely USS) is:

modsecurity version v3/dev/3.1-experimental (about 20MB for v3.3.0 rules, several rules removed that had load errors)
before
PID TID CLS RTPRIO STAT VSZ RSS COMMAND
16789 16789 TS - S+ 92460 41020 nginx

after reload
PID TID CLS RTPRIO STAT VSZ RSS COMMAND
16789 16789 TS - S+ 112916 61216 nginx

modsecurity version v3.0.4 (about 17MB for stock v3.3.0 rules)
before
PID TID CLS RTPRIO STAT VSZ RSS COMMAND
26010 26010 TS - S+ 86460 36564 nginx-static

after
PID TID CLS RTPRIO STAT VSZ RSS COMMAND
26010 26010 TS - S+ 103620 53476 nginx

I do agree that the amount of leaked memory is (at least partly) a function of the size of the ruleset being reloaded.

Happy to provide more data and/or test different builds.

Thanks.

@scaarup

scaarup commented Nov 9, 2021

Do we open a new issue or reopen this one?

@martinhsv
Contributor

Hi @scaarup,

There is still an open issue for this general problem: #2552.

Unless there are very distinctive cases, I expect having 2552 open is sufficient. If there is indeed a distinctive case that needs to be considered separately, it might make more sense to open a new issue rather than re-open this old one.
