memory leak on nginx reload #2381
Same issue here. I have disabled modsec, but I doubt this will be fixed...
Hi, we are working a lot on rules reload at v3.1-experimental. Does this issue happen on v3.0.4, or only in v3/master?
Hi zimmerle, you are right - I can't reproduce this with v3.0.4.
I'm still not able to resolve this. As nginx runs with ModSecurity + the CRS ruleset, the memory size grows and grows until nginx crashes. I've hacked around it by forcing nginx to restart as soon as it crashes, but that's a really bad fix. I tried to use the experimental version, however that didn't work out (the vhost on which ModSecurity is running was failing with the following error). nginx version: nginx/1.18.0
Hi,
Unfortunately each vhost is running a different application, and the ruleset is fine-tuned for each one, so loading the rules in nginx.conf isn't possible. Is 3.0.4 or v3/master the latest stable version?
Oh, I see. That's a big issue.
The last released version is 3.0.4. The experimental branch is 3.1-experimental.
Did you have the chance to test this particular issue in 3.1-experimental?
Nope. I can test it, but it will take some time because the appearance of the memory leak bug is not very deterministic. The slow start, on the other hand, can be tested. Will try tomorrow and let you know.
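Since the leak only shows up under repeated reloads, a simple stress loop can make it reproducible on demand. A minimal sketch (the iteration count and the 2-second settle time are arbitrary assumptions, not from this thread):

```sh
#!/bin/sh
# Hammer `nginx -s reload` and sample the combined nginx RSS after each
# reload; a steadily climbing total suggests the leak described above.
for i in $(seq 1 100); do
    nginx -s reload
    sleep 2  # give old workers time to exit and new ones to spawn
    ps -C nginx -o rss= |
        awk -v i="$i" '{s += $1} END {print "reload", i, "total RSS:", s, "kB"}'
done
```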
Thank you.
Hi zimmerle, I cannot compile that branch... cd /root
Can you confirm the last commit hash in your branch?
The following is what git log says, so it seems to be the last one.
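For anyone following along, the exact commit can be confirmed with either of these standard git invocations:

```sh
git rev-parse HEAD           # full hash of the checked-out commit
git log -1 --format='%H %s'  # hash plus subject line, for extra context
```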
This is another nginx installation with ModSecurity, and the same issue occurs even here. I just tried on another machine to be sure that it is not an environment issue.
I am going to verify and get back to you. Thanks.
I've tried everything, even boosting the RAM on my instance; however, it is still crashing at times. Would using another operating system help? I have a similar setup running on Debian 10 (not with as many vhosts, though) that has not yet crashed.
@hazardousmonk Have you had a chance to test v3/dev/3.1-experimental?
ping.
We are also experiencing a memory leak on the reload signal. It seems that the leak started manifesting when we enabled [...]. The leak is definitely present in the latest [...]
I am going to run this through Valgrind and gdb to find out more soon. HTH.
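For reference, a typical valgrind setup for this kind of parser-time leak might look like the sketch below; the nginx binary path is illustrative, and `daemon off; master_process off;` keeps everything in one foreground process so valgrind sees the reload path:

```sh
valgrind --leak-check=full --show-leak-kinds=definite,possible \
         --num-callers=20 \
         /usr/sbin/nginx -g 'daemon off; master_process off;'
# From another terminal, trigger the suspect code path a few times:
#   nginx -s reload
# then stop nginx (Ctrl-C) and inspect valgrind's leak records.
```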
Sorry for the delayed response. I built it, however it crashed on launch - it was a production server down for maintenance, so I had to revert back to safer territory. I will try this again and keep you posted, probably with nginx 1.19 and the dev version. I'm just curious how everyone is dealing with this so far? I've hacked a quick script to restart nginx the moment it sees those evil lines in the error log.
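A watchdog along those lines might look like the following sketch (the log path and restart command are assumptions; this is a stopgap, not a fix):

```sh
#!/bin/sh
# Tail the error log and bounce nginx when allocation failures appear.
tail -F /var/log/nginx/error.log | while read -r line; do
    case "$line" in
        *"Cannot allocate memory"*|*"fork() failed"*)
            systemctl restart nginx
            ;;
    esac
done
```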
Hi @hazardousmonk, thank you for testing it. You could compile it and just swap the library file, or export a different LD_LIBRARY_PATH together with the nginx .so addon. At this phase of the development I would prefer that you don't run it headless, especially if it is not starting properly. Do you happen to have the logs related to the event?
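In concrete terms, the swap described above could look roughly like this, assuming the connector is built as a dynamic module and the experimental libmodsecurity was installed into its own prefix (all paths illustrative):

```sh
# Point the dynamic loader at the experimental build instead of the system
# libmodsecurity, then run nginx in the foreground to watch it start up.
export LD_LIBRARY_PATH=/opt/modsecurity-3.1/lib:$LD_LIBRARY_PATH
nginx -g 'daemon off;'
# Verify which library the module actually resolved:
ldd /etc/nginx/modules/ngx_http_modsecurity_module.so | grep -i modsecurity
```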
I actually compiled it into nginx rather than building a dynamic module - I will try to load it as a dynamic module. I'm not quite sure how to run it in non-headless mode; if you could point me in the right direction, I'll be more than happy to help out on this and send you the full verbose logs. I can also confirm that v3.0.4 is stable and does not have this condition.
As mentioned on #2376, we no longer have leaks on the reloads. That is valid for 3.1-experimental.
I have the same problem. How can I fix it? The impact of this problem is very serious. I hope the maintainers will pay close attention to this and fix version 3.0.4 as soon as possible.
Is anyone there?
(a) Try a restart instead of a reload; (b) experiment with the master branch and provide feedback; (c) experiment with 3.1 and provide feedback.
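For option (a), the practical difference is that a full restart tears down the old master process, returning any parse-time leaks to the OS, whereas a reload keeps the master alive and lets them accumulate:

```sh
nginx -s reload          # leaks accumulate in the surviving master process
systemctl restart nginx  # or `service nginx restart`: fresh master, memory reclaimed
```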
?
@zimmerle Has the memory leak been fixed in the master branch?
@zimmerle Hello admin. I used the branch you specified (3.1-experimental), but I noticed the problem still occurs. If you need an environment to test, I'll give it to you. This issue is not fixed in 3.1-experimental. Thank you!
@zimmerle It often leads to a memory leak on nginx reload with branch 3.1-experimental.
Describe the bug
We publish more frequently every day and often reload nginx; every few days nginx has a memory leak. How can we fix it?
Logs and dumps
Output of:
@zimmerle It often leads to a memory leak on nginx reload using ModSecurity branch 3.1-experimental and master. I hope the maintainers will fix this problem as soon as possible; it has a serious impact on the production environment. Also, why was issue #2381 closed without being resolved?! I suggest that you run a test yourselves: create a lot of virtual hosts, have each virtual host reference the ModSecurity rules separately, and then reload constantly; the memory leak should reproduce. We publish more frequently every day and often reload nginx; every few days nginx has a memory leak. How can we fix it?
Logs and dumps
I feel that this kind of problem has been going on for a long time, and I hope the maintainers will take the time to fix it. This is a fatal problem, and the priority should be the highest.
For the past six months, I've been waiting for 3.0.10 to become an official release so it comes down the regular update channels on my server (while patiently restarting a crashed nginx about every other week). Should I still hold my breath, or go ahead and compile the experimental version?
@GutHib What version are you currently running? Did you ever try v3/master?
I am running 3.0.4 on a server with DirectAdmin installed. Like with most control panels, by default you only get to install stable releases that come down through the regular update channels. I'm aware that I can go ahead and slap 3.1 on my server manually, but I don't want to make a mess of things. (DirectAdmin has its own install paths, and I don't want to end up with two competing versions or break the regular updates.) I understand that my concerns may be unfounded, but weirder things have happened. So basically I'm just trying to figure out my options. If manual installation is my only choice for the foreseeable future, I will give it a try. Thanks!
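One way to trial a newer libmodsecurity without disturbing a panel-managed 3.0.4 install is to build into a private prefix, roughly as in this sketch (branch name from this thread; the prefix is an assumption):

```sh
git clone --depth 1 --branch v3/dev/3.1-experimental \
    https://github.com/SpiderLabs/ModSecurity
cd ModSecurity
git submodule update --init            # pulls bundled dependencies
./build.sh
./configure --prefix=/opt/modsecurity-3.1
make -j"$(nproc)"
make install                           # nothing lands in the panel's paths
```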
Anyone?
Hi there,
I'm not seeing the `nginx -s reload` memory leak fixed using the v3/dev/3.1-experimental branch, checking either /proc/nginx-pid/smaps directly or using valgrind. I am running nginx as a single process in the foreground (`master_process off;`). The memory leak is in ModSecurity code called from the parser; the two worst offenders are:

nginx version 1.21.1:

```
==16116== 1,200,064 bytes in 41,681 blocks are possibly lost in loss record 1,114 of 1,115
==16116==    at 0x4837B65: calloc (vg_replace_malloc.c:752)
==16116==    by 0x4C25E58: acmp_add_pattern (acmp.cc:517)
==16116==    by 0x4C17C27: modsecurity::operators::PmFromFile::init(std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*) (pm_from_file.cc:71)
==16116==    by 0x4B034E6: yy::seclang_parser::parse() (seclang-parser.yy:889)
==16116==    by 0x4B7B68A: modsecurity::Parser::Driver::parse(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) (driver.cc:210)
==16116==    by 0x4B7B8C1: modsecurity::Parser::Driver::parseFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) (driver.cc:254)
==16116==    by 0x4BA4973: modsecurity::RulesSet::loadFromUri(char const*) (rules_set.cc:55)
==16116==    by 0x4BA6F1A: msc_rules_add_file (rules_set.cc:368)
==16116==    by 0x283F21: ngx_conf_set_rules_file (ngx_http_modsecurity_module.c:363)
==16116==    by 0x159693: ngx_conf_handler (ngx_conf_file.c:463)
==16116==    by 0x1591A0: ngx_conf_parse (ngx_conf_file.c:319)
==16116==    by 0x197EC3: ngx_http_core_server (ngx_http_core_module.c:2892)
==16116==
==16116==    by 0x4C25E10: acmp_add_pattern (acmp.cc:512)
==16116==    by 0x4C17C27: modsecurity::operators::PmFromFile::init(std::shared_ptr<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*) (pm_from_file.cc:71)
==16116==    by 0x4B034E6: yy::seclang_parser::parse() (seclang-parser.yy:889)
==16116==    by 0x4B7B68A: modsecurity::Parser::Driver::parse(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) (driver.cc:210)
==16116==    by 0x4B7B8C1: modsecurity::Parser::Driver::parseFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) (driver.cc:254)
==16116==    by 0x4BA4973: modsecurity::RulesSet::loadFromUri(char const*) (rules_set.cc:55)
==16116==    by 0x4BA6F1A: msc_rules_add_file (rules_set.cc:368)
==16116==    by 0x283F21: ngx_conf_set_rules_file (ngx_http_modsecurity_module.c:363)
==16116==    by 0x159693: ngx_conf_handler (ngx_conf_file.c:463)
==16116==    by 0x1591A0: ngx_conf_parse (ngx_conf_file.c:319)
==16116==    by 0x197EC3: ngx_http_core_server (ngx_http_core_module.c:2892)
```

nginx version 1.21.1:

```
==25494== 1,200,064 bytes in 41,681 blocks are possibly lost in loss record 785 of 786
==25494==    at 0x4837B65: calloc (vg_replace_malloc.c:752)
==25494==    by 0x4B63038: acmp_add_pattern (acmp.cc:517)
==25494==    by 0x4B55FCB: modsecurity::operators::PmFromFile::init(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*) (pm_from_file.cc:71)
==25494==    by 0x4A6E267: yy::seclang_parser::parse() (seclang-parser.yy:871)
==25494==    by 0x4AD3162: modsecurity::Parser::Driver::parse(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) (driver.cc:154)
==25494==    by 0x4AD3393: modsecurity::Parser::Driver::parseFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) (driver.cc:185)
==25494==    by 0x4AF5117: modsecurity::Rules::loadFromUri(char const*) (rules.cc:98)
==25494==    by 0x4AF720C: msc_rules_add_file (rules.cc:370)
==25494==    by 0x283F21: ngx_conf_set_rules_file (ngx_http_modsecurity_module.c:363)
==25494==    by 0x159693: ngx_conf_handler (ngx_conf_file.c:463)
==25494==    by 0x1591A0: ngx_conf_parse (ngx_conf_file.c:319)
==25494==    by 0x197EC3: ngx_http_core_server (ngx_http_core_module.c:2892)
==25494==
==25494==    at 0x4837B65: calloc (vg_replace_malloc.c:752)
==25494==    by 0x4B62FF0: acmp_add_pattern (acmp.cc:512)
==25494==    by 0x4B55FCB: modsecurity::operators::PmFromFile::init(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*) (pm_from_file.cc:71)
==25494==    by 0x4A6E267: yy::seclang_parser::parse() (seclang-parser.yy:871)
==25494==    by 0x4AD3162: modsecurity::Parser::Driver::parse(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) (driver.cc:154)
==25494==    by 0x4AD3393: modsecurity::Parser::Driver::parseFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) (driver.cc:185)
==25494==    by 0x4AF5117: modsecurity::Rules::loadFromUri(char const*) (rules.cc:98)
==25494==    by 0x4AF720C: msc_rules_add_file (rules.cc:370)
==25494==    by 0x283F21: ngx_conf_set_rules_file (ngx_http_modsecurity_module.c:363)
==25494==    by 0x159693: ngx_conf_handler (ngx_conf_file.c:463)
==25494==    by 0x1591A0: ngx_conf_parse (ngx_conf_file.c:319)
==25494==    by 0x197EC3: ngx_http_core_server (ngx_http_core_module.c:2892)
```

Total reported RSS change (which is almost completely USS) is:

- modsecurity version v3/dev/3.1-experimental (about 20 MB for v3.3.0 rules, several rules removed that had load errors) after reload
- modsecurity version v3.0.4 (about 17 MB for stock v3.3.0 rules) after

I do agree the amount of leaked memory is (at least partly) a function of the size of the ruleset being reloaded. Happy to provide more data and/or test different builds. Thanks.
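A sketch of the smaps-based check described above, assuming the master's pid is in /run/nginx.pid (the pid-file path is an assumption):

```sh
#!/bin/sh
pid=$(cat /run/nginx.pid)
for i in $(seq 1 20); do
    nginx -s reload
    sleep 2
    # Sum the resident set across all mappings of the surviving master.
    rss=$(awk '/^Rss:/ {s += $2} END {print s}' "/proc/$pid/smaps")
    echo "reload $i: master Rss ${rss} kB"
done
```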
Do we open a new issue or reopen this one?
Hi @scaarup, there is still an open issue for this general problem: #2552. Unless there are very distinctive cases, I expect having 2552 open is sufficient. If there is indeed a distinctive case that needs to be considered separately, it might make more sense to open a new issue rather than re-open this old one.
RAM usage constantly grows on nginx -s reload
Having ModSecurity rules loaded (even with `modsecurity off`) causes RAM usage to grow with each `nginx -s reload` and ultimately causes nginx to get stuck with messages like:
Logs and dumps (/var/log/nginx/error.log)
Output of:
```
2020/08/06 20:00:20 [alert] 1962#1962: fork() failed while spawning "worker process" (12: Cannot allocate memory)
2020/08/06 20:00:20 [alert] 1962#1962: sendmsg() failed (9: Bad file descriptor)
2020/08/06 20:00:20 [alert] 1962#1962: fork() failed while spawning "worker process" (12: Cannot allocate memory)
2020/08/06 20:00:20 [alert] 1962#1962: sendmsg() failed (9: Bad file descriptor)
2020/08/06 20:00:20 [alert] 1962#1962: fork() failed while spawning "cache manager process" (12: Cannot allocate memory)
2020/08/06 20:00:20 [alert] 1962#1962: sendmsg() failed (9: Bad file descriptor)
```
To Reproduce
/etc/nginx/nginx.conf
Expected behavior
'RAM used' should not steadily grow; it should stay around the same level as it does, for example, without ModSecurity rules loaded (in which case 'RAM used' stays at about 300 MB).
Server
Rule Set:
Additional context
The same happens with `modsecurity on` in the server context.
Using SecResponseBodyAccess Off in modsecurity.conf