Memory jump in high-load scenario with nsc/nse release/v1.11.2+ #703
Comments
My guess is that the problem is in the diff in the nsc from v1.11.2 to main. Most likely, monitor streams are not being closed. Currently, I am working on a quick reproduction; I will inform you when we have more information.
Would that explain why the memory graph flattens after restarting?
Hi @denis-tingaikin, edit: I think I might have found a possible source of the resource leak. In our NSC, the streams returned by MonitorConnections linger on, as they are using the main context... After running a test with fixed context handling for the streaming RPC, the steady increase in memory usage disappeared. The NSC used in Meridio: https://github.com/Nordix/Meridio/blob/master/cmd/proxy/main.go#L243 On a plain Kind cluster I also noticed the differences Szilard reported (by leaving the cluster intact without any traffic). The nsmgrs with the most memory hosted more (custom) NSEs (in both cases below, 1 nsmgr had 3 NSEs, 1 nsmgr had 1 NSE, and 2 nsmgrs ran without NSEs).
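For illustration, here is a minimal Go sketch of the kind of fix described above: deriving a cancellable per-stream context for the MonitorConnections streaming RPC instead of reusing the main context, so the stream is released when the consumer returns. The helper name and the empty selector are hypothetical; only the networkservicemesh API client calls are from the real package.

```go
package monitor

import (
	"context"

	"github.com/networkservicemesh/api/pkg/api/networkservice"
	"google.golang.org/grpc"
)

// watchConnections is a hypothetical helper illustrating the fix: the stream
// context is derived from the caller's context and cancelled on return, so
// the MonitorConnections stream does not linger for the lifetime of the
// main context.
func watchConnections(ctx context.Context, cc grpc.ClientConnInterface) error {
	streamCtx, cancel := context.WithCancel(ctx)
	defer cancel() // tears down the stream when this function returns

	client := networkservice.NewMonitorConnectionClient(cc)
	stream, err := client.MonitorConnections(streamCtx, &networkservice.MonitorScopeSelector{})
	if err != nil {
		return err
	}
	for {
		event, err := stream.Recv()
		if err != nil {
			return err // includes cancellation of streamCtx
		}
		_ = event // process the connection event here
	}
}
```

With this pattern, cancelling the caller's context (or simply returning) ends the stream on both sides instead of keeping it open until the main context dies.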
@szvincze Could we test
Note: Focus on nsmgr mem/fd consumption. I expect that it's not leaking.
We just ran a 6-hour test and focused on file descriptors. It looks very similar to our previous tests. There are a bit more FDs for
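As an aside, a minimal sketch of how per-process FD counts can be sampled on Linux for this kind of test. The helper is hypothetical; it simply counts the entries under /proc/&lt;pid&gt;/fd:

```go
package fdstats

import (
	"fmt"
	"os"
)

// fdCount returns the number of open file descriptors of a process by
// listing /proc/<pid>/fd (Linux only).
func fdCount(pid int) (int, error) {
	entries, err := os.ReadDir(fmt.Sprintf("/proc/%d/fd", pid))
	if err != nil {
		return 0, err
	}
	return len(entries), nil
}
```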
It looks good; it seems we will already be able to get rid of the workarounds in the next releases. Let's keep this ticket open until the next RC.
Sometimes the memory consumption increases uncontrollably in the nsmgr.
Reproduced with release v1.14.0-rc.2