Why nlohmann does not release memory #1924
Comments
This is strange, as the library does release all allocated memory in the destructor, and we have no issues with Valgrind and ASAN. I need to double check your code. |
Hi nlohmann, it is even stranger if I say that everything is OK on Windows. So I am waiting for your response. |
This is indeed strange. It could be due to the allocator not releasing memory and caching it for later use. More info here and here. I couldn't release the memory with this test either:

void TestMemAllocation()
{
    std::vector<std::vector<int>> all_v{};
    for (int i = 0; i < 100; i++)
    {
        std::vector<int> v;
        v.resize(5000000);          // 5,000,000 ints: ~20 MB per vector
        all_v.emplace_back(v);      // note: copies v, doubling the allocations
    }
}                                   // everything is freed here, yet RSS may stay high |
This is a memory paging policy. |
No idea or solution? |
It is strange. I have tried to reproduce your problem on both Windows and Linux, but everything is OK and there are no issues with Valgrind and ASAN. I agree with @nickaein, and will try to test and verify. |
I cannot reproduce your example. In my tests, all allocated memory is released. |
Did you test my example? How do you check that memory is released? If you test with Valgrind or ASAN or ... it's OK, but in a monitoring system you can see the memory is not released. We call the parser continually in a big project and the memory usage grows more and more. We checked all other modules and libraries; they were OK. |
It’s Xcode. |
I tested the example on Ubuntu. Please check it on Ubuntu. |
I still believe this is an optimization by the allocator (probably glibc). After a closer examination, I see that calling malloc_trim does release the memory:
# without calling malloc_trim
$ cat /proc/$(pidof without-malloc_trim.out)/status | grep Vm
VmPeak: 571516 kB
VmSize: 388364 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 528560 kB
VmRSS: 385600 kB
VmData: 382576 kB
VmStk: 136 kB
VmExe: 84 kB
VmLib: 3380 kB
VmPTE: 800 kB
VmSwap: 0 kB
# with calling malloc_trim
$ cat /proc/$(pidof with-malloc_trim.out)/status | grep Vm
VmPeak: 571516 kB
VmSize: 370860 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 528632 kB
VmRSS: 3436 kB
VmData: 365072 kB
VmStk: 136 kB
VmExe: 84 kB
VmLib: 3380 kB
VmPTE: 764 kB
VmSwap: 0 kB |
I have reproduced it successfully on Ubuntu.
There is indeed a problem, but this is an optimization by the allocator (probably glibc in your case) and unrelated to the library, as @nickaein said. If you add a malloc_trim call,
you will find everything will be OK. Other test: anyway, the problem is unrelated to the library. |
I highly doubt this behavior can cause any problems in most applications, except maybe, and only maybe, in very memory-constrained situations like an embedded system with very small physical memory. Note that the memory held by this optimization isn't going to grow infinitely. You can verify that by calling ... Nevertheless, if this is really problematic in your case, these are the workarounds I can think of:
|
Great suggestions @nickaein. I will analyse your options and select the best solution. |
Unfortunately, I have tested the library - |
We have two problems:
1- Why does nlohmann use huge memory to parse data?
2- After calling the parser locally in a function like the code below, it does not release the memory. My JSON data size is about 8 MB and the parser uses more than 50 MB for parsing. I parsed this JSON data 10 times and memory usage went up to 600 MB; after the function finished, the memory was not released.
Our platform is Ubuntu 18.04, CMake 3.15.3, g++ 7.4.0.