usage of memory and disk grow rapidly #660
@RoomCat Is the increase from 450M to 3G you're seeing related to memory or disk? The memory footprint should also level off, and I expect that to happen sooner than disk usage. This might be a memory leak. If so, it should be fairly easy to reproduce in this scenario. So we have all the variables, could you also please tell us the version of Dkron you're using, and on which platform?
I can second that! The memory footprint of Dkron seems to be quite high: we see a memory requirement of 600MiB for three jobs, one executing every minute and two executing every 30 minutes. I really do not want to come off as ungrateful, and I am grateful for Dkron, but I'm also a little worried that this will get worse as we scale up, put it into production, and rely on it on a daily basis. Are there any benchmarks or something like that?
I've been looking into the memory usage. The 550MB+ memory footprint when Dkron starts is largely because BadgerDB allocates some memory to work with (~83MB) and fires up a cache (Ristretto under the hood) that reserves 384MB right off the bat. I haven't looked yet at whether Ristretto can be configured to require less memory, nor whether that would be prudent. I did find some interesting behavior when a job gets deleted: Badger frees and then reallocates memory, which does get GC'ed but not released to the OS (at least not immediately). This causes the process to jump up ~83MB in memory use every time a job is deleted, until the GC decides to release the memory to the OS. With respect to the http executor increasing memory gradually: I haven't looked at that specifically yet, but I am wondering whether the runtime might be too busy handling tasks, causing the GC to not get a chance to free memory and/or release it to the OS.
Do you have some metrics where we can observe the http executor behaviour? Yes, Badger can consume some memory; it has never been a problem for me, but we can experiment with tuning its memory use. @davidgengenbach I ran several tests of memory usage against the number of job executions. Though I did not formalize them, I found no leakage with job counts on the order of >1000. Currently there are >200 jobs here http://test.dkron.io:8080/dashboard/ with memory usage ~2.88GB, stable for the last 25 days, though not using the http executor.
I was using 2.0.0-rc7 in one of my setups (in very low memory conditions), and two days ago I upgraded to 2.0.4 and started running out of memory. Thanks to the Docker releases, it took a minute to pinpoint the release: rc7 was the last one with a low memory footprint.
A quick related question: is the Badger v2 upgrade valuable?
The usage of memory and disk grows rapidly when I run HTTP tasks
I deployed three Dkron servers and added 10 HTTP tasks scheduled @every 1s. Everything is OK at the beginning, but memory and disk usage keep increasing, from 450M to 3G in a few hours. It seems the GC of executions does not work?
Or am I using it in the wrong way?
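For reference, a job like the ones described can be created through Dkron's REST API. This is a sketch only: the `executor_config` keys below (`method`, `url`, `expectCode`) are assumptions based on the http executor's documented options, and the host and target URL are placeholders.

```shell
curl -X POST http://localhost:8080/v1/jobs \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "http-check",
    "schedule": "@every 1s",
    "executor": "http",
    "executor_config": {
      "method": "GET",
      "url": "http://example.com/health",
      "expectCode": "200"
    }
  }'
```

With 10 such jobs firing every second, each run stores an execution record, so the execution GC (or lack of it) dominates disk growth.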