Investigate potential memory leak #522
Found the cause. Every time a user requests a summary for a relative time interval (such as "today" or "last 7 days"), a new summary is computed and cached, so the cache only ever grows. That is, the more often you hit refresh on the dashboard page, the faster the memory fills up. This way, you could technically bring Wakapi down 😉. Options to fix this include: […]
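To illustrate the growth pattern, here is a minimal, self-contained Go sketch. The cache layout, key scheme, and function names (`summaryCache`, `cacheKey`, `getSummaryLast24h`) are assumptions made up for illustration, not Wakapi's actual code; the point is only that a relative interval resolves to different absolute timestamps on every request, so each refresh creates a cache entry that will never be read again:

```go
package main

import (
	"fmt"
	"time"
)

// summaryCache stands in for an in-memory summary cache keyed by the
// resolved absolute interval of a request.
var summaryCache = map[string][]byte{}

// cacheKey resolves a requested interval to an absolute key. For a relative
// interval ("last 24 hours"), the resolved timestamps differ on every
// request, so every refresh produces a brand-new key.
func cacheKey(from, to time.Time) string {
	return fmt.Sprintf("%d-%d", from.Unix(), to.Unix())
}

// getSummaryLast24h simulates one dashboard refresh for a relative interval.
func getSummaryLast24h() []byte {
	to := time.Now()
	from := to.Add(-24 * time.Hour)
	key := cacheKey(from, to)
	if s, ok := summaryCache[key]; ok {
		return s // practically never hit for relative intervals
	}
	s := computeSummary(from, to)
	summaryCache[key] = s // the cache grows by one entry per request
	return s
}

func computeSummary(from, to time.Time) []byte {
	return make([]byte, 1<<20) // pretend every summary weighs about 1 MB
}

func main() {
	for i := 0; i < 5; i++ {
		getSummaryLast24h()
		time.Sleep(time.Second) // spread requests out like real refreshes
	}
	fmt.Println("cache entries:", len(summaryCache)) // 5 entries, none reused
}
```

Under these assumptions, every refresh leaves another ~1 MB entry behind that is never evicted, which matches the "hit refresh to fill up memory" behaviour described above.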
Cool side-effect of this investigation: I learned a lot about profiling in Go and added a profiler web endpoint to Wakapi.
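For reference, the standard way to expose such an endpoint in Go is the `net/http/pprof` package; the sketch below shows the general pattern (the port and wiring are placeholders, not necessarily how Wakapi does it):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Expose the profiler on a separate, non-public port. Heap, goroutine and
	// CPU profiles can then be pulled from the running process with
	// `go tool pprof`, which is how leaks like this one can be narrowed down.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the rest of the application would run here ...
	select {}
}
```

A heap snapshot of the running process can then be inspected with `go tool pprof http://localhost:6060/debug/pprof/heap`.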
As expected, the pattern can still be observed after my latest commit. However, the delta is a lot smaller now (~170 MB as opposed to ~450 MB before). We could get around this completely by disabling the cache (as discussed above), but I don't think it hurts much the way it is now.
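For context, one way such growth could be kept bounded without disabling caching entirely is a TTL-based eviction scheme. The sketch below is a generic illustration under that assumption, not the change from the commit referenced above:

```go
package cache

import (
	"sync"
	"time"
)

// entry holds a cached value together with its creation time.
type entry struct {
	value   []byte
	created time.Time
}

// TTLCache is a minimal time-bounded cache: a background janitor drops
// entries older than the configured TTL, which caps how much stale summary
// data can pile up between cleanups.
type TTLCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[string]entry
}

// New starts the janitor goroutine and returns a ready-to-use cache.
func New(ttl, cleanupEvery time.Duration) *TTLCache {
	c := &TTLCache{ttl: ttl, entries: map[string]entry{}}
	go func() {
		ticker := time.NewTicker(cleanupEvery)
		for range ticker.C {
			c.mu.Lock()
			for k, e := range c.entries {
				if time.Since(e.created) > c.ttl {
					delete(c.entries, k)
				}
			}
			c.mu.Unlock()
		}
	}()
	return c
}

// Get returns a value if it exists and has not expired yet.
func (c *TTLCache) Get(key string) ([]byte, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.entries[key]
	if !ok || time.Since(e.created) > c.ttl {
		return nil, false
	}
	return e.value, true
}

// Set stores a value under the given key with the current timestamp.
func (c *TTLCache) Set(key string, value []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[key] = entry{value: value, created: time.Now()}
}
```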
Reopening this, as Wakapi is still getting OOM-killed every now and then. It currently runs with […].
The plot shows memory usage (actual, not virtual) of the `wakapi` process on wakapi.dev over 7 days, with a Prometheus scrape interval of 5 minutes. We can see how memory accumulates during the day and suddenly drops precisely at 02:20 am (or at some point within the 5 minutes before that, given the scrape interval). The default for `WAKAPI_AGGREGATION_TIME` is 02:15 am. To check whether it's related, I changed the aggregation time to 01:45 am on Sep 19, and we can observe the drop happening around 01:50 am now. So apparently running the daily aggregation job causes memory to be freed. Let's investigate why that is, and why memory accumulates during the day in the first place.
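To make the working hypothesis concrete, here is a small self-contained Go sketch. The map standing in for a cache and the explicit `debug.FreeOSMemory()` call are illustrative assumptions, not Wakapi code; it only demonstrates that dropping references from a long-lived structure and letting the GC run produces exactly the kind of sudden drop in resident memory visible in the plot:

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

func printHeap(label string) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("%-26s heap in use: %d MB\n", label, m.HeapInuse>>20)
}

func main() {
	// A long-lived in-memory structure standing in for whatever accumulates
	// inside Wakapi during the day (e.g. cached summaries).
	cache := map[string][]byte{}
	for i := 0; i < 200; i++ {
		cache[fmt.Sprint(i)] = make([]byte, 1<<20) // ~1 MB per entry
	}
	printHeap("after a day of requests")

	// What the nightly job would (hypothetically) do as a side effect:
	// drop the references so the garbage collector can reclaim the memory.
	for k := range cache {
		delete(cache, k)
	}
	debug.FreeOSMemory() // force a GC and return freed pages to the OS
	printHeap("after the aggregation run")
}
```

Whether the real aggregation job frees memory this way, or the drop has another cause entirely, is what remains to be investigated here.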