Memory leak in Chrome on auto-refreshing dashboard #14653

Closed
marius-dr opened this issue Oct 28, 2017 · 8 comments
Labels
bug (Fixes for quality problems that affect the customer experience)

Comments

@marius-dr (Member)

Kibana version: 6.0.0-rc2

Elasticsearch version: 6.0.0-rc2

Server OS version: Windows 10

Browser version: Chrome 61, Chrome 62

Browser OS version: Windows

Original install method (e.g. download page, yum, from source, etc.): BC1 build

Description of the problem including expected versus actual behavior:
There seems to be a memory leak on an auto-refreshing dashboard in Kibana.
The settings I used were:

  • Metricbeat processes dashboard
  • Dark theme
  • Full screen mode and F11 in the browser (tested a second time without this and still saw the leak, so it might not be related)
  • Auto-refresh on 5 seconds
  • Time picker: Last 15 minutes

The first time, Chrome ran out of memory and crashed the tab automatically after about 8 hours, starting from roughly 7 GB of free RAM at the beginning of the test.
Initial memory usage for Chrome with only this tab open starts at about 250 MB. After 1 hour it had climbed all the way to 892 MB; Chrome's Task Manager reports that as JavaScript memory only.
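
For anyone reproducing this, one way to put numbers on the growth is to sample the JS heap from the DevTools console while the dashboard auto-refreshes. This is only a sketch: it relies on Chrome's non-standard `performance.memory` API, and the values are coarse unless Chrome is launched with `--enable-precise-memory-info`.

```js
// Sketch: log JS-heap usage every 30 seconds while the dashboard is left open.
// performance.memory is Chrome-only and non-standard; numbers are quantized
// unless Chrome is started with --enable-precise-memory-info.
const samples = [];
setInterval(() => {
  const usedMB = performance.memory.usedJSHeapSize / (1024 * 1024);
  samples.push({ time: new Date().toISOString(), usedMB: Math.round(usedMB) });
  console.log(samples[samples.length - 1]);
}, 30 * 1000);
```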

You can see the evolution of Chrome memory usage in the Metricbeat "Memory usage per process" visualization itself.
[screenshot: mem_leak_chrome]
Chrome is the dark green line. The test started at about 2 AM, and at about 11:30 AM the tab crashed on the test machine.

On Firefox, after about the same time (16 hours now since the start of the test), the tab is at about 384 MB of RAM, broken down like this:
[screenshot: firefox_mem]

I will update this issue as I get more information and run more tests. Next on the list is Chrome on Ubuntu 16.04.

cc: @epixa @LeeDr @Rasroh @bhavyarm

@marius-dr added the Team:Operations, bug, :Sharing, and Feature:Visualizations labels and removed the Team:Operations label on Oct 28, 2017
@stacey-gammon (Contributor)

I believe this has been a long-standing issue (#7427) that I fixed for 6.1 only (#13871). I was a bit nervous about backporting the courier changes required for the fix to 6.0.

@stacey-gammon (Contributor)

... although I am curious why Firefox didn't exhibit the same issue. Could you run this on the 6.x branch to see if it has the same crash?

@marius-dr (Member, Author)

I'll give it a go on 6.x today and see if the memory increase rate is similar. After setting the refresh rate to 1 minute instead of 5 seconds, the rate of RAM increase has declined significantly, down to about 1 GB per 24 hours from 8 GB per 8 hours.

@marius-dr (Member, Author)

It looks fine on 6.1.0-SNAPSHOT, so it seems to be the same issue that you fixed there. Same dashboard, same settings, and it's been hovering around 170-200 MB for the past hour with no noticeable increase. With RC2 it went from 200 MB to 800 MB in one hour. @stacey-gammon

@marius-dr removed the Feature:Visualizations label on Oct 30, 2017
@stacey-gammon (Contributor)

sweet!

@chrisronline (Contributor)

We have a request to backport this to 5.5 by @smassarik.

@stacey-gammon (Contributor)

Unfortunately there are a ton of merge conflicts when attempting to backport to 5.x because of a big visualization refactoring done in 6.x. Since there is a workaround (hard-refresh the page, which I know isn't ideal) and I haven't heard any other requests to backport to 5.x, I don't think we can prioritize this at the moment.

The leak is triggered by the auto-refresh: the longer you make that interval, the longer it will take for the memory leak to cause an issue.
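
For anyone stuck on 5.x, the hard-refresh workaround can be automated. The following is only an illustrative sketch of a periodic full-page reload; the interval is an arbitrary example value, not a Kibana setting.

```js
// Sketch of the "hard refresh" workaround: reload the whole dashboard page
// after a few hours so the leaked memory is released. RELOAD_EVERY_MS is an
// arbitrary example value, not a Kibana setting. If this is pasted into the
// console it must be re-run after each reload; in practice it would live in
// a kiosk wrapper or a small browser extension.
const RELOAD_EVERY_MS = 4 * 60 * 60 * 1000; // 4 hours
setTimeout(() => window.location.reload(), RELOAD_EVERY_MS);
```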

chrisronline added a commit to chrisronline/kibana that referenced this issue Mar 2, 2018