
Massive memory leak when auto breadcrumbs is enabled #295

Closed
maxcountryman opened this issue Mar 15, 2017 · 9 comments · Fixed by #296

maxcountryman commented Mar 15, 2017

Unfortunately I will not be able to dig into this for you, but it should be reported so that others who can dig in have a starting point and know they are not alone.

I had to disable auto breadcrumbs because it was consistently consuming gigabytes of memory and crashing my Node process within minutes of running in production. (Roughly every twenty minutes it would exhaust available memory and restart the process.)

Disabling auto breadcrumbs fixed the issue.

This is on Node v7.4.0 and raven-node 1.1.4. It happens on macOS as well as in Docker with the node:7.4 image.

Best of luck fixing this... it's a nasty one.

@LewisJEllis (Contributor)

Thanks for the report, sounds serious. Will investigate and see if I can pinpoint a cause.

@LewisJEllis (Contributor)

Okay, a few questions @maxcountryman; complete answers would help guide repro attempts:

  • What do your raven config/install setup calls look like? (A typical setup is sketched after this list for reference.)
  • Are you using the express middleware? Or managing your own contexts with run/context methods? Or just using the global context?
  • Do you get any raven console alert messages on startup (or throughout program lifecycle)? They look something like raven@1.1.4 alert: ...
  • What has your version history with raven-node looked like? I'm guessing you're on 1.1.4, but have you used autoBreadcrumbs with any other versions? I'm especially curious between 1.1.4, 1.1.3, and anything older.
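For reference, here is a minimal sketch of what a typical raven-node 1.x setup with autoBreadcrumbs and the express middleware looks like; the DSN and app wiring are placeholders, not necessarily your configuration:

```js
// Minimal sketch of a typical raven-node 1.x setup (placeholder DSN,
// not necessarily the reporter's actual configuration).
var Raven = require('raven');
var express = require('express');

Raven.config('https://<key>@sentry.io/<project>', {
  autoBreadcrumbs: true // record console/http breadcrumbs automatically
}).install();

var app = express();
app.use(Raven.requestHandler()); // request handler should come first
// ... application routes ...
app.use(Raven.errorHandler());   // error handler comes after the routes

app.listen(3000);
```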

I'm definitely not going to ask you to put autoBreadcrumbs back into your production environment, and I don't mean to ask you to help me chase this issue, but one thing that would be really helpful is knowing whether 1.1.2 causes the same problem. If you have any way to safely/easily determine whether this happens on 1.1.2, I would hugely appreciate it (though I completely understand if it's not viable to test).

I have three potential avenues to investigate at the moment and will update this thread with any findings.

@LewisJEllis (Contributor)

@maheshoml following up on your comment in #295: given this additional context that the memory leak is autobreadcrumb-related, I'm wondering whether you have autoBreadcrumbs enabled, and if so, whether the memory issue you mentioned goes away when you disable it.

@LewisJEllis (Contributor)

I've done some investigating and it appears to be an issue with http breadcrumbs that is present in 1.1.4 but not in 1.1.2, which means #276 likely introduced the problem. I'm working on a fix.
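For anyone following along, "http breadcrumbs" refers to instrumentation that wraps outgoing http requests and records a breadcrumb for each one. A rough, simplified sketch of that general pattern (not the actual raven-node code, and not the specific bug fixed in #296):

```js
// Simplified illustration of the http-breadcrumb pattern: wrap
// http.request and record a breadcrumb per outgoing request.
// NOT the actual raven-node implementation.
var http = require('http');

var breadcrumbs = [];
var originalRequest = http.request;

http.request = function (options) {
  breadcrumbs.push({
    category: 'http',
    data: {
      method: options && options.method,
      url: options && (options.href || options.host)
    },
    timestamp: Date.now()
  });
  // If the recorded list is not capped and scoped to a request/context,
  // it can grow without bound and leak memory.
  return originalRequest.apply(http, arguments);
};
```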


maheshoml commented Mar 16, 2017

@LewisJEllis I still cannot zero in on what in Raven might be causing this, but we do have auto breadcrumbs for http enabled. For most of yesterday we had raven enabled, and 50+ instances were taken out of the ELB for health check failures caused by memory exhaustion; in the 12+ hours since we disabled raven, there have been 0 such instances. So it is definitely Raven.

Let me see if I can run with http disabled later in the day.
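If I read the docs right, that would be something along these lines (option key names assumed from the raven-node README, so treat this as a sketch):

```js
// Sketch: keep auto breadcrumbs enabled but turn off only the http
// instrumentation while testing (key names assumed from the docs).
Raven.config('https://<key>@sentry.io/<project>', {
  autoBreadcrumbs: {
    console: true, // keep console breadcrumbs
    http: false    // disable http request breadcrumbs
  }
}).install();
```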

@LewisJEllis (Contributor)

@maheshoml thanks for confirming that behavior; it sounds consistent. I tracked the issue down today and made the fix in #296; it'll merge and go out in a new version tomorrow.


idris commented Mar 16, 2017

I have the same issue with massive memory leaks. This was essentially taking down our servers. To fix it, I reverted to an old version of Raven but did not try disabling auto breadcrumbs. I'll re-upgrade and disable auto breadcrumbs, since that's probably fine.

I can also test the new fix once it's released.

@LewisJEllis (Contributor)

I've published versions 1.1.5 (just this fix) and 1.2.0 (this plus sampleRate from #292). I'll be working on the memory leak testing stuff soon; opened #297 to track that.
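For anyone upgrading, usage of the new option is roughly as follows (a sketch; see the README for the authoritative option list):

```js
// Sketch: raven 1.2.0 with the breadcrumb fix plus the new sampleRate
// option (a value of 0.5 would send roughly half of captured events).
var Raven = require('raven');

Raven.config('https://<key>@sentry.io/<project>', {
  autoBreadcrumbs: true,
  sampleRate: 0.5
}).install();
```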


emmenko commented Mar 18, 2017

@LewisJEllis thanks for fixing this 🎉

I was going crazy trying to figure out the cause of the memory leak in our production environment, then luckily I bumped into this issue!
I'll upgrade to 1.2.0 and will let you know if the problem is gone.
