CachedValuesHistogram memory overhead #1296
These values were put in because they worked for our use-case. I don't think these would necessarily be appropriate for all use-cases though. I will add 2 pieces of configuration:
I think setting the defaults to 0 / 3 is probably the right thing to do. That way, you opt in to the decision to use the pool. Thanks for the catch on this - it was probably a bad decision to make this change for all Hystrix consumers. The way this works is that each distribution stream (concurrency/latency) generates a new distribution on an interval. Only a few distributions are active at a time (10 is the default). When a new distribution is needed, it tries to get one from the pool. When an old distribution is no longer referenced, it is cleared and returned to the pool. Once this is configurable, I'll run some benchmarks to get a more accurate accounting of the tradeoffs of pool sizes.
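The acquire/clear/return cycle described above can be sketched as a small bounded object pool. This is a minimal illustration, not the actual Hystrix code: `HistogramPool`, `acquire`, and `release` are hypothetical names, the `Histogram` stub stands in for the real HdrHistogram class, and the 0 / 3 constructor arguments mirror the proposed defaults (no preloading, small cap).

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical stand-in for the real HdrHistogram histogram class,
// which records values and supports reset() for reuse.
class Histogram {
    private long count;
    void recordValue(long v) { count++; }
    void reset() { count = 0; }
    long getTotalCount() { return count; }
}

// Sketch of the pooling scheme described above: preload some instances,
// allocate fresh ones on a pool miss, and clear + return instances when
// they are no longer referenced, dropping returns beyond the cap.
class HistogramPool {
    private final ConcurrentLinkedQueue<Histogram> pool = new ConcurrentLinkedQueue<>();
    private final int maxSize;

    HistogramPool(int initialSize, int maxSize) {
        this.maxSize = maxSize;
        for (int i = 0; i < initialSize; i++) {
            pool.offer(new Histogram());
        }
    }

    Histogram acquire() {
        Histogram h = pool.poll();
        return (h != null) ? h : new Histogram(); // pool miss: allocate a new one
    }

    void release(Histogram h) {
        h.reset(); // clear before the histogram can be reused
        if (pool.size() < maxSize) {
            pool.offer(h); // beyond the cap, let GC reclaim it
        }
    }

    int size() {
        return pool.size();
    }
}
```

With defaults of `new HistogramPool(0, 3)`, nothing is allocated up front and at most three cleared histograms are retained, which is the opt-in behavior suggested above.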
@mattnelson Rather than add configuration, I have a PR which just removes the pooling entirely: #1351. I would rather reduce complexity than add to it. Any concerns with this approach?
I'm fine with either approach. Was there profiling done when this feature was introduced to warrant using the pool? I don't want to see a performance regression caused by the construction of the histograms.
I think at some point in the development of the metrics functionality, it was warranted. I just ran a preliminary set of jmh tests and the results looked fine without the pooling. I'll merge this in and publish the perf-delta here.
I added the jmh data for 1.5.6 here: https://docs.google.com/spreadsheets/d/1a0ERBQJZzlmVqMpuvvdSwbJuXwt0UmkPsFZ97pvgA8o/edit#gid=1200952544 No large delta on memory-usage when reading metrics, so closing.
The CachedValuesHistogram class preloads[1] 1000 instances of HDRHistogram. Profiling suggests each instance uses about 41 KB of memory, which equates to roughly 41 MB for preloading these histograms. The class will create a new histogram on a pool miss[2], so I'm wondering how the default of 1000 was decided on in #1047. Would you need 1000 unique commands in order to exhaust the pool?
[1] https://github.com/Netflix/Hystrix/blob/v1.5.3/hystrix-core/src/main/java/com/netflix/hystrix/metric/CachedValuesHistogram.java#L27-L31
[2] https://github.com/Netflix/Hystrix/blob/v1.5.3/hystrix-core/src/main/java/com/netflix/hystrix/metric/CachedValuesHistogram.java#L165-L167
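The memory figures in the report above are simple arithmetic: pool size times per-instance footprint. A small sketch of that estimate, where `estimateBytes` is a hypothetical helper (the 41 KB per-histogram figure comes from the profiling mentioned above and will vary with histogram precision and value range):

```java
// Back-of-the-envelope check of the overhead reported above:
// 1000 preloaded histograms at roughly 41 KB each.
public class PoolMemoryEstimate {
    // Hypothetical helper: total bytes held by a fully preloaded pool.
    static long estimateBytes(int poolSize, int bytesPerHistogram) {
        return (long) poolSize * bytesPerHistogram;
    }

    public static void main(String[] args) {
        // 1000 is the default pool size from #1047; ~41 KB observed via profiling.
        long total = estimateBytes(1000, 41_000);
        System.out.println("~" + (total / 1_000_000) + " MB preallocated");
    }
}
```

Running this prints `~41 MB preallocated`, matching the overhead described in the report; a pool that large only pays off if roughly that many distributions are ever in flight at once.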