Carbon.conf #4

Hi,
Would it also be possible to see your carbon.conf for an idea of which settings need to be set between the different caches/relays?
Here is the one I think you're referring to. IIRC it was configured for 0.9.10 and will probably not be suitable as a drop-in replacement for your environment but it's something to work from. In this environment we had Backstop running on Heroku accepting a heavy stream of metrics and distributing them to the pool of relays below.
Contrast that with this version, which imho is much more typical of a "scale-out" (on a single box) across multiple cores. We had an HAProxy running in front of the relays at different "layers" (in front of the replication and fanout layers). Running in production on 0.9.12.
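The gists themselves aren't reproduced in this thread, but a minimal sketch of a single-box "scale-out" carbon.conf along those lines might look like the following. The instance names and ports here are illustrative assumptions, not the original configuration; HAProxy (not shown) would balance incoming traffic across the relay listener ports.

```
# Hypothetical sketch only -- not the configuration referenced above.

[cache]
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_PORT = 2004
CACHE_QUERY_PORT = 7002

[cache:b]
LINE_RECEIVER_PORT = 2103
PICKLE_RECEIVER_PORT = 2104
CACHE_QUERY_PORT = 7102

# Fanout relays: each hashes metrics across the local cache instances.
[relay]
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = consistent-hashing
DESTINATIONS = 127.0.0.1:2004:a, 127.0.0.1:2104:b

[relay:b]
LINE_RECEIVER_PORT = 2213
PICKLE_RECEIVER_PORT = 2214
RELAY_METHOD = consistent-hashing
DESTINATIONS = 127.0.0.1:2004:a, 127.0.0.1:2104:b
```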
Ports 2114 and 2614 - I assume they're used by two HAProxy frontends, right? Am I understanding that correctly?
Yup.
Hey, sorry to dig up such an old thing, but I'm running into some problems with a similar aggregator setup, and I wonder if you could at least tell me how wrong I'm doing this.

I have two relays picking up metrics from an AMQP queue, similar to relay:[5,6]. Those relays distribute their load between two aggregators, similar to aggregator:[1,2], using the consistent-hashing method. My aggregators are misbehaving with this. My take is that by using consistent-hashing to distribute the load between two aggregators, I'm not feeding either of them all the metrics it needs to aggregate properly according to the rule, because those metrics get "evenly" distributed between the two aggregators and never all arrive at the same aggregator for it to operate on.

My aggregators jam in this setup, but I attribute the jamming to the fact that I stop feeding metrics to the system at a given point. Although I think that even if I didn't stop, the aggregates would be calculated wrongly, split between both aggregators.

If you're reading this to the end, thank you so much already. I'd really appreciate a heads-up on this: it's starting to make a small dent in my sanity.
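To make that failure mode concrete, here is a hypothetical sketch of the layout described above; the hosts, ports, and the example rule are illustrative, not the actual configuration:

```
# Hypothetical sketch of the problematic layout.

# One of the two relays reading from AMQP; consistent-hashing hashes on
# the *source* metric name.
[relay]
RELAY_METHOD = consistent-hashing
DESTINATIONS = 10.0.0.1:2024:agg1, 10.0.0.2:2024:agg2

# aggregation-rules.conf shared by both aggregators, e.g.:
#
#   <env>.applications.<app>.all.requests (60) = sum <env>.applications.<app>.*.requests
#
# Because the individual per-host source metrics hash to different
# aggregators, neither aggregator ever receives the complete set of inputs
# for the rule, so the computed sums are wrong.
```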
@punnie First off, sorry for the late reply. I just now saw your question. Second, I think that's a reasonable assumption. You could use …
@obfuscurity Thank you very much for taking the time to reply. Just for the sake of closure, I've solved this issue with the help of graphite-project/carbon#32. What it does is implement a new relay method called "aggregated-consistent-hashing", which distributes metrics across aggregators using the destination (aggregated) metric name instead of the source name to produce the hash. This ultimately means that every aggregator performs its own set of aggregation operations and always receives every metric needed to perform those aggregations. Again, thanks for the help, and for pushing the envelope 🍻
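For anyone landing here later, a rough sketch of what that change amounts to on the relays; hosts and ports are again illustrative:

```
# Hypothetical sketch: the only change on the relays is the hashing method.
[relay]
RELAY_METHOD = aggregated-consistent-hashing
DESTINATIONS = 10.0.0.1:2024:agg1, 10.0.0.2:2024:agg2
# With this method the relay consults the aggregation rules (typically the
# same aggregation-rules.conf the aggregators use) to compute the
# *destination* metric name and hashes on that, so every input for a given
# aggregate lands on the same aggregator.
```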
@punnie Very cool, I wasn't even aware of that feature. :)