Limit total global Hystrix threads #297

Closed
mikeycohen opened this issue Aug 27, 2014 · 5 comments
@mikeycohen

Systems can spiral out of control with Hystrix threads as the number of Hystrix commands increases. During periods of latency, multiple pools may be affected at once, causing a rapid expansion of runnable threads. We should be able to cap the total number of Hystrix threads.

@benjchristensen

Agreed, thanks for posting this. Hopefully @KoltonAndrus and I can get to this in Hystrix 1.4 in the near future.

@benjchristensen benjchristensen modified the milestones: 1.4, 1.4.x Aug 27, 2014
@mattrjacobs mattrjacobs changed the title Limit total global Hystrx threads Limit total global Hystrix threads Dec 18, 2014
@mattrjacobs mattrjacobs removed this from the 1.4.x milestone Dec 19, 2014
@mikeycohen mikeycohen removed this from the 1.4.x milestone Dec 19, 2014
@mattrjacobs mattrjacobs added this to the 1.4.0-RC7 milestone Dec 19, 2014
@mattrjacobs

I see a couple of options for addressing this problem.

  1. Just like each individual Hystrix thread pool has a size, and exceeding that number of active threads results in a thread-pool rejection, we could create a synthetic limit on the sum of all active threads, and exceeding that would result in a synthetic thread-pool rejection.

  2. Hystrix already exposes the hooks onThreadStart and onThreadComplete. You can write a custom hook that keeps track of the number of outstanding threads and then consults a policy on how to handle different levels of saturation. For instance, you could check this value upon each receipt of an HTTP request, and then 503 it if the value was too high, hopefully shedding enough load to keep your system up.

My preference is Option 2 (a sketch follows below), as not all applications will want to opt in to this behavior, and it doesn't add any cognitive load to Hystrix-core.
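For reference, a minimal sketch of what such a hook might look like. It assumes the `onThreadStart`/`onThreadComplete` overloads on `HystrixCommandExecutionHook` take a `HystrixInvokable` (as in 1.4; older versions differ), and the class name and counter are illustrative rather than prescriptive:

```java
import java.util.concurrent.atomic.AtomicInteger;

import com.netflix.hystrix.HystrixInvokable;
import com.netflix.hystrix.strategy.executionhook.HystrixCommandExecutionHook;

// Illustrative hook: counts Hystrix threads currently executing across all pools.
public class GlobalThreadCountingHook extends HystrixCommandExecutionHook {

    private static final AtomicInteger activeThreads = new AtomicInteger(0);

    @Override
    public <T> void onThreadStart(HystrixInvokable<T> commandInstance) {
        activeThreads.incrementAndGet();
    }

    @Override
    public <T> void onThreadComplete(HystrixInvokable<T> commandInstance) {
        activeThreads.decrementAndGet();
    }

    // An HTTP layer can consult this value and 503 requests when it is too high.
    public static int currentActiveThreads() {
        return activeThreads.get();
    }
}
```

The hook would be registered once at startup via `HystrixPlugins.getInstance().registerCommandExecutionHook(new GlobalThreadCountingHook())`; the load-shedding policy (the threshold, and which requests to reject) stays entirely in the application.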

Thoughts? /cc @mikeycohen @benjchristensen @KoltonAndrus

@benjchristensen

I agree with option 2.

@mattrjacobs

Punting to 1.4.x

@mattrjacobs mattrjacobs modified the milestones: 1.4.0-RC7, 1.4.x Feb 11, 2015
@mattrjacobs mattrjacobs modified the milestones: 1.4.x, 1.4.2 Mar 3, 2015
@mattrjacobs mattrjacobs modified the milestones: Hystrix-core Features, Hystrix-core Bugfixes Mar 27, 2015
@mattrjacobs

I just implemented a middle ground between options 1 and 2 above. There is now a static method, HystrixCounters.getGlobalConcurrentThreadsExecuting(), which directly returns the number of Hystrix threads executing across the JVM.

This doesn't affect any core Hystrix behavior, but it makes it easier to build a custom strategy in any application that consumes Hystrix.
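As an example of how an application might consume this counter (the servlet filter and the threshold below are application-level choices and not part of Hystrix; only the static counter call is the API described above):

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

import com.netflix.hystrix.HystrixCounters;

// Illustrative load-shedding filter: rejects requests up front when too many
// Hystrix threads are executing JVM-wide.
public class GlobalHystrixLoadSheddingFilter implements Filter {

    // Assumed application-level limit, tuned per deployment.
    private static final int MAX_GLOBAL_HYSTRIX_THREADS = 500;

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        if (HystrixCounters.getGlobalConcurrentThreadsExecuting() > MAX_GLOBAL_HYSTRIX_THREADS) {
            ((HttpServletResponse) response).sendError(503, "Shedding load: global Hystrix thread limit reached");
            return;
        }
        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}
```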
