Keep-alive in TheHive #427
As mentioned in #414, and per elastic4play issue #41, the Stream misbehaves when you have more than one TheHive app node behind a load balancer (YMMV).
@ParanoidRat I'm curious to know which timeout settings you added or changed. I believe I can control those in AWS ELB, so I'd be willing to give it a try. I had assumed (apparently incorrectly), however, that TheHive should be the one setting this up.
Using a timeout of 120 seconds eliminated all gateway timeouts for me when using the ALB. Also, if you are using AWS for load balancing, make sure to point the health check at index.html (which is served by both TheHive and Cortex) to verify connectivity. Pointing it at / will result in a 303, which is not considered healthy by default (or add 303 to the list of healthy response codes).
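For anyone applying the health-check advice above from the command line, this is a minimal sketch using the AWS CLI. The target group ARN is a placeholder, and the target group name `thehive` is an assumption about your setup:

```shell
# Point the ALB health check at index.html instead of / (placeholder ARN).
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/thehive/0123456789abcdef \
  --health-check-path /index.html \
  --matcher HttpCode=200

# Alternative: keep / as the path but also treat the 303 redirect as healthy.
# aws elbv2 modify-target-group --target-group-arn <arn> --matcher HttpCode=200,303
```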
@0xtf My overall timeout for nginx is configured as
This issue seems solved. Feel free to reopen it if increasing the timeout is not enough. |
I have been working on this issue in my deployment as well, but it may be specific to my config. I have a front-end server in a cloud provider that runs nginx and proxies over an OpenVPN connection back to the on-prem server running TheHive. Any time I set the proxy_connect_timeout setting in my nginx config, the front-end server becomes unresponsive for all traffic. I'm going to keep working through it, but any suggestions would be much appreciated.
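For reference, a hypothetical nginx reverse-proxy block with the timeouts discussed in this thread might look like the sketch below. The upstream address and port are assumptions (TheHive's default bind is 9000); the directive names are standard nginx `proxy` module directives whose defaults are 60 seconds:

```nginx
location / {
    proxy_pass          http://127.0.0.1:9000;
    proxy_http_version  1.1;              # needed for upstream keep-alive
    proxy_set_header    Connection "";    # allow connection reuse upstream
    proxy_connect_timeout  30s;
    proxy_send_timeout     120s;
    proxy_read_timeout     120s;          # long-lived Stream requests survive
}
```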
Hi all,
Following a discussion on the Gitter channel, I decided to open this issue to bring more attention to the problem. It might not actually be a bug, so feel free to close it if that's the case.
I'm currently running TheHive behind a load balancer, and I had several timeouts, which resulted in an error popping up in the lower left corner of TheHive (regardless of which screen I was on).
The load balancer in this case is AWS Elastic Load Balancer (ELB), and following their documentation, one of the possible solutions was to increase the timeout, even though I don't believe that is the best solution.
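For completeness, the idle-timeout increase can be made with the AWS CLI. This is a sketch assuming a Classic ELB named `my-thehive-lb` (the name is a placeholder); ALBs use the equivalent `aws elbv2 modify-load-balancer-attributes` command instead:

```shell
# Raise the Classic ELB idle timeout to 120 seconds (placeholder LB name).
aws elb modify-load-balancer-attributes \
  --load-balancer-name my-thehive-lb \
  --load-balancer-attributes "ConnectionSettings={IdleTimeout=120}"
```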
Another recommendation given by AWS is to implement keep-alive in the application. I looked around, and I'm not sure this is supported by TheHive. I'm also not sure whether this can be done at the OS layer, so any feedback on this is greatly appreciated.
I saw some keep-alive settings, but they only applied to Elasticsearch. Any tips? Can TheHive support keep-alive?
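One way to approximate keep-alive at the OS layer, rather than in the application, is to tune the Linux kernel's TCP keep-alive parameters on the TheHive host. This is a sketch under the assumption of a Linux server; note that these settings only affect sockets that already have `SO_KEEPALIVE` enabled, so they do not force keep-alive on by themselves:

```shell
# More aggressive TCP keep-alive on the TheHive host (requires root).
sysctl -w net.ipv4.tcp_keepalive_time=60    # idle seconds before first probe
sysctl -w net.ipv4.tcp_keepalive_intvl=10   # seconds between probes
sysctl -w net.ipv4.tcp_keepalive_probes=6   # failed probes before drop

# To persist across reboots, place the same keys in a file under
# /etc/sysctl.d/ and run `sysctl --system`.
```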