
Keep-alive in TheHive #427

Closed
0xtf opened this issue Jan 14, 2018 · 7 comments


0xtf commented Jan 14, 2018

Hi all,

Following a discussion on the Gitter channel, I decided to open this issue to bring more attention to the problem. It might not actually be a bug, so feel free to close it if that's the case.

I'm currently running TheHive behind a load balancer and I've had several timeouts, which result in the following error popping up in the lower-left corner of TheHive (regardless of which screen we're on):

[screenshot of the error notification]

The load balancer in this case is an AWS Elastic Load Balancer (ELB). Following their documentation, one possible solution is to increase the timeout, though I don't believe that is the best approach.

Another recommendation from AWS is to implement keep-alive in the application. I looked around and I'm not sure this is supported by TheHive. I'm also not sure whether it can be done at the OS layer instead, so any feedback on this is greatly appreciated.

I saw some keep-alive settings, but they only apply to Elasticsearch. Any tips? Can TheHive support keep-alive?
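For what it's worth, TCP keep-alive can be enabled at the socket/OS layer without any support from the application itself. A minimal Python sketch (the probe timings are illustrative values, not recommendations from this thread):

```python
import socket

# Enable TCP keep-alive on a socket at the OS layer.
# On Linux, the kernel-wide defaults live in /proc/sys/net/ipv4/tcp_keepalive_*.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# On Linux, per-socket probe timing can be tuned as well
# (these TCP_KEEP* options are platform-specific, hence the hasattr guard):
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before dropping

# Verify the option actually took effect:
assert sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0
sock.close()
```

Note this only keeps idle TCP connections from being reaped; it does not address the ELB idle timeout discussed below unless the keep-alive interval is shorter than that timeout.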

@ParanoidRat

As mentioned in #414, Keep-Alive seems to cause this behavior. The Stream stopped throwing 504s when I removed Keep-Alive in nginx. I later added reasonable timeout settings, and the 504s have not appeared since.

As per elastic4play issue #41, however, the Stream misbehaves when you have more than one TheHive app node behind an LB (YMMV).
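The nginx change described above could look something like the following sketch. The exact directives used aren't shown in this thread, so server name, upstream address, and the read/send timeouts are assumptions; only the 159s connect timeout is from the discussion:

```nginx
# Sketch only: disable keep-alive and rely on generous proxy timeouts instead.
server {
    listen 443 ssl;
    server_name thehive.example.com;   # placeholder

    # Disable HTTP keep-alive toward clients:
    keepalive_timeout 0;

    location / {
        proxy_pass http://127.0.0.1:9000;   # TheHive's default port

        # Timeouts generous enough for TheHive's long-polling Stream requests:
        proxy_connect_timeout 159s;
        proxy_read_timeout    300s;   # assumed value
        proxy_send_timeout    300s;   # assumed value
    }
}
```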


0xtf commented Jan 17, 2018

@ParanoidRat I'm curious which timeout settings you added or changed. I believe I can control those in AWS ELB, so I'd be willing to give it a try.

I assumed (apparently incorrectly), however, that it should be TheHive setting this up.


0xtf commented Jan 17, 2018

[screenshot: AWS ELB idle-timeout configuration]

It's possible to configure Idle Timeout in AWS ELB, so that might be a way to go. Unsure about what to set though.


0xtf commented Jan 19, 2018

Using a timeout of 120 seconds, I never saw any more gateway timeouts with the ALB. Also, if you're using AWS for load balancing, make sure to point the health check at index.html (which is served by both TheHive and Cortex) to verify connectivity. Pointing it at / results in a 303, which is not considered healthy by default (alternatively, add 303 to the health check's accepted codes).
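For reference, both changes can also be made with the AWS CLI. This is a sketch of the approach described above, not a command from the thread; the ARNs are placeholders you'd replace with your own:

```shell
# Raise the ALB idle timeout to 120 seconds:
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn <your-load-balancer-arn> \
  --attributes Key=idle_timeout.timeout_seconds,Value=120

# Point the health check at index.html, which TheHive/Cortex serve with a 200:
aws elbv2 modify-target-group \
  --target-group-arn <your-target-group-arn> \
  --health-check-path /index.html

# Alternatively, keep / as the path and also accept the 303 redirect as healthy:
aws elbv2 modify-target-group \
  --target-group-arn <your-target-group-arn> \
  --matcher HttpCode=200,303
```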

@ParanoidRat

@0xtf My overall timeout for nginx is configured as `proxy_connect_timeout 159s;`


To-om commented Jan 23, 2018

This issue seems solved. Feel free to reopen it if increasing the timeout is not enough.

To-om closed this as completed Jan 23, 2018

Hestat commented Feb 26, 2018

I have been working on this issue in my deployment as well, but it may be specific to my config. I have a front end in a cloud provider that runs nginx and proxies over an OpenVPN connection back to the on-prem server running TheHive. Any time I set `proxy_connect_timeout` in my nginx config, the front-end server becomes unresponsive to all traffic. I'm going to keep working through it, but any suggestions would be much appreciated.
