Lately we've upgraded our orchestrator instance to Ubuntu 18 with version 3.1.4.
Since then we experience a lot of failed checks to the backends, which get sorted out in one of the next checks.
The backends are up, and the failures of the checks seem random.
This creates a lot of confusion, because checking the UI makes it impossible to know what the actual status of the clusters is.
How can we debug this further to understand why the checks are failing?
Attached orchestrator conf (user and pass removed): orc.conf.txt
Thanks!
First, please run with --debug --stack and see if you spot any interesting error messages.
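For example, a rough sketch of running it in the foreground with verbose output (paths, config location, and the systemd unit name are assumptions; adjust to your setup):

```shell
# Temporarily stop the service and run orchestrator in the foreground
# with debug logging and stack traces, capturing output to a file.
sudo systemctl stop orchestrator   # assumes a systemd unit named "orchestrator"
cd /usr/local/orchestrator
sudo ./orchestrator --debug --stack --config=/etc/orchestrator.conf.json http 2>&1 | tee /tmp/orchestrator-debug.log

# Afterwards, look for suspicious check/connection errors:
grep -iE "error|timeout|refused" /tmp/orchestrator-debug.log
```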
Next, do you perhaps have a low setting for the open file limit (ulimit -n)? If you have many servers in your topologies, then consider increasing nofile to some higher value, e.g. 8192.
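A sketch for checking and raising the limit, assuming orchestrator runs under systemd (the unit name is an assumption):

```shell
# Check the limit actually applied to the running orchestrator process,
# not just your interactive shell's ulimit:
cat /proc/$(pgrep -x orchestrator | head -1)/limits | grep "open files"

# Raise it for the service via a systemd drop-in override:
sudo systemctl edit orchestrator
#   [Service]
#   LimitNOFILE=8192
sudo systemctl daemon-reload && sudo systemctl restart orchestrator
```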
Last, let's try increasing some connection timeouts. Possibly your network times out. The default values are actually pretty permissive, but it's worth testing.
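Something along these lines in the config, for instance (parameter names as documented for orchestrator; please verify them against your 3.1.4 build, and treat the values as a starting point rather than a recommendation):

```json
{
  "MySQLConnectTimeoutSeconds": 5,
  "MySQLTopologyReadTimeoutSeconds": 600,
  "MySQLDiscoveryReadTimeoutSeconds": 30,
  "InstancePollSeconds": 5
}
```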