I have an ES cluster, but Kibana is configured with a single IP. If the ES node at that IP goes down, Kibana loses its connection to ES. Is there any way to overcome this? What I need is for Kibana to work transparently against the whole ES cluster.
Is it possible to give Kibana only the hostname and the cluster name so it can find an ES node that is up? Is it possible to give a list of URLs (host1:9200, host1:9201)?
For information, Kibana and the ES cluster run on the same server, and I use the following versions: elasticsearch-1.1.1, kibana-3.1.0, logstash-1.4.1.
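For reference, the single endpoint is set in Kibana 3's config.js; mine looks roughly like this (simplified, and the IP below is just a placeholder). As far as I can tell, the elasticsearch setting only accepts one URL, not a list of nodes:

```javascript
// config.js (Kibana 3) - simplified sketch, placeholder address
define(['settings'],
function (Settings) {
  "use strict";
  return new Settings({
    // only a single URL seems to be accepted here
    elasticsearch: "http://192.168.1.10:9200",
    kibana_index: "kibana-int",
    default_route: '/dashboard/file/default.json'
  });
});
```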
An idea - have you tried round robin DNS? If an HTTP request fails because the server is unreachable, the browser will transparently try the other IPs returned by DNS. Not sure if this works with XHR though.
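As a sketch, that would just mean publishing two A records for one name and pointing Kibana at it (names and addresses below are made up):

```
; zone fragment for example.com - resolvers rotate the order of the answers
es    IN  A  192.168.1.10
es    IN  A  192.168.1.11
```

Kibana's config.js would then use http://es.example.com:9200 instead of a single node's IP.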
I have used HAProxy to load balance a two-node ES cluster and verified that the browser could transparently fail over to the redundant ES node through HAProxy while still seeing the indexed data. When you do failover testing, make sure the redundant node actually holds replicas of the primary's data.
We are still verifying data integrity during failover ourselves.
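In case it helps, a minimal haproxy.cfg along the lines of what I used (node addresses are examples); Kibana then points at http://<haproxy-host>:9200:

```
# haproxy.cfg - minimal sketch for fronting two ES nodes
defaults
    mode http
    timeout connect 5s
    timeout client  60s
    timeout server  60s

frontend es_front
    bind *:9200
    default_backend es_back

backend es_back
    balance roundrobin
    # basic HTTP health check; a stricter check could hit /_cluster/health
    option httpchk GET /
    server es1 192.168.1.10:9200 check
    server es2 192.168.1.11:9200 check
```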