
restoration from a snapshot is failing #15432

Closed
binishabraham opened this issue Dec 15, 2015 · 13 comments

Labels
:Distributed Coordination/Snapshot/Restore  feedback_needed

Comments

@binishabraham

I am getting the following exception while restoring a snapshot:

curl -XPOST "http://hostname1:9200/_snapshot/snapshot_restore_repo/snapshot_07_12_15_17_08_56/_restore"
{"error":"RemoteTransportException[[hostname2][inet[/ipaddress:9300]][cluster:admin/snapshot/restore]]; nested: ConcurrentSnapshotExecutionException[[snapshot_restore_repo:snapshot_07_12_15_17_08_56] Restore process is already running in this cluster]; ","status":503}-bash-4.1$

The cluster state is shown below:

GET _cluster/state

"restore": {
"snapshots": [
{
"snapshot": "snapshot_07_12_15_17_08_56",
"repository": "snapshot_restore_repo",
"state": "STARTED",
"indices": [
"indexname"
],
"shards": [
{
"index": "indexname",
"shard": 0,
"state": "SUCCESS"
},
{
"index": "indexname",
"shard": 1,
"state": "SUCCESS"
},
{
"index": "indexname",
"shard": 2,
"state": "INIT"
},
{
"index": "indexname",
"shard": 3,
"state": "SUCCESS"
},
{
"index": "indexname",
"shard": 4,
"state": "SUCCESS"
}
]
}
]
}

How can I get the shard out of the "INIT" state and complete the restoration?
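For what it's worth, a sketch of the usual escape hatch in ES 1.x, assuming the index can simply be restored again: per the 1.x snapshot/restore docs, a running restore uses the standard shard recovery mechanism, so it can be canceled by deleting the indices being restored.

# cancel the stuck restore by deleting the index it is restoring (destructive)
curl -XDELETE "http://hostname1:9200/indexname"
# then retry synchronously, so a scripted retry cannot overlap an in-flight restore
curl -XPOST "http://hostname1:9200/_snapshot/snapshot_restore_repo/snapshot_07_12_15_17_08_56/_restore?wait_for_completion=true"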

clintongormley added the feedback_needed and :Distributed Coordination/Snapshot/Restore labels on Dec 15, 2015
@clintongormley
Contributor

What version of Elasticsearch are you using?

@binishabraham
Author

I am using 1.7.0

@clintongormley
Contributor

@imotov @ywelsch any ideas?

@imotov
Contributor

imotov commented Dec 16, 2015

@binishabraham anything in the log file on master?

@binishabraham
Author

@clintongormley @imotov

Please find more details below.

I routinely restore the snapshot taken on one cluster to three other clusters.

I saw the following error while restoring that particular snapshot to the last cluster:

{"error":"MasterNotDiscoveredException[]","status":503}

But even after this exception, all the documents were present in the last cluster as well; the document count was equal to the other three clusters.

From the next day's snapshot onwards, I have been getting the following exceptions on the fourth cluster:

{"error":"RemoteTransportException[[app-wc-b2p.sys.company.net][inet[/172.28.74.70:9300]][cluster:admin/repository/put]]; nested: ElasticsearchIllegalStateException[trying to modify or unregister repository that is currently used ]; ","status":500} % Total % Received % Xferd Average Speed Time Time Time Current
372 Dload Upload Total Spent Left Speed
373 ^M 0 0 0 0 0 60 0 49059 --:--:-- --:--:-- --:--:-- 49059^M 0 0 0 0 0 60 0 59 --:--:-- 0:00:01 --:--:-- 0^M 0 0 0 0 0 60 0 29 --:--:-- 0:00:02 --:--:-- 0^M 0 0 0 0 0 60 0 19 --:--:-- 0:00:03 --:--:-- 0^M 0 0 0 0 0 60 0 14 - -:--:-- 0:00:04 --:--:-- 0^M176 293 146 293 0 60 68 14 0:00:04 0:00:04 --:--:-- 54
374 {"error":"RemoteTransportException[[app-wc-b2p.sys.company.net][inet[/172.28.74.70:9300]][cluster:admin/snapshot/restore]]; nested: ConcurrentSnapshotExecutionException[[snapshot_restore_repo:snapshot_08_12_15_15_32_36] Restore process is already running in this cluster]; ","status":503}

But cluster-2 and cluster-3 restore successfully from the same snapshot.

Exceptions from the same day's ES log:

-bash-4.1$ grep Exception es-prd-b-cluster.log.2015-12-07
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[/172.28.74.67:9300]][internal:discovery/zen/unicast_gte_1_4] request_id [2412434] timed out after [3751ms]
[2015-12-07 23:06:19,862][INFO ][discovery.zen] [app-wc-b1p.sys.company.net] failed to send join request to master [[app-wc-b2p.sys.company.net][FJ3lMq59TKmnSfdPlJNm5g][app-wc-b2p.sys.company.net][inet[/172.28.74.70:9300]]{master=true}], reason [RemoteTransportException[[app-wc-b2p.sys.company.net][inet[/172.28.74.70:9300]][internal:discovery/zen/join]]; nested: ElasticsearchIllegalStateException[Node [[app-wc-b2p.sys.company.net][FJ3lMq59TKmnSfdPlJNm5g][app-wc-b2p.sys.company.net][inet[/172.28.74.70:9300]]{master=true}] not master for join request from [[app-wc-b1p.sys.company.net][wVA3GQwHTW6MM4jHlubQqA][app-wc-b1p.sys.company.net][inet[/172.28.74.7:9300]]{master=true}]]; ], tried [3] times
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[/172.28.74.67:9300]][internal:discovery/zen/unicast_gte_1_4] request_id [2412470] timed out after [3750ms]
org.elasticsearch.ElasticsearchIllegalStateException: cluster state from a different master then the current one, rejecting (received [app-wc-b3p.sys.company.net][QNbUU32-T3yYaVBGBf6z9A][app-wc-b3p.sys.company.net][inet[/172.28.74.67:9300]]{master=true}, current [app-wc-b2p.sys.company.net][FJ3lMq59TKmnSfdPlJNm5g][app-wc-b2p.sys.company.net][inet[/172.28.74.70:9300]]{master=true})

I have the following questions as part of an immediate fix:

  1. Is there any fix for this other than recreating the index and reloading the data?
  2. Is there any provision to restore all the snapshots present in my share path? (See the sketch after this list.)
     These are created on a daily basis in a NAS mount point (/NAS/esbackup) and have names such as
     snapshot-snapshot_08_12_15_15_32_36, snapshot-snapshot_10_12_15_17_13_05, etc.
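A sketch for question 2, assuming the NAS path is registered on the target cluster as the snapshot_restore_repo repository: the ES 1.x snapshot API lists every snapshot in a repository via _all, and each one can then be restored by name (the snapshot- file prefix is not part of the name).

# list all snapshots in the repository
curl -s "localhost:9200/_snapshot/snapshot_restore_repo/_all?pretty"
# restore one of them by name, e.g.:
curl -XPOST "localhost:9200/_snapshot/snapshot_restore_repo/snapshot_08_12_15_15_32_36/_restore?wait_for_completion=true"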

@imotov
Contributor

imotov commented Dec 17, 2015

@binishabraham could you post here or send me by email the output of the following command, as well as the complete log from the master node?

curl "localhost:9200/_cluster/state/routing_table,customs"

@binishabraham
Author

@imotov
Please find the details below.

I am using a 7-node cluster: 5 data nodes that are eligible to become master, and 2 client nodes.
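As an aside, the master_left / not-master churn in the log below is a classic split-brain symptom. With five master-eligible nodes, the ES 1.x zen discovery docs call for a quorum setting on every master-eligible node; a sketch of the elasticsearch.yml line, assuming all five data nodes stay master-eligible:

# quorum of 5 master-eligible nodes: (5 / 2) + 1 = 3
discovery.zen.minimum_master_nodes: 3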

Log from the master:
[2015-12-07 23:01:45,541][WARN ][transport ] [app-wc-b1p.sys.company.net] Received response for a request that has timed out, sent [72441ms] ago, timed out [42440ms] ago, action [internal:discovery/zen/fd/master_ping], node [[app-wc-b3p.sys.company.net][QNbUU32-T3yYaVBGBf6z9A][app-wc-b3p.sys.company.net][inet[/172.28.74.67:9300]]{master=true}], id [2412225]
[2015-12-07 23:01:45,543][WARN ][transport ] [app-wc-b1p.sys.company.net] Received response for a request that has timed out, sent [42442ms] ago, timed out [12442ms] ago, action [internal:discovery/zen/fd/master_ping], node [[app-wc-b3p.sys.company.net][QNbUU32-T3yYaVBGBf6z9A][app-wc-b3p.sys.company.net][inet[/172.28.74.67:9300]]{master=true}], id [2412226]
[2015-12-07 23:02:23,013][WARN ][monitor.jvm ] [app-wc-b1p.sys.company.net] [gc][young][2250943][49420] duration [1.4s], collections [1]/[1.7s], total [1.4s]/[32m], memory [7gb]->[6.8gb]/[9.9gb], all_pools {[young] [248.8mb]->[878.5kb]/[266.2mb]}{[survivor] [20.6mb]->[21.9mb]/[33.2mb]}{[old] [6.7gb]->[6.7gb]/[9.6gb]}
[2015-12-07 23:06:15,118][INFO ][discovery.zen ] [app-wc-b1p.sys.company.net] master_left [[app-wc-b3p.sys.company.net][QNbUU32-T3yYaVBGBf6z9A][app-wc-b3p.sys.company.net][inet[/172.28.74.67:9300]]{master=true}], reason [failed to ping, tried [3] times, each with maximum [30s] timeout]
[2015-12-07 23:06:15,119][WARN ][discovery.zen ] [app-wc-b1p.sys.company.net] master left (reason = failed to ping, tried [3] times, each with maximum [30s] timeout), current nodes: {[app-wc-b2p.sys.company.net][FJ3lMq59TKmnSfdPlJNm5g][app-wc-b2p.sys.company.net][inet[/172.28.74.70:9300]]{master=true},[app-wc-b5p.sys.company.net][fCe-HjVqSdSj8Opr0A0amQ][app-wc-b5p.sys.company.net][inet[/172.28.74.69:9300]]{master=true},[app-wc-b4p.sys.company.net][ctuotCZqRjGbsZ1LfOIa0A][app-wc-b4p.sys.company.net][inet[/172.28.74.68:9300]]{master=true},[app-wc-b1p.sys.company.net][wVA3GQwHTW6MM4jHlubQqA][app-wc-b1p.sys.company.net][inet[/172.28.74.7:9300]]{master=true},[cspesweb-wc-b2p.sys.company.net][MNtdFAsqRI6SryWUEawdqA][cspesweb-wc-b2p.sys.company.net][inet[/76.96.55.242:9300]]{data=false, master=false},[cspesweb-wc-b1p.sys.company.net][vcU4GzyDTMCrYPfN_dbYkw][cspesweb-wc-b1p.sys.company.net][inet[/76.96.55.241:9300]]{data=false, master=false},}
[2015-12-07 23:06:15,119][INFO ][cluster.service ] [app-wc-b1p.sys.company.net] removed {[app-wc-b3p.sys.company.net][QNbUU32-T3yYaVBGBf6z9A][app-wc-b3p.sys.company.net][inet[/172.28.74.67:9300]]{master=true},}, reason: zen-disco-master_failed ([app-wc-b3p.sys.company.net][QNbUU32-T3yYaVBGBf6z9A][app-wc-b3p.sys.company.net][inet[/172.28.74.67:9300]]{master=true})
[2015-12-07 23:06:18,876][WARN ][discovery.zen.ping.unicast] [app-wc-b1p.sys.company.net] failed to send ping to [[#zen_unicast_3#][app-wc-b1p.sys.company.net][inet[/172.28.74.67:9300]]]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[/172.28.74.67:9300]][internal:discovery/zen/unicast_gte_1_4] request_id [2412434] timed out after [3751ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:529)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-12-07 23:06:19,862][INFO ][discovery.zen ] [app-wc-b1p.sys.company.net] failed to send join request to master [[app-wc-b2p.sys.company.net][FJ3lMq59TKmnSfdPlJNm5g][app-wc-b2p.sys.company.net][inet[/172.28.74.70:9300]]{master=true}], reason [RemoteTransportException[[app-wc-b2p.sys.company.net][inet[/172.28.74.70:9300]][internal:discovery/zen/join]]; nested: ElasticsearchIllegalStateException[Node [[app-wc-b2p.sys.company.net][FJ3lMq59TKmnSfdPlJNm5g][app-wc-b2p.sys.company.net][inet[/172.28.74.70:9300]]{master=true}] not master for join request from [[app-wc-b1p.sys.company.net][wVA3GQwHTW6MM4jHlubQqA][app-wc-b1p.sys.company.net][inet[/172.28.74.7:9300]]{master=true}]]; ], tried [3] times
[2015-12-07 23:06:21,235][INFO ][cluster.service ] [app-wc-b1p.sys.company.net] detected_master [app-wc-b2p.sys.company.net][FJ3lMq59TKmnSfdPlJNm5g][app-wc-b2p.sys.company.net][inet[/172.28.74.70:9300]]{master=true}, reason: zen-disco-receive(from master [[app-wc-b2p.sys.company.net][FJ3lMq59TKmnSfdPlJNm5g][app-wc-b2p.sys.company.net][inet[/172.28.74.70:9300]]{master=true}])
[2015-12-07 23:06:23,631][WARN ][discovery.zen.ping.unicast] [app-wc-b1p.sys.company.net] failed to send ping to [[#zen_unicast_3#][app-wc-b1p.sys.company.net][inet[/172.28.74.67:9300]]]
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[/172.28.74.67:9300]][internal:discovery/zen/unicast_gte_1_4] request_id [2412470] timed out after [3750ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:529)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-12-07 23:08:21,515][WARN ][discovery.zen ] [app-wc-b1p.sys.company.net] received a cluster state from a different master then the current one, rejecting (received [app-wc-b3p.sys.company.net][QNbUU32-T3yYaVBGBf6z9A][app-wc-b3p.sys.company.net][inet[/172.28.74.67:9300]]{master=true}, current [app-wc-b2p.sys.company.net][FJ3lMq59TKmnSfdPlJNm5g][app-wc-b2p.sys.company.net][inet[/172.28.74.70:9300]]{master=true})
[2015-12-07 23:08:21,516][ERROR][discovery.zen ] [app-wc-b1p.sys.company.net] unexpected failure during [zen-disco-receive(from master [[app-wc-b3p.sys.company.net][QNbUU32-T3yYaVBGBf6z9A][app-wc-b3p.sys.company.net][inet[/172.28.74.67:9300]]{master=true}])]
org.elasticsearch.ElasticsearchIllegalStateException: cluster state from a different master then the current one, rejecting (received [app-wc-b3p.sys.company.net][QNbUU32-T3yYaVBGBf6z9A][app-wc-b3p.sys.company.net][inet[/172.28.74.67:9300]]{master=true}, current [app-wc-b2p.sys.company.net][FJ3lMq59TKmnSfdPlJNm5g][app-wc-b2p.sys.company.net][inet[/172.28.74.70:9300]]{master=true})
at org.elasticsearch.discovery.zen.ZenDiscovery.shouldIgnoreOrRejectNewClusterState(ZenDiscovery.java:898)
at org.elasticsearch.discovery.zen.ZenDiscovery$10.execute(ZenDiscovery.java:778)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:374)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-12-07 23:08:24,929][INFO ][cluster.service ] [app-wc-b1p.sys.company.net] added {[app-wc-b3p.sys.company.net][QNbUU32-T3yYaVBGBf6z9A][app-wc-b3p.sys.company.net][inet[/172.28.74.67:9300]]{master=true},}, reason: zen-disco-receive(from master [[app-wc-b2p.sys.company.net][FJ3lMq59TKmnSfdPlJNm5g][app-wc-b2p.sys.company.net][inet[/172.28.74.70:9300]]{master=true}])

GET _cluster/state/routing_table,customs
{
  "cluster_name": "es-prd-b-cluster",
  "routing_table": {
    "indices": {
      "company_prd": {
        "shards": {
          "0": [
            { "state": "STARTED", "primary": false, "node": "R3BcxJx1SfiLhdvGr8A-5Q", "relocating_node": null, "shard": 0, "index": "company_prd" },
            { "state": "STARTED", "primary": true,  "node": "bufngu7lS9igVfz0G1xVsg", "relocating_node": null, "shard": 0, "index": "company_prd" }
          ],
          "1": [
            { "state": "STARTED", "primary": false, "node": "T6DUea7CTmm8VnjEDXCvMg", "relocating_node": null, "shard": 1, "index": "company_prd" },
            { "state": "STARTED", "primary": true,  "node": "o32Diim1T6qQGKrZynjsXw", "relocating_node": null, "shard": 1, "index": "company_prd" }
          ],
          "2": [
            { "state": "STARTED", "primary": false, "node": "o32Diim1T6qQGKrZynjsXw", "relocating_node": null, "shard": 2, "index": "company_prd" },
            { "state": "STARTED", "primary": true,  "node": "pWSBCl57Qqu3b5R5EF1-vg", "relocating_node": null, "shard": 2, "index": "company_prd" }
          ],
          "3": [
            { "state": "STARTED", "primary": false, "node": "bufngu7lS9igVfz0G1xVsg", "relocating_node": null, "shard": 3, "index": "company_prd" },
            { "state": "STARTED", "primary": true,  "node": "pWSBCl57Qqu3b5R5EF1-vg", "relocating_node": null, "shard": 3, "index": "company_prd" }
          ],
          "4": [
            { "state": "STARTED", "primary": true,  "node": "T6DUea7CTmm8VnjEDXCvMg", "relocating_node": null, "shard": 4, "index": "company_prd" },
            { "state": "STARTED", "primary": false, "node": "R3BcxJx1SfiLhdvGr8A-5Q", "relocating_node": null, "shard": 4, "index": "company_prd" }
          ]
        }
      },
      "company": {
        "shards": {
          "0": [
            { "state": "STARTED", "primary": true,  "node": "bufngu7lS9igVfz0G1xVsg", "relocating_node": null, "shard": 0, "index": "company" },
            { "state": "STARTED", "primary": false, "node": "o32Diim1T6qQGKrZynjsXw", "relocating_node": null, "shard": 0, "index": "company" }
          ],
          "1": [
            { "state": "STARTED", "primary": true,  "node": "bufngu7lS9igVfz0G1xVsg", "relocating_node": null, "shard": 1, "index": "company" },
            { "state": "STARTED", "primary": false, "node": "o32Diim1T6qQGKrZynjsXw", "relocating_node": null, "shard": 1, "index": "company" }
          ],
          "2": [
            { "state": "STARTED", "primary": true,  "node": "T6DUea7CTmm8VnjEDXCvMg", "relocating_node": null, "shard": 2, "index": "company" },
            { "state": "STARTED", "primary": false, "node": "R3BcxJx1SfiLhdvGr8A-5Q", "relocating_node": null, "shard": 2, "index": "company" }
          ],
          "3": [
            { "state": "STARTED", "primary": true,  "node": "R3BcxJx1SfiLhdvGr8A-5Q", "relocating_node": null, "shard": 3, "index": "company" },
            { "state": "STARTED", "primary": false, "node": "pWSBCl57Qqu3b5R5EF1-vg", "relocating_node": null, "shard": 3, "index": "company" }
          ],
          "4": [
            { "state": "STARTED", "primary": true,  "node": "T6DUea7CTmm8VnjEDXCvMg", "relocating_node": null, "shard": 4, "index": "company" },
            { "state": "STARTED", "primary": false, "node": "pWSBCl57Qqu3b5R5EF1-vg", "relocating_node": null, "shard": 4, "index": "company" }
          ]
        }
      }
    }
  },
  "routing_nodes": {
    "unassigned": [],
    "nodes": {
      "T6DUea7CTmm8VnjEDXCvMg": [
        { "state": "STARTED", "primary": false, "node": "T6DUea7CTmm8VnjEDXCvMg", "relocating_node": null, "shard": 1, "index": "company_prd" },
        { "state": "STARTED", "primary": true,  "node": "T6DUea7CTmm8VnjEDXCvMg", "relocating_node": null, "shard": 4, "index": "company_prd" },
        { "state": "STARTED", "primary": true,  "node": "T6DUea7CTmm8VnjEDXCvMg", "relocating_node": null, "shard": 2, "index": "company" },
        { "state": "STARTED", "primary": true,  "node": "T6DUea7CTmm8VnjEDXCvMg", "relocating_node": null, "shard": 4, "index": "company" }
      ],
      "R3BcxJx1SfiLhdvGr8A-5Q": [
        { "state": "STARTED", "primary": false, "node": "R3BcxJx1SfiLhdvGr8A-5Q", "relocating_node": null, "shard": 0, "index": "company_prd" },
        { "state": "STARTED", "primary": false, "node": "R3BcxJx1SfiLhdvGr8A-5Q", "relocating_node": null, "shard": 4, "index": "company_prd" },
        { "state": "STARTED", "primary": false, "node": "R3BcxJx1SfiLhdvGr8A-5Q", "relocating_node": null, "shard": 2, "index": "company" },
        { "state": "STARTED", "primary": true,  "node": "R3BcxJx1SfiLhdvGr8A-5Q", "relocating_node": null, "shard": 3, "index": "company" }
      ],
      "bufngu7lS9igVfz0G1xVsg": [
        { "state": "STARTED", "primary": true,  "node": "bufngu7lS9igVfz0G1xVsg", "relocating_node": null, "shard": 0, "index": "company_prd" },
        { "state": "STARTED", "primary": false, "node": "bufngu7lS9igVfz0G1xVsg", "relocating_node": null, "shard": 3, "index": "company_prd" },
        { "state": "STARTED", "primary": true,  "node": "bufngu7lS9igVfz0G1xVsg", "relocating_node": null, "shard": 0, "index": "company" },
        { "state": "STARTED", "primary": true,  "node": "bufngu7lS9igVfz0G1xVsg", "relocating_node": null, "shard": 1, "index": "company" }
      ],
      "o32Diim1T6qQGKrZynjsXw": [
        { "state": "STARTED", "primary": false, "node": "o32Diim1T6qQGKrZynjsXw", "relocating_node": null, "shard": 2, "index": "company_prd" },
        { "state": "STARTED", "primary": true,  "node": "o32Diim1T6qQGKrZynjsXw", "relocating_node": null, "shard": 1, "index": "company_prd" },
        { "state": "STARTED", "primary": false, "node": "o32Diim1T6qQGKrZynjsXw", "relocating_node": null, "shard": 0, "index": "company" },
        { "state": "STARTED", "primary": false, "node": "o32Diim1T6qQGKrZynjsXw", "relocating_node": null, "shard": 1, "index": "company" }
      ],
      "pWSBCl57Qqu3b5R5EF1-vg": [
        { "state": "STARTED", "primary": true,  "node": "pWSBCl57Qqu3b5R5EF1-vg", "relocating_node": null, "shard": 2, "index": "company_prd" },
        { "state": "STARTED", "primary": true,  "node": "pWSBCl57Qqu3b5R5EF1-vg", "relocating_node": null, "shard": 3, "index": "company_prd" },
        { "state": "STARTED", "primary": false, "node": "pWSBCl57Qqu3b5R5EF1-vg", "relocating_node": null, "shard": 3, "index": "company" },
        { "state": "STARTED", "primary": false, "node": "pWSBCl57Qqu3b5R5EF1-vg", "relocating_node": null, "shard": 4, "index": "company" }
      ]
    }
  },
  "allocations": []
}

@imotov
Contributor

imotov commented Dec 17, 2015

@binishabraham What happened to the indexname index? Did you delete it?

@binishabraham
Author

@imotov That was a mistake in my redaction: indexname (previous post) = company (last post).

@imotov
Contributor

imotov commented Dec 18, 2015

@binishabraham could you try closing and opening the company index to see if it gets the recovery process unstuck?
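For reference, a sketch of that close/open cycle, assuming the default localhost:9200 endpoint:

curl -XPOST "localhost:9200/company/_close"
curl -XPOST "localhost:9200/company/_open"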

@imotov
Contributor

imotov commented Jan 4, 2016

@binishabraham any updates?

@binishabraham
Author

@imotov I tried, but it didn't work out, so I deleted the 'indexname' index and restored it using the snapshot from a similar ES cluster in another farm.

@clintongormley
Contributor

No further info. Closing.
