=== Shard Allocation Awareness

When running nodes on multiple VMs on the same physical server, on multiple
racks, or across multiple zones or domains, it is more likely that two nodes on
the same physical server, in the same rack, or in the same zone or domain will
crash at the same time, rather than two unrelated nodes crashing
simultaneously.

If Elasticsearch knows which nodes are on the same physical server, in the same
rack, or in the same zone, it can distribute the primary shard and its replica
shards so as to minimise the risk of losing all shard copies in a single
failure.

When we start a node, we can tell it which rack it is in by assigning it an
arbitrary metadata attribute called `rack_id` -- we could use any attribute
name. For example:

[source,sh]
----------------------
./bin/elasticsearch -Enode.attr.rack_id=rack_one <1>
----------------------
<1> This setting could also be specified in the `elasticsearch.yml` config file.

Now, we need to set up _shard allocation awareness_ by telling Elasticsearch
which attributes to use. This can be configured in the `elasticsearch.yml`
file on *all* master-eligible nodes, or it can be set (and changed) with the
<<cluster-update-settings,cluster-update-settings>> API.
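
For instance, the attribute could be applied dynamically with a request along
these lines (a sketch using the cluster settings API; whether to use
`persistent` or `transient` is your choice):

[source,console]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "rack_id"
  }
}
--------------------------------------------------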

For our example, we'll set the value in the config file:

[source,yaml]
--------------------------------------------------------
cluster.routing.allocation.awareness.attributes: rack_id
--------------------------------------------------------

With this config in place, let's say we start two nodes with
`node.attr.rack_id` set to `rack_one`, and we create an index with 5 primary
shards and 1 replica of each primary. All primaries and replicas are
allocated across the two nodes.
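
For illustration, the index used in this walkthrough could be created as
follows (a sketch; the index name `test` is just a placeholder):

[source,console]
--------------------------------------------------
PUT /test
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}
--------------------------------------------------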

Now, if we start two more nodes with `node.attr.rack_id` set to `rack_two`,
Elasticsearch will move shards across to the new nodes, ensuring (if possible)
that no two copies of the same shard will be in the same rack. However, if
`rack_two` were to fail, taking down both of its nodes, Elasticsearch will
still allocate the lost shard copies to nodes in `rack_one`.
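
If you want to check where the copies have ended up, the cat shards API shows
which node holds each copy; for example (again, `test` is just a placeholder
index name):

[source,console]
--------------------------------------------------
GET _cat/shards/test?v&h=index,shard,prirep,state,node
--------------------------------------------------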

.Prefer local shards
*********************************************

When executing search or GET requests, with shard awareness enabled,
Elasticsearch will prefer using local shards -- shards in the same awareness
group -- to execute the request. This is usually faster than crossing rack or
zone boundaries.

*********************************************

Multiple awareness attributes can be specified, in which case each attribute
is considered separately when deciding where to allocate the shards.

[source,yaml]
-------------------------------------------------------------
cluster.routing.allocation.awareness.attributes: rack_id,zone
-------------------------------------------------------------

NOTE: When using awareness attributes, shards will not be allocated to nodes
that don't have values set for those attributes.

NOTE: The number of copies of a shard (primary and replicas) that can be
allocated to a specific group of nodes with the same awareness attribute value
is determined by the number of attribute values. When the number of nodes in
the groups is unbalanced and there are many replicas, replica shards may be
left unassigned.

[float]
[[forced-awareness]]
=== Forced Awareness

Imagine that you have two zones and enough hardware across the two zones to
host all of your primary and replica shards. But perhaps the hardware in a
single zone, while sufficient to host half the shards, would be unable to host
*ALL* the shards.

With ordinary awareness, if one zone lost contact with the other zone,
Elasticsearch would assign all of the missing replica shards to a single zone.
But in this example, this sudden extra load would cause the hardware of the
remaining zone to be overloaded.

Forced awareness solves this problem by *NEVER* allowing copies of the same
shard to be allocated to the same zone.

For example, let's say we have an awareness attribute called `zone`, and we
know we are going to have two zones, `zone1` and `zone2`. Here is how we can
force awareness on a node:

[source,yaml]
-------------------------------------------------------------------
cluster.routing.allocation.awareness.force.zone.values: zone1,zone2 <1>
cluster.routing.allocation.awareness.attributes: zone
-------------------------------------------------------------------
<1> We must list all possible values that the `zone` attribute can have.

Now, if we start 2 nodes with `node.attr.zone` set to `zone1` and create an
index with 5 shards and 1 replica, the index will be created, but only the 5
primary shards will be allocated (with no replicas). Only when we start more
nodes with `node.attr.zone` set to `zone2` will the replicas be allocated.
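
In that intermediate state, the cluster allocation explain API can be asked why
a replica is still unassigned; a sketch of such a request (again using the
placeholder index name `test`) might be:

[source,console]
--------------------------------------------------
GET _cluster/allocation/explain
{
  "index": "test",
  "shard": 0,
  "primary": false
}
--------------------------------------------------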

The `cluster.routing.allocation.awareness.*` settings can all be updated
dynamically on a live cluster with the
<<cluster-update-settings,cluster-update-settings>> API.
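
For example, forced awareness on the `zone` attribute could be enabled on a
running cluster with a request along these lines (a sketch; choose `persistent`
or `transient` as appropriate for your cluster):

[source,console]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone",
    "cluster.routing.allocation.awareness.force.zone.values": "zone1,zone2"
  }
}
--------------------------------------------------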