---
title: Highly Available Elasticsearch
description: Presentation
marp: true
paginate: true
theme: gaia
backgroundColor: white
---
# Highly Available Elasticsearch

10.01.2022

---
<style scoped>p, ul, li, strong { font-size: 40px; }</style>

- Elasticsearch from 6.2.4 to 7.2
- Various ES clusters, such as:
  - One-node clusters
  - Two-node clusters
  - Three-node clusters
- Using Ceph RBD as massive storage for data roles
- Syncing the metadata of objects in S3 buckets

---
<style scoped>p, ul, li, strong { font-size: 38px; }</style>

Things I did for the first time in this assignment:
- Configuring roles in a single nodeSet (sketched below)
- Setting `vm.max_map_count` with an initContainer instead of `node.store.allow_mmap: false`
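A minimal sketch of the single-nodeSet layout inside an ECK `Elasticsearch` spec (the nodeSet name, count, and role list are illustrative; `node.roles` requires Elasticsearch 7.9+, while older 7.x releases use the `node.master` / `node.data` / `node.ingest` booleans instead):

```yaml
spec:
  nodeSets:
    - name: default                       # one nodeSet, so every node is identical
      count: 3
      config:
        node.roles: ["master", "data", "ingest"]
```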

What is different on newer Elasticsearch versions:
- Most settings are now auto-configured with optimized defaults
- The license

---
<style scoped>p, ul, li, strong { font-size: 40px; }</style>

- ☁️ Works on Kubernetes
- 🌀 High availability
- 🎭 Identical roles
- ✍️ README-driven
- 🧩 Kustomize-generated resources
- 🪄 Easily applicable
- 💪 Resilience
- 🎭 Roles
- 💎 Sharding
- 💾 Storage
- 🧮 Memory & JVM Size
- 🥽 Virtual Memory
- 🛠️ Applying Custom Configuration
- 📈 Benchmark

---
<style scoped>p, ul, li, strong { font-size: 32px; }</style>

💪 Resilience
- A cluster is resilient if it has:
  - green cluster health,
  - at least two data nodes,
  - at least one replica for each shard,
  - at least three master-eligible nodes,
  - a load balancer in front
- Taking regular snapshots with SLM (Snapshot Lifecycle Management)
- Design: three identical nodes to stay resilient to a single node failure

---
<style scoped>p, ul, li, strong { font-size: 38px; }</style>

🎭 Roles
- master
- data
- ingest
- ml

💎 Sharding
Aim for:
- Shards between 10GB and 50GB
- At most 20 shards per GB of heap memory (e.g., a node with an 8GB heap should hold no more than 160 shards)
Avoid:
- Unnecessary mapped fields: use explicit mappings rather than relying on dynamic mapping

---
<style scoped>p, ul, li, strong { font-size: 32px; }</style>
💾 Storage
- Network-attached PersistentVolumes
- Local PersistentVolumes
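A sketch of requesting network-attached storage via a `volumeClaimTemplate` in the nodeSet (the storage class and size are assumptions; a local-volume storage class would be used for local PersistentVolumes instead):

```yaml
nodeSets:
  - name: default
    count: 3
    volumeClaimTemplates:
      - metadata:
          name: elasticsearch-data        # the claim name ECK mounts as the data path
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Gi              # assumed size
          storageClassName: gp2           # assumed EBS-backed class
```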

🧮 Memory & JVM Size
Xms and Xmx should be (see the sketch below):
- Equal to each other
- No more than 50% of the total available RAM
- Less than 26GB
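A sketch of how these rules translate into the nodeSet's podTemplate (sizes are illustrative; recent ECK/Elasticsearch versions can derive the heap from the container memory limit automatically):

```yaml
podTemplate:
  spec:
    containers:
      - name: elasticsearch
        env:
          - name: ES_JAVA_OPTS
            value: "-Xms4g -Xmx4g"        # Xms equals Xmx, well below 26GB
        resources:
          requests:
            memory: 8Gi                   # heap is ~50% of the container memory
          limits:
            memory: 8Gi
```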

---
<style scoped>p, ul, li, strong { font-size: 30px; }</style>

🥽 Virtual Memory
- Elasticsearch uses memory mapping (mmap) by default.
- `vm.max_map_count` on the host should be set to at least 262144.
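One way to set it on Kubernetes is a privileged initContainer in the nodeSet's podTemplate, as in this sketch:

```yaml
podTemplate:
  spec:
    initContainers:
      - name: sysctl
        securityContext:
          privileged: true
          runAsUser: 0
        command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
```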
🛠️ Applying Custom Configuration
- Create a custom image
- Use init containers
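For instance, an initContainer can install a plugin at pod startup instead of baking it into a custom image (the plugin name is only an example):

```yaml
podTemplate:
  spec:
    initContainers:
      - name: install-plugins
        command: ['sh', '-c', 'bin/elasticsearch-plugin install --batch repository-s3']
```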
📈 Benchmark
- Rally can be used to size the cluster correctly
<style> footer { text-align: center; } </style>

---
<style scoped>p, ul, li, strong { font-size: 38px; }</style>
- ECK with vanilla manifest files
- Elasticsearch and Kibana with kustomize-generated manifests
- Configured using initContainers
- LoadBalancer service (sketched below)
- Dynamic mapping option
- AWSElasticBlockStore
- Master & Data roles
- SLM Policy
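The LoadBalancer part, for example, is a small addition to the Elasticsearch (or Kibana) spec; a sketch:

```yaml
spec:
  http:
    service:
      spec:
        type: LoadBalancer                # expose the HTTP endpoint through a cloud load balancer
```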

---
<style scoped>p, ul, li, strong { font-size: 38px; }</style>
- Total shards per node
- JVM Heap Size Settings
- Update strategy (sketched below)
- PodDisruptionBudget configuration (sketched below)
- Node scheduling
- Readiness probe configuration
- PreStop hook configuration
- Security context configuration
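The update strategy and PodDisruptionBudget, for example, map to fields on the Elasticsearch resource; a sketch with illustrative values:

```yaml
spec:
  updateStrategy:
    changeBudget:
      maxSurge: 1                         # at most one extra pod during rolling changes
      maxUnavailable: 1
  podDisruptionBudget:
    spec:
      minAvailable: 2                     # keep at least two of the three nodes running
      selector:
        matchLabels:
          elasticsearch.k8s.elastic.co/cluster-name: elasticsearch   # assumed cluster name
```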

---
<style scoped>p, ul, li, strong { font-size: 40px; }</style>

- Install the ECK Custom Resource Definitions
- Install the ECK Operator
- Monitor the operator logs
- Generate the Elasticsearch & Kibana resources
- Deploy Elasticsearch & Kibana
- Verify everything is ready to use
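The generate/deploy steps could be driven by a kustomization along these lines (file names and namespace are assumptions):

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: elastic                        # assumed target namespace
resources:
  - elasticsearch.yaml
  - kibana.yaml
```

`kubectl apply -k .` would then render and apply both resources in a single step.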