Remove custom create index runner
With this commit we remove the custom runner `createindex`, which creates
index templates, matching indices, and aliases, and replace it with
runners that are available out of the box in Rally. Furthermore, we
clean up some duplication and remove the challenge
`append-no-conflicts`, which acted as a dummy default challenge but
serves no other purpose.
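Concretely, the custom `createindex` runner gives way to Rally's built-in `delete-index`, `create-index`, and index-template operations. The following fragment, taken in essence from the diff in this commit, is the new-style schedule entry that creates the first index together with its write alias:

```json
{
  "operation": {
    "name": "create_elasticlogs_q_write",
    "operation-type": "create-index",
    "index": "elasticlogs_q-000001",
    "body": {
      "aliases": {
        "elasticlogs_q_write": {}
      }
    }
  }
}
```

Because `create-index` ships with Rally, the track no longer needs to register or maintain its own runner code for this step.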

Relates elastic#36
danielmitterdorfer authored Aug 13, 2019
1 parent 4557e72 commit c08d724
Showing 19 changed files with 266 additions and 782 deletions.
32 changes: 10 additions & 22 deletions README.md
@@ -29,19 +29,7 @@ Note: In general, track parameters are only defined for a subset of the challenges.

## Available Challenges

### 1) append-no-conflicts

This is the default challenge, which performs bulk indexing at maximum throughput against a single index for a period of 20 minutes.

The table below shows the track parameters that can be adjusted along with default values:

| Parameter | Explanation | Type | Default Value |
| --------- | ----------- | ---- | ------------- |
| `number_of_replicas` | Number of index replicas | `int` | `0` |
| `shard_count` | Number of primary shards | `int` | `2` |
| `bulk_indexing_clients` | Number of bulk indexing clients/connections | `int` | `8` |

### 2) bulk-size-evaluation
### bulk-size-evaluation

This challenge performs bulk-indexing against a single index with varying bulk request sizes, ranging from 125 events/request to 50000 events/request.

@@ -53,7 +41,7 @@ The table below shows the track parameters that can be adjusted along with default values:
| `shard_count` | Number of primary shards | `int` | `2` |
| `bulk_indexing_clients` | Number of bulk indexing clients/connections | `int` | `16` |

### 3) shard-sizing
### shard-sizing

This challenge indexes 2 million events at a time into an index consisting of a single shard, repeating this 25 times. After each batch of 2 million events has been inserted, 4 different Kibana dashboard configurations are benchmarked against the index; no indexing takes place while they run. Two different dashboards are simulated, aggregating across 50% and 90% of the data in the shard.

@@ -69,7 +57,7 @@ The table below shows the track parameters that can be adjusted along with default values:
| `shard_sizing_iterations` | Number of indexing/querying iterations to run | `int` | `25` |
| `shard_sizing_queries` | Number of queries of each type to run for each iteration | `int` | `20` |

### 4) elasticlogs-1bn-load
### elasticlogs-1bn-load

This challenge indexes 1 billion events into a number of indices of 2 primary shards each, resulting in around 200GB of indices being generated on disk. This can vary depending on the environment. It can be used to give an idea of how maximum indexing performance behaves over an extended period of time.

@@ -84,11 +72,11 @@ The table below shows the track parameters that can be adjusted along with default values:
| `translog_sync` | If value is not `request`, translog will be configured to use `async` mode | `string` | `request` |
| `rollover_enabled` | Enables the automatic rollover of indices after 100 million entries or 1 day. | `bool` | `true` |

### 5) elasticlogs-querying
### elasticlogs-querying

This challenge runs mixed Kibana queries against the index created in the **elasticlogs-1bn-load** track. No concurrent indexing is performed.

### 6) combined-indexing-and-querying
### combined-indexing-and-querying

This challenge assumes that the *elasticlogs-1bn-load* track has been executed as it simulates querying against these indices. It shows how indexing and querying through simulated Kibana dashboards can be combined to provide a more realistic benchmark.

@@ -107,7 +95,7 @@ The table below shows the track parameters that can be adjusted along with default values:
| `rate_limit_step` | Number of requests per second to use as a rate_limit_step. `2` indicates rate limiting will increase in steps of 2k EPS | `int` | `2` |
| `rate_limit_max` | Maximum number of requests per second to use for rate-limiting. `32` indicates a top target indexing rate of 32k EPS | `int` | `32` |

### 7) elasticlogs-continuous-index-and-query
### elasticlogs-continuous-index-and-query

This challenge is suitable for long-term execution and runs in two phases. Both phases (`p1`, `p2`) index documents containing auto-generated events; however, `p1` indexes events at the maximum possible speed, whereas `p2` throttles indexing to a specified rate and in parallel executes four queries simulating Kibana dashboards. The created index is rolled over after the configured maximum size, and the maximum number of rolled-over indices is also configurable.

@@ -167,7 +155,7 @@ $ cat params-file.json
}
```
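The listing of `params-file.json` above is truncated in this view. As a sketch of what such a file could contain — the parameter names come from the tables in this README, the values are purely illustrative:

```json
{
  "number_of_replicas": 0,
  "shard_count": 2,
  "bulk_indexing_clients": 16
}
```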

### 8) large-shard-sizing
### large-shard-sizing

This challenge examines the performance and memory usage of large shards. It indexes data into a single shard index ~25GB at a time and runs up to a shard size of ~300GB. After every 25GB that has been indexed, select index statistics are recorded and a number of simulated Kibana dashboards are run against the index to show how query performance varies with shard size.

@@ -184,7 +172,7 @@ The table below shows the track parameters that can be adjusted along with default values:
| `bulk_indexing_clients` | Number of bulk indexing clients/connections | `int` | `32` |
| `query_iterations` | Number of times each dashboard is simulated at each level | `int` | `10` |

### 9) large-shard-id-type-evaluation
### large-shard-id-type-evaluation

This challenge examines the storage and heap usage implications of a wide variety of document ID types. It indexes data into a set of ~25GB single-shard indices, one for each type of document ID (`auto`, `uuid`, `epoch_uuid`, `sha1`, `sha256`, `sha384`, and `sha512`). For each index a refresh is then run before select index statistics are recorded.

@@ -197,7 +185,7 @@ The table below shows the track parameters that can be adjusted along with default values:
| --------- | ----------- | ---- | ------------- |
| `bulk_indexing_clients` | Number of bulk indexing clients/connections | `int` | `32` |

### 10) document_id_evaluation
### document_id_evaluation

This challenge examines the indexing throughput as a function of shard size as well as the resulting storage requirements for a set of different types of document IDs. For each document ID type, it indexes 200 million documents into a single-shard index, which should be about 40GB in size. Once all data has been indexed, index statistics are recorded before and after a forcemerge down to a single segment.
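The forcemerge step mentioned above maps onto Rally's built-in `force-merge` operation. A minimal sketch of such a schedule entry, assuming the stock operation type and its `max-num-segments` parameter:

```json
{
  "name": "forcemerge-single-segment",
  "operation": {
    "operation-type": "force-merge",
    "max-num-segments": 1
  }
}
```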

@@ -357,7 +345,7 @@ License

This software is licensed under the Apache License, version 2 ("ALv2"), quoted below.

Copyright 2015-2018 Elasticsearch <https://www.elastic.co>
Copyright 2015-2019 Elasticsearch <https://www.elastic.co>

Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
20 changes: 16 additions & 4 deletions eventdata/challenges/combined-indexing-and-querying.json
@@ -17,6 +17,9 @@
{
"operation": "deleteindex_elasticlogs_i-*"
},
{
"operation": "delete-index-template"
},
{
"operation": "fieldstats_elasticlogs_q-*",
"warmup-iterations": {{ p_client_count }},
@@ -35,10 +38,19 @@
}
},
{
"operation": "create_elasticlogs_i_write",
"clients": 1,
"warmup-iterations": 0,
"iterations": 1
"operation": "create-index-template"
},
{
"operation": {
"name": "create_elasticlogs_i_write",
"operation-type": "create-index",
"index": "elasticlogs_i-000001",
"body": {
"aliases" : {
"elasticlogs_i_write" : {}
}
}
}
},
{# Add some data to index so it does not start empty #}
{
101 changes: 32 additions & 69 deletions eventdata/challenges/document_id_benchmark.json
@@ -12,33 +12,30 @@
},
"schedule": [
{
"name": "deleteindex_elasticlogs-warmup",
"name": "delete-index-elasticlogs-warmup",
"operation": {
"operation-type": "delete-index",
"index": "elasticlogs-warmup"
},
"include-in-reporting": false
}
},
{
"operation": "delete-index-template"
},
{
"name": "create_elasticlogs-warmup",
"operation": {
"operation-type": "createindex",
"index_name": "elasticlogs-warmup",
"index_template_body": {
"template": "elasticlogs-warmup",
"settings": {
"index.refresh_interval": "5s",
"index.codec": "best_compression",
"operation-type": "create-index-template",
"settings": {
"index.translog.retention.size": "10mb",
"index.number_of_replicas": 0,
"index.number_of_shards": 1
},
"mappings": "mappings.json",
"aliases": {}
},
"index_template_name": "elasticlogs-warmup"
},
"include-in-reporting": false
"index.number_of_shards": {{ shard_count | default(1) }}
}
}
},
{
"name": "create-index-elasticlogs-warmup",
"operation": {
"operation-type": "create-index",
"index": "elasticlogs-warmup"
}
},
{
"name": "index-append-1000-elasticlogs-warmup",
@@ -55,12 +52,11 @@
"include-in-reporting": false
},
{
"name": "deleteindex_elasticlogs-warmup-final",
"name": "delete-index-elasticlogs-warmup-final",
"operation": {
"operation-type": "delete-index",
"index": "elasticlogs-warmup"
},
"include-in-reporting": false
}
},
{% for id in [{ 'type': 'auto', 'desc': 'auto' },
{ 'type': 'uuid', 'desc': 'uuid' },
@@ -69,33 +65,18 @@
{ 'type': 'epoch_uuid', 'desc': 'epoch_uuid' },
{ 'type': 'epoch_md5', 'desc': 'epoch_md5'} ] %}
{
"name": "deleteindex_elasticlogs-{{ id['desc'] }}",
"name": "delete-index-elasticlogs-{{ id['desc'] }}",
"operation": {
"operation-type": "delete-index",
"index": "elasticlogs-{{ id['desc'] }}"
},
"include-in-reporting": false
}
},
{
"name": "create_elasticlogs-{{ id['desc'] }}",
"name": "create-index-elasticlogs-{{ id['desc'] }}",
"operation": {
"operation-type": "createindex",
"index_name": "elasticlogs-{{ id['desc'] }}",
"index_template_body": {
"template": "elasticlogs-{{ id['desc'] }}",
"settings": {
"index.refresh_interval": "5s",
"index.codec": "best_compression",
"index.translog.retention.size": "10mb",
"index.number_of_replicas": 0,
"index.number_of_shards": 1
},
"mappings": "mappings.json",
"aliases": {}
},
"index_template_name": "elasticlogs-{{ id['desc'] }}"
},
"include-in-reporting": false
"operation-type": "create-index",
"index": "elasticlogs-{{ id['desc'] }}"
}
},
{
"name": "index-append-1000-elasticlogs-{{ id['desc'] }}",
@@ -149,33 +130,18 @@
{% for id in [{ 'type': 'epoch_md5', 'desc': 'epoch_md5-10pct_60s', 'delay': 60 },
{ 'type': 'epoch_md5', 'desc': 'epoch_md5-10pct_300s', 'delay': 300 }] %}
{
"name": "deleteindex_elasticlogs-{{ id['desc'] }}",
"name": "delete-index-elasticlogs-{{ id['desc'] }}",
"operation": {
"operation-type": "delete-index",
"index": "elasticlogs-{{ id['desc'] }}"
},
"include-in-reporting": false
}
},
{
"name": "create_elasticlogs-{{ id['desc'] }}",
"name": "create-index-elasticlogs-{{ id['desc'] }}",
"operation": {
"operation-type": "createindex",
"index_name": "elasticlogs-{{ id['desc'] }}",
"index_template_body": {
"template": "elasticlogs-{{ id['desc'] }}",
"settings": {
"index.refresh_interval": "5s",
"index.codec": "best_compression",
"index.translog.retention.size": "10mb",
"index.number_of_replicas": 0,
"index.number_of_shards": 1
},
"mappings": "mappings.json",
"aliases": {}
},
"index_template_name": "elasticlogs-{{ id['desc'] }}"
},
"include-in-reporting": false
"operation-type": "create-index",
"index": "elasticlogs-{{ id['desc'] }}"
}
},
{
"name": "index-append-1000-elasticlogs-{{ id['desc'] }}",
@@ -230,10 +196,7 @@
{% endfor %}
{
"name": "refresh-final",
"operation": "refresh",
"iterations": 1,
"clients": 1,
"include-in-reporting": false
"operation": "refresh"
}
]
}
30 changes: 18 additions & 12 deletions eventdata/challenges/elasticlogs-1bn-load.json
@@ -1,28 +1,35 @@
{% set p_bulk_indexing_clients = (bulk_indexing_clients | default(20)) %}
{% set p_iterations = bulk_indexing_iterations | default(1000000) %}
{% set p_iterations_per_client = (p_iterations / p_bulk_indexing_clients) | int %}
{% set p_disk_type = disk_type | default('ssd') | lower %}
{% set p_translog_sync = translog_sync | default('request') | lower %}

{
"name": "elasticlogs-1bn-load",
"description": "Indexes 1bn (default) documents into elasticlogs_q-* indices. IDs are autogenerated by Elasticsearch, meaning there are no conflicts.",
"default": true,
"meta": {
"client_count": {{ p_bulk_indexing_clients }},
"benchmark_type": "indexing"
},
"schedule": [
{
"operation": "deleteindex_elasticlogs_q-*",
"clients": 1,
"warmup-iterations": 0,
"iterations": 1
"operation": "deleteindex_elasticlogs_q-*"
},
{
"operation": "create_elasticlogs_q_write",
"clients": 1,
"warmup-iterations": 0,
"iterations": 1
"operation": "delete-index-template"
},
{
"operation": "create-index-template"
},
{
"operation": {
"operation-type": "create-index",
"index": "elasticlogs_q-000001",
"body": {
"aliases" : {
"elasticlogs_q_write" : {}
}
}
}
},
{
"parallel": {
@@ -47,8 +54,7 @@
}
},
{
"operation": "node_storage",
"iterations": 1
"operation": "node_storage"
}
]
}
25 changes: 17 additions & 8 deletions eventdata/challenges/elasticlogs-continuous-index-and-query.json
@@ -14,16 +14,25 @@
},
"schedule": [
{
"operation": "deleteindex_elasticlogs_q-*",
"clients": 1,
"warmup-iterations": 0,
"iterations": 1
"operation": "deleteindex_elasticlogs_q-*"
},
{
"operation": "create_elasticlogs_q_write",
"clients": 1,
"warmup-iterations": 0,
"iterations": 1
"operation": "delete-index-template"
},
{
"operation": "create-index-template"
},
{
"operation": {
"name": "create_elasticlogs_q_write",
"operation-type": "create-index",
"index": "elasticlogs_q-000001",
"body": {
"aliases" : {
"elasticlogs_q_write" : {}
}
}
}
},
{
"parallel": {
