
Conversation


@asimmahmood1 asimmahmood1 commented Nov 19, 2025

Description

This builds on top of #19573.

Compared to the date histogram skiplist, this change needs to hook into dynamic rounding during collection. There are two variables to keep track of:

  • bucketOrds - seen rounded dates
  • preparedRounding - starts at lowest interval: MINUTES and goes up

When a new ord is created, the increaseRoundingIfNeeded function is called to determine whether a new preparedRounding needs to kick in (e.g. from HOURS to DAYS), which may also merge dates in bucketOrds. Thus, both need to be supplied via a lambda.

In the future, the skiplist can be enhanced to keep track of multiple owningBucketOrds; for now it only works when the auto date histogram is the root aggregation (parent == null), or within a range filter rewrite context that guarantees a new auto date histogram is created per range.
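The interplay between bucketOrds and preparedRounding described above can be sketched as a small, self-contained simulation (Python for brevity; the interval table and function names are illustrative, not the actual OpenSearch Rounding/LongKeyedBucketOrds APIs):

```python
from collections import Counter

# Illustrative rounding granularities in ms, finest first (the real
# aggregator walks a fixed table of Rounding instances: MINUTES, HOURS, ...).
INTERVALS = [60_000, 3_600_000, 86_400_000]  # minute, hour, day

def collect(timestamps, max_buckets):
    """Simulate collection: round each value with the current rounding;
    when the number of distinct buckets exceeds max_buckets, move to a
    coarser rounding and merge the existing buckets, mirroring what
    increaseRoundingIfNeeded does to bucketOrds."""
    idx = 0  # index into INTERVALS, i.e. the current preparedRounding
    buckets = Counter()
    for ts in timestamps:
        buckets[ts - ts % INTERVALS[idx]] += 1
        while len(buckets) > max_buckets and idx + 1 < len(INTERVALS):
            idx += 1
            merged = Counter()
            for key, count in buckets.items():
                merged[key - key % INTERVALS[idx]] += count
            buckets = merged
    return idx, dict(buckets)
```

Feeding 200 consecutive minutes with max_buckets=3 coarsens from minutes to hours and then to days, leaving a single day bucket holding all 200 counts.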

Related Issues

Resolves #19827
Part of #18882
Also #19384

Check List

  • Functionality includes testing.
  • [n/a] API changes companion pull request created, if applicable.
  • [TODO] Public documentation issue/PR created, if applicable.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

Summary by CodeRabbit

  • New Features

    • Skiplist optimization for auto_date_histogram aggregation to speed up bucketing.
    • Public API to evaluate skiplist eligibility at runtime.
  • Improvements

    • Dynamic rounding support in histogram aggregations for robust handling of changing intervals.
    • Improved performance and additional debug metrics when the skiplist path is used.
  • Tests

    • Added tests validating skiplist behavior, rounding-change scenarios, and sub-aggregation correctness.


@github-actions (Contributor)

❌ Gradle check result for 27f8248: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@asimmahmood1 (Contributor Author)

Functionally correct but not showing improvement

diff 17_big5_auto_date_filter_baseline.json 17_big5_auto_date_filter_candidate.json
2c2
<   "took": 114,
---
>   "took": 214,

Query

curl -X POST "http://localhost:9200/big5/_search" \
  -H "Content-Type: application/json" \
  -d '{
      "size": 0,
      "query": {
        "bool": {
          "must": [
            {
              "term": {
                "process.name": "systemd"
              }
            }
          ]
        }
      },
      "aggs": {
        "by_hour": {
          "auto_date_histogram": {
            "field": "@timestamp",
            "buckets": 3
          }
        }
      }
    }'

Result

{
  "took": 127,
  "timed_out": false,
  "terminated_early": true,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 10000,
      "relation": "gte"
    },
    "max_score": null,
    "hits": []
  },
  "aggregations": {
    "by_hour": {
      "buckets": [
        {
          "key_as_string": "2023-01-01T00:00:00.000Z",
          "key": 1672531200000,
          "doc_count": 2488712
        },
        {
          "key_as_string": "2023-01-08T00:00:00.000Z",
          "key": 1673136000000,
          "doc_count": 824998
        }
      ],
      "interval": "7d"
    }
  }
}
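For context on the response above, the aggregator reports the interval it settled on ("7d" here) for the requested bucket target. A rough, hypothetical sketch of how a target bucket count maps to an interval (the real aggregator grows the rounding dynamically during collection, as discussed in this PR; the candidate table below is illustrative only):

```python
# Hypothetical candidate intervals in ms; the real table spans seconds
# through years and also multiplies by inner intervals.
CANDIDATES = {"1m": 60_000, "1h": 3_600_000, "1d": 86_400_000, "7d": 604_800_000}

def pick_interval(min_ts, max_ts, target_buckets):
    """Return the smallest candidate interval that covers [min_ts, max_ts]
    with at most target_buckets buckets."""
    for name, width in CANDIDATES.items():
        if (max_ts // width) - (min_ts // width) + 1 <= target_buckets:
            return name
    return "7d"  # coarsest candidate as fallback
```

With a roughly two-week data range and buckets: 3, only the weekly candidate fits, matching the "interval": "7d" in the response.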

@jainankitk (Contributor)

@asimmahmood1 - Have you verified that the regression is only with AutoDateHistogramAggregator and not when using DateHistogramAggregator for same query?

@asimmahmood1 (Contributor Author) commented Nov 24, 2025

So I figured out why performance was not up to par. The short answer is that when auto date histogram moves to larger intervals, we need to keep track of not just preparedRounding but also bucketOrds. If preparedRounding has changed since last time, we need to restart the skiplist logic. Otherwise, we'll collect too many docs, and although the end result doesn't change (i.e. the unit test passes), performance is too low.

Auto date histogram has two modes: FromSingle and FromMany. FromSingle is typically used in a top-level aggregation, so it's similar to date histogram where parent is null. FromMany is used e.g. in big5's range-auto-date-histo, which would normally handle interleaving owningBucketOrds. In the special case where filter rewrite logic is used, we can safely assume that only one owningBucketOrd will be collected per leaf collector, so we can use the skiplist histogram.
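The restart described above can be modeled with a toy guard (Python sketch; the class and attribute names are illustrative, and get_rounding_index stands in for the lambda supplying the aggregator's current preparedRounding):

```python
class SkiplistCollector:
    """Toy model of the restart guard: the skip-list fast path is only
    valid for the rounding it was built against, so it restarts whenever
    the aggregator's preparedRounding has moved on."""

    def __init__(self, get_rounding_index):
        self.get_rounding_index = get_rounding_index
        self.built_for = None   # rounding the skip targets were built for
        self.rebuilds = 0

    def collect(self, doc):
        current = self.get_rounding_index()
        if current != self.built_for:
            # Rounding changed (e.g. HOURS -> DAYS): stale skip targets
            # would make us visit too many docs, so restart the skip logic.
            self.built_for = current
            self.rebuilds += 1
        # ... advance the skip list for `doc` here ...
```

Without the guard, a collector built against the old rounding keeps its fine-grained skip targets and visits far more docs than necessary, which matches the regression observed before this fix.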

See validation below.

Note: will update this PR after #19573 is merged.

@asimmahmood1 (Contributor Author)

range auto date: took dropped from 1335 ms to 139 ms (an 89% improvement)

11_range_auto_date_histo.sh
#!/bin/bash

curl -XGET 'http://localhost:9200/big5/_search' \
-H 'Content-Type: application/json' \
-d '{
  "size": 0, "profile": false,
  "aggs": {
    "tmax": {
      "range": {
        "field": "metrics.size",
        "ranges": [
          {
            "to": -10
          },
          {
            "from": -10,
            "to": 10
          },
          {
            "from": 10,
            "to": 100
          },
          {
            "from": 100,
            "to": 1000
          },
          {
            "from": 1000,
            "to": 2000
          },
          {
            "from": 2000
          }
        ]
      },
      "aggs": {
        "date": {
          "auto_date_histogram": {
            "field": "@timestamp",
            "buckets": 20
          }
        }
      }
    }
  }
}'

diff 11_range_auto_date_histo_candidate.json 11_range_auto_date_histo_baseline.json
2c2
<   "took": 139,
---
>   "took": 1335,

@asimmahmood1 (Contributor Author)

range-auto-date-with-metrics: took is 22% lower (from 3781 ms to 2920 ms)

This is similar to date-with-metrics, since the time is bounded by the tavg stat sub-aggregation.

[ec2-user@ip-172-31-61-197 ~]$ diff 11_range_auto_date_histo_with_metrics_candidate.json 11_range_auto_date_histo_with_metrics_baseline.json
2c2
<   "took": 2920,
---
>   "took": 3781,

@github-actions (Contributor)

❌ Gradle check result for 4d7f22d: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-project-automation github-project-automation bot moved this from Todo to Done in Performance Roadmap Nov 24, 2025
@asimmahmood1 asimmahmood1 reopened this Nov 24, 2025
@github-project-automation github-project-automation bot moved this from Done to In Progress in Performance Roadmap Nov 24, 2025
@asimmahmood1 (Contributor Author)

{"run-benchmark-test": "id_3"}

@asimmahmood1 (Contributor Author)

{"run-benchmark-test": "id_11"}

@github-actions (Contributor)

❌ Gradle check result for 4d7f22d:

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-actions (Contributor)

The Jenkins job url is https://build.ci.opensearch.org/job/benchmark-pull-request/5174/ . Final results will be published once the job is completed.

@opensearch-ci-bot (Collaborator)

Benchmark Results

Benchmark Results for Job: https://build.ci.opensearch.org/job/benchmark-pull-request/5174/

Metric Task Value Unit
Cumulative indexing time of primary shards 0 min
Min cumulative indexing time across primary shards 0 min
Median cumulative indexing time across primary shards 0 min
Max cumulative indexing time across primary shards 0 min
Cumulative indexing throttle time of primary shards 0 min
Min cumulative indexing throttle time across primary shards 0 min
Median cumulative indexing throttle time across primary shards 0 min
Max cumulative indexing throttle time across primary shards 0 min
Cumulative merge time of primary shards 0 min
Cumulative merge count of primary shards 0
Min cumulative merge time across primary shards 0 min
Median cumulative merge time across primary shards 0 min
Max cumulative merge time across primary shards 0 min
Cumulative merge throttle time of primary shards 0 min
Min cumulative merge throttle time across primary shards 0 min
Median cumulative merge throttle time across primary shards 0 min
Max cumulative merge throttle time across primary shards 0 min
Cumulative refresh time of primary shards 0 min
Cumulative refresh count of primary shards 31
Min cumulative refresh time across primary shards 0 min
Median cumulative refresh time across primary shards 0 min
Max cumulative refresh time across primary shards 0 min
Cumulative flush time of primary shards 0 min
Cumulative flush count of primary shards 8
Min cumulative flush time across primary shards 0 min
Median cumulative flush time across primary shards 0 min
Max cumulative flush time across primary shards 0 min
Total Young Gen GC time 2.18 s
Total Young Gen GC count 71
Total Old Gen GC time 0 s
Total Old Gen GC count 0
Store size 15.3221 GB
Translog size 4.09782e-07 GB
Heap used for segments 0 MB
Heap used for doc values 0 MB
Heap used for terms 0 MB
Heap used for norms 0 MB
Heap used for points 0 MB
Heap used for stored fields 0 MB
Segment count 73
100th percentile latency wait-for-snapshot-recovery 300001 ms
100th percentile service time wait-for-snapshot-recovery 300001 ms
error rate wait-for-snapshot-recovery 100 %
Min Throughput match-all 8 ops/s
Mean Throughput match-all 8 ops/s
Median Throughput match-all 8 ops/s
Max Throughput match-all 8 ops/s
50th percentile latency match-all 4.28224 ms
90th percentile latency match-all 4.91128 ms
99th percentile latency match-all 5.84384 ms
100th percentile latency match-all 5.86495 ms
50th percentile service time match-all 3.42397 ms
90th percentile service time match-all 3.82077 ms
99th percentile service time match-all 4.51325 ms
100th percentile service time match-all 4.65042 ms
error rate match-all 0 %
Min Throughput term 49.9 ops/s
Mean Throughput term 49.9 ops/s
Median Throughput term 49.9 ops/s
Max Throughput term 49.91 ops/s
50th percentile latency term 3.64265 ms
90th percentile latency term 4.15728 ms
99th percentile latency term 9.31213 ms
100th percentile latency term 14.2462 ms
50th percentile service time term 2.90563 ms
90th percentile service time term 3.16569 ms
99th percentile service time term 8.3056 ms
100th percentile service time term 13.2171 ms
error rate term 0 %
Min Throughput range 1 ops/s
Mean Throughput range 1.01 ops/s
Median Throughput range 1.01 ops/s
Max Throughput range 1.01 ops/s
50th percentile latency range 6.06911 ms
90th percentile latency range 6.68034 ms
99th percentile latency range 7.30484 ms
100th percentile latency range 7.37473 ms
50th percentile service time range 4.3927 ms
90th percentile service time range 4.67829 ms
99th percentile service time range 5.61881 ms
100th percentile service time range 5.64028 ms
error rate range 0 %
Min Throughput 200s-in-range 32.93 ops/s
Mean Throughput 200s-in-range 32.93 ops/s
Median Throughput 200s-in-range 32.93 ops/s
Max Throughput 200s-in-range 32.94 ops/s
50th percentile latency 200s-in-range 5.10912 ms
90th percentile latency 200s-in-range 6.15178 ms
99th percentile latency 200s-in-range 7.16864 ms
100th percentile latency 200s-in-range 7.41385 ms
50th percentile service time 200s-in-range 3.93731 ms
90th percentile service time 200s-in-range 4.25686 ms
99th percentile service time 200s-in-range 5.041 ms
100th percentile service time 200s-in-range 5.14181 ms
error rate 200s-in-range 0 %
Min Throughput 400s-in-range 50.03 ops/s
Mean Throughput 400s-in-range 50.03 ops/s
Median Throughput 400s-in-range 50.03 ops/s
Max Throughput 400s-in-range 50.03 ops/s
50th percentile latency 400s-in-range 3.43545 ms
90th percentile latency 400s-in-range 3.88271 ms
99th percentile latency 400s-in-range 9.29701 ms
100th percentile latency 400s-in-range 14.4099 ms
50th percentile service time 400s-in-range 2.62956 ms
90th percentile service time 400s-in-range 2.79619 ms
99th percentile service time 400s-in-range 8.37331 ms
100th percentile service time 400s-in-range 13.362 ms
error rate 400s-in-range 0 %
Min Throughput hourly_agg 1.01 ops/s
Mean Throughput hourly_agg 1.01 ops/s
Median Throughput hourly_agg 1.01 ops/s
Max Throughput hourly_agg 1.02 ops/s
50th percentile latency hourly_agg 13.9039 ms
90th percentile latency hourly_agg 15.0932 ms
99th percentile latency hourly_agg 16.7868 ms
100th percentile latency hourly_agg 17.49 ms
50th percentile service time hourly_agg 12.0059 ms
90th percentile service time hourly_agg 13.004 ms
99th percentile service time hourly_agg 14.7653 ms
100th percentile service time hourly_agg 15.154 ms
error rate hourly_agg 0 %
Min Throughput hourly_agg_with_filter 1 ops/s
Mean Throughput hourly_agg_with_filter 1 ops/s
Median Throughput hourly_agg_with_filter 1 ops/s
Max Throughput hourly_agg_with_filter 1 ops/s
50th percentile latency hourly_agg_with_filter 83.5865 ms
90th percentile latency hourly_agg_with_filter 94.7305 ms
99th percentile latency hourly_agg_with_filter 140.956 ms
100th percentile latency hourly_agg_with_filter 183.002 ms
50th percentile service time hourly_agg_with_filter 81.8279 ms
90th percentile service time hourly_agg_with_filter 92.8934 ms
99th percentile service time hourly_agg_with_filter 139.147 ms
100th percentile service time hourly_agg_with_filter 181.211 ms
error rate hourly_agg_with_filter 0 %
Min Throughput hourly_agg_with_filter_and_metrics 0.24 ops/s
Mean Throughput hourly_agg_with_filter_and_metrics 0.24 ops/s
Median Throughput hourly_agg_with_filter_and_metrics 0.24 ops/s
Max Throughput hourly_agg_with_filter_and_metrics 0.24 ops/s
50th percentile latency hourly_agg_with_filter_and_metrics 323483 ms
90th percentile latency hourly_agg_with_filter_and_metrics 451170 ms
99th percentile latency hourly_agg_with_filter_and_metrics 479744 ms
100th percentile latency hourly_agg_with_filter_and_metrics 481334 ms
50th percentile service time hourly_agg_with_filter_and_metrics 4176.52 ms
90th percentile service time hourly_agg_with_filter_and_metrics 4281.44 ms
99th percentile service time hourly_agg_with_filter_and_metrics 4475.93 ms
100th percentile service time hourly_agg_with_filter_and_metrics 4558.98 ms
error rate hourly_agg_with_filter_and_metrics 0 %
Min Throughput multi_term_agg 0.22 ops/s
Mean Throughput multi_term_agg 0.22 ops/s
Median Throughput multi_term_agg 0.22 ops/s
Max Throughput multi_term_agg 0.23 ops/s
50th percentile latency multi_term_agg 347090 ms
90th percentile latency multi_term_agg 485132 ms
99th percentile latency multi_term_agg 516024 ms
100th percentile latency multi_term_agg 517783 ms
50th percentile service time multi_term_agg 4506.97 ms
90th percentile service time multi_term_agg 4626.37 ms
99th percentile service time multi_term_agg 4866.29 ms
100th percentile service time multi_term_agg 4986.95 ms
error rate multi_term_agg 0 %
Min Throughput scroll 25.04 pages/s
Mean Throughput scroll 25.07 pages/s
Median Throughput scroll 25.07 pages/s
Max Throughput scroll 25.13 pages/s
50th percentile latency scroll 207.343 ms
90th percentile latency scroll 210.843 ms
99th percentile latency scroll 264.671 ms
100th percentile latency scroll 291.279 ms
50th percentile service time scroll 205.387 ms
90th percentile service time scroll 208.758 ms
99th percentile service time scroll 262.554 ms
100th percentile service time scroll 289.84 ms
error rate scroll 0 %
Min Throughput desc_sort_size 1 ops/s
Mean Throughput desc_sort_size 1 ops/s
Median Throughput desc_sort_size 1 ops/s
Max Throughput desc_sort_size 1 ops/s
50th percentile latency desc_sort_size 7.20233 ms
90th percentile latency desc_sort_size 7.99693 ms
99th percentile latency desc_sort_size 8.81847 ms
100th percentile latency desc_sort_size 8.96338 ms
50th percentile service time desc_sort_size 5.40942 ms
90th percentile service time desc_sort_size 5.89622 ms
99th percentile service time desc_sort_size 6.66956 ms
100th percentile service time desc_sort_size 6.68186 ms
error rate desc_sort_size 0 %
Min Throughput asc_sort_size 1 ops/s
Mean Throughput asc_sort_size 1 ops/s
Median Throughput asc_sort_size 1 ops/s
Max Throughput asc_sort_size 1 ops/s
50th percentile latency asc_sort_size 8.26643 ms
90th percentile latency asc_sort_size 8.92356 ms
99th percentile latency asc_sort_size 9.54172 ms
100th percentile latency asc_sort_size 9.60887 ms
50th percentile service time asc_sort_size 6.35152 ms
90th percentile service time asc_sort_size 7.0678 ms
99th percentile service time asc_sort_size 7.44685 ms
100th percentile service time asc_sort_size 7.53512 ms
error rate asc_sort_size 0 %
Min Throughput desc_sort_timestamp 1 ops/s
Mean Throughput desc_sort_timestamp 1 ops/s
Median Throughput desc_sort_timestamp 1 ops/s
Max Throughput desc_sort_timestamp 1 ops/s
50th percentile latency desc_sort_timestamp 13.5088 ms
90th percentile latency desc_sort_timestamp 14.2314 ms
99th percentile latency desc_sort_timestamp 15.988 ms
100th percentile latency desc_sort_timestamp 16.0292 ms
50th percentile service time desc_sort_timestamp 11.8524 ms
90th percentile service time desc_sort_timestamp 12.2587 ms
99th percentile service time desc_sort_timestamp 14.5756 ms
100th percentile service time desc_sort_timestamp 14.6071 ms
error rate desc_sort_timestamp 0 %
Min Throughput asc_sort_timestamp 1 ops/s
Mean Throughput asc_sort_timestamp 1 ops/s
Median Throughput asc_sort_timestamp 1 ops/s
Max Throughput asc_sort_timestamp 1 ops/s
50th percentile latency asc_sort_timestamp 8.08182 ms
90th percentile latency asc_sort_timestamp 8.76246 ms
99th percentile latency asc_sort_timestamp 9.50709 ms
100th percentile latency asc_sort_timestamp 10.022 ms
50th percentile service time asc_sort_timestamp 6.20787 ms
90th percentile service time asc_sort_timestamp 6.69686 ms
99th percentile service time asc_sort_timestamp 7.68305 ms
100th percentile service time asc_sort_timestamp 8.12289 ms
error rate asc_sort_timestamp 0 %
Min Throughput desc_sort_with_after_timestamp 1.01 ops/s
Mean Throughput desc_sort_with_after_timestamp 1.02 ops/s
Median Throughput desc_sort_with_after_timestamp 1.02 ops/s
Max Throughput desc_sort_with_after_timestamp 1.1 ops/s
50th percentile latency desc_sort_with_after_timestamp 6.27938 ms
90th percentile latency desc_sort_with_after_timestamp 6.80572 ms
99th percentile latency desc_sort_with_after_timestamp 7.46574 ms
100th percentile latency desc_sort_with_after_timestamp 7.57326 ms
50th percentile service time desc_sort_with_after_timestamp 4.42068 ms
90th percentile service time desc_sort_with_after_timestamp 4.76673 ms
99th percentile service time desc_sort_with_after_timestamp 5.54311 ms
100th percentile service time desc_sort_with_after_timestamp 5.58049 ms
error rate desc_sort_with_after_timestamp 0 %
Min Throughput asc_sort_with_after_timestamp 1.01 ops/s
Mean Throughput asc_sort_with_after_timestamp 1.02 ops/s
Median Throughput asc_sort_with_after_timestamp 1.02 ops/s
Max Throughput asc_sort_with_after_timestamp 1.1 ops/s
50th percentile latency asc_sort_with_after_timestamp 5.39147 ms
90th percentile latency asc_sort_with_after_timestamp 5.85555 ms
99th percentile latency asc_sort_with_after_timestamp 6.20304 ms
100th percentile latency asc_sort_with_after_timestamp 6.30199 ms
50th percentile service time asc_sort_with_after_timestamp 3.60652 ms
90th percentile service time asc_sort_with_after_timestamp 3.74439 ms
99th percentile service time asc_sort_with_after_timestamp 3.92286 ms
100th percentile service time asc_sort_with_after_timestamp 4.0068 ms
error rate asc_sort_with_after_timestamp 0 %
Min Throughput range_size 2.01 ops/s
Mean Throughput range_size 2.01 ops/s
Median Throughput range_size 2.01 ops/s
Max Throughput range_size 2.02 ops/s
50th percentile latency range_size 8.27167 ms
90th percentile latency range_size 8.83461 ms
99th percentile latency range_size 10.0267 ms
100th percentile latency range_size 10.1073 ms
50th percentile service time range_size 6.9538 ms
90th percentile service time range_size 7.30531 ms
99th percentile service time range_size 8.47684 ms
100th percentile service time range_size 8.55189 ms
error rate range_size 0 %
Min Throughput range_with_asc_sort 2.01 ops/s
Mean Throughput range_with_asc_sort 2.01 ops/s
Median Throughput range_with_asc_sort 2.01 ops/s
Max Throughput range_with_asc_sort 2.02 ops/s
50th percentile latency range_with_asc_sort 18.8966 ms
90th percentile latency range_with_asc_sort 20.8522 ms
99th percentile latency range_with_asc_sort 22.2688 ms
100th percentile latency range_with_asc_sort 22.4174 ms
50th percentile service time range_with_asc_sort 17.4148 ms
90th percentile service time range_with_asc_sort 19.218 ms
99th percentile service time range_with_asc_sort 20.434 ms
100th percentile service time range_with_asc_sort 20.5176 ms
error rate range_with_asc_sort 0 %
Min Throughput range_with_desc_sort 2.01 ops/s
Mean Throughput range_with_desc_sort 2.01 ops/s
Median Throughput range_with_desc_sort 2.01 ops/s
Max Throughput range_with_desc_sort 2.02 ops/s
50th percentile latency range_with_desc_sort 20.7926 ms
90th percentile latency range_with_desc_sort 24.532 ms
99th percentile latency range_with_desc_sort 33.4369 ms
100th percentile latency range_with_desc_sort 41.1489 ms
50th percentile service time range_with_desc_sort 18.6428 ms
90th percentile service time range_with_desc_sort 22.7379 ms
99th percentile service time range_with_desc_sort 31.1639 ms
100th percentile service time range_with_desc_sort 38.9406 ms
error rate range_with_desc_sort 0 %

@opensearch-ci-bot (Collaborator)

Benchmark Baseline Comparison Results

Benchmark Results for Job: https://build.ci.opensearch.org/job/benchmark-compare/210/

Metric Task Baseline Contender Diff Unit
Cumulative indexing time of primary shards 0 0 0 min
Min cumulative indexing time across primary shard 0 0 0 min
Median cumulative indexing time across primary shard 0 0 0 min
Max cumulative indexing time across primary shard 0 0 0 min
Cumulative indexing throttle time of primary shards 0 0 0 min
Min cumulative indexing throttle time across primary shard 0 0 0 min
Median cumulative indexing throttle time across primary shard 0 0 0 min
Max cumulative indexing throttle time across primary shard 0 0 0 min
Cumulative merge time of primary shards 0 0 0 min
Cumulative merge count of primary shards 0 0 0
Min cumulative merge time across primary shard 0 0 0 min
Median cumulative merge time across primary shard 0 0 0 min
Max cumulative merge time across primary shard 0 0 0 min
Cumulative merge throttle time of primary shards 0 0 0 min
Min cumulative merge throttle time across primary shard 0 0 0 min
Median cumulative merge throttle time across primary shard 0 0 0 min
Max cumulative merge throttle time across primary shard 0 0 0 min
Cumulative refresh time of primary shards 0 0 0 min
Cumulative refresh count of primary shards 31 31 0
Min cumulative refresh time across primary shard 0 0 0 min
Median cumulative refresh time across primary shard 0 0 0 min
Max cumulative refresh time across primary shard 0 0 0 min
Cumulative flush time of primary shards 0 0 0 min
Cumulative flush count of primary shards 8 8 0
Min cumulative flush time across primary shard 0 0 0 min
Median cumulative flush time across primary shard 0 0 0 min
Max cumulative flush time across primary shard 0 0 0 min
Total Young Gen GC time 2.16 2.18 0.02 s
Total Young Gen GC count 71 71 0
Total Old Gen GC time 0 0 0 s
Total Old Gen GC count 0 0 0
Store size 15.3221 15.3221 0 GB
Translog size 4.09782e-07 4.09782e-07 0 GB
Heap used for segments 0 0 0 MB
Heap used for doc values 0 0 0 MB
Heap used for terms 0 0 0 MB
Heap used for norms 0 0 0 MB
Heap used for points 0 0 0 MB
Heap used for stored fields 0 0 0 MB
Segment count 73 73 0
100th percentile latency wait-for-snapshot-recovery 300002 300001 -0.46875 ms
100th percentile service time wait-for-snapshot-recovery 300002 300001 -0.46875 ms
error rate wait-for-snapshot-recovery 100 100 0 %
Min Throughput match-all 8.00004 7.99863 -0.00142 ops/s
Mean Throughput match-all 8.0001 7.99876 -0.00134 ops/s
Median Throughput match-all 8.00011 7.99878 -0.00133 ops/s
Max Throughput match-all 8.00013 7.99891 -0.00122 ops/s
50th percentile latency match-all 4.08353 4.28224 0.1987 ms
90th percentile latency match-all 4.69237 4.91128 0.21891 ms
99th percentile latency match-all 5.07276 5.84384 0.77108 ms
100th percentile latency match-all 5.15329 5.86495 0.71166 ms
50th percentile service time match-all 3.09253 3.42397 0.33145 ms
90th percentile service time match-all 3.56093 3.82077 0.25984 ms
99th percentile service time match-all 4.31898 4.51325 0.19427 ms
100th percentile service time match-all 4.38225 4.65042 0.26817 ms
error rate match-all 0 0 0 %
Min Throughput term 49.8653 49.898 0.03277 ops/s
Mean Throughput term 49.8705 49.9015 0.03104 ops/s
Median Throughput term 49.8705 49.9015 0.03104 ops/s
Max Throughput term 49.8757 49.905 0.02931 ops/s
50th percentile latency term 3.45205 3.64265 0.1906 ms
90th percentile latency term 3.89325 4.15728 0.26403 ms
99th percentile latency term 8.97703 9.31213 0.3351 ms
100th percentile latency term 13.9687 14.2462 0.27751 ms
50th percentile service time term 2.65867 2.90563 0.24697 ms
90th percentile service time term 2.84279 3.16569 0.32289 ms
99th percentile service time term 3.18447 8.3056 5.12113 ms
100th percentile service time term 3.20051 13.2171 10.0166 ms
error rate term 0 0 0 %
Min Throughput range 1.00478 1.00465 -0.00013 ops/s
Mean Throughput range 1.00662 1.00644 -0.00018 ops/s
Median Throughput range 1.00636 1.00619 -0.00018 ops/s
Max Throughput range 1.00951 1.00925 -0.00026 ops/s
50th percentile latency range 6.32679 6.06911 -0.25767 ms
90th percentile latency range 6.74759 6.68034 -0.06725 ms
99th percentile latency range 14.9304 7.30484 -7.62552 ms
100th percentile latency range 22.2679 7.37473 -14.8932 ms
50th percentile service time range 4.40336 4.3927 -0.01066 ms
90th percentile service time range 4.713 4.67829 -0.03471 ms
99th percentile service time range 13.1272 5.61881 -7.50838 ms
100th percentile service time range 20.2359 5.64028 -14.5956 ms
error rate range 0 0 0 %
Min Throughput 200s-in-range 32.9022 32.9316 0.02934 ops/s
Mean Throughput 200s-in-range 32.9078 32.9337 0.02591 ops/s
Median Throughput 200s-in-range 32.9084 32.9323 0.0239 ops/s
Max Throughput 200s-in-range 32.9129 32.9374 0.02448 ops/s
50th percentile latency 200s-in-range 4.86638 5.10912 0.24274 ms
90th percentile latency 200s-in-range 5.66428 6.15178 0.4875 ms
99th percentile latency 200s-in-range 6.22094 7.16864 0.9477 ms
100th percentile latency 200s-in-range 6.60392 7.41385 0.80994 ms
50th percentile service time 200s-in-range 3.52999 3.93731 0.40733 ms
90th percentile service time 200s-in-range 3.69637 4.25686 0.5605 ms
99th percentile service time 200s-in-range 4.96872 5.041 0.07228 ms
100th percentile service time 200s-in-range 5.87261 5.14181 -0.7308 ms
error rate 200s-in-range 0 0 0 %
Min Throughput 400s-in-range 50.0106 50.033 0.02244 ops/s
Mean Throughput 400s-in-range 50.0119 50.034 0.02205 ops/s
Median Throughput 400s-in-range 50.0119 50.034 0.02205 ops/s
Max Throughput 400s-in-range 50.0132 50.0349 0.02166 ops/s
50th percentile latency 400s-in-range 3.69972 3.43545 -0.26427 ms
90th percentile latency 400s-in-range 4.12578 3.88271 -0.24306 ms
99th percentile latency 400s-in-range 9.65137 9.29701 -0.35436 ms
100th percentile latency 400s-in-range 14.8397 14.4099 -0.42975 ms
50th percentile service time 400s-in-range 2.91395 2.62956 -0.28439 ms
90th percentile service time 400s-in-range 3.02216 2.79619 -0.22597 ms
99th percentile service time 400s-in-range 8.69587 8.37331 -0.32257 ms
100th percentile service time 400s-in-range 13.9622 13.362 -0.60017 ms
error rate 400s-in-range 0 0 0 %
Min Throughput hourly_agg 1.00566 1.00571 5e-05 ops/s
Mean Throughput hourly_agg 1.00932 1.0094 8e-05 ops/s
Median Throughput hourly_agg 1.00848 1.00855 7e-05 ops/s
Max Throughput hourly_agg 1.01684 1.01699 0.00016 ops/s
50th percentile latency hourly_agg 13.2876 13.9039 0.61626 ms
90th percentile latency hourly_agg 14.3772 15.0932 0.71602 ms
99th percentile latency hourly_agg 16.4744 16.7868 0.31244 ms
100th percentile latency hourly_agg 16.7372 17.49 0.75281 ms
50th percentile service time hourly_agg 11.4508 12.0059 0.55507 ms
90th percentile service time hourly_agg 12.4074 13.004 0.59659 ms
99th percentile service time hourly_agg 14.3855 14.7653 0.37978 ms
100th percentile service time hourly_agg 14.8684 15.154 0.28559 ms
error rate hourly_agg 0 0 0 %
Min Throughput hourly_agg_with_filter 1.00298 1.00122 -0.00176 ops/s
Mean Throughput hourly_agg_with_filter 1.00488 1.002 -0.00288 ops/s
Median Throughput hourly_agg_with_filter 1.00445 1.00182 -0.00262 ops/s
Max Throughput hourly_agg_with_filter 1.00879 1.00361 -0.00518 ops/s
50th percentile latency hourly_agg_with_filter 81.6363 83.5865 1.95017 ms
90th percentile latency hourly_agg_with_filter 92.412 94.7305 2.31843 ms
99th percentile latency hourly_agg_with_filter 127.853 140.956 13.1029 ms
100th percentile latency hourly_agg_with_filter 160.239 183.002 22.7638 ms
50th percentile service time hourly_agg_with_filter 79.6116 81.8279 2.21627 ms
90th percentile service time hourly_agg_with_filter 90.3268 92.8934 2.56658 ms
99th percentile service time hourly_agg_with_filter 126 139.147 13.1461 ms
100th percentile service time hourly_agg_with_filter 158.385 181.211 22.8264 ms
error rate hourly_agg_with_filter 0 0 0 %
Min Throughput hourly_agg_with_filter_and_metrics 0.216111 0.235085 0.01897 ops/s
Mean Throughput hourly_agg_with_filter_and_metrics 0.216971 0.236491 0.01952 ops/s
Median Throughput hourly_agg_with_filter_and_metrics 0.216991 0.236566 0.01958 ops/s
Max Throughput hourly_agg_with_filter_and_metrics 0.217794 0.237297 0.0195 ops/s
50th percentile latency hourly_agg_with_filter_and_metrics 363245 323483 -39762.4 ms
90th percentile latency hourly_agg_with_filter_and_metrics 505752 451170 -54581.8 ms
99th percentile latency hourly_agg_with_filter_and_metrics 538132 479744 -58388.8 ms
100th percentile latency hourly_agg_with_filter_and_metrics 539921 481334 -58586.9 ms
50th percentile service time hourly_agg_with_filter_and_metrics 4576.64 4176.52 -400.121 ms
90th percentile service time hourly_agg_with_filter_and_metrics 4684.03 4281.44 -402.596 ms
99th percentile service time hourly_agg_with_filter_and_metrics 4785.3 4475.93 -309.373 ms
100th percentile service time hourly_agg_with_filter_and_metrics 4817.81 4558.98 -258.823 ms
error rate hourly_agg_with_filter_and_metrics 0 0 0 %
Min Throughput multi_term_agg 0.220417 0.224183 0.00377 ops/s
Mean Throughput multi_term_agg 0.222457 0.224716 0.00226 ops/s
Median Throughput multi_term_agg 0.222732 0.224588 0.00186 ops/s
Max Throughput multi_term_agg 0.223353 0.226172 0.00282 ops/s
50th percentile latency multi_term_agg 350647 347090 -3557.05 ms
90th percentile latency multi_term_agg 489341 485132 -4208.94 ms
99th percentile latency multi_term_agg 520187 516024 -4163.12 ms
100th percentile latency multi_term_agg 521879 517783 -4096.09 ms
50th percentile service time multi_term_agg 4483.31 4506.97 23.6614 ms
90th percentile service time multi_term_agg 4644.73 4626.37 -18.353 ms
99th percentile service time multi_term_agg 4690.78 4866.29 175.507 ms
100th percentile service time multi_term_agg 4711.89 4986.95 275.058 ms
error rate multi_term_agg 0 0 0 %
Min Throughput scroll 25.0498 25.0438 -0.00595 pages/s
Mean Throughput scroll 25.0819 25.0721 -0.00981 pages/s
Median Throughput scroll 25.0745 25.0656 -0.0089 pages/s
Max Throughput scroll 25.1485 25.1306 -0.0179 pages/s
50th percentile latency scroll 209.486 207.343 -2.14297 ms
90th percentile latency scroll 214.075 210.843 -3.23216 ms
99th percentile latency scroll 260.846 264.671 3.82584 ms
100th percentile latency scroll 283.995 291.279 7.28326 ms
50th percentile service time scroll 207.623 205.387 -2.23612 ms
90th percentile service time scroll 211.888 208.758 -3.12957 ms
99th percentile service time scroll 258.878 262.554 3.67609 ms
100th percentile service time scroll 281.69 289.84 8.14999 ms
error rate scroll 0 0 0 %
Min Throughput desc_sort_size 1.00319 1.0032 1e-05 ops/s
Mean Throughput desc_sort_size 1.00388 1.00389 1e-05 ops/s
Median Throughput desc_sort_size 1.00383 1.00384 1e-05 ops/s
Max Throughput desc_sort_size 1.00478 1.00479 1e-05 ops/s
50th percentile latency desc_sort_size 7.67193 7.20233 -0.4696 ms
90th percentile latency desc_sort_size 8.26067 7.99693 -0.26374 ms
99th percentile latency desc_sort_size 9.19009 8.81847 -0.37162 ms
100th percentile latency desc_sort_size 9.25557 8.96338 -0.29219 ms
50th percentile service time desc_sort_size 5.80691 5.40942 -0.39748 ms
90th percentile service time desc_sort_size 6.34789 5.89622 -0.45167 ms
99th percentile service time desc_sort_size 7.17157 6.66956 -0.502 ms
100th percentile service time desc_sort_size 7.36431 6.68186 -0.68245 ms
error rate desc_sort_size 0 0 0 %
Min Throughput asc_sort_size 1.0032 1.00323 3e-05 ops/s
Mean Throughput asc_sort_size 1.00389 1.00392 3e-05 ops/s
Median Throughput asc_sort_size 1.00384 1.00387 3e-05 ops/s
Max Throughput asc_sort_size 1.00479 1.00483 4e-05 ops/s
50th percentile latency asc_sort_size 8.39235 8.26643 -0.12592 ms
90th percentile latency asc_sort_size 9.23733 8.92356 -0.31377 ms
99th percentile latency asc_sort_size 9.98721 9.54172 -0.4455 ms
100th percentile latency asc_sort_size 9.99685 9.60887 -0.38799 ms
50th percentile service time asc_sort_size 6.65887 6.35152 -0.30736 ms
90th percentile service time asc_sort_size 7.37027 7.0678 -0.30247 ms
99th percentile service time asc_sort_size 8.08429 7.44685 -0.63744 ms
100th percentile service time asc_sort_size 8.30731 7.53512 -0.77219 ms
error rate asc_sort_size 0 0 0 %
Min Throughput desc_sort_timestamp 1.00316 1.00312 -4e-05 ops/s
Mean Throughput desc_sort_timestamp 1.00384 1.00379 -5e-05 ops/s
Median Throughput desc_sort_timestamp 1.00378 1.00374 -5e-05 ops/s
Max Throughput desc_sort_timestamp 1.00472 1.00466 -6e-05 ops/s
50th percentile latency desc_sort_timestamp 13.7684 13.5088 -0.25965 ms
90th percentile latency desc_sort_timestamp 14.6248 14.2314 -0.39336 ms
99th percentile latency desc_sort_timestamp 16.3723 15.988 -0.3843 ms
100th percentile latency desc_sort_timestamp 16.813 16.0292 -0.78384 ms
50th percentile service time desc_sort_timestamp 12.0042 11.8524 -0.15178 ms
90th percentile service time desc_sort_timestamp 12.5057 12.2587 -0.24695 ms
99th percentile service time desc_sort_timestamp 14.3426 14.5756 0.233 ms
100th percentile service time desc_sort_timestamp 14.6647 14.6071 -0.05765 ms
error rate desc_sort_timestamp 0 0 0 %
Min Throughput asc_sort_timestamp 1.00327 1.00328 0 ops/s
Mean Throughput asc_sort_timestamp 1.00398 1.00398 0 ops/s
Median Throughput asc_sort_timestamp 1.00392 1.00393 0 ops/s
Max Throughput asc_sort_timestamp 1.00489 1.0049 1e-05 ops/s
50th percentile latency asc_sort_timestamp 7.98311 8.08182 0.09871 ms
90th percentile latency asc_sort_timestamp 8.56813 8.76246 0.19433 ms
99th percentile latency asc_sort_timestamp 9.33831 9.50709 0.16878 ms
100th percentile latency asc_sort_timestamp 9.5369 10.022 0.48514 ms
50th percentile service time asc_sort_timestamp 5.97092 6.20787 0.23695 ms
90th percentile service time asc_sort_timestamp 6.56714 6.69686 0.12972 ms
99th percentile service time asc_sort_timestamp 7.26511 7.68305 0.41794 ms
100th percentile service time asc_sort_timestamp 7.36109 8.12289 0.7618 ms
error rate asc_sort_timestamp 0 0 0 %
Min Throughput desc_sort_with_after_timestamp 1.00902 1.00899 -3e-05 ops/s
Mean Throughput desc_sort_with_after_timestamp 1.02402 1.02394 -8e-05 ops/s
Median Throughput desc_sort_with_after_timestamp 1.01652 1.01647 -5e-05 ops/s
Max Throughput desc_sort_with_after_timestamp 1.09819 1.09782 -0.00036 ops/s
50th percentile latency desc_sort_with_after_timestamp 5.98317 6.27938 0.29621 ms
90th percentile latency desc_sort_with_after_timestamp 6.5393 6.80572 0.26642 ms
99th percentile latency desc_sort_with_after_timestamp 6.8667 7.46574 0.59904 ms
100th percentile latency desc_sort_with_after_timestamp 6.88107 7.57326 0.69219 ms
50th percentile service time desc_sort_with_after_timestamp 4.20681 4.42068 0.21387 ms
90th percentile service time desc_sort_with_after_timestamp 4.5541 4.76673 0.21263 ms
99th percentile service time desc_sort_with_after_timestamp 5.08222 5.54311 0.46089 ms
100th percentile service time desc_sort_with_after_timestamp 5.23009 5.58049 0.35039 ms
error rate desc_sort_with_after_timestamp 0 0 0 %
Min Throughput asc_sort_with_after_timestamp 1.00906 1.00906 -0 ops/s
Mean Throughput asc_sort_with_after_timestamp 1.02412 1.02411 -0 ops/s
Median Throughput asc_sort_with_after_timestamp 1.01659 1.01659 -0 ops/s
Max Throughput asc_sort_with_after_timestamp 1.09864 1.09855 -9e-05 ops/s
50th percentile latency asc_sort_with_after_timestamp 5.53987 5.39147 -0.1484 ms
90th percentile latency asc_sort_with_after_timestamp 5.91514 5.85555 -0.05959 ms
99th percentile latency asc_sort_with_after_timestamp 6.20781 6.20304 -0.00477 ms
100th percentile latency asc_sort_with_after_timestamp 6.20875 6.30199 0.09324 ms
50th percentile service time asc_sort_with_after_timestamp 3.65936 3.60652 -0.05284 ms
90th percentile service time asc_sort_with_after_timestamp 3.81855 3.74439 -0.07415 ms
99th percentile service time asc_sort_with_after_timestamp 3.94172 3.92286 -0.01886 ms
100th percentile service time asc_sort_with_after_timestamp 3.94465 4.0068 0.06214 ms
error rate asc_sort_with_after_timestamp 0 0 0 %
Min Throughput range_size 2.00953 2.00955 1e-05 ops/s
Mean Throughput range_size 2.01319 2.0132 1e-05 ops/s
Median Throughput range_size 2.01268 2.01269 1e-05 ops/s
Max Throughput range_size 2.01888 2.0189 3e-05 ops/s
50th percentile latency range_size 8.52367 8.27167 -0.25199 ms
90th percentile latency range_size 9.10341 8.83461 -0.2688 ms
99th percentile latency range_size 9.8454 10.0267 0.18131 ms
100th percentile latency range_size 10.0394 10.1073 0.06786 ms
50th percentile service time range_size 7.26068 6.9538 -0.30688 ms
90th percentile service time range_size 7.49863 7.30531 -0.19332 ms
99th percentile service time range_size 8.3443 8.47684 0.13253 ms
100th percentile service time range_size 8.40524 8.55189 0.14665 ms
error rate range_size 0 0 0 %
Min Throughput range_with_asc_sort 2.00853 2.00832 -0.00022 ops/s
Mean Throughput range_with_asc_sort 2.01181 2.01152 -0.00029 ops/s
Median Throughput range_with_asc_sort 2.01135 2.01108 -0.00027 ops/s
Max Throughput range_with_asc_sort 2.01693 2.0165 -0.00043 ops/s
50th percentile latency range_with_asc_sort 19.3013 18.8966 -0.40474 ms
90th percentile latency range_with_asc_sort 21.44 20.8522 -0.5878 ms
99th percentile latency range_with_asc_sort 22.3658 22.2688 -0.097 ms
100th percentile latency range_with_asc_sort 22.408 22.4174 0.00942 ms
50th percentile service time range_with_asc_sort 17.5966 17.4148 -0.18179 ms
90th percentile service time range_with_asc_sort 19.992 19.218 -0.77396 ms
99th percentile service time range_with_asc_sort 20.5763 20.434 -0.14236 ms
100th percentile service time range_with_asc_sort 20.6325 20.5176 -0.11491 ms
error rate range_with_asc_sort 0 0 0 %
Min Throughput range_with_desc_sort 2.0093 2.00933 3e-05 ops/s
Mean Throughput range_with_desc_sort 2.01286 2.0129 4e-05 ops/s
Median Throughput range_with_desc_sort 2.01237 2.01242 6e-05 ops/s
Max Throughput range_with_desc_sort 2.01843 2.01852 9e-05 ops/s
50th percentile latency range_with_desc_sort 20.9821 20.7926 -0.18952 ms
90th percentile latency range_with_desc_sort 24.4813 24.532 0.0507 ms
99th percentile latency range_with_desc_sort 25.4849 33.4369 7.95205 ms
100th percentile latency range_with_desc_sort 25.504 41.1489 15.6449 ms
50th percentile service time range_with_desc_sort 18.6587 18.6428 -0.01594 ms
90th percentile service time range_with_desc_sort 22.2279 22.7379 0.51001 ms
99th percentile service time range_with_desc_sort 23.3495 31.1639 7.81438 ms
100th percentile service time range_with_desc_sort 23.4663 38.9406 15.4744 ms
error rate range_with_desc_sort 0 0 0 %

@asimmahmood1
Contributor Author

{"run-benchmark-test": "id_3"}


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
server/src/main/java/org/opensearch/search/aggregations/bucket/HistogramSkiplistLeafCollector.java (1)

231-234: Improve comment grammar.

The comment "Skiplist is based as top level agg" is grammatically awkward.

Consider rewording:

-    /**
-     * Skiplist is based as top level agg (null parent) or parent that will execute in sorted order
-     *
-     */
+    /**
+     * Skiplist can be used when executing as a top-level aggregation (null parent) or when the parent executes in sorted order
+     *
+     */
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a7acf06 and 44bbcb3.

📒 Files selected for processing (2)
  • server/src/main/java/org/opensearch/search/aggregations/bucket/HistogramSkiplistLeafCollector.java (7 hunks)
  • server/src/main/java/org/opensearch/search/aggregations/bucket/histogram/AutoDateHistogramAggregator.java (16 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
server/src/main/java/org/opensearch/search/aggregations/bucket/HistogramSkiplistLeafCollector.java (1)
server/src/main/java/org/opensearch/search/aggregations/LeafBucketCollector.java (1)
  • LeafBucketCollector (51-179)
server/src/main/java/org/opensearch/search/aggregations/bucket/histogram/AutoDateHistogramAggregator.java (1)
server/src/main/java/org/opensearch/search/aggregations/bucket/HistogramSkiplistLeafCollector.java (1)
  • HistogramSkiplistLeafCollector (33-247)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (20)
  • GitHub Check: gradle-check
  • GitHub Check: assemble (25, windows-latest)
  • GitHub Check: assemble (25, ubuntu-24.04-arm)
  • GitHub Check: assemble (25, ubuntu-latest)
  • GitHub Check: assemble (21, windows-latest)
  • GitHub Check: assemble (21, ubuntu-latest)
  • GitHub Check: assemble (21, ubuntu-24.04-arm)
  • GitHub Check: precommit (21, windows-2025, true)
  • GitHub Check: precommit (25, windows-latest)
  • GitHub Check: precommit (25, macos-15-intel)
  • GitHub Check: precommit (25, macos-15)
  • GitHub Check: precommit (25, ubuntu-24.04-arm)
  • GitHub Check: precommit (21, macos-15-intel)
  • GitHub Check: precommit (21, ubuntu-24.04-arm)
  • GitHub Check: Analyze (java)
  • GitHub Check: precommit (25, ubuntu-latest)
  • GitHub Check: precommit (21, macos-15)
  • GitHub Check: precommit (21, ubuntu-latest)
  • GitHub Check: precommit (21, windows-latest)
  • GitHub Check: detect-breaking-change
🔇 Additional comments (19)
server/src/main/java/org/opensearch/search/aggregations/bucket/histogram/AutoDateHistogramAggregator.java (9)

34-37: LGTM! Field name extraction for skiplist support.

The imports and field name extraction logic are well-structured. The code safely handles cases where the values source is not field-data backed by setting fieldName to null, which is checked before use at line 279.

Also applies to: 143-143, 166-168


255-259: LGTM! Abstract method signature updated consistently.

The signature change to include DocValuesSkipper is implemented consistently across both FromSingle and FromMany subclasses within this file.


278-283: LGTM! Skipper retrieval is properly guarded.

The code correctly retrieves the DocValuesSkipper only when fieldName is available, with null-safety handled by downstream skiplist eligibility checks.


444-489: LGTM! Standard collection path preserved.

The fallback implementation maintains the existing collection behavior when skiplist optimization is unavailable, ensuring backward compatibility.


497-545: LGTM! Rounding advancement logic is well-structured.

The increaseRoundingIfNeeded method correctly implements the rebucketing algorithm:

  • Tracks min/max bounds to determine when rounding needs to advance
  • Uses both bucket count and time range heuristics
  • Properly manages resource cleanup with try-with-resources
  • The do-while loop ensures at least one rebucketing attempt when thresholds are exceeded

707-737: LGTM! Skiplist integration for FromMany with appropriate constraints.

The skiplist path correctly handles the constraint that HistogramSkiplistLeafCollector currently supports one owningBucketOrd at a time. The comment on lines 712-720 clearly explains when this optimization applies (FilterRewrite context) and notes future enhancement possibilities.


739-785: LGTM! Standard collection path for FromMany preserved.

The fallback implementation maintains existing multi-bucket collection behavior when skiplist is unavailable.


792-838: LGTM! Per-bucket rounding advancement with efficient rebucketing.

The increaseRoundingIfNeeded method for FromMany correctly implements the incremental rounding advancement:

  • Dynamically grows tracking arrays (mins, maxes) as needed
  • Uses ratio-based estimation to predict bucket counts after rounding changes
  • Defers expensive rebucketing until wastedBucketsOverestimate exceeds threshold
  • The exponential backoff for nextRebucketAt helps avoid O(n²) behavior
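The deferred-rebucketing heuristic above can be sketched as follows. This is a minimal illustration, not the actual OpenSearch code: the class name, the initial threshold, and the doubling factor are assumptions; only the names wastedBucketsOverestimate and nextRebucketAt are taken from the review notes.

```java
// Illustrative sketch: defer an expensive rebucket until enough estimated
// waste accumulates, then double the threshold (exponential backoff) so the
// total rebucketing work stays near-linear rather than O(n^2).
public class RebucketBackoff {
    private long wastedBucketsOverestimate = 0;
    private long nextRebucketAt = 1000; // assumed initial threshold

    /** Records newly wasted buckets; returns true when a rebucket should run now. */
    public boolean maybeRebucket(long newlyWasted) {
        wastedBucketsOverestimate += newlyWasted;
        if (wastedBucketsOverestimate < nextRebucketAt) {
            return false; // defer: merging buckets now would not pay off yet
        }
        nextRebucketAt *= 2; // backoff: each rebucket earns a higher bar for the next
        return true;
    }

    public static void main(String[] args) {
        RebucketBackoff b = new RebucketBackoff();
        System.out.println(b.maybeRebucket(400)); // false: 400 < 1000
        System.out.println(b.maybeRebucket(700)); // true: 1100 >= 1000, threshold -> 2000
        System.out.println(b.maybeRebucket(400)); // false: 1500 < 2000
    }
}
```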

392-393: LGTM! Debug tracking for skiplist usage.

The skiplistCollectorCount tracking provides valuable observability into how often the skiplist optimization is applied, which will be helpful for performance analysis and debugging.

Also applies to: 556-556, 663-663, 884-884

server/src/main/java/org/opensearch/search/aggregations/bucket/HistogramSkiplistLeafCollector.java (10)

16-17: LGTM! Necessary imports for dynamic rounding support.

The added imports support the new supplier-based design and skiplist eligibility checking functionality.

Also applies to: 19-19, 23-24


29-30: LGTM! Important constraint documented.

The javadoc correctly notes the current single-owningBucketOrd limitation, which aligns with the usage restrictions described in AutoDateHistogramAggregator.FromMany.


38-38: LGTM! Supplier-based design enables dynamic rounding.

The refactoring from fixed fields to suppliers allows AutoDateHistogramAggregator to dynamically adjust rounding during collection, which is central to the skiplist optimization.

Also applies to: 41-47


71-102: LGTM! Constructor delegation maintains backward compatibility.

The original constructor now delegates to the new supplier-based constructor, preserving backward compatibility while enabling dynamic rounding support. The isSubNoOp optimization is a nice touch for performance when there are no sub-aggregations.


127-148: LGTM! Skiplist advancement now uses dynamic rounding.

The advanceSkipper method correctly fetches the current rounding from the supplier, ensuring skiplist logic operates on the latest rounding configuration.


152-180: LGTM! Collection logic handles dynamic rounding with cache invalidation.

The collect method correctly invalidates cached skiplist state when rounding changes and uses the current rounding for all bucketing decisions. The callback notification at line 178 allows the aggregator to trigger further rounding adjustments as needed.


198-209: LGTM! Efficient stream handling based on sub-aggregation presence.

The isSubNoOp optimization smartly uses stream.count() when sub-aggregations are absent, avoiding unnecessary per-document collection overhead.
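The shape of that fast path can be sketched like this. The names here (SubCollector, collectBucket, the null check standing in for isSubNoOp) are illustrative, not the actual aggregator API:

```java
import java.util.stream.IntStream;

// Hypothetical sketch of the isSubNoOp fast path: with no sub-aggregations,
// only the number of matching docs matters, so a terminal count() on the
// doc-id stream avoids any per-document callback. With sub-aggregations,
// every doc must still be visited individually.
public class SubNoOpFastPath {
    interface SubCollector { void collect(int doc); }

    static long collectBucket(IntStream docs, SubCollector sub) {
        if (sub == null) {        // stands in for the isSubNoOp check
            return docs.count();  // bulk count, no per-doc work
        }
        long[] n = new long[1];
        docs.forEach(doc -> { sub.collect(doc); n[0]++; });
        return n[0];
    }

    public static void main(String[] args) {
        System.out.println(collectBucket(IntStream.range(0, 5), null)); // 5
    }
}
```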


222-229: LGTM! Callback interface for rounding adjustment notification.

The IncreaseRoundingIfNeeded interface provides a clean abstraction for notifying the aggregator when new buckets are collected, enabling dynamic rounding decisions.


235-246: LGTM! Skiplist eligibility check with noted limitation.

The canUseSkiplist method correctly identifies when skiplist optimization is applicable: top-level aggregations or filter-rewrite contexts. The TODO comment at line 237 appropriately flags that hard bounds support is not yet implemented.


65-69: Verify that reference equality reliably detects rounding changes.

The code uses reference equality (currentRounding != lastPreparedRounding) at line 156 to detect when AutoDateHistogramAggregator changes rounding. This assumes that a new Rounding.Prepared instance is created whenever the rounding changes. Confirm that the prepareRounding() method consistently creates new instances rather than reusing or caching instances, as reuse would cause the reference equality check to fail silently and skip necessary cache invalidation.
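The pattern under question reduces to the following sketch, where a plain Object stands in for Rounding.Prepared and the class and method names are illustrative:

```java
// Sketch of reference-equality change detection: cached state is refreshed
// only when the supplier hands back a *new* instance. If the rounding were
// semantically changed but the same instance reused, onCollect would return
// false and stale cached state would be kept silently.
public class RoundingChangeDetector {
    private Object lastPreparedRounding;
    private int invalidations;

    /** Returns true iff the rounding instance changed, invalidating caches. */
    public boolean onCollect(Object currentRounding) {
        if (currentRounding != lastPreparedRounding) { // reference equality only
            lastPreparedRounding = currentRounding;
            invalidations++;
            return true; // cached skiplist state must be recomputed
        }
        return false; // same instance: reuse cached state
    }

    public int invalidations() { return invalidations; }
}
```

This only behaves correctly under the assumption the review asks to verify: that prepareRounding() constructs a fresh instance whenever the rounding actually changes.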

Comment on lines +426 to +442
protected LeafBucketCollector getLeafCollector(SortedNumericDocValues values, DocValuesSkipper skipper, LeafBucketCollector sub)
    throws IOException {
    // Check if skiplist optimization is available
    final NumericDocValues singleton = DocValues.unwrapSingleton(values);
    if (HistogramSkiplistLeafCollector.canUseSkiplist(null, parent, skipper, singleton)) {
        // Increment skiplist collector count
        skiplistCollectorCount++;
        return new HistogramSkiplistLeafCollector(
            singleton,
            skipper,
            (owningBucketOrd) -> preparedRounding, // for FromSingle there will be no parent/
            () -> bucketOrds,
            sub,
            FromSingle.this,
            (owningBucket, rounded) -> increaseRoundingIfNeeded(rounded) // Pass supplier to allow rounding change
        );
    }

⚠️ Potential issue | 🟡 Minor

Fix comment typo; skiplist integration looks good.

The skiplist eligibility check and collector creation are well-structured with proper fallback to the standard path.

Minor issue: Line 436 has a trailing slash in the comment.

Apply this diff to fix the comment:

-                    (owningBucketOrd) -> preparedRounding,  // for FromSingle there will be no parent/
+                    (owningBucketOrd) -> preparedRounding,  // for FromSingle there will be no parent

@github-actions
Contributor

github-actions bot commented Dec 2, 2025

❕ Gradle check result for 44bbcb3: UNSTABLE

Please review all flaky tests that succeeded after retry and create an issue if one does not already exist to track the flaky failure.

@codecov

codecov bot commented Dec 2, 2025

Codecov Report

❌ Patch coverage is 70.79646% with 33 lines in your changes missing coverage. Please review.
✅ Project coverage is 73.66%. Comparing base (da18cc6) to head (44bbcb3).
⚠️ Report is 3 commits behind head on main.

Files with missing lines Patch % Lines
.../bucket/histogram/AutoDateHistogramAggregator.java 68.96% 20 Missing and 7 partials ⚠️
...gations/bucket/HistogramSkiplistLeafCollector.java 76.00% 1 Missing and 5 partials ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##               main   #20057      +/-   ##
============================================
+ Coverage     73.25%   73.66%   +0.40%     
- Complexity    71684    72102     +418     
============================================
  Files          5788     5793       +5     
  Lines        327866   328096     +230     
  Branches      47218    47252      +34     
============================================
+ Hits         240194   241683    +1489     
+ Misses        68399    67351    -1048     
+ Partials      19273    19062     -211     


@github-project-automation github-project-automation bot moved this from In Progress to Done in Performance Roadmap Dec 2, 2025
@asimmahmood1 asimmahmood1 reopened this Dec 2, 2025
@github-project-automation github-project-automation bot moved this from Done to In Progress in Performance Roadmap Dec 2, 2025
@github-actions
Contributor

github-actions bot commented Dec 2, 2025

❌ Gradle check result for 44bbcb3: FAILURE

Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

@github-project-automation github-project-automation bot moved this from In Progress to Done in Performance Roadmap Dec 2, 2025
@asimmahmood1 asimmahmood1 reopened this Dec 2, 2025
@github-project-automation github-project-automation bot moved this from Done to In Progress in Performance Roadmap Dec 2, 2025
@github-actions
Contributor

github-actions bot commented Dec 2, 2025

❕ Gradle check result for 44bbcb3: UNSTABLE

Please review all flaky tests that succeeded after retry and create an issue if one does not already exist to track the flaky failure.

@asimmahmood1
Contributor Author

{"run-benchmark-test": "id_11"}

@asimmahmood1
Contributor Author

{"run-benchmark-test": "id_3"}

@github-actions
Contributor

github-actions bot commented Dec 2, 2025

The Jenkins job url is https://build.ci.opensearch.org/job/benchmark-pull-request/5270/ . Final results will be published once the job is completed.

@github-actions
Contributor

github-actions bot commented Dec 2, 2025

The Jenkins job url is https://build.ci.opensearch.org/job/benchmark-pull-request/5271/ . Final results will be published once the job is completed.

@opensearch-ci-bot
Collaborator

The benchmark job https://build.ci.opensearch.org/job/benchmark-pull-request/5270/ failed.
Please see logs to debug.

@opensearch-ci-bot
Collaborator

The benchmark job https://build.ci.opensearch.org/job/benchmark-pull-request/5271/ failed.
Please see logs to debug.

@asimmahmood1
Contributor Author

Benchmarks failed due to insufficient EC2 capacity:

opensearch-infra-stack-5271 | 6:12:38 PM | CREATE_FAILED        | AWS::EC2::Instance                        | single-node-instance (singlenodeinstance5DE833AF133b4ae8cbb89594) Resource handler returned message: "We currently do not have sufficient r5.xlarge capacity in the Availability Zone you requested (us-east-1a). Our system will be working on provisioning additional capacity. You can currently get r5.xlarge capacity by not specifying an Availability Zone in your request or choosing us-east-1b, us-east-1c, us-east-1d, us-east-1f. (Service: Ec2, Status Code: 500, Request ID: df795b2f-5086-4a1a-8b8f-26e5fcdbe1cf) (SDK Attempt Count: 5)" (RequestToken: 36f81b51-c440-6f04-bc9f-0dc97485ee5e, HandlerErrorCode: GeneralServiceException)

@asimmahmood1
Contributor Author

{"run-benchmark-test": "id_11"}

@github-actions
Contributor

github-actions bot commented Dec 2, 2025

The Jenkins job url is https://build.ci.opensearch.org/job/benchmark-pull-request/5272/ . Final results will be published once the job is completed.

@opensearch-ci-bot
Collaborator

Benchmark Results

Benchmark Results for Job: https://build.ci.opensearch.org/job/benchmark-pull-request/5272/

Metric Task Value Unit
Cumulative indexing time of primary shards 0 min
Min cumulative indexing time across primary shards 0 min
Median cumulative indexing time across primary shards 0 min
Max cumulative indexing time across primary shards 0 min
Cumulative indexing throttle time of primary shards 0 min
Min cumulative indexing throttle time across primary shards 0 min
Median cumulative indexing throttle time across primary shards 0 min
Max cumulative indexing throttle time across primary shards 0 min
Cumulative merge time of primary shards 0 min
Cumulative merge count of primary shards 0
Min cumulative merge time across primary shards 0 min
Median cumulative merge time across primary shards 0 min
Max cumulative merge time across primary shards 0 min
Cumulative merge throttle time of primary shards 0 min
Min cumulative merge throttle time across primary shards 0 min
Median cumulative merge throttle time across primary shards 0 min
Max cumulative merge throttle time across primary shards 0 min
Cumulative refresh time of primary shards 0 min
Cumulative refresh count of primary shards 31
Min cumulative refresh time across primary shards 0 min
Median cumulative refresh time across primary shards 0 min
Max cumulative refresh time across primary shards 0 min
Cumulative flush time of primary shards 0 min
Cumulative flush count of primary shards 8
Min cumulative flush time across primary shards 0 min
Median cumulative flush time across primary shards 0 min
Max cumulative flush time across primary shards 0 min
Total Young Gen GC time 1.339 s
Total Young Gen GC count 69
Total Old Gen GC time 0 s
Total Old Gen GC count 0
Store size 15.3221 GB
Translog size 4.09782e-07 GB
Heap used for segments 0 MB
Heap used for doc values 0 MB
Heap used for terms 0 MB
Heap used for norms 0 MB
Heap used for points 0 MB
Heap used for stored fields 0 MB
Segment count 73
100th percentile latency wait-for-snapshot-recovery 300001 ms
100th percentile service time wait-for-snapshot-recovery 300001 ms
error rate wait-for-snapshot-recovery 100 %
Min Throughput match-all 8 ops/s
Mean Throughput match-all 8 ops/s
Median Throughput match-all 8 ops/s
Max Throughput match-all 8 ops/s
50th percentile latency match-all 3.86757 ms
90th percentile latency match-all 4.35328 ms
99th percentile latency match-all 4.95502 ms
100th percentile latency match-all 5.19581 ms
50th percentile service time match-all 2.92093 ms
90th percentile service time match-all 3.1362 ms
99th percentile service time match-all 3.76365 ms
100th percentile service time match-all 3.82623 ms
error rate match-all 0 %
Min Throughput term 49.86 ops/s
Mean Throughput term 49.87 ops/s
Median Throughput term 49.87 ops/s
Max Throughput term 49.88 ops/s
50th percentile latency term 3.99056 ms
90th percentile latency term 4.39448 ms
99th percentile latency term 9.25511 ms
100th percentile latency term 13.8603 ms
50th percentile service time term 3.1054 ms
90th percentile service time term 3.2961 ms
99th percentile service time term 6.04156 ms
100th percentile service time term 8.27878 ms
error rate term 0 %
Min Throughput range 1 ops/s
Mean Throughput range 1.01 ops/s
Median Throughput range 1.01 ops/s
Max Throughput range 1.01 ops/s
50th percentile latency range 5.89274 ms
90th percentile latency range 6.29099 ms
99th percentile latency range 6.72695 ms
100th percentile latency range 6.73909 ms
50th percentile service time range 3.99803 ms
90th percentile service time range 4.3071 ms
99th percentile service time range 4.74987 ms
100th percentile service time range 4.81262 ms
error rate range 0 %
Min Throughput 200s-in-range 32.89 ops/s
Mean Throughput 200s-in-range 32.9 ops/s
Median Throughput 200s-in-range 32.9 ops/s
Max Throughput 200s-in-range 32.9 ops/s
50th percentile latency 200s-in-range 5.00815 ms
90th percentile latency 200s-in-range 5.91968 ms
99th percentile latency 200s-in-range 6.92215 ms
100th percentile latency 200s-in-range 7.2257 ms
50th percentile service time 200s-in-range 3.73302 ms
90th percentile service time 200s-in-range 3.89549 ms
99th percentile service time 200s-in-range 5.70838 ms
100th percentile service time 200s-in-range 6.93133 ms
error rate 200s-in-range 0 %
Min Throughput 400s-in-range 50.01 ops/s
Mean Throughput 400s-in-range 50.01 ops/s
Median Throughput 400s-in-range 50.01 ops/s
Max Throughput 400s-in-range 50.01 ops/s
50th percentile latency 400s-in-range 3.50179 ms
90th percentile latency 400s-in-range 3.89675 ms
99th percentile latency 400s-in-range 10.4887 ms
100th percentile latency 400s-in-range 14.8726 ms
50th percentile service time 400s-in-range 2.71335 ms
90th percentile service time 400s-in-range 2.85134 ms
99th percentile service time 400s-in-range 9.415 ms
100th percentile service time 400s-in-range 13.579 ms
error rate 400s-in-range 0 %
Min Throughput hourly_agg 1 ops/s
Mean Throughput hourly_agg 1.01 ops/s
Median Throughput hourly_agg 1.01 ops/s
Max Throughput hourly_agg 1.01 ops/s
50th percentile latency hourly_agg 13.6735 ms
90th percentile latency hourly_agg 14.5529 ms
99th percentile latency hourly_agg 15.8279 ms
100th percentile latency hourly_agg 16.0163 ms
50th percentile service time hourly_agg 11.8067 ms
90th percentile service time hourly_agg 12.6147 ms
99th percentile service time hourly_agg 13.6716 ms
100th percentile service time hourly_agg 13.7522 ms
error rate hourly_agg 0 %
Min Throughput hourly_agg_with_filter 1 ops/s
Mean Throughput hourly_agg_with_filter 1 ops/s
Median Throughput hourly_agg_with_filter 1 ops/s
Max Throughput hourly_agg_with_filter 1.01 ops/s
50th percentile latency hourly_agg_with_filter 80.4523 ms
90th percentile latency hourly_agg_with_filter 93.0656 ms
99th percentile latency hourly_agg_with_filter 134.039 ms
100th percentile latency hourly_agg_with_filter 168.4 ms
50th percentile service time hourly_agg_with_filter 78.7778 ms
90th percentile service time hourly_agg_with_filter 91.1438 ms
99th percentile service time hourly_agg_with_filter 132.398 ms
100th percentile service time hourly_agg_with_filter 167.072 ms
error rate hourly_agg_with_filter 0 %
Min Throughput hourly_agg_with_filter_and_metrics 0.18 ops/s
Mean Throughput hourly_agg_with_filter_and_metrics 0.18 ops/s
Median Throughput hourly_agg_with_filter_and_metrics 0.18 ops/s
Max Throughput hourly_agg_with_filter_and_metrics 0.18 ops/s
50th percentile latency hourly_agg_with_filter_and_metrics 458401 ms
90th percentile latency hourly_agg_with_filter_and_metrics 640536 ms
99th percentile latency hourly_agg_with_filter_and_metrics 681837 ms
100th percentile latency hourly_agg_with_filter_and_metrics 684118 ms
50th percentile service time hourly_agg_with_filter_and_metrics 5540.04 ms
90th percentile service time hourly_agg_with_filter_and_metrics 5639.64 ms
99th percentile service time hourly_agg_with_filter_and_metrics 5735.22 ms
100th percentile service time hourly_agg_with_filter_and_metrics 5735.71 ms
error rate hourly_agg_with_filter_and_metrics 0 %
Min Throughput multi_term_agg 0.23 ops/s
Mean Throughput multi_term_agg 0.23 ops/s
Median Throughput multi_term_agg 0.23 ops/s
Max Throughput multi_term_agg 0.23 ops/s
50th percentile latency multi_term_agg 341787 ms
90th percentile latency multi_term_agg 477057 ms
99th percentile latency multi_term_agg 507512 ms
100th percentile latency multi_term_agg 509265 ms
50th percentile service time multi_term_agg 4411.76 ms
90th percentile service time multi_term_agg 4554.54 ms
99th percentile service time multi_term_agg 4635.49 ms
100th percentile service time multi_term_agg 4661.35 ms
error rate multi_term_agg 0 %
Min Throughput scroll 25.05 pages/s
Mean Throughput scroll 25.08 pages/s
Median Throughput scroll 25.08 pages/s
Max Throughput scroll 25.15 pages/s
50th percentile latency scroll 206.36 ms
90th percentile latency scroll 210.892 ms
99th percentile latency scroll 263.809 ms
100th percentile latency scroll 285.433 ms
50th percentile service time scroll 204.332 ms
90th percentile service time scroll 208.733 ms
99th percentile service time scroll 261.858 ms
100th percentile service time scroll 283.312 ms
error rate scroll 0 %
Min Throughput desc_sort_size 1 ops/s
Mean Throughput desc_sort_size 1 ops/s
Median Throughput desc_sort_size 1 ops/s
Max Throughput desc_sort_size 1 ops/s
50th percentile latency desc_sort_size 7.68692 ms
90th percentile latency desc_sort_size 8.31439 ms
99th percentile latency desc_sort_size 8.99408 ms
100th percentile latency desc_sort_size 9.06204 ms
50th percentile service time desc_sort_size 5.88221 ms
90th percentile service time desc_sort_size 6.5777 ms
99th percentile service time desc_sort_size 6.95968 ms
100th percentile service time desc_sort_size 6.97288 ms
error rate desc_sort_size 0 %
Min Throughput asc_sort_size 1 ops/s
Mean Throughput asc_sort_size 1 ops/s
Median Throughput asc_sort_size 1 ops/s
Max Throughput asc_sort_size 1 ops/s
50th percentile latency asc_sort_size 8.5027 ms
90th percentile latency asc_sort_size 9.24479 ms
99th percentile latency asc_sort_size 12.3509 ms
100th percentile latency asc_sort_size 14.5397 ms
50th percentile service time asc_sort_size 6.59937 ms
90th percentile service time asc_sort_size 7.3174 ms
99th percentile service time asc_sort_size 10.293 ms
100th percentile service time asc_sort_size 12.2812 ms
error rate asc_sort_size 0 %
Min Throughput desc_sort_timestamp 1 ops/s
Mean Throughput desc_sort_timestamp 1 ops/s
Median Throughput desc_sort_timestamp 1 ops/s
Max Throughput desc_sort_timestamp 1 ops/s
50th percentile latency desc_sort_timestamp 13.598 ms
90th percentile latency desc_sort_timestamp 14.4817 ms
99th percentile latency desc_sort_timestamp 17.7357 ms
100th percentile latency desc_sort_timestamp 18.0537 ms
50th percentile service time desc_sort_timestamp 11.8965 ms
90th percentile service time desc_sort_timestamp 12.3499 ms
99th percentile service time desc_sort_timestamp 15.6305 ms
100th percentile service time desc_sort_timestamp 15.9471 ms
error rate desc_sort_timestamp 0 %
Min Throughput asc_sort_timestamp 1 ops/s
Mean Throughput asc_sort_timestamp 1 ops/s
Median Throughput asc_sort_timestamp 1 ops/s
Max Throughput asc_sort_timestamp 1 ops/s
50th percentile latency asc_sort_timestamp 8.13947 ms
90th percentile latency asc_sort_timestamp 8.84755 ms
99th percentile latency asc_sort_timestamp 9.4685 ms
100th percentile latency asc_sort_timestamp 9.53748 ms
50th percentile service time asc_sort_timestamp 6.39604 ms
90th percentile service time asc_sort_timestamp 6.88094 ms
99th percentile service time asc_sort_timestamp 7.43338 ms
100th percentile service time asc_sort_timestamp 7.54803 ms
error rate asc_sort_timestamp 0 %
Min Throughput desc_sort_with_after_timestamp 1.01 ops/s
Mean Throughput desc_sort_with_after_timestamp 1.02 ops/s
Median Throughput desc_sort_with_after_timestamp 1.02 ops/s
Max Throughput desc_sort_with_after_timestamp 1.1 ops/s
50th percentile latency desc_sort_with_after_timestamp 6.47168 ms
90th percentile latency desc_sort_with_after_timestamp 7.02008 ms
99th percentile latency desc_sort_with_after_timestamp 7.32734 ms
100th percentile latency desc_sort_with_after_timestamp 7.41164 ms
50th percentile service time desc_sort_with_after_timestamp 4.60445 ms
90th percentile service time desc_sort_with_after_timestamp 4.88947 ms
99th percentile service time desc_sort_with_after_timestamp 5.22581 ms
100th percentile service time desc_sort_with_after_timestamp 5.28413 ms
error rate desc_sort_with_after_timestamp 0 %
Min Throughput asc_sort_with_after_timestamp 1.01 ops/s
Mean Throughput asc_sort_with_after_timestamp 1.02 ops/s
Median Throughput asc_sort_with_after_timestamp 1.02 ops/s
Max Throughput asc_sort_with_after_timestamp 1.1 ops/s
50th percentile latency asc_sort_with_after_timestamp 5.0587 ms
90th percentile latency asc_sort_with_after_timestamp 5.69287 ms
99th percentile latency asc_sort_with_after_timestamp 5.80965 ms
100th percentile latency asc_sort_with_after_timestamp 5.81911 ms
50th percentile service time asc_sort_with_after_timestamp 3.3025 ms
90th percentile service time asc_sort_with_after_timestamp 3.50386 ms
99th percentile service time asc_sort_with_after_timestamp 3.64394 ms
100th percentile service time asc_sort_with_after_timestamp 3.66393 ms
error rate asc_sort_with_after_timestamp 0 %
Min Throughput range_size 2.01 ops/s
Mean Throughput range_size 2.01 ops/s
Median Throughput range_size 2.01 ops/s
Max Throughput range_size 2.02 ops/s
50th percentile latency range_size 8.36179 ms
90th percentile latency range_size 9.00261 ms
99th percentile latency range_size 9.39315 ms
100th percentile latency range_size 9.63734 ms
50th percentile service time range_size 7.04188 ms
90th percentile service time range_size 7.34781 ms
99th percentile service time range_size 8.20924 ms
100th percentile service time range_size 8.56015 ms
error rate range_size 0 %
Min Throughput range_with_asc_sort 2.01 ops/s
Mean Throughput range_with_asc_sort 2.01 ops/s
Median Throughput range_with_asc_sort 2.01 ops/s
Max Throughput range_with_asc_sort 2.02 ops/s
50th percentile latency range_with_asc_sort 19.6227 ms
90th percentile latency range_with_asc_sort 22.5587 ms
99th percentile latency range_with_asc_sort 25.748 ms
100th percentile latency range_with_asc_sort 25.8429 ms
50th percentile service time range_with_asc_sort 17.9573 ms
90th percentile service time range_with_asc_sort 20.944 ms
99th percentile service time range_with_asc_sort 23.919 ms
100th percentile service time range_with_asc_sort 24.3742 ms
error rate range_with_asc_sort 0 %
Min Throughput range_with_desc_sort 2.01 ops/s
Mean Throughput range_with_desc_sort 2.01 ops/s
Median Throughput range_with_desc_sort 2.01 ops/s
Max Throughput range_with_desc_sort 2.02 ops/s
50th percentile latency range_with_desc_sort 21.377 ms
90th percentile latency range_with_desc_sort 24.4574 ms
99th percentile latency range_with_desc_sort 29.2544 ms
100th percentile latency range_with_desc_sort 29.6355 ms
50th percentile service time range_with_desc_sort 19.2692 ms
90th percentile service time range_with_desc_sort 22.1687 ms
99th percentile service time range_with_desc_sort 26.5679 ms
100th percentile service time range_with_desc_sort 26.92 ms
error rate range_with_desc_sort 0 %

@opensearch-ci-bot

Benchmark Baseline Comparison Results

Benchmark Results for Job: https://build.ci.opensearch.org/job/benchmark-compare/222/

Metric Task Baseline Contender Diff Unit
Cumulative indexing time of primary shards 0 0 0 min
Min cumulative indexing time across primary shard 0 0 0 min
Median cumulative indexing time across primary shard 0 0 0 min
Max cumulative indexing time across primary shard 0 0 0 min
Cumulative indexing throttle time of primary shards 0 0 0 min
Min cumulative indexing throttle time across primary shard 0 0 0 min
Median cumulative indexing throttle time across primary shard 0 0 0 min
Max cumulative indexing throttle time across primary shard 0 0 0 min
Cumulative merge time of primary shards 0 0 0 min
Cumulative merge count of primary shards 0 0 0
Min cumulative merge time across primary shard 0 0 0 min
Median cumulative merge time across primary shard 0 0 0 min
Max cumulative merge time across primary shard 0 0 0 min
Cumulative merge throttle time of primary shards 0 0 0 min
Min cumulative merge throttle time across primary shard 0 0 0 min
Median cumulative merge throttle time across primary shard 0 0 0 min
Max cumulative merge throttle time across primary shard 0 0 0 min
Cumulative refresh time of primary shards 0 0 0 min
Cumulative refresh count of primary shards 31 31 0
Min cumulative refresh time across primary shard 0 0 0 min
Median cumulative refresh time across primary shard 0 0 0 min
Max cumulative refresh time across primary shard 0 0 0 min
Cumulative flush time of primary shards 0 0 0 min
Cumulative flush count of primary shards 8 8 0
Min cumulative flush time across primary shard 0 0 0 min
Median cumulative flush time across primary shard 0 0 0 min
Max cumulative flush time across primary shard 0 0 0 min
Total Young Gen GC time 2.224 1.339 -0.885 s
Total Young Gen GC count 71 69 -2
Total Old Gen GC time 0 0 0 s
Total Old Gen GC count 0 0 0
Store size 15.3221 15.3221 0 GB
Translog size 4.09782e-07 4.09782e-07 0 GB
Heap used for segments 0 0 0 MB
Heap used for doc values 0 0 0 MB
Heap used for terms 0 0 0 MB
Heap used for norms 0 0 0 MB
Heap used for points 0 0 0 MB
Heap used for stored fields 0 0 0 MB
Segment count 73 73 0
100th percentile latency wait-for-snapshot-recovery 300002 300001 -0.09375 ms
100th percentile service time wait-for-snapshot-recovery 300002 300001 -0.09375 ms
error rate wait-for-snapshot-recovery 100 100 0 %
Min Throughput match-all 7.99878 7.99845 -0.00033 ops/s
Mean Throughput match-all 7.99895 7.9986 -0.00035 ops/s
Median Throughput match-all 7.99895 7.9986 -0.00035 ops/s
Max Throughput match-all 7.99907 7.99875 -0.00033 ops/s
50th percentile latency match-all 5.32565 3.86757 -1.45809 ms
90th percentile latency match-all 6.07248 4.35328 -1.7192 ms
99th percentile latency match-all 7.59107 4.95502 -2.63605 ms
100th percentile latency match-all 8.13901 5.19581 -2.94321 ms
50th percentile service time match-all 4.30108 2.92093 -1.38015 ms
90th percentile service time match-all 4.72584 3.1362 -1.58964 ms
99th percentile service time match-all 6.8506 3.76365 -3.08695 ms
100th percentile service time match-all 7.54375 3.82623 -3.71752 ms
error rate match-all 0 0 0 %
Min Throughput term 49.8489 49.8646 0.01567 ops/s
Mean Throughput term 49.8568 49.8712 0.01437 ops/s
Median Throughput term 49.8568 49.8712 0.01437 ops/s
Max Throughput term 49.8647 49.8778 0.01307 ops/s
50th percentile latency term 4.65479 3.99056 -0.66423 ms
90th percentile latency term 5.03181 4.39448 -0.63732 ms
99th percentile latency term 7.58546 9.25511 1.66965 ms
100th percentile latency term 9.72025 13.8603 4.1401 ms
50th percentile service time term 3.87828 3.1054 -0.77288 ms
90th percentile service time term 4.18169 3.2961 -0.88559 ms
99th percentile service time term 4.45204 6.04156 1.58952 ms
100th percentile service time term 4.45568 8.27878 3.8231 ms
error rate term 0 0 0 %
Min Throughput range 1.00467 1.00477 0.0001 ops/s
Mean Throughput range 1.00646 1.0066 0.00014 ops/s
Median Throughput range 1.00621 1.00634 0.00013 ops/s
Max Throughput range 1.00928 1.00948 0.0002 ops/s
50th percentile latency range 6.58673 5.89274 -0.69399 ms
90th percentile latency range 7.05424 6.29099 -0.76325 ms
99th percentile latency range 8.1772 6.72695 -1.45025 ms
100th percentile latency range 8.18863 6.73909 -1.44954 ms
50th percentile service time range 4.80683 3.99803 -0.8088 ms
90th percentile service time range 5.25612 4.3071 -0.94902 ms
99th percentile service time range 6.27654 4.74987 -1.52667 ms
100th percentile service time range 6.2796 4.81262 -1.46698 ms
error rate range 0 0 0 %
Min Throughput 200s-in-range 32.6912 32.892 0.20078 ops/s
Mean Throughput 200s-in-range 32.7092 32.8975 0.18835 ops/s
Median Throughput 200s-in-range 32.7097 32.8972 0.18749 ops/s
Max Throughput 200s-in-range 32.7266 32.9034 0.17679 ops/s
50th percentile latency 200s-in-range 5.41822 5.00815 -0.41007 ms
90th percentile latency 200s-in-range 6.71885 5.91968 -0.79917 ms
99th percentile latency 200s-in-range 7.24499 6.92215 -0.32283 ms
100th percentile latency 200s-in-range 7.24854 7.2257 -0.02284 ms
50th percentile service time 200s-in-range 4.70159 3.73302 -0.96857 ms
90th percentile service time 200s-in-range 5.11752 3.89549 -1.22203 ms
99th percentile service time 200s-in-range 5.87107 5.70838 -0.16269 ms
100th percentile service time 200s-in-range 5.87687 6.93133 1.05446 ms
error rate 200s-in-range 0 0 0 %
Min Throughput 400s-in-range 50.028 50.0083 -0.01974 ops/s
Mean Throughput 400s-in-range 50.0295 50.0086 -0.02094 ops/s
Median Throughput 400s-in-range 50.0295 50.0086 -0.02094 ops/s
Max Throughput 400s-in-range 50.031 50.0089 -0.02215 ops/s
50th percentile latency 400s-in-range 4.53014 3.50179 -1.02835 ms
90th percentile latency 400s-in-range 4.9212 3.89675 -1.02445 ms
99th percentile latency 400s-in-range 7.47314 10.4887 3.01557 ms
100th percentile latency 400s-in-range 9.87288 14.8726 4.99973 ms
50th percentile service time 400s-in-range 3.73068 2.71335 -1.01733 ms
90th percentile service time 400s-in-range 3.83258 2.85134 -0.98125 ms
99th percentile service time 400s-in-range 4.25352 9.415 5.16148 ms
100th percentile service time 400s-in-range 4.2559 13.579 9.32313 ms
error rate 400s-in-range 0 0 0 %
Min Throughput hourly_agg 1.00537 1.00498 -0.00038 ops/s
Mean Throughput hourly_agg 1.00882 1.0082 -0.00063 ops/s
Median Throughput hourly_agg 1.00803 1.00746 -0.00057 ops/s
Max Throughput hourly_agg 1.01594 1.0148 -0.00114 ops/s
50th percentile latency hourly_agg 14.6851 13.6735 -1.01157 ms
90th percentile latency hourly_agg 15.7074 14.5529 -1.1545 ms
99th percentile latency hourly_agg 17.0104 15.8279 -1.18252 ms
100th percentile latency hourly_agg 17.1165 16.0163 -1.10014 ms
50th percentile service time hourly_agg 12.8498 11.8067 -1.04309 ms
90th percentile service time hourly_agg 13.7898 12.6147 -1.17509 ms
99th percentile service time hourly_agg 15.158 13.6716 -1.48635 ms
100th percentile service time hourly_agg 15.6523 13.7522 -1.90004 ms
error rate hourly_agg 0 0 0 %
Min Throughput hourly_agg_with_filter 1.00231 1.00185 -0.00046 ops/s
Mean Throughput hourly_agg_with_filter 1.0038 1.00304 -0.00075 ops/s
Median Throughput hourly_agg_with_filter 1.00346 1.00277 -0.00068 ops/s
Max Throughput hourly_agg_with_filter 1.00683 1.00548 -0.00135 ops/s
50th percentile latency hourly_agg_with_filter 85.6173 80.4523 -5.16499 ms
90th percentile latency hourly_agg_with_filter 96.0534 93.0656 -2.98787 ms
99th percentile latency hourly_agg_with_filter 131.517 134.039 2.52246 ms
100th percentile latency hourly_agg_with_filter 161.206 168.4 7.19386 ms
50th percentile service time hourly_agg_with_filter 83.6109 78.7778 -4.83307 ms
90th percentile service time hourly_agg_with_filter 94.5287 91.1438 -3.38487 ms
99th percentile service time hourly_agg_with_filter 129.515 132.398 2.88272 ms
100th percentile service time hourly_agg_with_filter 159.023 167.072 8.04866 ms
error rate hourly_agg_with_filter 0 0 0 %
Min Throughput hourly_agg_with_filter_and_metrics 0.248516 0.179369 -0.06915 ops/s
Mean Throughput hourly_agg_with_filter_and_metrics 0.249399 0.179661 -0.06974 ops/s
Median Throughput hourly_agg_with_filter_and_metrics 0.249187 0.179688 -0.0695 ops/s
Max Throughput hourly_agg_with_filter_and_metrics 0.250754 0.179823 -0.07093 ops/s
50th percentile latency hourly_agg_with_filter_and_metrics 302449 458401 155952 ms
90th percentile latency hourly_agg_with_filter_and_metrics 423727 640536 216809 ms
99th percentile latency hourly_agg_with_filter_and_metrics 451415 681837 230423 ms
100th percentile latency hourly_agg_with_filter_and_metrics 453037 684118 231082 ms
50th percentile service time hourly_agg_with_filter_and_metrics 3990.75 5540.04 1549.29 ms
90th percentile service time hourly_agg_with_filter_and_metrics 4299.6 5639.64 1340.04 ms
99th percentile service time hourly_agg_with_filter_and_metrics 4508.06 5735.22 1227.16 ms
100th percentile service time hourly_agg_with_filter_and_metrics 4558.96 5735.71 1176.75 ms
error rate hourly_agg_with_filter_and_metrics 0 0 0 %
Min Throughput multi_term_agg 0.215344 0.225703 0.01036 ops/s
Mean Throughput multi_term_agg 0.216641 0.227269 0.01063 ops/s
Median Throughput multi_term_agg 0.216682 0.227461 0.01078 ops/s
Max Throughput multi_term_agg 0.217702 0.227794 0.01009 ops/s
50th percentile latency multi_term_agg 362948 341787 -21161.5 ms
90th percentile latency multi_term_agg 506426 477057 -29368.5 ms
99th percentile latency multi_term_agg 537644 507512 -30132.3 ms
100th percentile latency multi_term_agg 539440 509265 -30175.1 ms
50th percentile service time multi_term_agg 4592.08 4411.76 -180.32 ms
90th percentile service time multi_term_agg 4862.8 4554.54 -308.262 ms
99th percentile service time multi_term_agg 5257.77 4635.49 -622.276 ms
100th percentile service time multi_term_agg 5321.51 4661.35 -660.162 ms
error rate multi_term_agg 0 0 0 %
Min Throughput scroll 25.0446 25.0511 0.00646 pages/s
Mean Throughput scroll 25.0734 25.0841 0.01077 pages/s
Median Throughput scroll 25.0668 25.0766 0.00982 pages/s
Max Throughput scroll 25.133 25.1525 0.01947 pages/s
50th percentile latency scroll 223.821 206.36 -17.4613 ms
90th percentile latency scroll 233.25 210.892 -22.3581 ms
99th percentile latency scroll 271.945 263.809 -8.13596 ms
100th percentile latency scroll 295.632 285.433 -10.1997 ms
50th percentile service time scroll 221.815 204.332 -17.4827 ms
90th percentile service time scroll 231.237 208.733 -22.5045 ms
99th percentile service time scroll 270.336 261.858 -8.47842 ms
100th percentile service time scroll 294.077 283.312 -10.7643 ms
error rate scroll 0 0 0 %
Min Throughput desc_sort_size 1.00319 1.00319 0 ops/s
Mean Throughput desc_sort_size 1.00388 1.00388 0 ops/s
Median Throughput desc_sort_size 1.00382 1.00382 0 ops/s
Max Throughput desc_sort_size 1.00477 1.00477 0 ops/s
50th percentile latency desc_sort_size 8.142 7.68692 -0.45508 ms
90th percentile latency desc_sort_size 8.88212 8.31439 -0.56773 ms
99th percentile latency desc_sort_size 10.3515 8.99408 -1.35744 ms
100th percentile latency desc_sort_size 10.6359 9.06204 -1.57388 ms
50th percentile service time desc_sort_size 6.38267 5.88221 -0.50046 ms
90th percentile service time desc_sort_size 6.89728 6.5777 -0.31959 ms
99th percentile service time desc_sort_size 8.45037 6.95968 -1.49069 ms
100th percentile service time desc_sort_size 8.65789 6.97288 -1.68501 ms
error rate desc_sort_size 0 0 0 %
Min Throughput asc_sort_size 1.0032 1.00319 -1e-05 ops/s
Mean Throughput asc_sort_size 1.00389 1.00388 -1e-05 ops/s
Median Throughput asc_sort_size 1.00384 1.00382 -2e-05 ops/s
Max Throughput asc_sort_size 1.00479 1.00477 -2e-05 ops/s
50th percentile latency asc_sort_size 9.0643 8.5027 -0.5616 ms
90th percentile latency asc_sort_size 9.87484 9.24479 -0.63005 ms
99th percentile latency asc_sort_size 10.7149 12.3509 1.63594 ms
100th percentile latency asc_sort_size 10.8255 14.5397 3.71419 ms
50th percentile service time asc_sort_size 7.24988 6.59937 -0.65051 ms
90th percentile service time asc_sort_size 7.9673 7.3174 -0.6499 ms
99th percentile service time asc_sort_size 9.00948 10.293 1.28351 ms
100th percentile service time asc_sort_size 9.30451 12.2812 2.97672 ms
error rate asc_sort_size 0 0 0 %
Min Throughput desc_sort_timestamp 1.00309 1.0031 1e-05 ops/s
Mean Throughput desc_sort_timestamp 1.00376 1.00377 1e-05 ops/s
Median Throughput desc_sort_timestamp 1.00371 1.00372 1e-05 ops/s
Max Throughput desc_sort_timestamp 1.00462 1.00464 2e-05 ops/s
50th percentile latency desc_sort_timestamp 14.078 13.598 -0.48004 ms
90th percentile latency desc_sort_timestamp 14.9378 14.4817 -0.45606 ms
99th percentile latency desc_sort_timestamp 16.0152 17.7357 1.72057 ms
100th percentile latency desc_sort_timestamp 16.2004 18.0537 1.85327 ms
50th percentile service time desc_sort_timestamp 12.183 11.8965 -0.28655 ms
90th percentile service time desc_sort_timestamp 12.967 12.3499 -0.6171 ms
99th percentile service time desc_sort_timestamp 14.0214 15.6305 1.60903 ms
100th percentile service time desc_sort_timestamp 14.024 15.9471 1.92305 ms
error rate desc_sort_timestamp 0 0 0 %
Min Throughput asc_sort_timestamp 1.00325 1.00326 0 ops/s
Mean Throughput asc_sort_timestamp 1.00395 1.00396 0 ops/s
Median Throughput asc_sort_timestamp 1.0039 1.0039 1e-05 ops/s
Max Throughput asc_sort_timestamp 1.00487 1.00487 -0 ops/s
50th percentile latency asc_sort_timestamp 8.16401 8.13947 -0.02454 ms
90th percentile latency asc_sort_timestamp 8.74619 8.84755 0.10137 ms
99th percentile latency asc_sort_timestamp 9.3222 9.4685 0.1463 ms
100th percentile latency asc_sort_timestamp 9.39199 9.53748 0.14549 ms
50th percentile service time asc_sort_timestamp 6.35057 6.39604 0.04547 ms
90th percentile service time asc_sort_timestamp 6.78256 6.88094 0.09838 ms
99th percentile service time asc_sort_timestamp 7.19975 7.43338 0.23362 ms
100th percentile service time asc_sort_timestamp 7.29475 7.54803 0.25327 ms
error rate asc_sort_timestamp 0 0 0 %
Min Throughput desc_sort_with_after_timestamp 1.00898 1.009 2e-05 ops/s
Mean Throughput desc_sort_with_after_timestamp 1.0239 1.02397 8e-05 ops/s
Median Throughput desc_sort_with_after_timestamp 1.01644 1.01649 4e-05 ops/s
Max Throughput desc_sort_with_after_timestamp 1.09759 1.09801 0.00043 ops/s
50th percentile latency desc_sort_with_after_timestamp 6.15593 6.47168 0.31574 ms
90th percentile latency desc_sort_with_after_timestamp 6.58552 7.02008 0.43456 ms
99th percentile latency desc_sort_with_after_timestamp 7.05366 7.32734 0.27369 ms
100th percentile latency desc_sort_with_after_timestamp 7.16233 7.41164 0.24931 ms
50th percentile service time desc_sort_with_after_timestamp 4.30318 4.60445 0.30127 ms
90th percentile service time desc_sort_with_after_timestamp 4.6949 4.88947 0.19457 ms
99th percentile service time desc_sort_with_after_timestamp 5.05736 5.22581 0.16844 ms
100th percentile service time desc_sort_with_after_timestamp 5.11328 5.28413 0.17085 ms
error rate desc_sort_with_after_timestamp 0 0 0 %
Min Throughput asc_sort_with_after_timestamp 1.00905 1.00906 1e-05 ops/s
Mean Throughput asc_sort_with_after_timestamp 1.0241 1.02412 2e-05 ops/s
Median Throughput asc_sort_with_after_timestamp 1.01658 1.01659 1e-05 ops/s
Max Throughput asc_sort_with_after_timestamp 1.09856 1.09863 8e-05 ops/s
50th percentile latency asc_sort_with_after_timestamp 5.43168 5.0587 -0.37298 ms
90th percentile latency asc_sort_with_after_timestamp 5.78791 5.69287 -0.09504 ms
99th percentile latency asc_sort_with_after_timestamp 6.01717 5.80965 -0.20752 ms
100th percentile latency asc_sort_with_after_timestamp 6.06881 5.81911 -0.2497 ms
50th percentile service time asc_sort_with_after_timestamp 3.51118 3.3025 -0.20869 ms
90th percentile service time asc_sort_with_after_timestamp 3.71886 3.50386 -0.215 ms
99th percentile service time asc_sort_with_after_timestamp 3.92875 3.64394 -0.28481 ms
100th percentile service time asc_sort_with_after_timestamp 4.01702 3.66393 -0.3531 ms
error rate asc_sort_with_after_timestamp 0 0 0 %
Min Throughput range_size 2.00946 2.00959 0.00012 ops/s
Mean Throughput range_size 2.01309 2.01326 0.00016 ops/s
Median Throughput range_size 2.0126 2.01274 0.00015 ops/s
Max Throughput range_size 2.01873 2.01899 0.00026 ops/s
50th percentile latency range_size 8.83497 8.36179 -0.47318 ms
90th percentile latency range_size 9.73498 9.00261 -0.73237 ms
99th percentile latency range_size 11.5954 9.39315 -2.20229 ms
100th percentile latency range_size 11.9057 9.63734 -2.26832 ms
50th percentile service time range_size 7.50018 7.04188 -0.4583 ms
90th percentile service time range_size 8.20468 7.34781 -0.85687 ms
99th percentile service time range_size 10.4258 8.20924 -2.21654 ms
100th percentile service time range_size 10.6851 8.56015 -2.12496 ms
error rate range_size 0 0 0 %
Min Throughput range_with_asc_sort 2.00808 2.00807 -0 ops/s
Mean Throughput range_with_asc_sort 2.01119 2.01117 -2e-05 ops/s
Median Throughput range_with_asc_sort 2.01076 2.01075 -0 ops/s
Max Throughput range_with_asc_sort 2.01604 2.016 -4e-05 ops/s
50th percentile latency range_with_asc_sort 19.5364 19.6227 0.0863 ms
90th percentile latency range_with_asc_sort 22.1304 22.5587 0.42829 ms
99th percentile latency range_with_asc_sort 24.7692 25.748 0.9788 ms
100th percentile latency range_with_asc_sort 25.3118 25.8429 0.53113 ms
50th percentile service time range_with_asc_sort 17.8614 17.9573 0.09583 ms
90th percentile service time range_with_asc_sort 20.5443 20.944 0.39971 ms
99th percentile service time range_with_asc_sort 23.0851 23.919 0.83398 ms
100th percentile service time range_with_asc_sort 23.0867 24.3742 1.28751 ms
error rate range_with_asc_sort 0 0 0 %
Min Throughput range_with_desc_sort 2.00916 2.00935 0.00018 ops/s
Mean Throughput range_with_desc_sort 2.01267 2.01293 0.00026 ops/s
Median Throughput range_with_desc_sort 2.01218 2.01244 0.00026 ops/s
Max Throughput range_with_desc_sort 2.01815 2.0185 0.00035 ops/s
50th percentile latency range_with_desc_sort 22.4888 21.377 -1.11176 ms
90th percentile latency range_with_desc_sort 25.5058 24.4574 -1.04836 ms
99th percentile latency range_with_desc_sort 27.1699 29.2544 2.08444 ms
100th percentile latency range_with_desc_sort 27.5161 29.6355 2.11939 ms
50th percentile service time range_with_desc_sort 20.2721 19.2692 -1.00295 ms
90th percentile service time range_with_desc_sort 23.2618 22.1687 -1.09307 ms
99th percentile service time range_with_desc_sort 24.5961 26.5679 1.97178 ms
100th percentile service time range_with_desc_sort 24.8528 26.92 2.06717 ms
error rate range_with_desc_sort 0 0 0 %
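For anyone cross-checking the comparison table above: the Diff column appears to be contender minus baseline for each metric row. A minimal sketch of that check (the `diff` helper below is illustrative only, not part of opensearch-benchmark):

```python
# Illustrative helper (not part of opensearch-benchmark): the Diff column
# in the baseline-comparison table is contender minus baseline.
def diff(baseline: float, contender: float) -> float:
    """Return the contender-minus-baseline delta shown in the Diff column."""
    return contender - baseline

# Example row from the table:
# "50th percentile latency term 4.65479 3.99056 -0.66423 ms"
delta = diff(4.65479, 3.99056)
print(round(delta, 5))  # -0.66423
```

Note that a negative Diff on a latency or service-time row means the contender (this PR) was faster than the baseline.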


Labels

Search:Aggregations, Search:Performance, v3.4.0 (Issues and PRs related to version 3.4.0)

Projects

Status: In Progress

Development

Successfully merging this pull request may close these issues.

Add skip_list logic to auto date histogram, confirm with big5's range-auto-date-history-with-metrcis, range-auto-date-history

3 participants