This repository has been archived by the owner on Aug 2, 2022. It is now read-only.
Performance comparison of multisearch and date range bucket aggregation #63
I ran two benchmarks comparing multisearch and a time range query: the date_range query is 2~10 times faster than multisearch. Before every query run, I restarted the EC2 nodes to invalidate the OS and shard caches. date_range query:
multisearch query:
Environment:
Two benchmarks:
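For reference, a date_range bucket aggregation of the kind benchmarked here fetches all intervals in a single search request. The sketch below builds such a request body; the field names (`timestamp`, `value`), interval, and bucket count are illustrative, not taken from the benchmark:

```python
from datetime import datetime, timedelta, timezone

def date_range_agg_body(field, start, interval, num_buckets):
    """Build one search body whose date_range aggregation has one
    bucket per interval (illustrative field names)."""
    ranges = []
    for i in range(num_buckets):
        lo = start + i * interval
        hi = lo + interval
        ranges.append({"from": int(lo.timestamp() * 1000),
                       "to": int(hi.timestamp() * 1000)})
    return {
        "size": 0,  # hits are not needed, only aggregation results
        "aggs": {
            "per_interval": {
                "date_range": {"field": field, "ranges": ranges},
                # one sub-aggregation per bucket, e.g. an average
                "aggs": {"value_avg": {"avg": {"field": "value"}}},
            }
        },
    }

body = date_range_agg_body(
    "timestamp",
    datetime(2020, 3, 16, tzinfo=timezone.utc),
    timedelta(minutes=10),
    360,
)
```

One request with 360 range buckets replaces 360 separate searches, which is consistent with the speedup reported above.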
kaituo added a commit to kaituo/anomaly-detection that referenced this issue on Mar 16, 2020:
Preview does not use all data in the given time range because that would be costly. Previously, we sampled data by issuing multiple queries restricted to shard 0; the purpose of the shard 0 restriction was to reduce system cost. The nab_art_daily_jumpsup data set has one doc in each interval, and the docs are spread across 5 shards. Even though we issue 360 queries, we only get 70~80 samples back by querying shard 0. Together with interpolated data points, the preview run misses significant portions of the data required to train models (400 is the minimum) and thus returns empty preview results. This PR fixes the issue by removing the shard 0 search restriction.

Previously, the preview API issued multiple queries encapsulated in a multisearch request (the request can contain at most 360 search queries). The same result can be obtained via a date range query with multiple range buckets. We show that a date range query is 2~10 times faster than a multisearch request (opendistro-for-elasticsearch#63). This PR replaces the multisearch request with a date range query. This PR also removes the unused field scriptService in SearchFeatureDao.

Testing done:
- Previous preview unit tests pass.
- Manually verified that date range query results are processed correctly by cross-checking intermediate logs.
- Manually verified that preview results with the multisearch and date range implementations are the same.
- Manually verified that preview doesn't show empty results with the nab_art_daily_jumpsup data set after the fix.
kaituo added a commit that referenced this issue on Mar 21, 2020:
* Fix empty preview result due to insufficient samples

Preview does not use all data in the given time range because that would be costly. Previously, we sampled data by issuing multiple queries restricted to shard 0; the purpose of the shard 0 restriction was to reduce system cost. The nab_art_daily_jumpsup data set has one doc in each interval, and the docs are spread across 5 shards. Even though we issue 360 queries, we only get 70~80 samples back by querying shard 0. Together with interpolated data points, the preview run misses significant portions of the data required to train models (400 is the minimum) and thus returns empty preview results. This PR fixes the issue by removing the shard 0 search restriction.

Previously, the preview API issued multiple queries encapsulated in a multisearch request (the request can contain at most 360 search queries). The same result can be obtained via a date range query with multiple range buckets. We show that a date range query is 2~10 times faster than a multisearch request (#63). This PR replaces the multisearch request with a date range query.

This PR also:
- removes the unused field scriptService in SearchFeatureDao.
- fixes the REST status returned during exceptions.
- fixes a bug in query generation: we generated the aggregation query twice, once with the filter query and once separately.

Testing done:
- Previous preview unit tests pass.
- Manually verified that date range query results are processed correctly by cross-checking intermediate logs.
- Manually verified that preview results with the multisearch and date range implementations are the same.
- Manually verified that preview doesn't show empty results with the nab_art_daily_jumpsup data set after the fix.
Currently, we use multisearch in SearchFeatureDao.
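The multisearch approach issues one search per interval, so the payload grows with the number of intervals. This sketch builds the NDJSON-style `_msearch` payload (one header line plus one body line per search); the field names, interval, and query count are illustrative, and the per-interval aggregation is a hypothetical stand-in for the feature aggregations in SearchFeatureDao:

```python
from datetime import datetime, timedelta, timezone

def msearch_payload(field, start, interval, num_queries):
    """Build the _msearch payload as a list of dicts: for each
    interval, an empty header (default index) followed by a search
    body with a range filter over that interval."""
    lines = []
    for i in range(num_queries):
        lo = start + i * interval
        hi = lo + interval
        lines.append({})  # header line: use the default index
        lines.append({
            "size": 0,
            "query": {"range": {field: {
                "gte": int(lo.timestamp() * 1000),
                "lt": int(hi.timestamp() * 1000)}}},
            # hypothetical per-interval feature aggregation
            "aggs": {"value_avg": {"avg": {"field": "value"}}},
        })
    return lines

payload = msearch_payload(
    "timestamp",
    datetime(2020, 3, 16, tzinfo=timezone.utc),
    timedelta(minutes=10),
    360,
)
# 360 intervals produce 720 payload lines; the date_range
# aggregation covers the same intervals in a single request.
```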