Added aggregation precomputation for rare terms #18106
Conversation
❌ Gradle check result for f6371a2: FAILURE. Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

❌ Gradle check result for 844164e: FAILURE. Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

❌ Gradle check result for 0f3bd75: FAILURE. Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
It looks like I am failing the test org.opensearch.cache.common.tier.TieredSpilloverCacheStatsIT.testClosingShard; however, when I ran this test on my local machine, it passed. What could be happening? Edit: Sorry, it actually looks like the test did not pass on my system either. I also ran it against the current codebase, without any of my changes, and it still failed, so I do not think my code affects this test.
Signed-off-by: Anthony Leong <aj.leong623@gmail.com>
❌ Gradle check result for b5e08d8: FAILURE. Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?

❌ Gradle check result for ebca7e1: FAILURE. Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
Signed-off-by: Anthony Leong <aj.leong623@gmail.com>
❌ Gradle check result for 9d73b57: FAILURE. Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
… completed action items. Signed-off-by: Anthony Leong <aj.leong623@gmail.com>
Signed-off-by: Anthony Leong <aj.leong623@gmail.com>
@sandeshkr419 I believe all the comments have been addressed. Rather than making a new class to also return the expected count of the missing aggregation, I simply put a check in the searchAndReduceCounting function. I also removed a lot of the non-deterministic tests and made them deterministic, and I added extra tests for better coverage. The other action item is adding the workloads to the opensearch-benchmark-workloads repository. Do I just add those query bodies in the big5/queries folder?
❌ Gradle check result for b60c221: null. Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
// TODO: A note is that in scripted aggregations, the way of collecting from buckets is determined from
// the script aggregator. For now, we will not be able to support the script aggregation.

if (subAggregators.length > 0 || includeExclude != null || fieldName == null) {
You can pull the null checks for weight and config up to the top of the method so that you don't have to assert them again. Right now you are checking config != null twice, and checking weight.count(ctx) == ctx.reader().getDocCount(fieldName) before checking weight == null.
We might still be able to proceed if config == null, but if there is a script, or if there is both a missing parameter and actual missing values, we will not be able to use the precomputation optimization. I can move the weight check up, though.
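For illustration, a minimal sketch of the hoisted guard ordering being discussed; the names weight, config, and fieldName come from the snippets quoted in this thread, but the method wrapper and exact structure are assumptions, not the PR's final code:

private boolean canPrecompute(LeafReaderContext ctx) throws IOException {
    // Hoisted: assert each null check exactly once, up front.
    if (weight == null || fieldName == null) {
        return false;
    }
    if (config != null) {
        if (config.script() != null) {
            return false; // scripted values cannot be read from the terms dictionary
        }
        // A configured missing value is only safe when no document actually misses the field.
        if (config.missing() != null && weight.count(ctx) != ctx.reader().getDocCount(fieldName)) {
            return false;
        }
    }
    return true;
}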
// field missing, we might not be able to use the index unless there is some way we can
// calculate which ordinal value that missing field is (something I am not sure how to
// do yet).
if (config != null && config.missing() != null && ((weight.count(ctx) == ctx.reader().getDocCount(fieldName)) == false)) {
nit: weight.count(ctx) != ctx.reader().getDocCount(fieldName) instead of asserting equality as false.
Right. I looked at the formatting guidelines again, and the == false form is only required in place of unary negations; a != comparison is fine here.
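For reference, the quoted condition rewritten per the nit above (a sketch against the diff lines quoted in this thread, not the final code):

if (config != null && config.missing() != null && weight.count(ctx) != ctx.reader().getDocCount(fieldName)) {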
// The optimization could only be used if there are no deleted documents and the top-level
// query matches all documents in the segment.
if (weight == null) {
nit: Moving this null check towards the start of the method would make it more readable.
if (bucketOrdinal < 0) { // already seen
    bucketOrdinal = -1 - bucketOrdinal;
}
int amount = stringTermsEnum.docFreq();
nit: rename amount to docCount or docFreq
    bucketOrdinal = -1 - bucketOrdinal;
}
int amount = stringTermsEnum.docFreq();
if (resultStrategy instanceof SignificantTermsResults) {
nit:

if (resultStrategy instanceof SignificantTermsResults sigTermsResultStrategy) {
    sigTermsResultStrategy.updateSubsetSizes(0L, docCount);
}
if (fieldName == null) {
    // The optimization does not work when there are subaggregations or if there is a filter.
    // The query has to be a match all, otherwise
    //
I think the comment is misplaced here. Can you please check the comments on the entire PR once? Also, please remove empty comment lines.
Signed-off-by: Anthony Leong <aj.leong623@gmail.com>
❌ Gradle check result for 0375104: FAILURE. Please examine the workflow log, locate, and copy-paste the failure(s) below, then iterate to green. Is the failure a flaky test unrelated to your change?
Signed-off-by: Anthony Leong <aj.leong623@gmail.com>
This PR is stalled because it has been open for 30 days with no activity.
Since you already have a new rebased PR, I'm closing this one to reduce noise. I'll continue reviewing the new PR.
Description
This change builds on the techniques from @sandeshkr419's pull request #11643 to precompute aggregations for match-all and match-none queries. When the field is indexed and the segment contains no deletions, we can read term statistics directly from the termsEnum to precompute the aggregation. We can verify that no documents are deleted by taking the weight's count for the segment and checking that it matches the reader's maxDoc.
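A minimal sketch of the idea, using the Lucene APIs named above and a BytesKeyedBucketOrds-style ordinals structure like the one visible in the review snippets; the surrounding helper names are assumptions, not the PR's exact code:

// Precompute per-term doc counts from the terms dictionary when the
// top-level query matches every live document in the segment.
Terms terms = ctx.reader().terms(fieldName);
if (terms == null || weight.count(ctx) != ctx.reader().maxDoc()) {
    return false; // field not indexed here, or deletions / a filtering query
}
TermsEnum termsEnum = terms.iterator();
BytesRef term;
while ((term = termsEnum.next()) != null) {
    long bucketOrdinal = bucketOrds.add(0L, term); // assumed BytesKeyedBucketOrds
    if (bucketOrdinal < 0) { // already seen
        bucketOrdinal = -1 - bucketOrdinal;
    }
    // With no deletions and a match-all query, docFreq() is exactly the
    // doc count for this term's bucket.
    incrementBucketDocCount(bucketOrdinal, termsEnum.docFreq());
}
return true;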
Unfortunately, I was not able to use the same technique for numeric aggregators like LongRareTermsAggregator. Numeric points do not carry per-term frequencies in a terms dictionary; they are instead indexed through KD-trees (Lucene's PointValues), which are optimized for different kinds of operations: https://github.com/apache/lucene/blob/main/lucene/core/src/java/org/apache/lucene/index/PointValues.java
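To make the numeric limitation concrete: PointValues exposes aggregate statistics but no per-value frequencies, so there is no docFreq()-style shortcut for numeric fields. A sketch of what the API does offer:

PointValues points = ctx.reader().getPointValues(fieldName);
if (points != null) {
    long totalValues = points.size();         // total number of indexed points
    int docsWithField = points.getDocCount(); // documents that have the field
    // Per-value counts would require a full intersect() traversal of the
    // KD-tree, which defeats the purpose of the precomputation.
}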
Please let me know if there are any comments, concerns or suggestions.
Related Issues
Resolves #13123
#13122
#10954
Check List
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.