
[FEATURE] Improving Lucene Engine Query Performance by reducing number of times a single Lucene k-NN query gets executed #2115

Open
navneet1v opened this issue Sep 18, 2024 · 6 comments

Comments

@navneet1v
Collaborator

navneet1v commented Sep 18, 2024

Description

Currently, Lucene engine k-NN queries get executed during the rewrite phase and not in the Weight class. On a recent deep-dive we observed that the rewrite function of a query can be called multiple times in the overall search flow.

Please check this code trace showing rewrite running before the start of the fetch phase.

  1. Transport action registering the fetch phase: https://github.com/opensearch-project/OpenSearch/blob/f67ed1796749376401c5cc617eff[…]n/java/org/opensearch/action/search/SearchTransportService.java
  2. Search context is built: https://github.com/opensearch-project/OpenSearch/blob/8148e4d295397d4f5b50b85f4dc3[…]6/server/src/main/java/org/opensearch/search/SearchService.java
  3. While building the search context we parse the source again: https://github.com/opensearch-project/OpenSearch/blob/8148e4d295397d4f5b50b85f4dc3[…]6/server/src/main/java/org/opensearch/search/SearchService.java (this was already done in the query phase)
  4. During createContext we do the preProcess: https://github.com/opensearch-project/OpenSearch/blob/8148e4d295397d4f5b50b85f4dc3[…]6/server/src/main/java/org/opensearch/search/SearchService.java
  5. In preProcess we do the context preprocess: https://github.com/opensearch-project/OpenSearch/blob/4035db48c6963e46909f28ba8552[…]erver/src/main/java/org/opensearch/search/query/QueryPhase.java
  6. And then we do the rewrite again: https://github.com/opensearch-project/OpenSearch/blob/4f97fd3d5588f9be52bee37d2c51[…]r/src/main/java/org/opensearch/search/DefaultSearchContext.java

The same was observed in the flame graphs too [will add the flame graphs soon]: when we have more than one shard, the rewrite on the query is called again during the fetch phase. This leads to the Lucene engine k-NN query running more than once and adds latency.

Why we need the Query and its rewrite in the fetch phase

A quick scan of the OpenSearch core code in the fetch phase found use cases that might require running the query rewrite again and then using it during the fetch phase. Below are the references where the query that was rewritten in the DefaultSearchContext is added to the FetchPhaseSearchContext and used during the FetchPhase.

  1. Explain sub-phase: This is used to provide the explanation of why a particular result is in the query response.
  2. PercolateQuery highlighting: Not sure what this query type is, but it does use the visitor pattern of the Query (Lucene interface) to do something.
  3. Inner hits: This is used to get the child hits when there is a parent-child relationship between fields. Not sure about this use case as it is actually doing some more involved logic on the query. This needs more deep-dive.

Possible Solution

Solution 1

One solution we can explore is wrapping all the Lucene queries in another query clause, say LuceneEngineKNNQuery, which has a class member holding the actual Lucene query. Now when createWeight is called, we can first rewrite the query and then create the weight on top of the rewritten query. This will ensure that the Lucene k-NN query is executed only once.

Sample Code:

import java.io.IOException;
import java.util.Objects;

import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryVisitor;
import org.apache.lucene.search.ScoreMode;
import org.apache.lucene.search.Weight;

public class LuceneEngineKNNQuery extends Query {

    // The actual Lucene query: KnnFloatVectorQuery, KnnByteVectorQuery,
    // DiversifyingChildrenByteKnnVectorQuery, etc.
    private final Query luceneQuery;

    public LuceneEngineKNNQuery(Query luceneQuery) {
        this.luceneQuery = luceneQuery;
    }

    @Override
    public Weight createWeight(IndexSearcher searcher, ScoreMode scoreMode, float boost) throws IOException {
        // Rewrite the Lucene query here; this is where the k-NN search runs.
        Query docAndScoreQuery = searcher.rewrite(luceneQuery);
        // Delegate weight creation to the rewritten query
        return docAndScoreQuery.createWeight(searcher, scoreMode, boost);
    }

    @Override
    public String toString(String field) {
        return "LuceneEngineKNNQuery[" + luceneQuery.toString(field) + "]";
    }

    @Override
    public void visit(QueryVisitor visitor) {
        // Delegate to the wrapped query so fetch sub-phases (e.g. highlighting)
        // that walk the query tree still see it
        luceneQuery.visit(visitor);
    }

    @Override
    public boolean equals(Object obj) {
        return sameClassAs(obj) && luceneQuery.equals(((LuceneEngineKNNQuery) obj).luceneQuery);
    }

    @Override
    public int hashCode() {
        return Objects.hash(classHash(), luceneQuery);
    }
}
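To illustrate the guarantee Solution 1 relies on, here is a minimal plain-Java sketch. The stub types below (ExpensiveKnnQuery, LuceneEngineKNNQueryStub) are hypothetical stand-ins for Lucene's Query/Weight, used only to show the call pattern: rewrite on the wrapper becomes a cheap no-op, so the repeated rewrites in the query and fetch phases don't re-run the k-NN search, while createWeight, which the searcher calls once per search, pays the cost exactly once.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class WrapperDemo {

    // Hypothetical stand-in for the underlying Lucene k-NN query.
    static class ExpensiveKnnQuery {
        final AtomicInteger executions = new AtomicInteger();

        // Stands in for the actual k-NN search that Lucene runs on rewrite.
        String searchAndRewrite() {
            executions.incrementAndGet();
            return "DocAndScoreQuery";
        }
    }

    // Hypothetical stand-in for the LuceneEngineKNNQuery wrapper.
    static class LuceneEngineKNNQueryStub {
        final ExpensiveKnnQuery inner;

        LuceneEngineKNNQueryStub(ExpensiveKnnQuery inner) {
            this.inner = inner;
        }

        // With the wrapper, rewrite is a no-op: the expensive search is
        // deferred until weight creation.
        LuceneEngineKNNQueryStub rewrite() {
            return this;
        }

        // The searcher creates the weight once per search; only here do we
        // pay for the k-NN search.
        String createWeight() {
            return inner.searchAndRewrite();
        }
    }

    public static void main(String[] args) {
        ExpensiveKnnQuery knn = new ExpensiveKnnQuery();
        LuceneEngineKNNQueryStub q = new LuceneEngineKNNQueryStub(knn);

        // OpenSearch rewrites the query in the query phase and again in the
        // fetch phase (for more than one shard)...
        q = q.rewrite();
        q = q.rewrite();

        // ...but the weight is created once, so the search runs once.
        q.createWeight();
        System.out.println(knn.executions.get()); // prints 1
    }
}
```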

Solution 2

Another solution we can implement is caching the SearchContext at a shard level, so that when the fetch phase is executed we reuse the same SearchContext and don't need to rewrite the queries.
Another approach can be to defer the rewrite and make it lazy, so that only the fetch pre-processors that need the rewrite perform it; once it is done by one fetch processor, none of the others will need to run the rewrite again, as the query has been replaced by its rewritten form.
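The lazy-rewrite variant can be sketched in plain Java as a memoizing holder. LazyRewrite below is a hypothetical helper (not an existing OpenSearch class), with a Supplier standing in for the expensive searcher.rewrite call: whichever fetch pre-processor asks first pays for the rewrite, and every later caller gets the cached result.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Hypothetical sketch of the "lazy rewrite" idea; plain Java types stand in
// for Lucene/OpenSearch ones.
public class LazyRewrite<Q> {
    private final Supplier<Q> rewriter; // the expensive rewrite (k-NN search)
    private Q rewritten;                // cached result, null until first use

    public LazyRewrite(Supplier<Q> rewriter) {
        this.rewriter = rewriter;
    }

    // First caller pays for the rewrite; later callers reuse the result.
    public synchronized Q get() {
        if (rewritten == null) {
            rewritten = rewriter.get();
        }
        return rewritten;
    }

    public static void main(String[] args) {
        AtomicInteger rewriteCount = new AtomicInteger();
        LazyRewrite<String> lazy = new LazyRewrite<>(() -> {
            rewriteCount.incrementAndGet(); // stands in for the k-NN search
            return "DocAndScoreQuery";
        });

        // The query phase and several fetch pre-processors all ask for the
        // rewritten query...
        lazy.get();
        lazy.get();
        lazy.get();

        // ...but the expensive rewrite ran only once.
        System.out.println(rewriteCount.get()); // prints 1
    }
}
```

This keeps the rewrite-once guarantee without a shard-level cache, at the cost of threading the holder through the fetch phase.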

Pros and Cons

Both solutions have their own pros and cons:

  1. Solution 1 solves the problem without making drastic changes in core, but may create problems if Lucene query execution ever moves out of rewrite.
  2. Solution 2: implementing a shard-level SearchContext cache may be tricky, as it can increase heap usage and the blast radius is unknown. But this solution would ensure that the rewrite happens only once per query for the whole search request at the OpenSearch level.
@jmazanec15
Member

I'm in favor of solution (1) over solution (2). I cannot think of a major advantage of doing the work in rewrite vs createWeight, unless there is some kind of benefit around caching - but I cannot think of any.

@navneet1v
Collaborator Author

I am also in favor of 1, but I was wondering whether fixing it from the core side would help us or not; hence I added solution 2. Since the change in core is not simple and might impact latency if not implemented correctly, I think we should go with solution 1. But I would like to hear more from other maintainers.

@luyuncheng , @vamshin , @heemin32

@heemin32
Collaborator

How about caching the Faiss search result for a short time? Can we tell whether a query comes from the same request, using something like a query UUID? The blast radius could be smaller than caching the SearchContext.

@navneet1v
Collaborator Author

@heemin32 this is not for the Faiss engine, this is for the Lucene engine. Also, could you elaborate on how caching would work in the case of Lucene?

@heemin32
Collaborator

@heemin32 this is not for faiss engine, this is for Lucene engine. Plus would you elaborate how caching will work in case of Lucene?

Are you saying that for Faiss the query will happen only one time?
I thought it would be the same for Faiss. Maybe my understanding is limited to inner hits. For inner hits, the query gets executed 1 + n times (n = number of returned items), which is also true for Faiss.

Hmm. But for the n, because each is filtered to its single parent, I guess exact search will get hit and caching might not help here.

@navneet1v
Collaborator Author

navneet1v commented Sep 19, 2024

@heemin32 I didn't mention Faiss anywhere. This double query execution happens for Lucene because the Lucene query is executed during rewrite, and if you see the links added in the description, the rewrite of queries happens in both the query and fetch phases (for more than one shard). Hence the extra latency.

This issue doesn't talk about extra latency during inner hits.
