Together with the ML inference processor, we want to find out if there is a way to run the dynamic, model-driven approach as part of the main query.
That would mean a low-code integration for OpenSearch users.
The idea would be to:

1. index the training data (index the features in an index)
2. train a model based on the training data
3. create a pipeline with the ML inference request processor (see the sketch after this list)
   a. the request processor takes the query together with its features as its input
   b. it generates a prediction as its output
   c. the prediction is the neural search weight; we can derive the keyword search weight from that value and use both in the hybrid search part (basically a result processor)
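To make the shape of this more concrete, here is a rough sketch (not a working implementation) of what such a search pipeline could look like: an `ml_inference` request processor that turns the query features into a predicted weight, followed by the `normalization-processor` that combines the keyword and neural scores. The `input_map`/`output_map` entries and the `ext.rewrite.*` field names are placeholders, and whether the prediction can actually be fed into the combination weights at query time is exactly the open question of this issue.

```python
# Sketch of the proposed search pipeline. The ml_inference request processor
# configuration below is illustrative; field names under ext.rewrite.* are
# hypothetical and not an existing OpenSearch convention.
import requests

OPENSEARCH = "http://localhost:9200"  # assumption: local cluster, security disabled

pipeline = {
    "description": "Predict the neural weight per query, then combine hybrid scores",
    "request_processors": [
        {
            "ml_inference": {
                # model_id of the regression model trained on the query features
                "model_id": "<trained-model-id>",
                # hypothetical mapping: query-level features supplied by the client
                # are passed to the model as inputs
                "input_map": [{"features": "ext.rewrite.features"}],
                # hypothetical mapping: the model prediction becomes the neural weight
                "output_map": [{"ext.rewrite.neural_weight": "prediction"}],
            }
        }
    ],
    "phase_results_processors": [
        {
            "normalization-processor": {
                "normalization": {"technique": "min_max"},
                "combination": {
                    "technique": "arithmetic_mean",
                    # weights are static here; feeding the predicted neural weight
                    # (and 1 - weight for keyword search) in dynamically per query
                    # is the part we still need to figure out
                    "parameters": {"weights": [0.5, 0.5]},
                },
            }
        }
    ],
}

resp = requests.put(f"{OPENSEARCH}/_search/pipeline/dynamic-hybrid-weights", json=pipeline)
print(resp.json())
```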
What would still live outside of OpenSearch is feature generation. Maybe that's alright, since not everyone will use an identical set of features.
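For illustration, a minimal sketch of that outside part: compute a few per-query features client-side and index them together with a target value as training data. The feature names, the index name, and the target are purely made up; every team would bring their own.

```python
# Minimal sketch of the piece that stays outside OpenSearch: client-side
# feature generation plus indexing of (features, target) training examples.
import requests

OPENSEARCH = "http://localhost:9200"  # assumption: local cluster

def extract_features(query: str) -> dict:
    """Hypothetical feature set; not prescribed by OpenSearch."""
    tokens = query.split()
    return {
        "query_length": len(tokens),
        "max_token_length": max((len(t) for t in tokens), default=0),
        "contains_digits": int(any(c.isdigit() for c in query)),
    }

def index_training_example(query: str, target: float) -> None:
    """Store one training example; the target could be the weight (or NDCG)
    observed as best for this query in offline evaluation."""
    doc = {"query": query, **extract_features(query), "target": target}
    requests.post(f"{OPENSEARCH}/training-data/_doc", json=doc)

index_training_example("red running shoes size 42", target=0.7)
```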
With the changed approach (predicting NDCG instead of neuralness), we are thinking about a custom pipeline. To be discussed in the ml-commons community meeting.
OpenSearch supports a couple of models directly (e.g. linear regression models, see https://opensearch.org/docs/latest/ml-commons-plugin/algorithms/#linear-regression).
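For reference, training such a linear regression model through the ml-commons `_train` API could look roughly like the following. The feature columns and values are placeholders, and the target would be whatever we end up predicting (the neural weight, or NDCG under the changed approach); see the linked docs for the authoritative request format.

```python
# Rough sketch of training a linear regression model via the ml-commons
# _train API. Column names and values are placeholders for the real features.
import requests

OPENSEARCH = "http://localhost:9200"  # assumption: local cluster

train_body = {
    "parameters": {"target": "target"},
    "input_data": {
        "column_metas": [
            {"name": "query_length", "column_type": "DOUBLE"},
            {"name": "contains_digits", "column_type": "DOUBLE"},
            {"name": "target", "column_type": "DOUBLE"},
        ],
        "rows": [
            {"values": [
                {"column_type": "DOUBLE", "value": 5},
                {"column_type": "DOUBLE", "value": 1},
                {"column_type": "DOUBLE", "value": 0.7},
            ]},
            {"values": [
                {"column_type": "DOUBLE", "value": 2},
                {"column_type": "DOUBLE", "value": 0},
                {"column_type": "DOUBLE", "value": 0.4},
            ]},
        ],
    },
}

resp = requests.post(f"{OPENSEARCH}/_plugins/_ml/_train/linear_regression", json=train_body)
print(resp.json())  # expected to contain a model_id once training completes
```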