Keeper Integration API #83
Conversation
Looks good to me!
Should we add some tests to check that `--conf spark.qbeast.keeper.XXX` is used in the Keeper?
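For example, a minimal sketch of such a test, assuming ScalaTest (the spec name is illustrative, not from this PR; it only checks that the settings round-trip through the session conf, not that a real Keeper receives them):

```scala
import org.apache.spark.sql.SparkSession
import org.scalatest.flatspec.AnyFlatSpec

// Hypothetical spec: verifies that spark.qbeast.keeper.* settings passed at
// session creation are visible through spark.conf, the way a Keeper client
// would read them. It does not start or contact an actual Keeper.
class KeeperConfSpec extends AnyFlatSpec {

  "spark.qbeast.keeper.* settings" should "be readable from the session conf" in {
    val spark = SparkSession
      .builder()
      .master("local[1]")
      .config("spark.qbeast.keeper.host", "localhost")
      .config("spark.qbeast.keeper.port", "50051")
      .getOrCreate()
    try {
      assert(spark.conf.get("spark.qbeast.keeper.host") == "localhost")
      assert(spark.conf.get("spark.qbeast.keeper.port") == "50051")
    } finally spark.stop()
  }
}
```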
I have some doubts about how the keeper service is defined. Also, I can't find documentation explaining how it works (at least not in this PR 😅)
My bad, I misread the code. It seems all good 👍🏻
This PR is changing ONLY the API to connect to the Keeper. It does not introduce any logic.
PR #77 is the one solving #41. Here I only take care of matching calls between the different libraries we want to implement.
In order to delegate the maintenance of the index to a Keeper instance, this is the configuration you need to use:
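For example (a sketch assuming a spark-shell launch; any other qbeast-spark packages and options are omitted here):

```bash
spark-shell \
  --packages io.qbeast:qbeast-spark-keeper_2.12:0.1.0-a2 \
  --conf spark.qbeast.keeper.host=localhost \
  --conf spark.qbeast.keeper.port=50051
```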
As you can see, there are three new parameters:
1. `--packages io.qbeast:qbeast-spark-keeper_2.12:0.1.0-a2` -> Package of the qbeast-spark-keeper driver. It maps calls between qbeast-spark and qbeast-keeper, and it also shades the grpc/netty libraries to avoid conflicts with Spark's own versions.
2. `--conf spark.qbeast.keeper.host=localhost` -> The IP/host of the Keeper.
3. `--conf spark.qbeast.keeper.port=50051` -> The port of the Keeper.
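To illustrate how the host/port settings could be consumed on the driver side, here is a rough sketch (the helper name and the plaintext-channel choice are assumptions, not the actual wiring in qbeast-spark-keeper):

```scala
import io.grpc.{ManagedChannel, ManagedChannelBuilder}
import org.apache.spark.sql.SparkSession

// Hypothetical helper: reads the Keeper endpoint from the Spark conf and
// opens a gRPC channel to it. Defaults mirror the values shown above.
def keeperChannel(spark: SparkSession): ManagedChannel = {
  val host = spark.conf.get("spark.qbeast.keeper.host", "localhost")
  val port = spark.conf.get("spark.qbeast.keeper.port", "50051").toInt
  ManagedChannelBuilder
    .forAddress(host, port)
    .usePlaintext() // assumption: no TLS for this sketch
    .build()
}
```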