feat: store v3 #2431
Conversation
You can find the image built from this PR at
Built from ba28042
Direction lgtm! Added a couple of comments. Will also be making some minor changes to the draft RFC, but will keep you informed about those. :)
Force-pushed from 0d1e597 to 2a7dc0d
N.B. tests can't pass before the archive is updated, and store v3 is tied to it.
Force-pushed from 15603b1 to 75177d9
This PR may contain changes to the database schema of one of the drivers. If you are introducing any changes to the schema, make sure the upgrade from the latest release to this change passes without any errors/issues. Please make sure the label
Reviewed only the Store protocol changes, at a high level.
Will review the rest in another pass, as I have to understand a lot of existing code to follow the changes :)
proc encode*(index: PagingIndexRPC): ProtoBuffer =
  ## Encode an Index object into a ProtoBuffer
  ## returns the resultant ProtoBuffer

proc encode*(req: StoreQueryRequest): ProtoBuffer =
General question:
We are manually encoding protobufs, which seems error-prone and high-maintenance.
Can't we use something like https://github.com/PMunch/protobuf-nim to generate the Nim code?
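For readers following along, the manual-encoding pattern being discussed looks roughly like the sketch below, using nim-libp2p's minprotobuf. This is a hedged illustration, not code from this PR: the RequestSketch type and the field numbers are hypothetical placeholders, not the actual StoreQueryRequest definition.

# Rough sketch of hand-written protobuf encoding with minprotobuf.
# RequestSketch and its field numbers are illustrative placeholders only.
import libp2p/protobuf/minprotobuf

type RequestSketch = object
  requestId: string
  includeData: bool

proc encode(req: RequestSketch): ProtoBuffer =
  var pb = initProtoBuffer()
  # Every field is written by hand with an explicit field number,
  # which is the maintenance burden raised above.
  pb.write(1, req.requestId)
  pb.write(2, (if req.includeData: 1'u64 else: 0'u64))
  pb.finish()
  pb

A generator such as protobuf-nim would derive this kind of proc from the .proto definition instead of keeping the field numbers in sync by hand.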
We should, but if I'm not mistaken we use proto2 and not 3?
edit: nvm we do use v3
Maybe this can be taken up as an improvement in a separate PR.
Very interesting, @chaitanyaprem! I think that is something we could look at, but I don't see much urgency for now, as we don't change the protobuf implementation very often.
Wdyt @jm-clius ?
Thanks! The only blocking comment I have is that repeated fields are packed in proto3 (see #2509). Longer term, we should probably write our own library as a wrapper around minprotobuf, or use another one.
Awesome! I think it is getting into good shape!
I've added some comments that I hope you find useful.
Additionally:
- Shall we keep both legacy and new store testing? I have the impression that we are reducing the legacy-store test coverage.
- Given that there are many changes, I think we need to validate that it is working properly from the Status point of view. @richard-ramos can help us on that matter :)
Approving, based on proto3 discussion with @SionoiS
You can find the image built from this PR at
Built from 197d62d
LGTM! Thank you for such a great PR! 💯
Just added a minor comment :)
proc sendHistoryQueryRPC(
    w: WakuStoreClient, req: HistoryQuery, peer: RemotePeerInfo
): Future[HistoryResult] {.async, gcsafe.} =
  let connOpt = await w.peerManager.dialPeer(peer, WakuStoreCodec)
Maybe something to check in the future, but I can't see why we dial on every query. I think it would be better to separate the dial from the query operations :)
What would you envision as an alternative? Dialing for each request makes this part of the code stateless, which simplifies things.
Do you mean separating them for code organization purposes?
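To make the trade-off concrete, here is a purely illustrative sketch of the two shapes being discussed. The types and proc names are hypothetical placeholders, not the actual WakuStoreClient or peer manager API.

# Hypothetical placeholder types and procs; not the real nwaku API.
type
  Peer = object
    address: string
  Connection = object
    peer: Peer
  Query = object
    contentTopic: string
  QueryResult = object
    messages: seq[string]

proc dial(peer: Peer): Connection =
  ## Stand-in for opening a connection to the peer.
  Connection(peer: peer)

# Current shape: stateless, every query dials its own connection.
proc queryStateless(peer: Peer, q: Query): QueryResult =
  let conn = dial(peer)        # dial happens inside every call
  discard conn                 # ... send the request over conn, read the reply ...
  QueryResult(messages: @[])

# Suggested alternative: dial once, then reuse the connection for many queries.
proc query(conn: Connection, q: Query): QueryResult =
  # ... reuse conn here ...
  QueryResult(messages: @[])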
I made some changes in waku/waku_store/client.nim
Description
The new Store protocol version 3 will work side by side with the old version for some time.
I copied all the code from v2 as a starting point. Store v2 is now known as legacy store.
Store v3 orders results a bit differently than legacy: sorted by time and then by hash (previously by digest).
Everything else remains the same, except that we now have the ability to query by hashes too 🎉
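A minimal sketch of the new ordering rule, assuming simplified placeholder types (not the actual nwaku message types): results are compared by timestamp first, with the message hash as a tie-breaker.

import std/algorithm

type StoredMsg = object
  timestamp: int64   # placeholder for the message timestamp
  hash: string       # placeholder for the message hash (e.g. hex-encoded)

proc cmpStoreV3(a, b: StoredMsg): int =
  ## Sort by time first, then by hash as a tie-breaker.
  result = cmp(a.timestamp, b.timestamp)
  if result == 0:
    result = cmp(a.hash, b.hash)

var msgs = @[
  StoredMsg(timestamp: 2, hash: "0xbb"),
  StoredMsg(timestamp: 1, hash: "0xff"),
  StoredMsg(timestamp: 1, hash: "0x01"),
]
msgs.sort(cmpStoreV3)
# msgs is now ordered (1, "0x01"), (1, "0xff"), (2, "0xbb")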
Changes
Tracking #2425