feat(geyser): support dumping accounts to csv #279
Conversation
* channel: replace channel with a better implementation
* start integrating the new channel
* add some test cases for the channel
* only look at the final `gossip_table` once we've exited
* fix even more races in the test suite
* enable TSan in the test CI
* remove all usages of `unordered` from the codebase
* remove PacketBatch in favour of sending singular packets
* choose better time units for benchmarks
* don't use identifiers that start with an underscore
* remove the old channel implementation
* add some documentation and clean up the code
* remove stale code and improve gossip value filtering
* implement an exit order sync method
* clean up allocation even more
* remove the last usages of close
* clarify `CONTRIBUTING.md` and remove `initial_capacity` from the channel initialization
* gossip: revert filter optimization. Also, don't take ownership of the pull request gossip values; they aren't ours to free. This was causing GPA double-free detection to trigger
* gossip: add back missing `clearRetainingCapacity`
* grafana: fix overlapping timestamps
* fix racy deinit for the gossip service
* add some comments
* remove the `.discard` flag and fix up some logic
* remove an unused variable
* remove a UAF from gossip. This one was really hard to find: in Zig, storing a pointer to a by-value capture (and then using it) is not well defined. LLVM is allowed to create induction stack space when optimizing loops and overwrite the stack space the pointer was pointing at. Here we were storing a pointer to the capture, probably as a premature optimization, and it was getting overwritten. Storing a pointer to the prune message wouldn't have been an optimization anyway, since the capture is still a stack copy. (See the sketch after this list.)
* remove an unneeded free
* fix the shred collector exiting early
* cmd: remember to join threads on errors so they don't keep running
* fix(shred-collector): shreds were not being processed. A bool was being used to decide whether to pass packets to the shred verifier, and it was being negated too many times. I fixed the logic. I think the bug was introduced because the logic was spread across too many scopes, which made it confusing, so I moved it into a narrower scope.

Co-authored-by: Drew Nutter <dnut@users.noreply.github.com>
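For illustration, here is a minimal sketch of the capture bug described above. The names (`Msg`, `firstEven`) are hypothetical and not from the Sig codebase; the point is only the by-value-capture semantics:

```zig
const std = @import("std");

const Msg = struct { id: u32 };

// Buggy pattern: keep a pointer to a by-value loop capture.
fn firstEvenBuggy(msgs: []const Msg) u32 {
    var saved: ?*const Msg = null;
    for (msgs) |msg| {
        // `msg` is a by-value capture; `&msg` points at loop-local stack
        // space that the compiler is free to reuse on later iterations.
        if (saved == null and msg.id % 2 == 0) saved = &msg;
    }
    // Reading through `saved` here is not well defined.
    return if (saved) |p| p.id else 0;
}

// Fix: copy the capture's value instead of keeping a pointer to it.
fn firstEven(msgs: []const Msg) u32 {
    var saved: ?Msg = null;
    for (msgs) |msg| {
        if (saved == null and msg.id % 2 == 0) saved = msg;
    }
    return if (saved) |m| m.id else 0;
}

test "copying the capture is sound" {
    const msgs = [_]Msg{ .{ .id = 1 }, .{ .id = 4 }, .{ .id = 6 } };
    try std.testing.expectEqual(@as(u32, 4), firstEven(&msgs));
}
```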
Force-pushed from 66869b1 to 038a391
Some observations and questions, but otherwise the changes look really good
I skimmed it over, no comments on correctness.
LGTM
I'd like a chance to take a glance at the Grafana, but otherwise the code is good for me.
nice - changed to use rate + bar and added docs explaining the metrics (see new description). probs want to document every metric we are recording 🤔
LGTM, just one nit
Great!
Dump accounts to CSV during snapshot loading using geyser.

Main command:
./zig-out/bin/geyser csv

Note: a full dump of a testnet snapshot is 75GB.

* new `tryCollapse` method on the recycle FBA to collapse adjacent free records (see the sketch after this list)
* add docs on the new usage
* add CLI setup for the geyser reader (src/geyser/bin.zig)
* add a new `EndOfSnapshotLoading` enum to the payload type (see the second sketch below)
* add new geyser metrics to Grafana
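As a rough illustration of what collapsing adjacent free records means, here is a minimal, self-contained sketch. The `Record` shape and the sorting step are assumptions for the example, not the recycle FBA's actual bookkeeping:

```zig
const std = @import("std");

// Hypothetical free-record shape; the recycle FBA keeps its own bookkeeping.
const Record = struct { start: usize, len: usize };

/// Merge adjacent free records in-place and return the new record count.
fn tryCollapse(records: []Record) usize {
    if (records.len == 0) return 0;

    // Sort by start offset so adjacent records end up next to each other.
    std.mem.sort(Record, records, {}, struct {
        fn lessThan(_: void, a: Record, b: Record) bool {
            return a.start < b.start;
        }
    }.lessThan);

    var write: usize = 0;
    for (records[1..]) |rec| {
        const prev = &records[write];
        if (prev.start + prev.len == rec.start) {
            // Adjacent: grow the previous record instead of keeping both.
            prev.len += rec.len;
        } else {
            write += 1;
            records[write] = rec;
        }
    }
    return write + 1;
}

test "adjacent free records are merged" {
    var recs = [_]Record{
        .{ .start = 16, .len = 4 },
        .{ .start = 0, .len = 8 },
        .{ .start = 8, .len = 4 }, // adjacent to the record at 0
    };
    try std.testing.expectEqual(@as(usize, 2), tryCollapse(&recs));
    try std.testing.expectEqual(@as(usize, 12), recs[0].len);
}
```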
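And a hedged sketch of where an end-of-snapshot signal could fit, assuming the payload is a tagged union. The `Account` shape, field names, and `handle` function here are hypothetical; geyser's real types live in src/geyser:

```zig
const std = @import("std");

// Hypothetical shapes; geyser's real payload type lives in src/geyser.
const Account = struct {
    pubkey: [32]u8,
    lamports: u64,
    data: []const u8,
};

const Payload = union(enum) {
    // accounts streamed while the snapshot is being loaded
    account_batch: []const Account,
    // new variant: signals the reader (e.g. the CSV dumper) that snapshot
    // loading has finished, so it can flush its output and exit
    end_of_snapshot_loading,
};

/// Returns false once the reader should stop its read loop.
fn handle(payload: Payload) bool {
    switch (payload) {
        .account_batch => |accounts| {
            _ = accounts; // e.g. write each account as a CSV row
            return true;
        },
        .end_of_snapshot_loading => return false,
    }
}

test "end of snapshot stops the reader" {
    try std.testing.expect(!handle(.end_of_snapshot_loading));
}
```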