Serialize the peerset state on disk and reload it at startup #565
Hey, is anyone still working on this?

Due to inactivity, this issue has been automatically marked as stale. It will be closed if no further activity occurs. Thank you for your contributions.
Issue still relevant and important.
I had a look into the …. I would like to proceed with the following solution: add another CLI argument …. In order to implement such behaviour, the modifications of the … would need to:

- Keep track of the …
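Much of the proposal above was lost in this capture, but its general shape (a flag that enables persistence, and a peerset that keeps track of the peers it learns about so they can be written out on shutdown) can be sketched. This is a minimal illustration, not Substrate's actual peerset API; `TrackedPeerset`, `persist_peers`, and the method names are all invented here:

```rust
use std::collections::HashSet;

/// Hypothetical peerset wrapper that tracks the peers it learns about so
/// they can be snapshotted to disk on shutdown. The `persist_peers` flag
/// stands in for the CLI argument discussed above (name invented here).
struct TrackedPeerset {
    persist_peers: bool,
    known: HashSet<String>,
}

impl TrackedPeerset {
    fn new(persist_peers: bool) -> Self {
        Self { persist_peers, known: HashSet::new() }
    }

    /// Called whenever the peerset learns about a peer.
    fn note_peer(&mut self, peer_id: &str) {
        self.known.insert(peer_id.to_string());
    }

    /// Snapshot to write to disk on shutdown; empty when persistence is off.
    fn snapshot(&self) -> Vec<String> {
        if !self.persist_peers {
            return Vec::new();
        }
        let mut peers: Vec<String> = self.known.iter().cloned().collect();
        peers.sort();
        peers
    }
}
```

Whether the snapshot hook lives in the peerset itself or in the service layer around it is exactly the open question discussed below.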
Why do we need a CLI option for this?
How would you end up with only inbound connections? If you only have those, you already have a problem: you would, for example, be running into an eclipse attack. You can just ignore this setting.
I second what @bkchr said; it'd make more sense to me to just have this behavior be the default instead of requiring the user to pass a CLI argument to enable it. (On the other hand, adding an option to disable it might make some sense, e.g. for debugging purposes.)
To keep track of what @RGafiyatullin and I discussed in DM: I think that for … we should persist all peers, with some kind of ….

What I don't know, and what could maybe also work: can we not just announce all available peers to the peerset so that it starts connecting to some of them? I mean, when we currently connect to a boot node, it shares known nodes with us (I don't know for sure how it works) and then we start connecting to these nodes as well. Could we not hook in there directly, so that instead of getting these peers from a different node, we insert the known peers ourselves?

One other random question: should we maybe not persist inbound peers? Otherwise this could lead to some eclipse attack, as after the next restart these nodes would be outbound peers from our POV. Would be nice to get some feedback from you, @tomaka.
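The point about not persisting inbound peers can be made concrete with a small sketch: only peers we dialed ourselves are persisted, so an attacker who repeatedly connects to us cannot dominate our post-restart outbound set. `Direction`, `PeerInfo`, and `peers_to_persist` are hypothetical names invented here, not Substrate APIs:

```rust
/// Direction of an active connection, as seen from the local node.
#[derive(Clone, Copy, PartialEq)]
enum Direction {
    Inbound,
    Outbound,
}

/// Hypothetical record for one connected peer.
struct PeerInfo {
    peer_id: String,
    direction: Direction,
}

/// Only persist peers we dialed ourselves. After a restart, every persisted
/// peer becomes an outbound target, so recording inbound peers would let an
/// attacker who connected to us steer our future outbound connections.
fn peers_to_persist(connected: &[PeerInfo]) -> Vec<String> {
    connected
        .iter()
        .filter(|p| p.direction == Direction::Outbound)
        .map(|p| p.peer_id.clone())
        .collect()
}
```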
FWIW we have implemented simple networking persistence in Subspace and decided that the first failure time is the best thing to store, so we can eventually kick non-responsive peers.
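A sketch of that approach: record the moment a peer first started failing, clear it on a successful dial, and prune peers that have been failing for longer than some threshold. `PeerRecord` and `prune_failed_peers` are invented names for illustration, not Subspace's actual code:

```rust
use std::time::{Duration, Instant};

/// Hypothetical persisted record: when the peer first started failing,
/// or `None` if it is currently reachable.
struct PeerRecord {
    peer_id: String,
    first_failure: Option<Instant>,
}

/// Drop peers that have been failing for longer than `max_failure_age`.
/// A successful dial would reset `first_failure` to `None`, restarting
/// the clock for that peer.
fn prune_failed_peers(
    peers: Vec<PeerRecord>,
    now: Instant,
    max_failure_age: Duration,
) -> Vec<PeerRecord> {
    peers
        .into_iter()
        .filter(|p| match p.first_failure {
            Some(t) => now.duration_since(t) <= max_failure_age,
            None => true,
        })
        .collect()
}
```

Storing the *first* failure rather than the last one means a peer that flaps between reachable and unreachable is still expired eventually, which addresses the pruning question raised below.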
@bkchr It's actually insanely complicated. The whole story about how we store peers and addresses, for how long (if you store them forever, you've got a memory leak), which addresses we try, when we stop trying addresses, etc. is a big hack, because in 3 years of networking I've never figured out an algorithm to do this properly.
Yeah, but for now I'm more interested in having an idea of what would be the best place to hook this in. What kind of pruning strategy etc. could be figured out over time.
We used to store a `nodes.json` file on disk containing the known state of the network. This feature got removed when we introduced the peerset manager, but it has always been intended that it should be restored. It is fundamentally better to not be based too much on bootnodes, and instead try to restore connections to previously working nodes.
This issue is about restoring this feature.
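A minimal sketch of what restoring this could look like: write the known peers to a file on shutdown and read them back at startup, falling back to bootnodes when the file is missing or unreadable. The one-line-per-peer format and the function names are invented here for illustration; the original `nodes.json` presumably stored JSON, and its exact schema is not specified in this issue:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Persist known peers as "<peer_id> <multiaddr>" lines, one per peer.
/// (Format invented for illustration only.)
fn save_known_peers(path: &Path, peers: &[(String, String)]) -> io::Result<()> {
    let mut out = String::new();
    for (peer_id, addr) in peers {
        out.push_str(peer_id);
        out.push(' ');
        out.push_str(addr);
        out.push('\n');
    }
    fs::write(path, out)
}

/// Reload persisted peers at startup. A missing or malformed file yields an
/// empty list, so the node simply falls back to its bootnodes.
fn load_known_peers(path: &Path) -> Vec<(String, String)> {
    let contents = match fs::read_to_string(path) {
        Ok(c) => c,
        Err(_) => return Vec::new(),
    };
    contents
        .lines()
        .filter_map(|line| {
            let mut parts = line.splitn(2, ' ');
            Some((parts.next()?.to_string(), parts.next()?.to_string()))
        })
        .collect()
}
```

Treating load errors as "no known peers" keeps startup robust: the worst case is simply the current behaviour of dialing only the bootnodes.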