
[Protocol Design] Scaling the libp2p network to millions of nodes #7

Open · ghost opened this issue Jun 22, 2018 · 5 comments

ghost commented Jun 22, 2018

How to scale our network to millions (and what that implies from an engineering standpoint)

Name: (proposed by @diasdavid; talk to him if you want to lead)

Length (choose one): 1-hour meeting

Title (~1-7 words):

Abstract (1 sentence up to a couple paragraphs, as you prefer):

ghost assigned daviddias Jun 22, 2018
ghost changed the title from "How to scale our network to millions" to "Discussion Meeting: How to scale our network to millions" Jun 22, 2018
daviddias changed the title from "Discussion Meeting: How to scale our network to millions" to "[Protocol Design] Scaling the libp2p network to millions of nodes" Jun 22, 2018
daviddias (Member) commented

Description of the format here -- ipfs-inactive/conf#43

ghost unassigned daviddias Jun 27, 2018
ghost added the awaiting-details label Jul 3, 2018
ghost (Author) commented Jul 4, 2018

@diasdavid Would you be willing to lead/facilitate this group? It's a good topic and I want to put it in the last Protocol Design slot I have left, but I need a high-context expert to agree to drive the group; otherwise I think they may struggle to understand what the goal is.

If not you, can you propose someone? (@whyrusleeping and @Stebalien are already booked during both Protocol Design slots.)

ghost added the transfered-to-sched label Jul 4, 2018
jacobheun commented
From cryptpad: https://cryptpad.fr/code/#/2/code/edit/rx3aKF6bVLqLdmy6nCdjNZx6/

Scaling DHT to millions of nodes

Current Shortcomings

  • No backoff mechanism - The DHT gets abused by other protocols constantly
  • Membership problem - Everyone gets to be a DHT node, which causes a lot of churn
  • No protection against flash crowds (aka sybil attacks)

Candidate solutions

  • Backoff mechanism - see the sketch after this list
  • Coordination Protocol -
  • Sane defaults for connection management -
  • Crypto puzzles for when flash crowds happen - see the proof-of-work sketch after this list
  • Network layout recipes for specific use cases (e.g. "Dias-Peer-Set" for PeerPad, X for Social Networks, Y for Live Streaming) -
  • Proxy re-encryption for Gossip (blind bridge) - Share nodes and network state through the Gossip protocol
  • Describe the network state in a signed IPLD graph - (needs an accumulator)
  • Membership mechanism - Nodes should have to prove their worth (or their ability) to take part in more sensitive protocols (e.g. a time-alive challenge)
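To make the backoff bullet above concrete, here is a minimal sketch of per-peer exponential backoff with full jitter. The constants, the string peer IDs, and the Allowed/Failure/Success names are illustrative assumptions, not the actual go-libp2p-kad-dht API.

```go
package backoff

import (
	"math/rand"
	"sync"
	"time"
)

// Illustrative constants: first retry after 1s, capped at 10 minutes.
const (
	baseDelay = 1 * time.Second
	maxDelay  = 10 * time.Minute
)

// Backoff tracks how long to wait before re-querying a peer that
// recently failed or rejected a DHT request.
type Backoff struct {
	mu    sync.Mutex
	until map[string]time.Time     // peer ID -> earliest next attempt
	delay map[string]time.Duration // peer ID -> current backoff window
}

func New() *Backoff {
	return &Backoff{
		until: make(map[string]time.Time),
		delay: make(map[string]time.Duration),
	}
}

// Allowed reports whether the peer may be queried right now.
func (b *Backoff) Allowed(peer string) bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	return time.Now().After(b.until[peer])
}

// Failure doubles the peer's backoff window (with full jitter, so a
// flash crowd does not retry in lockstep), up to maxDelay.
func (b *Backoff) Failure(peer string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	d := b.delay[peer]
	if d == 0 {
		d = baseDelay
	} else {
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
	b.delay[peer] = d
	b.until[peer] = time.Now().Add(time.Duration(rand.Int63n(int64(d))) + 1)
}

// Success clears the peer's backoff state.
func (b *Backoff) Success(peer string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	delete(b.delay, peer)
	delete(b.until, peer)
}
```

A DHT (or any protocol leaning on it) would check Allowed before dialing a peer, call Failure when a query fails or is refused, and Success when it completes, so heavy callers get throttled instead of hammering the routing table.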
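The crypto-puzzle bullet could look like a hash-based proof of work: the admitting node hands a joining peer a random challenge and only accepts it once the peer finds a nonce whose SHA-256 digest has enough leading zero bits. The difficulty value and challenge format below are made up for illustration and are not a proposal for the actual wire protocol.

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math/bits"
)

// difficulty is the required number of leading zero bits in
// SHA-256(challenge || nonce). 20 bits is roughly a million hash
// attempts on average; purely illustrative.
const difficulty = 20

// NewChallenge is the admitting node's side: a fresh random challenge.
func NewChallenge() ([]byte, error) {
	c := make([]byte, 32)
	_, err := rand.Read(c)
	return c, err
}

// Solve is the joining peer's side: brute-force a nonce that satisfies
// the puzzle. Expensive for the joiner, so flash crowds (and cheap
// sybil identities) are rate-limited by CPU work.
func Solve(challenge []byte) uint64 {
	for nonce := uint64(0); ; nonce++ {
		if Verify(challenge, nonce) {
			return nonce
		}
	}
}

// Verify is cheap for the admitting node: one hash per check.
func Verify(challenge []byte, nonce uint64) bool {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, nonce)
	sum := sha256.Sum256(append(append([]byte{}, challenge...), buf...))
	return bits.LeadingZeros64(binary.BigEndian.Uint64(sum[:8])) >= difficulty
}

func main() {
	challenge, _ := NewChallenge()
	nonce := Solve(challenge)
	fmt.Printf("admitted after solving puzzle: nonce=%d ok=%v\n", nonce, Verify(challenge, nonce))
}
```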

Reference for Scaling DHTs

  • Vivaldi with geo-bandwidth location
  • Coral with DHT clustering
  • AnonRep and other reputation papers

Other, still to be organized

  • Options for the users to control the number of connections

  • Nodes should not need more than 100 to 300 conns. Some nodes won’t be able to have more than 30 (e.g. browsers); see the connection-trimming sketch below

  • DHT should have a backoff mechanism

  • There should be some kind of coordination service that reports on the health of the DHT, so that nodes can apply better heuristics to what is happening.

  • Reputation is a big part of this
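As an illustration of the connection-count notes above, here is a sketch of watermark-based trimming: once a node exceeds a high watermark it closes its least valuable connections back down to a low watermark. The watermarks follow the "100 to 300 conns" note, while the scoring and the closeConn hook are placeholder assumptions, not go-libp2p's actual connection manager.

```go
package connmgr

import (
	"sort"
	"time"
)

// Illustrative defaults, in line with the "100 to 300 conns" note;
// a browser node would use something closer to 30.
const (
	lowWater  = 100
	highWater = 300
)

// Conn is a minimal stand-in for a live connection record.
type Conn struct {
	Peer     string
	OpenedAt time.Time
	Value    float64 // higher = more useful, e.g. a DHT routing-table member
}

// Trim closes the least valuable connections once the node exceeds
// highWater, bringing the set back down to lowWater.
func Trim(conns []Conn, closeConn func(Conn)) []Conn {
	if len(conns) <= highWater {
		return conns
	}
	// Keep the most valuable connections; break ties by preferring
	// the more recently opened ones.
	sort.Slice(conns, func(i, j int) bool {
		if conns[i].Value != conns[j].Value {
			return conns[i].Value > conns[j].Value
		}
		return conns[i].OpenedAt.After(conns[j].OpenedAt)
	})
	for _, c := range conns[lowWater:] {
		closeConn(c)
	}
	return conns[:lowWater]
}
```

Protocols would raise Value for connections they care about (e.g. DHT routing-table members), which is one way users could steer connection counts without hard per-protocol limits.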

daviddias (Member) commented

Added to the notes folder on #28

jacobheun commented

Pretty drawings, ooooo.
[image: scaling-drawing-notes]
