
Improve network performance #1 #406

Closed

4miners opened this issue Jan 27, 2017 · 3 comments

@4miners
Contributor

4miners commented Jan 27, 2017

Because issues #332, #333 and #335 are connected with each other, I propose to fix them as one complex issue.

The following approach should be taken:

Peers

  • Change the peer communication protocol from HTTP to WebSockets.
  • Move peer management to memory instead of the database.
  • Each node should collect metadata about other peers (see the sketch after this list):
    • First, we will use /peer/list to collect as much information from other nodes as we can: query X nodes for 100 random peers every Y seconds.
    • X and Y should be adjustable; forging nodes (maybe those with active delegates only) need to query more peers more often to get a better overview of the network.
    • Then check the received data for consistency; if the data for some peers is inconsistent, we should query those peers directly, for example via the /height endpoint. Of course there should be a limit on such direct queries.
    • The state property shouldn't be exposed to other peers; that is node-specific information, and every node should determine it on its own.
    • We need part of the last block's header (height, id, timestamp) to be included in the HTTP headers. We can compare it with our block at the same height and detect whether we, or the other node, are on a fork.
    • We should save a timestamp whenever we update a peer's data.
    • We should record whether a peer has broadcast a full block to us, and check all such peers first.
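
A minimal sketch of how this in-memory peer table and update loop could look. The /peer/list and /height endpoints come from the proposal above; all type names, field names, constants, and the rpc helper are assumptions for illustration only:

```typescript
// Sketch of the proposed in-memory peer table and update loop (hypothetical names).

interface PeerInfo {
  ip: string;
  port: number;
  height: number;
  blockId: string;
  broadhash?: string;
  updatedAt: number;      // timestamp of our last update of this peer's data
  sentFullBlock: boolean; // peer has broadcast a full block to us before
}

// Peer management lives in memory instead of the database.
const peers = new Map<string, PeerInfo>();
const key = (p: { ip: string; port: number }) => `${p.ip}:${p.port}`;

// X and Y from the proposal, adjustable; forging nodes would raise both.
let QUERY_NODES = 10;         // X: peers asked for their peer list
let QUERY_INTERVAL_MS = 5000; // Y: delay between update iterations
const MAX_DIRECT_CHECKS = 20; // limit on direct /height queries per iteration

// Placeholder transport: plain HTTP (Node 18+ global fetch) stands in for
// the WebSocket transport the proposal wants to move to.
async function rpc(p: PeerInfo, endpoint: string): Promise<any> {
  const res = await fetch(`http://${p.ip}:${p.port}${endpoint}`);
  return res.json();
}

function randomPeers(n: number): PeerInfo[] {
  // Crude shuffle; fine for a sketch.
  return [...peers.values()].sort(() => Math.random() - 0.5).slice(0, n);
}

async function updatePeersOnce(): Promise<void> {
  // 1. Ask X random peers for 100 random peers each via /peer/list.
  const reports = await Promise.all(
    randomPeers(QUERY_NODES).map((p) => rpc(p, '/peer/list'))
  );

  // 2. Merge the reports and flag peers whose reported height/blockId
  //    differs between reporters.
  const seen = new Map<string, PeerInfo>();
  const inconsistent: PeerInfo[] = [];
  for (const report of reports) {
    for (const p of (report.peers ?? []) as PeerInfo[]) {
      const prev = seen.get(key(p));
      if (prev && (prev.height !== p.height || prev.blockId !== p.blockId)) {
        inconsistent.push(p);
      }
      seen.set(key(p), p);
    }
  }

  // 3. Resolve disagreements by querying those peers directly, with a limit.
  for (const p of inconsistent.slice(0, MAX_DIRECT_CHECKS)) {
    const res = await rpc(p, '/height');
    seen.set(key(p), { ...p, height: res.height, blockId: res.id });
  }

  // 4. Write back with a fresh timestamp, preserving the full-block flag.
  for (const [k, p] of seen) {
    const prev = peers.get(k);
    peers.set(k, {
      ...p,
      updatedAt: Date.now(),
      sentFullBlock: prev?.sentFullBlock ?? false,
    });
  }
}

setInterval(updatePeersOnce, QUERY_INTERVAL_MS);
```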

Block replication

  • After a block is forged, the full block should be broadcast to a limited number of peers (max 100), prioritising peers that have sent us full blocks before. We can also send some other peers the block header only. This approach ensures that every forging delegate receives the latest network block first, no matter how big the network is, and should also prevent non-active delegates from determining who is forging.
  • When a full block is received, we should relay it immediately if it passes schema validation.
  • Block headers should be relayed (with transaction ids) instead of full blocks. If a node already has those transactions, it can reconstruct the block on its own; if not, it should ask another peer (one who has that block) for the transactions. This approach significantly decreases the amount of data flowing between peers when a block contains many transactions (see the sketch after this list).
  • When a node receives a block header, it should broadcast it immediately to other nodes if it passes schema validation, no matter what. We can also check whether the block is a forked one before asking for the full transactions.
  • If the last block receipt is stale, forging delegates should figure out why and ask the network for blocks more aggressively. Every node should take care of itself: if I am missing some blocks, I should ask for them and fetch them instead of waiting until somebody sends me the block stream I am missing.
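
A minimal sketch of the broadcast prioritisation and the header-first relay, reusing PeerInfo and peers from the Peers sketch above. The declared functions (validateSchema, broadcastHeader, blockAtHeight, requestTransactions, applyBlock) are hypothetical placeholders for the real implementations:

```typescript
// Header-first block relay: relay the header immediately, then reconstruct
// the full block locally, fetching only the transactions we are missing.

interface Transaction { id: string }

interface BlockHeader {
  id: string;
  height: number;
  timestamp: number;
  txIds: string[];
}

// Unconfirmed transaction pool, keyed by transaction id.
const txPool = new Map<string, Transaction>();

// Placeholders for the real implementations:
declare function validateSchema(h: BlockHeader): boolean;
declare function broadcastHeader(h: BlockHeader): void;
declare function blockAtHeight(height: number): { id: string } | undefined;
declare function requestTransactions(p: PeerInfo, ids: string[]): Promise<Transaction[]>;
declare function applyBlock(b: BlockHeader & { transactions: Transaction[] }): void;

// Choosing broadcast targets for a freshly forged block: peers that sent
// us full blocks before come first, capped at 100; the rest get headers.
function broadcastTargets() {
  const all = [...peers.values()];
  all.sort((a, b) => Number(b.sentFullBlock) - Number(a.sentFullBlock));
  return { fullBlock: all.slice(0, 100), headerOnly: all.slice(100) };
}

// Compare the header with our block at the same height to detect a fork.
function isForkedBlock(h: BlockHeader): boolean {
  const ours = blockAtHeight(h.height);
  return ours !== undefined && ours.id !== h.id;
}

async function onBlockHeader(header: BlockHeader, sender: PeerInfo) {
  if (!validateSchema(header)) return;

  // Relay immediately after schema validation, no matter what.
  broadcastHeader(header);

  // Check for a fork before spending bandwidth on transactions.
  if (isForkedBlock(header)) return;

  // Reconstruct the block from the pool; fetch only the missing ids
  // from the peer who has the block.
  const missing = header.txIds.filter((id) => !txPool.has(id));
  const fetched = missing.length ? await requestTransactions(sender, missing) : [];
  const byId = new Map(fetched.map((t) => [t.id, t]));
  const transactions = header.txIds.map((id) => txPool.get(id) ?? byId.get(id)!);
  applyBlock({ ...header, transactions });
}
```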

Consensus

  • Two options here:
    • Check only the peers who have broadcast full blocks to us (100).
    • Check all peers that we have updated within the last X seconds (MAX(timestamp) - X seconds) and that we have data about. Checking only a slice of the network is not enough, because relying on that can lead to network splits.
  • Update the consensus after every iteration of the peer update loop, only on forging nodes (see the sketch after this list).
  • If the consensus is too low, we should keep performing update loops immediately during the slot until consensus is reached.
  • Do we need broadhash/consensus at all?
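
A minimal sketch of the second option, reusing peers and updatePeersOnce from the Peers sketch. The 51% threshold and the broadhash field are assumptions modelled on Lisk's existing broadhash consensus:

```typescript
// Consensus measured over every peer whose data was refreshed within
// the last X seconds.

const CONSENSUS_WINDOW_MS = 30_000; // X seconds (adjustable)
const MIN_CONSENSUS_PCT = 51;       // assumed threshold

function consensusPct(ourBroadhash: string): number {
  const now = Date.now();
  const recent = [...peers.values()].filter(
    (p) => now - p.updatedAt <= CONSENSUS_WINDOW_MS
  );
  if (recent.length === 0) return 0;
  const matching = recent.filter((p) => p.broadhash === ourBroadhash).length;
  return (100 * matching) / recent.length;
}

// Forging nodes only: keep running the peer update loop within the slot
// until consensus is reached or the slot ends.
async function ensureConsensus(ourBroadhash: string, slotEndsAt: number) {
  while (
    consensusPct(ourBroadhash) < MIN_CONSENSUS_PCT &&
    Date.now() < slotEndsAt
  ) {
    await updatePeersOnce();
  }
}
```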
@karmacoma
Contributor

@4miners: As discussed, I've added "Change the peer communication protocol from HTTP to WebSockets" as the first actionable item.

@karmacoma karmacoma added this to the Core Sprint 02 milestone Jan 30, 2017
@karmacoma karmacoma added the hard label Jan 30, 2017
@karmacoma
Contributor

Move peers management to memory instead of database.

I think we should perform a dump of peers to the database when shutting down, and a restore when relaunching the application. During application run-time we can persist peers in memory only. A sketch follows.
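
A minimal sketch of that dump/restore, again reusing the in-memory peers map and key helper from above; the Db interface and the peers table layout are hypothetical:

```typescript
// Restore peers from the database once at launch; dump them once at shutdown.
// No database writes happen during normal run-time.

interface Db {
  query(sql: string, params?: unknown[]): Promise<any[]>;
}

async function restorePeers(db: Db): Promise<void> {
  const rows = await db.query('SELECT ip, port, height, block_id FROM peers');
  for (const row of rows) {
    peers.set(key(row), {
      ip: row.ip,
      port: row.port,
      height: row.height,
      blockId: row.block_id,
      updatedAt: 0,          // force a refresh on the first update loop
      sentFullBlock: false,  // not persisted; re-learned at run-time
    });
  }
}

async function dumpPeers(db: Db): Promise<void> {
  await db.query('DELETE FROM peers');
  for (const p of peers.values()) {
    await db.query(
      'INSERT INTO peers (ip, port, height, block_id) VALUES ($1, $2, $3, $4)',
      [p.ip, p.port, p.height, p.blockId]
    );
  }
}
```

dumpPeers would be wired into the application's shutdown hook, and restorePeers called once during start-up before the first update loop.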

@karmacoma
Contributor

I've separated this issue back into workable issues, so we can divide the work required more easily. Therefore closing this issue.

@karmacoma karmacoma removed this from the Core Sprint 02 milestone Jan 30, 2017