This repository has been archived by the owner on Jul 21, 2023. It is now read-only.

Scope of queries #103

Open
vasco-santos opened this issue Apr 18, 2019 · 8 comments
Labels
kind/enhancement A net-new feature or improvement to an existing feature P1 High: Likely tackled by core team if no one steps up status/ready Ready to be worked

Comments

@vasco-santos
Member

In the context of improving the performance of DHT operations, @dirkmc did a great job limiting the scope of queries to the k closest peers in libp2p/js-libp2p-kad-dht#97. There was a good discussion in that PR about where we should tweak the scope of queries, according to the type of DHT operation, in order to boost performance without compromising the proper functioning of the DHT.

Queries performed

We perform queries on:

  • getClosestPeers()
    • as a public API function
    • called by provide(), a public API function to add self as provider of a key
    • called by put(), a public API function, to find nodes on which to store a key/value pair
  • findPeer()
    • as a public API function
    • called by random walk, in order to discover peers
  • findProviders()
  • getMany()

In that PR, we decided to go with an initial step for limiting the scope of the first two operations.

Proposed Idea

@jacobheun proposed the following:

So getClosestPeers() belongs only to write actions to the network. This makes sense because we want to be accurate when we're storing values. The downside is that getClosestPeers should, by design, be "slow", more so for newer peers with sparse routing tables.

For the fetching functions getMany, findPeer, and findProviders we should be as quick as possible. As mentioned, I think this may warrant reducing the tolerance on the number of peers required for success. Finding 16 records when there are potentially only 20 nodes that have values feels too high to me, especially if the query should be relatively quick. With a network that is still in flux, with so many new nodes coming and going, I think lowering the threshold at least initially is a reasonable approach.

For eventually getting the tolerance back up to a higher level (16), there's discussion about adopting sloppiness at libp2p/specs#163. While it's not the same concept of sloppiness, I was thinking about how being sloppy with the number of writes, combined with eventual accuracy, might help alleviate some of this. For example, we discussed concerns about spending enough time finding closest peers when performing a PUT, so exiting early isn't ideal: there could be closer peers. If we operated "sloppily", we could perform a PUT to the closest kValue (20) peers we have at the point where we would normally end the query, and then continue searching for eventual accuracy. Once the query was "done", we could PUT to any closer peers we hadn't already reached. This breaks the spec a bit because we're potentially writing to more than kValue peers, but I think there is merit in the fault tolerance it provides to the network.
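The sloppy-PUT idea above can be sketched as a two-phase write. This is only an illustration of the control flow, not the js-libp2p-kad-dht API: `sloppyPut` and `putToPeer` are hypothetical names, and the two sorted peer lists stand in for the query's state at its normal stopping point and at full completion.

```javascript
// Phase 1: write to the kValue closest peers known when the query would
// normally terminate. Phase 2: keep searching, and also write to any closer
// peers discovered later that weren't already written to.
function sloppyPut (closestAtNormalEnd, closestAtCompletion, kValue, putToPeer) {
  const written = new Set()

  // Phase 1: the "normal" PUT, to the closest kValue peers found so far.
  for (const peer of closestAtNormalEnd.slice(0, kValue)) {
    putToPeer(peer)
    written.add(peer)
  }

  // Phase 2: eventual accuracy. Closer peers found by the continued query
  // also receive the record; peers already written to are skipped.
  for (const peer of closestAtCompletion.slice(0, kValue)) {
    if (!written.has(peer)) {
      putToPeer(peer)
      written.add(peer)
    }
  }

  return written
}
```

Note that the total number of writes can exceed kValue, which is exactly the spec deviation being traded for fault tolerance.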

@vasco-santos vasco-santos added kind/enhancement A net-new feature or improvement to an existing feature status/ready Ready to be worked P1 High: Likely tackled by core team if no one steps up labels Apr 18, 2019
@vasco-santos
Member Author

We will start by testing the initial change on the real network and drawing conclusions from it. With the results obtained, we aim to understand how we can improve the changed queries, as well as to create proposals for the remaining ones.

cc @jacobheun @dirkmc @jhiesey

@dirkmc
Contributor

dirkmc commented Apr 18, 2019

👍 Thanks for putting this together @vasco-santos

@jacobheun
Contributor

jacobheun commented Apr 23, 2019

I spent some time today testing queries with the latest js-ipfs and I think I discovered some issues. I was seeing some getClosestPeers queries take 3 to 5 minutes when attempting to add basic content.

Issues

1. Slow peers block.

Currently all queries operate with a concurrency of ALPHA (3). This is fine; however, if we are trying to query a peer that is slow or never responds, we wait rather than moving on quickly. As the spec mentions, we should remove such peers from consideration. If we end up with 3 bad peers at the front of the queue, the query could be stuck for a significant amount of time.

Nodes that fail to respond quickly are removed from consideration until and unless they do respond.
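One way to stop a slow peer from holding one of the ALPHA concurrency slots is to race each per-peer query against a deadline. A minimal sketch, where the function name and the default timeout are illustrative rather than the library's actual API:

```javascript
// Race a per-peer query against a timeout so a slow or unresponsive peer
// frees its concurrency slot instead of blocking the whole query.
function queryWithTimeout (queryPeer, peer, ms = 5000) {
  let timer
  const deadline = new Promise((resolve, reject) => {
    timer = setTimeout(() => reject(new Error(`peer ${peer} timed out`)), ms)
  })
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([queryPeer(peer), deadline])
    .finally(() => clearTimeout(timer))
}
```

A caller would treat a timeout rejection the same as a query error: drop the peer from consideration and move on to the next closest one.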

2. Query logic

With the latest logic, we start off by querying ALPHA peers. Each query that returns closer peers has those peers added to the query queue. Each time we start the next query in the queue, we check whether we have already queried kValue peers and whether any peers in the queue are closer than those. If we have closer peers, we keep running the queue, with no prioritization. The spec indicates we should be querying the closest peers we're aware of. So instead of just running through the queue of peers, the running queries should be composed of ALPHA peers we have not yet queried, drawn from the top kValue (20) closest peers we know of.

Of the k nodes the initiator has heard of closest to the target, it picks α that it has not yet queried and resends the FIND NODE RPC to them.
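The spec rule quoted above can be expressed as a small selection function. A sketch with illustrative names (not the actual js-libp2p-kad-dht code); peers here are plain values and `distanceTo` stands in for the XOR distance to the target key:

```javascript
// From the kValue closest peers we know of, pick up to ALPHA that we
// have not yet queried — the next batch of concurrent queries.
function nextAlpha (knownPeers, alreadyQueried, distanceTo, kValue = 20, alpha = 3) {
  return knownPeers
    .slice()                                       // don't mutate the caller's list
    .sort((a, b) => distanceTo(a) - distanceTo(b)) // closest first
    .slice(0, kValue)                              // only the top kValue matter
    .filter(p => !alreadyQueried.has(p))           // skip peers already queried
    .slice(0, alpha)                               // query ALPHA at a time
}
```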

Proposal

Add every unique peer found via querying to the PeerDistanceList, to maintain a full sorted list of peers. Every time a query in the queue finishes, get the next closest peer from the PeerDistanceList that we have not already queried and add it to the queue. If a query errors or takes longer than X ms (say, 5 seconds), remove/blacklist it from the PeerDistanceList and start the next item. Once we have successfully queried the top kValue (20) peers in the PeerDistanceList, the query is done.

What does this do?
Let's say our queries will result in us finding 100 total peers. We know we have to run at least kValue queries, because we have to query each peer we end up returning. Assume we've done 18 queries and those happen to be the closest 18 of the 100 peers. On the 19th query, which also happens to be the 19th closest peer, assume we finally find the 20th closest peer.

Currently, that 20th peer will be close to the end of the queue. This means we will end up querying every peer before that peer, even though they are further away. Ideally, we should immediately grab that peer, query it and be done.
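A minimal sketch of the proposed structure, assuming a simplified PeerDistanceList (the class in the library differs; `distanceTo`, `next`, and `done` are illustrative names):

```javascript
// A sorted view over every peer discovered during the query. The query
// loop always takes the closest peer not yet queried or blacklisted,
// and finishes once the top kValue peers have all been queried.
class PeerDistanceList {
  constructor (distanceTo) {
    this.distanceTo = distanceTo // e.g. XOR distance to the target key
    this.peers = []
    this.queried = new Set()
    this.blacklisted = new Set()
  }

  add (peer) {
    if (!this.peers.includes(peer)) this.peers.push(peer)
  }

  // Closest peer we have not queried and have not blacklisted.
  next () {
    return this.peers
      .filter(p => !this.queried.has(p) && !this.blacklisted.has(p))
      .sort((a, b) => this.distanceTo(a) - this.distanceTo(b))[0]
  }

  markQueried (peer) { this.queried.add(peer) }
  blacklist (peer) { this.blacklisted.add(peer) } // errored or timed out

  // Done once the kValue closest non-blacklisted peers are all queried.
  done (kValue) {
    const top = this.peers
      .filter(p => !this.blacklisted.has(p))
      .sort((a, b) => this.distanceTo(a) - this.distanceTo(b))
      .slice(0, kValue)
    return top.length > 0 && top.every(p => this.queried.has(p))
  }
}
```

With this shape, the late-discovered 20th-closest peer in the example above is dequeued immediately, rather than after every farther peer ahead of it in an unprioritized queue.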

@dirkmc @vasco-santos if this sounds correct, I can work on the adjustments.

@dirkmc
Contributor

dirkmc commented Apr 24, 2019

@jacobheun PeerQueue is actually a Heap sorted by xor distance from the key, so dequeue() should always return the closest peer to the key that it knows about.

I think it's a good idea to reduce the timeout, 60s seems way too long to hear back from a peer.
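For reference, the XOR metric such a heap orders by can be sketched as byte-wise XOR compared lexicographically. This is a simplified stand-in, not the library's actual implementation:

```javascript
// Kademlia's distance metric: XOR the two keys byte-wise...
function xorDistance (a, b) {
  const out = new Uint8Array(a.length)
  for (let i = 0; i < a.length; i++) out[i] = a[i] ^ b[i]
  return out
}

// ...and compare the resulting byte strings lexicographically.
// Negative result means `da` is closer to the target key than `db`.
function compareDistances (da, db) {
  for (let i = 0; i < da.length; i++) {
    if (da[i] !== db[i]) return da[i] - db[i]
  }
  return 0
}
```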

@jacobheun
Contributor

PeerQueue is actually a Heap sorted by xor distance from the key

Ah, I missed that. It's specific to a single Path though? Shouldn't we use a single PeerQueue to pull from?

Using separate queues runs a risk of having paths finish much earlier than others. For example, if I take the 3 closest peers from my routing table and query them and two of them are no longer online, won't I lose 66% of my query power, and still have at least 19 queries to make?

It seems like moving the PeerQueue to the entire Run instead of each path would greatly improve performance there. Unless I am missing something?

@dirkmc
Contributor

dirkmc commented Apr 24, 2019

As I understand it, the original Kademlia just queries until it finds d closest nodes, whereas for S/Kademlia we need to query d separate paths. The reason the paths are kept separate is in order to avoid adversarial nodes jamming up the query (eg by returning fake peers near the key). The disjoint paths lookup is described in section 4.4 of the S/Kademlia paper:

4.4. Lookup over disjoint paths

In section 3.2 we have shown the importance of using multiple disjoint paths to lookup keys in a network with adversarial nodes. The original Kademlia lookup iteratively queries α nodes with a FIND NODE RPC for the closest k nodes to the destination key. α is a system-wide redundancy parameter such as 2. In each step the returned nodes from previous RPCs are merged into a sorted list from which the next α nodes are picked. A major drawback of this approach is, that the lookup fails as soon as a single adversarial node is queried. We extended this algorithm to use d disjoint paths and thus increase the lookup success ratio in a network with adversarial nodes. The initiator starts a lookup by taking the k closest nodes to the destination key from his local routing table and distributes them into d independent lookup buckets. From there on the node continues with d parallel lookups similar to the traditional Kademlia lookup. The lookups are independent, except the important fact, that each node is only used once during the whole lookup process to ensure that the resulting paths are really disjoint. By using the sibling list from section 4.2 the lookup doesn’t converge at a single node but terminates on d close-by neighbors, which all know the complete s siblings for the destination key. Hence a lookup is still successful even if k − 1 of the neighbors are adversarial.

Now that I've re-read this it occurs to me that instead of starting with k (20) nodes from the local routing table, we are starting with α (3), so we could probably improve overall query time by making that change.
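That seeding change can be sketched as dealing the k closest peers from the routing table round-robin into d independent paths (illustrative names, not the actual query code):

```javascript
// S/Kademlia seeding step: start from the k closest known peers
// (rather than just α) and distribute them into d disjoint lookup paths.
// Each path then proceeds as an independent Kademlia lookup, with the
// constraint that a peer is only ever used on one path.
function seedDisjointPaths (kClosest, d) {
  const paths = Array.from({ length: d }, () => [])
  kClosest.forEach((peer, i) => paths[i % d].push(peer))
  return paths
}
```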

@vasco-santos
Member Author

Great analysis! As @dirkmc stated, the PeerQueue is sorted by distance, but regarding the timeout, I would suggest we run some tests to find the average time a query takes for good peers, and pick a timeout based on the results.

In the disjoint paths lookup, we keep the queries separate per path as a security mechanism. But when one path runs out of peers to query, couldn't we split the largest of the other paths and populate the empty one with half of its peers?

Finally, starting the query with k peers instead of α makes total sense to me. We may already know peers closer to the key we are looking for than the peers returned from the first set of queried peers.

@jacobheun
Contributor

Now that I've re-read this it occurs to me that instead of starting with k (20) nodes from the local routing table, we are starting with α (3), so we could probably improve overall query time by making that change.

Oh yeah, dang, that will be a significant improvement. I was reading the original paper, I'll dive into the S/Kademlia paper more today.

I'll work on making these updates and get a WIP PR started soon so we can discuss more there.
