DHT API #11

Merged: vyzo merged 21 commits into master from feat/dht on Oct 5, 2018

Conversation

@vyzo (Collaborator) commented Sep 30, 2018

Also adds bootstrap support, so we have a working DHT, and LIST_PEERS to see our peers.

@ghost assigned vyzo on Sep 30, 2018
@ghost added the "in progress" label on Sep 30, 2018
@vyzo requested a review from bigs on September 30, 2018 at 12:36
@vyzo mentioned this pull request on Oct 2, 2018
@vyzo (Collaborator, Author) commented Oct 4, 2018

Rebased on master.

@raulk (Member) left a comment

Some initial feedback.

ma "github.com/multiformats/go-multiaddr"
)

var BootstrapPeers = []string{

@raulk (Member) commented Oct 5, 2018

What if the application wanted a DHT separate from ours? I'm sure that'll be the case for Ethereum. The Ethereum Foundation runs their own bootstrap nodes per network:

https://github.com/ethereum/go-ethereum/blob/461291882edce0ac4a28f64c4e8725b7f57cbeae/params/bootnodes.go

@raulk (Member):

On a related note, what if we had more than one app behind this daemon, and each wanted a different DHT?

@vyzo (Collaborator, Author) commented Oct 5, 2018

Yeah, we should be able to support different bootstrap peers.
I think what we want is a configuration file of sorts for the daemon profile, where we can list bootstrap peers.
But I left that for a subsequent PR (we can open a follow-up issue).

@raulk (Member):

Personally I'd like to see an application "opening a session" with a daemon, and providing any initial seed configuration as part of that handshake.

I think of the daemon as a service provided by the environment/OS for applications to use, versus a sidecar for applications. As such, multi-tenancy and app-specific configuration should be part of the protocol.

For now, it's good that it's on our radar. An issue would be great, and perhaps a note in the specs.

@bigs (Contributor):

i think we can borrow the Options style configuration from go-libp2p and have a default bootstrap configuration with our peers

@vyzo (Collaborator, Author):

Yeah, that's not a bad idea at all.
On further thought, I'll add a -bootstrapPeers flag so that we can pass a comma-separated list of peers on the command line and punt on the configuration file for now.

@vyzo (Collaborator, Author):

Added a command-line flag in bd604c4
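
For illustration, here is a minimal sketch of how a comma-separated -bootstrapPeers flag could be parsed into peer infos. The parseBootstrapPeers helper and the wiring in main are assumptions made for the example, not the actual change in bd604c4.

// A sketch, not the bd604c4 change: parse a comma-separated
// -bootstrapPeers flag into PeerInfos that the daemon can dial.
package main

import (
	"flag"
	"log"
	"strings"

	pstore "github.com/libp2p/go-libp2p-peerstore"
	ma "github.com/multiformats/go-multiaddr"
)

// parseBootstrapPeers is a hypothetical helper turning multiaddr strings
// (e.g. /ip4/.../tcp/4001/ipfs/Qm...) into PeerInfos.
func parseBootstrapPeers(csv string) ([]*pstore.PeerInfo, error) {
	var pis []*pstore.PeerInfo
	for _, s := range strings.Split(csv, ",") {
		addr, err := ma.NewMultiaddr(strings.TrimSpace(s))
		if err != nil {
			return nil, err
		}
		pi, err := pstore.InfoFromP2pAddr(addr)
		if err != nil {
			return nil, err
		}
		pis = append(pis, pi)
	}
	return pis, nil
}

func main() {
	bootstrapPeers := flag.String("bootstrapPeers", "", "comma-separated bootstrap peer multiaddrs")
	flag.Parse()

	if *bootstrapPeers != "" {
		pis, err := parseBootstrapPeers(*bootstrapPeers)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("parsed %d bootstrap peers", len(pis))
	}
}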

<-ticker.C

conns := d.host.Network().Conns()
if len(conns) >= BootstrapConnections {

@raulk (Member):

This is a bit confusing to me, as it makes me think this goroutine is going to "pin/keep connections to bootstrap peers". But in practice we don't care whether the active connections are to bootstrap peers or not.

If this is intended as a low-watermark check for the DHT, should we also verify that the returned conns support the DHT?

@vyzo (Collaborator, Author):

It's a low-watermark check, mostly for the DHT -- but you may want to keep bootstrap connections (say, for relay or observed-address identification) even without it.

@bigs (Contributor):

perhaps we should earmark this as something that could/should be made into a connection manager?

@bigs (Contributor):

either that or move it into the dht itself, as it seems useful

@vyzo (Collaborator, Author):

We could make a connection manager, but for now it's well isolated in its corner of the code :)
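
For context, a rough sketch of the kind of low-watermark loop under discussion; BootstrapInterval, the method name, and the way bootstrap peers are passed in are assumptions, not the PR's exact code.

// Sketch of a low-watermark bootstrap loop; names are illustrative.
func (d *Daemon) keepBootstrapConnections(pis []*pstore.PeerInfo) {
	ticker := time.NewTicker(BootstrapInterval)
	defer ticker.Stop()

	for {
		<-ticker.C

		// Count all live connections; we don't care whether they are to
		// bootstrap peers, only that we stay above the watermark.
		conns := d.host.Network().Conns()
		if len(conns) >= BootstrapConnections {
			continue
		}

		// Below the watermark: dial enough bootstrap peers to top up.
		d.connectBootstrapPeers(pis, BootstrapConnections-len(conns))
	}
}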

defer cancel()

for _, pi := range pis {
if d.host.Network().Connectedness(pi.ID) == inet.Connected {

@raulk (Member):

I think this check is redundant, because Host#Connect() already performs it as part of its contract.

@vyzo (Collaborator, Author):

It's not -- we want to skip already-connected peers, otherwise we would connect to fewer new peers than intended.

@raulk (Member):

Ah, right, my bad!

return nil
}

func (d *Daemon) connectBootstrapPeers(pis []*pstore.PeerInfo, toconnect int) int {

@raulk (Member):

Maybe validate that len(pis) >= toconnect? Just a sanity check.

@vyzo (Collaborator, Author):

Nah, if it is less we'll just connect to all of them.

@raulk (Member):

Oh yeah, the loop is a range loop over peers 👍
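
Tying the two threads above together, a sketch of how connectBootstrapPeers can behave: already-connected peers are skipped so they don't count toward toconnect, and the range loop naturally copes with len(pis) < toconnect. The body below (including d.ctx and the timeout) is an illustrative assumption, not the PR's exact code.

// Illustrative sketch of connectBootstrapPeers; details are assumed.
func (d *Daemon) connectBootstrapPeers(pis []*pstore.PeerInfo, toconnect int) int {
	count := 0

	ctx, cancel := context.WithTimeout(d.ctx, 60*time.Second)
	defer cancel()

	for _, pi := range pis {
		if toconnect == 0 {
			break
		}
		// Skip peers we are already connected to, so that existing
		// connections don't eat into the number of new dials.
		if d.host.Network().Connectedness(pi.ID) == inet.Connected {
			continue
		}
		if err := d.host.Connect(ctx, *pi); err != nil {
			continue
		}
		count++
		toconnect--
	}

	return count
}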

optional int64 timeout = 7;
}

message DHTResponse {

@raulk (Member):

I think we need a token that identifies the DHT request uniquely. Then all streaming responses would refer to that token, and the END response would close the life of the request. Currently it seems that responses for parallel DHT requests could get intermixed on the wire.

@vyzo (Collaborator, Author):

I thought about it too and concluded that request multiplexing is more trouble than it's worth.
If we want parallel queries, we can open another connection and issue the query there.
Unix sockets are relatively cheap -- just a transient file descriptor.

@raulk (Member):

I guess that makes sense, although it complicates the client: it'll either need to synchronize to allow only one concurrent request (bad), or maintain a pool of client instances (too complicated), because we don't want each request to create a new client...

@bigs (Contributor):

yeah, i think the client can handle this with multiple connections. we definitely don't want to reinvent stream multiplexing at another layer in the stack.

@vyzo (Collaborator, Author):

We can create a new socket for each request, that's simple enough.
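
As a concrete illustration of the connection-per-request approach, here is a client sketch that dials a fresh Unix socket per DHT query and reads the streamed responses until an END marker. The delimited-protobuf framing (ggio), the DHTResponse_END value, and the message getters are assumptions about the wire format, not the actual client API.

// Sketch of a client issuing one DHT query over a fresh Unix socket.
func doDHTRequest(sockPath string, req *pb.Request) ([]*pb.DHTResponse, error) {
	conn, err := net.Dial("unix", sockPath)
	if err != nil {
		return nil, err
	}
	defer conn.Close()

	w := ggio.NewDelimitedWriter(conn)
	r := ggio.NewDelimitedReader(conn, 1<<20)

	if err := w.WriteMsg(req); err != nil {
		return nil, err
	}

	// Read streamed responses until the daemon signals the end of the query.
	var results []*pb.DHTResponse
	for {
		res := new(pb.DHTResponse)
		if err := r.ReadMsg(res); err != nil {
			return nil, err
		}
		if res.GetType() == pb.DHTResponse_END { // hypothetical END marker
			break
		}
		results = append(results, res)
	}
	return results, nil
}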

ma "github.com/multiformats/go-multiaddr"
)

var BootstrapPeers = []string{
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

i think we can borrow the Options style configuration from go-libp2p and have a default bootstrap configuration with our peers

<-ticker.C

conns := d.host.Network().Conns()
if len(conns) >= BootstrapConnections {
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

perhaps we should earmark this as something that could/should be made into a connection manager?

<-ticker.C

conns := d.host.Network().Conns()
if len(conns) >= BootstrapConnections {
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

either that or move it into the dht itself, as it seems useful

optional int64 timeout = 7;
}

message DHTResponse {
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

yeah, i think the client can handle this with multiple connections. we definitely don't want to reinvent stream multiplexing at another layer in the stack.

}

if ch != nil {
for res := range ch {

@bigs (Contributor):

do we want to block on this response? is there any sane way it could be async? i suppose that would require per client connection locking?

@vyzo (Collaborator, Author):

No easy way to make this async without reinventing multiplexing.
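
For reference, a sketch of the synchronous streaming pattern being discussed: the daemon drains the result channel and forwards each value to the client as it arrives, then terminates the stream. The writer type and the dhtResponsePeer/dhtResponseEnd constructors are hypothetical, not the PR's exact helpers.

// Sketch: forward DHT results to the client as they arrive, then end the stream.
func streamDHTResults(w ggio.WriteCloser, ch <-chan pstore.PeerInfo) error {
	if ch != nil {
		for res := range ch {
			// Blocking here is what keeps the protocol simple: one request,
			// one ordered stream of responses on this connection.
			if err := w.WriteMsg(dhtResponsePeer(res)); err != nil {
				return err
			}
		}
	}
	// Tell the client the query is done.
	return w.WriteMsg(dhtResponseEnd())
}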

@@ -189,6 +224,21 @@ func (d *Daemon) doStreamHandler(req *pb.Request) *pb.Response {
return okResponse()
}

func (d *Daemon) doListPeers(req *pb.Request) *pb.Response {

@bigs (Contributor):

👍
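
For readers skimming the diff, a rough sketch of what a LIST_PEERS handler can look like; the pb.PeerInfo fields and the Peers field on the response are assumptions about the protobuf definitions, not the PR's exact code.

// Sketch of doListPeers: report the currently connected peers.
func (d *Daemon) doListPeers(req *pb.Request) *pb.Response {
	conns := d.host.Network().Conns()
	peers := make([]*pb.PeerInfo, 0, len(conns))
	for _, c := range conns {
		peers = append(peers, &pb.PeerInfo{
			Id:    []byte(c.RemotePeer()),
			Addrs: [][]byte{c.RemoteMultiaddr().Bytes()},
		})
	}

	res := okResponse()
	res.Peers = peers
	return res
}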

p2pd/main.go (outdated)
)

func main() {
identify.ClientVersion = "p2pd"

@bigs (Contributor):

hmm should we get a version string in there? something like fmt.Sprintf("/p2pd/%d", DaemonVersion)

@vyzo (Collaborator, Author):

Yeah, that's probably a good idea. But what is our current version?

@bigs (Contributor):

0.1!
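
Concretely, the suggestion amounts to something like the sketch below. DaemonVersion is an assumed constant, and %s is used instead of the %d from the original suggestion since "0.1" reads more naturally as a string.

// Sketch: advertise the daemon name and version via identify.
// DaemonVersion is an assumed constant, not defined in the PR.
const DaemonVersion = "0.1"

func main() {
	identify.ClientVersion = fmt.Sprintf("p2pd/%s", DaemonVersion)
	// ... rest of daemon startup
}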

@bigs (Contributor) commented Oct 5, 2018

agree w/ all your responses, just wanted to earmark some stuff for later perhaps

@vyzo merged commit 2b89ad5 into master on Oct 5, 2018
@ghost removed the "in progress" label on Oct 5, 2018
@vyzo deleted the feat/dht branch on October 5, 2018 at 15:37