
fix(transport): do not callback after listen errored #139

Merged: 4 commits into master from fix/transport-error on Dec 18, 2016

Conversation

dignifiedquire (Member)

Fixes the error seen in orbitdb-archive/orbit#210 (comment)

if (err) {
  return
}
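For context, a minimal sketch of what this guard enables (hypothetical shape, not the exact patch; names like onError, errored, and ma are illustrative): the listen callback bails out on error because the 'error' handler is the one that reports failure, and the success path must not call back if an error was already reported.

let errored = false
const onError = (err) => {
  errored = true
  cb(err)
}

listener.once('error', onError)
listener.listen(ma, (err) => {
  if (err) {
    return // the 'error' handler above already called back
  }
  listener.removeListener('error', onError)
  if (!errored) {
    cb(null, listener)
  }
})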

Member

Good catch!

Can we make it simpler with

listener.once('error', cb)
...
listener.removeListener('error', cb)

dignifiedquire (Member, author)

We can't; that's what it was before, and it doesn't catch the case where an error happens but the listen callback still fires.
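Concretely, the sequence the once/removeListener version misses (illustrative only, extending the snippet above):

listener.once('error', cb)
listener.listen(ma, (err) => {
  listener.removeListener('error', cb) // no-op if 'error' already fired
  cb(null, listener) // second call: 'callback was already called'
})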

michaelfakhri (Contributor), Dec 15, 2016

May I offer an alternative to these two approaches?

listener.once('error', (err) => {
  if (err !== 'timeout') { // timeouts are handled internally in libp2p-webrtc-star when connect_error fires
    listener.close() // data packet parse error
  }
})

Reasoning: the socket.io client emits two events on a connection error (connect_error and error). The callback is already attached internally to connect_error, so there is no need to attach it again for error. But error is also emitted on a data packet parse error, so we need some error handling that covers both scenarios. Hence the snippet above.

dignifiedquire (Member, author)

@michaelfakhri thanks, but as far as I know this handler doesn't use socket.io at the moment; only pull-ws, net and webrtc.

michaelfakhri (Contributor)

@dignifiedquire the webrtc transport uses socket.io internally to connect to the signalling server. Isn't that where you are seeing the error, while connecting to the sig server?
Actually, another alternative is to remove the connect_error event handler from the webrtc transport; that way it's much cleaner and simpler, since I doubt parse errors are actually the problem here. Are they?

michaelfakhri (Contributor)

A bit of background: I am seeing the exact same error, "callback was already called", while testing the browser application I am building with libp2p. In my case the cause was restarting the sig server while switching between mocha and karma tests. I believe your error is something similar, related to the sig server connection.

dignifiedquire (Member, author)

@michaelfakhri I think I was seeing the issue in other cases as well. In any case, the handling here should not check for transport-specific errors; if the signaling server transport emits errors that shouldn't be handled as such here, then they shouldn't be emitted from there in the first place.

I've added another important fix for close behaviour: before, we were not waiting for the stream muxer to properly close; now we do. This depends on libp2p/js-libp2p-spdy#47
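A rough sketch of that ordering change (hypothetical names; the real API change lives in the libp2p-spdy PR linked above):

// hypothetical shape: wait for the muxer to finish closing before
// reporting the connection as closed, instead of calling back early
muxedConn.muxer.end((err) => {
  // all muxed streams are flushed and shut down at this point
  callback(err)
})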

michaelfakhri (Contributor)

@dignifiedquire Yes I completely agree with everything you said.

Just for reference: here is the error I was talking about earlier; it isn't actually caused by the reason I mentioned. I wrote a test that triggers the "callback was already called" error. I think it's caused by multiple events firing internally inside socket.io-client. I haven't looked that deep into it, but I can tell something quite weird is going on, and I don't think it's any of the code in here that's directly causing this error.

https://github.com/michaelfakhri/js-libp2p-swarm/commit/0d1f8bf7bc0a9bf397ed0add21643201da40ad9d

daviddias (Member) left a comment

Small question, otherwise LGTM

each(transports[key].listeners, (listener, cb) => {
  listener.close(cb)
}, cb)
}, callback)
daviddias (Member)

Why can't they all be closed in parallel?

dignifiedquire (Member, author)

each is parallel
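For reference, async's each runs its iterator over all items concurrently (eachSeries is the sequential variant), so the snippet above already closes every listener in parallel; assuming listeners is the array from that snippet:

const each = require('async/each')

// every listener.close(cb) starts immediately; the final callback
// fires once all of them have completed, or on the first error
each(listeners, (listener, cb) => {
  listener.close(cb)
}, (err) => {
  // all listeners are closed here
})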

daviddias (Member)

Interesting, thank you :)

daviddias (Member) left a comment

Let's ensure CI is happy before merging

dignifiedquire (Member, author)

Yeah looks like one test is unhappy, fixing now.

dignifiedquire (Member, author)

should be happy now

daviddias (Member)

@dignifiedquire CI still says no

dignifiedquire (Member, author)

@diasdavid I don't get it :/ all passing locally

daviddias (Member)

@dignifiedquire the only difference from CI is that CI does a fresh npm install every time. Have you tried that on your local machine?

dignifiedquire (Member, author)

Yeah I did, could you try as well?

daviddias (Member), Dec 18, 2016

Locally, it works for me too.

So, the error is:

  1) libp2p-swarm high level API - with everything mixed all together! close a muxer emits event:
     Uncaught AssertionError: expected true to not exist
      at conn.getPeerInfo (test/09-swarm-with-muxing.node.js:214:27)

But that line is in the dial from tcp+ws to tcp+ws, meaning that conn.getPeerInfo is being resolved after the hangup (CI is slower at these things).
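If that reading is right, the race looks roughly like this (illustrative only, not the actual test code):

// on a slow machine the getPeerInfo callback can resolve only after
// the connection has already been hung up, so the assertion fails
conn.getPeerInfo((err, peerInfo) => {
  expect(err).to.not.exist()
})
swarmA.hangUp(peerB, () => {})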

dignifiedquire (Member, author)

@diasdavid ready for squash and ship

daviddias merged commit 5d47adc into master on Dec 18, 2016
daviddias deleted the fix/transport-error branch on December 18, 2016 at 11:20
daviddias (Member)

squashed and shipped :)
