fix(transport): do not callback after listen errored #139
Conversation
```js
if (err) {
  return
}
```
Good catch!
Can we make it simpler with

```js
listener.once('error', cb)
// ...
listener.removeListener('error', cb)
```
We can't; that's what it was before, and it doesn't catch the case where an error happens but the listen callback still fires afterwards.
May I offer an alternative to these two ways?
```js
listener.once('error', (err) => {
  if (err !== 'timeout') { // timeout is handled internally in libp2p-webrtc-star when connect_error fires
    listener.close() // data packet parse error
  }
})
```
Reasoning: the socket.io client emits two events on a connection error (`connect_error` and `error`). A callback is attached internally to `connect_error`, so there is no need to attach one again for `error`. But `error` is also emitted on a data packet parse error, so we need error handling that distinguishes the two scenarios. Hence the above.
@michaelfakhri thanks, but as far as I know this handler doesn't currently use socket.io, only pull-ws, net and webrtc.
@dignifiedquire the webrtc transport uses socket.io internally to connect to the signalling server. Isn't that where you are seeing the error, while connecting to the sig server?
A bit of background: I am seeing the exact same "callback was already called" error while testing the browser application I am building with libp2p. In my case the cause was restarting the sig server when switching between mocha and karma tests. I believe your error is something similar, related to the sig server connection.
@michaelfakhri I think I was seeing the issue in other cases as well. In any case, the handling here should not check for transport-specific errors; if the signalling server transport emits errors that shouldn't be handled as such here, they should not be emitted from there in the first place.

I've also added another important fix for close behaviour: before, we were not waiting for the stream muxer to properly close. Now we do. This depends on libp2p/js-libp2p-spdy#47.
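The close-behaviour change described above can be sketched as follows; the `end` method and the `closeConnection` helper are hypothetical names, not the actual patch:

```js
'use strict'

// Hedged sketch: invoke the final callback only once the muxer has
// fully shut down, instead of calling back immediately and letting
// the muxer close in the background.
function closeConnection (muxer, done) {
  muxer.end((err) => {
    // Waiting for end()'s callback means close does not resolve
    // before the stream muxer has properly closed.
    done(err)
  })
}

// Tiny fake muxer to exercise the ordering.
const events = []
const fakeMuxer = {
  end (cb) {
    events.push('muxer-closed')
    cb(null)
  }
}
closeConnection(fakeMuxer, () => events.push('callback'))
console.log(events.join(',')) // "muxer-closed,callback"
```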
Force-pushed from 6cf833d to 878fdcc
@dignifiedquire Yes, I completely agree with everything you said. Just for reference, here is the error I was talking about earlier; it isn't actually caused by the reason I mentioned. I wrote a test that triggers the "callback was already called" error. I think it's caused by multiple events firing internally inside socket.io-client. I haven't looked that deeply into it, but I can tell something quite weird is going on, and I don't think any of the code here is directly causing this error.
Small question, otherwise LGTM
```js
each(transports[key].listeners, (listener, cb) => {
  listener.close(cb)
}, cb)
}, callback)
```
Why can't they all be closed in parallel?
`each` is parallel.
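What "parallel" means here can be shown with a minimal re-implementation of the `async` library's `each`; the listener objects below are hypothetical stand-ins with an async `close()`:

```js
'use strict'

// Minimal sketch of a parallel `each` (in the spirit of async.each):
// every task is started immediately; the final callback fires once all
// tasks have completed, or on the first error.
function each (items, task, done) {
  let remaining = items.length
  let failed = false
  if (remaining === 0) return done(null)
  items.forEach((item) => {
    task(item, (err) => {
      if (failed) return
      if (err) {
        failed = true
        return done(err)
      }
      if (--remaining === 0) done(null)
    })
  })
}

// The order of `started` entries shows that every close is initiated
// before any of them finishes.
const started = []
const listeners = ['a', 'b', 'c'].map((id) => ({
  close (cb) {
    started.push(id)
    setImmediate(cb)
  }
}))

each(listeners, (listener, cb) => listener.close(cb), () => {
  console.log(started.join(',')) // "a,b,c"
})
```

This contrasts with `eachSeries`, which would wait for each `close` to finish before starting the next.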
Interesting, thank you :)
Let's ensure CI is happy before merging
Yeah, looks like one test is unhappy; fixing now.
Force-pushed from 79de643 to d56df87
Should be happy now.

@dignifiedquire CI still says

@diasdavid I don't get it :/ all passing locally.

@dignifiedquire the only difference with CI is that it does a fresh npm install every time; have you tried that on your local machine?

Yeah, I did. Could you try as well?

Locally, it works for me too. So, the error is:

But that line is on the

@diasdavid ready for squash and ship.
squashed and shipped :) |
Fixes the error seen in orbitdb-archive/orbit#210 (comment)