This repository has been archived by the owner on Aug 23, 2019. It is now read-only.

clean up sync in async usage #297

Merged
merged 6 commits on Dec 20, 2018

Conversation

jacobheun
Contributor

This cleans up some logic in the identify flow where synchronous calls were being made inside async code, which can lead to stack bloat. It moves the sync logic out of the async control flow.

Relates to https://github.com/libp2p/js-libp2p-switch/issues/287

if (err) {
  return muxedConn.end(() => {
    callback(err, null)
  })
}

observedAddrs.forEach((oa) => {
  this.switch._peerInfo.multiaddrs.addSafe(oa)
})
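A minimal sketch of the pattern this PR applies (the helper name and the peer-info object below are illustrative stand-ins, not the actual switch internals): hoist the synchronous work out of the async callback chain so it no longer deepens the call stack.

```javascript
// Illustrative sketch, not the real switch code: the synchronous loop lives
// in a plain helper rather than inside an async callback.
function updateObservedAddrs (peerInfo, observedAddrs) {
  for (let i = 0; i < observedAddrs.length; i++) {
    peerInfo.multiaddrs.addSafe(observedAddrs[i])
  }
}

// Minimal stand-in for the peer-info object, just to exercise the helper
const peerInfo = {
  multiaddrs: {
    addrs: [],
    addSafe (addr) { this.addrs.push(addr) }
  }
}

updateObservedAddrs(peerInfo, ['/ip4/127.0.0.1/tcp/4001'])
```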
Contributor


I would recommend using a for loop here instead

  }
- ], (err, peerInfo) => {
+   (conn, cb) => identify.dialer(conn, cryptoPI, cb)
+ ], (err, peerInfo, observedAddrs) => {
Contributor


You might want to remove that waterfall call; that would improve things as well.

@jacobheun
Contributor Author

I refactored the waterfall logic to use await and created some utility methods to make that cleaner. Also removed forEach in favor of for(;;).
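A hedged sketch of the waterfall-to-await shape described above (the helper names are illustrative stand-ins, not the real switch internals): each waterfall step becomes a sequential await inside an async function, with errors surfacing as rejections instead of a leading err argument.

```javascript
// Illustrative stand-ins for two former waterfall steps
const dialIdentify = async (conn) => ({ id: conn.remoteId })
const getObservedAddrs = async (conn) => [conn.observedAddr]

// Before: async.waterfall([...], (err, peerInfo, observedAddrs) => { ... })
// After: sequential awaits; a throw in any step rejects the returned promise
async function identifyPeer (conn) {
  const peerInfo = await dialIdentify(conn)
  const observedAddrs = await getObservedAddrs(conn)
  return { peerInfo, observedAddrs }
}

identifyPeer({ remoteId: 'QmPeer', observedAddr: '/ip4/1.2.3.4/tcp/4001' })
  .then((results) => console.log(results.peerInfo.id))
```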

@vasco-santos (Member) left a comment


LGTM!

@dirkmc commented Dec 20, 2018

With regards to switching forEach loops to for loops etc, I would recommend instead focusing performance improvements on changes that make a difference in terms of big O notation: https://en.wikipedia.org/wiki/Big_O_notation


const { peerInfo, observedAddrs } = results

for (var i = 0; i < observedAddrs.length; i++) {

Can you use a for ... of loop here instead?

Contributor


for ... of still suffers from some performance issues.
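For reference, the two loop styles under discussion behave identically; they differ only in iteration machinery (for ... of goes through the iterator protocol, the indexed loop does not). A small sketch with illustrative data:

```javascript
// Same result either way; only the iteration mechanism differs
const addrs = ['/ip4/10.0.0.1/tcp/4001', '/ip4/10.0.0.2/tcp/4001']

// for ... of: allocates an iterator and calls .next() per element
const viaForOf = []
for (const addr of addrs) viaForOf.push(addr)

// indexed for: plain property access, no iterator involved
const viaIndex = []
for (let i = 0; i < addrs.length; i++) viaIndex.push(addrs[i])
```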

@@ -0,0 +1,49 @@
'use strict'

Can you use promisify instead of these functions? https://github.com/digitaldesignlabs/es6-promisify#readme

Contributor Author


I added these as an interim solution. We're going to be doing a lot of work soon for the async iterators initiative libp2p/js-libp2p#266, which will remove the need for these or promisify. If that ends up taking longer than we'd like, or we have more immediate needs in other parts of the code base, then I think switching over to promisify is warranted.


👍

@jacobheun
Contributor Author

In regards to overall performance improvements, we have an OKR for Q1 to get benchmarking in place for libp2p. That, along with the current work on ipfs benchmarking, will put us in a better position to do isolated analyses and see where the bottlenecks are in switch and the rest of libp2p. While I think those are very important, I want to avoid pushing more of that into this particular PR, as its goal was to fix an issue with synchronous code being executed asynchronously, to help alleviate #287. I appreciate all the great feedback!

@mcollina
Contributor

> With regards to switching forEach loops to for loops etc, I would recommend instead focusing performance improvements on changes that make a difference in terms of big O notation: https://en.wikipedia.org/wiki/Big_O_notation

I agree that the main gains are algorithmic improvements in big O terms. However, there is a lot to gain from making the code more optimizable by V8.

As an example, forEach would do n function calls, which would not appear in a big O analysis. These have a cost in terms of memory allocation and work inside V8 that would not bubble up when just reasoning about algorithms. Each of these can easily eat 5-10% of the time of a benchmark in a hot path.

This is not always a hot path, but I've seen this function in a significant number of flamegraphs.
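The n-function-calls point above can be sketched directly (illustrative data; both loops compute the same result, but only forEach pays a callback invocation per element):

```javascript
const addrs = ['a', 'b', 'c']

// forEach: one callback invocation per element (3 here), counted explicitly
let calls = 0
const viaForEach = []
addrs.forEach((x) => { calls++; viaForEach.push(x) })

// plain indexed loop: same work, all in a single stack frame, no extra calls
const viaLoop = []
for (let i = 0; i < addrs.length; i++) viaLoop.push(addrs[i])
```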
