events: Faster ascending array index #3976
Conversation
Do you have any benchmark results to show a speedup?
Using a numeric array of 1,000,000 elements and executing the old and the new versions of the arrayClone function 100 times each, I am getting the following results: arrayCloneOld (descending index) -> 445ms. This is an excellent post about this topic (tip number 3):
I tested both implementations using 5 different arrays (in benchd using node v5.1.0) of sizes 100000, 10000, 1000, 100, and 10. Both implementations performed about the same for me, with ~15% variance after ~100 runs.
Ok, I bumped up the number of runs and the variance drops to ~5.5-9%, and the for loop seems to be consistently a little faster. LGTM
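For reference, here is a sketch of the two variants being compared (the function names are mine, not from the actual lib/events.js diff):

```javascript
// Old approach: clone by walking the array with a descending index.
function arrayCloneDescending(arr, length) {
  var copy = new Array(length);
  var i = length;
  while (i--)
    copy[i] = arr[i];
  return copy;
}

// Proposed approach: ascending index. Filling the copy front-to-back
// is the pattern V8 tends to optimize better.
function arrayCloneAscending(arr, length) {
  var copy = new Array(length);
  for (var i = 0; i < length; i++)
    copy[i] = arr[i];
  return copy;
}

console.log(arrayCloneAscending([1, 2, 3], 3)); // [ 1, 2, 3 ]
```

Both produce identical shallow copies; only the iteration order differs.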
I personally always use
Ok, then let's use
@targos What about smaller numbers of events?
Just testing for loop vs slice (number of runs inversely proportional to length):
This sounds familiar; I remember implementing a similar thing where I had determined some arbitrary cutoff (could be a defined V8 constant) so that a loop was used for smaller lengths and
Comparison between current master (node2) and
Comparison between current master and for loop:
I don't have any stats on it, but I guess that the common case is to have a few listeners, so this PR LGTM.
I found the PR where I reverted back to a single algorithm. As much as I dislike using magic cutoff values like that, maybe it's worth re-adding... |
I am +0. Most of the time we will not have events on the order of 10Ks, I guess. Since the number of events normally ranges from 10s to 100s, based on the table shown above by targos, I tend to think that slice would be better.
lib/events.js
Outdated
  while (i--)
function arrayClone(arr, length) {
  var copy = new Array(length);
  for (var i = 0; i < length; i++)
I'll bet v8 transforms these into assembly that is about the same.
…on the length of the array
After a good amount of testing I got these results with the new implementation:
What do you guys think about it? The method is a bit longer, but it is able to scale much better than using just slice or just the old while loop.
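Pieced together from the diff fragments quoted in this review, the hybrid version looks roughly like this (the cutoff of 50 is the value from the original diff; the exact number is what the thread goes on to debate):

```javascript
// Hybrid clone: special-case a single listener, use a plain loop for
// small arrays, and fall back to Array#slice() for large ones.
function arrayClone(arr, length) {
  if (length === 1)
    return [arr[0]];
  if (length < 50) {
    // Small arrays: an ascending for loop beats slice() in V8.
    var copy = new Array(length);
    for (var i = 0; i < length; i++)
      copy[i] = arr[i];
    return copy;
  }
  // Large arrays: slice() wins.
  return arr.slice();
}
```

The single-element fast path matters because one listener per event is by far the most common case in an EventEmitter.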
Considering these benchmarks, this change seems reasonable. The speedup is measurable and the changes to the source are not excessive. I normally do not think that we should optimize for engine implementation details like I say go ahead, but maybe a comment explaining this specific implementation would be nice. In a few months it may not be obvious why this is being done.
lib/events.js
Outdated
function arrayClone(arr, length) {
  if (length === 1) {
    return [arr[0]];
  } else if (length < 50) {
No need for else. You're returning early.
Also, a code comment about why 50 was chosen. At first look the decision seems arbitrary.
I like using else if in those cases so the code looks more compact in simple functions. Anyway, in order to add a comment to the second case, a simple if looks better.
For me, concat is two times faster than slice for large arrays: return arr.concat();
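For context, Array#concat() called with no arguments returns a shallow copy, so it works as a clone as well:

```javascript
// concat() with no arguments produces a new array with the same elements.
const listeners = [1, 2, 3];
const copy = listeners.concat();

console.log(copy);               // [ 1, 2, 3 ]
console.log(copy === listeners); // false: it is a distinct array
```

Whether it beats slice() is an engine implementation detail, which is why the benchmarks in this thread disagree across Node versions.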
FWIW I was doing some more benchmarking on this today, and from what I'm seeing now the magic number seems to be more like 27 instead of 50 with the node master branch as of this writing.
I ran this simple script and got the following results. The elements of the arrays are milliseconds, from an array of 25 elements (index 0) to an array of 75 elements (index 49), running each operation 200,000 times. The result is that the magic number is more or less
Array of numbers:
Array of strings:
Here is the script:
Edit: nevermind, it looks like those are slow. |
The same as before, using the updated node version v5.4.1 and adding Array#concat(). In this case Array#concat() is slower than Array#slice(), so we will still get the best performance using a mix of a loop and Array#slice(), as in the pull request. Should I go with 30 as the magic number, then? @ChALkeR I added Array.from and Array.of to the benchmark but it didn't finish within about 30 seconds, so according to that they have to be extremely slow in this particular case.
Array of numbers:
Array of strings:
Here is the script:
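The scripts themselves were not preserved in this thread. A minimal stand-in along the same lines (timing a loop clone against slice() for several array sizes, 200,000 runs each; the names and structure are my assumption, not the original script) could look like:

```javascript
// Hypothetical micro-benchmark: time an ascending-loop clone against
// Array#slice() for a few array lengths. The numbers vary by machine
// and V8 version, which is exactly the point of the thread above.
function cloneLoop(arr) {
  var copy = new Array(arr.length);
  for (var i = 0; i < arr.length; i++)
    copy[i] = arr[i];
  return copy;
}

function cloneSlice(arr) {
  return arr.slice();
}

// Run fn(arr) `runs` times and return the elapsed wall time in ms.
function benchMs(fn, arr, runs) {
  var start = process.hrtime.bigint();
  for (var i = 0; i < runs; i++)
    fn(arr);
  return Number(process.hrtime.bigint() - start) / 1e6;
}

var RUNS = 200000;
for (var len = 25; len <= 75; len += 25) {
  var arr = [];
  for (var j = 0; j < len; j++)
    arr.push(j);
  console.log(len + ' elements -> loop: ' +
              benchMs(cloneLoop, arr, RUNS).toFixed(1) + 'ms, slice: ' +
              benchMs(cloneSlice, arr, RUNS).toFixed(1) + 'ms');
}
```

Note that a tight loop like this can be over-optimized by the JIT; benchmark.js (mentioned later in the thread) exists precisely to guard against that.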
From what I'm seeing, 27 might be a better choice, but anything between 27 and 30 should probably work.
Ewww, magic numbers are bleh, but there's obviously some gain here, so I'm good with this. Just so it's clear why that number (27-30) was chosen, can you add a bit more detail in the comments and give a point-in-time statement, e.g. "As of v5.2, benchmarks showed..."? That will give us a reference point if we decide to revisit this later on. Otherwise, LGTM
I would probably reference at least the V8 version (in addition to the node version) in such a comment, as something like this is most likely tied to V8.
Yes, that's not the most elegant code that I've ever written; anyhow, the function is simple and short enough to be easy to understand and review in the future (as soon as I add a few more comments, as you said). I see myself adding one or two listeners per EventEmitter object most of the time, so it is worth it.
lib/events.js
Outdated
  }

  return arr.slice();
}
No newline at the end of file?
…ber. Improved comments and added missing new line at the end of file.
Note that the v8 version in Not sure if that affects any benchmarks, but perhaps it would be better to re-run them to be on the safe side. That should be easy to track, because this is not merged yet =).
These are the updated results using the latest version of node on the master branch. I ran the same script on the same machine.
Array of numbers:
Array of strings:
Slower results for built-in functions
FWIW I tested the 3 methods with benchmark.js and I think the loop would be best for most cases:
Each array was filled by having every even element be an array of functions, and every other element set to a single function.
After narrowing it down a bit more, it looks like the threshold on master (v8 4.9) is now ~70, at which point switching over to
Just curious if anyone has done analysis on how many events are being passed through in different scenarios. For example, if nextTick is used, I've found that there's a ~98% chance that <= 4 are queued and processed before the event loop continues. Is there any chance the method will deopt because of the extra logic, since the JIT will most likely have optimized for the common case by the time the other path is hit?
Quick aside: marking this as a don't land on v4 for the time being. |
Force-pushed 7da4fd4 to c7066fb
Force-pushed c133999 to 83c7a88
@nodejs/ctc ... is this something we want to pursue? If not, I recommend closing.
Closing due to lack of forward progress on this. Can reopen and revisit if necessary.
Faster iteration over array using an ascending index.