feat: Improve performance of stack function #1281
Merged
Conversation
This one only works if the items are sorted by their start time. So unfortunately it doesn't play nice with custom ordering.
Now the stacking algorithm respects order correctly, while still making a significant improvement to performance.
Nice work! I'll check this PR today and get it in ASAP
yotamberk approved these changes Nov 6, 2021
All the tests I checked seemed to pass.
Nice work!
yotamberk changed the title from "Improve performance of stack function" to "feat: Improve performance of stack function" Nov 6, 2021
🎉 This PR is included in version 7.5.0 🎉 The release is available on: Your semantic-release bot 📦🚀
YoshiWalsh pushed a commit to YoshiWalsh/vis-timeline that referenced this pull request Dec 27, 2024
When I wrote visjs#1281 I was under the impression that calling `Array.slice` and then using the resulting array in a read-only capacity would be optimised by JS engines to use an offset/crop into the same memory as the original array, similar to how Node's `Buffer.subarray` works. I have since discovered that this is not the case: `Array.slice` is an expensive function that physically shallow-copies the array. Avoiding the use of `slice` significantly improves stacking performance with large amounts of items. (On a dataset of 6,000 items, this reduces stacking time by ~27%!)
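The follow-up change is essentially "iterate from an index instead of materialising a sub-array". Here is a minimal sketch of the idea, not the actual vis-timeline code (`overlaps` and the item shape are made up for illustration):

```js
// Before: `slice` shallow-copies the tail of the array on every call.
function collidesWithAnySliced(item, placedItems, startIndex) {
  return placedItems.slice(startIndex).some((other) => overlaps(item, other));
}

// After: iterate the original array from startIndex, so nothing is allocated.
function collidesWithAny(item, placedItems, startIndex) {
  for (let i = startIndex; i < placedItems.length; i++) {
    if (overlaps(item, placedItems[i])) return true;
  }
  return false;
}

// Placeholder for whatever rectangle-intersection test the real code uses.
function overlaps(a, b) {
  return a.left < b.right && a.right > b.left &&
         a.top < b.bottom && a.bottom > b.top;
}
```

Both versions treat `placedItems` as read-only; the second just avoids paying for a copy that is discarded immediately.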
YoshiWalsh pushed a commit to YoshiWalsh/vis-timeline that referenced this pull request Dec 27, 2024
When I wrote visjs#1281 I was under the impression that calling `Array.slice` and then using the resulting array in a read-only capacity would be optimised by JS engines to use an offset/crop into the same memory as the original array, similar to how Node's `Buffer.subarray` works. I have since discovered that this is not the case: in both Chrome & Firefox, `Array.slice` always carries the cost of shallow-copying the array. Avoiding the use of `slice` significantly improves stacking performance with large amounts of items. (On a dataset of 6,000 items, this reduces stacking time by ~27%!)
YoshiWalsh pushed a commit to YoshiWalsh/vis-timeline that referenced this pull request Dec 30, 2024
When I wrote visjs#1281 I was under the impression that calling `Array.slice` and then using the resulting array in a read-only capacity would be optimised by JS engines to use an offset/crop into the same memory as the original array, similar to how Node's `Buffer.subarray` works. I have since discovered that this is not the case: in both Chrome & Firefox, `Array.slice` always carries the cost of shallow-copying the array. Avoiding the use of `slice` significantly improves stacking performance with large amounts of items. (On a dataset of 6,000 items, this reduces stacking time by 5-20%!)
fixes #1233
Well, this was a fun challenge. This PR improves the performance of the `stack`, `substack`, and `stackSubgroups` functions. It also refactors them so that they all use a single common function.
I tested this against a ~1,600 item dataset. Before the change, stacking took 4.6 seconds; after the change, it takes 84 milliseconds.
I think the new algorithm is O(n log n) for the best case (order of items matches chronological order) and O(n^2) for the worst case (order of items is the exact inverse of chronological order). I believe the previous stacking algorithm was actually O(n^2 log n), and not the O(n^2) mentioned in yotamberk/timeline-plus#26. But I'm not a computer scientist, so someone else would have to confirm this.
I have tested this using the "subgroups" example and it appears to maintain the same stacking behaviour. I've also tested it with a large dataset in my employer's application and it looks identical there too. But I realise that this isn't exactly extensive testing, so please let me know if there's anything I missed.
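For readers who want a feel for what an order-respecting stacking pass looks like, here is a rough sketch. It is illustrative only and assumes items with `start`, `end`, `top`, and `height` fields; it is not the PR's actual implementation, which additionally limits how many placed items each new item is compared against.

```js
// Illustrative sketch: each item keeps its input order and is pushed down
// until it no longer overlaps anything placed before it.
function stackItems(items, margin = 5) {
  const placed = [];
  for (const item of items) {
    item.top = 0;
    let collided = true;
    while (collided) {
      collided = false;
      for (const other of placed) {
        if (horizontallyOverlap(item, other, margin) &&
            verticallyOverlap(item, other, margin)) {
          // Bump the item to just below the one it hit, then re-check everything.
          item.top = other.top + other.height + margin;
          collided = true;
        }
      }
    }
    placed.push(item);
  }
  return items;
}

function horizontallyOverlap(a, b, margin) {
  return a.start < b.end + margin && a.end + margin > b.start;
}

function verticallyOverlap(a, b, margin) {
  return a.top < b.top + b.height + margin && a.top + a.height + margin > b.top;
}
```

In this naive form every item is compared against every previously placed item, which gives the quadratic behaviour mentioned above; skipping placed items that can no longer overlap horizontally is, presumably, what brings the common case closer to O(n log n), and it is also where the `Array.slice` discussion in the commits referenced above comes in.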