Reduce linked node size in Enumerable.Append/Prepend/Union #15389
OK, I was finally able to post perf tests! 🎉 (see the description) @JonHanna, @stephentoub, @VSadov PTAL. @karelz, this can be un-marked as no-merge.
I'm honestly not sure whether to consider this an improvement or not. They all benefit in terms of memory, but if the objects produced were short-lived (which is very common with LINQ) it would all be collected anyway, no? I'm not sure how to judge the value of something that saves in that regard but increases GC calls. It does seem to have simplified things a bit, and that's always a good thing.
The performance tests were only written to illustrate the impact on memory retention, not CPU time. They don't actually iterate any of the enumerables returned from the LINQ methods; iteration would quickly overshadow any impact on the time it takes to create the iterators. I have encountered significant speed/memory regressions when calling Count/ToArray/ToList on Concat iterators, though, so you are correct in that sense. This is mostly because there are extra virtual calls and unnecessary re-walks of the linked list.
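As a rough illustration, the shape of such a retention-focused measurement might look like the following (a hypothetical sketch, not the actual corefx perf suite — `MeasureRetainedBytes` and the chain length are made up for illustration). It builds a long `Append` chain without ever enumerating it, so iterator construction and the memory it retains dominate, rather than iteration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of a retention-focused test, not the corefx perf suite:
// build a long Append chain without ever enumerating it, so the memory held
// by the chained iterators/nodes is what gets measured.
internal static class AppendRetentionSketch
{
    internal static long MeasureRetainedBytes(int chainLength)
    {
        long before = GC.GetTotalMemory(forceFullCollection: true);

        IEnumerable<int> source = Enumerable.Empty<int>();
        for (int i = 0; i < chainLength; i++)
        {
            source = source.Append(i); // chained, never enumerated
        }

        long after = GC.GetTotalMemory(forceFullCollection: false);
        GC.KeepAlive(source); // keep the whole chain reachable during measurement
        return after - before;
    }
}
```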
I actually thought that allowing older iterators to be GC'd was the reason you used that approach in the first place.

BTW, there is also a benefit (albeit small) in terms of CPU time here too: since nodes have a uniform type we no longer have to typecast when walking the linked list, since the tail is just represented as null. https://github.com/dotnet/corefx/pull/15389/files#diff-49756bcd3a3a836bf3963f982c6af0f8L253
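As a minimal illustration of that point (a hypothetical simplified node type, not the actual `SingleLinkedNode` in corefx): with a uniform node type the end of the chain is just `null`, so walking it needs only a null check per step — no `is`/`as` type tests or casts:

```csharp
// Hypothetical simplified node, not the corefx SingleLinkedNode implementation.
internal sealed class Node<TSource>
{
    internal readonly TSource Item;
    internal readonly Node<TSource> Linked; // null marks the end of the chain

    internal Node(TSource item, Node<TSource> linked)
    {
        Item = item;
        Linked = linked;
    }

    internal int GetCount()
    {
        int count = 0;
        // Uniform node type: a plain null check per step, no typecasts.
        for (Node<TSource> node = this; node != null; node = node.Linked)
        {
            count++;
        }

        return count;
    }
}
```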
Ah. I see where you're going now. So all that's left from what I've said above is that it seems to have simplified things, which is good 😄
Looking at the results, at first I thought perhaps it was forcing more GC rather than allowing sooner GC. Seeing it's the latter, this seems like good stuff. I like the `UnionWith` addition, too. Nothing seems awry, so it's all down to the perf results really. LGTM.
Done |
LGTM
Reduce linked node size in Enumerable.Append/Prepend/Union Commit migrated from dotnet/corefx@d0001fe
For `Append`/`Prepend`, the count of previous iterators isn't needed by this iterator, so we can move that state from the nodes to the iterators, which can then be GC'd on successive linked Appends/Prepends.

Similarly, all of the non-readonly state for `Union` of previous iterators isn't relevant for this iterator, so use `SingleLinkedNode` to only keep alive what we need (the sources). Additionally, all of the iterators have the same comparer, so we only need to store the comparer in the latest node.

Introduce `Set.UnionWith` for brevity and move `SingleLinkedNode` into its own file.

edit: Was finally able to run perf tests for this. GCs have increased, as expected, because there is a substantial reduction in leaked memory. `Concat` and `Union` see 50-60% reductions in leaked memory; `Append` and `Prepend` about 20%.

edit 2: Ignore the results for `Concat` — I hit a few complications there and the changes became pretty big. I'm going to save them for another PR.

/cc @JonHanna @VSadov
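As a rough sketch of the `Union` layout described above (names and shapes are hypothetical, not the corefx source): each chained `Union` call adds one immutable node holding just its source, the comparer is kept only on the newest node, and none of the mutable enumeration state lives in the nodes:

```csharp
using System.Collections.Generic;

// Hypothetical sketch, not the corefx source: one immutable node per chained
// Union call. Only the newest (head) node's comparer is consulted.
internal sealed class UnionNode<TSource>
{
    internal readonly IEnumerable<TSource> Source;
    internal readonly UnionNode<TSource> Previous;         // null terminates the chain
    internal readonly IEqualityComparer<TSource> Comparer; // meaningful on the head node only

    internal UnionNode(
        IEnumerable<TSource> source,
        UnionNode<TSource> previous,
        IEqualityComparer<TSource> comparer)
    {
        Source = source;
        Previous = previous;
        Comparer = comparer;
    }

    // Chaining allocates only a new head; existing nodes are untouched, so
    // superseded iterator objects stay collectible.
    internal UnionNode<TSource> Add(IEnumerable<TSource> next) =>
        new UnionNode<TSource>(next, this, Comparer);
}
```

In this sketch, the mutable state for an actual enumeration — the set of items already yielded and the current source enumerator — would live in the iterator object rather than in these nodes, which is what lets older iterators in a chain be collected.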