perf: Add managed packet recycling #389
Conversation
Any feedback @cheenamalhotra @David-Engel @saurabh500?
I really like these perf changes but it does take some time to digest what exactly is going on in the changes. I've been trying to do that today and I think it looks good but I'd like a second set of eyes on it.
Also, please merge changes from master to get fresh build results. I think some tests that were already failing when you originally created the PR have since been fixed (although I see the enclave pipeline still seems to be failing). I also adjusted the PR pipelines to retain build logs for a bit longer.
This is a pretty deep change, so I was assuming it would be post-2.0, but if it's a poor path to take I'd prefer to get that feedback so I can work along other avenues.
@Wraith2 No, I think it's good. For 2.0, we are okay with a bit more risk since we are bumping the major version. And since we have our own release vehicle, it's easier for us to hotfix issues quickly. The problem with this one is just finding the time it takes to wrap your head around the changes. Feel free to tag me or request my review directly if your PRs aren't getting attention (I have to filter GitHub notification emails to other folders or my inbox is unmanageable) and I will do my best to help review them.
I haven't looked into this PR in depth. However, I saw that a packet recycling mechanism already exists for Native SNI (lines 324 to 382 in 2e3f39e).
I don't believe this cache was implemented for Managed SNI. Would using it help maintain parity with the way Native SNI caches packets, instead of creating another way to manage a packet cache?
BTW I don't mean to say that what has been done in this PR is wrong.
I did see that detail, and in the past I've noted that vestiges of the same structure were in the managed state object but never completed. I chose to push the pool down to a lower layer because it makes the lifetime management easier; when I was routing packets around, it got awkward at a higher layer. Packets come in at the physical level, and having to navigate all the way up to the state object to get a wrapper would mean transporting bits of incoming data up to the state object, back down to the physical level, and then up again to be handled, possibly passing through demuxing or async queuing along the way.

Why is this better?
- It's thread-safe, because the pool uses interlocked operations, and it's bounded, which the stack-based approach is not.
- The aspnet pool was optimized for high performance on small pools, which suits this approach and means we inherit the testing and design work the ASP.NET team already did.
- It's more localized in performance terms: any single connection searches only its own small pool for a new packet, which prevents packet fetch delay from being unbounded.
- The lifetime handling of the async packets feels a lot cleaner; that disposeAfter bool in async sends really bothered me.
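To make the shape concrete, here's a minimal sketch of the kind of bounded, interlocked pool described above, loosely modeled on the simple pool in the aspnet extensions repo; the names (`PacketPool<T>`, `TryGet`, `Return`) are illustrative rather than the PR's actual `SNIPacketPool` API:

```csharp
using System.Threading;

// Illustrative sketch only: a small, bounded pool where rent/return are
// lock-free via Interlocked exchanges, so it is safe to call from the async
// send/receive paths without taking a lock.
internal sealed class PacketPool<T> where T : class
{
    private readonly T[] _items; // fixed length, so the pool can never grow unbounded
    private T _firstItem;        // fast-path slot checked before the array

    public PacketPool(int maxRetained)
    {
        _items = new T[maxRetained - 1];
    }

    public bool TryGet(out T item)
    {
        // Try the fast-path slot first, then scan the (small) array.
        item = Interlocked.Exchange(ref _firstItem, null);
        if (item != null)
        {
            return true;
        }
        T[] items = _items;
        for (int i = 0; i < items.Length; i++)
        {
            item = Interlocked.Exchange(ref items[i], null);
            if (item != null)
            {
                return true;
            }
        }
        return false; // caller allocates; the new instance can still be returned later
    }

    public void Return(T item)
    {
        // If every slot is already occupied the item is simply dropped and
        // collected by the GC, which is what keeps the pool bounded.
        if (Interlocked.CompareExchange(ref _firstItem, item, null) != null)
        {
            T[] items = _items;
            for (int i = 0; i < items.Length; i++)
            {
                if (Interlocked.CompareExchange(ref items[i], item, null) == null)
                {
                    return;
                }
            }
        }
    }
}
```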
Hi @Wraith2 Please update your branch to "master" to trigger new builds. |
Branch force-pushed from 8a112a8 to dc0bed7.
Done, that was an irritating set of merge conflicts.
Any chance of movement on this PR? It's coming up to 3 months open now.
Branch force-pushed from e0898a7 to e63b8b8.
LGTM
Next week this will have been waiting 4 months. I don't want to have to start buying birthday cakes for PRs, so can I get some feedback on this from the requested reviewers please @cheenamalhotra @karinazhou @JRahnama @DavoudEshtehari? If it's too complicated to review here, I can be available to talk it through in a teleconference if that helps.
Hi @Wraith2, we intend to take this for 2.0.0-preview-4 (hence the tag). We've all been pretty busy with complex in-hand activities and haven't spent enough time on this PR yet, but we're going to review it soon. Thanks for your patience!
Resolved review comments:
- src/Microsoft.Data.SqlClient/netcore/src/Microsoft/Data/SqlClient/SNI/SNIPacketPool.cs (outdated)
- src/Microsoft.Data.SqlClient/netcore/src/Microsoft/Data/SqlClient/SNI/SNIPhysicalHandle.cs
- src/Microsoft.Data.SqlClient/netcore/src/Microsoft/Data/SqlClient/SNI/SNIPhysicalHandle.cs (outdated)
- src/Microsoft.Data.SqlClient/netcore/src/Microsoft/Data/SqlClient/SNI/SNIPhysicalHandle.cs
Hi @Wraith2
Could you make the final improvements mentioned in the last comments, so we can get this merged?
- Fix the extra lines.
- Update the obsolete link.
Updated.
When using the managed network interface, `SNIPacket` instances are used to contain data being sent to and received from the socket stream. These packets rent their buffers from the ArrayPool to try to avoid constantly allocating and discarding large (by default 8K) arrays. This isn't being particularly effective, because a lot of packet instances are dropped to the GC, which loses the rented array to the GC as well. That isn't harmful, it just isn't a good use of the ArrayPool, and these arrays are the most prevalent class in memory traces.

This PR adds a new class to the handle hierarchy, `SNIPhysicalHandle`, and adds a packet pool to it. The packet pool implementation is derived from a very simple one used and tested in the aspnet extensions repo. The physical handle is used as the base class for the named pipe and TCP handles; the MARS handle composes those and defers to the physical connection for the abstract members. The abstract members are used to rent and return packets to the pool. This means that each 'real' connection has a packet pool and that packets used to read and write from the physical connection are served from that pool. The pool is small; I think each pool should usually contain no more than 3 packets (1 send, 1 receive, 1 attention), but it can contain more. It isn't invalid to use more packets than are in the pool, as long as they are created and disposed through the pool. MARS will need more packets for partial packet reconstruction.

In order to track the lifetime of packets and make sure they were always sourced from and returned to the pool, I needed to add a lot of debugging infrastructure to the packet and pool. This allowed me to track down a subtle packet reuse bug that was fixed in #350. This code is mostly debug only. There is also some additional tracing behind a compiler #define in the packet to allow tracing of individual packets; it is very expensive in performance terms but very valuable when you need to know what is really going on with a piece of data.
Reuse of packets cleans up quite a lot of lifetime handling code and means that the read and write paths no longer need to care about who is responsible for cleaning up a packet; it's handled elsewhere. It also means that the allocation of the async read callback can be cached and reused, which is another small memory saving.
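The cached-callback point is the usual "one delegate per connection instead of one closure per read" pattern; a generic sketch, with types that are illustrative rather than the PR's:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

// Illustrative only: because a received packet now always goes back to the pool,
// the completion handler doesn't need to capture per-call ownership state, so a
// single delegate instance can be created per connection and reused for every read.
internal sealed class Receiver
{
    private readonly Stream _stream;
    private readonly byte[] _buffer = new byte[8192];
    private readonly Action<Task<int>> _onRead; // allocated once, not per read

    public Receiver(Stream stream)
    {
        _stream = stream;
        _onRead = OnRead; // method-group conversion; no closure is captured
    }

    public void BeginRead()
    {
        // Previously the equivalent of: ReadAsync(...).ContinueWith(t => { /* capture */ });
        _stream.ReadAsync(_buffer, 0, _buffer.Length).ContinueWith(_onRead);
    }

    private void OnRead(Task<int> read)
    {
        // process the bytes, return the packet/buffer to the pool, start the next read...
    }
}
```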
This has passed all the tests and I've put it through some pretty punishing benching with the DataAccessPerformance benchmark project used to test throughput performance. It's stable and roughly 7% faster on my configuration.