Pool SocketSenders #30771
What is the perf impact of renting and returning senders to the pool?
In my tests I haven't seen any impact on throughput, though I only ran the plaintext benchmark. I can't think of a case where this would show up as a real problem; contention should be low because the queues are sharded.
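For illustration, a minimal sketch of the sharding, with illustrative names and a simplified `SocketSender` stand-in for Kestrel's internal type; one pool is created per IOQueue, so connections scheduled on different queues rent and return against different `ConcurrentQueue`s:

```csharp
using System;
using System.Buffers;
using System.Collections.Concurrent;
using System.Net.Sockets;
using System.Threading.Tasks;

// Simplified stand-in for Kestrel's internal SocketSender (the real type wraps
// a SocketAsyncEventArgs and a reusable buffer list).
public sealed class SocketSender : IDisposable
{
    public async ValueTask SendAsync(Socket socket, ReadOnlySequence<byte> buffers)
    {
        foreach (var segment in buffers)
        {
            await socket.SendAsync(segment, SocketFlags.None);
        }
    }

    public void Reset() { /* drop references to the last send's buffers */ }

    public void Dispose() { }
}

// Hypothetical shape of the pool: the transport creates one of these per
// IOQueue, so contention on the ConcurrentQueue is limited to connections
// that happen to be scheduled on the same queue.
public sealed class SocketSenderPool
{
    private readonly ConcurrentQueue<SocketSender> _queue = new();

    public SocketSender Rent()
        => _queue.TryDequeue(out var sender) ? sender : new SocketSender();

    public void Return(SocketSender sender)
    {
        sender.Reset();
        _queue.Enqueue(sender);
    }
}
```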
I'm a little confused about why we need to rent and return for each send. Can't we rent once at the beginning of ProcessSends and return it afterwards (calling Reset accordingly)?
The entire purpose of the change is to avoid that. Today we have one per connection; this reduces it from one per connection to one per send operation, which is why we go from 5000 in the test below to 8.
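Concretely, the per-send flow looks roughly like the sketch below (building on the hypothetical pool above; the `ProcessSends` shape and parameter names are assumptions): a sender is rented just before a send and returned as soon as that send completes, instead of one sender living for the whole connection.

```csharp
// Requires System.IO.Pipelines plus the SocketSender/SocketSenderPool sketch above.
private async Task ProcessSends(Socket socket, PipeReader output, SocketSenderPool pool)
{
    while (true)
    {
        var result = await output.ReadAsync();
        var buffer = result.Buffer;

        if (!buffer.IsEmpty)
        {
            // Rented for exactly one send operation, then immediately pooled
            // again, so idle connections hold no sender at all.
            var sender = pool.Rent();
            await sender.SendAsync(socket, buffer);
            pool.Return(sender);
        }

        output.AdvanceTo(buffer.End);

        if (result.IsCompleted || result.IsCanceled)
        {
            break;
        }
    }
}
```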
I see. So this reduces allocations with a pool, as well as reducing the amount of time each object is in use.
If we have a limit of 1024, couldn't that be worse when there's a sudden increase in traffic that then subsides? Is that something to be concerned about here?
Realistically, for that to happen you would need the TCP buffers in the kernel to fill up per connection, by sending massive payloads to a client that doesn't read them fast enough. The only way the number of concurrent operations exceeds the number of executing sends is when the sends themselves go async, and that is very rare.
That said, a SocketSender is about 300 bytes a pop and we default to a maximum of 16 IOQueues. If by some glitch we ended up with 1024 entries per IOQueue, we'd have roughly 5MB of unfreeable memory.
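As a back-of-the-envelope check on that worst case (using the approximate figures above):

```csharp
const int bytesPerSender = 300;    // approximate size of one SocketSender
const int maxPerQueue    = 1024;   // pool cap per IOQueue
const int ioQueueCount   = 16;     // default maximum number of IOQueues

var worstCaseBytes = (long)bytesPerSender * maxPerQueue * ioQueueCount;
Console.WriteLine($"{worstCaseBytes / (1024.0 * 1024.0):F1} MB");   // ≈ 4.7 MB
```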
What happens if SendAsync errors? Should the return be in a try/finally?
If it errors, it's disposed (further up the stack, where exceptions are caught). No, it shouldn't be in a try/finally; we don't need to return things to the pool in the error case.
Why not make the _sender field a local var?
So I can dispose the sender in the error case.
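Roughly the shape in question (a hedged sketch; field and method names are assumptions): the rented sender lives in a field so the error handling further up the stack can dispose an in-flight sender instead of returning it to the pool.

```csharp
private SocketSender? _sender;

private async Task SendBuffer(SocketSenderPool pool, Socket socket, ReadOnlySequence<byte> buffer)
{
    _sender = pool.Rent();
    await _sender.SendAsync(socket, buffer);

    // Success: recycle, then clear the field so the cleanup in DoSend never
    // disposes a sender that is already back in the pool.
    pool.Return(_sender);
    _sender = null;
}

private async Task DoSend(SocketSenderPool pool, Socket socket, PipeReader output)
{
    try
    {
        while (true)
        {
            var result = await output.ReadAsync();
            await SendBuffer(pool, socket, result.Buffer);
            output.AdvanceTo(result.Buffer.End);

            if (result.IsCompleted || result.IsCanceled)
            {
                break;
            }
        }
    }
    catch (SocketException)
    {
        // Connection-level errors are caught up here, not around each send.
    }
    finally
    {
        // A sender that faulted mid-send never reached Return; dispose it
        // rather than putting a sender in an unknown state back into the pool.
        _sender?.Dispose();
    }
}
```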
Need some comments explaining why. Is there a test for it?
We don't mock the networking-stack part of the sockets transport. There's just no need to reuse things in the face of an exception, but I'll add a comment.
I don't see the value in making sure this gets disposed by trying to mock the SocketSender; we don't have tests making sure the Socket itself is disposed either.
How much memory is expected to be used by the buffer list?
Are you asking how big the list will get?
Since we aren't clearing here, how much idle memory will be held in the buffer list?
We clear in Return.
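A hedged sketch of what that clearing looks like, assuming the sender derives from `SocketAsyncEventArgs` the way Kestrel's internal one does; the list keeps its capacity, so the idle memory is just the small empty backing array rather than the payload buffers from the last send.

```csharp
using System.Net.Sockets;

// Illustrative stand-in for the pooled sender's reset logic.
internal sealed class PooledSocketSender : SocketAsyncEventArgs
{
    // Called from the pool's Return: drop references to the last send's
    // buffers so an idle, pooled sender doesn't keep them alive between sends.
    public void Reset()
    {
        if (BufferList != null)
        {
            BufferList.Clear();
        }
        else
        {
            SetBuffer(null, 0, 0);
        }
    }
}
```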
Did you look to see how many were created with 5000 websockets?
Yes, 8 on my machine (8 cores).
1024 seems high to me. Is this the maximum number of sends occurring at the same time?
What number do you like? A multiple of the number of cores?
Also, this is a maximum; it doesn't mean we start with this many or that we'll ever have this many, just that we won't go past it.
We don't want to be strict. I'll pick whatever number y'all want, though, as long as we don't introduce locking into this code path.
You can be strict without locking. You can use Interlocked.Increment/Decrement. ConcurrentQueue.Enqueue is already doing its own Interlocked.Increment at a minimum.
OK I've complicated the code.
Also, there's no way that I can think of to make this accurate without locking. Review the code and let me know.
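Roughly what that looks like (a sketch, not the PR's exact code, reusing the `SocketSender` stand-in from the earlier sketch): an `Interlocked` counter gates admission to the `ConcurrentQueue`, so the cap holds without any locks, at the cost of the count being transiently off under races.

```csharp
using System.Collections.Concurrent;
using System.Threading;

internal sealed class BoundedSenderPool
{
    private const int MaxQueueSize = 1024;   // per-IOQueue cap

    private readonly ConcurrentQueue<SocketSender> _queue = new();
    private int _count;

    public SocketSender Rent()
    {
        if (_queue.TryDequeue(out var sender))
        {
            Interlocked.Decrement(ref _count);
            return sender;
        }

        return new SocketSender();
    }

    public void Return(SocketSender sender)
    {
        // Reserve a slot before enqueuing; if the pool is already at the cap,
        // give the slot back and drop the sender instead of growing the queue.
        // Concurrent rents/returns can make the count briefly inaccurate,
        // which is the "not accurate without locking" trade-off above.
        if (Interlocked.Increment(ref _count) > MaxQueueSize)
        {
            Interlocked.Decrement(ref _count);
            sender.Dispose();
            return;
        }

        sender.Reset();
        _queue.Enqueue(sender);
    }
}
```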
Is thread safety an issue when disposing? What happens if there is a sender that hasn't been returned to the queue?
That's not a concern. It'll probably leak. We have that same issue with the memory pool.
To clarify, in the face of ungraceful shutdown, bad things can happen with both the memory pool and now this pool.