`SocketInput` and `SocketOutput` both have `_head` and `_tail` fields referencing `MemoryPoolBlock`s. In both cases, the bytes between `_head` and `_tail` haven't been consumed.
When all the bytes are fully consumed, `_head == _tail` and the `Start` and `End` properties of the block are also equal. Still, the fully consumed block is never returned to the `MemoryPool` until the connection is closed. This causes each idle connection to waste 8KB of memory even when the write buffers are completely flushed and the read allocation callback hasn't been called.
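A minimal sketch of the behavior being suggested, using simplified stand-ins for Kestrel's internals (the `MemoryPoolBlock`, `MemoryPool`, `Lease`, `Return`, and `ReturnBlockIfFullyConsumed` members below are illustrative, not Kestrel's actual API): once the region between `_head` and `_tail` is empty, the block could be handed back to the pool immediately instead of being held until the connection closes.

```csharp
using System.Collections.Concurrent;

// Simplified stand-ins for Kestrel's pooled-block types (illustrative only).
class MemoryPoolBlock
{
    public byte[] Data = new byte[4096];
    public int Start;              // index of the first unconsumed byte
    public int End;                // index one past the last written byte
    public MemoryPool Pool;        // the pool this block was leased from
}

class MemoryPool
{
    private readonly ConcurrentQueue<MemoryPoolBlock> _blocks =
        new ConcurrentQueue<MemoryPoolBlock>();

    public MemoryPoolBlock Lease()
    {
        MemoryPoolBlock block;
        if (!_blocks.TryDequeue(out block))
        {
            block = new MemoryPoolBlock { Pool = this };
        }
        block.Start = block.End = 0;
        return block;
    }

    public void Return(MemoryPoolBlock block) => _blocks.Enqueue(block);
}

class SocketInputSketch
{
    private MemoryPoolBlock _head;
    private MemoryPoolBlock _tail;

    // The suggested behavior: when everything between _head and _tail has
    // been consumed (Start has caught up with End on the single remaining
    // block), return the block to the pool instead of keeping it alive for
    // the lifetime of the connection.
    public void ReturnBlockIfFullyConsumed()
    {
        if (_head != null && _head == _tail && _head.Start == _head.End)
        {
            var block = _head;
            _head = _tail = null;
            block.Pool.Return(block);
        }
    }
}
```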
The 8KB works well for high-throughput and pipelined requests. I'm trying to understand the memory impact for a large set of non-idle connections. Do we need to configure `ReceiveBufferSize` and `SendBufferSize` on these sockets?
The default value for `Socket.ReceiveBufferSize` and `Socket.SendBufferSize` is 8KB. That would explain why we saw 16-17KB of memory usage per connection with Kestrel (8KB each for the receive and send buffers).
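For reference, these are standard `System.Net.Sockets.Socket` properties, so shrinking the OS-level buffers would look roughly like the sketch below. The 1KB value is purely illustrative (tied to the ~200-byte payloads discussed below), and Kestrel's transport may not expose this knob so directly.

```csharp
using System.Net.Sockets;

static class SocketBufferConfig
{
    // Illustrative only: shrink the per-socket OS buffers from the 8KB
    // default. 1KB is a guess based on the ~200-byte payloads discussed
    // in this thread, not a measured or recommended value.
    public static void UseSmallBuffers(Socket socket)
    {
        socket.ReceiveBufferSize = 1024; // default: 8192 bytes
        socket.SendBufferSize = 1024;    // default: 8192 bytes
    }
}
```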
For now I am not too concerned about memory usage when connections are idle. We are more interested in bringing down the memory usage per connection (active or not). That's why Sajay mentioned the buffer size config.
A good buffer size depends on average real-world usage, and I believe 8KB was chosen as a good tradeoff. The scenario in my test was just a hello-world MVC app, and the payload size for each request/response was around 200 bytes. So if we set the buffer size to 1KB, in theory we should be able to reduce the memory usage to 1/8. But we need better knowledge of real-world scenarios.
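As a rough back-of-the-envelope check of that 1/8 figure (my numbers, assuming the per-connection cost is dominated by one receive and one send buffer):

```csharp
using System;

class BufferMath
{
    static void Main()
    {
        const long connections = 60000;          // connection count from the test below
        const long defaultPerConn = 2 * 8192;    // receive + send at the 8KB default
        const long smallPerConn = 2 * 1024;      // receive + send at 1KB

        // ~983 MB of buffer memory at the default size...
        Console.WriteLine(connections * defaultPerConn); // 983040000
        // ...versus ~123 MB at 1KB, i.e. exactly 1/8.
        Console.WriteLine(connections * smallPerConn);   // 122880000
    }
}
```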
Actually, reducing the memory usage for idle connections may be worthwhile too. In my test, I was making each of 60k connections idle for 15 seconds to mimic a real scenario (each wcat client sleeps 10 seconds between two consecutive requests). As a result, at any point in time only ~1k connections are active (with a request in flight) and most of the connections are idle. If we can reduce the memory usage of idle connections to almost zero, we could potentially cut the memory usage by 90% for this scenario. There will be more GC load, though, and we need to measure that.