Huge byte[] allocations in Large Object Heap after big load #2673
What chunk size are you writing to the body? e.g. a WriteAsync loop of 4096 bytes, or writing 2MB in a single chunk?
Hi @benaadams, guys. I'm working with @kabazakra on this issue.
What we are doing in our code is basically:

```csharp
static PostUpdate dummy_post;

public async Task<IActionResult> Events([FromQuery] string query)
{
    if (dummy_post != null)
    {
        // we just keep giving away our first response
        // to skip database reads and other business logic
        return Ok(dummy_post);
    }
    dummy_post = await GetResult(query); // business code returns an object of about 2.0MB
    return Ok(dummy_post);
}
```

where Ok is Microsoft.AspNetCore.Mvc.OkObjectResult Ok(object value). We looked at Fiddler and saw the response was chunked at 1024 bytes/chunk.
Might want to raise this in Mvc? https://github.com/aspnet/Mvc (don't know what type …). From the Kestrel side of things: it will only put so much on the network at any one time (the TCP window size), so if 2MB is written in one shot, it still has to hold all of that queued in memory until it's all written out. If that's done concurrently with many connections, that's many 2MB blocks it has to allocate while it's holding on to them. Since you are using an …
To give a more concrete example: the top action below will not allocate much when used concurrently / with many simultaneous connections, whereas the bottom one will, since it writes 2MB without checking for back pressure along the way, so Kestrel has to buffer it all up:

```csharp
static byte[] dummy_post = new byte[1024 * 1024 * 2]; // 2MB array

public async Task SmallAlloc([FromQuery] string query)
{
    Response.StatusCode = 200;
    var post = dummy_post;

    // 2048 byte chunks
    for (var i = 0; i < post.Length; i += 2048)
    {
        await Response.Body.WriteAsync(post, i, Math.Min(2048, post.Length - i));
    }
}

public async Task LargeAlloc([FromQuery] string query)
{
    Response.StatusCode = 200;
    var post = dummy_post;

    // Single 2MB chunk
    await Response.Body.WriteAsync(post, 0, post.Length);
}
```

If you only have one connection, however, the bottom one still shouldn't allocate much.
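Not from the thread, but a related sketch: when a large payload comes from a stream (a file here; the path is hypothetical) rather than an in-memory array, Stream.CopyToAsync with a small buffer size gives the same back-pressure-friendly behavior, because it awaits each chunked write:

```csharp
// Sketch only: serves a large payload from disk in small awaited chunks,
// so Kestrel never has to queue the whole response in memory at once.
// Assumes it lives in a controller like the examples above (System.IO imported).
public async Task SmallAllocFromStream([FromQuery] string query)
{
    Response.StatusCode = 200;
    using (var source = new FileStream("post.bin", FileMode.Open, FileAccess.Read))
    {
        await source.CopyToAsync(Response.Body, bufferSize: 2048);
    }
}
```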
Hi, @benaadams. What I see from the sources is that KestrelThread has a reference to the MemoryPool. It is normal to allocate bytes in Gen2 or the LOH while processing requests, but what I don't see is the logic to free resources there (as I understand it, at least one KestrelThread should always exist). Maybe you can point me to the code or just explain how this works in Kestrel: does it free the allocated (pinned) bytes (blocks, slabs, etc.), and what is the trigger for that? What happens to the memory when the application is idle? In my case the app continued to hold memory for the MemoryPool even after testing had stopped. Thanks,
The GC rarely frees memory as quickly as you might expect it to. Can you share the structure of PostUpdate?
@kabazakra You're right about Kestrel never shrinking the memory pool. Being a pool, buffers get reused. But if a lot of buffers are needed simultaneously, the pool will grow to meet this need and will not shrink when utilization lowers. The upside is that if utilization then spikes again afterwards, new allocations are avoided. This issue is basically a dupe of #609 which was closed long ago.
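To illustrate the grow-but-never-shrink behavior being described (this is only a sketch, not Kestrel's actual pool implementation):

```csharp
using System.Collections.Concurrent;

// Illustration only, not Kestrel's code: a pool that reuses returned
// blocks but never frees them, so its size is set by peak concurrent demand.
class GrowOnlyBufferPool
{
    private readonly ConcurrentBag<byte[]> _free = new ConcurrentBag<byte[]>();
    private readonly int _blockSize;

    public GrowOnlyBufferPool(int blockSize) { _blockSize = blockSize; }

    public byte[] Lease()
    {
        if (_free.TryTake(out var block))
            return block;                // reuse an idle block if one exists
        return new byte[_blockSize];     // otherwise grow; this memory is never given back
    }

    public void Return(byte[] block) { _free.Add(block); }
}
```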
The above being said, with Kestrel's default configuration, 2MB of memory will never be leased from the pool for a single request; the maximum is 128KB per request. So it seems more likely that your …

Additionally, Kestrel's memory pool intentionally allocates byte arrays large enough to end up in the LOH, precisely because these arrays stay referenced for the entire lifetime of the server. This helps ensure the GC spends its more frequent sweeps scanning objects that have a better chance of becoming unreachable.

It seems unlikely that Kestrel's memory pool is the ultimate cause of your OOM issues, because the pool only allocates more byte arrays if all of the memory it has previously allocated is currently in use processing requests. Under high load, memory pool utilization should be extremely high; the only time memory sits in the pool unused is under low load, when you generally wouldn't expect to see an OOM.

Can you collect a core dump of your application when it OOMs?
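For context on the LOH point above, a quick sketch of the boundary involved (85,000 bytes is the default large-object threshold, and the GC reports LOH objects as generation 2):

```csharp
// Arrays at or above ~85,000 bytes go straight to the large object heap.
var small = new byte[80 * 1024];   // 81,920 bytes: regular (small object) heap
var large = new byte[128 * 1024];  // 131,072 bytes: large object heap

Console.WriteLine(GC.GetGeneration(small)); // 0 (freshly allocated, gen 0)
Console.WriteLine(GC.GetGeneration(large)); // 2 (the LOH is collected with gen 2)
```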
@kabazakra I don't think that restarting your app should be necessary because of Kestrel's memory pool. It would require more than 75,000 concurrent connections fully buffering in both directions for Kestrel's memory pool to grow to 10GB. Gathering a dump should help you determine what's causing your app's memory consumption to grow so large.
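As a rough sanity check on that figure: assuming something like the 128KB per-request maximum mentioned above for each fully buffered connection, 75,000 × 128KB ≈ 9.8GB, which is indeed in the 10GB ballpark.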
We periodically close 'discussion' issues that have not been updated in a long period of time. We apologize if this causes any inconvenience. We ask that if you are still encountering an issue, please log a new issue with updated information and we will investigate.
Hi
I am playing with performance tests for my app. It uses AspNet.Core + Kestrel ver. 1.1.2. The response size is pretty big, on average about 2MB, so the app takes a lot of memory during testing. What is strange is that it doesn't free the memory even after testing is completed. The profiler shows a large number of byte arrays referenced by MemoryChunk and MemoryPool.
Is there a way to reduce the pool size when the app is in a standby state? Maybe some configs exist? What is the common strategy for Kestrel in this case? Should it keep the allocated bytes forever?
Also, the app throws OutOfMemory from time to time. Is there some kind of limit for the pool size? I would prefer to drop connections rather than lose the whole app under huge load.
Thank you in advance.
Ihor
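Not part of the original thread, but since the question above asks about configs: a minimal sketch of where per-connection buffering limits can be set in later Kestrel versions (property availability and defaults vary by version, so verify them against the version actually in use; Startup stands in for the app's own startup class):

```csharp
// Sketch only: KestrelServerOptions.Limits is available in later Kestrel
// versions; check whether these properties exist on 1.1.2 before relying on them.
var host = new WebHostBuilder()
    .UseKestrel(options =>
    {
        // Cap how much response data Kestrel buffers per connection before
        // WriteAsync calls start waiting (back pressure).
        options.Limits.MaxResponseBufferSize = 64 * 1024;
        // Cap buffering of incoming request data as well.
        options.Limits.MaxRequestBufferSize = 1024 * 1024;
    })
    .UseStartup<Startup>()
    .Build();
```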