Memory leak #15173
Comments
My first guess would be object tracking, based on the controller code: `var entities = await Context.MemTestItems.Where(x => x.Created >= fromDate).ToListAsync();` Have you tried using `AsNoTracking()`?
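For illustration, a no-tracking version of that query might look like the sketch below. This is not the repro's code; the surrounding method shape is assumed, and only the `MemTestItems`/`Created` names come from the snippet above.

```csharp
// Sketch only: same query with change tracking disabled. AsNoTracking() keeps
// the materialized entities out of the DbContext change tracker, so they can be
// collected as soon as the request ends.
// Requires: using Microsoft.EntityFrameworkCore; (for AsNoTracking/ToListAsync)
public async Task<List<MemTestItem>> GetItemsAsync(DateTime fromDate)
{
    return await Context.MemTestItems
        .AsNoTracking()
        .Where(x => x.Created >= fromDate)
        .ToListAsync();
}
```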
Also, looking at the … I recommend trying to call …
The best default is Scoped. You don't have to register the context as transient.
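For reference, `AddDbContext` already registers the context with a scoped lifetime by default. A minimal sketch (context and connection-string names are assumed, not taken from the repo):

```csharp
// Sketch: the default AddDbContext registration is scoped, i.e. one context
// instance per request, disposed when the request scope ends.
public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<MemTestContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("MemTest")));

    // Equivalent to spelling the lifetime out explicitly:
    // services.AddDbContext<MemTestContext>(o => /* ... */, ServiceLifetime.Scoped);

    services.AddMvc();
}
```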
Try setting:

```xml
<PropertyGroup>
  <ServerGarbageCollection>false</ServerGarbageCollection>
</PropertyGroup>
```
We've (@blarsern is my colleague) just tried the sample on another Azure subscription (but with the same app plan, S1). Then we get the following result. Please note that the error is not present when we remove the call to EF. This indicates that the error is somehow related to the combination of EF and Azure subscriptions.
@blarsern Does this only happen with ASPNETCORE_ENVIRONMENT set to Development?
@ajcvickers No, this problem was first found in production, so I would not think it is related to ASPNETCORE_ENVIRONMENT = Development |
@blarsern @larserikfinholt Another couple of suggestions for things to check:
AppInsights is disabled in both places. It's the same project being deployed, so it's the same patch version 2.2.3.
@blarsern Could you create a version of the repro that only uses ADO.NET APIs (e.g. SqlConnection, SqlCommand, SqlDataReader) to execute the same SQL, and run it against the app service plans that show the symptoms? This could help narrow things down.
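Such an EF-free repro might look roughly like this. It is only a sketch under assumptions: the table and column names are taken from the query quoted earlier, and the connection string is supplied by the caller.

```csharp
// Sketch of an ADO.NET-only version of the test query, bypassing EF entirely.
using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

public static class AdoNetRepro
{
    public static async Task<int> CountItemsAsync(string connectionString, DateTime fromDate)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT * FROM MemTestItems WHERE Created >= @fromDate", connection))
        {
            command.Parameters.AddWithValue("@fromDate", fromDate);
            await connection.OpenAsync();

            var rows = 0;
            using (var reader = await command.ExecuteReaderAsync())
            {
                while (await reader.ReadAsync())
                    rows++;   // materialize every row, roughly like ToListAsync()
            }
            return rows;
        }
    }
}
```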
Can you share the dump?
Memory dumps: the first one is some time after 8k requests.
@blarsern You are not the only one experiencing this; I was on my way here today to open an issue about it. We see similar results on AWS ECS, with an increase in memory usage on every query request. It reaches its peak allocated memory and then the task is force-terminated. We run Docker containers based on microsoft/dotnet:2.1-aspnetcore-runtime with a 200 MB soft limit and a 400 MB hard limit. We have tried the following:
Are there any temporary workarounds for this before 3.0? While it is not completely critical (the services restart themselves), it is causing massive headaches and concern under load. Left long enough the GC does appear to release the memory, but only after a substantial amount of time, and running even synchronous requests will cause the memory to 'overflow'. The only thing I can think of is an MVC filter keeping the context alive, but it is registered with the default scoped lifetime, so it should be disposed after every request?
@roji Did you identify the root cause?
@ErikEJ I haven't had any time to look at this yet (busy times), but plan to in the coming days...
Just a follow-up: we have a support case going on, case ID 119031526000920. During analysis of the dumps they found a huge amount of diagnostic listeners which were not released.

If I send 500 requests to the test project locally and take a dump, this is what I get in VS: If I remove the DbContext from the controller I get this (also after 500 requests): If I send 40,000 requests, I get 40,000 diagnostic listeners.

I have tried turning logging off with this code: It seems there is no way to turn off the diagnostic listener? This leak is one per request, and it's reproducible locally. Can you look into why this is leaking?
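The code referenced above was not captured in this thread. For orientation only, a typical attempt at muting EF Core 2.2 logging might look like the following sketch (not necessarily what was actually tried). It silences `ILogger` output, but it does not stop EF from creating its `DiagnosticListener` plumbing internally, which is the object seen accumulating in the dumps.

```csharp
// Sketch only: plug a no-op logger factory into the context options.
// Requires Microsoft.EntityFrameworkCore and
// Microsoft.Extensions.Logging.Abstractions (for NullLoggerFactory).
services.AddDbContext<MemTestContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("MemTest"))
           .UseLoggerFactory(NullLoggerFactory.Instance));   // no ILogger output
```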
Do you get this leak if you turn off IntelliTrace in Visual Studio?
@blarsern Thanks for the added info. Can you confirm whether the listener leak also occurs only in the one specific Azure app plan, as mentioned in #15173 (comment)?
@davidfowl And thank you for suggesting this, because: @roji Note, a working app service plan means there is no self-increasing memory leak.
@blarsern This could be related to how logging is configured with EF. Do you have any calls to …?
@ajcvickers And my post from yesterday shows code that was supposed to turn off logging completely.
@blarsern Is the link above (GitHub MemTestRepo) still the code to repro the listener leak?
@roji Explain this test then, 1 million requests: Result, memory usage: So after 1 million requests I have 1 million diagnostic listeners in memory, and the memory has increased. The leak isn't massive, but it leaks alright. Anyway, can this problem be fixed? Time schedule?
@roji @blarsern So the stack traces: the first one (happens only once):

All later ones (repeated for every request):
@yyjdelete That constructor should not be called every time a context instance is created. It should usually be called once per application. The reason is that EF should only build its internal service provider once, and that singleton service should then be re-used. This most often happens when something pathologically causes the internal service provider to be re-built for every context instance. However, the repro code posted does not do this, at least as far as I can tell. So it's not that line of code that is, in and of itself, wrong; it's whatever is causing the instance to be created multiple times. That's assuming that this is the line of code that is creating these instances. Given that this is not an EF type, it could be something else that is creating the instances.
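As an illustration of the "pathological rebuild" pattern described above (this is not the repro's code, just a commonly cited example): handing EF a freshly constructed dependency such as a logger factory on every options evaluation makes each `DbContextOptions` look different, so EF builds a new internal service provider per context instance. The `MemTestContext`, connection-string name, and `Configuration` property are assumptions.

```csharp
// Illustration only (NOT the repro code). Requires Microsoft.Extensions.Logging.
private static readonly ILoggerFactory SharedLoggerFactory = new LoggerFactory();

public void ConfigureServices(IServiceCollection services)
{
    // Problematic: the options lambda runs per scope, so each context sees a
    // distinct LoggerFactory and EF rebuilds its internal service provider.
    // services.AddDbContext<MemTestContext>(o =>
    //     o.UseSqlServer(connectionString).UseLoggerFactory(new LoggerFactory()));

    // Safe: one shared factory, so the internal provider is built once and cached.
    services.AddDbContext<MemTestContext>(o =>
        o.UseSqlServer(Configuration.GetConnectionString("MemTest"))
         .UseLoggerFactory(SharedLoggerFactory));
}
```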
@ajcvickers |
@yyjdelete I am not able to reproduce this. I see that code run on the first request, and then not again. In fact, with the repro code provided, …
To be clear, I am not claiming that you don't have a leak. I'm only saying that I've managed to see the spurious instances of DiagnosticListener being allocated, but in my repro they were collected by the GC. I intend to sit down and dive further into this tomorrow morning, so please hang tight for a bit more time. In the meantime, can you please confirm that you're still seeing this only on one specific Azure subscription and are unable to reproduce the leak anywhere else? (I asked this above in #15173 (comment) but never got an answer.)
@roji The problem is, my support case regarding the app service plan is somewhat halted because we need to get the diagnostic listener issue fixed first. Hopefully there is some crazy magic going on when the diagnostic listener leaks in Azure, so fixing this will fix our main problem with the self-increasing memory leak.
@ajcvickers
When IsConfigured is called, it applies all services. This caused a DiagnosticListener to get instantiated on each DbContext instantiation, and since it wasn't disposed it caused a leak. Fixes dotnet#15173
OK, I've looked into this and I can confirm that essentially what @yyjdelete wrote above is correct; thanks for your analysis. Some additional details: commit 188dbf4 changed the … I've opened #16046 to fix the allocation issue (@ajcvickers has also added the consider-for-servicing label; hopefully that's the right process). The workaround is currently to avoid using IsConfigured … Note that regardless of the leak, …
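Putting the analysis together, the trigger is roughly the following shape; this is an assumed sketch based on the commit message above, not the exact repro code, and `UseSqlServer("...")` is a hypothetical fallback.

```csharp
// Assumed shape of the triggering pattern (per the linked commit message):
// reading IsConfigured in OnConfiguring applies all options extensions, which
// on 2.2.x instantiates a new DiagnosticListener for every DbContext instance,
// and those listeners are never disposed.
public class MemTestContext : DbContext
{
    public MemTestContext(DbContextOptions<MemTestContext> options) : base(options) { }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        if (!optionsBuilder.IsConfigured)            // <-- the per-instance trigger
        {
            optionsBuilder.UseSqlServer("...");      // hypothetical fallback configuration
        }
    }
}

// Interim workaround until the fix ships: rely solely on the options injected
// via AddDbContext and remove the OnConfiguring/IsConfigured fallback entirely.
```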
When IsConfigured is called, it applies all services. This caused a DiagnosticListener to get instantiated on each DbContext instantiation, and since it wasn't disposed it caused a leak. Fixes #15173
PR for 2.2: #16047
@blarsern thanks for confirming! And thanks also @yyjdelete, your analysis and proposed solution were spot on.
@yyjdelete Agree with @roji. You were pointing at the root cause the whole time; I just didn't see it. Great work and many thanks for finding this!
We have been experiencing massive memory leaks in all our microservices after upgrading to .NET Core 2.2.3 and EF Core 2.2.3 (from 2.0).
And we have narrowed it down to the simplest app which still leaks.
It's a new .NET Core 2.2 Web API: one context, one table, and the context registered in Startup.
One simple controller with one simple context query.
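For readers without access to the repo, the repro is roughly this shape, reconstructed from the description and the query quoted in the comments; the names and the date filter are assumptions, not copied from MemTestRepo.

```csharp
// Reconstructed sketch of the repro controller: a single GET action that runs
// one EF query per request.
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

[Route("api/[controller]")]
[ApiController]
public class MemTestController : ControllerBase
{
    private readonly MemTestContext Context;

    public MemTestController(MemTestContext context) => Context = context;

    // Matches the load-tested endpoint api/memtest/test/{id}
    [HttpGet("test/{id}")]
    public async Task<ActionResult<int>> Test(long id)
    {
        var fromDate = DateTime.UtcNow.AddDays(-1);   // placeholder filter
        var entities = await Context.MemTestItems
            .Where(x => x.Created >= fromDate)
            .ToListAsync();
        return entities.Count;
    }
}
```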
Steps to reproduce
Download this:
GitHub MemTestRepo
Fix the connection string.
Create a web app in Azure.
Set ASPNETCORE_ENVIRONMENT to Development.
Enable Always On for the web app.
Create the DB.
Deploy to Azure.
Now we are running a custom load test with 20 clients, 100 requests per client, repeated 4 times: 8k requests in total.
To this endpoint: api/memtest/test/1111111
At the red line: 8k requests (24k requests in total).
The memory keeps rising until Azure restarts it:
(After getting OutOfMemory exceptions)
Another one:
If we remove the context from the controller and the AddDbContext call in Startup, and instead do a simple 1 MB array allocation with no DB access, it looks like this after 32k requests:
Another one:
What are we missing?
Azure problem or EF problem?
I can't believe we are the only ones experiencing this... :)
We have memory dumps etc., but it's not so easy to figure out what's causing this.