Aggressive GC: No Memory Decommit until 2nd GC (LOH only?) #78679
Description

Documentation of GCCollectionMode.Aggressive indicates that the GC should decommit as much memory as possible. However, when combined with use of ArrayPool, I cannot get the memory to be decommitted without running two GCs.

Reproduction Steps

Code to reproduce:

using System.Buffers;
using System.Runtime;
using System.Runtime.CompilerServices;

Console.WriteLine("Hello, World!");

// Allocate a bunch of memory.
Console.WriteLine($"Commit Before: {GC.GetGCMemoryInfo().TotalCommittedBytes}");
Allocate();
Console.WriteLine($"Commit After: {GC.GetGCMemoryInfo().TotalCommittedBytes}");

// Reset
Console.WriteLine("Resetting ArrayPool.");
ArrayRental.Reset();

// Working set should decrease after this one.
Console.WriteLine("Working set should decrease after this GC.");
Console.ReadLine();
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
ClearWorkingSetShouldWorks(); // Replace with ClearWorkingSetWorks for expected result.
Console.WriteLine($"Commit Clean: {GC.GetGCMemoryInfo().TotalCommittedBytes}");

// Stall
Console.ReadLine();

void ClearWorkingSetShouldWorks() => GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true);

void ClearWorkingSetWorks()
{
    GC.AddMemoryPressure(nint.MaxValue);
    GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true);
    GC.Collect(0, GCCollectionMode.Forced, false);
    GC.RemoveMemoryPressure(nint.MaxValue);
}

[MethodImpl(MethodImplOptions.NoInlining)]
void Allocate()
{
    for (int x = 4096; x < 67108864; x *= 2)
    for (int y = 0; y < 4; y++)
    {
        using var rental = new ArrayRental(x);
        Console.WriteLine($"Allocated {x}");
    }
}

public struct ArrayRental : IDisposable
{
    static ArrayRental() => Reset();

    private static ArrayPool<byte> _dataPool = null!;
    private byte[] _data;

    [MethodImpl(MethodImplOptions.NoInlining)]
    public static void Reset() => _dataPool = ArrayPool<byte>.Create(67108864, 1); // 64MB

    public ArrayRental(int count) => _data = _dataPool.Rent(count);

    // Clear on return to ensure the array is actually touched and counts towards the working set.
    public void Dispose() => _dataPool.Return(_data, true);
}

Expected behavior

Running an aggressive GC causes memory to be decommitted, leading to a working set decrease (i.e. a reduction in GC.GetGCMemoryInfo().TotalCommittedBytes).

Actual behavior

Memory is not decommitted without running one of the following hacks (or variations thereof).

Double aggressive GC:

GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true);
GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true);

or an aggressive GC under pressure followed by any type of second GC:

GC.AddMemoryPressure(nint.MaxValue);
GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true);
GC.Collect(0, GCCollectionMode.Forced, false); // Invoke again to trigger some heuristic in the GC telling it there's some pressure going on.
GC.RemoveMemoryPressure(nint.MaxValue);

Regression?

I'm not entirely sure if this is intended behaviour or not, so I'm posting for feedback; the folks at #allow-unsafe-blocks (formerly #lowlevel on the C# Discord) didn't appear to know off the top of their heads either.

This seems to only happen with LOH allocations; when testing with only allocations smaller than 64K, GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true) works as expected.

Note: This is not an ArrayPool related issue; it still reproduces if you replace the ArrayPool with e.g. a Dictionary.

Known Workarounds

Aggressive GC under memory pressure followed by any GC at all:

GC.AddMemoryPressure(nint.MaxValue);
GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true);
GC.Collect(0, GCCollectionMode.Forced, false);
GC.RemoveMemoryPressure(nint.MaxValue);

This leads to as much of a working-set reduction as possible.

Configuration

Most likely not configuration specific.

Other information

No response

Comments

Tagging subscribers to this area: @dotnet/gc
Tagging subscribers to this area: @dotnet/area-system-buffers
Reading from the description - the behavior you described is by design. From the GC's perspective, buffers returned to the ArrayPool are still considered allocated, and as such the GC cannot collect them. Once the first GC is over, the ArrayPool receives an event that a Gen2 GC happened and does some trimming; that's why you see the working set reduce on the second GC. That being said, I can see why this behavior is less than ideal, and I am looping in the ArrayPool owners to see if there's something we can do about it.
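For readers unfamiliar with the mechanism described above: the pool trims its cached arrays in response to a GC notification delivered via a finalizer-based callback. Below is a minimal sketch of that re-registering-finalizer pattern, assuming illustrative names (GcTrimCallback, TrimPool) rather than the runtime's actual types; the runtime's helper also effectively limits firing to Gen2 collections once the callback object has been promoted.

using System;

// Illustrative sketch only - not the runtime's actual implementation.
// A finalizable object whose finalizer runs after a GC has collected it;
// by re-registering for finalization it keeps firing on later GCs.
internal sealed class GcTrimCallback
{
    private readonly Func<bool> _callback;

    private GcTrimCallback(Func<bool> callback) => _callback = callback;

    // Register a callback; returning false from it stops further notifications.
    public static void Register(Func<bool> callback) => _ = new GcTrimCallback(callback);

    ~GcTrimCallback()
    {
        if (_callback())
            GC.ReRegisterForFinalize(this); // survive so we are notified of the next GC
    }
}

A pool could then call something like GcTrimCallback.Register(() => { TrimPool(); return true; });, where TrimPool is a hypothetical method that drops cached arrays.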
Please note, I opened this issue knowing about that trimming behaviour; I left the ArrayPool in the repro regardless. The same behaviour reproduces with a plain Dictionary standing in for the ArrayPool:

using System.Runtime;
using System.Runtime.CompilerServices;

// Allocate a bunch of memory.
Console.WriteLine($"Commit Before: {GC.GetGCMemoryInfo().TotalCommittedBytes}");
Allocate();
Console.WriteLine($"Commit After: {GC.GetGCMemoryInfo().TotalCommittedBytes}");

// Reset
Console.WriteLine("Resetting ArrayPool.");
Console.ReadLine();
ArrayRental.Reset();

// Working set should decrease after this one.
Console.WriteLine("Working set should decrease after this GC.");
Console.ReadLine();
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
ClearWorkingSetShouldWorks(); // Replace with ClearWorkingSetWorks for expected result.
Console.WriteLine($"Commit Clean: {GC.GetGCMemoryInfo().TotalCommittedBytes}");

// Stall
Console.ReadLine();

void ClearWorkingSetShouldWorks() => GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true);

void ClearWorkingSetWorks()
{
    GC.AddMemoryPressure(nint.MaxValue);
    GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true);
    GC.Collect(0, GCCollectionMode.Forced, false);
    GC.RemoveMemoryPressure(nint.MaxValue);
}

[MethodImpl(MethodImplOptions.NoInlining)]
void Allocate()
{
    for (int x = 4096; x < 67108864; x *= 2)
    for (int y = 0; y < 4; y++)
    {
        using var rental = new ArrayRental(x);
        Console.WriteLine($"Allocated {x}");
    }
}

public struct ArrayRental : IDisposable
{
    static ArrayRental() => Reset();

    private static Dictionary<int, byte[]> _dataPool = null!;
    private byte[] _data;

    [MethodImpl(MethodImplOptions.NoInlining)]
    public static void Reset() => _dataPool = new Dictionary<int, byte[]>();

    public ArrayRental(int count)
    {
        if (!_dataPool.TryGetValue(count, out _data))
        {
            _data = new byte[count];
            _dataPool[count] = _data;
        }
    }

    public void Dispose()
    {
        // Make sure the byte array is actually touched so it counts towards the private working set.
        for (int x = 0; x < _data.Length; x++)
            _data[x] = (byte)x;
    }
}
The dictionary scenario looks the same.
Removing it makes no difference in my testing.

Edit: For further simplicity, you can cut the repro down to just the following:

[MethodImpl(MethodImplOptions.NoInlining)]
void Allocate()
{
    for (int x = 4096; x < 67108864; x *= 2)
    for (int y = 0; y < 4; y++)
    {
        var array = new byte[x];
        for (int z = 0; z < array.Length; z++)
            array[z] = (byte)z; // write to ensure touched by RAM / in private working set
        Console.WriteLine($"Allocated {x}");
    }
}

In retrospect, I should probably have done this in the opening post. Adjust the loop bounds to stay below the LOH threshold for non-LOH-only allocations or start above it to allocate LOH only; you will notice that after running the aggressive GC, only the LOH-only variant leaves the memory committed.
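For illustration, a hypothetical pair of loop variants might look like the following. The bounds here are assumptions based on the default LOH threshold of 85,000 bytes, not the exact values from the original comment:

using System;

// Variant A: non-LOH only - every array stays below the LOH threshold.
for (int x = 4096; x < 85_000; x *= 2)
    AllocateAndTouch(x);

// Variant B: LOH only - start above the threshold so every array lands on the LOH.
for (int x = 131_072; x < 67_108_864; x *= 2)
    AllocateAndTouch(x);

static void AllocateAndTouch(int size)
{
    var array = new byte[size];
    for (int i = 0; i < array.Length; i++)
        array[i] = (byte)i; // touch the pages so they count towards the working set
    Console.WriteLine($"Allocated {size}");
}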
Thank you, this makes the issue so much easier to investigate.
I experimented with the simplified repro, and here is what I get:

// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.

using System;
using System.Runtime.CompilerServices;

namespace CoreLab
{
    public static class Program
    {
        public static void Main(string[] args)
        {
            // Console.WriteLine("Hello, World!");
            GC.Collect();
            Console.WriteLine($"Commit Before: {GC.GetGCMemoryInfo().TotalCommittedBytes}");
            Allocate();
            Console.WriteLine($"Commit After: {GC.GetGCMemoryInfo().TotalCommittedBytes}");
            GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true);
            Console.WriteLine($"Commit After: {GC.GetGCMemoryInfo().TotalCommittedBytes}");
        }

        [MethodImpl(MethodImplOptions.NoInlining)]
        private static void Allocate()
        {
            for (int x = 4096; x < 67108864; x *= 2)
            {
                for (int y = 0; y < 4; y++)
                {
                    var array = new byte[x];
                    for (int z = 0; z < array.Length; z++)
                        array[z] = (byte)z; // write to ensure touched by RAM / in private working set
                    Console.WriteLine($"Allocated {x}");
                }
            }
        }
    }
}
The execution result looks okay to me - the memory is indeed decommitted. Are you seeing something else?
With […]
With […]
I see:
[…]
Which matches your numbers more closely.

Host Info: […]
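As a side note, when comparing runs like this it can help to print the GC flavour and commit figures alongside the host details. A small sketch using only standard APIs (and making no assumption about which configurations were compared above) might look like:

using System;
using System.Runtime;
using System.Runtime.InteropServices;

// Print enough context to compare GC behaviour across configurations and hosts.
Console.WriteLine($"Runtime:   {RuntimeInformation.FrameworkDescription}");
Console.WriteLine($"OS:        {RuntimeInformation.OSDescription} ({RuntimeInformation.OSArchitecture})");
Console.WriteLine($"Server GC: {GCSettings.IsServerGC}");
Console.WriteLine($"Latency:   {GCSettings.LatencyMode}");
Console.WriteLine($"Committed: {GC.GetGCMemoryInfo().TotalCommittedBytes} bytes");
Console.WriteLine($"Heap size: {GC.GetGCMemoryInfo().HeapSizeBytes} bytes");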
Oops, somehow I got carried away and forgot to respond 😅, that's embarrassing. In any case, I can confirm that the PR in question resolves the issue; I tested with the CI build from that PR.