
Aggressive GC: No Memory Decommit until 2nd GC (LOH only?) #78679

Closed
Sewer56 opened this issue Nov 22, 2022 · 13 comments

Comments

@Sewer56
Contributor

Sewer56 commented Nov 22, 2022

Description

The documentation for GCCollectionMode.Aggressive says the GC should decommit as much memory as possible.

However, when combined with ArrayPool, I cannot get the memory decommitted without running two GCs.

Reproduction Steps

Code to reproduce:

using System.Buffers;
using System.Runtime;
using System.Runtime.CompilerServices;

Console.WriteLine("Hello, World!");

// Allocate a bunch of memory.
Console.WriteLine($"Commit Before: {GC.GetGCMemoryInfo().TotalCommittedBytes}");
Allocate();
Console.WriteLine($"Commit After: {GC.GetGCMemoryInfo().TotalCommittedBytes}");

// Reset
Console.WriteLine("Resetting ArrayPool.");
ArrayRental.Reset();

// Working set should decrease after this one.
Console.WriteLine("Working set should decrease after this GC.");
Console.ReadLine();
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
ClearWorkingSetShouldWorks(); // Replace with ClearWorkingSetWorks for expected result.
Console.WriteLine($"Commit Clean: {GC.GetGCMemoryInfo().TotalCommittedBytes}");

// Stall
Console.ReadLine();

void ClearWorkingSetShouldWorks() => GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true);

void ClearWorkingSetWorks() 
{
    GC.AddMemoryPressure(nint.MaxValue);
    GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true); 
    GC.Collect(0, GCCollectionMode.Forced, false);
    GC.RemoveMemoryPressure(nint.MaxValue);
}

[MethodImpl(MethodImplOptions.NoInlining)]
void Allocate()
{
    for (int x = 4096; x < 67108864; x *= 2)
    for (int y = 0; y < 4; y++)
    {
        using var rental = new ArrayRental(x);
        Console.WriteLine($"Allocated {x}");
    }
}

public struct ArrayRental : IDisposable
{
    static ArrayRental() => Reset();

    private static ArrayPool<byte> _dataPool = null!;
    private byte[] _data;

    [MethodImpl(MethodImplOptions.NoInlining)]
    public static void Reset() => _dataPool = ArrayPool<byte>.Create(67108864, 1); // 64MB
    
    public ArrayRental(int count) => _data = _dataPool.Rent(count);

    public void Dispose() => _dataPool.Return(_data, true); // Clear to ensure array is actually touched and counts towards working set.
}

Expected behavior

Running an aggressive GC causes memory to be decommitted, leading to a working set decrease
(i.e. a reduction in GC.GetGCMemoryInfo().TotalCommittedBytes).

Actual behavior

Memory is not decommitted without running one of the following hacks (or variations thereof):

Double Aggressive GC:

GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true); 
GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true); 

or

Aggressive GC under memory pressure, followed by a second GC of any type:

GC.AddMemoryPressure(nint.MaxValue);
GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true); 
GC.Collect(0, GCCollectionMode.Forced, false); // Invoke again to trigger the GC heuristic that reacts to memory pressure
GC.RemoveMemoryPressure(nint.MaxValue);

Regression?

I'm not entirely sure whether this is intended behaviour, so I'm posting for feedback; the folks at #allow-unsafe-blocks (formerly #lowlevel on the C# Discord) didn't appear to know off the top of their heads either.

This seems to only happen with LOH allocations; when testing with only allocations smaller than 64K, GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true) works as expected.

Note: This is not an ArrayPool-related issue; it still reproduces if you replace the ArrayPool with e.g. a Dictionary.

Known Workarounds

See the two hacks under "Actual behavior" above; either reduces the working set as expected.
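For convenience, the double-collect workaround can be wrapped in a helper. This is a sketch built from the calls shown in this issue; the class and method names are mine, not a runtime API:

```csharp
using System;
using System.Runtime;

static class GcHelpers
{
    // Sketch of the double aggressive-GC workaround from this issue.
    public static void AggressiveDecommit()
    {
        GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;

        // First aggressive collection: frees unreachable objects, but caches that
        // trim on Gen2 GC notifications (e.g. ArrayPool) only react after it ends.
        GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, blocking: true, compacting: true);

        // Second collection: reclaims what those caches just released and gives
        // the GC another chance to decommit memory retained the first time.
        GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, blocking: true, compacting: true);
    }
}
```

Note that GCCollectionMode.Aggressive requires .NET 7 or later.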

Configuration

  • .NET 7.0.0, Win11 22H2, x64

Most likely not configuration-specific.

Other information

No response

@ghost ghost added the untriaged New issue has not been triaged by the area owner label Nov 22, 2022
@ghost

ghost commented Nov 22, 2022

Tagging subscribers to this area: @dotnet/gc
See info in area-owners.md if you want to be subscribed.

Issue Details

(duplicate of the opening post; omitted)

Author: Sewer56
Assignees: -
Labels: area-GC-coreclr
Milestone: -

@ghost

ghost commented Dec 3, 2022

Tagging subscribers to this area: @dotnet/area-system-buffers
See info in area-owners.md if you want to be subscribed.

Issue Details

(duplicate of the opening post; omitted)

Author: Sewer56
Assignees: -
Labels: area-System.Buffers, untriaged
Milestone: -

@cshung
Copy link
Member

cshung commented Dec 3, 2022

Reading the description, the behavior you described is by design. From the GC's perspective, buffers returned to the ArrayPool are still reachable, so the GC cannot collect them. Once the first GC is over, the ArrayPool receives an event that a Gen2 GC happened and does some trimming; that's why you see the working set reduce on the second GC.

That being said, I can see why this behavior is less than ideal, and I am looping in the ArrayPool owners to see if there's something we can do about it.
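The trimming mechanism described above can be illustrated with the classic finalizer-based GC-callback pattern. This is a simplified, assumed sketch of how a pool can learn that a collection happened, not the runtime's actual implementation:

```csharp
using System;

internal sealed class GcCallbackSketch
{
    private readonly Func<bool> _callback;

    private GcCallbackSketch(Func<bool> callback) => _callback = callback;

    // The instance is deliberately not rooted anywhere, so the next GC that
    // collects its generation will run the finalizer below.
    public static void Register(Func<bool> callback) => _ = new GcCallbackSketch(callback);

    ~GcCallbackSketch()
    {
        // Fires after a collection completes; a true return re-registers the
        // callback for the next collection by allocating a fresh instance.
        if (_callback())
            _ = new GcCallbackSketch(_callback);
    }
}
```

Because the callback only runs after a collection finishes, arrays a pool drops in its callback are not reclaimed until the next collection, which matches the two-GC behavior observed in this issue.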

@jkotas
Member

jkotas commented Dec 3, 2022

@cshung There is nothing ArrayPool can do about this without introducing a new caching policy API like what is discussed in #53895.

@ghost

ghost commented Dec 3, 2022

Tagging subscribers to this area: @dotnet/gc
See info in area-owners.md if you want to be subscribed.

Issue Details

(duplicate of the opening post; omitted)

Author: Sewer56
Assignees: -
Labels: area-System.Buffers, area-GC-coreclr, untriaged
Milestone: -

@Sewer56
Contributor Author

Sewer56 commented Dec 3, 2022

@cshung

Please note, I opened this issue knowing that ArrayPool has this behaviour by design (from reading the source), and I explicitly stated in the opening post:

> This is not an ArrayPool-related issue; it still reproduces if you replace the ArrayPool with e.g. a Dictionary.

I left the ArrayPool in place in the repro because it represents a more realistic usage scenario. That said, here is a version that uses a Dictionary instead.

using System.Runtime;
using System.Runtime.CompilerServices;

// Allocate a bunch of memory.
Console.WriteLine($"Commit Before: {GC.GetGCMemoryInfo().TotalCommittedBytes}");
Allocate();
Console.WriteLine($"Commit After: {GC.GetGCMemoryInfo().TotalCommittedBytes}");

// Reset
Console.WriteLine("Resetting pool.");
Console.ReadLine();
ArrayRental.Reset();

// Working set should decrease after this one.
Console.WriteLine("Working set should decrease after this GC.");
Console.ReadLine();
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
ClearWorkingSetShouldWorks(); // Replace with ClearWorkingSetWorks for expected result.
Console.WriteLine($"Commit Clean: {GC.GetGCMemoryInfo().TotalCommittedBytes}");

// Stall
Console.ReadLine();

void ClearWorkingSetShouldWorks() => GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true);

void ClearWorkingSetWorks() 
{
    GC.AddMemoryPressure(nint.MaxValue);
    GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true); 
    GC.Collect(0, GCCollectionMode.Forced, false);
    GC.RemoveMemoryPressure(nint.MaxValue);
}

[MethodImpl(MethodImplOptions.NoInlining)]
void Allocate()
{
    for (int x = 4096; x < 67108864; x *= 2)
    for (int y = 0; y < 4; y++)
    {
        using var rental = new ArrayRental(x);
        Console.WriteLine($"Allocated {x}");
    }
}

public struct ArrayRental : IDisposable
{
    static ArrayRental() => Reset();

    private static Dictionary<int, byte[]> _dataPool = null!;
    private byte[] _data;

    [MethodImpl(MethodImplOptions.NoInlining)]
    public static void Reset() => _dataPool = new Dictionary<int, byte[]>();
    
    public ArrayRental(int count)
    {
        if (!_dataPool.TryGetValue(count, out _data))
        {
            _data = new byte[count];
            _dataPool[count] = _data;
        }
    }

    public void Dispose()
    {
        // Make sure our byte array is actually touched by RAM so counts towards private working set.
        for (int x = 0; x < _data.Length; x++)
            _data[x] = (byte)x;
    }
}

@cshung
Member

cshung commented Dec 3, 2022

The dictionary scenario looks the same. ArrayRental is IDisposable, so you are explicitly asking the GC to keep it around so that its destructor gets a chance to run after the GC determines it is no longer reachable. What would happen if you make ArrayRental not an IDisposable object?

@Sewer56
Contributor Author

Sewer56 commented Dec 3, 2022

> The dictionary scenario looks the same. ArrayRental is IDisposable, so you are explicitly asking the GC to keep it around so that its destructor gets a chance to run after the GC determines it is no longer reachable. What would happen if you make ArrayRental not an IDisposable object?

Removing it makes no difference in my testing. ArrayRental is a value type used on the stack of Allocate(), which is marked NoInlining; it shouldn't be rooted by the GC after Allocate returns.


Edit:

For further simplicity, you can cut ArrayRental out entirely and still reproduce the issue with just a simple array allocation.

[MethodImpl(MethodImplOptions.NoInlining)]
void Allocate()
{
    for (int x = 4096; x < 67108864; x *= 2)
    for (int y = 0; y < 4; y++)
    {
        var array = new byte[x];
        for (int z = 0; z < array.Length; z++)
            array[z] = (byte)z; // write to ensure touched by RAM / in private working set
        
        Console.WriteLine($"Allocated {x}");
    }
}

In retrospect I should probably have done this in the opening post.
If you tweak the parameters a bit, e.g. use

    for (int x = 4096; x < 65536; x *= 2)
    for (int y = 0; y < 1024; y++)

for non-LOH allocations only, and

    for (int x = 65536 * 2; x < 65536 * 1024; x *= 2)
    for (int y = 0; y < 4; y++)

for LOH allocations only, you will notice that after running ClearWorkingSetShouldWorks, the private working set is fully cleared in the first case but only partially cleared in the second case (LOH allocations), despite GCCollectionMode.Aggressive being set.
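As a side note, the SOH/LOH boundary can be checked directly with GC.GetGeneration. A quick sketch, assuming the default LOH threshold of 85,000 bytes (slightly above the 64 KiB boundary used in the loops above):

```csharp
using System;

// Arrays at or above the LOH threshold (85,000 bytes by default) are
// allocated directly on the large object heap, which the GC reports as
// the oldest generation.
var small = new byte[80_000];
var large = new byte[90_000];

Console.WriteLine(GC.GetGeneration(small)); // typically 0 (small object heap)
Console.WriteLine(GC.GetGeneration(large)); // GC.MaxGeneration (large object heap)
```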

@cshung
Member

cshung commented Dec 3, 2022


Thank you, this makes the issue so much easier to investigate.

@cshung
Member

cshung commented Dec 5, 2022

I experimented with the simplified repro, and here is what I get:

// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.

using System;
using System.Runtime.CompilerServices;

namespace CoreLab
{
    public static class Program
    {
        public static void Main(string[] args)
        {
            // Console.WriteLine("Hello, World!");
            GC.Collect();
            Console.WriteLine($"Commit Before: {GC.GetGCMemoryInfo().TotalCommittedBytes}");
            Allocate();
            Console.WriteLine($"Commit After: {GC.GetGCMemoryInfo().TotalCommittedBytes}");
            GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true);
            Console.WriteLine($"Commit After: {GC.GetGCMemoryInfo().TotalCommittedBytes}");
        }

        [MethodImpl(MethodImplOptions.NoInlining)]
        private static void Allocate()
        {
            for (int x = 4096; x < 67108864; x *= 2)
            {
                for (int y = 0; y < 4; y++)
                {
                    var array = new byte[x];
                    for (int z = 0; z < array.Length; z++)
                        array[z] = (byte)z; // write to ensure touched by RAM / in private working set

                    Console.WriteLine($"Allocated {x}");
                }
            }
        }
    }
}
Commit Before: 1921024
Allocated 4096
Allocated 4096
Allocated 4096
Allocated 4096
Allocated 8192
Allocated 8192
Allocated 8192
Allocated 8192
Allocated 16384
Allocated 16384
Allocated 16384
Allocated 16384
Allocated 32768
Allocated 32768
Allocated 32768
Allocated 32768
Allocated 65536
Allocated 65536
Allocated 65536
Allocated 65536
Allocated 131072
Allocated 131072
Allocated 131072
Allocated 131072
Allocated 262144
Allocated 262144
Allocated 262144
Allocated 262144
Allocated 524288
Allocated 524288
Allocated 524288
Allocated 524288
Allocated 1048576
Allocated 1048576
Allocated 1048576
Allocated 1048576
Allocated 2097152
Allocated 2097152
Allocated 2097152
Allocated 2097152
Allocated 4194304
Allocated 4194304
Allocated 4194304
Allocated 4194304
Allocated 8388608
Allocated 8388608
Allocated 8388608
Allocated 8388608
Allocated 16777216
Allocated 16777216
Allocated 16777216
Allocated 16777216
Allocated 33554432
Allocated 33554432
Allocated 33554432
Allocated 33554432
Commit After: 170180608
Commit After: 667648

The execution result looks okay to me - the memory is indeed decommitted. Are you seeing something else?

@Sewer56
Contributor Author

Sewer56 commented Dec 5, 2022

With GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true); alone, I see:

Commit Before: 1921024
(allocation log identical to the one above)
Commit After: 102150144
Commit After: 34226176

(screenshot)


With

GC.AddMemoryPressure(nint.MaxValue);
GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, true, true); 
GC.Collect(0, GCCollectionMode.Forced, false);
GC.RemoveMemoryPressure(nint.MaxValue);

I see:

Commit Before: 1921024
(allocation log identical to the one above)
Commit After: 102150144
Commit After: 671744

(screenshot)

Which matches your numbers more closely.


Host Info:

OS=Windows 11 (10.0.22621.819)
Intel Core i7-4790K CPU 4.00GHz (Haswell), 1 CPU, 8 logical and 4 physical cores
.NET SDK=7.0.100
  [Host]     : .NET 7.0.0 (7.0.22.51805), X64 RyuJIT AVX2

General OS memory stats at time of GC: (screenshot)

@cshung
Member

cshung commented Dec 6, 2022

@Sewer56, it looks like I was using an old build. With the released version I can reproduce your behavior, and I have a candidate fix here.

It would be great if you can try it out.

@Sewer56
Contributor Author

Sewer56 commented Dec 8, 2022

Oops, somehow I got carried away and forgot to respond 😅; that's embarrassing.

In any case, I can confirm the PR in question resolves the issue. I tested with the CI build from the PR, specifically CoreCLRProduct___windows_x64_release, using CoreRun.

Commit Before: 221184
(allocation log identical to the one above)
Commit After: 84779008
Commit After: 81920

@cshung cshung closed this as completed Dec 10, 2022
@ghost ghost removed the untriaged New issue has not been triaged by the area owner label Dec 10, 2022
@ghost ghost locked as resolved and limited conversation to collaborators Jan 9, 2023