Pathological GC time on M1 mac #48473

Closed
Keno opened this issue Jan 31, 2023 · 26 comments · Fixed by #48614 · May be fixed by JuliaLang/libuv#34
Labels: GC (Garbage collector), regression (Regression in behavior compared to a previous version), system:apple silicon (Affects Apple Silicon only (Darwin/ARM64) - e.g. M1 and other M-series chips)
Milestone: 1.9

Comments

@Keno (Member) commented Jan 31, 2023:

M1 mac:

category = "linked"
bench = "list.jl"
  No Changes to `~/GCBenchmarks/benches/serial/linked/Project.toml`
  No Changes to `~/GCBenchmarks/benches/serial/linked/Manifest.toml`
┌─────────┬────────────┬─────────┬───────────┬────────────┬──────────────┬───────────────────┬──────────┬────────────┐
│         │ total time │ gc time │ mark time │ sweep time │ max GC pause │ time to safepoint │ max heap │ percent gc │
│         │         ms │      ms │        ms │         ms │           ms │                us │       MB │          % │
├─────────┼────────────┼─────────┼───────────┼────────────┼──────────────┼───────────────────┼──────────┼────────────┤
│ minimum │      50297 │   48807 │     41368 │       7439 │         2172 │                 6 │     4034 │         97 │
│  median │      54143 │   52600 │     44078 │       8607 │         3089 │                 9 │     4048 │         97 │
│ maximum │      89575 │   88079 │     74315 │      13758 │         4959 │                27 │     4072 │         98 │
│   stdev │      11430 │   11436 │      9714 │       1845 │          928 │                 8 │       14 │          0 │
└─────────┴────────────┴─────────┴───────────┴────────────┴──────────────┴───────────────────┴──────────┴────────────┘

x86 server:

category = "linked"
bench = "list.jl"
  No Changes to `~/GCBenchmarks/benches/serial/linked/Project.toml`
  No Changes to `~/GCBenchmarks/benches/serial/linked/Manifest.toml`
┌─────────┬────────────┬─────────┬───────────┬────────────┬──────────────┬───────────────────┬──────────┬────────────┐
│         │ total time │ gc time │ mark time │ sweep time │ max GC pause │ time to safepoint │ max heap │ percent gc │
│         │         ms │      ms │        ms │         ms │           ms │                us │       MB │          % │
├─────────┼────────────┼─────────┼───────────┼────────────┼──────────────┼───────────────────┼──────────┼────────────┤
│ minimum │       7205 │    4740 │      3419 │       1320 │         1625 │                 6 │     2557 │         63 │
│  median │       7455 │    5015 │      3637 │       1369 │         1807 │                 7 │     2600 │         66 │
│ maximum │       8035 │    5657 │      4266 │       1416 │         2395 │                10 │     2622 │         69 │
│   stdev │        300 │     305 │       275 │         39 │          248 │                 2 │       20 │          2 │
└─────────┴────────────┴─────────┴───────────┴────────────┴──────────────┴───────────────────┴──────────┴────────────┘

For comparison, on other benchmarks, the M1 is faster:

category = "append"
bench = "append.jl"
  No Changes to `~/GCBenchmarks/benches/serial/append/Project.toml`
  No Changes to `~/GCBenchmarks/benches/serial/append/Manifest.toml`
┌─────────┬────────────┬─────────┬───────────┬────────────┬──────────────┬───────────────────┬──────────┬────────────┐
│         │ total time │ gc time │ mark time │ sweep time │ max GC pause │ time to safepoint │ max heap │ percent gc │
│         │         ms │      ms │        ms │         ms │           ms │                us │       MB │          % │
├─────────┼────────────┼─────────┼───────────┼────────────┼──────────────┼───────────────────┼──────────┼────────────┤
│ minimum │        888 │     127 │        52 │         73 │           30 │                 5 │     1483 │          8 │
│  median │        907 │     132 │        56 │         76 │           33 │                 7 │     1484 │          9 │
│ maximum │       1163 │     145 │        65 │         84 │           42 │                12 │     1485 │         10 │
│   stdev │        109 │       6 │         4 │          3 │            4 │                 2 │        0 │          1 │
└─────────┴────────────┴─────────┴───────────┴────────────┴──────────────┴───────────────────┴──────────┴────────────┘

vs

category = "append"
bench = "append.jl"
  No Changes to `~/GCBenchmarks/benches/serial/append/Project.toml`
  No Changes to `~/GCBenchmarks/benches/serial/append/Manifest.toml`
┌─────────┬────────────┬─────────┬───────────┬────────────┬──────────────┬───────────────────┬──────────┬────────────┐
│         │ total time │ gc time │ mark time │ sweep time │ max GC pause │ time to safepoint │ max heap │ percent gc │
│         │         ms │      ms │        ms │         ms │           ms │                us │       MB │          % │
├─────────┼────────────┼─────────┼───────────┼────────────┼──────────────┼───────────────────┼──────────┼────────────┤
│ minimum │       2305 │     108 │        84 │         24 │           48 │                 3 │     1483 │          1 │
│  median │       2664 │     112 │        87 │         25 │           49 │                 4 │     1484 │          2 │
│ maximum │       3442 │     139 │       102 │         40 │           59 │                 4 │     1524 │          2 │
│   stdev │        345 │      11 │         7 │          5 │            5 │                 1 │       16 │          0 │
└─────────┴────────────┴─────────┴───────────┴────────────┴──────────────┴───────────────────┴──────────┴────────────┘

Benchmarks are from https://github.com/JuliaCI/GCBenchmarks.
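
For readers unfamiliar with the suite, the linked benchmarks stress mark-heavy workloads: many small heap objects carrying pointers that the collector has to trace. A rough sketch of that kind of workload, in Julia (not the actual list.jl from GCBenchmarks; the type name and sizes are illustrative):

# Illustrative only: build a long singly linked list of small heap objects,
# forcing the GC to trace a pointer chain during marking.
mutable struct Node
    payload::Int
    next::Union{Node,Nothing}
end

function build_list(n::Int)
    head = Node(0, nothing)
    for i in 1:n
        # each iteration allocates a new node pointing at the old head
        head = Node(i, head)
    end
    return head
end

GC.gc()                       # start from a clean heap
@time build_list(10_000_000)  # a large share of the time here is GC work tracing the chain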

@vchuravy (Member):

I assume this is on master? cc: @d-netto

@gbaraldi (Member):

I'm not sure, but we're seeing bad behaviour on 1.9 as well.

@Keno (Member, Author) commented Jan 31, 2023:

Yes, this was master.

@gbaraldi (Member) commented Jan 31, 2023:

bench = "list.jl"
  No Changes to `~/GCBenchmarks/benches/serial/linked/Project.toml`
  No Changes to `~/GCBenchmarks/benches/serial/linked/Manifest.toml`
┌─────────┬────────────┬─────────┬───────────┬────────────┬──────────────┬───────────────────┬──────────┬────────────┐
│         │ total time │ gc time │ mark time │ sweep time │ max GC pause │ time to safepoint │ max heap │ percent gc │
│         │         ms │      ms │        ms │         ms │           ms │                us │       MB │          % │
├─────────┼────────────┼─────────┼───────────┼────────────┼──────────────┼───────────────────┼──────────┼────────────┤
│ minimum │       9253 │    6099 │      5380 │        704 │         1981 │                12 │     3055 │         63 │
│  median │       9342 │    6118 │      5396 │        722 │         1995 │                13 │     3136 │         64 │
│ maximum │       9424 │    6226 │      5503 │        725 │         2031 │                18 │     3140 │         65 │
│   stdev │         62 │      53 │        51 │          6 │           19 │                 2 │       26 │          0 │
└─────────┴────────────┴─────────┴───────────┴────────────┴──────────────┴───────────────────┴──────────┴────────────┘
Julia Version 1.9.0-beta3
Commit 24204a73447 (2023-01-18 07:20 UTC)
Platform Info:
  OS: Linux (aarch64-linux-gnu)
  CPU: 4 × Neoverse-N1
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-14.0.6 (ORCJIT, neoverse-n1)
  Threads: 1 on 4 virtual cores

Doesn't seem to be aarch64-related. It would be nice to test on an x86 Mac as well.

bench = "append.jl"
  No Changes to `~/GCBenchmarks/benches/serial/append/Project.toml`
  No Changes to `~/GCBenchmarks/benches/serial/append/Manifest.toml`
┌─────────┬────────────┬─────────┬───────────┬────────────┬──────────────┬───────────────────┬──────────┬────────────┐
│         │ total time │ gc time │ mark time │ sweep time │ max GC pause │ time to safepoint │ max heap │ percent gc │
│         │         ms │      ms │        ms │         ms │           ms │                us │       MB │          % │
├─────────┼────────────┼─────────┼───────────┼────────────┼──────────────┼───────────────────┼──────────┼────────────┤
│ minimum │       1963 │     199 │       158 │         40 │           98 │                15 │     1470 │          3 │
│  median │       1975 │     204 │       163 │         41 │           99 │                17 │     1470 │          3 │
│ maximum │       2016 │     211 │       170 │         42 │          101 │                21 │     1470 │          4 │
│   stdev │         16 │       3 │         3 │          1 │            1 │                 2 │        0 │          0 │
└─────────┴────────────┴─────────┴───────────┴────────────┴──────────────┴───────────────────┴──────────┴────────────┘

For reference: it's slower than expected, but not pathological.

@d-netto (Member) commented Jan 31, 2023:

linked/list.jl got worse after #47292 on the M1 (master refers to before the merge, PR to after):

┌─────────────────┬────────┬────────┐
│                 │ master │     PR │
├─────────────────┼────────┼────────┤
│ total time [ms] │  76984 │ 102113 │
│    gc time [ms] │  75482 │ 101252 │
│  mark time [ms] │  53561 │  73611 │
│ sweep time [ms] │  21905 │  27691 │
│  max pause [ms] │   4038 │   4330 │
│ max memory [MB] │   4051 │   4047 │
│          pct gc │     98 │     99 │
└─────────────────┴────────┴────────┘

The same benchmark had the opposite trend on an AMD/x86-64 machine:

┌─────────────────┬────────┬──────┐
│                 │ master │   PR │
├─────────────────┼────────┼──────┤
│ total time [ms] │   7152 │ 5736 │
│    gc time [ms] │   5357 │ 3918 │
│  mark time [ms] │   4826 │ 3376 │
│ sweep time [ms] │    532 │  542 │
│  max pause [ms] │   1787 │ 1302 │
│ max memory [MB] │   2410 │ 2410 │
│          pct gc │     74 │   67 │
└─────────────────┴────────┴──────┘

@Keno (Member, Author) commented Jan 31, 2023:

Would be interesting to test M1 linux as well. I guess a VM might be fine to just get a rough idea.

@gbaraldi (Member):

bench = "list.jl"
  No Changes to `~/gctest/Resources/julia/bin/GCBenchmarks/benches/serial/linked/Project.toml`
  No Changes to `~/gctest/Resources/julia/bin/GCBenchmarks/benches/serial/linked/Manifest.toml`
┌─────────┬────────────┬─────────┬───────────┬────────────┬──────────────┐
│         │ total time │ gc time │ mark time │ sweep time │ max GC pause │
│         │         ms │      ms │        ms │         ms │           ms │
├─────────┼────────────┼─────────┼───────────┼────────────┼──────────────┤
│ minimum │       7592 │    5487 │      4825 │        662 │         1792 │
│  median │       7912 │    5772 │      5075 │        695 │         1888 │
│ maximum │       8352 │    6050 │      5339 │        720 │         1982 │
│   stdev │        266 │     184 │       173 │         19 │           69 │
└─────────┴────────────┴─────────┴───────────┴────────────┴──────────────┘
                                                       3 columns omitted

bench = "append.jl"
  No Changes to `~/gctest/Resources/julia/bin/GCBenchmarks/benches/serial/append/Project.toml`
  No Changes to `~/gctest/Resources/julia/bin/GCBenchmarks/benches/serial/append/Manifest.toml`
┌─────────┬────────────┬─────────┬───────────┬────────────┬──────────────┐
│         │ total time │ gc time │ mark time │ sweep time │ max GC pause │
│         │         ms │      ms │        ms │         ms │           ms │
├─────────┼────────────┼─────────┼───────────┼────────────┼──────────────┤
│ minimum │       2167 │     240 │        86 │        153 │           58 │
│  median │       2222 │     251 │        90 │        162 │           63 │
│ maximum │       2356 │     263 │        95 │        172 │           65 │
│   stdev │         55 │       7 │         3 │          7 │            2 │
└─────────┴────────────┴─────────┴───────────┴────────────┴──────────────┘


julia> versioninfo()
Julia Version 1.9.0-beta3
Commit 24204a73447 (2023-01-18 07:20 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin21.4.0)
  CPU: 12 × Intel(R) Core(TM) i7-8700B CPU @ 3.20GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-14.0.6 (ORCJIT, skylake)
  Threads: 1 on 12 virtual cores
Environment:
  DYLD_FALLBACK_LIBRARY_PATH = /Users/julia/lib:/usr/local/lib:/lib:/usr/lib:/Users/julia/Library/Python/3.7/lib
  JULIA_PKG_PRECOMPILE_AUTO = 1
  JULIA_PKG_SERVER = https://pkg.julialang.org

It seems to be M1-related 🤔

@gbaraldi (Member) commented Jan 31, 2023:

macOS is doing something really bad here, because this is on a Linux VM on my M1 laptop:
[image: benchmark results from the Linux VM]
I couldn't get the clipboard working, so bear with the prints.

@Keno (Member, Author) commented Jan 31, 2023:

It seems surprising that there would be such a big OS difference here .... We're not really doing any system calls. I guess we might be stressing the memory subsystem, but jeez....

@Keno (Member, Author) commented Jan 31, 2023:

I did a bare-metal Linux run and the results match @gbaraldi's VM results.

@Keno (Member, Author) commented Jan 31, 2023:

GC log on M1/macOS:

GC: pause 6.97ms. collected 38.484309MB. incr
GC: pause 29.97ms. collected 0.093800MB. incr
GC: pause 49.96ms. collected 0.000000MB. incr
GC: pause 77.46ms. collected 0.000000MB. incr
GC: pause 109.58ms. collected 0.000000MB. incr
GC: pause 161.82ms. collected 0.000000MB. incr
GC: pause 241.54ms. collected 0.000000MB. incr
GC: pause 369.79ms. collected 0.000000MB. incr
GC: pause 563.72ms. collected 0.000000MB. incr
GC: pause 790.89ms. collected 0.000000MB. incr
GC: pause 729.75ms. collected 0.000000MB. full
GC: pause 1206.77ms. collected 14.342874MB. full
GC: pause 1235.39ms. collected 2.383547MB. full
GC: pause 1272.20ms. collected 0.000000MB. full
GC: pause 1284.24ms. collected 0.000000MB. full
GC: pause 1305.48ms. collected 0.000000MB. full
GC: pause 1328.40ms. collected 0.000000MB. full
GC: pause 1349.18ms. collected 0.000000MB. full
GC: pause 1369.07ms. collected 0.000000MB. full
GC: pause 1395.83ms. collected 0.000000MB. full
GC: pause 1420.44ms. collected 0.000000MB. full
GC: pause 1438.30ms. collected 0.000000MB. full
GC: pause 1462.76ms. collected 0.000000MB. full
GC: pause 1487.71ms. collected 0.000000MB. full
GC: pause 1512.28ms. collected 0.000000MB. full
GC: pause 1531.34ms. collected 0.000000MB. full
GC: pause 1553.63ms. collected 0.000000MB. full
GC: pause 1571.83ms. collected 0.000000MB. full
GC: pause 1597.79ms. collected 0.000000MB. full
GC: pause 1617.82ms. collected 0.000000MB. full
GC: pause 1634.78ms. collected 0.000000MB. full
GC: pause 1657.50ms. collected 0.000000MB. full
GC: pause 1683.11ms. collected 0.000000MB. full
GC: pause 1704.77ms. collected 0.000000MB. full
GC: pause 1729.11ms. collected 0.000000MB. full
GC: pause 1934.86ms. collected 0.000000MB. full
GC: pause 1768.93ms. collected 0.000000MB. full
GC: pause 1794.73ms. collected 0.000000MB. full
GC: pause 1830.96ms. collected 0.000000MB. full
GC: pause 1844.74ms. collected 0.000000MB. full
GC: pause 1870.58ms. collected 0.000000MB. full
GC: pause 1893.89ms. collected 0.000000MB. full
GC: pause 1900.76ms. collected 0.000000MB. full
GC: pause 1913.94ms. collected 0.000000MB. full
GC: pause 1937.05ms. collected 0.000000MB. full
GC: pause 1961.40ms. collected 0.000000MB. full
GC: pause 1979.99ms. collected 0.000000MB. full
GC: pause 2001.17ms. collected 0.000000MB. full
GC: pause 2036.07ms. collected 0.000000MB. full
GC: pause 2048.52ms. collected 0.000000MB. full
GC: pause 2104.55ms. collected 0.000000MB. full
GC: pause 2120.38ms. collected 0.000000MB. full
GC: pause 2135.64ms. collected 13.838608MB. full
(value = 134217728, times = 0x00000010f92eb423, gc_diff = Base.GC_Diff(4294967296, 0, 0, 134217728, 0, 175, 71423721960, 52, 42), gc_end = Base.GC_Num(30364928, 0, 0, 1483, 1, 142908647, 50668, 1480, 71518025462, 5312867266, 0, 0x0000000002bc0000, 74, 42, 2120381292, 4244861763, 7542, 15209, 322084250, 1798285791, 10952903502, 60563773457))

vs x86_64 Linux:

GC: pause 18.85ms. collected 26.226526MB. incr
GC: pause 41.31ms. collected 0.146512MB. incr
GC: pause 65.38ms. collected 0.000000MB. incr
GC: pause 83.65ms. collected 0.000000MB. incr
GC: pause 155.24ms. collected 0.000000MB. incr
GC: pause 174.23ms. collected 0.000000MB. incr
GC: pause 260.61ms. collected 0.000000MB. incr
GC: pause 394.58ms. collected 0.000000MB. incr
GC: pause 617.96ms. collected 0.000000MB. incr
GC: pause 1018.14ms. collected 0.000000MB. incr
GC: pause 1329.41ms. collected 0.000000MB. incr
(value = 134217728, times = 0x0000000158a0571d, gc_diff = Base.GC_Diff(4294967296, 0, 0, 134217728, 0, 13, 4159353906, 11, 0), gc_end = Base.GC_Num(822962816, 0, 0, 35, 0, 136652179, 1193, 27, 4230891708, 3642918580, 0, 0x0000000068707a6b, 14, 0, 1329409486, 2725647287, 15250, 20040, 280061775, 1049347211, 872873882, 3357979326))

For some reason on macOS, we're just doing way more collections and in particular an excessive amount of full collections.
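
(For anyone reproducing this: the per-pause lines above come from Julia's built-in GC logging, available on 1.8 and later, and can be toggled at runtime.)

# Print a line per collection in the form
# "GC: pause <ms>. collected <MB>. <incr|full>", as in the logs above.
GC.enable_logging(true)

# ... run the workload of interest ...

GC.enable_logging(false)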

@KristofferC (Member) commented Jan 31, 2023:

What's the status on 1.8? Just wondering whether there is something to bisect.

@Keno (Member, Author) commented Jan 31, 2023:

Looks like 1.8 is fine (M1/aarch64):

GC: pause 11.79ms. collected 24.086122MB. incr
GC: pause 26.25ms. collected 0.008432MB. incr
GC: pause 47.50ms. collected 0.003504MB. full
GC: pause 124.23ms. collected 0.875768MB. full
GC: pause 219.33ms. collected 0.000000MB. full
GC: pause 488.75ms. collected 0.000000MB. full
GC: pause 1290.01ms. collected 0.000000MB. full
(value = 134217728, times = 0x00000000b6d8db62, gc_diff = Base.GC_Diff(4294967296, 0, 0, 134217728, 0, 20, 2207871376, 7, 5), gc_end = Base.GC_Num(1270836832, 0, 0, 210, 0, 136860540, 1828, 208, 2230395626, 3231358286, 0, 0x000000010b076000, 11, 5))

@Keno added the regression and GC labels on Jan 31, 2023
@Keno added this to the 1.9 milestone on Jan 31, 2023
@DilumAluthge added the system:apple silicon label on Jan 31, 2023
@d-netto (Member) commented Jan 31, 2023:

Could be related to #44805.

For reference, after commenting out:

// If the live data outgrows the suggested max_total_memory
// we keep going with minimum intervals and full gcs until
// we either free some space or get an OOM error.
if (live_bytes > max_total_memory) {
    sweep_full = 1;
}

and:

// We need this for 32 bit but will be useful to set limits on 64 bit
if (gc_num.interval + live_bytes > max_total_memory) {
    if (live_bytes < max_total_memory) {
        gc_num.interval = max_total_memory - live_bytes;
    } else {
        // We can't stay under our goal so let's go back to
        // the minimum interval and hope things get better
        gc_num.interval = default_collect_interval;
    }
}

on master it goes to

┌─────────┬────────────┬─────────┬───────────┬────────────┬──────────────┬───────────────────┬──────────┬────────────┐
│         │ total time │ gc time │ mark time │ sweep time │ max GC pause │ time to safepoint │ max heap │ percent gc │
│         │         ms │      ms │        ms │         ms │           ms │                us │       MB │          % │
├─────────┼────────────┼─────────┼───────────┼────────────┼──────────────┼───────────────────┼──────────┼────────────┤
│ minimum │       6085 │    4596 │      3853 │        728 │         1859 │                 8 │     2740 │         75 │
│    mean │       6235 │    4733 │      3943 │        789 │         2133 │                12 │     2741 │         75 │
│ maximum │       6367 │    4862 │      3986 │        876 │         2312 │                17 │     2745 │         76 │
│   stdev │        101 │      96 │        54 │         60 │          180 │                 4 │        2 │          0 │
└─────────┴────────────┴─────────┴───────────┴────────────┴──────────────┴───────────────────┴──────────┴────────────┘

on the M1/macOS.

@gbaraldi (Member):

I wonder if we're getting the wrong values from libuv here.

@oscardssmith (Member):

Are we detecting max memory incorrectly on Mac?

@gbaraldi (Member):

Yes, Sys.free_memory() shows I have 350 MB free while I have over 5 GB.
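
(A quick way to see the numbers the heuristics are being fed; Sys.free_memory() appears to reflect the same libuv available-memory query the GC consults here:)

# On this macOS/M1 setup the "free" figure is tiny even though several GB are
# reclaimable from caches, which is what starves the GC heuristics.
println("free:  ", Sys.free_memory() / 2^20, " MiB")
println("total: ", Sys.total_memory() / 2^20, " MiB")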

@d-netto (Member) commented Jan 31, 2023:

Seems like that's the case (rather than an issue with the heuristics). Hardcoding 2 GB (which is closer to what I have on my machine) into free_mem (instead of uv_get_available_memory) gives me:

┌─────────┬────────────┬─────────┬───────────┬────────────┬──────────────┬───────────────────┬──────────┬────────────┐
│         │ total time │ gc time │ mark time │ sweep time │ max GC pause │ time to safepoint │ max heap │ percent gc │
│         │         ms │      ms │        ms │         ms │           ms │                us │       MB │          % │
├─────────┼────────────┼─────────┼───────────┼────────────┼──────────────┼───────────────────┼──────────┼────────────┤
│ minimum │       6983 │    5537 │      4367 │       1091 │         2457 │                 8 │     2740 │         78 │
│    mean │       7100 │    5615 │      4466 │       1149 │         2511 │                11 │     2741 │         78 │
│ maximum │       7247 │    5700 │      4546 │       1226 │         2579 │                15 │     2745 │         78 │
│   stdev │        114 │      75 │        70 │         57 │           51 │                 3 │        2 │          0 │
└─────────┴────────────┴─────────┴───────────┴────────────┴──────────────┴───────────────────┴──────────┴────────────┘

on M1/macOS.

@gbaraldi (Member) commented Feb 1, 2023:

Free memory on macOS, or at least on the M1, doesn't seem to be a reliable thing to check.

@gbaraldi (Member) commented Feb 1, 2023:

With JuliaLang/libuv#34 I get

bench = "list.jl"
┌─────────┬────────────┬─────────┬───────────┬────────────┬──────────────┬───────────────────┬──────────┬────────────┐
│         │ total time │ gc time │ mark time │ sweep time │ max GC pause │ time to safepoint │ max heap │ percent gc │
│         │         ms │      ms │        ms │         ms │           ms │                us │       MB │          % │
├─────────┼────────────┼─────────┼───────────┼────────────┼──────────────┼───────────────────┼──────────┼────────────┤
│ minimum │       5112 │    3703 │      3223 │        466 │         1222 │                 7 │     2701 │         71 │
│  median │       5150 │    3747 │      3262 │        488 │         1236 │                 9 │     2701 │         72 │
│ maximum │       5270 │    3868 │      3372 │        507 │         1323 │                15 │     2705 │         72 │
│   stdev │         57 │      50 │        42 │         12 │           34 │                 2 │        2 │          0 │
└─────────┴────────────┴─────────┴───────────┴────────────┴──────────────┴───────────────────┴──────────┴────────────┘

Which seems more reasonable.

@vtjnash (Member) commented Feb 1, 2023:

Duplicate of #47684

@JeffBezanson (Member):

Should we be calling uv_get_available_memory at all? It might make sense just to use uv_get_constrained_memory as a hint of roughly how much memory is ok to use and let swap handle it.

@vchuravy (Member) commented Feb 7, 2023:

> Should we be calling uv_get_available_memory at all? It might make sense just to use uv_get_constrained_memory as a hint of roughly how much memory is ok to use and let swap handle it.

I am in favor of that. It's odd to take the state of the system when Julia starts as the high-water mark. Constrained memory seems like the right concept (in particular since cgroups is a thing).

@gbaraldi (Member) commented Feb 7, 2023:

macOS has no concept of constrained memory and returns 0 here.
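
(For reference, both libuv queries can be checked directly from the REPL; this assumes the symbols are resolvable in the running Julia process, which they should be since Julia links libuv:)

# uv_get_constrained_memory() returns 0 when no limit (e.g. a cgroup) applies,
# which is what macOS reports; uv_get_available_memory() is the value that fed
# the 70% high-water-mark heuristic.
constrained = ccall(:uv_get_constrained_memory, UInt64, ())
available   = ccall(:uv_get_available_memory, UInt64, ())
println("constrained: ", constrained, " bytes (0 means unconstrained)")
println("available:   ", available / 2^20, " MiB")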

@vchuravy (Member) commented Feb 7, 2023:

Then we use total memory.

@gbaraldi (Member) commented Feb 9, 2023:

After some discussion, should we just remove

julia/src/gc.c, lines 3258 to 3262 at d72a9a1:

uint64_t free_mem = uv_get_available_memory();
uint64_t high_water_mark = free_mem / 10 * 7; // 70% high water mark
if (high_water_mark < max_total_memory)
    max_total_memory = high_water_mark;
and change to just using 50% (or some other fraction) of the constrained memory, which is probably total memory on non-Linux systems?
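
(For context, the failure mode appears to be the interaction between the misreported free memory and the two heuristics quoted earlier: if uv_get_available_memory reports only a few hundred MB at startup, max_total_memory gets capped far below the live heap, so every collection is forced to be a full sweep at the minimum interval. A small Julia sketch of that arithmetic, with illustrative numbers:)

# Illustrative numbers only; this just replays the arithmetic of the quoted gc.c heuristics.
reported_free = 350 * 2^20              # roughly what uv_get_available_memory returned on the M1
max_total_mem = reported_free ÷ 10 * 7  # 70% high water mark ≈ 245 MB
live_bytes    = 4 * 2^30                # the list.jl benchmark keeps ~4 GB live

# Heuristic 1: live data exceeds the cap, so every collection becomes a full sweep.
sweep_full = live_bytes > max_total_mem           # true

# Heuristic 2: the allocation interval can never fit under the cap either,
# so it keeps resetting to the small default interval.
interval_resets = live_bytes >= max_total_mem     # true

println((; sweep_full, interval_resets))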

vtjnash pushed a commit that referenced this issue Feb 10, 2023
Remove the high watermark logic, because it doesn't really make sense,
and allow for use of 60% of system memory before aggressive GC kicks in.

Should fix #48473
KristofferC pushed commits that referenced this issue Feb 20-21, 2023
Remove the high watermark logic, because it doesn't really make sense,
and allow for use of 60% of system memory before aggressive GC kicks in.

Should fix #48473

(cherry picked from commit 500f561)