
runtime: excessive memory use between 1.21.0 -> 1.21.1 due to hugepages and the linux/amd64 max_ptes_none default of 512 #64332

Closed
LeGEC opened this issue Nov 22, 2023 · 70 comments
Labels
compiler/runtime Issues related to the Go compiler and/or runtime. WaitingForInfo Issue is not actionable because of missing required information, which needs to be provided.

Comments

@LeGEC

LeGEC commented Nov 22, 2023

What version of Go are you using (go version)?

$ go version
go version go1.21.4 linux/amd64

Does this issue reproduce with the latest release?

Yes

What operating system and processor architecture are you using (go env)?

go env Output
$ go env
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/builder/.cache/go-build'
GOENV='/home/builder/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/builder/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/builder/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.21.4'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='0'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build3335037910=/tmp/go-build -gno-record-gcc-switches'

What did you do?

Our production service was recently shut down by the OOM killer, which led us to inspect its memory usage in detail.

We discovered that, when running, our process had a memory consumption (RSS) that grew over time and never shrank.

A sample runtime.MemStats report:

	"Alloc": 830731560,
	"TotalAlloc": 341870177656,
	"Sys": 8023999048,
	"Lookups": 0,
	"Mallocs": 3129044622,
	"Frees": 3124956536,
	"HeapAlloc": 830731560,
	"HeapSys": 7836532736,
	"HeapIdle": 6916292608,
	"HeapInuse": 920240128,
	"HeapReleased": 6703923200,
	"HeapObjects": 4088086,
	"StackInuse": 15204352,
	"StackSys": 15204352,
	"MSpanInuse": 8563968,
	"MSpanSys": 17338944,
	"MCacheInuse": 4800,
	"MCacheSys": 15600,
	"BuckHashSys": 5794138,
	"GCSys": 146092920,
	"OtherSys": 3020358,
	"NextGC": 1046754240,
	"LastGC": 1700579048506142728,
	"PauseTotalNs": 108783964,

At that time, the reported RSS for our process was 3.19 GB.

We looked at our history in more detail and observed a big change in production behavior when we upgraded our Go version from 1.19.5 to 1.20.0 - we unfortunately didn't notice the issue at the time, because we upgrade (and restart) our service on a regular basis.

To confirm this theory, we downgraded our Go version to 1.19.13, and our memory consumption is now small and stable again.

Here is a graph of the RSS of our service over the last 48h; the drop corresponds to our new deployment with Go 1.19.13:

image

It should be noted that our production kernel is a hardened kernel based on grsecurity 5.15.28, which may be related to this issue (randomized heap addresses?).

What did you expect to see?

A constant and stable memory usage.

What did you see instead?

The Go runtime does not seem to release memory back to the system.


Unfortunately, we have only been able to observe this issue on our production system, in production conditions.

We have not yet been able to reproduce the issue on other systems, or by running isolated features in test programs deployed on our production infrastructure.

@seankhliao
Member

perhaps try looking at profiling
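For reference, a minimal sketch of exposing live profiles from a long-running service via net/http/pprof (the listen address here is just an example):

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Heap profile: go tool pprof http://localhost:6060/debug/pprof/heap
	// CPU profile:  go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}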

@seankhliao added the WaitingForInfo label Nov 22, 2023
@LeGEC
Author

LeGEC commented Nov 22, 2023

@seankhliao: we did. The OOM killer incident was a combination of high memory usage and a bug on our side, which we fixed.

After that fix, memory profiles reported only small amounts of inuse_space, in the 300-400 MB range, while our process' RSS was several GB. More importantly, the RSS of our process kept growing over time while the profiles reported no additional in-use memory.

We could certainly optimize allocations in general, but the fact remains: compiling the exact same code with 1.19 produces a binary that doesn't leak. [edit: we will keep watching the "leak" part over the next few days, but the system memory footprint is definitely lower, and it shrinks on occasion, which didn't happen with Go 1.21]

@LeGEC
Author

LeGEC commented Nov 22, 2023

@seankhliao: I see the WaitingForInfo tag; do you have a more specific request in mind?

@vanackere
Contributor

@seankhliao (side note: I work with @LeGEC): we did extensive memory profiling of our processes over several weeks (and can provide full data privately if necessary), but from the point of view of the memory profiler nothing was leaking, and nothing would explain the constant growth in memory usage, most of it unreleased to the OS.
This is definitely a runtime issue, as running the exact same code with Go 1.19.13 leads to drastically better behavior... (see the graph in the first post)
We are also willing to test any intermediate compiler version or run any profiling command that would be useful to track down this issue.

@vanackere
Contributor

And for completeness: we also tried various combinations of debug.FreeOSMemory() / GOMEMLIMIT=1500MiB, with no success at all.
Downgrading our production service to a now unsupported version of the Go compiler was only done as a last-resort attempt, since we have not managed to reproduce the issue outside of production. We'd like to make sure that this issue at least gets fixed for Go 1.22 and will provide all the data necessary.
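For reference, a sketch of how those two knobs can be applied programmatically (the 1500 MiB limit mirrors the GOMEMLIMIT value above; the one-minute FreeOSMemory interval is an arbitrary choice for illustration):

package main

import (
	"runtime/debug"
	"time"
)

func main() {
	// Equivalent to running with GOMEMLIMIT=1500MiB.
	debug.SetMemoryLimit(1500 << 20)

	// Periodically force a GC and ask the runtime to return as much
	// memory as possible to the OS.
	go func() {
		for range time.Tick(time.Minute) {
			debug.FreeOSMemory()
		}
	}()

	select {} // stand-in for the real service
}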

Unless someone from the Go compiler/runtime team has a better suggestion, we are planning to make a test run with Go 1.20.0 tomorrow and update this issue with the corresponding numbers.

@mauri870
Member

mauri870 commented Nov 22, 2023

I remember some things changed regarding huge pages on Linux and were backported to 1.21; that may be related.

#61718

It's worth reading the GC guide's section on THP and GOGC: https://go.dev/doc/gc-guide#Linux_transparent_huge_pages

@mauri870
Member

cc @mknyszek

@vanackere
Contributor

@mauri870 on our production system:

$ cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never

If huge pages are indeed involved: is there a way to disable transparent_hugepage for a single process, without root access? (Alternatively, is there a way to patch the Go compiler to prevent transparent_hugepage usage?)
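For context, the relevant kernel THP settings, including khugepaged's max_ptes_none, can at least be read without root access; a small sketch (the sysfs paths are the standard Linux locations):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	for _, path := range []string{
		"/sys/kernel/mm/transparent_hugepage/enabled",
		"/sys/kernel/mm/transparent_hugepage/defrag",
		"/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none",
	} {
		b, err := os.ReadFile(path)
		if err != nil {
			fmt.Printf("%s: %v\n", path, err)
			continue
		}
		fmt.Printf("%s: %s\n", path, strings.TrimSpace(string(b)))
	}
}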

@seankhliao changed the title go 1.20.0 -> 1.21.4: excessive memory consumption (possible runtime memory leak ?) runtime: excessive memory use between 1.19 -> 1.21 Nov 22, 2023
@gopherbot added the compiler/runtime label Nov 22, 2023
@LeGEC
Author

LeGEC commented Nov 22, 2023

MemStats of our go1.19 program:

	"Alloc": 710580968,
	"TotalAlloc": 252397361888,
	"Sys": 1459456584,
	"Lookups": 0,
	"Mallocs": 2418596147,
	"Frees": 2414925477,
	"HeapAlloc": 710580968,
	"HeapSys": 1360199680,
	"HeapIdle": 517931008,
	"HeapInuse": 842268672,
	"HeapReleased": 372064256,
	"HeapObjects": 3670670,
	"StackInuse": 15532032,
	"StackSys": 15532032,
	"MSpanInuse": 7786368,
	"MSpanSys": 11976192,
	"MCacheInuse": 4800,
	"MCacheSys": 15600,
	"BuckHashSys": 4959765,
	"GCSys": 64211392,
	"OtherSys": 2561923,
	"NextGC": 992932464,
	"LastGC": 1700665950049685014,
	"PauseTotalNs": 101655886,

RSS of process: 1158688 KB (so ~1.16 GB)

@mknyszek
Contributor

The huge page related changes can't be the culprit, because huge pages were only forced by the runtime between Go 1.21.0 and Go 1.21.3 (inclusive). Go 1.21.4 no longer adjusts huge page settings at all. I don't see any of those versions in the conversation above.

@mknyszek
Contributor

mknyszek commented Nov 22, 2023

How are you measuring RSS?

Also, as a side note, "Sys" is virtual memory footprint. It's not going to match up with RSS. "Sys - HeapReleased" will be much closer.
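A minimal sketch of logging that estimate, using the same runtime.MemStats fields shown in the reports above:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// Sys is the total virtual memory obtained from the OS; HeapReleased
	// has already been returned to the OS, so the difference is a rough
	// estimate of what should show up as RSS.
	fmt.Printf("Sys=%d HeapReleased=%d estimated retained=%d\n",
		m.Sys, m.HeapReleased, m.Sys-m.HeapReleased)
}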

Hmm... is it possible this is related to a regression in a specific standard library package for instance? IIRC we didn't see anything like this back when Go 1.20 was released, so it's a bit surprising. It's still possible it's related to the compiler/runtime, so I don't mean to rule it out.

Is it possible to reproduce this on a smaller scale? The difference seems fairly big and "obvious" so if you can bisect down to the commit between Go 1.19 and Go 1.20 that causes it, that would immediately tell us what happened.

@vanackere
Contributor

@mknyszek RSS was measured from the output of ps aux. The value mentioned in the initial post may indeed have been slightly off, but not by much: the RSS of our process right before restarting the service with our version compiled with Go 1.19 was 4397688 KB (with a VSZ of 9107484 KB).

The same production process with Go 1.19.13 currently gives the following values, much more consistent with the actual allocations reported by profiling:

image

@vanackere
Contributor

@mknyszek yes, we also thought it possible that this could be related to changes in some standard library packages.
However, unless those packages have specific hooks into the runtime or make use of unsafe, those allocations should normally have been visible in the memory profiles, no?

I can share, privately, a heap profile obtained from the /pprof/heap endpoint while the relevant service was on Go 1.21.4, if that helps.

Bisecting between 1.19 and 1.20 is a possibility, and we can do that, but we can only try one version per day in order to reproduce, since the issue looks dependent on user/service activity... Can you suggest a list of commits to try?

@mknyszek
Contributor

mknyszek commented Nov 22, 2023

@vanackere There were a lot of commits that went into Go 1.20; it would take quite a while to enumerate them and try to guess what went wrong. 😅

The commit range is e99f53f..9088c69 which contains 2018 commits.

$ git log --oneline e99f53fed98b0378c147588789b8c56b0305469b..9088c691dac424540f562d6271c5ee479e9f9d80 | wc -l
2018

log2(2018) ~= 11, so it would take around 11 attempts to identify the culprit. I realize that's a lot of days to be trying this out in production. Is there really no other way to create a smaller reproducer?

Although... it occurs to me that Go 1.20 had some significant crypto regressions that could make TLS (and/or other similar operations like validating JWT tokens) slower. I could imagine that perhaps that could cause requests to pile up in your service? I would've expected them to have been mitigated somewhat in Go 1.21, but perhaps that's worth looking into?

See #63516 and #59442 maybe? (EDIT: I accidentally put the wrong issue number in a moment ago. It should be correct now.)

On that note, what does a CPU profile say?

@LeGEC
Author

LeGEC commented Nov 24, 2023

One update: we compiled and deployed our server using Go 1.20.0, and the leaky behavior is not observed.

We're hesitating between testing 1.20.11 (the latest 1.20) and 1.21.0.

Would you have any insight about these two versions?

For example: were there any fixes or updates to memory handling in one of the 1.20 minor releases?

@LeGEC
Author

LeGEC commented Nov 24, 2023

Update: we tried Go 1.20.11 today, and still have a reasonable (for us...) RSS value.

The behavior change happens between 1.20.11 and 1.21.4; updating the title accordingly.

@LeGEC changed the title runtime: excessive memory use between 1.19 -> 1.21 runtime: excessive memory use between 1.20.11 -> 1.21 Nov 24, 2023
@mknyszek
Contributor

mknyszek commented Nov 27, 2023

@Nasfame Unfortunately neither of us can bisect because we can't run the reproducer.

@LeGEC Thanks for the updates. I think the only thing I'll say is it seems like the crypto-related regressions are unrelated. At this point bisection and/or a reproducer seems like it would be the most fruitful path forward. As before, it may be worthwhile to look at a CPU profile before and after, which might reveal other unexpected changes in application behavior that lead to the culprit.

@LeGEC
Author

LeGEC commented Nov 27, 2023

Unfortunately neither of us can bisect because we can't run the reproducer.

@mknyszek: yes, we're trying to run parts of our services in isolation; unfortunately, none of the would-be culprits seems to be guilty alone (or, more probably, we haven't identified the right culprit).

Today we rebuilt our server with gotip (Go master at 0c7e5d3), deployed it, and the same buggy memory behavior clearly shows up.

@LeGEC
Author

LeGEC commented Nov 27, 2023

@mknyszek:
I will repeat one point: we suspect that the behavior we observe on our production server may be due to an interaction between the Go runtime and the kernel on that server. The system on our servers is managed by our hosting provider, and they run a kernel based on grsecurity 5.15.28.
There are definitely some memory-safety related features activated; we are trying to get more details on which ones.

One extra piece of information, in case it helps:
when we tried to run a binary compiled with the race detector, it crashed with a fatal error (full stack trace below):

fatal error: too many address space collisions for -race mode

runtime.(*mheap).alloc.func1()
	/usr/local/go/src/runtime/mheap.go:968 +0x5c fp=0x73783ad4f7f8 sp=0x73783ad4f7b0 pc=0x4637fc
runtime.(*mheap).alloc(0x2780160?, 0x274e720?, 0x19?)
	/usr/local/go/src/runtime/mheap.go:962 +0x5b fp=0x73783ad4f840 sp=0x73783ad4f7f8 pc=0x46375b
runtime.(*mcentral).grow(0x73783ad4f8c0?)
	/usr/local/go/src/runtime/mcentral.go:246 +0x52 fp=0x73783ad4f880 sp=0x73783ad4f840 pc=0x451e52
runtime.(*mcentral).cacheSpan(0x277aea8)
	/usr/local/go/src/runtime/mcentral.go:166 +0x306 fp=0x73783ad4f8d8 sp=0x73783ad4f880 pc=0x451cc6
runtime.(*mcache).refill(0x689f9e4c0108, 0x0?)
	/usr/local/go/src/runtime/mcache.go:182 +0x153 fp=0x73783ad4f918 sp=0x73783ad4f8d8 pc=0x451413
runtime.(*mcache).nextFree(0x689f9e4c0108, 0x1c)
	/usr/local/go/src/runtime/malloc.go:925 +0x85 fp=0x73783ad4f960 sp=0x73783ad4f918 pc=0x447745
runtime.mallocgc(0xc0, 0x1656b60, 0x1)
	/usr/local/go/src/runtime/malloc.go:1112 +0x448 fp=0x73783ad4f9e0 sp=0x73783ad4f960 pc=0x447d08
runtime.newobject(0x73783ad4f9f8?)
	/usr/local/go/src/runtime/malloc.go:1324 +0x25 fp=0x73783ad4fa08 sp=0x73783ad4f9e0 pc=0x448305
internal/cpu.doinit()
	/usr/local/go/src/internal/cpu/cpu_x86.go:51 +0x1e fp=0x73783ad4fa68 sp=0x73783ad4fa08 pc=0x43b1be
internal/cpu.Initialize({0x0, 0x0})
	/usr/local/go/src/internal/cpu/cpu.go:125 +0x1d fp=0x73783ad4fa88 sp=0x73783ad4fa68 pc=0x43ac1d
runtime.cpuinit({0x0?, 0x800000000?})
	/usr/local/go/src/runtime/proc.go:639 +0x1f fp=0x73783ad4faa8 sp=0x73783ad4fa88 pc=0x476b1f
runtime.schedinit()
	/usr/local/go/src/runtime/proc.go:729 +0xbd fp=0x73783ad4faf0 sp=0x73783ad4faa8 pc=0x476d7d
runtime.rt0_go()
	/usr/local/go/src/runtime/asm_amd64.s:349 +0x11c fp=0x73783ad4faf8 sp=0x73783ad4faf0 pc=0x4a8dfc

@mknyszek
Contributor

mknyszek commented Nov 27, 2023

Unfortunately neither of us can bisect because we can't run the reproducer.

yes, we're trying to run the parts of our services in isolation, unfortunately none of the would be culprits seem to be guilty alone (or, more probably, we haven't identified the right culprit).

Just to be clear, I didn't mean that in a bad way -- I definitely understand the difficulty of creating reproducers (especially when system details might be involved). 😅 And unless the reproducing code is open source, it's difficult and/or impossible to share too many details. I was just clarifying for @Nasfame. Thanks for your continued communication here!

One extra piece of information, in case it helps:
when we tried to run a binary compiled with the race detector, it crashed with a fatal error (full stack trace below):

This is something I would expect with a kernel forcing address space randomization. The race detector (TSAN) requires all heap memory to exist in a very specific range of addresses. This is unfortunately not easy to change. IIUC it's fairly fundamental to the technique.

@mknyszek
Contributor

mknyszek commented Nov 27, 2023

There are definitely some memory-safety related features activated; we are trying to get more details on which ones.

Acknowledged. That does seem to be one big thing that's unique about your system, and any other details you can share about what's activated would be helpful. I agree that it seems plausible that this issue is due to some unfortunate interaction between Go 1.21 and your specific environment.

@LeGEC
Author

LeGEC commented Nov 27, 2023

To give a clearer view of how visible the memory misbehavior is on our server: here is a graph of RAM usage (RSS) over the last 2 weeks for our 3 main services (the green line is the sum of the other 3).

process RSS over time

Grayed zones are weekends (our service is used far less on weekends);
sudden "drops" in RAM usage are when we restart the service -- depending on the occasion, for a new deployment or to reduce the memory footprint.

@gopherbot

This comment was marked as off-topic.

@mknyszek
Contributor

@Nasfame I appreciate that you're trying to help create a reproducer, but I don't think replicating the high-level details of the application is going to result in a reproducible case. There are plenty of high-load services (microservices, monoliths, etc.) on common platforms that are running with Go 1.21, and to our knowledge they haven't experienced the same issue.

Therefore, the issue has to lie in something very specific to @LeGEC's application, either in how their application uses Go or in how Go is interacting with their environment (even if the issue is ultimately in the Go runtime). The relevant details appear to be that (1) it reproduces in Go 1.21 and not Go 1.20, and (2) the system is running with a bunch of security features enabled that to my knowledge aren't super common (like ASLR).

As a side-note, the number of lines of code in an application's source isn't really going to be indicative of much in general.

https://go.dev/cl/545275 mentions the fix for the darwin kernel. But the bug occurs in linux.

It was a mistake. I marked the comment off-topic.

@LeGEC
Author

LeGEC commented Nov 28, 2023

Blind shot: did something land in go1.21 that changed the way a Go process returns memory to the OS?

@LeGEC
Author

LeGEC commented Dec 6, 2023

Regarding the pages allocated for mheap.arenas and pageAlloc.chunks (@mknyszek: thanks for your patience, I finally wrapped my head around the fact that only these pages are explicitly marked with madvise(..., MADV_{NO,}HUGEPAGE)...):

what is the expected gain from backing these pages with huge pages (or not)?

LeGEC pushed a commit to trustelem/go that referenced this issue Dec 6, 2023
Go 1.21.1 and Go 1.22 have ceased working around an issue with Linux
kernel defaults for transparent huge pages that can result in excessive
memory overheads. (https://bugzilla.kernel.org/show_bug.cgi?id=93111)

Many Linux distributions disable huge pages altogether these days, so
this problem isn't quite as far-reaching as it used to be. Also, the
problem only affects Go programs with very particular memory usage
patterns.

That being said, because the runtime used to actively deal with this
problem (but with some unpredictable behavior), it's preventing users
that don't have a lot of control over their execution environment from
upgrading to Go beyond Go 1.20.

This change adds a GODEBUG to smooth over the transition. The GODEBUG
setting disables transparent huge pages for all heap memory on Linux,
which is much more predictable than restoring the old behavior.

For golang#64332.
Fixes golang#64561.

Change-Id: I73b1894337f0f0b1a5a17b90da1221e118e0b145
Reviewed-on: https://go-review.googlesource.com/c/go/+/547475
Reviewed-by: Michael Pratt <mpratt@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
(cherry picked from commit c915215)
@LeGEC
Author

LeGEC commented Dec 7, 2023

We can confirm that the disablethp flag and the actions tied to it (see commit e48984499a) are enough to fix the issue in our case.

[edit] For completeness: we tested this patch on top of version 1.21.5, which is the version referenced by the test-fix-64332 branch in our fork.

@mknyszek
Contributor

mknyszek commented Dec 7, 2023

what is the expected gain from backing these pages with hugepages (or not) ?

It's less about backing these pages with huge pages and more about disabling huge pages for small heaps. These mappings tend to be quite large in comparison to small heaps, which might lead the kernel to back them with a huge page, resulting in large proportional overheads.

The main reason to keep the mitigation in this case is that the runtime doesn't ever return any of this memory to the OS, so even if the Linux configuration doesn't have a high max_ptes_none we might still end up with an unnecessary huge page on this metadata with nothing else we can do about it. With the general heap we regularly return memory to the OS, so if max_ptes_none is zero as per the GC guide's recommendation, the broken huge pages will stay broken.

If it were up to me we wouldn't call MADV_HUGEPAGE at all, but these mappings grow very slowly compared to the heap and we have no other way to undo MADV_NOHUGEPAGE. Although the initial mapping size is big compared to the heap size, because these mappings grow slowly, there won't actually be very many calls to and/or initial accesses to newly MADV_HUGEPAGE'd memory, mitigating most of the stall issues that make MADV_HUGEPAGE problematic. Also, the math works out that the worst-case additional overhead for asking the OS to provide a huge page is proportionally very small, mitigating that issue as well.

Overall, I think the cost/benefit just works out in favor of keeping this particular mitigation. It helps small programs stay small without really hurting anyone else in practice.
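For illustration only, a sketch of the two madvise calls being discussed, applied to a private anonymous mapping via golang.org/x/sys/unix (this is not the runtime's code, just the same syscalls):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Map 4 MiB of anonymous memory, standing in for a runtime metadata mapping.
	mem, err := unix.Mmap(-1, 0, 4<<20,
		unix.PROT_READ|unix.PROT_WRITE,
		unix.MAP_PRIVATE|unix.MAP_ANONYMOUS)
	if err != nil {
		panic(err)
	}
	defer unix.Munmap(mem)

	// Hint that the kernel may back this range with huge pages.
	if err := unix.Madvise(mem, unix.MADV_HUGEPAGE); err != nil {
		fmt.Println("MADV_HUGEPAGE:", err)
	}
	// The only way to undo that hint is the opposite advice.
	if err := unix.Madvise(mem, unix.MADV_NOHUGEPAGE); err != nil {
		fmt.Println("MADV_NOHUGEPAGE:", err)
	}
}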

gopherbot pushed a commit that referenced this issue Jan 4, 2024
Go 1.21.1 and Go 1.22 have ceased working around an issue with Linux
kernel defaults for transparent huge pages that can result in excessive
memory overheads. (https://bugzilla.kernel.org/show_bug.cgi?id=93111)

Many Linux distributions disable huge pages altogether these days, so
this problem isn't quite as far-reaching as it used to be. Also, the
problem only affects Go programs with very particular memory usage
patterns.

That being said, because the runtime used to actively deal with this
problem (but with some unpredictable behavior), it's preventing users
that don't have a lot of control over their execution environment from
upgrading to Go beyond Go 1.20.

This change adds a GODEBUG to smooth over the transition. The GODEBUG
setting disables transparent huge pages for all heap memory on Linux,
which is much more predictable than restoring the old behavior.

For #64332.
Fixes #64561.

Change-Id: I73b1894337f0f0b1a5a17b90da1221e118e0b145
Reviewed-on: https://go-review.googlesource.com/c/go/+/547475
Reviewed-by: Michael Pratt <mpratt@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
(cherry picked from commit c915215)
Reviewed-on: https://go-review.googlesource.com/c/go/+/547636
Reviewed-by: Mauri de Souza Meneguzzo <mauri870@gmail.com>
TryBot-Bypass: Michael Knyszek <mknyszek@google.com>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
@ihtkas

ihtkas commented Feb 6, 2024

Our production applications also experienced similar behavior, and we used the GODEBUG=disablethp=1 environment variable as suggested in this issue. We still couldn't figure out how to validate this fix in the production system, as the issue reproduces only on long-lived nodes [>15 hours]. We don't want to wait a long time to validate this fix before exploring alternative fixes. Is there any information we can read from system files to validate that THP is disabled for a particular Go process?

@mknyszek
Contributor

mknyszek commented Feb 6, 2024

One thing you can do is dump /proc/<pid>/smaps for the process and observe that memory regions have the nh ("no hugepage") attribute.
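A small sketch of doing that check from inside the process itself, by scanning /proc/self/smaps for the nh flag on the VmFlags lines:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/smaps")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var total, noHuge int
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if !strings.HasPrefix(line, "VmFlags:") {
			continue
		}
		total++
		// "nh" means MADV_NOHUGEPAGE is set on this mapping.
		if strings.Contains(" "+line+" ", " nh ") {
			noHuge++
		}
	}
	fmt.Printf("%d of %d mappings carry the nh (no hugepage) flag\n", noHuge, total)
}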

@ihtkas

ihtkas commented Feb 8, 2024

Yes, I can see the nh attribute on a few memory regions. We explicitly disabled the flag in the Go code as well. To rule out any memory leaks, we deployed one node with a binary built with go1.18 and 2 nodes with the go1.21 toolchain. The issue persists only in the binaries built with go1.21; the other node's memory stays flat. See the attached screenshot for reference.

S.txt

unix.Prctl(unix.PR_SET_THP_DISABLE, 1, 0, 0, 0)

As an aside, is this fixed in Go 1.22 by any chance? We are experimenting with the new version anyway. I will update after observing the metrics for a day.
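For reference, a self-contained version of that call (a sketch; note that PR_SET_THP_DISABLE turns off THP for the entire process, which is a blunter tool than the per-mapping madvise the runtime applies):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Disable transparent huge pages for the whole process.
	if err := unix.Prctl(unix.PR_SET_THP_DISABLE, 1, 0, 0, 0); err != nil {
		panic(err)
	}
	fmt.Println("transparent huge pages disabled for this process")
}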

@mknyszek
Contributor

mknyszek commented Feb 8, 2024

@ihtkas If you're disabling huge pages and still seeing a memory increase, then that is something else, independent of this issue. Please file a new issue. I have updated the issue title to be more precise.

On a side, Is this fixed in Go 1.22 by any chance?

I'm not sure what you mean. As of Go 1.21, the runtime is no longer going to try and work around the max_ptes_none default in the Linux kernel. That is not going to change. There already are quite a few workarounds to this particular issue, and it looks like you've applied them.

@mknyszek changed the title runtime: excessive memory use between 1.21.0 -> 1.21.1 runtime: excessive memory use between 1.21.0 -> 1.21.1 due to hugepages and the linux/amd64 max_ptes_none default of 512 Feb 8, 2024
jcpowermac added a commit to jcpowermac/release that referenced this issue Feb 15, 2024
Increase memory limits for steps running openshift-tests to account for increased memory consumption.
This change is necessary due to the removal of a workaround for a Linux kernel bug in golang versions 1.21+.
See golang/go#64332.
openshift-merge-bot bot pushed a commit to openshift/release that referenced this issue Feb 15, 2024
Increase memory limits for steps running openshift-tests to account for increased memory consumption.
This change is necessary due to the removal of a workaround for a Linux kernel bug in golang versions 1.21+.
See golang/go#64332.
ezz-no pushed a commit to ezz-no/go-ezzno that referenced this issue Feb 18, 2024
mshitrit pushed a commit to mshitrit/release that referenced this issue Feb 18, 2024
Increase memory limits for steps running openshift-tests to account for increased memory consumption.
This change is necessary due to the removal of a workaround for a Linux kernel bug in golang versions 1.21+.
See golang/go#64332.
derekhiggins added a commit to derekhiggins/release that referenced this issue Feb 18, 2024
This was increased recently for  golang/go#64332 but
metal-ipi bm jobs persist in being OOM'd. Bump it further.
shaior pushed a commit to natifridman/release that referenced this issue Feb 20, 2024
Increase memory limits for steps running openshift-tests to account for increased memory consumption.
This change is necessary due to the removal of a workaround for a Linux kernel bug in golang versions 1.21+.
See golang/go#64332.
openshift-merge-bot bot pushed a commit to openshift/release that referenced this issue Feb 20, 2024
This was increased recently for  golang/go#64332 but
metal-ipi bm jobs persist in being OOM'd. Bump it further.
sgoveas pushed a commit to sgoveas/release that referenced this issue Feb 22, 2024
Increase memory limits for steps running openshift-tests to account for increased memory consumption.
This change is necessary due to the removal of a workaround for a Linux kernel bug in golang versions 1.21+.
See golang/go#64332.
sgoveas pushed a commit to sgoveas/release that referenced this issue Feb 22, 2024
This was increased recently for  golang/go#64332 but
metal-ipi bm jobs persist in being OOM'd. Bump it further.
gopherbot pushed a commit to golang/website that referenced this issue Mar 11, 2024
Go 1.21.1 and Go 1.22 have ceased working around an issue with Linux
kernel defaults for transparent huge pages that can result in excessive
memory overheads. (https://bugzilla.kernel.org/show_bug.cgi?id=93111)

Many Linux distributions disable huge pages altogether these days, so
this problem isn't quite as far-reaching as it used to be. Also, the
problem only affects Go programs with very particular memory usage
patterns.

That being said, because the runtime used to actively deal with this
problem (but with some unpredictable behavior), it's preventing users
that don't have a lot of control over their execution environment from
upgrading to Go beyond Go 1.20.

This adds documentation about this change in behavior in both the GC
guide and the Go 1.21 release notes.

For golang/go#64332.

Change-Id: I29baaffcc678d08255364a3cd6f11211ce4164ba
Reviewed-on: https://go-review.googlesource.com/c/website/+/547675
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Mauri de Souza Meneguzzo <mauri870@gmail.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
memodi pushed a commit to memodi/release that referenced this issue Mar 14, 2024
Increase memory limits for steps running openshift-tests to account for increased memory consumption.
This change is necessary due to the removal of a workaround for a Linux kernel bug in golang versions 1.21+.
See golang/go#64332.
memodi pushed a commit to memodi/release that referenced this issue Mar 14, 2024
This was increased recently for  golang/go#64332 but
metal-ipi bm jobs persist in being OOM'd. Bump it further.
@lojies

lojies commented Oct 22, 2024

@LeGEC, we use go1.22.5 and ran into the same issue. Have you resolved it? Can you give us some suggestions?

@LeGEC
Author

LeGEC commented Oct 24, 2024

@lojies: in our setting, yes, setting GODEBUG=disablethp=1 is enough to fix this very specific issue.

Note however that we first ran several memory profiles on our app to rule out any "regular" memory leaks in our code -- and in doing so we found and fixed several sources of excess memory consumption.

@lojies

lojies commented Oct 24, 2024

@lojies: in our setting, yes, setting GODEBUG=disablethp=1 is enough to fix this very specific issue.

Note however that we first ran several memory profiles on our app to rule out any "regular" memory leaks in our code -- and in doing so we found and fixed several sources of excess memory consumption.

Thank you for your suggestion. We tried, but it had no effect.
