runtime/cgo: pthread_create failed: Resource temporarily unavailable #24484
Comments
There are many reasons why a program might leak threads. We need to know something about your programs. Ideally, you would give us code that we can use to recreate the problem. Thanks. |
I'm afraid I cannot publicly share too much detail. The code and setup are fairly complex, so difficult to share either way. The instance processes run in separate network namespaces and exchange UDP+ICMP with approx. 100k peers in total. After startup there is almost no spawning of new child processes going on. I don't see any goroutines leaking. I have 50+ CPUs in the server and I believe I can reduce the thread bleeding substantially by setting a lower GOMAXPROCS for the instances. If you can share some reasons or areas to look at, I may be able to check some off the list. |
The most common reason for a thread to be created is because all the existing threads are blocked in system calls or in calls to C code via cgo (the error message shows that your application uses cgo). cgo calls would be the first place to look. See if any of those calls do not return. |
I believe there is no cgo being used outside the standard library. I have compiled with CGO_ENABLED=0 now and will monitor the situation. |
compiling with CGO_ENABLED=0 and setting a low GOMAXPROCS did reduce the overall number of threads to about 1/3rd. Instead of peaking out at 12k threads, I'm now down to 4.5k, but still (very) slowly creeping up. Still investigating. |
@fiber you are probably hitting the process/thread resource limit. The real problem, though, is why the golang runtime chooses to die upon receiving this error. |
In a Go program that uses cgo, new threads are created using pthread_create. |
...on. In my case it is the docker daemon that gets aborted once the limit is reached. |
Goroutines are not threads. There are normally many many more goroutines than threads. A goroutine leak won't in itself lead to this problem. A thread leak will. We can't fix this problem until we understand where the thread leak is coming from in the original program. |
@ianlancetaylor Our service crashes with the same error: runtime/cgo: pthread_create failed: Resource temporarily unavailable. Our go version is 1.11.1. We didn't use cgo in our code, and the libraries we rely on don't seem to use it either. Does Go itself use cgo in some scenarios? |
@xianglinghui - I have the same issue but as @ianlancetaylor explained: in a Go program that uses cgo, new threads are created using pthread_create. Eg. if you disable cgo, new threads will not use pthread_create but another mechanism. For my application (which contains a very small number of concurrent go-routines and no explicit cgo usage but a lot of os.Exec calls) disabling cgo fixed a lot of Resource temporarily unavailable crashes as well. I'm not sure what causes this and how expected it is - but I'm just disabling cgo from now on. |
Things like making parallel DNS requests from a large number of goroutines can also contribute. I managed to drive the number of threads down by queuing requests, but
$ env CGO_ENABLED=0 go build
really fixed the issue for me.
|
@xianglinghui I believe it's platform dependent. If you are running on Darwin, then the standard library will use cgo by default for DNS requests, unless you build with CGO_ENABLED=0. This bug is still waiting for a reproduction case. If you have a case where a program crashes by running out of threads for no clear reason, please do share the code if you can so that we can try to reproduce it ourselves. Don't forget to provide all the relevant system details. @fiber Large numbers of concurrent DNS requests did previously cause large numbers of threads to be created, but we fixed that, at least partially, in #25694. Though we could perhaps extend that fix to also check |
I was trying to solve a HackerRank problem. What I got:
goroutine 0 [idle]:
goroutine 1 [semacquire]:
goroutine 6 [syscall]:
goroutine 21 [semacquire]:
goroutine 22 [select]:
goroutine 23 [semacquire]:
goroutine 24 [semacquire]:
goroutine 25 [semacquire]:
Here's the code (truncated as posted):
package main
import (
// Complete the migratoryBirds function below.
func main() {
}
func readLine(reader *bufio.Reader) string {
}
func checkError(err error) { |
This can happen if your system is overloaded. Is the problem repeatable? |
It's repeatable; it happens every time. I tried running different problems (like a different question) but it still keeps happening. |
Hi, |
I am also facing the same issue. I was not here last week; some bug in HackerRank, I guess. |
Thank you for commenting. I’m sorry you are also experiencing issues but saying “me too” is not as helpful as giving complete information on what you tried to do — the program you wrote, what happened when you ran it, and the details of the machine you ran on, what operating system, what version of Go, etc. With this information it should be possible to locate the cause of the problem. Please consider updating your responses. |
@zainabb12345 Thanks for providing a test case. However, the stack trace that you provided is from the go tool. It is not from your test case. I can build your test using Can you give us precise instructions for how we can reproduce the problem ourselves? Thanks. |
@ianlancetaylor Thanks for your response. It had something to do with hackerrank and not with golang. It's working for me now too. Thanks alot |
I'm able to reproduce this if I use:
package main

import "C"

func main() {
}
Running with
strace seems to indicate that one of the calls to mmap is miscalculating available memory. When using chpst, there's a failed mmap call of It's not clear to me if this is the same root cause as what was originally reported, but gut feeling says no. I can open a separate issue if desired. |
I'm getting the same error in a RHEL environment. |
Timed out in state WaitingForInfo. Closing. (I am just a bot, though. Please speak up if this is a mistake or you have the requested information.) |
As of today it gets reproduced on One alternative to e.g. |
I can reproduce the bug. |
I was facing this same problem when launching hundreds of tiny processes in parallel on my uni cluster. Ultimately, I realized that the processes were running out of memory. So, that's my tip/lesson: make sure your Go process is not running out of memory. |
I stumbled across this issue when opening #69105. |
Yes, that is essentially the same error. |
Please answer these questions before submitting your issue. Thanks!
What version of Go are you using (go version)?
go version go1.10 linux/amd64
Does this issue reproduce with the latest release?
go1.10 is latest
What operating system and processor architecture are you using (go env)?
GOARCH="amd64"
GOOS="linux"
What did you do?
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
My process starts around 500 child processes. The number of OS-level threads creeps up slowly until it reaches around 10k, at which point child processes start to die with the below message.
Process limits seem set sufficiently high
Limit Soft Limit Hard Limit Units
Max processes 257093 257093 processes
$ cat /proc/sys/kernel/threads-max
514187
What did you expect to see?
no crash ;)
What did you see instead?
runtime/cgo: pthread_create failed: Resource temporarily unavailable
SIGABRT: abort
PC=0x7f24685ab428 m=44 sigcode=18446744073709551610
goroutine 0 [idle]:
runtime: unknown pc 0x7f24685ab428
stack: frame={sp:0x7f2407ffea08, fp:0x0} stack=[0x7f24077ff2f0,0x7f2407ffeef0)
00007f2407ffe908: 00007f2468d84168 00007f2407ffea68
00007f2407ffe918: 00007f2468b67b1f 0000000000000002
00007f2407ffe928: 00007f2468d79a80 0000000000000005
00007f2407ffe938: 0000000000f021e0 00007f23d80008c0
00007f2407ffe948: 00000000000000f1 0000000000000011
00007f2407ffe958: 0000000000000000 0000000000c2597a
00007f2407ffe968: 00007f2468b6cac6 0000000000000005
00007f2407ffe978: 0000000000000000 0000000100000000
00007f2407ffe988: 00007f246857cde0 00007f2407ffeb20
00007f2407ffe998: 00007f2468b74923 000000ffffffffff
00007f2407ffe9a8: 0000000000000000 0000000000000000
00007f2407ffe9b8: 0000000000000000 2525252525252525
00007f2407ffe9c8: 2525252525252525 0000000000000000
00007f2407ffe9d8: 00007f246893b700 0000000000c2597a
00007f2407ffe9e8: 00007f23d80008c0 00000000000000f1
00007f2407ffe9f8: 0000000000000011 0000000000000000
00007f2407ffea08: <00007f24685ad02a 0000000000000020
00007f2407ffea18: 0000000000000000 0000000000000000
00007f2407ffea28: 0000000000000000 0000000000000000
00007f2407ffea38: 0000000000000000 0000000000000000
00007f2407ffea48: 0000000000000000 0000000000000000
00007f2407ffea58: 0000000000000000 0000000000000000
00007f2407ffea68: 0000000000000000 0000000000000000
00007f2407ffea78: 0000000000000000 0000000000000000
00007f2407ffea88: 0000000000000000 0000000000000000
00007f2407ffea98: 0000000000000000 0000000000000000
00007f2407ffeaa8: 00007f24685eebff 00007f246893b540
00007f2407ffeab8: 0000000000000001 00007f246893b5c3
00007f2407ffeac8: 00000000000000f1 0000000000000011
00007f2407ffead8: 00007f24685f0409 000000000000000a
00007f2407ffeae8: 00007f246866d2dd 000000000000000a
00007f2407ffeaf8: 00007f246893c770 0000000000000000
runtime: unknown pc 0x7f24685ab428
stack: frame={sp:0x7f2407ffea08, fp:0x0} stack=[0x7f24077ff2f0,0x7f2407ffeef0)
00007f2407ffe908: 00007f2468d84168 00007f2407ffea68
00007f2407ffe918: 00007f2468b67b1f 0000000000000002
00007f2407ffe928: 00007f2468d79a80 0000000000000005
00007f2407ffe938: 0000000000f021e0 00007f23d80008c0
00007f2407ffe948: 00000000000000f1 0000000000000011
00007f2407ffe958: 0000000000000000 0000000000c2597a
00007f2407ffe968: 00007f2468b6cac6 0000000000000005
00007f2407ffe978: 0000000000000000 0000000100000000
00007f2407ffe988: 00007f246857cde0 00007f2407ffeb20
00007f2407ffe998: 00007f2468b74923 000000ffffffffff
00007f2407ffe9a8: 0000000000000000 0000000000000000
00007f2407ffe9b8: 0000000000000000 2525252525252525
00007f2407ffe9c8: 2525252525252525 0000000000000000
00007f2407ffe9d8: 00007f246893b700 0000000000c2597a
00007f2407ffe9e8: 00007f23d80008c0 00000000000000f1
00007f2407ffe9f8: 0000000000000011 0000000000000000
00007f2407ffea08: <00007f24685ad02a 0000000000000020
00007f2407ffea18: 0000000000000000 0000000000000000
00007f2407ffea28: 0000000000000000 0000000000000000
00007f2407ffea38: 0000000000000000 0000000000000000
00007f2407ffea48: 0000000000000000 0000000000000000
00007f2407ffea58: 0000000000000000 0000000000000000
00007f2407ffea68: 0000000000000000 0000000000000000
00007f2407ffea78: 0000000000000000 0000000000000000
00007f2407ffea88: 0000000000000000 0000000000000000
00007f2407ffea98: 0000000000000000 0000000000000000
00007f2407ffeaa8: 00007f24685eebff 00007f246893b540
00007f2407ffeab8: 0000000000000001 00007f246893b5c3
00007f2407ffeac8: 00000000000000f1 0000000000000011
00007f2407ffead8: 00007f24685f0409 000000000000000a
00007f2407ffeae8: 00007f246866d2dd 000000000000000a
00007f2407ffeaf8: 00007f246893c770 0000000000000000
goroutine 632 [running]:
runtime.systemstack_switch()
/opt/go/1.10.0/go/src/runtime/asm_amd64.s:363 fp=0xc4204f6d50 sp=0xc4204f6d48 pc=0x457270
runtime.gcMarkTermination(0x3ff75e93c8506a48)
/opt/go/1.10.0/go/src/runtime/mgc.go:1647 +0x407 fp=0xc4204f6f20 sp=0xc4204f6d50 pc=0x41a907
runtime.gcMarkDone()
/opt/go/1.10.0/go/src/runtime/mgc.go:1513 +0x22c fp=0xc4204f6f48 sp=0xc4204f6f20 pc=0x41a49c
runtime.gcBgMarkWorker(0xc420048500)
/opt/go/1.10.0/go/src/runtime/mgc.go:1912 +0x2e7 fp=0xc4204f6fd8 sp=0xc4204f6f48 pc=0x41b417
runtime.goexit()
/opt/go/1.10.0/go/src/runtime/asm_amd64.s:2361 +0x1 fp=0xc4204f6fe0 sp=0xc4204f6fd8 pc=0x459de1
created by runtime.gcBgMarkStartWorkers
/opt/go/1.10.0/go/src/runtime/mgc.go:1723 +0x79