go routine deadlock #2485
Others have encountered something like this in the past: #1842. Unfortunately I can't reproduce this myself, and I was unable to fix it. Marking as …
I haven't been able to come up with a reproduction of the deadlock, but I did theorycraft a data race, confirmed it via …, and discussed it in this thread. Here is the race detector report:

…
I generated this by building esbuild (at the …) with this patch applied:

```diff
diff --git a/Makefile b/Makefile
index 87093c61..6f44e7aa 100644
--- a/Makefile
+++ b/Makefile
@@ -7,7 +7,7 @@ GO_FLAGS += "-ldflags=-s -w"
 GO_FLAGS += -trimpath
 
 esbuild: version-go cmd/esbuild/*.go pkg/*/*.go internal/*/*.go go.mod
-	CGO_ENABLED=0 go build $(GO_FLAGS) ./cmd/esbuild
+	CGO_ENABLED=1 go build $(GO_FLAGS) -race ./cmd/esbuild
 
 test:
 	@$(MAKE) --no-print-directory -j6 test-common
diff --git a/cmd/esbuild/service.go b/cmd/esbuild/service.go
index 96f18ba0..cf6d53c0 100644
--- a/cmd/esbuild/service.go
+++ b/cmd/esbuild/service.go
@@ -10,6 +10,7 @@ import (
 	"fmt"
 	"io"
 	"io/ioutil"
+	"log"
 	"os"
 	"regexp"
 	"sync"
@@ -65,6 +66,7 @@ func (service *serviceType) trackActiveBuild(key int, activeBuild *activeBuild)
 		service.activeBuilds[key] = activeBuild
 
 		// This pairs with "Done()" in "decRefCount"
+		log.Print("service.keepAliveWaitGroup.Add(1) (trackActiveBuild)")
 		service.keepAliveWaitGroup.Add(1)
 	}
 }
@@ -85,6 +87,7 @@ func (service *serviceType) decRefCount(key int, activeBuild *activeBuild) {
 		service.mutex.Unlock()
 
 		// This pairs with "Add()" in "trackActiveBuild"
+		log.Print("service.keepAliveWaitGroup.Done() (decRefCount)")
 		service.keepAliveWaitGroup.Done()
 	}
 }
@@ -94,6 +97,8 @@ type outgoingPacket struct {
 }
 
 func runService(sendPings bool) {
+	log.Print("runService")
+
 	logger.API = logger.JSAPI
 
 	service := serviceType{
@@ -114,6 +119,7 @@ func runService(sendPings bool) {
 			if _, err := os.Stdout.Write(packet.bytes); err != nil {
 				os.Exit(1) // I/O error
 			}
+			log.Print("service.keepAliveWaitGroup.Done() (packet writer)")
 			service.keepAliveWaitGroup.Done() // This pairs with the "Add()" when putting stuff into "outgoingPackets"
 		}
 	}()
@@ -129,6 +135,8 @@ func runService(sendPings bool) {
 		go func() {
 			for {
 				time.Sleep(1 * time.Second)
+				log.Print("ping")
+
 				service.sendRequest(map[string]interface{}{
 					"command": "ping",
 				})
@@ -158,13 +166,16 @@ func runService(sendPings bool) {
 
 			// Clone the input and run it on another goroutine
 			clone := append([]byte{}, packet...)
+			log.Print("service.keepAliveWaitGroup.Add(1) (loop 1)")
 			service.keepAliveWaitGroup.Add(1)
 			go func() {
 				out := service.handleIncomingPacket(clone)
 				if out.bytes != nil {
+					log.Print("service.keepAliveWaitGroup.Add(1) (loop 2)")
 					service.keepAliveWaitGroup.Add(1) // The writer thread will call "Done()"
 					service.outgoingPackets <- out
 				}
+				log.Print("service.keepAliveWaitGroup.Done() (loop 3)")
 				service.keepAliveWaitGroup.Done() // This pairs with the "Add()" in the stdin thread
 			}()
 		}
@@ -174,7 +185,9 @@ func runService(sendPings bool) {
 	}
 
 	// Wait for the last response to be written to stdout
+	log.Print("service.keepAliveWaitGroup.Wait()")
 	service.keepAliveWaitGroup.Wait()
+	log.Print("exiting")
 }
 
 func (service *serviceType) sendRequest(request interface{}) interface{} {
@@ -192,6 +205,7 @@ func (service *serviceType) sendRequest(request interface{}) interface{} {
 		service.callbacks[id] = callback
 		return id
 	}()
+	log.Print("service.keepAliveWaitGroup.Add(1) (sendRequest)")
 	service.keepAliveWaitGroup.Add(1) // The writer thread will call "Done()"
 	service.outgoingPackets <- outgoingPacket{
 		bytes: encodePacket(packet{
diff --git a/lib/npm/node.ts b/lib/npm/node.ts
index eee195e6..20b646ad 100644
--- a/lib/npm/node.ts
+++ b/lib/npm/node.ts
@@ -269,6 +269,11 @@ let ensureServiceIsRunning = (): Service => {
   // Assume the service was stopped if we get an error writing to stdin
   child.stdin.on('error', afterClose);
 
+  setTimeout(() => {
+    console.log("closing stdin")
+    child.stdin.end()
+  }, 993)
+
   // Propagate errors about failure to run the executable itself
   child.on('error', afterClose);
```

The value …

```js
const esbuild = require('esbuild')
esbuild.initialize({})
setTimeout(() => process.exit(1), 1000)
```

My hunch is that without …
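To make the suspected failure mode concrete, here is a minimal, self-contained Go sketch of the same shape (hypothetical code, not esbuild's): a background "ping" goroutine keeps calling `Add(1)` on the keep-alive `WaitGroup` with no coordination against the final `Wait()`. This violates the `sync.WaitGroup` contract that an `Add` with a positive delta starting from a zero counter must happen before `Wait`; depending on timing, `Wait()` can return early or the runtime can panic with `sync: WaitGroup misuse: Add called concurrently with Wait`.

```go
// Hypothetical sketch of the misuse pattern discussed above; not esbuild's code.
package main

import (
	"sync"
	"time"
)

func main() {
	var keepAlive sync.WaitGroup

	// Background pinger, analogous to the ping loop in service.go: it
	// periodically registers new work via Add(1), unsynchronized with the
	// final Wait() below.
	go func() {
		for {
			keepAlive.Add(1) // can race with Wait()
			go func() {
				time.Sleep(time.Microsecond)
				keepAlive.Done()
			}()
			time.Sleep(time.Microsecond)
		}
	}()

	// Main goroutine, analogous to runService after stdin closes: once the
	// initial work finishes, the counter can touch zero, so Wait() may
	// return while the pinger is about to Add(1) again.
	keepAlive.Add(1)
	go func() {
		time.Sleep(time.Millisecond)
		keepAlive.Done()
	}()
	keepAlive.Wait()
}
```

Because the failure window is narrow and timing-dependent, the sketch may need many runs (ideally under `go run -race`) before it misbehaves, which matches how rarely the real deadlock reproduces.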
I don't know if it can help, but I have the same issue here with code like this:

```js
await esbuild(/* ...*/);
process.exit(0);
```

If I add a timeout before exiting, the problem is gone:

```js
await esbuild(/* ...*/);
setTimeout(() => process.exit(0), 100);
```
@jfirebaugh Thanks very much for the detailed post. I wasn't aware of this issue with …
Just pushed a potential fix for this issue. I'm going to consider this fixed and close this issue since there isn't enough information to verify the fix. We can reopen this issue if it's still happening after the next version of esbuild is released with the fix.
My attempt to fix this didn't work: #2727. Specifically, preventing esbuild from calling …
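For readers wondering what a safe shape looks like, one common repair pattern (a sketch under assumed names like `keepAlive` and `Acquire`; not necessarily what esbuild's actual fix does) is to replace the bare `WaitGroup` with a mutex-guarded counter that refuses new work once shutdown has begun, so "register work" and "decide to exit" can never interleave:

```go
// Hypothetical sketch of a shutdown-aware keep-alive counter; not esbuild's code.
package main

import "sync"

type keepAlive struct {
	mu       sync.Mutex
	count    int
	draining bool
	idle     chan struct{} // closed when count reaches zero after draining begins
}

func newKeepAlive() *keepAlive {
	return &keepAlive{idle: make(chan struct{})}
}

// Acquire registers one unit of work. It fails once shutdown has begun,
// which is exactly the case where WaitGroup.Add would have raced with Wait.
func (k *keepAlive) Acquire() bool {
	k.mu.Lock()
	defer k.mu.Unlock()
	if k.draining {
		return false
	}
	k.count++
	return true
}

// Release retires one unit of work and, if shutdown has begun and this was
// the last outstanding unit, unblocks Wait.
func (k *keepAlive) Release() {
	k.mu.Lock()
	defer k.mu.Unlock()
	k.count--
	if k.draining && k.count == 0 {
		close(k.idle)
	}
}

// Wait begins draining: no new work can be registered, and it blocks until
// all outstanding work has been released.
func (k *keepAlive) Wait() {
	k.mu.Lock()
	k.draining = true
	done := k.count == 0
	k.mu.Unlock()
	if !done {
		<-k.idle
	}
}

func main() {
	k := newKeepAlive()
	if k.Acquire() {
		go func() {
			defer k.Release()
			// ... do the work ...
		}()
	}
	k.Wait() // once draining starts, Acquire reports false
}
```

With this shape, the ping goroutine would call `Acquire()` and simply skip sending when it returns false, instead of racing the final `Wait()`.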
@evanw Not sure if it's helpful, but I think I was able to reproduce this issue by using the React integration of Astro and running the …

Output

…

Copied from the StackBlitz terminal.

Steps to reproduce

…
To prove that it's the React integration that's causing this, open the … and apply this change:

```diff
diff --git a/astro.config.mjs b/astro.config.mjs
--- a/astro.config.mjs
+++ b/astro.config.mjs
@@ -4,5 +4,5 @@ import react from '@astrojs/react';
 // https://astro.build/config
 export default defineConfig({
   // Enable React to support React JSX components.
-  integrations: [react()],
+  // integrations: [react()],
 });
```

Now the …
I think I encountered this issue too.
This issue also occurred on Vite 6.0.1:

…
Using Vite + SvelteKit:

I switched branches in my repo and left Vite running while a lot of files changed (thanks to sveltejs/kit#5774), and I got the following deadlock dump:

…

It may not be as important in this case, but I want to put it here in case it matters in other cases.
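As a practical note for anyone else who hits this hang: a full goroutine dump is the most useful thing to attach. Sending SIGQUIT to a stuck Go process (`kill -QUIT <pid>`) makes the runtime print every goroutine's stack before exiting; a process can also expose the same dump on demand, as in this small sketch (a hypothetical helper, not part of esbuild; SIGUSR1 is Linux/macOS only):

```go
// Hypothetical helper: dump every goroutine's stack when the process
// receives SIGUSR1, without killing it. Not part of esbuild.
package main

import (
	"os"
	"os/signal"
	"runtime/pprof"
	"syscall"
)

func installStackDumper() {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGUSR1)
	go func() {
		for range ch {
			// Debug level 2 prints one panic-style stack trace per goroutine.
			pprof.Lookup("goroutine").WriteTo(os.Stderr, 2)
		}
	}()
}

func main() {
	installStackDumper()
	select {} // stand-in for a hung process; `kill -USR1 <pid>` dumps stacks
}
```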