Deadlocks with version 1.93 #1440

Open
mdantonio opened this issue Aug 27, 2024 · 3 comments

@mdantonio

Hey, we are using logrus in a service deployed on Kubernetes that produces a pretty large amount of logs (around 4 million entries per hour). We had not observed similar issues in several months, but today we experienced several deadlocks without any change to our stack (the service had been stable and running for many days, then it started to freeze completely every 15-20 minutes).
We are using the latest version (1.9.3).

goroutine 848 gp=0xc003eb4000 m=nil [sync.Mutex.Lock, 218 minutes]:
runtime.gopark(0x3631905e71dab8?, 0x11?, 0x10?, 0x0?, 0xc0001cb600?)
	/usr/local/go/src/runtime/proc.go:402 +0xce fp=0xc0001cb5b0 sp=0xc0001cb590 pc=0x44716e
runtime.goparkunlock(...)
	/usr/local/go/src/runtime/proc.go:408
runtime.semacquire1(0x24228f4, 0x0, 0x3, 0x1, 0x15)
	/usr/local/go/src/runtime/sema.go:160 +0x225 fp=0xc0001cb618 sp=0xc0001cb5b0 pc=0x459fe5
sync.runtime_SemacquireMutex(0x0?, 0x0?, 0xc0059b9c00?)
	/usr/local/go/src/runtime/sema.go:77 +0x25 fp=0xc0001cb650 sp=0xc0001cb618 pc=0x47b065
sync.(*Mutex).lockSlow(0x24228f0)
	/usr/local/go/src/sync/mutex.go:171 +0x15d fp=0xc0001cb6a0 sp=0xc0001cb650 pc=0x48ad5d
sync.(*Mutex).Lock(...)
	/usr/local/go/src/sync/mutex.go:90
github.com/sirupsen/logrus.(*MutexWrap).Lock(...)
	/go/pkg/mod/github.com/sirupsen/logrus@v1.9.3/logger.go:61
github.com/sirupsen/logrus.(*Entry).log(0xc0059b9b90, 0x4, {0xc004dbbc98, 0x11})
	/go/pkg/mod/github.com/sirupsen/logrus@v1.9.3/entry.go:233 +0x2d1 fp=0xc0001cb798 sp=0xc0001cb6a0 pc=0x7dba91
github.com/sirupsen/logrus.(*Entry).Log(0xc0059b9b90, 0x4, {0xc0001cb7f8?, 0xc0001cb808?, 0x41535b?})
	/go/pkg/mod/github.com/sirupsen/logrus@v1.9.3/entry.go:304 +0x48 fp=0xc0001cb7c8 sp=0xc0001cb798 pc=0x7dc1e8
github.com/sirupsen/logrus.(*Entry).Info(...)
	/go/pkg/mod/github.com/sirupsen/logrus@v1.9.3/entry.go:321
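
For context: the dump shows the goroutine parked in sync.Mutex.Lock underneath logrus.(*MutexWrap).Lock (logger.go:61), i.e. it is waiting for logrus's internal logger mutex, which is held while an entry is formatted and written to the configured output. One way to end up in exactly this state (a hypothesis, not confirmed for this report) is an io.Writer or hook that stops returning, for example a stalled pipe or remote sink: the goroutine that holds the mutex never releases it and every other logging goroutine queues behind it. A minimal sketch, with a made-up blockingWriter standing in for such an output:

package main

import (
	"time"

	log "github.com/sirupsen/logrus"
)

// blockingWriter stands in for an output that stops accepting data
// (for example a pipe whose consumer has stalled): Write never returns.
type blockingWriter struct{}

func (blockingWriter) Write(p []byte) (int, error) {
	select {} // block forever
}

func main() {
	log.SetOutput(blockingWriter{})

	// The first call acquires logrus's internal mutex and then blocks
	// inside Write, so the mutex is never released.
	go log.Info("first message")
	time.Sleep(100 * time.Millisecond)

	// Every later call parks in sync.Mutex.Lock waiting for that mutex,
	// the same state shown in the goroutine dump above. In a real service
	// the process simply freezes; in this toy program the Go runtime may
	// instead abort with "all goroutines are asleep - deadlock!".
	log.Info("second message")
}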

I see similar issues reported in the past (#1201, which is closed but apparently not resolved).

What could be the reason for such deadlocks? Could they be caused by some dirty data being logged? Any hints on how to prevent similar deadlocks?
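
As for prevention: if the stall really does come from the output rather than from the data being logged, a common workaround (not a logrus feature, just plain Go; the names here are illustrative) is to decouple logging from the destination so that a slow or wedged sink drops lines instead of blocking the logger mutex. A rough sketch:

package main

import (
	"io"
	"os"
	"time"

	log "github.com/sirupsen/logrus"
)

// droppingWriter forwards writes to the real destination from a separate
// goroutine and drops entries instead of blocking when the buffer is full.
type droppingWriter struct {
	ch chan []byte
}

func newDroppingWriter(dst io.Writer, size int) *droppingWriter {
	w := &droppingWriter{ch: make(chan []byte, size)}
	go func() {
		for p := range w.ch {
			dst.Write(p) // write errors ignored in this sketch
		}
	}()
	return w
}

func (w *droppingWriter) Write(p []byte) (int, error) {
	// Copy the bytes: the caller may reuse its buffer after Write returns.
	b := make([]byte, len(p))
	copy(b, p)
	select {
	case w.ch <- b:
	default: // destination not keeping up: drop instead of blocking
	}
	return len(p), nil
}

func main() {
	log.SetOutput(newDroppingWriter(os.Stdout, 1024))
	log.Info("logging no longer blocks even if the destination stalls")
	time.Sleep(100 * time.Millisecond) // give the demo a moment to flush
}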

@ss920386

Hi @mdantonio, may I ask more about your "freeze every 15-20 minutes"? Do you mean that the freeze (deadlock) resolved itself, the service kept consuming for a while, and then everything happened again? Or did you restart the service manually to clear the deadlock?

@mdantonio
Author

Hi @ss920386, the service was not able to recover by itself and a restart was needed.

The pattern was something like:

  • the service started and worked for some time (15-20 minutes on average)
  • suddenly the service got stuck, forever
  • by sending SIGTERM to the service we got the stack trace above (a way to take such a dump without killing the process is sketched after this list)
  • after manually restarting the service, it worked again for 15-20 minutes on average, then got stuck again
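
On the SIGTERM point above: a goroutine dump can also be taken from the live, frozen process without terminating it, for example by exposing net/http/pprof on a side port. The port and wiring below are illustrative:

package main

import (
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/ handlers on the default mux
)

func main() {
	// With this running, a full goroutine dump of the stuck process can be
	// fetched at any time with:
	//   curl 'http://localhost:6060/debug/pprof/goroutine?debug=2'
	go func() {
		// Errors (e.g. port already in use) are ignored in this sketch.
		_ = http.ListenAndServe("localhost:6060", nil)
	}()

	// ... the rest of the service ...
	select {}
}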

@ss920386
Copy link

@mdantonio Thanks for the information. At first I suspected I had run into the same issue as you 😂 However, I only saw long processing times rather than a deadlock. Sorry that I still don't have any idea about the issue… but thanks again for your quick response 🙏
