Deadlocks with version 1.9.3 #1440
Comments
Hi @mdantonio, may I ask more about your "freeze every 15-20 minutes"? Did that mean the freeze (deadlock) resolved itself and the service kept working for a while before it all happened again? Or did you restart the service manually to eliminate the deadlock?
Hi @ss920386, the service was not able to recover by itself and a restart of the service was needed. The pattern was something like:
@mdantonio Thanks for your information. I first suspected that I had encountered the same issue as yours 😂 However, I only got long processing times instead of a deadlock. Sorry that I still don't have any idea about the issue… but thanks again for your quick response 🙏
This issue is stale because it has been open for 30 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.
@ss920386 @mdantonio: I have more information on this. We just encountered this issue when upgrading from logrus at commit
Reopening this bug would be greatly appreciated - due to a chain of dependency conflicts, we can't really just roll back to the old version (at least, not forever). |
I ran a diff between v1.9.3 and v1.8.2 and found the only substantial difference to be this part, which is in the same struct as our stack trace (Entry):
Also hitting this deadlock. Can we reopen this and see if there is any solution? @ss920386 For me the lock happens in
Here is a simple test case for this scenario:
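A minimal sketch of one way such a deadlock can arise, not the original test code: a field value whose String method logs through the same logger re-enters logrus from inside formatting. This assumes the entry is formatted while the logger's non-reentrant internal mutex is held, which is an assumption here rather than something confirmed in the thread; the chatty type is purely illustrative.

```go
package main

import (
	log "github.com/sirupsen/logrus"
)

// chatty is an illustrative type whose String method logs through the same
// global logger that is in the middle of rendering it.
type chatty struct{}

func (chatty) String() string {
	// Re-enters the logger from inside formatting. If the entry is being
	// formatted while the logger's (non-reentrant) mutex is held, this call
	// blocks forever on the same goroutine.
	log.Info("logging from inside String()")
	return "chatty"
}

func main() {
	// The text formatter stringifies field values, which invokes String().
	log.WithField("value", chatty{}).Info("formatting a value that logs")
	log.Info("never reached if the logger self-deadlocked above")
}
```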
Hi @Pure-AdamuKaapan @sillydong!
Shall we file a new bug then, now that we have a specific repro? (thanks so much @sillydong, beat me to it!)
Here is a new issue tracking this: #1448. It's fully reproducible and the root cause has been found.
Hey, we are using logrus on a service deployed on Kubernetes that produces a pretty big amount of logs (around 4 million per hour). We never observed similar issues in several months, but today we experienced several deadlocks without any change to our stack (the service had been stable and running for many days, then it started to completely freeze every 15-20 minutes).
We are using the latest version (1.9.3).
I see similar issues reported in the past (#1201, which is closed but apparently not resolved).
What is the reason for such deadlocks? Could they be caused by some dirty data being logged? Any hints to prevent similar deadlocks?
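For anyone hitting a freeze like this, one way to confirm it is a deadlock rather than slow I/O is to pull goroutine stacks from the stuck process. A hedged sketch using the standard net/http/pprof endpoint follows; the listen address and the surrounding structure are illustrative, not part of the service described above.

```go
package main

import (
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on http.DefaultServeMux

	log "github.com/sirupsen/logrus"
)

func main() {
	// Illustrative: serve pprof on a side port. While the process appears
	// frozen, fetching
	//   http://localhost:6060/debug/pprof/goroutine?debug=2
	// dumps every goroutine's stack, including any blocked on a logrus mutex.
	go func() {
		if err := http.ListenAndServe("localhost:6060", nil); err != nil {
			log.WithError(err).Error("pprof listener stopped")
		}
	}()

	log.Info("service started")
	select {} // stand-in for the real workload
}
```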