fix for "fatal error: concurrent map iteration and map write" #667
base: master
Conversation
update from master
fixes to prevent "fatal error: concurrent map iteration and map write"
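The diff itself isn't reproduced in this thread, but judging from the discussion below, the idea is to give the formatter and hooks a private copy of the entry's fields, so they never range over a map another goroutine may still be writing to. Roughly, that kind of defensive copy looks like the sketch below (illustrative only, not the actual diff; dupEntry is a made-up name):

package logrussketch

import "github.com/sirupsen/logrus"

// dupEntry returns a copy of the entry whose Data map is private, so that
// formatting can iterate it without racing against writes to the original
// fields map.
func dupEntry(e *logrus.Entry) *logrus.Entry {
	data := make(logrus.Fields, len(e.Data))
	for k, v := range e.Data {
		data[k] = v
	}
	dup := *e       // shallow copy of the entry itself...
	dup.Data = data // ...but with its own fields map
	return &dup
}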
Is this fix still valid? I'm getting these panics fairly often. When I look at the current version of …
Yes, I tried it and it is still valid. It works without any errors.
I have this issue with the latest release.
I'm also experiencing this with 1.7.0.
Experiencing this too, it would be nice to have this PR merged! :)
Just noticed this today. Any luck getting this merged if it's updated?
Same issue here, could it be merged?
I have the same issue, please merge.
@sirupsen please review.
Hello? :)
The root cause on my side was logging the data while concurrently modifying it (d'oh, I could've read that in the error message!). It was a very rare bug, but it ultimately reared its head in forms other than logging. The erroneous code looked something like this:

func process(b []byte) {
	var modifierMap map[string]interface{}
	_ = json.Unmarshal(b, &modifierMap)
	go func() {
		logrus.WithField("incoming map", modifierMap).Info("started processing A")
		// ... further processing
	}()
	modifierMap["ID"] = "bar" // or a new UUID or some similar edit
	// ... any further processing
}

My solution was to deep copy the map before handing it off to the goroutine that read/logged it, basically the same as this PR suggests. IMO this is an acceptable tradeoff for clients to make, and I'm not sure the logging framework should impose the penalty of deep copying a map within itself.
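For illustration, a client-side workaround along those lines could look like the sketch below. The copyFields helper and the surrounding program are hypothetical, not part of logrus, and the copy is only one level deep, which is enough here because only the top-level map is mutated:

package main

import (
	"encoding/json"
	"time"

	"github.com/sirupsen/logrus"
)

// copyFields takes a one-level copy of the map so the logging goroutine
// never iterates the map that the caller keeps mutating.
func copyFields(src map[string]interface{}) map[string]interface{} {
	dst := make(map[string]interface{}, len(src))
	for k, v := range src {
		dst[k] = v
	}
	return dst
}

func process(b []byte) {
	var modifierMap map[string]interface{}
	_ = json.Unmarshal(b, &modifierMap)

	snapshot := copyFields(modifierMap) // copy before handing it to another goroutine
	go func() {
		logrus.WithField("incoming map", snapshot).Info("started processing A")
	}()

	modifierMap["ID"] = "bar" // later writes only touch the caller's map
}

func main() {
	process([]byte(`{"foo": "baz"}`))
	time.Sleep(100 * time.Millisecond) // give the logging goroutine time to finish
}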
@sirupsen could you please pay attention to this bug? It's really important; with this bug, everything related to thread safety in logrus looks effectively unusable.
@quenbyako the code has changed a lot since this PR. The latest v1.8.1 should contain fixes for this kind of issue.
This is happening to me as well in version 1.8.1.
Is there no other way of solving this that doesn't require a full copy of the map? Can you still reproduce it?
I've seen a race condition issue while testing logrus with a hook.
So to isolate the issue, I've made a test program that can cause the fatal error, and opened this PR to fix it.
Thanks.
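The attached test program isn't reproduced inline, but a minimal sketch of the same failure pattern, against the logrus version shown in the trace, might look like the program below. It is loosely modelled on the Mystruct.f2 frame in the trace; the hook and the other names are illustrative, not the attached file:

package main

import (
	"sync"

	"github.com/sirupsen/logrus"
)

// addFieldHook writes into the entry's field map on every log call.
type addFieldHook struct{}

func (addFieldHook) Levels() []logrus.Level { return logrus.AllLevels }

func (addFieldHook) Fire(e *logrus.Entry) error {
	e.Data["hooked"] = true // a write into the shared Data map
	return nil
}

// Mystruct shares one *logrus.Entry (and therefore one Data map)
// across goroutines.
type Mystruct struct {
	entry *logrus.Entry
}

func (m *Mystruct) f2() {
	for i := 0; i < 100000; i++ {
		// Each call fires the hook (a map write) and then formats the
		// entry (a map iteration) over the same shared Data map, so
		// concurrent callers can trigger
		// "fatal error: concurrent map iteration and map write".
		m.entry.Info("processing")
	}
}

func main() {
	logrus.SetFormatter(&logrus.JSONFormatter{})
	logrus.AddHook(addFieldHook{})

	m := &Mystruct{entry: logrus.WithField("component", "demo")}

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			m.f2()
		}()
	}
	wg.Wait()
}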
===========
logrus_fatalerror.go.txt
fatal error: concurrent map iteration and map write
goroutine 6 [running]:
runtime.throw(0x10f4c35, 0x26)
/usr/local/go/src/runtime/panic.go:605 +0x95 fp=0xc420071c48 sp=0xc420071c28 pc=0x1027a65
runtime.mapiternext(0xc420071de8)
/usr/local/go/src/runtime/hashmap.go:778 +0x6f1 fp=0xc420071ce0 sp=0xc420071c48 pc=0x1009251
github.com/Sirupsen/logrus.(*JSONFormatter).Format(0xc42000a080, 0xc420084730, 0xc42000a060, 0x10e7a20, 0xc420092000, 0x0, 0x0)
/Users/javarange/GOLANG/src/github.com/Sirupsen/logrus/json_formatter.go:51 +0xe5 fp=0xc420071e58 sp=0xc420071ce0 pc=0x10b28e5
github.com/Sirupsen/logrus.Entry.log(0xc420084190, 0xc42007c2d0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x4, ...)
/Users/javarange/GOLANG/src/github.com/Sirupsen/logrus/entry.go:109 +0x22c fp=0xc420071f28 sp=0xc420071e58 pc=0x10b1e2c
github.com/Sirupsen/logrus.(*Entry).Info(0xc420084230, 0xc420071fb8, 0x1, 0x1)
/Users/javarange/GOLANG/src/github.com/Sirupsen/logrus/entry.go:144 +0xac fp=0xc420071f98 sp=0xc420071f28 pc=0x10b22ec
main.(*Mystruct).f2(0xc42000a0a0)
/Users/javarange/Work/FM/logrus_panic/logrus_fatalerror.go:91 +0x75 fp=0xc420071fd8 sp=0xc420071f98 pc=0x10b6045