Reassigning global Logger output for testing throws race condition #242
This would introduce serialization of all the writes of a logger just to handle this edge case. For me it's a no-go. You should only change the global logger before you start using it concurrently (during your app init, for instance). If your tests need to customize the logger, they should create their own logger and not rely on the global one.
It's a design decision for sure, and I do appreciate that. The problem for me is that I've come from logrus, where I could do this because it was thread-safe. Zap's global logger also allows this. Don't get me wrong, a global logger stinks of an anti-pattern and screams dependency injection, but it's just so damn convenient (and is accepted in the standard library). In the case of logrus, they have a MutexWrap on the logger, which can essentially turn on serialisation across the logger. This would solve the problem and we could get rid of the SyncWriter, but I appreciate this is a big change...
I wouldn’t like my logging library to add mutexes behind my back :)
This is only tangentially related, but we're looking to use Zerolog and we're running into a similar data race. This test consistently reproduces the problem for me.
The data races happen during writes. What is the expected/recommended usage of Zerolog in a concurrent environment?
A given …
Thanks for the reply. We've updated our code to pass around …
@rs could you share the link or code snippet that explains how to properly instantiate a logger based on the global logger? When I try

```go
import "github.com/rs/zerolog/log"

func .. {
	go somethingDo(log.With().Str("logger", "somtehing").Logger())
}
```

I still get race conditions
@rs
Some tests reassign the global logger. These tests may fail due to a race condition. It's a known zerolog issue: rs/zerolog#242

![Screenshot 2024-05-13 at 12 21 06](https://github.com/QuesmaOrg/quesma/assets/1474/cff627d2-977d-43ae-b239-38c526a5cb75)

Changes
1. We moved these tests to a separate file and excluded them from running under the race detector.
2. We need to run these tests anyway, so we run the tests twice, with and without the race detector enabled.

Reference: https://go.dev/doc/articles/race_detector
For my issue, Altinity/clickhouse-backup#670 (comment) solved it. Thanks for the help, @rdmrcv
I've noticed that testing the output of logs can trigger a data race, because reassigning the global logger's output is not thread-safe. Just to clarify, I've understood that the only way to do this is `log.Logger = log.Output(<writer>)`, since `log.Logger.Output()` returns a copy of the global logger; it doesn't adjust the logger itself. In the Go standard library, `log.SetOutput()` is thread-safe. Can we implement the same here? If so, I'd be happy to do the PR.
Example code to make it as simple as possible to replicate the issue (run it with `go test -race -count=1 ./...`). Output is:
Interestingly, rearranging the tests the other way "fixes" the issue, but of course that's super brittle and doesn't solve the problem 😅
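Since the original repro snippet isn't included above, here is a hedged, stdlib-only sketch of the same race shape (not the poster's actual test): one goroutine logs through a package-level logger variable while another reassigns it. `go run -race` flags the unsynchronized read/write of the variable; without the race detector it usually completes quietly, which is why the bug is easy to miss.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"sync"
)

// Package-level logger, standing in for zerolog's global log.Logger.
var logger = log.New(io.Discard, "", 0)

func run() string {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			logger.Print("hello") // concurrent read of the global
		}
	}()
	var buf bytes.Buffer
	logger = log.New(&buf, "", 0) // concurrent write: this is the race
	wg.Wait()
	return "done"
}

func main() {
	fmt.Println(run())
}
```

Note that `log.Logger`'s own methods are internally synchronized; the race is purely on the package-level variable, which is exactly the situation when a test reassigns the global logger's output.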