Fix data race: collect results, then append. #872
Conversation
Good stuff. Until this issue I didn't know that Go has a built-in race detector. Might as well use it in testing by default:

```diff
--- a/Makefile
+++ b/Makefile
@@ -22,3 +22,3 @@ check-go-version:
 test-go:
-	go test ./internal/...
+	go test -race ./internal/...
```
Yes, and the race detector is awesome! To be clear, it doesn't do any static analysis of the code. It compiles a special instrumented version of the package, which keeps track at runtime of all of the read and write memory accesses of the current execution. Then, when the program (or the test suite) is run and a violation of the memory model happens, it detects and reports it.
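A toy illustration of that runtime behavior (a standalone sketch, not esbuild code): the instrumented build watches every memory access, so the racy function below is flagged when run with `go run -race`, while the atomic version is not.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// racyCount increments a plain int from many goroutines with no
// synchronization. Running this with -race produces a
// "WARNING: DATA RACE" report pointing at the counter++ line;
// without -race it may just silently lose increments.
func racyCount(n int) int {
	counter := 0
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter++ // unsynchronized read-modify-write
		}()
	}
	wg.Wait()
	return counter
}

// safeCount does the same work with an atomic add, so the instrumented
// build finds no conflicting accesses and the result is always n.
func safeCount(n int) int64 {
	var counter int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&counter, 1)
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	fmt.Println(safeCount(1000)) // prints 1000
	_ = racyCount                // call this under `go run -race` to see a report
}
```

Note that the detector only reports races that actually occur in a given execution, which is why running the tests under `-race` (as in the Makefile change above) is the usual way to exercise it.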
With this PR and the patch above I'm still seeing a test failure due to a race condition:

I'm using:
Just to be clear, this error no longer occurs on my machine with this PR:
The race detector does incur a large overhead in memory and compute. It may make sense to use …

The data we have about the current PR is positive: a sensible diff, it fixes the problem on my machine, and it passes the CI tests. Note that the current PR doesn't add any automated use of the race detector.
Edit: Never mind - resolved below.
The fatal error I'm seeing in … The stack trace in the link is identical to the one on my machine.

Edit 1: If I downgrade to …

Edit 2: Long story short, my local …
Thanks so much for finding this, and for the fix. Looks good to me.
Wait, actually, now that I look at this more, I'm confused about why this is an issue. Sorry for not taking a closer look first. The array should already be big enough, since there is this at the top:

```go
injectedFiles := make([]config.InjectedFile, 0, len(s.options.InjectedDefines)+len(s.options.InjectAbsPaths))
```

The fix in this PR isn't quite right because the …

Edit: Maybe the race is with the capacity field in the slice, and not with the data in the slice at all? Perhaps the bounds check in …

```diff
diff --git a/internal/bundler/bundler.go b/internal/bundler/bundler.go
index dac5dfee..30f902e1 100644
--- a/internal/bundler/bundler.go
+++ b/internal/bundler/bundler.go
@@ -1081,10 +1081,10 @@ func (s *scanner) preprocessInjectedFiles() {
 	// Wait for the results in parallel
 	injectWaitGroup.Add(1)
-	go func(i int, prettyPath string, resolveResult *resolver.ResolveResult) {
-		injectedFiles[i] = <-channel
+	go func(file *config.InjectedFile) {
+		*file = <-channel
 		injectWaitGroup.Done()
-	}(i, prettyPath, resolveResult)
+	}(&injectedFiles[i])
 }
 injectWaitGroup.Wait()
```
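For contrast, here is a minimal sketch of the pointer-to-element shape suggested in that diff (hypothetical names, not the actual esbuild code): the slice is allocated at its full length up front, each goroutine receives a pointer to its own element, and the shared slice header is never written after the goroutines start, so the accesses are disjoint and `-race` stays quiet.

```go
package main

import (
	"fmt"
	"sync"
)

// fillSquares writes results[i] = i*i from one goroutine per element.
// Because the slice already has its final length and each goroutine only
// dereferences its own element pointer, no two goroutines touch the same
// memory, and the slice header itself is only read.
func fillSquares(n int) []int {
	results := make([]int, n) // full length, not just capacity
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(slot *int, v int) { // same shape as the patch: pass &results[i]
			defer wg.Done()
			*slot = v * v
		}(&results[i], i)
	}
	wg.Wait() // every write happens-before this return
	return results
}

func main() {
	fmt.Println(fillSquares(5)) // prints [0 1 4 9 16]
}
```

The key design point is that `wg.Wait()` establishes a happens-before edge from all the goroutine writes to the subsequent reads, so the result can be used safely afterward.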
For the "why was this an issue" in the first place:
For the "leave holes in the array", I don't think we have them. Consider:
Ah, right, there are no holes because of the …
Fixes #871

I preserved the apparent semantics, i.e. the order in which results are appended to `InjectedFiles`.
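That "collect results, then append" ordering can be sketched as follows (hypothetical names, simplified from the description above): goroutines fill disjoint slots of a fixed-length scratch slice, and only after `Wait` does a single goroutine append everything, so the output order matches the input order regardless of goroutine scheduling.

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// collectThenAppend processes each input concurrently into its own slot,
// then appends the results sequentially in input order.
func collectThenAppend(inputs []string) []string {
	collected := make([]string, len(inputs)) // one slot per goroutine
	var wg sync.WaitGroup
	for i, in := range inputs {
		wg.Add(1)
		go func(i int, in string) {
			defer wg.Done()
			collected[i] = strings.ToUpper(in) // stand-in for the real work
		}(i, in)
	}
	wg.Wait()

	// Single-goroutine phase: append only happens here, so the slice
	// header is never mutated while other goroutines are running.
	var out []string
	for _, r := range collected {
		out = append(out, r)
	}
	return out
}

func main() {
	fmt.Println(collectThenAppend([]string{"a", "b", "c"})) // prints [A B C]
}
```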