fatal error: concurrent map writes #1605
@trim21 This can likely be fixed by adding a mutex.
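As a rough illustration of that suggestion (this is not Task's actual code; safeMap and its fields are made up for the example), guarding a shared map behind a sync.Mutex looks like this:

```go
package main

import (
	"fmt"
	"sync"
)

// safeMap is a made-up wrapper, not Task's actual type: every access to the
// underlying map goes through the mutex, so concurrent writers can no longer
// trigger "fatal error: concurrent map writes".
type safeMap struct {
	mu sync.Mutex
	m  map[string]string
}

func (s *safeMap) Set(k, v string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[k] = v
}

func main() {
	sm := &safeMap{m: make(map[string]string)}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			sm.Set(fmt.Sprintf("key-%d", i), "value")
		}(i)
	}
	wg.Wait()
	fmt.Println("all writes finished without a map fault")
}
```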
I'm using a Taskfile with glob sources and generates, but I'm not sure how to reproduce this.
I'm getting the same issue at the moment. It's been happening fairly often. It seems to happen less frequently when I limit the concurrency. I'm now experiencing it on a single task which runs a complicated nested loop in Bash.
Stack traces:
And of course, right after I posted #1605 (comment), the individual task succeeded on simply rerunning. 🙂
I'm getting this as well when running deps in parallel.
FYI, I think there may be a few different issues at play here. My particular issue seems to stem from some sort of non-parallel-compatibility in https://github.com/mvdan/sh. My issue also did not get resolved by the branch at #1701. This commit in my project (which reworked a complicated shell command I was using) seems to have resolved the concurrency issue for me. I haven't quite narrowed down exactly which shell functionality was causing the bug.
Also saw something similar with fatal error: concurrent map read and map write:

```
goroutine 205 [running]:
github.com/go-task/task/v3/internal/output.(*prefixWriter).writeLine(0x1400031cc80, {0x140001441e0, 0x26})
	/Users/chirino/go/pkg/mod/github.com/go-task/task/v3@v3.38.0/internal/output/prefixed.go:88 +0x9c
github.com/go-task/task/v3/internal/output.(*prefixWriter).writeOutputLines(0x1400031cc80, 0x0)
	/Users/chirino/go/pkg/mod/github.com/go-task/task/v3@v3.38.0/internal/output/prefixed.go:58 +0x70
github.com/go-task/task/v3/internal/output.(*prefixWriter).Write(0x1400031cc80, {0x14000668000?, 0x1400012cda8?, 0x104bf5afc?})
	/Users/chirino/go/pkg/mod/github.com/go-task/task/v3@v3.38.0/internal/output/prefixed.go:47 +0x50
```
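For context on the error class named in that trace: the Go runtime aborts with this fatal error whenever any plain map is accessed from multiple goroutines without synchronization. A tiny standalone program (unrelated to Task's code) will typically die the same way:

```go
package main

// Standalone illustration of the error class in the trace above, not Task's
// code: two goroutines writing to the same plain map make the Go runtime
// abort with "fatal error: concurrent map writes" (or "concurrent map read
// and map write" when a reader races a writer).
func main() {
	m := make(map[int]int)
	done := make(chan struct{}, 2)
	for g := 0; g < 2; g++ {
		go func() {
			for i := 0; i < 1000000; i++ {
				m[i] = i // unsynchronized write; the runtime usually detects this and aborts
			}
			done <- struct{}{}
		}()
	}
	<-done
	<-done
}
```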
I just got this as well. Also using output: prefixed, if that is relevant. Running some tasks in parallel via deps:

```
mhm@mhm:~/Documents/asset-search$ task --dry deploy:development
task: [install] pipenv sync --dev
task: [test] pipenv run pytest -s -v
task: [docker:harbor-login] docker login harbor.one.com
task: Task "docker:build-frontend" is up to date
task: Task "docker:build-backend" is up to date
task: [docker:push] docker push backend:latest
task: [docker:push] docker push frontend:latest
task: [kubernetes:cluster] kubectl config use-context $(whoami)-wip1-k8s-cph3
task: [kubernetes:namespace] kubectl config set-context --current --namespace=playground-mhm
task: Task "kubernetes:apply" is up to date
task: [kubernetes:restart] kubectl rollout restart deployment wip1-asset-search-backend
task: [kubernetes:restart] kubectl rollout restart deployment wip1-asset-search-frontend
```

I get the error consistently. Moving everything to cmds makes it run in sequence, and then it works. However, the run time goes from around 2 seconds to around 14 seconds. Not the end of the world in this case, though.
Can anybody here who is willing and able to test this see if the changes in #1972 fix their issues? Unfortunately, this is difficult to reproduce, so your help would be appreciated. If you're installing Task using Go, these changes can be installed by running go install against that branch.
This doesn't fix my issue. With the version of task above, I see a crash like:
There are two map data race issues here: the ordered map and the prefix writer.
I've created #1974 to resolve the prefix writer concurrency issue.
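The actual change in #1974 isn't reproduced here, but as a rough sketch of the idea (all names below are made up for illustration), serializing a prefixing writer behind a mutex looks something like this:

```go
package main

import (
	"fmt"
	"os"
	"strings"
	"sync"
)

// lockedPrefixWriter is an illustrative sketch, not the real prefixWriter
// touched by #1974: it prefixes each line and serializes Write calls (and any
// internal bookkeeping) behind a mutex so concurrent tasks can share it safely.
type lockedPrefixWriter struct {
	mu     sync.Mutex
	prefix string
}

func (w *lockedPrefixWriter) Write(p []byte) (int, error) {
	w.mu.Lock()
	defer w.mu.Unlock()
	for _, line := range strings.Split(strings.TrimRight(string(p), "\n"), "\n") {
		fmt.Fprintf(os.Stdout, "[%s] %s\n", w.prefix, line)
	}
	return len(p), nil
}

func main() {
	w := &lockedPrefixWriter{prefix: "task"}
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			fmt.Fprintf(w, "output from goroutine %d\n", i)
		}(i)
	}
	wg.Wait()
}
```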
Please try the branch in #1701 if you encounter this issue.

I'm running the job with --watch.