Deadlock when using invalid local prefix path #13

Open
nick-jones opened this issue May 26, 2020 · 1 comment
nick-jones (Contributor)

I noticed whilst testing changes over the weekend that a deadlock can occur when a local prefix is supplied that doesn't match any import (I'm unsure of the exact cause at this stage). This occurs on v0.0.1, so it isn't linked to any recent changes. I'm happy to take a look at some point; just raising it for visibility for now.

$ /Users/nicholas/Dev/impi/cmd/impi/impi --local github.com/invalid/prefix --scheme stdThirdPartyLocal --ignore-generated=true ./...
fatal error: all goroutines are asleep - deadlock!

goroutine 1 [chan send]:
github.com/pavius/impi.(*Impi).addFilePathToFilePathsChan(0xc00004a3c0, 0xc0004106c0, 0x25)
	/Users/nicholas/Dev/impi/impi.go:254 +0x1df
github.com/pavius/impi.(*Impi).populatePathsChan(0xc00004a3c0, 0x7ffeefbffa8d, 0x5, 0x0, 0x135a008)
	/Users/nicholas/Dev/impi/impi.go:144 +0x29b
github.com/pavius/impi.(*Impi).Verify(0xc00004a3c0, 0x7ffeefbffa8d, 0x5, 0xc00004a400, 0x11d1020, 0x1306c10, 0x15, 0xc0000403e0)
	/Users/nicholas/Dev/impi/impi.go:105 +0x162
main.run(0x17, 0xc000097f58)
	/Users/nicholas/Dev/impi/cmd/impi/main.go:70 +0x405
main.main()
	/Users/nicholas/Dev/impi/cmd/impi/main.go:92 +0x41

goroutine 4 [chan send]:
github.com/pavius/impi.(*Impi).verifyPathsFromChan(0xc00004a3c0, 0x118a660, 0x60)
	/Users/nicholas/Dev/impi/impi.go:215 +0x18b
created by github.com/pavius/impi.(*Impi).createWorkers
	/Users/nicholas/Dev/impi/impi.go:184 +0x48

goroutine 5 [chan send]:
github.com/pavius/impi.(*Impi).verifyPathsFromChan(0xc00004a3c0, 0x118a660, 0x60)
	/Users/nicholas/Dev/impi/impi.go:215 +0x18b
created by github.com/pavius/impi.(*Impi).createWorkers
	/Users/nicholas/Dev/impi/impi.go:184 +0x48

goroutine 6 [chan send]:
github.com/pavius/impi.(*Impi).verifyPathsFromChan(0xc00004a3c0, 0x118a660, 0x60)
	/Users/nicholas/Dev/impi/impi.go:215 +0x18b
created by github.com/pavius/impi.(*Impi).createWorkers
	/Users/nicholas/Dev/impi/impi.go:184 +0x48

goroutine 7 [chan send]:
github.com/pavius/impi.(*Impi).verifyPathsFromChan(0xc00004a3c0, 0x118a660, 0x60)
	/Users/nicholas/Dev/impi/impi.go:215 +0x18b
created by github.com/pavius/impi.(*Impi).createWorkers
	/Users/nicholas/Dev/impi/impi.go:184 +0x48
nick-jones commented May 26, 2020

From a quick look, I believe the cause is that resultChan fills to capacity before waitWorkerCompletion is invoked (I'm running this against a large repo).

nick-jones added a commit to utilitywarehouse/impi that referenced this issue May 27, 2020
Deadlock was possible before because workers were spun up prior to
the results channel being read. Using a buffered channel mitigates
this to some extent, but it's still possible to trigger on large
codebases.

Realistically this needs to be reading the results channel in
advance of the workers being spun up. This can end up getting quite
complicated, as this means spinning up and coordinating another
goroutine.

Wait groups are quite handy for this - in particular the
error-handling errgroup is ideal. I've adjusted the code to spin up
the necessary goroutines in order. The workers operate on a separate
errgroup - this might seem quirky, but IMO it's the easiest way to
deal with the need to close the results channel when the workers
are all done (if they were attached to the main group this would
be a logistical issue: something would need to know that all
workers have stopped).

NB I relocated the channels from struct-level fields to variables
that are passed around. Happy to change this back; I just feel it's
a little clearer to work with, particularly given the channels
end up being closed (which previously would have meant calling
`Impi.Verify()` twice results in an error).

Resolves pavius#13