0.9.2 panic instead of cycle error #13565

Closed
gerr1t opened this issue Apr 12, 2017 · 4 comments · Fixed by #13665

@gerr1t
Contributor

gerr1t commented Apr 12, 2017

I just upgraded my Terraform plans from 0.8.2 to 0.9.2. In 0.8.2 I encountered a cycle error when running a destroy; in 0.9.2, however, the cycle error is replaced with a panic.

Terraform Version

Terraform v0.9.2

Panic Output

Crash.log: https://gist.github.com/gerr1t/eea1fdc318f711dff17f3f98cd2edcfd

Releasing state lock. This may take a few moments...
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x48 pc=0x159f196]

goroutine 1957 [running]:
github.com/hashicorp/terraform/terraform.(*State).sort(0x0)
	/opt/gopath/src/github.com/hashicorp/terraform/terraform/state.go:745 +0x26
github.com/hashicorp/terraform/terraform.WriteState(0x0, 0x8050700, 0xc4201052d0, 0x1038a15, 0xc420cbbd10)
	/opt/gopath/src/github.com/hashicorp/terraform/terraform/state.go:2000 +0x40
github.com/hashicorp/terraform/state/remote.(*State).PersistState(0xc420750060, 0x0, 0x0)
	/opt/gopath/src/github.com/hashicorp/terraform/state/remote/state.go:58 +0x79
github.com/hashicorp/terraform/backend/local.(*Local).opApply(0xc4201e48c0, 0x8075f80, 0xc42076a7c0, 0xc4201fc790, 0xc420831590)
	/opt/gopath/src/github.com/hashicorp/terraform/backend/local/backend_apply.go:138 +0x5d9
github.com/hashicorp/terraform/backend/local.(*Local).(github.com/hashicorp/terraform/backend/local.opApply)-fm(0x8075f80, 0xc42076a7c0, 0xc4201fc790, 0xc420831590)
	/opt/gopath/src/github.com/hashicorp/terraform/backend/local/backend.go:226 +0x52
github.com/hashicorp/terraform/backend/local.(*Local).Operation.func1(0xc4201e48c0, 0xc42064cb00, 0xc42064caf0, 0x8075f80, 0xc42076a7c0, 0xc4201fc790, 0xc420831590)
	/opt/gopath/src/github.com/hashicorp/terraform/backend/local/backend.go:246 +0x9a
created by github.com/hashicorp/terraform/backend/local.(*Local).Operation
	/opt/gopath/src/github.com/hashicorp/terraform/backend/local/backend.go:247 +0x192
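The `0x0` receiver in the first two frames shows that `WriteState` was handed a nil `*terraform.State` and then called `sort()` on it. In Go, calling a pointer-receiver method on a nil pointer compiles and runs, and only panics when the method dereferences the receiver. A minimal, self-contained illustration of that pattern (the struct and field names below are placeholders, not Terraform's real types):

```go
package main

import "fmt"

// Placeholder stand-in for terraform.State; the real type lives in
// terraform/state.go and has more fields.
type State struct {
	Modules []string
}

// Calling sort on a nil *State compiles, but the field access inside
// dereferences the nil receiver and panics, as in the trace above.
func (s *State) sort() {
	_ = len(s.Modules)
}

func main() {
	var s *State // nil, matching the 0x0 argument in the crash log
	defer func() {
		// prints: runtime error: invalid memory address or nil pointer dereference
		fmt.Println("recovered:", recover())
	}()
	s.sort()
}
```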

Expected Behavior

Either the cycle error should be gone, or Terraform should present me with the cycle error instead of panicking.

Actual Behavior

A panic.

@jbardin
Member

jbardin commented Apr 14, 2017

Looks like the apply operation is returning a nil state during destroy, which panics in WriteState.
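A hedged sketch of one way that path could fail gracefully. This is illustrative only, not the change actually merged in #13665, and the types are simplified stand-ins with the signature modeled on the trace above:

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"os"
)

// Simplified stand-in for terraform.State.
type State struct{ Modules []string }

func (s *State) sort() { _ = len(s.Modules) }

// Hypothetical guard: checking for a nil state at the top of WriteState
// returns an error that callers (remote.State.PersistState, the local
// backend's opApply) can report instead of panicking inside sort().
func WriteState(d *State, dst io.Writer) error {
	if d == nil {
		return errors.New("cannot write nil state")
	}
	d.sort() // safe: d is non-nil here
	// ... the real implementation serializes d to dst ...
	return nil
}

func main() {
	if err := WriteState(nil, os.Stdout); err != nil {
		fmt.Println("error:", err) // error: cannot write nil state
	}
}
```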

@jbardin
Member

jbardin commented Apr 14, 2017

@gerr1t,

I have a PR pending that will hopefully fix the crash you're seeing, but I haven't been able to reproduce your specific case yet. I have a feeling that you'll still have an error, probably the same cycle error, after this fix. If that's the case, it would be great if you could open a new issue with any info on how to reproduce the graph cycle on destroy. Thanks!

@gerr1t
Contributor Author

gerr1t commented Apr 17, 2017

@jbardin Thanks! My expectation isn't that the cycle error itself is fixed, but that the error is caught without causing a panic. It seems like you fixed exactly that, perfect!

@ghost

ghost commented Apr 13, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited conversation to collaborators Apr 13, 2020