[Bug]: (logging) fatal error: concurrent map writes (Error: Plugin did not respond) #29236
Comments
Similar: #29153.
I suspect that I can reproduce the issue. In case it is helpful, here is a complete (~4100 line) trace: https://gist.github.com/YakDriver/66b508dab3342da50b2da75f4b17a73a
```
fatal error: concurrent map writes
fatal error: concurrent map writes
fatal error: concurrent map writes

Error: Plugin did not respond
```
Can confirm the same issue, and it occurs when destroying security groups on Terraform 0.12.31 with AWS provider 4.53.0. It looks like the tf-plugin-log lib had a new release in the past hour, v0.8.0, which includes a bugfix: "tflog+tflogsdk: Prevented data race conditions when using SetField and other option functions (hashicorp/terraform-plugin-log#132)".

Edit: this has just failed on another module (we have several dozen of these), and again the failure was on destroying a security group. It doesn't seem to be failing on anything else for me.

Edit 2: I can also confirm 4.52.0 does not have this issue.
Based on @bflad's comment at hashicorp/terraform-plugin-log#126 (comment), I'm assuming this is indeed related to that fix, in which case bumping that dependency from 0.7.0 to 0.8.0 should presumably fix this?
I am also running into this. It's happening primarily when we are running a destroy workflow to clean up feature stack resources. The error occurs while waiting for our RDS cluster to be destroyed. If I re-run the workflow job again right away, it continues to error. Since the RDS database takes about 10-15 minutes to destroy, I've found that if I come back and re-run the workflow after I know the database is destroyed, the error goes away. |
I've experienced a similar situation when destroying Lambdas and their associated security groups. Normally, the Terraform run waits for the NICs attached to the Lambdas to be destroyed, then destroys the security group. The plugin now fails with this error. I reverted back to the version I used prior to upgrading.
Just ran into the same issue today. Added a terraform.tf to the project:

```hcl
aws = {
  source  = "hashicorp/aws"
  version = ">= 4.52.0, != 4.53.0"
}
```

Skipping 4.53.0 for now.
This required an upstream fix (hashicorp/terraform-plugin-log#132), which will be part of v4.54.0. We are very sorry for the inconvenience! (It also greatly impacted us internally 😞.) Fixed with the merged new version of the upstream dependency.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Previously titled: "[Bug]: The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call."
I'm changing the title because at this stage I believe it is misleading, since any (?) error may cause this crash.
See #29153
Terraform Core Version
v1.3.7
AWS Provider Version
v4.53.0
Affected Resource(s)
All resources
Expected Behavior
Resources are created, updated, or deleted successfully.
Actual Behavior
Terraform crashes randomly.
Relevant Error/Panic Output Snippet
Terraform Configuration Files
N/A
Steps to Reproduce
Debug Output
No response
Panic Output
Important Factoids
Reverting to v4.52.0 is a workaround ✅
References
fatal error: out of memory (Error: Plugin did not respond) #29153

Would you like to implement a fix?

None