provider 2.20.0 -> 3.40.0 google_compute_instance_template scratch disk validation anomalies #7341
Comments
@nyc3-mikekot Below are the disk nodes from the API responses. The first one is the old node, created when the instance template was made without specifying a size, while the latter was created with v3.40.0. Clearly the Cloud does not return a diskSizeGb attribute for the old one (even though the GCP Console shows "375"), so there is always a diff in disk_size_gb. To solve this problem, we need the API to return a disk node that reflects the actual value of disk_size_gb.
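Simplified sketches of the two shapes (not the exact payloads), for illustration; the difference is whether the node carries diskSizeGb at all:

```
# node from the template created with 2.20.0 (no size specified): diskSizeGb is absent
{
  "type": "SCRATCH",
  "interface": "NVME",
  "autoDelete": true
}

# node from a template created with v3.40.0: diskSizeGb is present
{
  "type": "SCRATCH",
  "interface": "NVME",
  "autoDelete": true,
  "diskSizeGb": "375"
}
```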
@edwardmedia So would something on the API end have to be changed to remediate this, with nothing that can be done at the provider level? On the Terraform end, wouldn't it be possible to relax the validation slightly for when disk_size_gb is 0 and default it to 375 somehow? Scratch disks can ONLY be 375 GB anyway, so it seems superfluous to offer the option of passing in a value for disk_size_gb at all.
Here is the link to the specific error validation in the source code:
@nyc3-mikekot Ideally it would be a change on the API end, but we can plan on a provider-side fix as well.
I think either option would be good enough (at least in terms of a resolution). Is the only other workaround at this point to simply recreate the resource altogether using the latest provider? Additionally, sorry if this seems pressing, but is there any ETA on when the fix could go live?
@nyc3-mikekot I have a PR out to assume a value of 375 GB when the API doesn't return a value for scratch disks. v3.41 is already cut and scheduled for 9/28, so the next available release with this fix will be v3.42 scheduled for 10/5.
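The gist of that change, sketched below; this is illustrative only, not the actual provider code:

```go
package main

import "fmt"

// Sketch of the fix described above (illustrative, not the provider's real
// implementation): when the API response omits a size for a SCRATCH disk,
// assume the only legal size, 375 GB, so the stored value no longer drifts to 0.
func scratchDiskSizeGb(apiSizeGb int64, diskType string) int64 {
	if diskType == "SCRATCH" && apiSizeGb == 0 {
		return 375
	}
	return apiSizeGb
}

func main() {
	fmt.Println(scratchDiskSizeGb(0, "SCRATCH"))      // prints 375
	fmt.Println(scratchDiskSizeGb(100, "PERSISTENT")) // prints 100
}
```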
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
Community Note
If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform version: 0.12.18
Original Google provider version used: 2.20.0; attempting to upgrade to 3.40.0
Here is the google_compute_instance_template resource example; it is slightly modified, but I think it should still work for testing purposes. Create it with 2.20.0, and I expect disk_size_gb to be reported as 0 for all of the scratch disks. Once switching over to 3.40.0, running another plan will fail since 0 is an invalid entry, and once you uncomment the disk_size_gb = 375 line, you will see plans that say something like ~ disk_size_gb = 0 -> 375 # forces replacement.
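A minimal sketch of such a template (the name, image, machine type, and network are illustrative assumptions; the commented disk_size_gb lines are the ones referred to above):

```hcl
resource "google_compute_instance_template" "example" {
  name_prefix  = "scratch-disk-example-"
  machine_type = "n1-standard-16"

  # Boot disk
  disk {
    source_image = "debian-cloud/debian-10"
    boot         = true
    auto_delete  = true
  }

  # Scratch (local SSD) disks, originally created with 2.20.0 without a size.
  disk {
    type        = "SCRATCH"
    disk_type   = "local-ssd"
    interface   = "NVME"
    auto_delete = true
    # disk_size_gb = 375
  }

  disk {
    type        = "SCRATCH"
    disk_type   = "local-ssd"
    interface   = "NVME"
    auto_delete = true
    # disk_size_gb = 375
  }

  network_interface {
    network = "default"
  }

  lifecycle {
    create_before_destroy = true
  }
}
```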
Please let me know if there's any more information you'd like me to add, or if anything I've mentioned previously does not make sense. Additionally, I have tried editing the state file with these steps:
1. Edited the state file by hand, setting disk_size_gb to 375 for the scratch disks.
2. Ran terraform state show on the instance template in question; disk_size_gb was reported as 375.
3. Ran terraform refresh to reconcile the real world with what the state file is showing, since I had modified it.
4. Ran terraform state show (same as step 2) on the instance template again, and the values had changed back to 0.

Example plan:
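An illustrative excerpt of what that plan looks like (the resource address and surrounding detail are assumptions; the relevant line is the disk_size_gb change):

```
  # google_compute_instance_template.example must be replaced
-/+ resource "google_compute_instance_template" "example" {
      ~ disk {
          ~ disk_size_gb = 0 -> 375 # forces replacement
            interface    = "NVME"
            type         = "SCRATCH"
            # (other disk attributes unchanged)
        }
      # (other template attributes unchanged)
    }

Plan: 1 to add, 0 to change, 1 to destroy.
```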