storagegateway: Add cached iSCSI volume resource, retry on busy connection, fix cache resource updating disk ID #5476
Conversation
…ction, fix cache resource updating disk ID

* New Resource: aws_storagegateway_cached_iscsi_volume
* storagegatewayconn: Retry on "The specified gateway proxy network connection is busy" errors
* resource/aws_storagegateway_cache: Ensure resource updates disk ID after creation due to disk ID relabeling

Previously:

=== RUN   TestAccAWSStorageGatewayCache_TapeAndVolumeGateway
--- FAIL: TestAccAWSStorageGatewayCache_TapeAndVolumeGateway (301.11s)
    testing.go:518: Step 0 error: Check failed: Check 1/3 error: Not found: aws_storagegateway_cache.test

Output from acceptance testing:

=== RUN   TestAccAWSStorageGatewayCache_TapeAndVolumeGateway
--- PASS: TestAccAWSStorageGatewayCache_TapeAndVolumeGateway (302.07s)
=== RUN   TestAccAWSStorageGatewayCachedIscsiVolume_Basic
--- PASS: TestAccAWSStorageGatewayCachedIscsiVolume_Basic (306.42s)
=== RUN   TestAccAWSStorageGatewayCachedIscsiVolume_SnapshotId
--- PASS: TestAccAWSStorageGatewayCachedIscsiVolume_SnapshotId (301.38s)
=== RUN   TestAccAWSStorageGatewayCachedIscsiVolume_SourceVolumeArn
--- SKIP: TestAccAWSStorageGatewayCachedIscsiVolume_SourceVolumeArn (0.00s)
    resource_aws_storagegateway_cached_iscsi_volume_test.go:146: This test can cause Storage Gateway 2.0.10.0 to enter an irrecoverable state during volume deletion.
log.Printf("[DEBUG] Reading Storage Gateway Local Disk: %s", listLocalDisksInput)
output, err := conn.ListLocalDisks(listLocalDisksInput)
if err != nil {
	return fmt.Errorf("error reading Storage Gateway Local Disk: %s", err)
}
Should we be doing `return` here?

I'm lacking some context, so bear with me. We don't seem to record the results of `AddCache`. What is created? Do we need to track it?

If `conn.AddCache(input)` above is successful, Terraform has created something. If this call errors for whatever reason, valid or not, and we return before calling `d.SetId`, Terraform quits and we've lost track of whatever we created.

Again, lacking context, sorry 😰
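To make the concern concrete, here is a minimal sketch of the create flow under discussion (the function name, schema attributes, and ID format are illustrative assumptions, not the provider's exact code); the error return sits between the AddCache call that allocates the cache and the d.SetId call that would record it:

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/storagegateway"
	"github.com/hashicorp/terraform/helper/schema"
)

// resourceCacheCreateSketch illustrates the concern: AddCache allocates the
// disk as cache on the gateway, but an error from the follow-up
// ListLocalDisks call returns before d.SetId runs, so Terraform never
// records the cache that was already created.
func resourceCacheCreateSketch(d *schema.ResourceData, conn *storagegateway.StorageGateway) error {
	gatewayARN := d.Get("gateway_arn").(string)
	diskID := d.Get("disk_id").(string)

	// This call creates something on the gateway.
	_, err := conn.AddCache(&storagegateway.AddCacheInput{
		DiskIds:    []*string{aws.String(diskID)},
		GatewayARN: aws.String(gatewayARN),
	})
	if err != nil {
		return fmt.Errorf("error adding Storage Gateway cache: %s", err)
	}

	// If this refresh errors, we return before d.SetId below and Terraform
	// quits without tracking the cache that AddCache already allocated.
	_, err = conn.ListLocalDisks(&storagegateway.ListLocalDisksInput{
		GatewayARN: aws.String(gatewayARN),
	})
	if err != nil {
		return fmt.Errorf("error reading Storage Gateway Local Disk: %s", err)
	}

	// Never reached on the error path above: the resource is orphaned.
	d.SetId(fmt.Sprintf("%s:%s", gatewayARN, diskID))
	return nil
}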
Sorry if my comment note wasn't clear enough. 😅

We're basically allocating a disk for a specified usage (cache) within a gateway (e.g. an EC2 instance). When allocating said disk, the gateway (sometimes, but annoyingly) changes the disk identifier. Without refreshing the disks to try and find the new disk identifier, Terraform cannot track the resource, as the cache is now based on the new disk identifier and there's no way to map between the old and new identifier.

You do make a good point, though, that it should probably still call `d.SetId()` with the potentially wrong identifier before this API call, just in case the call errors and the disk identifier doesn't actually change. I'll push up a commit momentarily.
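Below is a hedged sketch of that ordering (not the actual commit; the disk_node attribute, matching on DiskNode to re-find a relabeled disk, and the ID format are assumptions for illustration). It records an ID immediately after creation, then refreshes local disks and overwrites the ID only if the gateway relabeled the disk:

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/storagegateway"
	"github.com/hashicorp/terraform/helper/schema"
)

// resourceCacheCreateFixedSketch sets the resource ID right after AddCache
// succeeds, so the resource is tracked even if the refresh below errors;
// worst case the state holds the pre-relabel disk ID instead of nothing.
func resourceCacheCreateFixedSketch(d *schema.ResourceData, conn *storagegateway.StorageGateway) error {
	gatewayARN := d.Get("gateway_arn").(string)
	diskID := d.Get("disk_id").(string)
	diskNode := d.Get("disk_node").(string) // assumed attribute that survives relabeling

	_, err := conn.AddCache(&storagegateway.AddCacheInput{
		DiskIds:    []*string{aws.String(diskID)},
		GatewayARN: aws.String(gatewayARN),
	})
	if err != nil {
		return fmt.Errorf("error adding Storage Gateway cache: %s", err)
	}

	// Track the resource before any further, potentially failing API calls.
	d.SetId(fmt.Sprintf("%s:%s", gatewayARN, diskID))

	output, err := conn.ListLocalDisks(&storagegateway.ListLocalDisksInput{
		GatewayARN: aws.String(gatewayARN),
	})
	if err != nil {
		return fmt.Errorf("error reading Storage Gateway Local Disk: %s", err)
	}

	// If the gateway relabeled the disk, pick up the new identifier.
	for _, disk := range output.Disks {
		if aws.StringValue(disk.DiskNode) == diskNode {
			d.SetId(fmt.Sprintf("%s:%s", gatewayARN, aws.StringValue(disk.DiskId)))
			break
		}
	}
	return nil
}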
That clarifies, thanks!
…reation but before potentially failing ListLocalDisk refresh
👍 looks good to me
This has been released in version 1.32.0 of the AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
Reference #943
Sorry for bundling these changes together, but it was the creation of this resource and its acceptance testing that highlighted these Storage Gateway API issues.
Changes proposed in this pull request:

* New resource: aws_storagegateway_cached_iscsi_volume
* Retry on `InvalidGatewayRequestException: The specified gateway proxy network connection is busy.` errors, which occur when (near) concurrent operations are performed on a gateway (this affects the aws-storage-gateway-2.0.10.0 AMI for tape/volume gateways, but not the aws-thinstaller AMI for file gateways); see the retry sketch after this list
* Fix the aws_storagegateway_cache resource to update its disk ID after creation
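The retry sketch referenced in the list above (the helper name and two-minute timeout are assumptions; this shows one way to classify the error as retryable, not necessarily how the provider wires it in):

package main

import (
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/hashicorp/terraform/helper/resource"
)

// retryOnBusyGatewayProxy retries a Storage Gateway call while it fails with
// the busy gateway proxy network connection error, which surfaces during
// (near) concurrent operations on a gateway.
func retryOnBusyGatewayProxy(call func() error) error {
	return resource.Retry(2*time.Minute, func() *resource.RetryError {
		err := call()
		if err == nil {
			return nil
		}
		if awsErr, ok := err.(awserr.Error); ok &&
			awsErr.Code() == "InvalidGatewayRequestException" &&
			strings.Contains(awsErr.Message(), "The specified gateway proxy network connection is busy") {
			// Transient contention on the gateway proxy: try again.
			return resource.RetryableError(err)
		}
		return resource.NonRetryableError(err)
	})
}

A caller would wrap individual API calls, for example: `err := retryOnBusyGatewayProxy(func() error { _, err := conn.AddCache(input); return err })`.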
Before aws_storagegateway_cache code update:

(failing TestAccAWSStorageGatewayCache_TapeAndVolumeGateway run; see the merge commit message above)

Output from acceptance testing:

(passing test runs; see the merge commit message above)