
AWS: diffs didn't match during apply. This is a bug with Terraform and should be reported as a GitHub Issue #7963

Closed
ghost opened this issue Mar 15, 2019 · 6 comments · Fixed by #7982
Labels
bug Addresses a defect in current functionality.


ghost commented Mar 15, 2019

This issue was originally opened by @iloveicedgreentea as hashicorp/terraform#20710. It was migrated here as a result of the provider split. The original body of the issue is below.


Please include the following information in your report:

```
Resource ID: aws_cloudwatch_metric_alarm.fleet-scale-down
Mismatch reason: attribute mismatch: alarm_actions.1437824865
Diff One (usually from plan): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"alarm_actions.1437824865":*terraform.ResourceAttrDiff{Old:"", New:"arn:aws:autoscaling:us-east-1:REDACTED:scalingPolicy:7f2a81b4-fb1d-440a-9d81-fcc9a2bdf4e6:resource/ec2/spot-fleet-request/sfr-7b43f06a-49dc-4edd-8029-374e3ed1a2ca:policyName/scale-down-staging", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "alarm_actions.#":*terraform.ResourceAttrDiff{Old:"0", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)}
Diff Two (usually from apply): *terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{"alarm_actions.#":*terraform.ResourceAttrDiff{Old:"0", New:"1", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, "alarm_actions.1809487444":*terraform.ResourceAttrDiff{Old:"", New:"arn:aws:autoscaling:us-east-1:REDACTED:scalingPolicy:ba5ae522-5db0-4f83-96f9-d6570b59df71:resource/ec2/spot-fleet-request/sfr-02fbb2d2-c36a-48e7-ae43-3a498bf24324:policyName/scale-down-staging", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, Meta:map[string]interface {}(nil)}
```

Terraform provider versions:

```
* provider.archive: version = "~> 1.1"
* provider.aws: version = "~> 2.1"
* provider.template: version = "~> 2.1"
```

I noticed that when Terraform recreates these resources due to the fleet ID changing, it changes the alarm actions from 0 to 1 elements during apply; it should just not do that.

Deleting the resource had no effect. I believe I will need to manually remove it from the state as it created fine the first time.
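
For context, a minimal sketch of the configuration pattern that triggers this failure mode (all resource names and the step-scaling details here are hypothetical, not taken from the reporter's configuration): `resource_id` is interpolated directly from the spot fleet request rather than from the `aws_appautoscaling_target` attributes, so when the fleet is recreated the policy plans as an in-place `resource_id` update instead of a recreation.

```hcl
# Hypothetical configuration illustrating the bug: resource_id is built
# directly from the spot fleet request, so a fleet recreation only updates
# resource_id in place on the policy instead of forcing a new policy.
resource "aws_appautoscaling_target" "fleet" {
  max_capacity       = 5
  min_capacity       = 1
  resource_id        = "spot-fleet-request/${aws_spot_fleet_request.example.id}"
  scalable_dimension = "ec2:spot-fleet-request:TargetCapacity"
  service_namespace  = "ec2"
}

resource "aws_appautoscaling_policy" "scale_down" {
  name               = "scale-down-staging"
  policy_type        = "StepScaling"
  resource_id        = "spot-fleet-request/${aws_spot_fleet_request.example.id}"
  scalable_dimension = "ec2:spot-fleet-request:TargetCapacity"
  service_namespace  = "ec2"
  # ... step_adjustment configuration ...
}

resource "aws_cloudwatch_metric_alarm" "fleet_scale_down" {
  # ... other configuration ...
  alarm_actions = ["${aws_appautoscaling_policy.scale_down.arn}"]
}
```

The downstream `alarm_actions` reference to the policy's computed `arn` is what surfaces as the "diffs didn't match during apply" error.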

@apparentlymart (Contributor)

(More details/investigation in the comments on hashicorp/terraform#20710.)

@apparentlymart apparentlymart added bug Addresses a defect in current functionality. service/applicationautoscaling labels Mar 15, 2019
bflad (Contributor) commented Mar 17, 2019

Thanks so much @apparentlymart 👍 Looks like we are missing `ForceNew: true` on the `resource_id` argument. This leaves the potential for dangling policies when only `resource_id` changes (the policy on the old target remains), in addition to introducing the mismatched attributes during apply. 😖

To further help here, I am also going to update our `aws_appautoscaling_policy` resource documentation to always use the pattern of referencing the `aws_appautoscaling_target` attributes of the same names. This provides a workaround for this issue and generally makes these configurations easier to write, since policies are tightly coupled to their targets anyway.

e.g.

```hcl
resource "aws_appautoscaling_target" "example" {
  # ... other configuration ...
}

resource "aws_appautoscaling_policy" "example" {
  # ... other configuration ...

  resource_id        = "${aws_appautoscaling_target.example.resource_id}"
  scalable_dimension = "${aws_appautoscaling_target.example.scalable_dimension}"
  service_namespace  = "${aws_appautoscaling_target.example.service_namespace}"
}
```

bflad added a commit that referenced this issue Mar 17, 2019
…d` updates and ignore `ObjectNotFoundException` on deletion

References:

* #7963
* #5747
* #538
* #486
* #427
* #404

Previously the documentation recommended an ECS setup that used `depends_on` combined with an updateable `resource_id` attribute, which could introduce very subtle bugs in the operation of the `aws_appautoscaling_policy` resource when either the underlying Application AutoScaling Target or the target resource (e.g. an ECS service) was updated or recreated.

Given the scenario with an `aws_appautoscaling_policy` configuration:

* No direct attribute references to its `aws_appautoscaling_target` parent (usage with or without `depends_on` is inconsequential, except that without it Terraform would generate errors that the target does not exist, due to the lack of proper ordering)
* `resource_id` directly references the target resource (e.g. an ECS service)
* The underlying `resource_id` target resource (e.g. an ECS service) is pointed to a new location or the resource is recreated

The `aws_appautoscaling_policy` resource would plan as a resource update of just the `resource_id` attribute instead of a resource recreation. Several consequences could occur in this situation depending on the exact ordering and Terraform configuration:

* Since the Application AutoScaling Policy API only supports a `PUT` type operation for creation and update, a new policy would create successfully (given the Application AutoScaling Target was already in place), hiding any coding errors that might have been found if it was attempting to update a non-created policy
* Usage of only `depends_on` to reference the Application AutoScaling Target could miss creating the Application AutoScaling Policy in a single apply since `depends_on` is purely for ordering
* The lack of Application AutoScaling Policy deletion could leave dangling policies on the previous Application AutoScaling Target unless it was updated (which correctly recreates the resource in Terraform) or otherwise deleted
* The Terraform resource would not know to properly update the value of other computed attributes during plan, such as `arn`, potentially only noticing these attribute values as a new applied value different from the planned value

These situations could surface as Terraform bugs in multiple ways:

* In friendlier cases, a second apply would be required to create the missing policy or update downstream computed references
* In worse cases, Terraform would report errors (depending on the Terraform version) such as `Resource 'aws_appautoscaling_policy.example' does not have attribute 'arn'` and `diffs didn't match during apply` for downstream attribute references to those computed attributes

To prevent these situations, the `ResourceId` of the Application AutoScaling Policy needs to be treated as part of the API object ID, similar to Application AutoScaling Targets, and marked `ForceNew: true` in the Terraform resource schema. We also ensure the documentation examples always recommend direct references to the upstream `aws_appautoscaling_target` instead of using `depends_on`, so Terraform properly handles recreations when necessary, e.g.

```hcl
resource "aws_appautoscaling_target" "example" {
  # ... other configuration ...
}

resource "aws_appautoscaling_policy" "example" {
  # ... other configuration ...

  resource_id        = "${aws_appautoscaling_target.example.resource_id}"
  scalable_dimension = "${aws_appautoscaling_target.example.scalable_dimension}"
  service_namespace  = "${aws_appautoscaling_target.example.service_namespace}"
}
```

During research of this bug, it was also similarly discovered that the `aws_appautoscaling_policy` resource did not gracefully handle external deletions of the Application AutoScaling Policy without a refresh or potential deletion race conditions with the following error:

```
ObjectNotFoundException: No scaling policy found for service namespace: ecs, resource ID: service/tf-acc-test-9190521664283069857/tf-acc-test-9190521664283069857, scalable dimension: ecs:service:DesiredCount, policy name: tf-acc-test-9190521664283069857
```

We now ignore this potential error on deletion as part of the comprehensive solution for ensuring resource recreations are successful.

Output from acceptance testing before code update:

```
--- FAIL: TestAccAWSAppautoScalingPolicy_ResourceId_ForceNew (54.69s)
    testing.go:538: Step 1 error: After applying this step, the plan was not empty:

        DIFF:

        UPDATE: aws_cloudwatch_metric_alarm.test
          alarm_actions.3359603714: "arn:aws:autoscaling:us-west-2:--OMITTED--:scalingPolicy:065d03ea-a7a4-4047-9a43-c92ec1871170:resource/ecs/service/tf-acc-test-2456603151506624334/tf-acc-test-2456603151506624334-1:policyName/tf-acc-test-2456603151506624334" => ""
          alarm_actions.4257611624: "" => "arn:aws:autoscaling:us-west-2:--OMITTED--:scalingPolicy:cdc6d280-8a93-4c67-9790-abb47fd167c6:resource/ecs/service/tf-acc-test-2456603151506624334/tf-acc-test-2456603151506624334-2:policyName/tf-acc-test-2456603151506624334"
```

Output from acceptance testing after code update:

```
--- PASS: TestAccAWSAppautoScalingPolicy_disappears (26.48s)
--- PASS: TestAccAWSAppautoScalingPolicy_scaleOutAndIn (28.53s)
--- PASS: TestAccAWSAppautoScalingPolicy_ResourceId_ForceNew (43.25s)
--- PASS: TestAccAWSAppautoScalingPolicy_basic (46.47s)
--- PASS: TestAccAWSAppautoScalingPolicy_spotFleetRequest (61.26s)
--- PASS: TestAccAWSAppautoScalingPolicy_dynamoDb (115.02s)
--- PASS: TestAccAWSAppautoScalingPolicy_multiplePoliciesSameResource (116.06s)
--- PASS: TestAccAWSAppautoScalingPolicy_multiplePoliciesSameName (116.80s)
```
bflad (Contributor) commented Mar 17, 2019

Resource and documentation updates submitted: #7982

@bflad bflad added this to the v2.3.0 milestone Mar 20, 2019
bflad (Contributor) commented Mar 20, 2019

These updates here will be released in version 2.3.0 of the Terraform AWS Provider in the next day or two.

If you are still having trouble after upgrading to version 2.3.0 of the Terraform AWS Provider (when it's released) with a configuration similar to the above, please create a new GitHub issue with the relevant details from the issue template and we can triage further. Thanks!
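
For anyone picking up the fix, upgrading is a matter of bumping the provider version constraint and re-running `terraform init`. A minimal sketch, assuming the 0.11-style configuration shown in the original report (the region value is illustrative):

```hcl
# Hypothetical provider block: "~> 2.3" constrains the AWS provider to
# 2.3.0 or newer within the 2.x series, so the ForceNew fix for the
# aws_appautoscaling_policy resource_id argument is included.
provider "aws" {
  version = "~> 2.3"
  region  = "us-east-1"
}
```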

bflad (Contributor) commented Mar 21, 2019

This has been released in version 2.3.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

ghost (Author) commented Mar 30, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 30, 2020