
Use of placement_strategy within aws_ecs_service always forces new resource #13216

Closed
fillup opened this issue Mar 30, 2017 · 3 comments · Fixed by #13220

@fillup (Contributor) commented Mar 30, 2017

Description

When using aws_ecs_service with a placement_strategy specified, Terraform forces a new aws_ecs_service resource on every apply, even when the values have not changed. This is problematic: destroying the existing service in order to recreate it causes a service disruption, and create_before_destroy does not help because a second service cannot be created with the same name.

Terraform Version

Terraform v0.9.1

Affected Resource(s)

  • aws_ecs_service

Terraform Configuration Files

```
resource "aws_ecs_service" "service" {
  name          = "${var.service_name}"
  cluster       = "${var.cluster_id}"
  desired_count = "${var.desired_count}"
  iam_role      = "${aws_iam_role.ecsServiceRole.arn}"
  depends_on    = ["aws_iam_role_policy.ecsServiceRolePolicy", "aws_alb_listener.https"]

  placement_strategy {
    type  = "spread"
    field = "instanceId"
  }

  load_balancer {
    target_group_arn = "${aws_alb_target_group.tg.arn}"
    container_name   = "${var.lb_container_name}"
    container_port   = "${var.lb_container_port}"
  }

  # Track the latest ACTIVE revision
  task_definition = "${aws_ecs_task_definition.td.family}:${max("${aws_ecs_task_definition.td.revision}", "${data.aws_ecs_task_definition.td.revision}")}"
}
```

Debug Output

https://gist.github.com/fillup/af06c64f06943dfda6621898c866d48a

Expected Behavior

I would expect no changes to occur since the setting for placement_strategy did not change.

Actual Behavior

The aws_ecs_service is destroyed and recreated.

Steps to Reproduce

  1. terraform apply
  2. terraform plan - the plan shows the service will be replaced even though the configuration has not changed
  3. terraform apply - the aws_ecs_service is destroyed and recreated

Important Factoids

I assume this is caused because aws_ecs_service can accept up to five placement_strategy blocks, so during plan/apply Terraform doesn't recognize that the block in state is the same as the one in the configuration, but I'm just guessing.

@stack72 (Contributor) commented Mar 30, 2017

Hi @fillup

Thanks for the issue here - I can see the issue is actually down to casing :(

```
placement_strategy.#:                     "1" => "1"
placement_strategy.1676812570.field:      "instanceid" => "" (forces new resource)
placement_strategy.1676812570.type:       "spread" => "" (forces new resource)
placement_strategy.3946258308.field:      "" => "instanceId" (forces new resource)
placement_strategy.3946258308.type:       "" => "spread" (forces new resource)
```

This should not be an issue for the user so I am going to work on getting a fix together for 0.9.3

Sorry for this

Paul
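
The two distinct indexes in the diff above (1676812570 vs 3946258308) come from how Terraform identifies elements of a schema set: each block is serialized and hashed, so a casing difference produces an entirely different element rather than a change to the existing one. A minimal sketch of the idea, assuming a CRC32-based hash like Terraform's helper/hashcode (the serialized buffer shown here is illustrative, not Terraform's exact format):

```go
package main

import (
	"fmt"
	"hash/crc32"
)

// hashString mirrors the shape of Terraform's helper/hashcode.String:
// a CRC32 checksum used as the index of an element inside a schema set.
func hashString(s string) int {
	return int(crc32.ChecksumIEEE([]byte(s)))
}

func main() {
	// Illustrative serialization of one placement_strategy block.
	stored := "type:spread;field:instanceid;" // lowercased in state pre-0.9.2
	config := "type:spread;field:instanceId;" // as written in the configuration

	// Different casing => different hash => Terraform sees one element
	// removed and another added, both marked "forces new resource".
	fmt.Println(hashString(stored) != hashString(config)) // true
}
```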

@stack72 stack72 self-assigned this Mar 30, 2017
@stack72 stack72 added bug and removed atlas labels Mar 30, 2017
stack72 added a commit that referenced this issue Mar 30, 2017
…gies

Fixes: #13216

Prior to Terraform 0.9.2, we always set placement_strategies to
lowercase. Therefore, people using it in Terraform 0.9.2 are getting
continual diffs:

```
-/+ aws_ecs_service.mongo
    cluster:                             "arn:aws:ecs:us-west-2:187416307283:cluster/terraformecstest1" => "arn:aws:ecs:us-west-2:187416307283:cluster/terraformecstest1"
    deployment_maximum_percent:          "200" => "200"
    deployment_minimum_healthy_percent:  "100" => "100"
    desired_count:                       "1" => "1"
    name:                                "mongodb" => "mongodb"
    placement_strategy.#:                "1" => "1"
    placement_strategy.1676812570.field: "instanceid" => "" (forces new resource)
    placement_strategy.1676812570.type:  "spread" => "" (forces new resource)
    placement_strategy.3946258308.field: "" => "instanceId" (forces new resource)
    placement_strategy.3946258308.type:  "" => "spread" (forces new resource)
    task_definition:                     "arn:aws:ecs:us-west-2:187416307283:task-definition/mongodb:1991" => "arn:aws:ecs:us-west-2:187416307283:task-definition/mongodb:1991"

Plan: 1 to add, 0 to change, 1 to destroy.
```

This adds a DiffSuppression func to make sure this doesn't trigger a
ForceNew resource:

```
% terraform plan                                                                                                                           ✹ ✭
[WARN] /Users/stacko/Code/go/bin/terraform-provider-aws overrides an internal plugin for aws-provider.
  If you did not expect to see this message you will need to remove the old plugin.
  See https://www.terraform.io/docs/internals/internal-plugins.html
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

aws_ecs_cluster.default: Refreshing state... (ID: arn:aws:e...ecstest1)
aws_ecs_task_definition.mongo: Refreshing state... (ID: mongodb)
aws_ecs_service.mongo: Refreshing state... (ID: arn:aws:e.../mongodb)
No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, Terraform
doesn't need to do anything.
```
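
The commit message mentions a DiffSuppression func but does not show the code inline. A hedged sketch of what such a suppression looks like, assuming the helper/schema SDK's DiffSuppressFunc pattern (the function name is illustrative, and the real signature also takes the attribute key and a *schema.ResourceData, omitted here for brevity):

```go
package main

import (
	"fmt"
	"strings"
)

// suppressCaseDiff has the shape of a schema.DiffSuppressFunc from the
// Terraform helper/schema SDK. Returning true tells Terraform to treat
// the state value ("old") and config value ("new") as equal, so a
// casing-only mismatch no longer forces a new resource.
func suppressCaseDiff(old, new string) bool {
	return strings.EqualFold(old, new)
}

func main() {
	fmt.Println(suppressCaseDiff("instanceid", "instanceId")) // true: diff suppressed
	fmt.Println(suppressCaseDiff("instanceid", "memory"))     // false: genuine change
}
```

Attaching a function like this to the placement_strategy field in the resource schema keeps the lowercased value already in state from diffing against the mixed-case value in the configuration.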

stack72 added a commit that referenced this issue Mar 30, 2017
…gies (#13220)

Fixes: #13216
@fillup (Contributor, Author) commented Mar 31, 2017

Wow, that was amazingly fast, thank you!

@ghost commented Apr 14, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 14, 2020