
aws_elasticache_parameter_group always dirty #5854

Closed

wr0ngway opened this issue Mar 25, 2016 · 8 comments

@wr0ngway

If I set a single parameter in an aws_elasticache_parameter_group, it does get applied, but the group is subsequently always seen as dirty - i.e. the AWS side has multiple parameters in the group (the defaults), while the local side only has the one I set.

resource "aws_elasticache_parameter_group" "cache" {
  name = "cache"
  family = "memcached1.4"
  description = "cache param group"

  parameter {
    name = "max_item_size"
    value = "5242880"
  }
}

Result of a subsequent plan:

~ aws_elasticache_parameter_group.cache
    parameter.#:                "14" => "1"
    parameter.1156960860.name:  "maxconns_fast" => ""
    parameter.1156960860.value: "0" => ""
    parameter.1159395866.name:  "disable_flush_all" => ""
    parameter.1159395866.value: "0" => ""
    parameter.1390948782.name:  "chunk_size_growth_factor" => ""
    parameter.1390948782.value: "1.25" => ""
    parameter.17563164.name:    "error_on_memory_exhausted" => ""
    parameter.17563164.value:   "0" => ""
    parameter.1849789332.name:  "lru_crawler" => ""
    parameter.1849789332.value: "0" => ""
    parameter.2090287968.name:  "slab_reassign" => ""
    parameter.2090287968.value: "0" => ""
    parameter.2216495271.name:  "expirezero_does_not_evict" => ""
    parameter.2216495271.value: "0" => ""
    parameter.2319007506.name:  "lru_maintainer" => ""
    parameter.2319007506.value: "0" => ""
    parameter.2880467714.name:  "chunk_size" => ""
    parameter.2880467714.value: "48" => ""
    parameter.3394731685.name:  "slab_automove" => ""
    parameter.3394731685.value: "0" => ""
    parameter.3901142929.name:  "cas_disabled" => ""
    parameter.3901142929.value: "0" => ""
    parameter.4132665434.name:  "max_item_size" => "max_item_size"
    parameter.4132665434.value: "5242880" => "5242880"
    parameter.4193572145.name:  "hash_algorithm" => ""
    parameter.4193572145.value: "jenkins" => ""
    parameter.749568046.name:   "memcached_connections_overhead" => ""
    parameter.749568046.value:  "100" => ""
@phinze commented Mar 25, 2016

Hi @wr0ngway - thanks for the report.

I just tested the config you specified on Terraform v0.6.14 and I'm not seeing the same behavior.

❯ terraform apply
aws_elasticache_parameter_group.cache: Creating...
  description:                "" => "cache param group"
  family:                     "" => "memcached1.4"
  name:                       "" => "cache"
  parameter.#:                "" => "1"
  parameter.4132665434.name:  "" => "max_item_size"
  parameter.4132665434.value: "" => "5242880"
aws_elasticache_parameter_group.cache: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

❯ terraform plan
Refreshing Terraform state prior to plan...

aws_elasticache_parameter_group.cache: Refreshing state... (ID: cache)

No changes. Infrastructure is up-to-date. This means that Terraform
could not detect any differences between your configuration and
the real physical resources that exist. As a result, Terraform
doesn't need to do anything.

Do you notice anything different about my repro attempt that might be causing this?

@wr0ngway

Strange - I'm not quite sure of the path I took to get there, but it involved starting with an empty param group, then some combination of playing with that param in the console and/or Terraform, before settling on the final value in Terraform and noticing I couldn't get it to stay clean. I'll see if I can duplicate it better - maybe try creating an empty param group, then adding the single setting after that first apply?

@phinze commented Mar 25, 2016

Interesting! So flipping between this:

resource "aws_elasticache_parameter_group" "cache" {
  name = "cache"
  family = "memcached1.4"
  description = "cache param group"

  # parameter {
  #   name = "max_item_size"
  #   value = "5242880"
  # }
}

And this:

resource "aws_elasticache_parameter_group" "cache" {
  name = "cache"
  family = "memcached1.4"
  description = "cache param group"

  parameter {
    name = "max_item_size"
    value = "5242880"
  }
}

Doesn't seem to trigger any mischief from here on v0.6.14. Let me know if you figure out anything on your side. 👍

@wr0ngway

Try this: add the following parameter settings to your group, then apply, then remove them, then apply, then plan - you should see the dirtiness.

  // We don't need to set these (defaults), but without them this resource is always seen as dirty

  parameter {
    name = "maxconns_fast"
    value = "0"
  }
  parameter {
    name = "disable_flush_all"
    value = "0"
  }
  parameter {
    name = "chunk_size_growth_factor"
    value = "1.25"
  }
  parameter {
    name = "error_on_memory_exhausted"
    value = "0"
  }
  parameter {
    name = "lru_crawler"
    value = "0"
  }
  parameter {
    name = "slab_reassign"
    value = "0"
  }
  parameter {
    name = "expirezero_does_not_evict"
    value = "0"
  }
  parameter {
    name = "lru_maintainer"
    value = "0"
  }
  parameter {
    name = "chunk_size"
    value = "48"
  }
  parameter {
    name = "slab_automove"
    value = "0"
  }
  parameter {
    name = "cas_disabled"
    value = "0"
  }
  parameter {
    name = "hash_algorithm"
    value = "jenkins"
  }
  parameter {
    name = "memcached_connections_overhead"
    value = "100"
  }

@phinze commented Mar 25, 2016

Okay, yes, that does repro! Steps for a minimal repro:

resource "aws_elasticache_parameter_group" "cache" {
  name = "cache"
  family = "memcached1.4"
  description = "cache param group"

  parameter {
    name = "max_item_size"
    value = "5242880"
  }
  parameter {
    name = "maxconns_fast"
    value = "0"
  }
}

Apply the above, then apply:

resource "aws_elasticache_parameter_group" "cache" {
  name = "cache"
  family = "memcached1.4"
  description = "cache param group"

  parameter {
    name = "max_item_size"
    value = "5242880"
  }
}

This yields an unresolvable diff:

~ aws_elasticache_parameter_group.cache
    parameter.#:                "2" => "1"
    parameter.1156960860.name:  "maxconns_fast" => ""
    parameter.1156960860.value: "0" => ""
    parameter.4132665434.name:  "max_item_size" => "max_item_size"
    parameter.4132665434.value: "5242880" => "5242880"

@myrlund commented May 4, 2016

I'm seeing a similar issue, possibly related, where casing seems to trigger the dirtiness.

resource "aws_elasticache_parameter_group" "redis_cache" {
  name = "cache"
  family = "redis2.8"
  description = "cache param group"

  parameter {
    name = "notify-keyspace-events"
    value = "Ex"
  }
}

This leads to the following diff on subsequent calls to apply/plan:

~ aws_elasticache_parameter_group.cache
    parameter.202024865.name:  "" => "notify-keyspace-events"
    parameter.202024865.value: "" => "Ex"
    parameter.877125953.name:  "notify-keyspace-events" => ""
    parameter.877125953.value: "ex" => ""

If you don't believe it to be related, let me know, and I'll create a separate issue for it.

@jmasseo commented Nov 2, 2016

@myrlund Your bug is related to a ToLower() call in structure.go in the AWS provider; I've just encountered this bug myself. The big problem is that E and e are not equivalent in this resource.
The lowercasing exists in a few places in structure.go, but I think this might be the only place where it makes a functional difference:
https://github.com/hashicorp/terraform/blob/master/builtin/providers/aws/structure.go#L655
https://github.com/hashicorp/terraform/blob/master/builtin/providers/aws/structure.go#L686
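
For illustration, here is a minimal Go sketch of the lowercasing pattern described above. It is not the actual structure.go code; the type and function names are simplified stand-ins. The point is that lowercasing the value read back from the API ("Ex" becomes "ex") means the refreshed state never matches the configured parameter, so every plan shows a diff.

// Illustrative sketch only: simplified stand-in for the flattening code in
// the AWS provider's structure.go, not a copy of it.
package main

import (
	"fmt"
	"strings"
)

type parameter struct {
	Name  string
	Value string
}

// flattenParameters mimics reading parameters back from the ElastiCache API
// into Terraform state, lowercasing both name and value.
func flattenParameters(list []parameter) []map[string]string {
	result := make([]map[string]string, 0, len(list))
	for _, p := range list {
		result = append(result, map[string]string{
			"name":  strings.ToLower(p.Name),
			"value": strings.ToLower(p.Value), // "Ex" becomes "ex" here
		})
	}
	return result
}

func main() {
	configured := map[string]string{"name": "notify-keyspace-events", "value": "Ex"}
	remote := flattenParameters([]parameter{{Name: "notify-keyspace-events", Value: "Ex"}})
	fmt.Println(configured["value"] == remote[0]["value"]) // false: perpetual diff
}

Presumably, dropping the ToLower() on the value when flattening would let the refreshed state match the configuration, since the parameter value is case-sensitive on the AWS side.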

@ghost commented Apr 11, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Apr 11, 2020