
Tag not applied to Elasticache Redis Clusters #5021

Closed
zapman449 opened this issue Jun 28, 2018 · 15 comments
Labels
bug Addresses a defect in current functionality. service/elasticache Issues and PRs that pertain to the elasticache service.

Comments

@zapman449

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Version

# terraform -v
Terraform v0.11.2
+ provider.archive v1.0.3
+ provider.aws v1.22.0
+ provider.null v1.0.0
+ provider.template v1.0.0

A brief test with v0.11.7 didn't indicate that it would help.

Affected Resource(s)

  • aws_elasticache_replication_group

Terraform Configuration Files

resource "aws_elasticache_replication_group" "pass_redis" {
  count                         = "1"
  replication_group_id          = "pass-redis"
  replication_group_description = "Pass Redis"
  node_type                     = "cache.r4.large"
  engine                        = "redis"
  engine_version                = "3.2.4"
  parameter_group_name          = "default.redis3.2"
  port                          = 6379
  number_cache_clusters         = 1
  automatic_failover_enabled    = false

  security_group_ids = [
    "${aws_security_group.elasticache_sg.id}",
  ]

  subnet_group_name        = "${aws_elasticache_subnet_group.default.name}"
  snapshot_retention_limit = 7
  maintenance_window       = "Sun:04:05-Sun:05:05"
  snapshot_window          = "02:05-03:05"
  apply_immediately        = true

  tags {
    Name        = "pass_redis"
    Owner       = "Pass"
    TeamOwner   = "Pass"
    Environment = "staging"
    CostCenter  = "BoatFloaty"
    Role        = "Pass Redis"
  }
}

Debug Output

(will provide if requested)

Expected Behavior

The tags get applied to the elasticache instance(s) in question.

Actual Behavior

In AWS, in a few cases, the first 2-3 tags get applied, but not all of them. In a few other cases, no tags get applied.

The state file shows the resources as having the tags applied, but when you fetch them from the AWS API or console, they are not present.

We always include -refresh=true in our terraform plan runs, and neither those plans nor later applies indicate that the tags are wrong.
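
For what it's worth, the tags the ElastiCache API actually has can be cross-checked against one of the member cache clusters with the AWS CLI; the region, account ID, and member cluster name (pass-redis-001) below are placeholders:

aws elasticache list-tags-for-resource \
  --resource-name arn:aws:elasticache:us-east-1:ACCOUNT_ID:cluster:pass-redis-001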

Steps to Reproduce

When I throw together a toy example to reproduce this, I can't. The toy works as expected.

Important Factoids

N/A

References

N/A

@zapman449
Author

Interestingly, if I manually apply the tags, Terraform just rolls with it: it won't delete them, nor will it complain about them. It's as if it's blind to the tags after an arbitrary point.

@bflad bflad added bug Addresses a defect in current functionality. service/elasticache Issues and PRs that pertain to the elasticache service. labels Jun 29, 2018
@bflad
Contributor

bflad commented Jul 4, 2018

Are folks running into this using a provider that assumes an IAM role cross-account? e.g. using the provider configuration

provider "aws" {
  # ... other configuration ...
  assume_role {
    role_arn     = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
    session_name = "SESSION_NAME"
    external_id  = "EXTERNAL_ID"
  }
}

It could be that the account ID saved into the provider during initialization is the source account ID and not the target account ID as defined by the IAM role ARN.

See also #5064 and the discussion in #5060

@zapman449
Author

@bflad We do not assume role in the provider block, but we do assume role before invoking terraform itself. Off to read the links provided.
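
Roughly, that pre-invocation role assumption looks like this (role ARN and session name are placeholders); running aws sts get-caller-identity afterwards confirms which account the exported credentials resolve to:

CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME \
  --role-session-name terraform \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)
aws sts get-caller-identity   # should report the target account before terraform runs
terraform plan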

@bflad
Contributor

bflad commented Jul 10, 2018

@zapman449 is that role in a different AWS account?

@zapman449
Author

Yes. The IAM user is in AWS account A and assumes into admin-role in AWS account B.

@potto007

@zapman449 that will definitely be fixed by PR #5060 - we have confirmed it with a patched version of Terraform in use in our CI pipeline at Ticketmaster.

@zapman449
Author

@potto007 Excellent. Thank you.

@alexcallihan

alexcallihan commented Aug 14, 2018

Running into this as well without role assumption. Our provider settings:

provider "aws" {
  region  = "${var.region}"
}

The tag we want to add applies and is visible in the state file, but it does not show up in AWS. Any tweaks in the console followed by re-running terraform plan are also ignored. As zapman449 said above, it's as if nothing happens after the apply and Terraform is "blind" to the tags. Just wanted to note we're dealing with this as well, but unlike the case @bflad described, we aren't assuming a role or doing anything cross-account related with the provider.

@bflad
Contributor

bflad commented Aug 15, 2018

Hi folks 👋 For issues related to any cross-account weirdness, version 1.31.0 of the AWS provider should hopefully work better in those cases.

If you're still having trouble on that version, it would be great if we could get a Gist with debug logging enabled so we can further troubleshoot. If you are worried about any sensitive data, it can be encrypted with the HashiCorp GPG Key. Thanks!
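
For reference, debug logs can be written to a file by setting Terraform's logging environment variables before the run (the log file path is just an example):

export TF_LOG=DEBUG
export TF_LOG_PATH=./terraform-debug.log
terraform apply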

@bflad bflad added the waiting-response Maintainers are waiting on response from community or contributor. label Aug 15, 2018
@sepulworld

I have a similar issue when creating a new aws_elasticache_replication_group

tf version 0.11.8
aws provider 1.35

Deploying with a tags block I get:

  • module.redis_elasticache.aws_elasticache_replication_group.elastic_cache_rep_group_cluster: 1 error(s) occurred:

  • aws_elasticache_replication_group.elastic_cache_rep_group_cluster: Error creating Elasticache Replication Group: InvalidParameterValue: Tagging not available right now. Please try your query without tags.
    status code: 400, request id: 7d695195-b6b5-11e8-a4fa-cd703230b816

When I remove the tags, it deploys without error.
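
One possible stopgap until tagging works at create time is to add the tags out of band once the group is up; the region, account ID, member cluster name, and tag values below are placeholders:

aws elasticache add-tags-to-resource \
  --resource-name arn:aws:elasticache:us-east-1:ACCOUNT_ID:cluster:my-redis-001 \
  --tags Key=Environment,Value=staging Key=Owner,Value=platform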


@bflad
Contributor

bflad commented Sep 25, 2018

@sepulworld that error was returned by the Elasticache API and seems like potentially an Elasticache service issue, but sometimes the AWS APIs don't return the most correct errors. Does that error still occur if you create a new replication group with tags?

For others, are you still having the original issue? @zapman449?

@zapman449
Author

It seems to be resolved @bflad

@bflad
Contributor

bflad commented Oct 2, 2018

Okay, thanks! Let's close this out then. If you're still having problems on recent versions of the AWS provider, please open a new issue with all the details so we can troubleshoot further. 👍

@ghost

ghost commented Apr 3, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Apr 3, 2020
@breathingdust breathingdust removed the waiting-response Maintainers are waiting on response from community or contributor. label Sep 17, 2021