
Can't use cache_nodes output of aws_elasticache_cluster as a list #8794

Closed
brikis98 opened this issue Sep 12, 2016 · 6 comments

Comments

@brikis98
Contributor

Terraform Version

Terraform v0.7.3

Affected Resource(s)

  • aws_elasticache_cluster

Terraform Configuration Files

resource "aws_elasticache_cluster" "memcached" {
  cluster_id = "foo"
  engine = "memcached"

  num_cache_nodes = 3
  node_type = "cache.t2.micro"
}

output "node_addresses" {
  value = ["${aws_elasticache_cluster.memcached.cache_nodes.*.address}"]
}

Expected Behavior

I get a list of 3 node addresses as an output.

Actual Behavior

I get nothing.

Steps to Reproduce

  1. terraform apply

Important Factoids

If I change the output to a hard-coded item, it works correctly:

output "node_addresses" {
  value = "${aws_elasticache_cluster.memcached.cache_nodes.0.address}"
}

But this obviously doesn't work if the number of cache nodes is controlled by a variable. I tried to work around it in various ways, but it looks like cache_nodes isn't a proper list. All the typical list operations, like join, split, and element do not work on it. Therefore, I can't find any way to dynamically return a list of node addresses or ids or to pass them onto another module (e.g. a module that adds CloudWatch alarms to each node).
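To make the goal concrete, here is a sketch (not from the issue; the alarm parameters and variable names are illustrative) of the kind of per-node wiring that a working cache_nodes list would unblock, using element() with count:

```hcl
# Illustrative sketch only: assumes cache_nodes.*.id behaves as a real list,
# which is exactly what this issue reports is broken in 0.7.3.
resource "aws_cloudwatch_metric_alarm" "node_cpu" {
  count               = "${var.num_cache_nodes}"
  alarm_name          = "memcached-cpu-${count.index}"
  namespace           = "AWS/ElastiCache"
  metric_name         = "CPUUtilization"
  comparison_operator = "GreaterThanThreshold"
  threshold           = "80"
  evaluation_periods  = "2"
  period              = "60"
  statistic           = "Average"

  dimensions {
    CacheClusterId = "foo"
    CacheNodeId    = "${element(aws_elasticache_cluster.memcached.cache_nodes.*.id, count.index)}"
  }
}
```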

References

@mattsoftware

I am seeing this too. Trying to get a list of addresses, but ${join(",", aws_elasticache_cluster.main.cache_nodes.*.address)} appears to be empty.

@calvinfo

+1 we also had this problem.

The current workaround for us is a template file, but it relies on the underlying format of the AWS configuration endpoint in conjunction with the individual node endpoints, which is pretty hacky:

resource "aws_elasticache_cluster" "main" {
  cluster_id = "${var.cluster_id}"
  engine = "${var.engine}"
  ...
}

resource "template_file" "hosts" {
  count = "${var.node_count}"
  template = "${file("${path.module}/template.tpl")}"

  vars {
    endpoint = "${aws_elasticache_cluster.main.configuration_endpoint}"
    count = "${format("%04d", count.index + 1)}"
  }
}

/**
 * Outputs.
 */

output "node_endpoints" { value = "${join(",", template_file.hosts.*.rendered)}" }
output "endpoint" { value = "${aws_elasticache_cluster.main.configuration_endpoint}" }

And then the template file:

${replace("${endpoint}", "cfg", "${count}")}
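To make the trick above concrete, here is an assumed illustration (hostnames are hypothetical, not from the issue) of how the replace maps the configuration endpoint onto individual node endpoints:

```hcl
# Assumed illustration: ElastiCache memcached node endpoints differ from the
# configuration endpoint only in one hostname label, so replacing "cfg" with
# a zero-padded index yields each node's address.
#
#   configuration_endpoint: foo.abc123.cfg.use1.cache.amazonaws.com:11211
#   rendered for index 0:   foo.abc123.0001.use1.cache.amazonaws.com:11211
#   rendered for index 1:   foo.abc123.0002.use1.cache.amazonaws.com:11211
```

This is why the workaround is fragile: it depends on AWS keeping that naming scheme, rather than on data Terraform actually exposes.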

@mitchellh
Contributor

This looks like a duplicate of #9080, or possibly #8695, which was just fixed.
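For anyone landing here later, a sketch of what the post-fix usage should look like, assuming the splat fix referenced above is in your Terraform build:

```hcl
# Sketch, assuming cache_nodes.*.address now behaves as a real list,
# so join(), element(), etc. work on it as expected.
output "node_addresses" {
  value = "${join(",", aws_elasticache_cluster.memcached.cache_nodes.*.address)}"
}
```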

@calvinfo

Woot, thank you! 🎉

@achille-roussel

@mitchellh we're still seeing the problem occur, should we re-open?

@ghost

ghost commented Apr 16, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 16, 2020