Can't interpolate multi resources for nested fields #49
Comments
This comment was originally opened by @erkolson as hashicorp/terraform#3902 (comment). It was migrated here as part of the provider split. The original comment is below. I'm seeing what I assume is the same issue with the
This comment was originally opened by @calvinfo as hashicorp/terraform#3902 (comment). It was migrated here as part of the provider split. The original comment is below. FWIW, the workaround we ended up using was this:
We created a module for booting an ElastiCache cluster, which gets passed a node_count. And then we create multiple
It's admittedly hacky, since it just relies on how AWS formats its configuration endpoint and then replaces the string to find the addresses of individual nodes. But it does work, and outputs a list of individual nodes from the module.
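The string-replacement approach described above might look roughly like this in Terraform 0.11 syntax (a sketch only; the resource names, variable names, and the exact endpoint format handling are assumptions, not the commenter's actual module code):

```hcl
# Sketch: derive individual node addresses from the configuration endpoint
# by substituting the (1-based, zero-padded) node index into the hostname.
# AWS configuration endpoints look like: <cluster>.<hash>.cfg.<region>.cache.amazonaws.com
# and node endpoints look like:          <cluster>.<hash>.0001.<region>.cache.amazonaws.com
variable "node_count" {
  default = 2
}

resource "aws_elasticache_cluster" "cache" {
  cluster_id      = "example"
  engine          = "memcached"
  node_type       = "cache.t2.micro"
  num_cache_nodes = "${var.node_count}"
}

resource "null_resource" "addresses" {
  count = "${var.node_count}"

  triggers {
    address = "${replace(aws_elasticache_cluster.cache.configuration_endpoint, ".cfg.", format(".%04d.", count.index + 1))}"
  }
}

output "node_addresses" {
  value = ["${null_resource.addresses.*.triggers.address}"]
}
```

As the commenter notes, this only works as long as AWS keeps formatting node hostnames this way; it sidesteps the splat limitation rather than fixing it.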
This comment was originally opened by @erkolson as hashicorp/terraform#3902 (comment). It was migrated here as part of the provider split. The original comment is below. Thanks, that is clever. For my needs, this is dirty but will work:
This comment was originally opened by @svenwltr as hashicorp/terraform#3902 (comment). It was migrated here as part of the provider split. The original comment is below. This issue has been open for more than a year now. Any updates or plans to fix it?
This comment was originally opened by @phillbaker as hashicorp/terraform#3902 (comment). It was migrated here as part of the provider split. The original comment is below. Not sure if this is exactly the same underlying issue, but it looks like splats for lists of nested fields were working in:

```hcl
output "url" {
  value = "${aws_elasticache_cluster.redis.cache_nodes.*.address}"
}
```

There are quite a few references to this on GitHub, 1 and 2 for example. However, at some point in
**Version**

```
$ terraform --version
Terraform v0.11.7
+ provider.aws v1.25.0
+ provider.null v1.0.0
```

**Explanation**

The reason this does not work as expected is that the HCL interpreter treats splat expressions as a special case. Several issues have been filed by people running into this (#672). This behavior will change with HCL2. By the way, as of hashicorp/terraform#2821, splats in outputs as lists have been supported for a long time now.

**Workaround**

You may use the `null_data_source` data source, as follows:

```hcl
#------------------------------------------------------------------------------#
# Test Setup
#------------------------------------------------------------------------------#
provider "aws" {
  max_retries = 3
  region      = "eu-central-1"
  profile     = "devops"
}

data "aws_vpc" "default" {
  default = true
}

data "aws_subnet_ids" "default" {
  vpc_id = "${data.aws_vpc.default.id}"
}

data "aws_security_group" "default" {
  vpc_id = "${data.aws_vpc.default.id}"
}

#------------------------------------------------------------------------------#
# Memcache Cluster
#------------------------------------------------------------------------------#
variable "cluster_size" {
  description = "Number of cluster nodes"
  default     = 2
}

resource "aws_elasticache_subnet_group" "test" {
  name       = "test"
  subnet_ids = ["${data.aws_subnet_ids.default.ids}"]
}

resource "aws_elasticache_cluster" "test" {
  cluster_id           = "test"
  engine               = "memcached"
  node_type            = "cache.t2.micro"
  port                 = 11211
  num_cache_nodes      = "${var.cluster_size}"
  parameter_group_name = "default.memcached1.4"
  subnet_group_name    = "${aws_elasticache_subnet_group.test.name}"
  security_group_ids   = ["${data.aws_security_group.default.id}"]
}

#------------------------------------------------------------------------------#
# Output rendering
#------------------------------------------------------------------------------#
data "null_data_source" "test" {
  count = "${var.cluster_size}"

  inputs = {
    address = "${lookup(aws_elasticache_cluster.test.cache_nodes[count.index], "address")}"
  }
}

output "test" {
  value = ["${data.null_data_source.test.*.outputs.address}"]
}
```

$ TF_WARN_OUTPUT_ERRORS=1 terraform apply
data.aws_vpc.default: Refreshing state...
data.aws_subnet_ids.default: Refreshing state...
data.aws_security_group.default: Refreshing state...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
<= read (data resources)
Terraform will perform the following actions:
<= data.null_data_source.test[0]
id: <computed>
has_computed_default: <computed>
inputs.%: <computed>
outputs.%: <computed>
random: <computed>
<= data.null_data_source.test[1]
id: <computed>
has_computed_default: <computed>
inputs.%: <computed>
outputs.%: <computed>
random: <computed>
+ aws_elasticache_cluster.test
id: <computed>
apply_immediately: <computed>
availability_zone: <computed>
az_mode: <computed>
cache_nodes.#: <computed>
cluster_address: <computed>
cluster_id: "test"
configuration_endpoint: <computed>
engine: "memcached"
engine_version: <computed>
maintenance_window: <computed>
node_type: "cache.t2.micro"
num_cache_nodes: "2"
parameter_group_name: "default.memcached1.4"
port: "11211"
replication_group_id: <computed>
security_group_ids.#: "1"
security_group_ids.3425856862: "sg-7e92ac14"
security_group_names.#: <computed>
snapshot_window: <computed>
subnet_group_name: "test"
+ aws_elasticache_subnet_group.test
id: <computed>
description: "Managed by Terraform"
name: "test"
subnet_ids.#: "3"
subnet_ids.1289025745: "subnet-c128938c"
subnet_ids.3736608364: "subnet-daf525a7"
subnet_ids.4040429980: "subnet-9fb52cf4"
Plan: 2 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_elasticache_subnet_group.test: Creating...
description: "" => "Managed by Terraform"
name: "" => "test"
subnet_ids.#: "" => "3"
subnet_ids.1289025745: "" => "subnet-c128938c"
subnet_ids.3736608364: "" => "subnet-daf525a7"
subnet_ids.4040429980: "" => "subnet-9fb52cf4"
aws_elasticache_subnet_group.test: Creation complete after 1s (ID: test)
aws_elasticache_cluster.test: Creating...
apply_immediately: "" => "<computed>"
availability_zone: "" => "<computed>"
az_mode: "" => "<computed>"
cache_nodes.#: "" => "<computed>"
cluster_address: "" => "<computed>"
cluster_id: "" => "test"
configuration_endpoint: "" => "<computed>"
engine: "" => "memcached"
engine_version: "" => "<computed>"
maintenance_window: "" => "<computed>"
node_type: "" => "cache.t2.micro"
num_cache_nodes: "" => "2"
parameter_group_name: "" => "default.memcached1.4"
port: "" => "11211"
replication_group_id: "" => "<computed>"
security_group_ids.#: "" => "1"
security_group_ids.3425856862: "" => "sg-7e92ac14"
security_group_names.#: "" => "<computed>"
snapshot_window: "" => "<computed>"
subnet_group_name: "" => "test"
aws_elasticache_cluster.test: Still creating... (10s elapsed)
aws_elasticache_cluster.test: Still creating... (20s elapsed)
aws_elasticache_cluster.test: Still creating... (30s elapsed)
aws_elasticache_cluster.test: Still creating... (40s elapsed)
aws_elasticache_cluster.test: Still creating... (50s elapsed)
aws_elasticache_cluster.test: Still creating... (1m0s elapsed)
aws_elasticache_cluster.test: Still creating... (1m10s elapsed)
aws_elasticache_cluster.test: Still creating... (1m20s elapsed)
aws_elasticache_cluster.test: Still creating... (1m30s elapsed)
aws_elasticache_cluster.test: Still creating... (1m40s elapsed)
aws_elasticache_cluster.test: Still creating... (1m50s elapsed)
aws_elasticache_cluster.test: Still creating... (2m0s elapsed)
aws_elasticache_cluster.test: Still creating... (2m10s elapsed)
aws_elasticache_cluster.test: Still creating... (2m20s elapsed)
aws_elasticache_cluster.test: Still creating... (2m30s elapsed)
aws_elasticache_cluster.test: Still creating... (2m40s elapsed)
aws_elasticache_cluster.test: Still creating... (2m50s elapsed)
aws_elasticache_cluster.test: Still creating... (3m0s elapsed)
aws_elasticache_cluster.test: Still creating... (3m10s elapsed)
aws_elasticache_cluster.test: Still creating... (3m20s elapsed)
aws_elasticache_cluster.test: Still creating... (3m30s elapsed)
aws_elasticache_cluster.test: Still creating... (3m40s elapsed)
aws_elasticache_cluster.test: Still creating... (3m50s elapsed)
aws_elasticache_cluster.test: Still creating... (4m0s elapsed)
aws_elasticache_cluster.test: Still creating... (4m10s elapsed)
aws_elasticache_cluster.test: Still creating... (4m20s elapsed)
aws_elasticache_cluster.test: Still creating... (4m30s elapsed)
aws_elasticache_cluster.test: Creation complete after 4m38s (ID: test)
data.null_data_source.test[1]: Refreshing state...
data.null_data_source.test[0]: Refreshing state...
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Outputs:
test = [
test.b6reqy.0001.euc1.cache.amazonaws.com,
test.b6reqy.0002.euc1.cache.amazonaws.com
]
Hi folks 👋 This issue is resolved in Terraform 0.12, which fully supports indexed splat expressions. Given this configuration:

```hcl
terraform {
  required_providers {
    aws = "2.20.0"
  }

  required_version = "0.12.5"
}

provider "aws" {
  region = "us-east-2"
}

resource "aws_elasticache_cluster" "test" {
  cluster_id      = "bflad-testing"
  engine          = "memcached"
  node_type       = "cache.t2.micro"
  num_cache_nodes = 3
}

output "test1" {
  value = aws_elasticache_cluster.test.cache_nodes[*].address
}

output "test2" {
  value = element(aws_elasticache_cluster.test.cache_nodes[*].address, 1)
}

output "test3" {
  value = join(",", formatlist("%s:%s", aws_elasticache_cluster.test.cache_nodes[*].address, aws_elasticache_cluster.test.cache_nodes[*].port))
}
```

We can get the various expected outputs that were problematic in Terraform 0.11 and earlier.
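As a side note, the `formatlist`/`join` combination in `test3` could equivalently be written with a Terraform 0.12 `for` expression (a sketch, not part of the original comment; the output name is made up):

```hcl
output "test3_for" {
  # Equivalent to the formatlist/join combination above: build an
  # "address:port" string per node, then join them with commas.
  value = join(",", [for n in aws_elasticache_cluster.test.cache_nodes : "${n.address}:${n.port}"])
}
```

The `for` expression form is often easier to extend when a node attribute needs per-element transformation beyond simple formatting.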
Enjoy! 🚀
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
This issue was originally opened by @calvinfo as hashicorp/terraform#3902. It was migrated here as part of the provider split. The original body of the issue is below.
We're attempting to create an elasticache cluster, but unfortunately the interpolation doesn't work for splats of nested fields.
It could be fixed in the config/interpolation, to mark it as a multi, but that would require changing around the field name behavior as well. Is that behavior you're willing to support? Otherwise it might be worth changing the cache resource.
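The failing pattern the issue describes (a hedged reconstruction, since the original example was not migrated with the issue body) is a splat over a field of a nested block list, which under Terraform 0.11 and earlier could not be traversed:

```hcl
output "addresses" {
  # Fails under Terraform 0.11 and earlier: cache_nodes is a list of nested
  # blocks, and the splat cannot reach into its per-element attributes here.
  value = "${aws_elasticache_cluster.example.cache_nodes.*.address}"
}
```

The resource name `example` is illustrative; any splat of the form `<resource>.<name>.<nested_list>.*.<field>` hit the same limitation.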