
invalid for_each argument #2747

Closed
mmerickel opened this issue Sep 14, 2023 · 8 comments

Comments

@mmerickel

The issue is exactly the same as #2681. The values of local.iam_role_policy_prefix and local.cni_policy are dynamic and are used in the KEY of the for_each statement.

This should be changed (a backwards-incompatible change) to statically defined keys for the three resources being created.
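For illustration, a minimal sketch of that "static keys, apply-time values" shape (not the module's actual code; the local and variable names are taken from the module's for_each expression quoted later in this thread):

locals {
  node_iam_role_policies = {
    worker_node = { arn = "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy", attach = true }
    ecr         = { arn = "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly", attach = true }
    cni         = { arn = local.cni_policy, attach = var.iam_role_attach_cni_policy }
  }
}

resource "aws_iam_role_policy_attachment" "this" {
  # Keys ("worker_node", "ecr", "cni") are static; only the values carry
  # apply-time ARNs, so Terraform can always enumerate the instances.
  for_each = { for k, v in local.node_iam_role_policies : k => v.arn if var.create && var.create_iam_role && v.attach }

  policy_arn = each.value
  role       = aws_iam_role.this[0].name
}

This keeps a single resource while the filter condition depends only on plan-time booleans; the instance keys still change, so it would also be backwards-incompatible for existing state.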

@bryantbiggs
Member

please provide the details requested/required on the issue template

@mmerickel
Author

... OK, I will copy the details to a new ticket - but it's the exact example I linked to that is preventing me from bootstrapping a new cluster, due to the following error in the eks-managed-node-group module:

│ Error: Invalid for_each argument
│
│   on .terraform/modules/eks_main_202309.eks/modules/eks-managed-node-group/main.tf line 434, in resource "aws_iam_role_policy_attachment" "this":
│  434:   for_each = { for k, v in toset(compact([
│  435:     "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
│  436:     "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
│  437:     var.iam_role_attach_cni_policy ? local.cni_policy : "",
│  438:   ])) : k => v if var.create && var.create_iam_role }
│     ├────────────────
│     │ local.cni_policy is a string, known only after apply
│     │ local.iam_role_policy_prefix is a string, known only after apply
│     │ var.create is true
│     │ var.create_iam_role is true
│     │ var.iam_role_attach_cni_policy is false
│
│ The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will
│ identify the instances of this resource.
│
│ When working with unknown values in for_each, it's better to define the map keys statically in your configuration and place apply-time results only in the map values.
│
│ Alternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge.

@bryantbiggs
Member

just looking at the error isn't enough detail - we need to see a reproduction to know what you are doing (or trying to do)

@mmerickel
Author

You are free to ignore this - I put significant effort into trying to make a simple repro and am failing.

Here is what I will leave you with: the Terraform does not pass the smell test. The keys are dynamic based on the value of the partition data source, which is a pitfall of using toset() in Terraform. I was able to get past the issue in my environment by running terraform apply -target 'module.eks.module.eks_managed_node_group["foo"].data.aws_partition.current'. After this, the partition info was loaded successfully and the full plan/apply worked on the next run.

Obviously this doesn't happen every time, and it is possible to get past it, but this module could clearly do better by not making the resource keys dependent on the partition data source.
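For context, those values are derived from the partition data source roughly like this (a sketch inferred from the error output; the module's exact locals may differ):

data "aws_partition" "current" {}

locals {
  # Known only after apply whenever the data source read is deferred to apply time
  iam_role_policy_prefix = "arn:${data.aws_partition.current.partition}:iam::aws:policy"
  cni_policy             = "${local.iam_role_policy_prefix}/AmazonEKS_CNI_Policy"
}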

A proper fix (albeit backwards-incompatible) is this:

diff --git a/modules/eks-managed-node-group/main.tf b/modules/eks-managed-node-group/main.tf
index 1d3fe81..3725176 100644
--- a/modules/eks-managed-node-group/main.tf
+++ b/modules/eks-managed-node-group/main.tf
@@ -287,7 +287,9 @@ resource "aws_launch_template" "this" {
   # Prevent premature access of policies by pods that
   # require permissions on create/destroy that depend on nodes
   depends_on = [
-    aws_iam_role_policy_attachment.this,
+    aws_iam_role_policy_attachment.worker_node,
+    aws_iam_role_policy_attachment.ecr,
+    aws_iam_role_policy_attachment.cni,
   ]
 
   lifecycle {
@@ -430,14 +432,24 @@ resource "aws_iam_role" "this" {
 }
 
 # Policies attached ref https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_node_group
-resource "aws_iam_role_policy_attachment" "this" {
-  for_each = { for k, v in toset(compact([
-    "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy",
-    "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly",
-    var.iam_role_attach_cni_policy ? local.cni_policy : "",
-  ])) : k => v if var.create && var.create_iam_role }
+resource "aws_iam_role_policy_attachment" "worker_node" {
+  count = var.create && var.create_iam_role ? 1 : 0
 
-  policy_arn = each.value
+  policy_arn = "${local.iam_role_policy_prefix}/AmazonEKSWorkerNodePolicy"
+  role       = aws_iam_role.this[0].name
+}
+
+resource "aws_iam_role_policy_attachment" "ecr" {
+  count = var.create && var.create_iam_role ? 1 : 0
+
+  policy_arn = "${local.iam_role_policy_prefix}/AmazonEC2ContainerRegistryReadOnly"
+  role       = aws_iam_role.this[0].name
+}
+
+resource "aws_iam_role_policy_attachment" "cni" {
+  count = var.create && var.create_iam_role && var.iam_role_attach_cni_policy ? 1 : 0
+
+  policy_arn = local.cni_policy
   role       = aws_iam_role.this[0].name
 }
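
Note that because this renames the resource addresses, existing deployments would also need state migration, e.g. with moved blocks (a hypothetical sketch assuming the standard aws partition; the instance keys are the policy ARNs produced by the old toset()-based for_each):

moved {
  from = aws_iam_role_policy_attachment.this["arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"]
  to   = aws_iam_role_policy_attachment.worker_node[0]
}

moved {
  from = aws_iam_role_policy_attachment.this["arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"]
  to   = aws_iam_role_policy_attachment.ecr[0]
}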

Thanks for your time!

@kahirokunn
Contributor

kahirokunn commented Sep 25, 2023

I have the same issue.
@mmerickel Could you please create a PR?

@CPWu

CPWu commented Oct 19, 2023

This issue has been reported twice now and has been repeatedly closed. It's definitely an issue when bringing up a cluster and its managed node groups at the same time.

@bryantbiggs
Member

This issue has been reported twice now and has been repeatedly closed. It's definitely an issue when bringing up a cluster and its managed node groups at the same time.

Provide a reproduction - I am willing to bet you have a depends_on which shouldn't be used, and once that's removed you won't experience the issue anymore, but that's just a guess without a reproduction 😉
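As an illustration of the depends_on pattern described above (a hypothetical caller configuration; the VPC dependency is made up for the example):

module "eks" {
  source = "terraform-aws-modules/eks/aws"

  # ... cluster and managed node group configuration ...

  # A module-level depends_on defers every data source inside the module
  # (including aws_partition) to apply time, which is what makes
  # local.iam_role_policy_prefix and local.cni_policy "known only after apply".
  depends_on = [aws_vpc.this] # avoid this unless it is truly required
}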


I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators on Nov 19, 2023