Node group tags are applied but not recognized during following runs #1961
Comments
I have never experienced this before, and I deploy this module several times a day. Could you try the following for your cluster definition:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.14.0"

  cluster_name    = local.cluster_name
  cluster_version = "1.21"

  subnet_ids = [
    data.aws_subnet.private-a.id,
    data.aws_subnet.private-b.id,
    data.aws_subnet.private-c.id
  ]
  vpc_id = data.aws_vpc.prod.id

  cluster_endpoint_private_access = true
  cluster_endpoint_public_access  = true
  enable_irsa                     = true

  cluster_addons = {
    coredns = {
      resolve_conflicts = "OVERWRITE"
      addon_version     = "v1.8.4-eksbuild.1"
    }
    kube-proxy = {
      resolve_conflicts = "OVERWRITE"
      addon_version     = "v1.21.2-eksbuild.2"
    }
    vpc-cni = {
      resolve_conflicts        = "OVERWRITE"
      addon_version            = "v1.10.1-eksbuild.1"
      service_account_role_arn = module.vpc_cni_irsa.iam_role_arn
    }
  }

  eks_managed_node_group_defaults = {
    instance_types               = ["m6a.large"]
    capacity_type                = "SPOT"
    iam_role_additional_policies = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"]

    tags = {
      "k8s.io/cluster-autoscaler/${local.cluster_name}" = "owned"
      "k8s.io/cluster-autoscaler/enabled"                = "TRUE"
    }
  }

  eks_managed_node_groups = {
    spot0 = {
      name         = "spot0-${local.cluster_name}"
      min_size     = 1
      max_size     = 3
      desired_size = 1
    }
    spot1 = {
      name         = "spot1-${local.cluster_name}"
      min_size     = 1
      max_size     = 3
      desired_size = 1
    }
  }

  tags = local.tags
}
```
Thanks for the quick answer, and sorry for the delay. If I apply that Terraform I get:

So my guess is that my original issue is also related to some weird interaction between the AWS provider default tag settings and the custom tag settings. If I remember correctly, the AWS provider default tags don't work with EC2 instances and volumes; setting the tags manually solves that issue. But if I leave out the default tags in the AWS provider, many other resources are not tagged.
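For context, here is a minimal sketch of the interaction being described: tags set through the provider's default_tags block alongside tags set directly on a resource. The provider block, region, AMI lookup, and tag values below are illustrative assumptions, not the reporter's actual configuration:

```hcl
# Hypothetical sketch only: provider default_tags combined with manually set
# resource tags, which is the interaction suspected above.
provider "aws" {
  region = "eu-west-1" # placeholder region

  # Tags the provider tries to apply to every supported resource.
  default_tags {
    tags = {
      Environment = "prod" # example values only
      ManagedBy   = "terraform"
    }
  }
}

# Placeholder AMI lookup so the instance below is self-contained.
data "aws_ami" "example" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Setting the same tags manually on the instance and its volumes is the
# workaround mentioned above for resources where default_tags alone does not
# behave as expected; the overlap between the two tag sets is where perpetual
# plan diffs like the one in this issue tend to show up.
resource "aws_instance" "example" {
  ami           = data.aws_ami.example.id
  instance_type = "t3.micro"

  tags = {
    Environment = "prod"
  }

  volume_tags = {
    Environment = "prod"
  }
}
```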
This is an upstream issue that we unfortunately cannot do anything about here: hashicorp/terraform-provider-aws#19204
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description
When I apply this Terraform:
https://gist.github.com/gijsdpg/3009309d32c9d298b214d2c7ea615e13
the tags are correctly applied everywhere, but when I rerun Terraform it doesn't detect the tags and wants to apply them again to the aws_security_group, aws_launch_template, aws_iam_role, and aws_eks_node_group for every node group.

Versions
Tried this with multiple Terraform (including v1.1.7), eks module (including 18.11.0), and aws provider (including 4.6.0) versions.

Reproduction
Steps to reproduce the behavior:
Not using workspaces; local cache cleared. Just apply this Terraform in a new project:
https://gist.github.com/gijsdpg/3009309d32c9d298b214d2c7ea615e13
and then reapply; the tags are not properly detected.
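Concretely, the reproduction boils down to the following commands (assuming the gist's configuration has been copied into a fresh working directory):

```sh
terraform init
terraform apply   # first run: tags are created on all resources as expected
terraform plan    # second run: the same tags show up again as pending changes
```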
Code Snippet to Reproduce
See the gist linked above.
Expected behavior
The tags should be recognized by Terraform and not be applied again.
Actual behavior
The tags are not recognized by Terraform and are applied again.
Terminal Output Screenshot(s)
Plan output for one of the resources (screenshot not captured in this extract).