fix: aws-auth cm deletes managed nodegroups entry #1524
Conversation
This PR has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This PR has been automatically closed because it has not had recent activity since being marked as stale.
/reopen
Can anyone reopen and take a look at this, please?
Exactly. As far as I tested, the managed worker nodes will be unhealthy if the worker role in the aws-auth ConfigMap does not include the full path. On the other hand, the AWS IAM authenticator requires it without the IAM path. There is actually a long-standing open issue about this: kubernetes-sigs/aws-iam-authenticator#268.
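For illustration, a minimal sketch of the two ARN forms in question (the account ID, path, and role name below are hypothetical):

```hcl
locals {
  # ARN form registered by the EKS managed node group (includes the IAM path);
  # without this entry in aws-auth the managed nodes report as unhealthy.
  node_role_arn_with_path = "arn:aws:iam::111122223333:role/eks/nodes/my-cluster-worker"

  # ARN form that aws-iam-authenticator matches against (IAM path stripped).
  node_role_arn_without_path = "arn:aws:iam::111122223333:role/my-cluster-worker"
}
```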
# EKS managed node groups needs full role arn
for role in module.node_groups.aws_auth_roles :
{
  rolearn  = role["worker_role_arn"]
  username = "system:node:{{EC2PrivateDNSName}}"
  groups = [
    "system:bootstrappers",
    "system:nodes",
  ],
Here I'm repeating the worker IAM role ARN so it is also added with the full IAM path.
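In effect (a hedged sketch with hypothetical ARN values, not the module's exact locals), the same worker role ends up mapped twice, once in each form:

```hcl
locals {
  worker_map_roles = [
    {
      # Path-stripped form expected by aws-iam-authenticator (hypothetical ARN).
      rolearn  = "arn:aws:iam::111122223333:role/my-cluster-worker"
      username = "system:node:{{EC2PrivateDNSName}}"
      groups   = ["system:bootstrappers", "system:nodes"]
    },
    {
      # Full-path form added by this PR so the managed node group stays healthy.
      rolearn  = "arn:aws:iam::111122223333:role/eks/nodes/my-cluster-worker"
      username = "system:node:{{EC2PrivateDNSName}}"
      groups   = ["system:bootstrappers", "system:nodes"]
    },
  ]
}
```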
mapRoles = replace(yamlencode(
  distinct(concat(
    local.configmap_roles,
    var.map_roles,
  ))
), "\"", "")
Here I'm removing the quotes from the formatted YAML to match the diff of kubernetes_config_map.aws_auth.
It is working fine without these changes.
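For context, a minimal standalone sketch (with a hypothetical role ARN) of what the quote removal does: yamlencode() double-quotes keys and string values, while the aws-auth ConfigMap that EKS writes is unquoted, so without the replace() the plan can show a permanent diff.

```hcl
locals {
  example_map_roles = replace(yamlencode([
    {
      # Hypothetical worker role ARN, used only for this example.
      rolearn  = "arn:aws:iam::111122223333:role/my-cluster-worker"
      username = "system:node:{{EC2PrivateDNSName}}"
      groups   = ["system:bootstrappers", "system:nodes"]
    },
  ]), "\"", "")
}

# yamlencode() alone renders entries like "rolearn": "arn:aws:iam::...";
# after the replace() the double quotes are gone and the rendered YAML
# matches the unquoted form in the live ConfigMap.
```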
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
PR o'clock
I think there are open issues related to this, but I'm not sure.
Description
The first apply of this module when used with node_groups generates a correct aws-auth ConfigMap, which is then updated by AWS to add managed node group access. The second apply destroys that entry, leaving the managed nodes in a DEGRADED state with the:
After applying the changes in this PR, the entry is kept (Terraform map entry removed/added):
Checklist