Failed to initialize aws-auth configmap with only self managed node group #14
Comments
A better idea would be to add an option to create only the config map, without the jobs.
I really like your suggestion, and thanks for pointing out this edge case with the self managed node group. I forgot that only EKS managed node groups and Fargate profiles automatically initialize the aws-auth configmap. This snuck by the automated tests because the complete example provisions all three node types. What do you think? Another option could be using the terraform-provider-kubectl. It is not an official provider, but it does have a fair amount of support from the community and might even be the more robust solution. This approach would have the benefit of running in remote operations, and it wouldn't depend on the cluster to automatically initialize the aws-auth configmap.
It would be enough if you added an option to create only the config map (without the RBAC, service account, and jobs), because in the case of self managed groups we only need the configmap.
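A minimal sketch of what such a toggle might look like; the variable name create_job_resources and the service account shown are illustrative, not part of the module:

variable "create_job_resources" {
  description = "Illustrative flag: when false, only the aws-auth configmap is managed and the RBAC, service account, and job resources are skipped."
  type        = bool
  default     = true
}

# Each job-related resource would be gated on the flag, for example:
resource "kubernetes_service_account" "aws_auth_init" {
  count = var.create_job_resources ? 1 : 0

  metadata {
    name      = "aws-auth-init" # hypothetical name
    namespace = "kube-system"
  }
}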
One tricky thing to consider is that this module supports replacing or patching the partially managed aws-auth configmap in place. What would happen if the user provided
I agree. If the terraform-kubernetes-provider supported patching on create with the
I am working on a new release. Funny enough, I implemented this logic last night; it will create a new aws-auth configmap. I am going to experiment with the terraform-kubectl-provider before I push out a release.
@bohdantverdyi this works amazingly well. I am pleasantly surprised.

locals {
  # Merge the roles from the cluster-initialized aws-auth configmap (if any)
  # with the roles passed into the module, dropping duplicates.
  merged_map_roles = distinct(concat(
    try(yamldecode(yamldecode(var.eks.aws_auth_configmap_yaml).data.mapRoles), []),
    var.map_roles,
  ))

  # Render the full aws-auth configmap from a template using the merged roles.
  aws_auth_configmap_yaml = templatefile("${path.module}/templates/aws_auth_cm.tpl",
    {
      map_roles    = local.merged_map_roles
      map_users    = var.map_users
      map_accounts = var.map_accounts
    }
  )
}

# Apply the rendered configmap with the community kubectl provider.
resource "kubectl_manifest" "patch" {
  override_namespace = "kube-system"
  yaml_body          = local.aws_auth_configmap_yaml
}

Could you test this branch to see if it solves your issue? I plan on releasing tonight if it works.
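For anyone trying the branch: the snippet above relies on the community kubectl provider, which has to be declared explicitly. A minimal sketch, assuming the gavinbunney/kubectl distribution and hypothetical variables for the cluster connection details:

terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14"
    }
  }
}

provider "kubectl" {
  # Hypothetical variables for the cluster connection details.
  host                   = var.cluster_endpoint
  cluster_ca_certificate = base64decode(var.cluster_certificate_authority_data)
  token                  = var.cluster_auth_token
  load_config_file       = false
}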
I ran a number of tests with the branch. It works with only self managed node groups, with all node types, with no nodes, on local and remote runs, and when removing and adding fields.
Thanks, awesome.
When the eks module uses only a self managed node group, no aws-auth configmap exists (only a managed node group can generate it automatically).
Nodes can't connect to EKS when no aws-auth configmap exists in the cluster, and in turn the job can't execute because no nodes have joined the cluster (the job can't run because the aws-auth configmap does not exist).
The solution would be to add a new option that creates the aws-auth config map before the job executes.
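For context, this is roughly what the self managed nodes need before they can register with the cluster. A minimal sketch using the kubernetes provider; the variable holding the node instance role ARN is a placeholder, not part of the module:

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Map the self managed node group's instance role so its nodes can join.
    mapRoles = yamlencode([
      {
        rolearn  = var.self_managed_node_role_arn # placeholder
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }
    ])
  }
}

Any job resource would then depend on this configmap so the nodes have a chance to join before the job is scheduled.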