
Failed to initialize aws-auth configmap with only self managed node group #14

Closed
bohdantverdyi opened this issue Mar 23, 2022 · 9 comments · Fixed by #13
Labels
bug Something isn't working

Comments

@bohdantverdyi

When the eks module uses only a self-managed node group, no aws-auth configmap exists (only a managed node group generates it automatically).

Nodes can't connect to EKS when no aws-auth configmap exists in the cluster, so the job can't execute because no nodes have joined the cluster (and nodes can't join because the aws-auth configmap doesn't exist).

A solution would be to add a new option that creates the aws-auth configmap before job execution.
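For reference, a minimal sketch of what such an option might create using the Terraform kubernetes provider; the create_aws_auth flag and the node role variable are hypothetical names for illustration, not the module's actual interface:

resource "kubernetes_config_map" "aws_auth" {
  count = var.create_aws_auth ? 1 : 0 # hypothetical flag

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Map the self-managed node role so the nodes can join the cluster.
    mapRoles = yamlencode([
      {
        rolearn  = var.self_managed_node_role_arn # hypothetical variable
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }
    ])
  }
}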

@bohdantverdyi bohdantverdyi added the bug Something isn't working label Mar 23, 2022
@bohdantverdyi
Author

A better idea would be to add an option to create only the configmap, without the jobs.

@aidanmelen
Owner

I really like your suggestion, and thanks for pointing out this edge case with self-managed node groups. I forgot that only EKS managed node groups and Fargate profiles automatically initialize the aws-auth configmap. This snuck by the automated tests because the complete example provisions all three node types.

What do you think about a local_exec = true option that gives users the choice to run kubectl locally? That comes with its own set of problems, since local-exec won't work in remote operations where kubectl is not installed.
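A rough sketch of what that option might look like, assuming a hypothetical local_exec variable and a manifest that has already been rendered to disk; it only works where kubectl and a valid kubeconfig are available to the machine running Terraform:

resource "null_resource" "apply_aws_auth" {
  count = var.local_exec ? 1 : 0 # hypothetical flag

  provisioner "local-exec" {
    # Runs on the machine executing Terraform, which is exactly why it fails
    # in remote operations where kubectl is not installed.
    command = "kubectl apply -f ${path.module}/aws_auth_cm.yaml" # hypothetical file
  }
}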

Another option could be using terraform-provider-kubectl. It is not an official provider, but it has a fair amount of community support. This might even be the more robust solution: it would work in remote operations, and it wouldn't depend on the cluster to automatically initialize the aws-auth configmap.
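For context, wiring up that provider would look roughly like the following sketch (gavinbunney/kubectl); the cluster connection variables are placeholders, not the module's inputs:

terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14"
    }
  }
}

provider "kubectl" {
  # Connect directly to the EKS API so no local kubeconfig is needed.
  host                   = var.cluster_endpoint                              # placeholder
  cluster_ca_certificate = base64decode(var.cluster_certificate_authority)   # placeholder
  token                  = var.cluster_auth_token                            # placeholder
  load_config_file       = false
}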

@bohdantverdyi
Author

bohdantverdyi commented Mar 23, 2022

An option to create only the configmap (without the RBAC, service account, and jobs) would be enough, because with self-managed groups we only need the configmap.

@aidanmelen
Owner

> A solution would be to add a new option that creates the aws-auth configmap before job execution.

One tricky thing to consider is that this module supports replacing or patching the partially managed aws-auth configmap in place. What would happen if the user provided both patch = true and init-configmap? I suppose we could add a lifecycle ignore to that resource so that it only applies on the first run.
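One way to express that "first run only" behavior, sketched with the kubernetes provider and the same hypothetical flag as above; ignoring changes to data means later patches by the job (or patch = true) would not show up as drift:

resource "kubernetes_config_map" "aws_auth_init" {
  count = var.create_aws_auth ? 1 : 0 # hypothetical flag

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode(var.map_roles)
  }

  lifecycle {
    # Create the configmap once, then let the job/patch manage its contents.
    ignore_changes = [data]
  }
}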

@aidanmelen
Owner

> A better idea would be to add an option to create only the configmap, without the jobs.

I agree. If terraform-provider-kubernetes supported patching on create with kubernetes_manifest, then I would have used that instead of a job.

@aidanmelen aidanmelen changed the title Job couldn't start Failed to initialize aws-auth configmap with only self managed node group Mar 23, 2022
@aidanmelen
Owner

> An option to create only the configmap (without the RBAC, service account, and jobs) would be enough, because with self-managed groups we only need the configmap.

I am working on a new release. Funny enough, I implemented this logic last night. It will create a new aws-auth configmap when it does not exist and patch it when it does.

I am going to experiment with the terraform-kubectl-provider before I push out a release.

@aidanmelen aidanmelen linked a pull request Mar 23, 2022 that will close this issue
@aidanmelen
Owner

@bohdantverdyi this works amazingly well. I am pleasantly surprised.

locals {
  # Merge the roles already present in the cluster's aws-auth configmap
  # (if any) with the roles supplied to this module, dropping duplicates.
  merged_map_roles = distinct(concat(
    try(yamldecode(yamldecode(var.eks.aws_auth_configmap_yaml).data.mapRoles), []),
    var.map_roles,
  ))

  # Render the full aws-auth configmap from the module's template.
  aws_auth_configmap_yaml = templatefile("${path.module}/templates/aws_auth_cm.tpl",
    {
      map_roles    = local.merged_map_roles
      map_users    = var.map_users
      map_accounts = var.map_accounts
    }
  )
}

# kubectl_manifest creates the configmap when it is missing and patches it
# in place when it already exists.
resource "kubectl_manifest" "patch" {
  override_namespace = "kube-system"
  yaml_body          = local.aws_auth_configmap_yaml
}
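(The aws_auth_cm.tpl template itself is not shown in this thread; a plausible shape, assuming standard templatefile functions, might be the following. The actual template in the module may differ.)

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    ${indent(4, yamlencode(map_roles))}
  mapUsers: |
    ${indent(4, yamlencode(map_users))}
  mapAccounts: |
    ${indent(4, yamlencode(map_accounts))}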

Could you test this branch to see if it solves your issue? I plan to release tonight if it works.

@aidanmelen
Owner

aidanmelen commented Mar 24, 2022

I ran a number of tests with kubectl_manifest. It is the most robust implementation I have used so far. It is also really simple, which I think is a plus for anything that deals with authentication.

It works with only self-managed node groups, all node types, no nodes, local runs, remote runs, and removing and adding fields.

@bohdantverdyi
Copy link
Author

Thanks, awesome.
