Instead of shelling out use the kubernetes provider and config_map resource #189
Comments
I had gone down this road a bit as an experiment a little while back, and decided it wasn't worth the effort for what is gained, but I'll provide my notes for reference.

To my knowledge, the terraform kubernetes provider still does not support exec-based authentication. See: hashicorp/terraform-provider-kubernetes#161

The only way to use the kubernetes provider against the cluster is to hand it a token and the cluster certificate directly. It will still require aws-iam-authenticator to be installed. Also, most of the existing template rendering stays in place.

You also need a net new data resource for combining the 2 pieces that make up the current aws-auth config map.

You do replace the null_resource with a kubernetes_config_map resource. In the end, the same dependencies for ensuring all eks resources are rendered before the config map is applied still remain.

A general discussion on expanding the module's capabilities around k8s providers is here for reference: #99
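For reference, a minimal sketch of what that route could look like. The resource, data source, and template names here are illustrative assumptions rather than the module's actual internals, and it assumes the aws_eks_cluster_auth data source is available in the AWS provider version in use (otherwise the token would have to come from aws-iam-authenticator):

```hcl
# Rough sketch only: resource, data source, and template names are illustrative,
# not the module's actual internals.

# One way to obtain a token without kubectl; assumes the aws_eks_cluster_auth
# data source exists in the AWS provider version in use.
data "aws_eks_cluster_auth" "this" {
  name = aws_eks_cluster.this.name
}

provider "kubernetes" {
  host                   = aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}

# Replaces the null_resource that shells out to kubectl apply.
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  # The two pieces of the current aws-auth manifest (worker role mappings plus
  # any extra users/roles) still have to be combined, e.g. from the templates
  # the module already renders (these data source names are hypothetical).
  data = {
    mapRoles = data.template_file.map_roles.rendered
    mapUsers = data.template_file.map_users.rendered
  }
}
```

The ordering problem is unchanged either way: the config map still has to wait on the cluster and on the rendered worker role ARNs, so the explicit dependency wiring does not go away.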
Thanks for taking the time to comment on this, it was really helpful, and your design decisions make more sense now. I found that the new version of the provider in theory should implement the exec functionality. However, your points are completely valid, and I see that it would only reduce the requirement for kubectl to be installed, and still require workarounds, so shelling out isn't that bad. I am going to bring this up with our EKS SA at Amazon, as I feel that configuring the auth of workers via roles is something that should be fronted by an API, not done directly in the config map, as it is so tightly coupled to AWS resources. I could imagine that if the aws-iam-authenticator could accept multiple yaml config files this would be possible.
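For what it's worth, that exec functionality would presumably let the provider call the authenticator itself, along these lines. This is a sketch against the proposed feature with illustrative resource names, not a released provider API at the time of the discussion:

```hcl
# Sketch of exec-based auth, assuming the exec block ships in the kubernetes
# provider as proposed; aws-iam-authenticator still has to be installed locally.
provider "kubernetes" {
  host                   = aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws-iam-authenticator"
    args        = ["token", "-i", aws_eks_cluster.this.name]
  }
}
```

This removes the kubectl dependency but not the aws-iam-authenticator one, which matches the point above.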
@willejs were you able to convince the EKS team to manage worker auth outside of K8S? I think it would be much cleaner that way.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I have issues
When using manage_aws_auth, shelling out and running kubectl in a null resource, whilst it works, is not ideal, nor a terraform best practice. Whilst you can turn this off, the worker lifecycle is tied to this config map being updated with the new role arns of nodes so that the iam authenticator can auth said nodes.
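Roughly, the pattern in question looks like this (a simplified sketch; the module's actual resource, template and file names differ):

```hcl
# Simplified sketch of the shell-out being discussed, not the module's exact code.
resource "null_resource" "update_config_map_aws_auth" {
  # Re-run whenever the rendered aws-auth manifest (and so the set of worker
  # role ARNs) changes.
  triggers = {
    config_map_rendered = data.template_file.config_map_aws_auth.rendered
  }

  provisioner "local-exec" {
    command = "kubectl apply -f ${path.module}/config-map-aws-auth.yaml --kubeconfig ${path.module}/kubeconfig"
  }
}
```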
I think swapping out the null resources for the kubernetes provider and the config_map resource would be a much slicker solution, solving lots of problems.
However, this then brings up the issue of having two providers in this module, and it becoming quite bloated. Perhaps splitting the module out into control plane, workers and config modules would make more sense? Perhaps this can still stay in the main repo, but we could make other modules and include them?
I am going to make a module for the auth config and see how this works. Has this been attempted before?