
Instead of shelling out use the kubernetes provider and config_map resource #189

Closed

willejs opened this issue Nov 14, 2018 · 4 comments

Comments

@willejs

willejs commented Nov 14, 2018

I have issues

When using manage_aws_auth, shelling out and running kubectl in a null resource works, but it is not ideal and not a Terraform best practice. While you can turn this off, the worker lifecycle is tied to this config map being updated with the new role ARNs of nodes so that aws-iam-authenticator can authenticate those nodes.

I think swapping out the null resources for the kubernetes provider and its config_map resource would be a much slicker solution and would solve a lot of problems.

However, this then brings up the issue of having two providers in this module, and of it becoming quite bloated. Perhaps splitting the module into control plane, workers, and config modules would make more sense? Perhaps this can still stay the main repo, but we could make other modules and include them?

I am going to make a module for the auth config and see how it works. Has this been attempted before?
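
Roughly what I have in mind, as a sketch only (the role reference and variable names here are illustrative, not the module's actual code):

```hcl
# Sketch: manage the aws-auth config map directly with the kubernetes
# provider instead of templating YAML and shelling out to kubectl.
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = <<-YAML
      - rolearn: ${aws_iam_role.workers.arn}   # illustrative role reference
        username: system:node:{{EC2PrivateDNSName}}
        groups:
          - system:bootstrappers
          - system:nodes
    YAML
  }
}
```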

I'm submitting a...

- [ ] bug report
- [x] feature request
- [ ] support request
- [ ] kudos, thank you, warm fuzzy
@mmcaya
Contributor

mmcaya commented Nov 15, 2018

I had gone down this road a bit as an experiment a little while back and decided it wasn't worth the effort for what was gained, but I'll provide my notes for reference.

To my knowledge, the terraform kubernetes provider still does not support exec authentication providers, like the one required for eks.

See: hashicorp/terraform-provider-kubernetes#161

The only way to use the kubernetes provider is through the config_path option, which requires all the same kubeconfig rendering logic the module has today, or implementing one of the documented workarounds from the noted issue.

It will still require aws-iam-authenticator either way.
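
Concretely, the provider wiring ends up looking something like this sketch (the kubeconfig path and variable are illustrative):

```hcl
# Sketch: the kubernetes provider is pointed at a rendered kubeconfig file;
# the aws-iam-authenticator exec stanza lives inside that file, so the
# binary still has to be on PATH.
provider "kubernetes" {
  config_path = "${path.module}/kubeconfig_${var.cluster_name}"  # hypothetical path/variable
}
```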

Also, most of the existing data "template_file" resources are still needed for rendering the specific, dynamic config map values supplied to the mapRoles, mapUsers, and mapAccounts config map keys:

data "template_file" "worker_role_arns"
data "template_file" "map_users" {
data "template_file" "map_roles" {
data "template_file" "map_accounts" {

You also need a net-new data resource to combine the two pieces that make up the current mapRoles template (or rework that logic in a different way):

```hcl
${worker_role_arn}
${map_roles}
```
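
One way to sketch that combination, using a local instead of a new data resource (names illustrative):

```hcl
# Sketch: concatenate the rendered worker role entries with the extra
# user-supplied mapRoles entries into one YAML list string.
locals {
  map_roles_yaml = join(
    "\n",
    concat(
      data.template_file.worker_role_arns.*.rendered,
      data.template_file.map_roles.*.rendered,
    ),
  )
}
```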

You do replace `data "template_file" "config_map_aws_auth"` and `resource "null_resource" "update_config_map_aws_auth"` with a single `resource "kubernetes_config_map" "aws_auth"`, but at the cost of the additional provider dependency.

In the end, the same dependencies for ensuring all EKS resources exist before the kubernetes provider is usable remain, so the known issue around provider depends_on described in hashicorp/terraform#2430 would also come into play when attempting to both render a valid kubeconfig and consume it as input to the kubernetes provider in the same apply in the same module.
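
For completeness, one of the documented workarounds is to configure the provider from the cluster resource itself, so the provider configuration cannot be evaluated before the cluster exists. A sketch (resource addresses illustrative):

```hcl
# Sketch: derive the provider credentials from the EKS cluster resource so
# the provider configuration implicitly depends on the cluster.
data "aws_eks_cluster_auth" "this" {
  name = aws_eks_cluster.this.name
}

provider "kubernetes" {
  host                   = aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```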

A general discussion on expanding the modules capabilities around k8s providers is here for reference: #99

@willejs
Author

willejs commented Nov 15, 2018

Thanks for taking the time to comment on this; it was really helpful, and your design decisions make more sense now. I found that the new version of the provider should, in theory, implement the exec functionality. However, your points are completely valid, and I see that it would only remove the requirement for kubectl to be installed while still requiring workarounds, so shelling out isn't that bad.

I am going to bring this up with our EKS SA at Amazon, as I feel that configuring the auth of workers via roles is something that should be fronted by an API rather than written directly into the config map, since it is so tightly coupled to AWS resources. I could imagine this being possible if aws-iam-authenticator could accept multiple YAML config files.
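
For reference, the exec functionality mentioned above would look roughly like this sketch once supported by the provider (cluster references and the variable are illustrative):

```hcl
# Sketch: exec-based credentials in newer kubernetes provider versions,
# removing the need to render a kubeconfig file. aws-iam-authenticator must
# still be installed locally.
provider "kubernetes" {
  host                   = aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws-iam-authenticator"
    args        = ["token", "-i", var.cluster_name]  # hypothetical variable
  }
}
```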

@willejs willejs closed this as completed Nov 15, 2018
@houqp

houqp commented Jul 10, 2019

@willejs were you able to convince the EKS team to manage worker auth outside of K8S? I think it would be much cleaner that way.

@github-actions

github-actions bot commented Dec 1, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Dec 1, 2022