
Add kubernetes_config_map_v1_data resource #1671

Merged: 5 commits, Apr 6, 2022
Conversation

jrhouston (Collaborator)

Description

This PR adds a resource which allows Terraform to use field manager to manage the data within a ConfigMap that already exists. The implementation of this resource is similar to kubernetes_annotations (#1660) and kubernetes_labels (#1616) but does not require the dynamic client, so does not need the user to specify the apiVersion and kind.
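A minimal configuration, based on the arguments exercised later in this thread (metadata, data, and the optional force flag; names and values here are placeholders), might look like:

```hcl
# Manage only the data of a ConfigMap that already exists in the cluster;
# the resource uses server-side apply with a Terraform field manager.
resource "kubernetes_config_map_v1_data" "example" {
  metadata {
    name      = "my-config"
    namespace = "default"
  }

  data = {
    "key" = "value"
  }

  # Optionally take ownership of keys currently owned by another field manager.
  force = true
}
```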

Acceptance tests

  • Have you added an acceptance test for the functionality being added?
  • Have you run the acceptance tests on this branch?

Output from acceptance testing:

=== RUN   TestAccKubernetesConfigMapV1Data_basic
--- PASS: TestAccKubernetesConfigMapV1Data_basic (34.48s)
PASS
ok  	github.com/hashicorp/terraform-provider-kubernetes/kubernetes	35.445s

Release Note

Release note for CHANGELOG:

Add kubernetes_config_map_v1_data resource

References

#723 – one of the frequent use cases in this issue is patching configmaps

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

@jrhouston jrhouston force-pushed the configmap-data-resource branch from 0495501 to 6e78753 Compare March 31, 2022 14:07
@github-actions github-actions bot added size/XL and removed size/M labels Mar 31, 2022
@arybolovlev (Contributor) left a comment:
Two minor comments.

"kubernetes_annotations":      resourceKubernetesAnnotations(),
"kubernetes_labels":           resourceKubernetesLabels(),
"kubernetes_config_map_v1_data": resourceKubernetesConfigMapV1Data(),
Contributor:

Most of our resources have a version at the end of their names; shall we continue this practice and call it kubernetes_config_map_data_v1? I am not sure, though, whether that name might overlap with a native Kubernetes resource in the future. I am totally fine with the current name and can see the logic here too, so I am just curious.

@jrhouston (Collaborator, Author):

Yeah, I wasn't sure where to place the version. I formatted it like this to reflect that we're managing the data field of the kubernetes_config_map_v1 resource, since the data field itself isn't versioned.

@aidanmelen:

🥇

@aidanmelen commented Apr 1, 2022:

Since this PR is aimed at solving #723, I ran a few tests patching the aws-auth configmap that was automatically generated by an AWS EKS cluster. Those test runs were successful 🥳

resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    "mapRoles"    = <<-EOT
    - rolearn: arn:aws:iam::111111111111:role/DemoEKS-NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::111111111111:role/TeamRole
      username: TeamRole
      groups:
      - system:masters
    EOT
    "mapUsers"    = <<-EOT
    - userarn: arn:aws:iam::111111111111:user/sukumar-test-test
      username: sukumar
      groups:
        - system:masters
    EOT
  }

  force = true
}

and the apply...

Terraform will perform the following actions:

  # module.eks_auth.kubernetes_config_map_v1_data.aws_auth will be created
  + resource "kubernetes_config_map_v1_data" "aws_auth" {
      + data  = {
          + "mapAccounts" = <<-EOT
                - "777777777777"
                - "888888888888"
            EOT
          + "mapRoles"    = <<-EOT
                - "groups":
                  - "system:bootstrappers"
                  - "system:nodes"
                  "rolearn": "arn:aws:iam::326504147071:role/foo-eks-node-group-20220401053147969300000001"
                  "username": "system:node:{{EC2PrivateDNSName}}"
                - "groups":
                  - "system:bootstrappers"
                  - "system:nodes"
                  "rolearn": "arn:aws:iam::326504147071:role/boo-node-group-20220401053147969900000003"
                  "username": "system:node:{{EC2PrivateDNSName}}"
                - "groups":
                  - "system:bootstrappers"
                  - "system:nodes"
                  - "system:node-proxier"
                  "rolearn": "arn:aws:iam::326504147071:role/bar-20220401053147969700000002"
                  "username": "system:node:{{SessionName}}"
                - "groups":
                  - "system:masters"
                  "rolearn": "arn:aws:iam::66666666666:role/role1"
                  "username": "role1"
            EOT
          + "mapUsers"    = <<-EOT
                - "groups":
                  - "system:masters"
                  "userarn": "arn:aws:iam::66666666666:user/user1"
                  "username": "user1"
                - "groups":
                  - "system:masters"
                  "userarn": "arn:aws:iam::66666666666:user/user2"
                  "username": "user2"
            EOT
        }
      + force = true
      + id    = (known after apply)

      + metadata {
          + name      = "aws-auth"
          + namespace = "kube-system"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.eks_auth.kubernetes_config_map_v1_data.aws_auth: Creating...
module.eks_auth.kubernetes_config_map_v1_data.aws_auth: Creation complete after 1s [id=kube-system/aws-auth]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

This resource is consistent with the kubectl patch configmap/aws-auth --patch-file aws-auth-configmap.yaml -n kube-system command, as it will also fail if the resource does not exist.

@aidanmelen commented Apr 1, 2022:

This resource will fail if the configmap does not exist. I realize that is by design... But please consider that the aws-auth configmap is only created when managed EKS node groups or Fargate profiles are added to the cluster. As a result, there are at least two scenarios where the aws-auth configmap will not exist on cluster creation. For example:

  1. A cluster with self managed node groups will not automatically create an aws-auth configmap.

  2. An AWS EKS cluster that is deployed without a data plane will not automatically create an aws-auth configmap.

I will admit that these are edge-case scenarios, but I think it would be nice to have a general workaround for this type of situation.

I tried getting around this by using the kubernetes_resource data source to check whether the aws-auth configmap exists on creation. For example:

data "kubernetes_resource" "aws_auth" {
  api_version = "v1"
  kind        = "ConfigMap"

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }
}

output "aws_auth_cm" {
  value = try(data.kubernetes_resource.aws_auth, null)
}

But then I encounter this issue with the provider:

╷
│ Error: Provider produced null object
│ 
│ Provider "provider[\"registry.terraform.io/hashicorp/kubernetes\"]" produced a null value for
│ data.kubernetes_resource.aws_auth.
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

Again, I understand that this is by design, as it is consistent with other data sources like data.aws_ami that also throw an error when no resource is found. But I think it would be nice if the data source returned a null value when a resource does not exist, so that the user can handle the missing resource rather than having the provider throw an error. I think this would create a sane workaround for the issue described above.

@aidanmelen:

I suppose another workaround could be to indirectly check for the existence of aws-auth by checking the managed node group or Fargate profile role outputs from the EKS module. If any exist, use the kubernetes_config_map_v1_data resource; otherwise, use the kubernetes_manifest resource to create the configmap with field_manager.force_conflicts = true.
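A sketch of that conditional approach, assuming hypothetical EKS module output names:

```hcl
locals {
  # The aws-auth ConfigMap is created automatically only when managed node
  # groups or Fargate profiles exist; the output names below are illustrative.
  aws_auth_exists = (
    length(module.eks.managed_node_group_roles) > 0 ||
    length(module.eks.fargate_profile_roles) > 0
  )
}

resource "kubernetes_config_map_v1_data" "aws_auth" {
  # Only patch the ConfigMap when the cluster is expected to have created it.
  count = local.aws_auth_exists ? 1 : 0

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data  = { /* roles and users as in the example above */ }
  force = true
}
```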

@jrhouston (Collaborator, Author):

Thanks for testing this out @aidanmelen and for the great discussion.

Throwing an error when the data source doesn't exist is pretty standard across terraform providers and so I would prefer to stay consistent with that pattern.

However, I do have plans (soon) to add a kubernetes_resources data source that would be a generic way to query for a list of resources, like kubectl's --field-selector and --label-selector, and which would return an empty list if nothing is found. I think that's something you could use to work around the use case you've described above.
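Such a data source could make the existence check straightforward. The attribute names in this sketch are hypothetical, since the feature did not exist at the time of this comment:

```hcl
# Hypothetical shape of the planned kubernetes_resources data source.
data "kubernetes_resources" "aws_auth" {
  api_version    = "v1"
  kind           = "ConfigMap"
  namespace      = "kube-system"
  field_selector = "metadata.name=aws-auth"
}

locals {
  # An empty result would mean the ConfigMap does not exist, with no error
  # thrown, so existence can be handled in plain Terraform expressions.
  aws_auth_exists = length(data.kubernetes_resources.aws_auth.objects) > 0
}
```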

@aidanmelen commented Apr 1, 2022:

Throwing an error when the data source doesn't exist is pretty standard across terraform providers and so I would prefer to stay consistent with that pattern.

It is unfortunate that Terraform took that approach for data resources. But I agree, it is best to stay consistent.

@aidanmelen commented Apr 1, 2022:

However, I do have plans (soon) to add a kubernetes_resources resource that would be a generic way to query for a list of resources like kubectl's --field-selector and --label-selector, which would return an empty list if nothing is found. I think that's something you could use to work around the use-case you've described above.

I like that a lot. I vote yes. It would be really similar to the pattern used by the kubectl_server_version resource in the kubectl provider.

@arybolovlev (Contributor) left a comment:

One minor comment. The rest looks great and works as expected! 🚀 👍🏻

@arybolovlev (Contributor) left a comment:

Awesome! 🚀

github-actions bot commented May 7, 2022:

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 7, 2022