
Add an EKS cluster to an existing argocd #58

Closed
nitrocode opened this issue Jan 11, 2023 · 6 comments

nitrocode commented Jan 11, 2023

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

What is the outcome that you are trying to reach?

I have a single argocd cluster that manages multiple eks clusters. My argocd is installed in an eks cluster red in my shared-corp account. I created a new eks cluster blue in my production account and I'd like to add the new eks cluster blue to my existing argocd.

Currently, I do this by running `argocd cluster add <cluster> --grpc-web`, which creates the following resources in my new eks cluster blue:

  • clusterrole
  • clusterrolebinding
  • kubernetes secret
  • kubernetes service account

It then creates a blue-secret in my argocd eks cluster which contains the necessary information to connect to the new eks cluster.


Adding this feature also enables us to create a new module to bootstrap the new eks cluster with a set of foundational apps. We would then have the full flow.

  • Existing argocd manages all eks clusters
  • A new eks cluster is created using the terraform-aws-eks-blueprints terraform module
  • The eks cluster is added to the existing argocd cluster via the kubernetes/argocd terraform provider using a `kubernetes-addons/argocd-add-cluster` module
  • The eks cluster is bootstrapped with foundational helm charts using argocd's app of apps via the kubernetes/argocd terraform provider using a `kubernetes-addons/argocd-bootstrap-cluster` module
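The flow above could be wired up from a root module roughly like the following. This is a hypothetical sketch only: neither module exists yet, and the module paths and inputs are illustrative, not an existing API.

```hcl
# Hypothetical usage sketch -- module names/paths and inputs are illustrative.

# The proposed module that registers the new cluster with the existing argocd.
module "argocd_add_cluster" {
  source = "./modules/kubernetes-addons/argocd-add-cluster" # does not exist yet

  cluster_name = "blue"
  # cluster endpoint/auth inputs would come from the eks module outputs
}

# The proposed module that bootstraps the cluster with foundational apps
# via argocd's app-of-apps pattern.
module "argocd_bootstrap_cluster" {
  source = "./modules/kubernetes-addons/argocd-bootstrap-cluster" # does not exist yet

  cluster_name = "blue"

  depends_on = [module.argocd_add_cluster]
}
```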

Describe the solution you would like

I'd like a new module, e.g. `modules/kubernetes-addons/add-eks-cluster-to-argocd`, that will create all the necessary declarative resources.

Using the hashicorp/kubernetes provider

https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs

This is tested code:

```hcl
# main.tf

# cluster role and binding

resource "kubernetes_cluster_role" "argocd_manager" {
  metadata {
    name = "argocd-manager-role"
  }

  rule {
    api_groups = ["*"]
    resources  = ["*"]
    verbs      = ["*"]
  }

  rule {
    non_resource_urls = ["*"]
    verbs             = ["*"]
  }
}

resource "kubernetes_cluster_role_binding" "argocd_manager" {
  metadata {
    name = "argocd-manager-role-binding"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.argocd_manager.metadata[0].name
  }

  subject {
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.argocd_manager.metadata[0].name
    namespace = kubernetes_service_account.argocd_manager.metadata[0].namespace
  }
}

# kubernetes secret

resource "kubernetes_secret" "argocd_manager" {
  metadata {
    # Cannot use `generate_name` like the "argocd cluster add" tool does due to the inability to
    # link the generated name for this secret back to the service account.
    # generate_name = "argocd-manager-token-"
    name      = "argocd-manager-token"
    namespace = "kube-system"
    annotations = {
      "kubernetes.io/service-account.name" = "argocd-manager"
    }
  }
  # required for 1.24+
  # https://kubernetes.io/docs/concepts/configuration/secret/#service-account-token-secrets
  type = "kubernetes.io/service-account-token"
}

resource "kubernetes_service_account" "argocd_manager" {
  metadata {
    name      = "argocd-manager"
    namespace = "kube-system"
  }

  secret {
    name = "argocd-manager-token"
    # cannot set to this value without receiving 'secrets "argocd-manager-token-77cq8" not found'
    # name = kubernetes_secret.argocd_manager.metadata[0].name
  }
}

data "kubernetes_secret" "argocd_manager" {
  metadata {
    name      = "argocd-manager-token"
    namespace = "kube-system"
  }

  depends_on = [
    kubernetes_service_account.argocd_manager,
  ]
}

resource "kubernetes_secret" "argocd_cluster_secret" {
  provider = kubernetes.argocd

  metadata {
    name      = "${var.cluster_name}-secret"
    namespace = "argocd"
    labels = {
      "argocd.argoproj.io/secret-type" = "cluster"
    }
  }

  data = {
    name   = var.cluster_name
    server = module.eks.cluster_endpoint
    config = jsonencode({
      "bearerToken" = data.kubernetes_secret.argocd_manager.data["token"]
      "tlsClientConfig" = {
        "insecure"   = false
        "caData"     = base64encode(data.kubernetes_secret.argocd_manager.data["ca.crt"])
        "serverName" = "kubernetes.default.svc.cluster.local"
      }
    })
  }
}
```
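Note that the code above assumes two `kubernetes` provider configurations in the root module: the default one pointing at the new cluster (where the service account and token secret are created) and an aliased `kubernetes.argocd` one pointing at the cluster running argocd (where the declarative cluster secret is written). A minimal sketch of that wiring, with the host/auth details being illustrative and dependent on how you authenticate to each cluster:

```hcl
# providers.tf -- illustrative sketch; adjust authentication to your setup.

# Default provider: the new eks cluster ("blue") where the service account,
# cluster role, and token secret are created.
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.blue.token
}

# Aliased provider: the cluster running argocd, where the cluster secret
# is created in the argocd namespace.
provider "kubernetes" {
  alias                  = "argocd"
  host                   = data.aws_eks_cluster.argocd.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.argocd.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.argocd.token
}
```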

Using the oboukili/argocd provider

https://registry.terraform.io/providers/oboukili/argocd/latest/docs

I have not tested this code:

```hcl
data "aws_eks_cluster" "default" {
  name = "cluster"
}

resource "argocd_cluster" "default" {
  server     = format("https://%s", data.aws_eks_cluster.default.endpoint)
  name       = "eks"
  namespaces = ["default", "optional"]

  config {
    aws_auth_config {
      cluster_name = "myekscluster"
      role_arn     = "arn:aws:iam::<123456789012>:role/<role-name>"
    }
    tls_client_config {
      ca_data = data.aws_eks_cluster.default.certificate_authority[0].data
    }
  }
}
```
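The oboukili/argocd provider itself would also need to be configured against the existing argocd API server. A sketch of that configuration, where the server address and token are placeholders for your environment:

```hcl
# Illustrative configuration for the oboukili/argocd provider.
provider "argocd" {
  server_addr = "argocd.example.com:443" # your argocd API endpoint
  auth_token  = var.argocd_auth_token    # an argocd API token
  grpc_web    = true                     # same transport as `argocd ... --grpc-web`
}
```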

Describe alternatives you have considered

  • Continue using `argocd cluster add <cluster> --grpc-web`

Additional context

@github-actions

This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.

@nitrocode
Author

Unstale

@bryantbiggs bryantbiggs transferred this issue from aws-ia/terraform-aws-eks-blueprints Mar 17, 2023
@FernandoMiguel

We initially used a very similar approach to this but very quickly dropped it. First, you need a cross-cluster provider or, worse, a cross-account provider alias. It is also a clunky approach, where you need to run code against two different states to reconcile.

Eventually we went with a more decoupled approach where the target cluster writes its secret to hashicorp vault, and the argo-cd deployment has an operator that imports those secrets to then add the cluster to argo-cd.

@askulkarni2
Contributor

cc: @csantanapr , @candonov

Joining existing clusters seems to be an interesting use-case.

@nitrocode
Author

I have used the above terraform for a few months and it automatically adds the cluster successfully to an existing central argocd. I think this would be great as its own module.

@bryantbiggs
Contributor

Thank you for the issue! At this time we are re-evaluating the integration between Terraform and GitOps operators like ArgoCD. We have removed the integration from the current project and will be tracking all feedback in #114 while developing the next iteration of this integration.
