
Rancher 2.5 Fleet/Continuous Delivery Resources/Datasources #502

Closed
mitchellmaler opened this issue Nov 4, 2020 · 6 comments

@mitchellmaler

Request to add resources and datasources to manage fleet/continuous delivery in Rancher 2.5+

@tsproull

tsproull commented Dec 9, 2020

I'd like the ability to specify a fleet workspace for cluster imports on the rancher2_cluster resource.

@philomory

As a rough workaround, you can set up your cluster groups and git repos from Terraform using the kubernetes_manifest resource from the kubernetes-alpha provider (eventually to be moved to the "real" kubernetes provider).

It's not a very nice process, but it does allow you to manage Fleet configuration from Terraform for now, while waiting for an update to the rancher2 provider (or a new Fleet-specific provider).
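
For reference, a minimal sketch of that workaround might look like the following. The provider wiring, group name, namespace, and labels are illustrative assumptions, not a tested configuration:

terraform {
  required_providers {
    kubernetes-alpha = {
      source = "hashicorp/kubernetes-alpha"
    }
  }
}

provider "kubernetes-alpha" {
  config_path = "~/.kube/config" # kubeconfig for the cluster where Rancher/Fleet runs
}

# Manage a Fleet ClusterGroup as a raw manifest until native rancher2
# resources exist for this.
resource "kubernetes_manifest" "fleet_cluster_group" {
  provider = kubernetes-alpha

  manifest = {
    "apiVersion" = "fleet.cattle.io/v1alpha1"
    "kind"       = "ClusterGroup"
    "metadata" = {
      "name"      = "example-group" # hypothetical group name
      "namespace" = "fleet-default" # Fleet workspace that holds downstream clusters
    }
    "spec" = {
      "selector" = {
        "matchLabels" = {
          "env" = "dev" # hypothetical label on the target clusters
        }
      }
    }
  }
}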

@sgran

sgran commented Jul 9, 2021

Is there likely to be any work on this, even just adding the fleet workspace parameter? That doesn't seem hard, and we would be happy to knock up a PR if it's likely to get released.

@rawmind0
Contributor

rawmind0 commented Jul 9, 2021

@sgran, more than happy to accept a PR with this feature.

sgran pushed a commit to sgran/terraform-provider-rancher2 that referenced this issue Jul 15, 2021
This closes #502 by adding a new parameter that allows
setting of the FleetWorkspaceName value.

Signed-off-by: Stephen Gran <steve.gran@anaplan.com>
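
For context, once such a parameter exists, assigning an imported cluster to a workspace might look roughly like this. The argument name fleet_workspace_name mirrors the FleetWorkspaceName field from the commit message; both it and the values below are assumptions, not confirmed provider syntax:

resource "rancher2_cluster" "imported" {
  name        = "downstream-example" # hypothetical imported cluster
  description = "Imported cluster assigned to a non-default Fleet workspace"

  # Assumed argument corresponding to the FleetWorkspaceName field in the Rancher API.
  fleet_workspace_name = "team-a-workspace"
}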
@bennysp

bennysp commented Nov 12, 2021

I have successfully used the Terraform below and it works, as long as Rancher is already deployed. However, when you install from scratch and your Rancher install is part of the same Terraform configuration as this manifest, you will get the error below.

Code:

# Create creds for use with Cont Delivery
resource "rancher2_secret_v2" "github-creds" {
  depends_on = [
    rancher2_bootstrap.admin
  ]
  cluster_id = "local"
  name = var.cred-gh-name
  namespace = var.fleet_namespace
  type = "kubernetes.io/ssh-auth"
  data = {
      ssh-publickey = base64decode(local.gh_creds.gh_ssh_pub)
      ssh-privatekey = base64decode(local.gh_creds.gh_ssh_priv)
  }
}

# Add Longhorn for Cont Delivery
resource "kubernetes_manifest" "contdel-longhorn" {
  depends_on = [
    rancher2_secret_v2.github-creds,
    helm_release.helm_rancher,
    null_resource.wait4kubcfg
  ]
  manifest = {
    "apiVersion" = "fleet.cattle.io/v1alpha1"
    "kind"       = "GitRepo"
    "metadata" = {
      "name"      = "longhorn"
      "annotations" = {
        "field.cattle.io/description" = "Persistent Storage for Rancher clusters"
      }
      "namespace" = var.fleet_namespace
    }
    "spec" = {
      "branch" = "main"
      "clientSecretName" = var.cred-gh-name
      "insecureSkipTLSVerify": "true"
      "paths" = [
        "/helm/longhorn"
      ]
      "repo" = var.fleet_gh_url
      "targets" = [{
        "clusterSelector" = {}
          }]
    }
  }
}

Error:

│ Error: Failed to determine GroupVersionResource for manifest
│ 
│   with kubernetes_manifest.contdel-longhorn,
│   on rancher_contdelivery.tf line 19, in resource "kubernetes_manifest" "contdel-longhorn":
│   19: resource "kubernetes_manifest" "contdel-longhorn" {
│ 
│ no matches for kind "GitRepo" in group "fleet.cattle.io"

It appears that this is a bug in the kubernetes_manifest resource, as you can see in this issue:
hashicorp/terraform-provider-kubernetes#1367

It looks like people are working around this by creating a CRD but I have not tried that yet.
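
Separate from pre-creating the CRDs, another way to sidestep the plan-time lookup is to apply the GitRepo with a provider that only validates at apply time, such as the community kubectl provider. A rough sketch, with placeholder repo, branch, namespace, and secret values:

terraform {
  required_providers {
    kubectl = {
      source = "gavinbunney/kubectl"
    }
  }
}

resource "kubectl_manifest" "contdel_longhorn" {
  # Wait for Rancher (and therefore the fleet.cattle.io CRDs) to exist first.
  depends_on = [helm_release.helm_rancher]

  yaml_body = <<-YAML
    apiVersion: fleet.cattle.io/v1alpha1
    kind: GitRepo
    metadata:
      name: longhorn
      namespace: fleet-local
    spec:
      repo: https://github.com/example/fleet-config  # placeholder repo URL
      branch: main
      clientSecretName: github-creds                 # placeholder secret name
      paths:
        - /helm/longhorn
      targets:
        - clusterSelector: {}
  YAML
}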

@matttrach
Collaborator

This issue was last commented on 3 years ago, and the upstream issue linked above has a workaround. Personally, I feel there is a point in most Terraform configs where you need to break them down into multiple configs and orchestrate them. I have orchestrated with CI or with Terraform itself in the past; I lean more towards Terraform orchestrating Terraform in my modules to reduce the number of dependencies, though I have found orchestrating with CI very easy to manage and simple to understand. I am going to close this issue as stale, but feel free to open a new one if this is still an issue.

matttrach self-assigned this Sep 11, 2024