This repository has been archived by the owner on Aug 11, 2021. It is now read-only.

Unable to deploy a kubernetes manifest when relying on another resource #123

Open
cwoolum opened this issue Sep 24, 2020 · 15 comments

@cwoolum commented Sep 24, 2020

Terraform Version and Provider Version

0.13.0

Kubernetes Version

1.17.9

Affected Resource(s)

  • kubernetes_manifest

Terraform Configuration Files

resource "azurerm_user_assigned_identity" "analytics-identity" {
  name                = "sp-api-analytics-${var.environment}"
  resource_group_name = azurerm_resource_group.analytics-resource-group.name
  location            = "West US"
}

resource "kubernetes_manifest" "azure_identity" {
  provider = kubernetes-alpha

  manifest = {
    "apiVersion" = "aadpodidentity.k8s.io/v1"
    "kind"       = "AzureIdentity"
    "metadata" = {
      "aadpodidentity.k8s.io/Behavior" = "namespaced"
      "name"                           = "analytics-identity"
      "namespace"                      = "api"
    }
    "spec" = {
      "clientID"   = azurerm_user_assigned_identity.analytics-identity.client_id
      "resourceID" = azurerm_user_assigned_identity.analytics-identity.id
      "type"       = 0
    }
  }

  depends_on = [
    azurerm_user_assigned_identity.analytics-identity
  ]
}


Expected Behavior

I would expect the resource to be deployed, with validation of the clientID and resourceID fields deferred until the azurerm_user_assigned_identity has been created.

Actual Behavior

Error: rpc error: code = Unknown desc = value is not known

Steps to Reproduce

  1. Set server_side_planning = true in the provider configuration (see the sketch below)
  2. terraform apply
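
A minimal provider configuration for step 1 might look like this (a sketch; the config_path value is an assumption about how the cluster credentials are supplied):

provider "kubernetes-alpha" {
  config_path          = "~/.kube/config" # assumption: kubeconfig-based auth
  server_side_planning = true
}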

Important Factoids

If server-side planning is not enabled, the plan step instead fails with:

Error: rpc error: code = Unknown desc = failed to get resource type from OpenAPI (ID io.k8s.api.aadpodidentity.v1.AzureIdentityBinding): invalid type identifier

Using static values for clientID and resourceID does work correctly.
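
For example, a spec block with hard-coded placeholders (the GUID and resource ID below are illustrative, not real values) plans and applies fine:

"spec" = {
  "clientID"   = "00000000-0000-0000-0000-000000000000"  # placeholder GUID
  "resourceID" = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/analytics-dev/providers/Microsoft.ManagedIdentity/userAssignedIdentities/sp-api-analytics-dev"  # placeholder resource ID
  "type"       = 0
}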


Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
cwoolum added the bug label on Sep 24, 2020
@ojizero commented Oct 26, 2020

The second error you got (when disabling server-side planning) looks like Terraform constructing the wrong OpenAPI ID for the object type: it defaults to prefixing type names with io.k8s.api, so the lookup fails for any CRD.

I'm facing a similar issue where I'm deploying a resource from a CRD (the ClickhouseInstallation CRD from Altinity's ClickHouse operator). With server-side planning enabled, the provider attempts a dry run but fails whenever the manifest depends on other resources (a namespace, for example); with server-side planning turned off, it no longer identifies the CRD at all, since it looks up the wrong ID in the OpenAPI response.

@Nainterceptor

Hello,

I have the same error with a simple random_password feeding an ArangoDB CRD (without server_side_planning = true in the provider configuration).

resource "random_password" "arangodb-password" {
  length  = 32
  special = false
}

resource "kubernetes_manifest" "test" {
  provider = kubernetes-alpha
  depends_on = [random_password.arangodb-password]

  manifest = {
    "apiVersion" = "database.arangodb.com/v1alpha"
    "kind" = "ArangoDeployment"
    "metadata" = {
      "name" = "reviews-instance"
      "namespace" = kubernetes_namespace.arangodb.metadata[0].name // OK here
    }
    "spec" = {
      "mode" = "Single"
      "bootstrap" = {
        "passwordSecretNames" = {
          "root" = random_password.arangodb-password.result //Failure here
        }
      }
      "dbservers" = {
        "resources" = {
          "limits" = {
            "cpu" = "400m"
            "memory" = "512Mi"
          }
          "requests" = {
            "cpu" = "100m"
            "memory" = "256Mi"
          }
        }
      }
    }
  }
}

@marcellodesales

I'm running into the same problem, possibly related to #129 (comment). I'm also using server_side_planning = true.

@alexsomesan (Member)

This is expected to happen when using "server-side planning", because in that mode the provider makes dry-run API calls to Kubernetes during the planning step. However, at plan time no resources have been created yet, so interpolating values from other resources produces this conflict.

The long-term solution is in PR #41.
Until that merges, you can work around the issue by using the -target argument to terraform apply to apply the dependent resources first, and then apply the kubernetes_manifest resource in a later apply step.

For example, you could do:

> terraform apply -target=azurerm_user_assigned_identity.analytics-identity

and once that's finished, you can now do another terraform apply to finally create the kubernetes_manifest resource.
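
The full sequence, for clarity:

> terraform apply -target=azurerm_user_assigned_identity.analytics-identity
> terraform apply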

Further operations should behave as expected as long as all dependent resources have state available in the state file.

I hope that makes enough sense.

You can read more about the -target parameter here: https://www.terraform.io/docs/commands/plan.html#resource-targeting

@rabidscorpio

@alexsomesan PR #41 hasn't had a commit since June. Is there something preventing it from being merged?

Work on this provider seems extremely sporadic, with just 60 commits in almost a year. Given that this is supposed to replace the main kubernetes provider, and given how popular Kubernetes is, is there any roadmap, release date, or plan for this?

@aareet (Member) commented Nov 2, 2020

Hi @rabidscorpio, the work on this alpha provider is experimental and involves some trial and error as we arrive at an acceptable implementation internally. Rest assured we're working to have this merged into the official provider ASAP and will share a date when we're able.

Since the provider currently bypasses the SDK in order to achieve CRD support, we are working to update the provider to SDKv2. This is a prerequisite for #41 to be merged.

@rabidscorpio

@aareet I appreciate the response, thank you. It looks like there's mainly a single developer working on this at a time when Kubernetes is hugely popular, so it's frustrating that the project seems to be moving so slowly.

I'm new to the kubernetes provider (but not the AWS provider), so I'm reluctant to put effort into the mainline kubernetes provider if it's just going to be replaced by this one.

@aareet (Member) commented Nov 2, 2020

I understand and appreciate your patience while we work through the various steps to GA :) Just to note, this provider is not intended to replace the official provider. Rather, we have been working on terraform-plugin-mux as a path to expose the manifest resource in the official provider. So if you were to implement something new in the official provider, it would likely not be replaced by the work being done here.

@BrandonALXEllisSS

#41 was superseded by #151, which was merged... but I'm still running into this same problem...?

Has anyone been able to get this to work?

@dwrusse commented Jun 10, 2021

Also not working for us. In our case, the kubernetes_manifest depends on a helm_release that deploys additional API resources.

@alexsomesan (Member)

@dwrusse and @BrandonALXEllisSS can you share your configuration and provider versions so we can try to reproduce your issues?

@dwrusse commented Jun 11, 2021

resource "helm_release" "cert-manager" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  namespace  = "monitoring"
  version    = "1.3.1"

  set {
    name  = "podLabels.aadpodidbinding"
    value = "cert-manager"
  }

  set {
    name  = "installCRDs"
    value = "true"
  }

  set {
    name  = "extraArgs"
    value = "{--issuer-ambient-credentials}"
  }
}


resource "kubernetes_manifest" "staging-issuer" {
    provider = kubernetes-alpha
    manifest = {
      "apiVersion" = "cert-manager.io/v1"
      "kind" = "Issuer"
      "metadata" = {
        "name" = "letsencrypt-staging"
        "namespace" = "monitoring"
      }
      "spec" = {
        "acme" = {
          "email" = REDACTED
          "privateKeySecretRef" = {
            "name" = "letsencrypt-staging"
          }
          "server" = "https://acme-staging-v02.api.letsencrypt.org/directory"
          "solvers" = [
            {
              "dns01" = {
                "azureDNS" = {
                  "environment" = "AzurePublicCloud"
                  "hostedZoneName" = data.azurerm_dns_zone.sub.name
                  "resourceGroupName" = data.azurerm_dns_zone.sub.resource_group_name
                  "subscriptionID" = data.azurerm_client_config.current.subscription_id
                }
              }
            },
          ]
        }
      }
    }
    depends_on = [helm_release.cert-manager]
}
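
For what it's worth, the -target workaround suggested above would presumably translate to this configuration as:

> terraform apply -target=helm_release.cert-manager
> terraform apply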

@NybbleHub

Hi all,

I have the same problem as @dwrusse; my cert-manager configuration is exactly the same. Here is my playbook:

  1. Create an AKS cluster
  2. Get the kube_admin credentials for the kubernetes-alpha provider configuration
  3. Install cert-manager with its CRDs through Helm
  4. Create a ClusterIssuer through a kubernetes_manifest resource

Here is the Terraform and provider version information:

Terraform v1.0.1
on darwin_amd64

  • provider registry.terraform.io/gavinbunney/kubectl v1.11.2
  • provider registry.terraform.io/hashicorp/azuread v1.6.0
  • provider registry.terraform.io/hashicorp/azurerm v2.64.0
  • provider registry.terraform.io/hashicorp/helm v2.2.0
  • provider registry.terraform.io/hashicorp/kubernetes v2.3.2
  • provider registry.terraform.io/hashicorp/kubernetes-alpha v0.5.0
  • provider registry.terraform.io/hashicorp/local v2.1.0

Here is the error I get with plan:

 Error: Failed to construct REST client
 
   with module.cert-manager.kubernetes_manifest.acme-cluster-issuer,
   on modules/cert-manager/cert-manager.tf line 40, in resource "kubernetes_manifest" "acme-cluster-issuer":
   40: resource "kubernetes_manifest" "acme-cluster-issuer" {
 
cannot create REST client: no client config
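
For reference, the provider is wired to the AKS admin credentials roughly like this (a sketch; the azurerm_kubernetes_cluster resource name "aks" is an assumption):

provider "kubernetes-alpha" {
  host                   = azurerm_kubernetes_cluster.aks.kube_admin_config[0].host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config[0].client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config[0].client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_admin_config[0].cluster_ca_certificate)
}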

@sambonbonne

Hello,

I have a similar problem. My module flow:

  • the environments/local module creates a Kubernetes cluster with kind and injects some resources
  • this module includes the common/kubernetes module and passes it the kubernetes-alpha provider (configured with the output of the resource that creates the kind cluster) in order to install the RabbitMQ cluster operator manifest

Version information:

Terraform v1.0.2
on linux_amd64
+ provider registry.terraform.io/gavinbunney/kubectl v1.11.2
+ provider registry.terraform.io/hashicorp/external v2.1.0
+ provider registry.terraform.io/hashicorp/helm v2.2.0
+ provider registry.terraform.io/hashicorp/http v2.1.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.3.2
+ provider registry.terraform.io/hashicorp/kubernetes-alpha v0.5.0
+ provider registry.terraform.io/hashicorp/null v3.1.0
+ provider registry.terraform.io/hashicorp/time v0.7.1
+ provider registry.terraform.io/hashicorp/tls v3.1.0
+ provider registry.terraform.io/kyma-incubator/kind v0.0.9

The error:

╷
│ Error: Failed to construct REST client
│
│   with module.kind_kubernetes.kubernetes_manifest.rabbitmq_namespace,
│   on ../../common/kubernetes/resources.tf line 130, in resource "kubernetes_manifest" "rabbitmq_namespace":
│  130: resource "kubernetes_manifest" "rabbitmq_namespace" {
│
│ cannot create REST client: no client config
╵

@carlcauchi commented Aug 8, 2021

resource "helm_release" "cert-manager" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  namespace  = "monitoring"
  version    = "1.3.1"

  set {
      name = "podLabels.aadpodidbinding"
      value = "cert-manager"
  }
  set {
    name = "installCRDs"
    value = true
  }

  set {
    name = "extraArgs"
    value = "{--issuer-ambient-credentials}"
  }
}


resource "kubernetes_manifest" "staging-issuer" {
    provider = kubernetes-alpha
    manifest = {
      "apiVersion" = "cert-manager.io/v1"
      "kind" = "Issuer"
      "metadata" = {
        "name" = "letsencrypt-staging"
        "namespace" = "monitoring"
      }
      "spec" = {
        "acme" = {
          "email" = REDACTED
          "privateKeySecretRef" = {
            "name" = "letsencrypt-staging"
          }
          "server" = "https://acme-staging-v02.api.letsencrypt.org/directory"
          "solvers" = [
            {
              "dns01" = {
                "azureDNS" = {
                  "environment" = "AzurePublicCloud"
                  "hostedZoneName" = data.azurerm_dns_zone.sub.name
                  "resourceGroupName" = data.azurerm_dns_zone.sub.resource_group_name
                  "subscriptionID" = data.azurerm_client_config.current.subscription_id
                }
              }
            },
          ]
        }
      }
    }
    depends_on = [helm_release.cert-manager]
}

@dwrusse same issue here...
