Unable to deploy a kubernetes manifest when relying on another resource #1380
Can you clarify if you mean using a `depends_on` reference between the resources?
@Quintasan, same here with the cert-manager Helm chart and the `kubernetes_manifest` resource.
Same thing with any CRD. If you run your Terraform code from scratch (with no resources existing yet) and you want your `kubernetes_manifest` to create an object of a CRD type that another resource installs, the plan fails because the CRD's API doesn't exist yet; the point of using `depends_on` is exactly to express that ordering. Right now the other possibility is to handle the deployment of a stack in two steps:
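For illustration, a minimal sketch of such a two-step rollout using Terraform's `-target` flag (the resource name is a placeholder, not from the original comment):

```
# Step 1: create only the release that installs the CRDs
terraform apply -target=helm_release.cert_manager

# Step 2: a full apply can now plan the CRD-based resources
terraform apply
```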
@Raclaw-jl, agreed! This should be fixed, but Terraform has always had problems with CRD support... 😄
This is a Terraform limitation, not specific to Kubernetes. The limitation comes from not having all the data required at the planning stage. Another example of this limitation would be planning new namespaces in a still-to-be-created k8s cluster. Edit: the discussion previously shared didn't fully match the scope of this known Terraform limitation.
A working hack is to use separate modules for the Helm release and the CRD-based resources. Hope it helps someone:
resource "helm_release" "cert-manager" {
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
namespace = "ingress"
create_namespace = true
version = "1.5.3"
set {
name = "installCRDs"
value = true
}
timeout = 150
}
resource "kubernetes_manifest" "issuer" {
manifest = {
apiVersion = "cert-manager.io/v1"
kind = "ClusterIssuer"
....
}
module "cert-manager" {
source = "./modules/cert-manager"
}
module "certificates" {
depends_on = [module.cert-manager]
source = "./modules/certificates"
}
Same for me. Cannot use `depends_on` for a Kubernetes Terraform resource. Awaiting this feature.
Also facing a similar issue. In case it helps anyone, I ended up using a different workaround: you can wrap the CRD-based object in a `helm_release` using a generic "raw" chart, so the manifest is rendered by Helm rather than planned by the Kubernetes provider.
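A minimal sketch of that workaround, assuming a "raw" chart that accepts a `resources` list in its values (as the itscontained chart mentioned later in this thread did; the repository URL, versions, and manifest contents are placeholders):

```
resource "helm_release" "cluster_issuer" {
  name       = "cluster-issuer"
  repository = "https://helm-charts.wikimedia.org/stable/"
  chart      = "raw"
  version    = "0.3.0"

  # The manifest is rendered by Helm at apply time, so nothing queries
  # the (not yet existing) CRD API during plan.
  values = [
    yamlencode({
      resources = [{
        apiVersion = "cert-manager.io/v1"
        kind       = "ClusterIssuer"
        metadata   = { name = "selfsigned" }
        spec       = { selfSigned = {} }
      }]
    })
  ]

  depends_on = [helm_release.cert_manager]
}
```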
This issue needs to be fixed, but there is a workaround for those interested (mentioned here): use terraform-provider-kubectl, which allows you to apply a YAML file without checking that the type and apiVersion exist during the plan stage. Example from the mentioned issue:
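The original example isn't preserved above; a minimal sketch of the approach (the file name and release reference are placeholders) could look like:

```
terraform {
  required_providers {
    kubectl = {
      source = "gavinbunney/kubectl"
    }
  }
}

resource "kubectl_manifest" "issuer" {
  # yaml_body is only parsed, not validated against the cluster, during
  # plan, so the CRD can be created in the same apply.
  yaml_body = file("${path.module}/cluster-issuer.yaml")

  depends_on = [helm_release.cert_manager]
}
```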
The `kubernetes_manifest` resource in provider `hashicorp/kubernetes` has a known issue[1] where resources created in a manifest can't depend on other resources that don't exist yet. To work around this, we instead use `gavinbunney/kubectl`'s `kubectl_manifest` resource, which does not have this problem because it uses a different mechanism for planning. [1] hashicorp/terraform-provider-kubernetes#1380 Resolves #1088
Hi all, I'm facing a similar issue when using the `depends_on` flag with the stated Kubernetes provider, where my code is as follows:
This seems to result in:
I'm using Terraform 1.1.3. Whilst the docs do say that a kubeconfig needs to be present to use `kubernetes_manifest`, I want to understand why this is, as other resources that get deployed to the cluster, such as a storage class or a namespace, do not require a kubeconfig; rather, the connection seems to be derived from the provider values.
As a result, I'm a little bit baffled by the dependency on the kubeconfig. Doc links referenced:
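For what it's worth, `kubernetes_manifest` must reach the API server during plan (it fetches the schema for the manifest's type), so the provider needs working credentials before planning. A sketch of configuring the provider with explicit credentials instead of a kubeconfig file (the variable names are placeholders; the attribute names are real hashicorp/kubernetes provider arguments):

```
provider "kubernetes" {
  host                   = var.cluster_endpoint
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
  token                  = var.cluster_token
}
```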
Hi, I bumped into the same issue while injecting an SSH key into a `kubernetes_manifest` resource:
The error I get:
@emilianofs, try:
The above should produce the result; if not, you could also use an alternative, found below, where the data is rendered in memory.
@dc232 Removing `yamldecode()`:
Using the `template_file` data source:
The only way I can get it to work is by defining the manifest in HCL instead of using the YAML template. There is a really useful tool to convert a YAML manifest to HCL, k2tf: https://github.com/sl1pm4t/k2tf
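For example, the SSH-key case above written directly in HCL instead of a YAML template (a sketch; `var.ssh_public_key` and the names are hypothetical placeholders):

```
resource "kubernetes_manifest" "ssh_key" {
  manifest = {
    apiVersion = "v1"
    kind       = "Secret"
    metadata = {
      name      = "ssh-key"
      namespace = "default"
    }
    # Plain HCL interpolation; no yamldecode() involved.
    stringData = {
      "id_rsa.pub" = var.ssh_public_key
    }
  }
}
```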
Is this issue present on the roadmap?
This issue still exists.
Can any of the recent reporters please provide an example that causes this issue?
Unfortunately, I have no example in my saved snippets, but I still remember my thoughts on why it happens. If I remember correctly, to reproduce this you need two resources in your Terraform project:
1. a resource that installs a CRD (for example, a `helm_release` that deploys cert-manager with its CRDs), and
2. a `kubernetes_manifest` that creates an object of that CRD's type.
If I understand right, the problem is that when Terraform refreshes state it tries to query Kubernetes to check whether the CRD item (2nd point) exists, but k8s returns an error, because the CRD itself wasn't created yet (so the API for this CRD does not exist yet). P.S. Sorry for not providing an actual reproduction snippet, but I have no cluster to reproduce this right now.
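A minimal, untested sketch of such a reproduction (the names and chart are placeholders, not from the original comment):

```
# 1) Installs the CRDs (here via cert-manager's chart).
resource "helm_release" "cert_manager" {
  name             = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  namespace        = "cert-manager"
  create_namespace = true

  set {
    name  = "installCRDs"
    value = "true"
  }
}

# 2) Creates an object of the CRD's type. Planning this resource queries
#    the cluster for the ClusterIssuer API, which does not exist yet on a
#    fresh cluster, so `terraform plan` fails despite the depends_on.
resource "kubernetes_manifest" "issuer" {
  manifest = {
    apiVersion = "cert-manager.io/v1"
    kind       = "ClusterIssuer"
    metadata   = { name = "selfsigned" }
    spec       = { selfSigned = {} }
  }

  depends_on = [helm_release.cert_manager]
}
```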
This module would not work if `kubernetes_manifest` were used in place of `kubectl_manifest`:

```
terraform {
required_providers {
kubernetes = {
source = "registry.terraform.io/hashicorp/kubernetes"
version = "2.12.1"
}
helm = {
source = "hashicorp/helm"
version = "2.6.0"
}
kubectl = {
source = "gavinbunney/kubectl"
version = "1.14.0"
}
}
}
variable "namespace" {
type = string
description = "k8s namespace used in this module"
}
variable "email" {
type = string
description = "Email address that Let's Encrypt will use to send notifications about expiring certificates and account-related issues to."
sensitive = true
}
variable "api_token" {
type = string
description = "API Token for Cloudflare"
sensitive = true
}
resource "helm_release" "cert_manager" {
name = "cert-manager"
namespace = var.namespace
repository = "https://charts.jetstack.io"
chart = "cert-manager"
version = "1.9.1"
set {
name = "installCRDs"
value = "true"
}
}
# Make the API Token a secret available globally
resource "kubernetes_secret_v1" "letsencrypt_cloudflare_api_token_secret" {
metadata {
name = "letsencrypt-cloudflare-api-token-secret"
namespace = var.namespace
}
data = {
"api-token" = var.api_token
}
}
resource "kubectl_manifest" "letsencrypt_issuer_staging" {
yaml_body = templatefile(
"${path.module}/letsencrypt-issuer.tpl.yaml",
{
"name" = "letsencrypt-staging"
"email" = var.email
"server" = "https://acme-staging-v02.api.letsencrypt.org/directory"
"api_token_secret_name" = kubernetes_secret_v1.letsencrypt_cloudflare_api_token_secret.metadata.0.name
"api_token_secret_data_key" = keys(kubernetes_secret_v1.letsencrypt_cloudflare_api_token_secret.data).0
}
)
depends_on = [
# Need to install the CRDs first
helm_release.cert_manager
]
}
resource "kubectl_manifest" "letsencrypt_issuer_production" {
yaml_body = templatefile(
"${path.module}/letsencrypt-issuer.tpl.yaml",
{
"name" = "letsencrypt-production"
"email" = var.email
"server" = "https://acme-v02.api.letsencrypt.org/directory"
"api_token_secret_name" = kubernetes_secret_v1.letsencrypt_cloudflare_api_token_secret.metadata.0.name
"api_token_secret_data_key" = keys(kubernetes_secret_v1.letsencrypt_cloudflare_api_token_secret.data).0
}
)
depends_on = [
# Need to install the CRDs first
helm_release.cert_manager
]
}
```
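The referenced `letsencrypt-issuer.tpl.yaml` isn't included in the comment; a hypothetical sketch consistent with the variables passed to `templatefile()`, assuming cert-manager's DNS-01 Cloudflare solver, might be:

```
# Hypothetical template, not from the original comment.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ${name}
spec:
  acme:
    email: ${email}
    server: ${server}
    privateKeySecretRef:
      name: ${name}
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: ${api_token_secret_name}
              key: ${api_token_secret_data_key}
```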
@winston0410 thanks a lot for sharing the example!
Are there any plans to resolve this issue? I am running into the same issue as @alexsomesan but, unfortunately, don't have the option to run multiple operations. One of the main reasons for going with Terraform for our k8s setup was having a single tool for cloud and cluster setup.
@Blunderchips not being able to do multi-stage applies is a problem that you will find in many cases, such as when you provision a cluster through Google Cloud and then want to install something on it. Terraform just can't compute the final state, and that's the main reason for multi-stage applies. I am running a setup like the one you mention and it works wonders.
I tried to migrate to kubernetes_manifest after kubectl_manifest started to behave flakily and produce inconsistent results provisioning a ClusterIssuer for cert-manager. This is the only workaround I could find that doesn't require a separate run context. The itscontained chart is no longer available; I replaced it with https://artifacthub.io/packages/helm/wikimedia/raw
Regarding the suggestions in #1380 (comment) and #1380 (comment), @DaniJG posted a nice self-contained explanation on Medium: "Avoid the Terraform kubernetes_manifest resource".
On Thu, 30 Nov 2023 at 12:52, Mina Farrokhnia wrote:

> @robertobado You mentioned that you replaced it with https://artifacthub.io/packages/helm/itscontained/raw as the itscontained chart is not working; I am wondering which repository you used? I've also tested this and it did not work:
>
> ```
> repository = "https://charts.itscontained.io"
> chart      = "raw"
> version    = "0.2.5"
> ```
>
> I tried to find a new working repo from the artifacthub link (https://artifacthub.io/packages/helm/itscontained/raw?modal=install), but clicking on INSTALL shows me the same repo that I tried earlier. Here is the error that I am facing:
>
> ```
> helm repo add itscontained https://charts.itscontained.io
> Error: looks like "https://charts.itscontained.io" is not a valid chart repository or cannot be reached: Get "https://charts.itscontained.io/index.yaml": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2023-11-30T12:04:34+01:00 is after 2023-09-06T01:35:52Z
> ```

This currently works for me:

```
chart      = "raw"
repository = "https://helm-charts.wikimedia.org/stable/"
version    = "0.3.0"
```
We're using this one from dysnix:
kubectl_manifest doesn't look maintained indeed. When some apply fails, for whatever reason, the object gets tainted, and the next plan/apply re-creates objects: that is, delete everything, then create everything. At which point the Helm provider is pretty much the worst thing I've ever used for managing Kubernetes. And my company did write their own ungodly ansible-playbooks-wrapped-in-go Terraform provider... How come we don't have a single viable, feature-complete Terraform provider for managing Kubernetes?!
My issue is not related to having a CRD dependency, but rather to a simple variable interpolation within the `yamldecode()` text in the `manifest` attribute.
As mentioned in #1380 (comment), it seems like removing `yamldecode()` works around that.
I just tried this (maybe incorrectly?), and it didn't work for me; I guess that's what the thumbs-down meant, but I wasn't sure. It really feels bad to have to use an extra workspace/module just to have the CRD applied before being able to use it. This one, though, seems to pass the planning phase (but I have other demons to fight before being sure it also applies).
Yes, that one works: it works for me, as I confirmed in #1380 (comment). See also my #1380 (comment) on @DaniJG's solution.
Since https://github.com/hashicorp/terraform-provider-kubernetes-alpha was archived and we can no longer comment on hashicorp/terraform-provider-kubernetes-alpha#123, I'm just going to cross-post it here so this doesn't get forgotten.