
Plan/apply fails when namespace does not yet exist #57

Closed
Starefossen opened this issue Jun 6, 2020 · 5 comments · Fixed by #151
Labels
bug Something isn't working

Comments


Starefossen commented Jun 6, 2020

Terraform Version and Provider Version

Terraform v0.12.26
+ provider.external v1.2.0
+ provider.helm v1.2.2
+ provider.kubernetes v1.11.3
+ provider.kubernetes-alpha (unversioned)
+ provider.ldap v1.0.0
+ provider.local v1.4.0
+ provider.null v2.1.2
+ provider.random v2.2.1
+ provider.template v2.1.2

Affected Resource(s)

  • kubernetes_manifest

Terraform Configuration Files

resource "kubernetes_namespace" "test_namespace" {
  metadata {
    name = "my-ns"
  }
}

resource "kubernetes_manifest" "test_configmap" {
  provider = kubernetes-alpha

  manifest = {
    "apiVersion" = "v1"
    "kind" = "ConfigMap"
    "metadata" = {
      "name" = "test-config"
      "namespace" = kubernetes_namespace.test_namespace.metadata.0.name
    }
    "data" = {
      "foo" = "bar"
    }
  }
}

Debug Output

Panic Output

Expected Behavior

What should have happened?

Expected the plan/apply to succeed: first the namespace is created as defined, then the additional resources within it.

Actual Behavior

What actually happened?

Error: rpc error: code = Unknown desc = update dry-run failed: namespaces "my-ns" not found

Steps to Reproduce

Use the above example.

Important Factoids

References

  • GH-1234

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
Starefossen added the bug label on Jun 6, 2020
alexsomesan (Member) commented:

Hi @Starefossen,

TL;DR: This is a known limitation and should no longer manifest once local planning is in place; that work is in progress in PR #41.

What you are describing is unfortunately a known limitation of server-side planning. It is mentioned under "Limitations" in the blog post where we announced the provider: https://www.hashicorp.com/blog/deploy-any-resource-with-the-new-kubernetes-provider-for-hashicorp-terraform/

The issue is that, in server-side planning mode, the provider makes dry-run calls to the Kubernetes API to verify that the resource is configured correctly and to retrieve default values for unset attributes.

When two or more resources are involved in an operation, Terraform schedules all of the Plan actions before all of the Apply actions. This means the dry-run call for the ConfigMap is made before any Apply calls. At that point the Namespace does not yet exist on the API server, so the dry-run call for the ConfigMap fails.

Unfortunately, there is no way to address this when using server-side planning without reordering the sequence of Plan / Apply calls that Terraform itself makes to the provider for each resource. The problem should not be present when using local planning, once PR #41 is merged.
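
A workaround that sometimes helps in the meantime (not a fix, just a way to sequence the operations) is to create the namespace with a targeted apply first, so it already exists when the provider runs its dry-run for the dependent resources:

# Apply only the namespace first so it exists on the API server.
terraform apply -target=kubernetes_namespace.test_namespace

# Then run a full apply; the server-side dry-run for the ConfigMap
# now finds the "my-ns" namespace.
terraform apply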


dirien commented Sep 24, 2020

So this renders the kubernetes provider somewhat useless until #41 is ready?

alexsomesan (Member) commented Sep 25, 2020 via email


eahrend commented Sep 29, 2020

Just to provide an example of the inverse of this: I have a Terraform module that creates the namespace and then creates objects inside that namespace using the kubernetes provider. Ideally, for a decent developer experience, I should be able to do one of two things:

  1. Create the namespace resource; then, if I attempt to create the same namespace from a different module that uses the kubernetes provider with the same parameters, Terraform will see that the namespace already exists and skip the apply.
  2. In any object other than the namespace resource, if a namespace is declared that does not exist, it will be created; if it already exists, namespace creation is skipped (see the conditional-creation sketch after the module code below).

A specific use case for this, roughly sketched after the list:

  1. I create a pubsub topic in GCP and I create a GKE cluster.
  2. I create a service account and key with specific rights to publish to that pubsub topic.
  3. I take that service account key and create a secret in an application service namespace.
  4. I also create a pubsub subscription for that topic in GCP and create a service account with rights to consume messages as well as upload files to GCS.
  5. I want to create another secret in that namespace with the key from step 4.
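
A minimal sketch of the publisher half of that flow (steps 1–3), with purely illustrative resource and namespace names, could look something like this:

# Sketch only: names are illustrative, not taken from my actual module.
resource "google_pubsub_topic" "events" {
  name = "app-events"
}

resource "google_service_account" "publisher" {
  account_id   = "app-publisher"
  display_name = "Publisher for app-events"
}

# Grant the service account rights to publish to the topic.
resource "google_pubsub_topic_iam_member" "publisher" {
  topic  = google_pubsub_topic.events.name
  role   = "roles/pubsub.publisher"
  member = "serviceAccount:${google_service_account.publisher.email}"
}

resource "google_service_account_key" "publisher" {
  service_account_id = google_service_account.publisher.name
}

# Store the key JSON as a secret in the shared application namespace.
# The kubernetes provider base64-encodes data values itself, so the
# base64-encoded private_key is decoded here first.
resource "kubernetes_secret" "publisher_key" {
  metadata {
    name      = "publisher-sa-key"
    namespace = "app-suite"
  }
  data = {
    "key.json" = base64decode(google_service_account_key.publisher.private_key)
  }
}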

The problem comes when they are both in the same namespace, as they are part of the same application suite, but one secret is for a subscriber and the other secret is for the publisher. So one resource will leverage the module and create the namespace, while the other will error out (since the namespace already exists). I could remedy this by creating a 1:1 mapping of namespaces and microservices; however, to simplify billing reports, I want both the publisher and the subscriber in the same namespace.

Here is the terraform module code I am using that causes this race condition:

resource "kubernetes_namespace" "default" {
  metadata {
    name = var.namespace
  }
}


resource "kubernetes_secret" "default" {
  depends_on = [kubernetes_namespace.default]
  metadata {
    name = var.kube_secret_name
    namespace = var.namespace
  }
  data = {
    secret = var.secret_data
  }
  type = var.secret_type
}

data "google_client_config" "default" {
    provider = google
}

/******************************************
  Configure provider
 *****************************************/
# terraform version to use
terraform {
  required_version = "~> 0.13"
  required_providers {
    google = {
      source  = "hashicorp/google"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "~> 1.13.2"
    }
  }
}


provider "kubernetes" {
  load_config_file       = false
  host                   = "https://${var.cluster_endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
}

data "google_client_config" "default" {
  provider = google
}

variable "kube_secret_name" {
  type = string
}
variable "secret_data" {
  type = string
  description = "the actual secret"
}
variable "cluster_endpoint" {
  type = string
  description = "cluster endpoint url"
}
variable "cluster_ca_certificate" {
  type = string
  description = "base64 encoded string of the cluster ca cert"
}

/****************************
END OF REQUIRED VARIABLES
****************************/


variable "namespace" {
  type    = string
  description = "namespace where the secret shall be stored"
  default = "default"
}

variable "secret_type" {
  type = string
  default = "Opaque"
}



ghost commented Apr 8, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

ghost locked as resolved and limited conversation to collaborators on Apr 8, 2021