
Using nomad namespace causes inconsistent result after apply error #69

Closed
JanBerktold opened this issue Jul 19, 2019 · 6 comments · Fixed by #70

@JanBerktold

When setting a Nomad namespace, the provider always produces a "Provider produced inconsistent result after apply" error. The example can be fixed by setting the namespace input to an empty string (see the sketch after the template below).

Terraform Version

Terraform v0.12.4
+ provider.nomad v1.4.0

Affected Resource(s)

  • nomad_job

Terraform Configuration Files

resource "nomad_job" "memcached-job" {
  jobspec = templatefile(
    "${path.module}/job-spec.tpl",
    {
      name            = "testing-job"
      namespace       = "caching-platform"
    }
  )
}

The template:

job "${name}" {
  %{ if namespace != "" }
  namespace = "${namespace}"
  %{ endif } 
  datacenters = ["alpha"]
  type = "service"

  group "cache" {
    count = 1
    task "redis" {
      driver = "docker"
      config {
        image = "redis:3.2"
        port_map {
          db = 6379
        }
      }
    }
  }
}

Debug Output

https://gist.github.com/JanBerktold/cc66903f5a30d118be34da42ca727851

Panic Output

Not applicable.

Expected Behavior

Expected the Nomad job to run in the specified namespace.

Actual Behavior

Error: Provider produced inconsistent result after apply

When applying changes to nomad_job.memcached-job, provider "nomad" produced an
unexpected new value for was present, but now absent.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Steps to Reproduce

  1. terraform apply
cgbaker self-assigned this Jul 23, 2019
cgbaker added the bug label Jul 23, 2019
@cgbaker
Contributor

cgbaker commented Jul 23, 2019

Thanks for the report, @JanBerktold, working on this now.

@cgbaker
Contributor

cgbaker commented Jul 24, 2019

The root cause seems to be that the calls to the Nomad API (specifically, read and delete) do not reference the namespace, and therefore may result in 404s. One limited workaround is to set the NOMAD_NAMESPACE environment variable before calling terraform plan/apply/destroy/... (where it will be picked up when instantiating the Nomad API client). This workaround assumes that the operator workflow supports that; it also assumes that the Terraform spec only references a single namespace.

I will release provider v1.4.1 when this is fixed. I'm updating the nomad_job data source as well.

@cgbaker
Contributor

cgbaker commented Jul 31, 2019

@JanBerktold, just released 1.4.1 with support for this. Let me know how it works.

@robloxrob

Thank you for the quick turnaround!

@wallacepf

Hello @cgbaker, running the latest version of the Nomad provider and the latest version of Terraform, I'm hitting the same error.

Debug output:

https://gist.github.com/wallacepf/7bf4863122f539159bd1bb4329cf67b1

Terraform version:

Terraform v1.0.9

Terraform configuration file

terraform {
  required_providers {
    nomad = {
      version = "~> 1.4.15"
    }
  }
}

provider "nomad" {
  address = "http://192.168.86.27:4646"
}

resource "nomad_namespace" "cicd" {
  name        = "cicd"
  description = "Namespace for CICD"
}

resource "nomad_job" "gl-runner" {
  jobspec = file("${path.module}/jobs/gl-runner.nomad")
  hcl2 {
    enabled = true
    vars = {
      "datacenters" = "[\"NUC\"]"
      "namespace"   = nomad_namespace.cicd.name
    }
  }
}

Job File

variable "datacenters" {
    type = list(string)
}

variable "namespace" {
    type = string
}

job "gitlab-runners" {
  datacenters = var.datacenters
  type        = "service"
  namespace = var.namespace

  group "runners" {
      task "runners" {
          driver = "docker"
          config {
              image = "gitlab/gitlab-runner:alpine"
              volumes = [
                  "/srv/gitlab-runner/config:/etc/gitlab-runner"
              ]
          }
      }
  }
}

@cgbaker
Contributor

cgbaker commented Oct 22, 2021 via email
