
TF Helm 3 not recognizing existing deployments on cluster after running helm-2to3 update on said deployments #429

Closed
pkelleratwork opened this issue Feb 28, 2020 · 8 comments

pkelleratwork commented Feb 28, 2020

Terraform Version

v0.12.18

Expected Behavior

terraform plan/apply should recognize the existing Helm deployments

Actual Behavior

Terraform does not recognize the existing Helm deployments and wants to redeploy them. The apply then errors out, saying the resource already exists.

Steps to Reproduce

  1. Deploy apps with the Terraform Helm provider ~> 0.1.0 and Helm 2 charts.
  2. Upgrade the Helm 2 deployments on the cluster using the helm-2to3 migration tool.
  3. Change the Helm provider version constraint to ~> 1.0.
  4. Run terraform plan.
  5. Terraform wants to redeploy the app; it cannot tell the release already exists.
  6. terraform apply errors out saying the resource already exists.

I'm wondering if the state file is causing this? It's like Terraform doesn't know the release exists after the migration.
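
One workaround sketch, assuming a provider version that supports import for helm_release (the resource address helm_release.my_app and the namespace/release default/my-app below are placeholders): drop the stale entry from state and re-import the migrated release.

# Untested sketch; adjust the placeholders to your resource address and namespace/release
terraform state rm helm_release.my_app
terraform import helm_release.my_app default/my-app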

spaghettifunk commented Mar 2, 2020

We are having the same issue 😞 Did you find any alternative by any chance, besides removing and redeploying the charts?

@pkelleratwork (Author)

Nope, I'm sort of stuck with it at this point. Glad I tried it on a sandbox cluster first! Also glad I'm not the only one seeing it.

@pkelleratwork (Author)

Wondering if there is any update, or any word on whether this issue has been looked at. Thanks.

@WebSpider

Seeing this too

dak1n1 (Contributor) commented Apr 10, 2020

Hi, everyone! I'm working on reproducing this issue. If you could provide the following information, that would be helpful:

  • The output of terraform version.
  • The output of helm version.
  • A snippet of your Terraform config (see my example main.tf below), mostly to see which Helm charts and versions are being used.
  • Any log output or shell output you might have (probably put that into a gist so it's easier to read).

Thanks!

Here is the result of my initial testing. I ran into a different problem that varied per chart. I didn't see the exact issue described, but that could be a result of the versions I'm using.

[dakini@dax 20200407]$ terraform0.12.23 version
Terraform v0.12.23
+ provider.helm v1.1.1

[dakini@dax 20200407]$ helm version
version.BuildInfo{Version:"v3.1.2", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}

And even the version of this tool might have something to do with it:

[dakini@dax 20200407]$ helm plugin list
NAME    VERSION DESCRIPTION                                                               
2to3    0.5.1   migrate and cleanup Helm v2 configuration and releases in-place to Helm v3

Here is the terraform config I used for testing.

[dakini@dax 20200407]$ cat main.tf
provider "helm" {
  version = "1.1.1"
#  version = "0.10.4"
}

data "helm_repository" "stable" {
  name = "stable"
  url  = "https://kubernetes-charts.storage.googleapis.com"
}


resource "helm_release" "redis" {
  name       = "my-redis-release"
  repository = "stable"
  chart      = "redis"
  version    = "10.5.7"
}

resource "helm_release" "mariadb" {
  name       = "my-mariadb-release"
  repository = "stable"
  chart      = "mariadb"
  version    = "7.3.14"
}

resource "helm_release" "consul" {
  name       = "my-consul-release"
  repository = "stable"
  chart      = "consul"
  version    = "3.9.5"
}
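
For context, the Helm 2-to-3 migration step in between was done with the 2to3 plugin listed above; a rough sketch for one of these releases looks like this (the exact commands and output are in the gist linked below):

# move Helm v2 configuration (repos, plugins) over to Helm v3
helm 2to3 move config
# convert a single v2 release in place to v3
helm 2to3 convert my-redis-release
# optionally clean up leftover v2 data once everything is converted
helm 2to3 cleanup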

The result of the test was that the consul chart worked (although it did an unnecessary rolling restart of the app during the initial Terraform run after the Helm 2-to-3 migration). The mariadb and redis charts failed completely, but that appears to be due to a chart bug in the stable repo (helm/charts#20969). The end result was this:

[dakini@dax 20200407]$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
my-consul-release-0           1/1     Running   0          4m8s
my-consul-release-1           1/1     Running   0          4m28s
my-consul-release-2           1/1     Running   0          4m48s
my-mariadb-release-master-0   1/1     Running   0          12m
my-mariadb-release-slave-0    1/1     Running   0          12m
my-redis-release-master-0     1/1     Running   0          12m
my-redis-release-slave-0      1/1     Running   0          12m
my-redis-release-slave-1      1/1     Running   0          10m

[dakini@dax 20200407]$ helm list
NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                APP VERSION
my-consul-release       default         2               2020-04-10 10:10:30.439290487 -0700 PDT deployed        consul-3.9.5         1.5.3
my-mariadb-release      default         2               2020-04-10 10:10:29.057210269 -0700 PDT failed          mariadb-7.3.14       10.3.22
my-redis-release        default         2               2020-04-10 10:10:32.417930256 -0700 PDT failed          redis-10.5.7         5.0.7

Full log output plus commands to reproduce are here:
https://gist.githubusercontent.com/dak1n1/8c0f612861c35ac8a76f70729798988f/raw/ba75e9119b3b99215d72b10b998934c437de7981/gistfile1.txt

Let me know if anyone is running into a different problem and I can try to help. Thanks!
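
For the releases that ended up in the failed state above, a generic recovery sketch once the underlying chart issue is resolved (release name matches the output above; whether a rollback is safe depends on why the upgrade failed in the first place):

helm history my-redis-release      # list revisions and find the last good one
helm rollback my-redis-release 1   # roll back to that revision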

kim0 commented May 2, 2020

I'm running the redis chart in production, and I'm currently worried about doing the migration!
Does this break consistently? Is there some workaround to fix it once redis gets into the failed state?

aareet added the stale label May 13, 2020
aareet (Contributor) commented Jul 2, 2020

Closing since this has waited for reproduction information for about 2 months.

aareet closed this as completed Jul 2, 2020
ghost commented Aug 2, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

ghost locked and limited conversation to collaborators Aug 2, 2020