helm_release_set not working when environment_variables are set #64

Open · mcandio opened this issue Oct 26, 2021 · 5 comments

mcandio commented Oct 26, 2021

I have the following helmfile_release_set resource:

resource "helmfile_release_set" "external-dns" {
  working_directory = "${path.module}/helmfiles/external-dns"
  kubeconfig        = pathexpand("~/.kube/config")
  environment_variables = local.external_dns_environment_variables
  depends_on = [
    aws_iam_openid_connect_provider.cluster,
    module.eks_iam_role
  ]
}

locals {
  external_dns_environment_variables = { 
    "KUBECTX"               = "var.context"
    "EKS_ROLE_ARN"          = "${module.eks_iam_role.service_account_role_arn}"
  }
}

And this is my helmfile.yaml:

---
repositories:
  - name: external-dns
    url: https://kubernetes-sigs.github.io/external-dns/

helmDefaults:
  wait: true
  timeout: 120
  atomic: false
  createNamespace: true
  kubeContext: {{ requiredEnv "KUBECTX" }}

releases:
  - name: external-dns
    namespace: kube-system
    chart: external-dns/external-dns
    values:

      - serviceAccount:
          create: true
          annotations: {
             eks.amazonaws.com/role-arn: {{ requiredEnv "EKS_ROLE_ARN" }},
            }
          name: "external-dns"

        rbac:
          create: true
        podAnnotations: {
          eks.amazonaws.com/role-arn: {{ requiredEnv "EKS_ROLE_ARN" }},
        }

        service:
          port: 7979

        logLevel: info
        logFormat: text

        interval: 1m
        triggerLoopOnEvent: false

        sources:
          - service
          - ingress

        policy: upsert-only

        registry: txt
        txtOwnerId: "external-dns"
        txtPrefix: "external-dns"

        provider: aws

        extraArgs: [
          --aws-zone-type=public
          ]

I've been using this provider for a long time, but now it's not working: the resource seems to fail when the diff can't evaluate requiredEnv, and the funny thing is that var.context is already set.
If the resources it depends on already exist, the diff goes through, but var.context is still not being picked up.

This is the error I'm getting:

│ Error: diffing release set: running helmfile diff: running command: /usr/local/bin/helmfile: exit status 1
│ in ./helmfile.yaml: error during helmfile.yaml.part.0 parsing: template: stringTemplate:11:18: executing "stringTemplate" at <requiredEnv "KUBECTX">: error calling requiredEnv: required env var `KUBECTX` is not set
│ 
│ 
│   on external-dns.tf line 1, in resource "helmfile_release_set" "external-dns":
│    1: resource "helmfile_release_set" "external-dns" {

I don't understand what could be wrong. I use this provider a lot; any thoughts here?

Thanks @mumoshu for this excellent provider.

I have read that I can set kubeconfig to "", but that is not how it was working before.

Hope someone can help me; I'm really lost here.
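
The only workaround I can think of is loosening requiredEnv so the template at least renders during the diff (a sketch, assuming helmfile exposes sprig's env and default template functions; the diff still needs a real context to be meaningful):

helmDefaults:
  kubeContext: {{ env "KUBECTX" | default "dummy-context" }}  # falls back when KUBECTX is not yet set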

mcandio commented Oct 26, 2021

So I figured out that environment_variables only works with "${var.context}", which is very strange.
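
In other words, this is the shape that works (a minimal sketch of the same locals block; the quoted "var.context" in my first snippet was being passed through as a literal string, not the variable's value):

locals {
  external_dns_environment_variables = {
    "KUBECTX"      = "${var.context}"                                    # interpolated, unlike the literal "var.context"
    "EKS_ROLE_ARN" = "${module.eks_iam_role.service_account_role_arn}"
  }
}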

mcandio commented Jan 4, 2022

Well, this is very tricky.
The README says I can skip the kubeconfig parameter, but I can't.

source

Releasing state lock. This may take a few moments...
╷
│ Error: Missing required argument
│
│   on teleport.tf line 1, in resource "helmfile_release_set" "teleport":
│    1: resource "helmfile_release_set" "teleport" {
│
│ The argument "kubeconfig" is required, but no definition was found.

It also says I can set the KUBECONFIG env var via environment_variables.
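
That is, something like this (a sketch of what I understood from the README; the empty kubeconfig is only there to try to satisfy the required argument):

resource "helmfile_release_set" "teleport" {
  working_directory = "./helmfiles/teleport"
  kubeconfig        = ""                        # per the README, supposedly skippable
  environment_variables = {
    KUBECONFIG = pathexpand("~/.kube/config")   # kubeconfig passed as an env var instead
    # ...the rest of the variables as in teleport.tf below
  }
}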

So the final result: helmfile_release_set never applies itself if kubeconfig is empty, and it does not work with kubeconfig = pathexpand("~/.kube/config") when environment variables are set to values not yet known by Terraform.

I'm applying this via Atlantis.

I need help here, @mumoshu.

mcandio commented Jan 4, 2022

Maybe I am using the tool incorrectly, but I cannot find a good example anywhere.

My release set is quite simple:


---
repositories:
  - name: teleport
    url: https://charts.releases.teleport.dev

helmDefaults:
  wait: true
  timeout: 120
  atomic: false
  createNamespace: true
  kubeContext: {{ requiredEnv "KUBECTX" }}

releases:
  - name: teleport
    namespace: {{ requiredEnv "NAMESPACE" }}
    chart: teleport/teleport
    values:

      - license:
          enabled: false
        proxy:
          tls:
            enabled: false
            usetlssecret: true
            secretName: tls-web
        image:
          tag: "7.3.0"

        config:
          public_address: {{ requiredEnv "TELEPORT_DOMAIN_NAME" }}
          listen_addr: 0.0.0.0
          auth_public_address: {{ requiredEnv "TELEPORT_DOMAIN_NAME" }}
          teleport:
            storage:
              type: dynamodb
              region: us-east-1
              table_name: teleport
              audit_events_uri:  ['dynamodb://events_teleport', 'file:///var/lib/teleport/audit/events', 'stdout://']
              audit_sessions_uri: s3://{{ requiredEnv "TEST_VALUE" }}/records
              continuous_backups: true
            auth_service:
              authentication:
                type: local
                second_factor: "off"
              session_control_timeout: 30m
              client_idle_timeout: never
              tokens:
              - proxy,node:token
              - trusted_cluster:token
            ssh_service:
              enabled: true
              public_addr: 0.0.0.0

        service:
          type: LoadBalancer
          ports:
            proxyweb:
              port: 443
              targetPort: 3080
              protocol: TCP
            authssh:
              port: 3025
              targetPort: 3025
              protocol: TCP
            proxykube:
              port: 3026
              targetPort: 3026
              protocol: TCP
            proxyssh:
              port: 3023
              targetPort: 3023
              protocol: TCP
            proxytunnel:
              port: 3024
              targetPort: 3024
              protocol: TCP
          annotations: 
            service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ requiredEnv "TELEPORT_ACM_ARN" }}
            service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
            external-dns.alpha.kubernetes.io/hostname: {{ requiredEnv "TELEPORT_DOMAIN_NAME" }}


        persistence:
          enabled: true
          accessMode: ReadWriteMany
          storageClass: efs-sc
          storageSize: 50Gi

terraform.tf

terraform {
  required_version = ">=1.0.0"
  backend "s3" {
  }
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~> 3.57.0"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "2.4.1"
    }
    helmfile = {
      source = "mumoshu/helmfile"
      version = "0.14.0"
    }
  }
}

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = var.context
}

provider "helmfile" {
  max_diff_output_len = 16384
}

provider "aws" {
  region  = var.region
  profile = var.profile
}

teleport.tf

resource "helmfile_release_set" "teleport" {
  working_directory     = "./helmfiles/teleport"
  kubeconfig            = pathexpand("~/.kube/config")
  environment_variables = {
    KUBECTX               = data.aws_eks_cluster.cluster_arn.arn
    NAMESPACE             = var.namespace
    CLUSTER_NAME          = data.aws_eks_cluster.cluster_arn.arn
    REGION                = var.region
    TELEPORT_ACM_ARN      = aws_acm_certificate_validation.teleport.certificate_arn
    TELEPORT_DOMAIN_NAME  = local.domain_name
    S3_BUCKET_NAME        = aws_s3_bucket.teleport_audit.id
    TEST_VALUE            = aws_s3_bucket.teleport_test.id
  }
  depends_on = [
    aws_s3_bucket.teleport_audit,
    data.aws_eks_cluster.cluster_arn,
    aws_acm_certificate_validation.teleport
  ]
}

The things the environment variables reference are just resources not yet created.
I know that helmfile cannot run a diff if the variables are not yet defined, but sadly I find myself struggling with two things.

If I set kubeconfig to "" and add KUBECONFIG = pathexpand("~/.kube/config") to environment_variables, the release is never applied: the state updates, but no action runs in the cluster.

So, what is the correct way to make this work?

Thanks in advance, @mumoshu.

mcandio commented Jan 4, 2022

Also, when I update the provider to 0.14.1, I'm facing the following error:

helmfile_release_set.teleport: Refreshing state... [id=c75lhsa2vuolt2735110]
╷
│ Error: diffing release set: running helmfile diff: running command: /usr/local/bin/helmfile: exit status 3
│ err: no releases found that matches specified selector() and environment(default), in any helmfile
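
To narrow it down, I can reproduce the command outside Terraform from the working directory (a sketch using the same binary the provider invokes; the exported values are illustrative):

cd helmfiles/teleport
export KUBECTX="..." NAMESPACE="..."        # plus the remaining required env vars
/usr/local/bin/helmfile --environment default list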

The helmfile is the same one as in my previous messages.

mcandio commented Jan 5, 2022

Following up.

It seems the provider can't handle the environment variables in version 0.14.0:

2022-01-04T19:43:27.157-0300 [DEBUG] ReferenceTransformer: "aws_s3_bucket_public_access_block.access_policy" references: []
2022-01-04T19:43:27.157-0300 [INFO]  ReferenceTransformer: reference not found: "aws_s3_bucket.teleport_audit"
2022-01-04T19:43:27.157-0300 [INFO]  ReferenceTransformer: reference not found: "data.aws_eks_cluster.cluster_arn"
2022-01-04T19:43:27.157-0300 [INFO]  ReferenceTransformer: reference not found: "aws_acm_certificate_validation.teleport"
2022-01-04T19:43:27.157-0300 [INFO]  ReferenceTransformer: reference not found: "aws_s3_bucket.teleport_test_2"
2022-01-04T19:43:27.157-0300 [INFO]  ReferenceTransformer: reference not found: "var.namespace"
2022-01-04T19:43:27.157-0300 [INFO]  ReferenceTransformer: reference not found: "var.region"
2022-01-04T19:43:27.157-0300 [INFO]  ReferenceTransformer: reference not found: "local.domain_name"
2022-01-04T19:43:27.157-0300 [DEBUG] ReferenceTransformer: "helmfile_release_set.teleport" references: []

All of those references already exist in the Terraform state.

If I add them via the shell instead, e.g.

export KUBECTX="..." (with the value that data.aws_eks_cluster.cluster_arn resolves to)

the provider works just fine.
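
For example (hypothetical values standing in for what the data sources resolve to):

export KUBECTX="arn:aws:eks:us-east-1:111111111111:cluster/my-cluster"   # hypothetical cluster ARN
export NAMESPACE="teleport"                                              # hypothetical namespace
terraform plan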
