
terraform plan/apply keeps forcing rebuild of AKS #6215

Closed
ronamosa opened this issue Mar 22, 2020 · 4 comments

@ronamosa

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

terraform -v
Terraform v0.12.24

  • provider.azuread v0.8.0
  • provider.azurerm v2.0.0
  • provider.local v1.4.0
  • provider.null v2.1.2
  • provider.random v2.2.1
  • provider.tls v2.1.1

Affected Resource(s)

  • azurerm_kubernetes_cluster

Terraform Configuration Files

module "service_principal" {
  source         = "../../security/service_principal"
  principal_name = var.principal_name
}

module "ssh-key" {
  source         = "../../security/ssh-key"
}

resource "azurerm_kubernetes_cluster" "main" {
  lifecycle {
    ignore_changes = [
      default_node_pool[0].node_count
    ]
  }

  name                            = "${var.prefix}-aks-cluster"
  location                        = var.location
  resource_group_name             = var.resource_group_name
  dns_prefix                      = var.prefix
  kubernetes_version              = var.kubernetes_version
  node_resource_group             = "${var.prefix}-aks-worker"
  api_server_authorized_ip_ranges = var.api_auth_ips

  default_node_pool {
    name                = substr(var.default_node_pool.name, 0, 12)
    node_count          = var.default_node_pool.node_count
    vm_size             = var.default_node_pool.vm_size
    type                = "VirtualMachineScaleSets"
    max_pods            = 250
    os_disk_size_gb     = 128
    vnet_subnet_id      = azurerm_subnet.kubesubnet.id 
    enable_auto_scaling = var.default_node_pool.cluster_auto_scaling
    min_count           = var.default_node_pool.cluster_auto_scaling_min_count
    max_count           = var.default_node_pool.cluster_auto_scaling_max_count
  }

  service_principal {
    client_id     = module.service_principal.spn_application_id
    client_secret = module.service_principal.spn_password
  }

  role_based_access_control {
    enabled = true

    azure_active_directory {
      client_app_id     = var.aad_client_application_id
      server_app_id     = var.aad_server_application_id
      server_app_secret = var.aad_server_application_secret
      tenant_id         = var.aad_tenant_id
    }
  }

  linux_profile {
    admin_username = "vmuser1"

    ssh_key {
      key_data = module.ssh-key.public_ssh_key
    }
  }

  addon_profile {
    http_application_routing {
      enabled = false 
    }
  }

  network_profile {
    load_balancer_sku  = "standard"
    network_plugin     = "azure"
    network_policy     = "calico"
    dns_service_ip     = "10.0.0.10"
    docker_bridge_cidr = "172.17.0.1/16"
    service_cidr       = "10.0.0.0/16"
  }

  tags = var.tags
  
}

Expected Behavior

Nothing in the configuration changed, so I expect Terraform not to rebuild the AKS cluster.

Actual Behavior

Terraform wants to destroy and rebuild the cluster even though nothing has changed.

I can see admin_username cited in the plan as the reason for the rebuild:

        service_principal {
            client_id     = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
            client_secret = (sensitive value)
        }

      - windows_profile {
          - admin_username = "azureuser" -> null # forces replacement
        }

But my cluster is Linux-based, so I'm not sure why the plan is concerned with windows_profile.
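
A quick way to confirm which attribute is forcing the replacement is to filter the plan output (a shell one-liner; -no-color just makes the output grep-friendly):

terraform plan -no-color | grep -B 2 "forces replacement"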

Steps to Reproduce

  1. terraform apply
@viresh-contino

I added this for the azurerm 2.2.0 provider, and it appears to fix the issue. I got it from @mikhailshilkov's message:

lifecycle {
  ignore_changes = [windows_profile]
}
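
Since the resource in this issue already ignores default_node_pool[0].node_count, both entries can go in a single lifecycle block (a sketch combining the original config with this workaround):

resource "azurerm_kubernetes_cluster" "main" {
  # ... all other arguments as in the original config ...

  lifecycle {
    ignore_changes = [
      default_node_pool[0].node_count,
      windows_profile,
    ]
  }
}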

@tombuildsstuff
Contributor

Duplicate of #6235

@ronamosa
Author

awesome, thanks @viresh-contino!

antoonhuiskens pushed a commit to antoonhuiskens/aks-terraform that referenced this issue Apr 20, 2020
@ghost

ghost commented Apr 24, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost locked and limited conversation to collaborators Apr 24, 2020