
Terraform crash when trying to create aks cluster on AzureChinaCloud #6462

Closed

masterphenix opened this issue Apr 14, 2020 · 2 comments · Fixed by #7233

Comments

@masterphenix

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.12.24

  • provider.azurerm v2.5.0

Affected Resource(s)

  • azurerm_kubernetes_cluster

Terraform Configuration Files

provider "azurerm" {
  subscription_id = "xxxx"
  tenant_id       = "xxxx"
  environment     = "china"
  version         = "= 2.5.0"
  features {}
}

resource "azurerm_resource_group" "k8s-rg" {
  name     = "k8s-rg"
  location = "chinanorth2"
}

resource "azurerm_subnet" "k8s-agent-subnet" {
  name                 = "k8s-agent-subnet"
  address_prefix       = "10.116.8.0/21"
  virtual_network_name = azurerm_virtual_network.MyVNet.name
  resource_group_name  = azurerm_virtual_network.MyVNet.resource_group_name
}

resource "azurerm_kubernetes_cluster" "k8s-cluster" {
  name                = "mycluster-aks"
  location            = azurerm_resource_group.k8s-rg.location
  resource_group_name = azurerm_resource_group.k8s-rg.name
  dns_prefix          = "mycluster-aks"
  kubernetes_version  = "1.17.0"

  default_node_pool {
    name       = "mynodepool"
    vm_size    = "Standard_DS12_v2"
    availability_zones = [ 1 ]
    enable_auto_scaling = false
    node_count = 1
    node_taints = [
        "node.kubernetes.io/dedicated=monitoring:NoSchedule"
    ]
    node_labels = {
        "node.kubernetes.io/dedicated" = "monitoring" 
    }
    max_pods = 30
    os_disk_size_gb = 255
    type = "VirtualMachineScaleSets"
    vnet_subnet_id = azurerm_subnet.k8s-agent-subnet.id
  }

  linux_profile {
      admin_username = "root"
      ssh_key {
          key_data = var.root_keys["sandbox"]
      }
  }

  network_profile {
      network_plugin = "azure"
      network_policy = "calico"
      dns_service_ip = "10.200.0.10"
      docker_bridge_cidr = "172.17.0.1/16"
      outbound_type = "loadBalancer"
      service_cidr = "10.200.0.0/16"
      load_balancer_sku = "standard"
  }

  addon_profile {}

  role_based_access_control {
      enabled = true
  }

  service_principal {
    client_id = "xxxxxxxx"
    client_secret = "xxxxxxx"
  }
}

Debug Output

https://gist.github.com/masterphenix/33f95885e6cdd3a2126b7b1442e250e3

Panic Output

https://gist.github.com/masterphenix/8cd3a9bd47f053aaf5fe6e90c946ee37

Expected Behavior

The azurerm_kubernetes_cluster should be created, or a "normal", handled error should have occurred.

Actual Behavior

When trying to create the azurerm_kubernetes_cluster, Terraform crashes.
I first tried to create the cluster without addon_profile {}; when no addon_profile is specified, there is no crash, but the following error occurs:

Error: creating Managed Kubernetes Cluster "mycluster-aks" (Resource Group "k8s-rg"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="OperationNotAllowed" Message="Addon 'azurepolicy' is not supported in this cloud environment."

  on k8s17-cnn2-sandbox-rg.tf line 15, in resource "azurerm_kubernetes_cluster" "k8s-cluster":
  15: resource "azurerm_kubernetes_cluster" "k8s-cluster" {

The same thing happens when explicitly disabling all addons:

  addon_profile {
    aci_connector_linux {
      enabled = false
    }

    azure_policy {
      enabled = false
    }

    http_application_routing {
      enabled = false
    }

    kube_dashboard {
      enabled = false
    }

    oms_agent {
      enabled = false
    }
  }

So it really seems that the crash is due to the empty addon_profile block.
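For illustration only (this variant is not tested in the report above): a hypothetical workaround on azurerm 2.5.x in AzureChinaCloud would be to declare the addon_profile with only the addons available in that cloud, omitting azure_policy entirely rather than disabling it. Whether this actually avoids the 400 error is uncertain, since the provider at that version appears to submit the unsupported addon regardless; the issue was only properly resolved by the fix released in 2.14.0 (#7233).

```hcl
# Hypothetical workaround sketch for provider ~> 2.5 on AzureChinaCloud:
# avoid the empty addon_profile {} (which triggers the panic) and leave
# out azure_policy, since the 'azurepolicy' addon is not supported in
# that cloud environment.
addon_profile {
  http_application_routing {
    enabled = false
  }

  kube_dashboard {
    enabled = false
  }

  oms_agent {
    enabled = false
  }
}
```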

Steps to Reproduce

  1. terraform apply

Important Factoids

Running in Azure China.

@ghost

ghost commented Jun 11, 2020

This has been released in version 2.14.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 2.14.0"
}
# ... other configuration ...

@ghost

ghost commented Jul 11, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators Jul 11, 2020