AKS Managed Identity configuration can't be used without service_principal block #6178

Closed
ghost opened this issue Mar 19, 2020 · 12 comments · Fixed by #6095

Comments


ghost commented Mar 19, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

  • Terraform version 0.12.20
  • AzureRM Provider version 2.1.0

Affected Resource(s)

  • azurerm_kubernetes_cluster

Terraform Configuration Files

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "aks"
  kubernetes_version  = "1.15.7"

  default_node_pool {
    name        = "default"
    min_count   = 1
    max_count   = 5
    vm_size     = "Standard_D2_v2"
    type        = "VirtualMachineScaleSets"
    node_labels = {
      "pool"= "infra"
    }
    enable_auto_scaling = true
    vnet_subnet_id      = azurerm_subnet.aks_subnet.id
  }

  role_based_access_control {
    enabled = true
  }

  network_profile {
    network_plugin     = "azure"
    network_policy     = "azure"
    docker_bridge_cidr = "172.17.0.1/16"
    service_cidr       = "10.0.16.0/24"
    dns_service_ip     = "10.0.16.10"
    load_balancer_sku  = "Standard"
  }

  identity {
    type = "SystemAssigned"
  }

  lifecycle {
    ignore_changes = [default_node_pool[0].node_count]
  }
}

Debug Output

+ terraform plan -var-file=terraform.tfvars -out=plan.tfplan

Error: �"service_principal": required field is not set

�  on aks.tf line 1, in resource "azurerm_kubernetes_cluster" "aks":
   1: resource "azurerm_kubernetes_cluster" "aks" �{�

Expected Behavior

According to #5278, now that system-assigned managed identity for AKS is available, we should be able to skip the service_principal block in the AKS configuration. The current provider schema still forces you to set service_principal (I believe the change was reverted when the update was rebased on PR #5339).

Actual Behavior

The plan fails with the "service_principal": required field is not set error shown in the debug output above.

Steps to Reproduce

Run terraform plan with the above AKS configuration. I have a bare subnet and resource group already deployed that I am running Terraform against.

Important Factoids

Nothing special

References


ghost commented Mar 19, 2020

Apologies, it looks like during the preview period AKS still needs a service principal for specific integration components: https://docs.microsoft.com/en-us/azure/aks/use-managed-identity.

Closing for later

@ghost ghost closed this as completed Mar 19, 2020

ghost commented Mar 20, 2020

Ah, according to these docs published last night, a service principal is no longer required: https://github.com/Azure/AKS/releases/tag/2020-03-16. Reopening to ask that we determine how to remove the service_principal block from the required fields.

@ghost ghost reopened this Mar 20, 2020
torresdal added a commit to SparebankenVest/terraform-provider-azurerm that referenced this issue Mar 20, 2020
Ref hashicorp#6178 service_principal is no longer required as Managed Identity is now GA: https://github.com/Azure/AKS/releases/tag/2020-03-16

anlutro commented Mar 25, 2020

Is there a workaround until the fix gets merged?

allantargino (Contributor) commented

Hi @anlutro, you can try setting both client_id and client_secret to zeros:

provider "azurerm" {
  version = "=2.3.0"
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}

resource "azurerm_kubernetes_cluster" "example" {
  name                = "yourclustername"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "yourdns"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  service_principal {
    client_id     = "00000000-0000-0000-0000-000000000000"
    client_secret = "00000000000000000000000000000000"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "Production"
  }
}


EPinci commented Mar 30, 2020

I've been able to use the "zero" workaround and deploy a cluster with managed identity.
The identity output attribute gives me the system-assigned managed identity, but not the user-assigned managed identity that is provisioned for authorizations (according to https://docs.microsoft.com/en-us/azure/aks/use-managed-identity).
Any plan to expose that as an output as well?


EPinci commented Mar 30, 2020

Also, I just noticed that any run after the initial deployment results in a (somewhat expected) error:

Error: Error updating Service Principal for Kubernetes Cluster "aks-d-edo" (Resource Group "test-dev-edo"): containerservice.ManagedClustersClient#ResetServicePrincipalProfile: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Updating service principal profile is not allowed on MSI cluster."
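
Until the fix is merged, a possible interim mitigation (a sketch only, not verified against provider 2.3.0) would be to have Terraform ignore drift on the placeholder service_principal block so follow-up plans don't attempt the reset:

resource "azurerm_kubernetes_cluster" "example" {
  # ... existing configuration, including the zeroed-out service_principal block ...

  lifecycle {
    # Suppress the diff on the placeholder service principal so subsequent
    # applies don't attempt ResetServicePrincipalProfile on an MSI cluster.
    ignore_changes = [service_principal]
  }
}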


capusta commented Mar 31, 2020

Also, the all-zero GUID is not a valid client ID in Azure AD:

Error: Error creating Managed Kubernetes Cluster "akstest" (Resource Group "aks-example"): containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="ServicePrincipalNotFound" Message="Service principal clientID: 00000000-0000-0000-0000-000000000000 not found in Active Directory tenant xxxx-redacted-xxx-610d8db26xxx, Please see https://aka.ms/aks-sp-help for more details."


ghost commented Apr 9, 2020

This has been released in version 2.5.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 2.5.0"
}
# ... other configuration ...
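
For reference, once on 2.5.0 the placeholder service_principal block from the earlier workaround should no longer be needed. A minimal sketch (names and values are illustrative):

resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "example"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  # No service_principal block is needed when using a managed identity.
  identity {
    type = "SystemAssigned"
  }
}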

syedhassaanahmed commented

@tombuildsstuff The identity block that is exported from the AKS resource also needs to be added to the AKS data source. The existing attribute service_principal.0.client_id can't be used anymore, as the AKS API now returns the following:

"servicePrincipalProfile": {
    "clientId": "msi"
  }
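
A sketch of what the data source usage could look like once the identity block is exported there as well (attribute names assumed to mirror the resource; not available at the time of writing):

data "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  resource_group_name = "example-rg"
}

# service_principal.0.client_id now only returns the literal string "msi" for
# MSI clusters, so downstream role assignments would use the exported identity.
output "aks_identity_principal_id" {
  value = data.azurerm_kubernetes_cluster.example.identity[0].principal_id
}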

tombuildsstuff (Contributor) commented

@syedhassaanahmed mind opening a separate issue to track that?

syedhassaanahmed commented

@tombuildsstuff After digging more, I conclude that the principal exported from the identity block isn't what's needed; instead, identityProfile.kubeletidentity.objectId is what's required for subsequent Azure AD role assignments. This is addressed as part of #6359.
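
For context, a sketch of the kind of role assignment this enables once the kubelet identity is exported (the kubelet_identity attribute name is assumed from #6359; the registry resource and role are illustrative):

# Grant the kubelet's managed identity pull access to a container registry.
resource "azurerm_role_assignment" "acr_pull" {
  scope                = azurerm_container_registry.example.id
  role_definition_name = "AcrPull"
  principal_id         = azurerm_kubernetes_cluster.example.kubelet_identity[0].object_id
}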


ghost commented May 6, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators May 6, 2020