terraform plan always forces recreation of the AKS cluster, even though nothing changed in CI #6287

Closed
pawankmr301 opened this issue Mar 27, 2020 · 8 comments

Comments

@pawankmr301

pawankmr301 commented Mar 27, 2020

The AKS cluster was created successfully, but when I run terraform plan it shows that the following resource must be replaced, even though nothing in the configuration has changed.

-/+ resource "azurerm_kubernetes_cluster" "k8s" {
        api_server_authorized_ip_ranges = [
            "0.0.0.0/0",
        ]
        dns_prefix                      = "sandbox-k8s"
      - enable_pod_security_policy      = false -> null
      ~ fqdn                            = "sandbox-k8s-1be719fc.hcp.eastus.azmk8s.io" -> (known after apply)
      ~ id                              = "/subscriptions/671664f1-0e7c-4a76-b548-7462b990aa42/resourcegroups/sandbox-rg/providers/Microsoft.ContainerService/managedClusters/sandbox-k8s" -> (known after apply)
      ~ kube_admin_config               = [
          - {
              - client_certificate     = (sensitive value)
              - client_key             = (sensitive value)
              - cluster_ca_certificate = (sensitive value)
              - host                   = "https://sandbox-k8s-1be719fc.hcp.eastus.azmk8s.io:443"
              - password               = (sensitive value)
              - username               = "clusterAdmin_sandbox-rg_sandbox-k8s"
            },
        ] -> (known after apply)
      ~ kube_admin_config_raw           = (sensitive value)
      ~ kube_config                     = [
          - {
              - client_certificate     = ""
              - client_key             = ""
              - cluster_ca_certificate = (sensitive value)
              - host                   = "https://sandbox-k8s-1be719fc.hcp.eastus.azmk8s.io:443"
              - password               = ""
              - username               = "clusterUser_sandbox-rg_sandbox-k8s"
            },
        ] -> (known after apply)
      ~ kube_config_raw                 = (sensitive value)
        kubernetes_version              = "1.14.8"
        location                        = "eastus"
        name                            = "sandbox-k8s"
      ~ node_resource_group             = "MC_sandbox-rg_sandbox-k8s_eastus" -> (known after apply)
      + private_fqdn                    = (known after apply)
      - private_link_enabled            = false -> null
        resource_group_name             = "sandbox-rg"
        tags                            = {
            "application"        = "aks"
            "applicationversion" = "1.0.0"
            "team"               = "devops"
            "tier"               = "Infrastructure"
        }

        addon_profile {

            kube_dashboard {
                enabled = true
            }
        }

      ~ default_node_pool {
            availability_zones    = [
                "1",
                "2",
            ]
            enable_auto_scaling   = false
          - enable_node_public_ip = false -> null
          - max_count             = 0 -> null
          ~ max_pods              = 30 -> (known after apply)
          - min_count             = 0 -> null
            name                  = "defaultnode"
            node_count            = 1
          - node_taints           = [] -> null
          ~ os_disk_size_gb       = 100 -> (known after apply)
            type                  = "VirtualMachineScaleSets"
            vm_size               = "Standard_DS1_v2"
            vnet_subnet_id        = "/subscriptions/671664f1-0e7c-4a76-b548-7462b990aa42/resourceGroups/sandbox-rg/providers/Microsoft.Network/virtualNetworks/sandbox-vnet/subnets/sandbox-prvt-sn"
        }

        linux_profile {
            admin_username = "aks-admin"

            ssh_key {
                key_data = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDLEHSeEYbbHUC9vGnk8pzdE2j2y8CfCmDfUOWV/n0UOD6o8WB9PEOSLBIX8+cf8FcRiAvGEnIJl+WUgj4pX7aycTABghLhNke9F6f2pJDCX2dDHVLwrXU1gyc/tdfqEgoty6+nIn8EOjuD6LXT0qo7eZYTYPz/8i7d6KwfCmxY6RdR6TG3vUTmTf64XnqW3yzZ57AQoV20XPytNbbnlRJPFr4om9DOAs8S41osQIEsAxt0Ia3T1fdvw6h3k/FGsthYSi0642Q2PdhptP0/SHG88e4jNQ2JEcKlQtroQV/fp0nd2lAXQ6tqhhpTu79fD7mRNdbDDn5QIAg0eGDTK6q5 aks-admin@toolkit-tsttwo-5f85846cf9-wv7ml"
            }
        }

      ~ network_profile {
            dns_service_ip     = "10.0.64.10"
            docker_bridge_cidr = "172.17.0.1/16"
          ~ load_balancer_sku  = "Standard" -> "standard"
            network_plugin     = "azure"
            network_policy     = "calico"
          + pod_cidr           = (known after apply)
            service_cidr       = "10.0.64.0/20"

          ~ load_balancer_profile {
              ~ effective_outbound_ips    = [
                  - "/subscriptions/671664f1-0e7c-4a76-b548-7462b990aa42/resourceGroups/MC_sandbox-rg_sandbox-k8s_eastus/providers/Microsoft.Network/publicIPAddresses/2a1ddc35-14ce-40c9-9991-bac42c5d7dad",
                ] -> (known after apply)
              ~ managed_outbound_ip_count = 1 -> (known after apply)
              ~ outbound_ip_address_ids   = [] -> (known after apply)
              ~ outbound_ip_prefix_ids    = [] -> (known after apply)
            }
        }

        role_based_access_control {
            enabled = true

            azure_active_directory {
                client_app_id     = "9edb4f4a-3baa-43cf-993c-8dd1a2f4071f"
                server_app_id     = "c2885691-1889-4555-aaaa-83934b83df76"
                server_app_secret = (sensitive value)
                tenant_id         = "1451c58b-3e41-4f28-b436-f049127ec4af"
            }
        }

        service_principal {
            client_id     = "8959d1bd-6fff-42c7-8513-8c7315493e0c"
            client_secret = (sensitive value)
        }

      - windows_profile {
          - admin_username = "azureuser" -> null # forces replacement
        }
    }
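
As far as I can tell, the only attribute marked "# forces replacement" in the whole diff is windows_profile.admin_username at the very end; everything else is just (known after apply) noise.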

@aristosvo
Collaborator

aristosvo commented Mar 28, 2020

Can you give us a bit more information, for example (a redacted version of) your configuration (.tf) files?

@pawankmr301
Author

@aristosvo Please check the following Terraform configuration.

k8s.tf

# Create Kubernetes Cluster

resource azurerm_kubernetes_cluster k8s {
name = "${var.resource_prefix}-k8s"
location = var.location
resource_group_name = var.resource_group_name
dns_prefix = "${var.resource_prefix}-k8s"
kubernetes_version = var.kubernetes_version
api_server_authorized_ip_ranges = var.allowed_ipaddress

# Define default node pool for kubernetes worker node

default_node_pool {
name = "defaultnode"
node_count = var.nodecount
vm_size = var.vm_size
vnet_subnet_id = var.vnet_subnet_id
enable_auto_scaling = var.enable_auto_scaling
type = "VirtualMachineScaleSets"
availability_zones = var.availability_zones
}

# Define service principal

service_principal {
client_id = var.service_prinicipal_client_id
client_secret = var.service_prinicipal_client_secret

}

# Define username should be available on nodes

linux_profile {
admin_username = var.username

ssh_key {
  key_data = var.key_data
}

}

# Enable kubernetes dashboard

addon_profile {
kube_dashboard {
enabled = true
}
}

# Enable role base access control for aks cluster

role_based_access_control {
enabled = true

azure_active_directory {
  client_app_id     = var.client_app_id
  server_app_id     = var.server_app_id
  server_app_secret = var.server_app_secret
  tenant_id         = var.tenant_id
}

}

# Define network profile loadbalancer type, cni, network policy, service IP ranges, docker bridge cidr

network_profile {
load_balancer_sku = var.load_balancer_sku
network_plugin = var.network_plugin
network_policy = var.network_policy
dns_service_ip = var.dns_service_ip
docker_bridge_cidr = var.docker_bridge_cidr
# Generate the address_prefix 10.0.64.0/20
service_cidr = cidrsubnet(tostring(join(", ", var.vnet_address_space)), 4, 4)
}

tags = var.tags
}

# Create node pool for kubernetes cluster

resource azurerm_kubernetes_cluster_node_pool aks {
lifecycle {
ignore_changes = [
node_count
]
}

# Define node pool configuration

kubernetes_cluster_id = azurerm_kubernetes_cluster.k8s.id
name = "nodepool"
node_count = var.nodecount
vm_size = var.vm_size
availability_zones = var.availability_zones
max_pods = var.max_pods
os_disk_size_gb = var.os_disk_size_gb
os_type = var.os_type
vnet_subnet_id = var.vnet_subnet_id
node_taints = null
enable_auto_scaling = true
min_count = var.min_count
max_count = var.max_count
}

provider "kubernetes" {
load_config_file = false
host = azurerm_kubernetes_cluster.k8s.kube_config.0.host
username = azurerm_kubernetes_cluster.k8s.kube_config.0.username
password = azurerm_kubernetes_cluster.k8s.kube_config.0.password
client_key = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_key)
client_certificate = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate)
cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate)
}

@aristosvo
Collaborator

aristosvo commented Mar 28, 2020

Do you pin the version of the azurerm provider somewhere? If not, when did you create your cluster for the first time, and when did you try to update it? Example of a provider version pin:

provider "azurerm" {
    version = "~>1.44.0"
}
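
If it isn't pinned, running terraform providers after terraform init prints the provider requirements tree actually in play, something like (sample output, your tree will differ):

```
$ terraform providers
.
└── provider.azurerm ~> 2.0
```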

BTW, if you could fix the formatting of your previous messages it would help a lot. For example, always use ``` around your code:

# Create Kubernetes Cluster
resource azurerm_kubernetes_cluster k8s {
  name                            = "${var.resource_prefix}-k8s"
  location                        = var.location
  resource_group_name             = var.resource_group_name
  dns_prefix                      = "${var.resource_prefix}-k8s"
  kubernetes_version              = var.kubernetes_version
  api_server_authorized_ip_ranges = var.allowed_ipaddress

  # Define default node pool for kubernetes worker node
  default_node_pool {
    name                = "defaultnode"
    node_count          = var.nodecount
    vm_size             = var.vm_size
    vnet_subnet_id      = var.vnet_subnet_id
    enable_auto_scaling = var.enable_auto_scaling
    type                = "VirtualMachineScaleSets"
    availability_zones  = var.availability_zones
  }

  # Define service principal
  service_principal {
    client_id     = var.service_prinicipal_client_id
    client_secret = var.service_prinicipal_client_secret

  }

  # Define username should be available on nodes
  linux_profile {
    admin_username = var.username

    ssh_key {
      key_data = var.key_data
    }

  }

  # Enable kubernetes dashboard
  addon_profile {
    kube_dashboard {
      enabled = true
    }
  }

  # Enable role base access control for aks cluster
  role_based_access_control {
    enabled = true

    azure_active_directory {
      client_app_id     = var.client_app_id
      server_app_id     = var.server_app_id
      server_app_secret = var.server_app_secret
      tenant_id         = var.tenant_id
    }

  }

  # Define network profile loadbalancer type, cni, network policy, service IP ranges, docker bridge cidr
  network_profile {
    load_balancer_sku  = var.load_balancer_sku
    network_plugin     = var.network_plugin
    network_policy     = var.network_policy
    dns_service_ip     = var.dns_service_ip
    docker_bridge_cidr = var.docker_bridge_cidr
    
    # Generate the address_prefix 10.0.64.0/20
    service_cidr = cidrsubnet(tostring(join(", ", var.vnet_address_space)), 4, 4)
  }

  tags = var.tags
}

# Create node pool for kubernetes cluster
resource azurerm_kubernetes_cluster_node_pool aks {
  lifecycle {
    ignore_changes = [
      node_count
    ]
  }

  # Define node pool configuration
  kubernetes_cluster_id = azurerm_kubernetes_cluster.k8s.id
  name                  = "nodepool"
  node_count            = var.nodecount
  vm_size               = var.vm_size
  availability_zones    = var.availability_zones
  max_pods              = var.max_pods
  os_disk_size_gb       = var.os_disk_size_gb
  os_type               = var.os_type
  vnet_subnet_id        = var.vnet_subnet_id
  node_taints           = null
  enable_auto_scaling   = true
  min_count             = var.min_count
  max_count             = var.max_count
}

provider "kubernetes" {
  load_config_file       = false
  host                   = azurerm_kubernetes_cluster.k8s.kube_config.0.host
  username               = azurerm_kubernetes_cluster.k8s.kube_config.0.username
  password               = azurerm_kubernetes_cluster.k8s.kube_config.0.password
  client_key             = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_key)
  client_certificate     = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate)
}
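
One thing that stands out: cidrsubnet(tostring(join(", ", var.vnet_address_space)), 4, 4) only works while var.vnet_address_space contains exactly one CIDR; as soon as the list gains a second entry, the joined string is no longer a valid prefix and the plan will error. Indexing the first entry is simpler and safer (a sketch, assuming the first address space is the one you want to subdivide):

```
# e.g. cidrsubnet("10.0.0.0/16", 4, 4) => "10.0.64.0/20"
service_cidr = cidrsubnet(var.vnet_address_space[0], 4, 4)
```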

@pawankmr301
Author

@aristosvo I created the cluster today. Once the cluster was created, I ran terraform plan again and it showed that the k8s cluster forces replacement. I am using the following providers:

├── provider.azuread ~> 0.3
├── provider.azurerm ~>2.0.0
└── provider.null ~> 2.0

provider "azurerm" {
  version = "~>2.0.0"
  features {}
}

provider "azuread" {
  version = "~> 0.3"
}

provider "null" {
  version = "~> 2.0"
}

@sujiar37

sujiar37 commented Mar 29, 2020

@pawankmr301, @aristosvo The plan output suggests the following change is what forces the replacement:

  - windows_profile {
      - admin_username = "azureuser" -> null # forces replacement
    }

The plan returned no changes once I appended a windows_profile block like the one below:

provider "azurerm" {
  features {}
  required_version = ">= 2.0.0"
}
...
  linux_profile {
    admin_username = "azadmin"
    ssh_key {
      key_data     = "${file(var.public_ssh_key_path)}"
    }
  }

  windows_profile {
    admin_username = "azureuser"  
  }
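
(Presumably the AKS API returns a default windows_profile with admin_username = "azureuser" even for Linux-only clusters, so the provider diffs that against the empty configuration and flags a replacement; declaring the block explicitly makes the diff disappear.)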

Or add a lifecycle block under the resource to ignore it:

# Bug - https://github.com/terraform-providers/terraform-provider-azurerm/issues/6215

  lifecycle {
    ignore_changes = [windows_profile]
  }
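
For context, a minimal sketch of where that lands (using the resource name k8s from this thread):

```
resource azurerm_kubernetes_cluster k8s {
  # ... all existing arguments unchanged ...

  # work around the spurious windows_profile diff (#6215)
  lifecycle {
    ignore_changes = [windows_profile]
  }
}
```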

@yamaszone

I am seeing the same behavior, where successive runs with no source-code changes force redeployment due to the windows_profile diff mentioned above.

In addition, I am also seeing an issue where ServicePrincipalProfile is incorrectly set to msi instead of the client_id configured via ARM_CLIENT_ID. I can see the incorrect profile via az aks show --resource-group <resource-group-name> --name <aks-cluster-name> --query servicePrincipalProfile.clientId -o tsv, which always returns msi. When I run terraform plan after a successful cluster deployment, I see the following:

      ~ service_principal {
          ~ client_id     = "msi" -> "53ba6f2b-6d52-4f5c-8ae0-7adc20808854"
            client_secret = (sensitive value)
        }

where it says Terraform will change the client_id value from the misconfigured msi to the actual client ID. AKS support is claiming that this incorrect ServicePrincipalProfile configuration causes the Cluster Autoscaler to fail to scale workers up and down due to a permissions issue. I am using Terraform azurerm provider v2.2.0.
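
For reference, the exact check (with the resource names from this thread as placeholders) is:

```
# returns "msi" instead of the service principal's client ID
az aks show \
  --resource-group sandbox-rg \
  --name sandbox-k8s \
  --query servicePrincipalProfile.clientId \
  -o tsv
```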

@tombuildsstuff
Contributor

hi @pawankmr301

Thanks for opening this issue :)

Taking a look through, this appears to be a duplicate of #6235 - rather than having multiple issues open tracking the same thing, I'm going to close this issue in favour of that one; would you mind subscribing to #6235 for updates?

Thanks!

@ghost

ghost commented Apr 29, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators Apr 29, 2020