terraform plan always forces recreation of the AKS cluster, even though nothing has changed in CI #6287
Can you give us a bit more information, for example (a redacted version of) your configuration (.tf) files?
@aristosvo Please check the following configuration of terraform (k8s.tf):

```hcl
# Create Kubernetes Cluster
resource "azurerm_kubernetes_cluster" "k8s" {
  # Define default node pool for kubernetes worker nodes
  default_node_pool { ... }

  # Define service principal
  service_principal { ... }

  # Define the username that should be available on the nodes
  linux_profile { ... }

  # Enable kubernetes dashboard
  addon_profile { ... }

  # Enable role-based access control for the AKS cluster
  role_based_access_control { ... }

  # Define network profile: load balancer SKU, CNI plugin, network policy, service IP range, docker bridge CIDR
  network_profile { ... }

  tags = var.tags
}

# Create node pool for kubernetes cluster
resource "azurerm_kubernetes_cluster_node_pool" "aks" {
  # Define node pool configuration
  kubernetes_cluster_id = azurerm_kubernetes_cluster.k8s.id
  ...
}

provider "kubernetes" { ... }
```
Do you define the version of the azurerm provider somewhere? If not, when did you create your cluster for the first time and when did you try to update? Example of a provider version definition:

```hcl
provider "azurerm" {
  version = "~>1.44.0"
}
```

BTW, if you could update the formatting in your previous messages it would help a lot. For example, always use ``` around your code:
```hcl
# Create Kubernetes Cluster
resource "azurerm_kubernetes_cluster" "k8s" {
  name                            = "${var.resource_prefix}-k8s"
  location                        = var.location
  resource_group_name             = var.resource_group_name
  dns_prefix                      = "${var.resource_prefix}-k8s"
  kubernetes_version              = var.kubernetes_version
  api_server_authorized_ip_ranges = var.allowed_ipaddress

  # Define default node pool for kubernetes worker nodes
  default_node_pool {
    name                = "defaultnode"
    node_count          = var.nodecount
    vm_size             = var.vm_size
    vnet_subnet_id      = var.vnet_subnet_id
    enable_auto_scaling = var.enable_auto_scaling
    type                = "VirtualMachineScaleSets"
    availability_zones  = var.availability_zones
  }

  # Define service principal
  service_principal {
    client_id     = var.service_prinicipal_client_id
    client_secret = var.service_prinicipal_client_secret
  }

  # Define the username that should be available on the nodes
  linux_profile {
    admin_username = var.username
    ssh_key {
      key_data = var.key_data
    }
  }

  # Enable kubernetes dashboard
  addon_profile {
    kube_dashboard {
      enabled = true
    }
  }

  # Enable role-based access control for the AKS cluster
  role_based_access_control {
    enabled = true
    azure_active_directory {
      client_app_id     = var.client_app_id
      server_app_id     = var.server_app_id
      server_app_secret = var.server_app_secret
      tenant_id         = var.tenant_id
    }
  }

  # Define network profile: load balancer SKU, CNI plugin, network policy, service IP range, docker bridge CIDR
  network_profile {
    load_balancer_sku  = var.load_balancer_sku
    network_plugin     = var.network_plugin
    network_policy     = var.network_policy
    dns_service_ip     = var.dns_service_ip
    docker_bridge_cidr = var.docker_bridge_cidr

    # Generate the address_prefix 10.0.64.0/20
    service_cidr = cidrsubnet(tostring(join(", ", var.vnet_address_space)), 4, 4)
  }

  tags = var.tags
}

# Create node pool for kubernetes cluster
resource "azurerm_kubernetes_cluster_node_pool" "aks" {
  lifecycle {
    ignore_changes = [
      node_count
    ]
  }

  # Define node pool configuration
  kubernetes_cluster_id = azurerm_kubernetes_cluster.k8s.id
  name                  = "nodepool"
  node_count            = var.nodecount
  vm_size               = var.vm_size
  availability_zones    = var.availability_zones
  max_pods              = var.max_pods
  os_disk_size_gb       = var.os_disk_size_gb
  os_type               = var.os_type
  vnet_subnet_id        = var.vnet_subnet_id
  node_taints           = null
  enable_auto_scaling   = true
  min_count             = var.min_count
  max_count             = var.max_count
}

provider "kubernetes" {
  load_config_file       = false
  host                   = azurerm_kubernetes_cluster.k8s.kube_config.0.host
  username               = azurerm_kubernetes_cluster.k8s.kube_config.0.username
  password               = azurerm_kubernetes_cluster.k8s.kube_config.0.password
  client_key             = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_key)
  client_certificate     = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.client_certificate)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s.kube_config.0.cluster_ca_certificate)
}
```
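As a side note on the `service_cidr` line: here is a minimal, standalone sketch of what that expression evaluates to, assuming `var.vnet_address_space` is `["10.0.0.0/16"]` (the value implied by the `10.0.64.0/20` comment). Note that `join(", ", ...)` only behaves as intended because the list has a single element.

```hcl
# Illustration only: not part of the configuration above.
locals {
  # Assumed VNet address space; substitute your real value.
  vnet_address_space = ["10.0.0.0/16"]

  # join() collapses the single-element list to "10.0.0.0/16".
  # cidrsubnet() then adds 4 bits to the prefix (/16 -> /20) and selects
  # subnet number 4, i.e. the range starting at 10.0.64.0.
  service_cidr = cidrsubnet(join(", ", local.vnet_address_space), 4, 4)
}

output "service_cidr" {
  value = local.service_cidr # => "10.0.64.0/20"
}
```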
@aristosvo Today I was trying to create the cluster. Once the cluster was created I just ran the terraform plan command, and the plan showed that the k8s cluster forces replacement. I am using the following providers:

```
├── provider.azuread ~> 0.3
```
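In case it helps, a minimal sketch of what pinning both providers would look like; the version constraints below are illustrative assumptions, not values taken from this setup:

```hcl
provider "azuread" {
  version = "~>0.3"
}

provider "azurerm" {
  # Illustrative constraint; pin to the version the cluster was created with.
  version = "~>1.44.0"
}
```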
@pawankmr301, @aristosvo The plan would suggest the following replacement:

The plan returns no changes once I appended the

OR add the following under the resource to ignore it:
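For reference, a minimal sketch of the ignore-changes approach mentioned above. The attribute listed is only a placeholder, since the exact attribute reported as forcing replacement isn't reproduced here; use whatever your plan output flags.

```hcl
resource "azurerm_kubernetes_cluster" "k8s" {
  # ... existing cluster configuration ...

  lifecycle {
    ignore_changes = [
      # Hypothetical example; substitute the attribute your plan output
      # marks with "forces replacement".
      service_principal,
    ]
  }
}
```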
I am seeing the same behavior, where successive runs with no source code change force redeployment. In addition, I am also seeing an issue where it says Terraform will change
hi @pawankmr301 Thanks for opening this issue :) Taking a look through, this appears to be a duplicate of #6235 - rather than having multiple issues open tracking the same thing I'm going to close this issue in favour of that one; would you mind subscribing to #6235 for updates? Thanks!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉, please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
AKS cluster created, but when I run the terraform plan command it shows that the following resource must be replaced, even though there is no change in the configuration.