
AKS cluster gets recreated after upgrading to 4.33.0 and higher because of publicNetworkAccessEnabled #1028

Open
muellermatthias opened this issue Feb 9, 2022 · 2 comments
Labels
kind/bug Some behavior is incorrect or out of spec

Comments

@muellermatthias

Hello!

  • Vote on this issue by adding a 👍 reaction
  • To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already)

Issue details

Steps to reproduce

  1. Create an AKS cluster with version 4.32.0 or lower of the provider (a minimal example program is sketched after these steps)
  2. Upgrade to 4.33.0 or higher
  3. Run pulumi up
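
For context, the affected program is roughly the shape sketched below. This is only a minimal sketch, assuming TypeScript and the classic @pulumi/azure provider; the resource names, location, and node size are placeholders, not the actual setup:

```typescript
import * as azure from "@pulumi/azure";

// Placeholder resource group; name and location are made up for illustration.
const resourceGroup = new azure.core.ResourceGroup("aks-rg", {
    location: "West Europe",
});

// Minimal AKS cluster. Note that publicNetworkAccessEnabled is never set here;
// the property only exists in the provider schema starting with 4.33.0.
const cluster = new azure.containerservice.KubernetesCluster("aks", {
    location: resourceGroup.location,
    resourceGroupName: resourceGroup.name,
    dnsPrefix: "exampleaks",
    defaultNodePool: {
        name: "default",
        nodeCount: 1,
        vmSize: "Standard_D2_v2",
    },
    identity: {
        type: "SystemAssigned",
    },
});
```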

Expected:
There should be no changes
Actual:
Pulumi wants to replace the cluster:

 +-azure:containerservice/kubernetesCluster:KubernetesCluster: (replace)
  [id=/subscriptions/...]
  [urn=urn:pulumi....]
  [provider: urn:pulumi...providers:azure::default_4_27_0... => urn:pulumi...providers:azure::default_4_33_0...]
+ publicNetworkAccessEnabled: true

I checked the changes in the underlying Terraform provider release: public_network_access_enabled was added in that version, and it is declared with ForceNew: true:
https://github.com/hashicorp/terraform-provider-azurerm/blob/03210c7fde66e5745bdf80b507f2f4a78c31ede4/internal/services/containers/kubernetes_cluster_resource.go#L663

One workaround I found is using ignoreChanges:
ignoreChanges: ["publicNetworkAccessEnabled"],
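
Applied to the resource, this looks roughly like the sketch below (same placeholder names as above; ignoreChanges goes into the resource options, i.e. the third constructor argument):

```typescript
// Continuing from the sketch above: the same cluster, now with resource options
// telling Pulumi to ignore the newly introduced property.
const cluster = new azure.containerservice.KubernetesCluster("aks", {
    location: resourceGroup.location,
    resourceGroupName: resourceGroup.name,
    dnsPrefix: "exampleaks",
    defaultNodePool: {
        name: "default",
        nodeCount: 1,
        vmSize: "Standard_D2_v2",
    },
    identity: {
        type: "SystemAssigned",
    },
}, {
    // Ignore the property added in 4.33.0 so its new default does not
    // show up as a diff that forces a replacement of the cluster.
    ignoreChanges: ["publicNetworkAccessEnabled"],
});
```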

@muellermatthias added the kind/bug label on Feb 9, 2022
@danielrbradley
Member

Thanks for reporting. That looks like a good temporary workaround.

Might this also be fixed by a pulumi refresh to pull the new default value into your state? At that point you should be able to remove the ignoreChanges again.

@Gerrit-K

I just stumbled across this and checked @danielrbradley's suggestion right away. Unfortunately, the field doesn't seem to get updated when running pulumi refresh, and a subsequent pulumi up still marks the cluster for replacement. The workaround using ignoreChanges does work, though.
