
Fix issue with default_node_pool min/max counts + update documentation #6357

Closed

Conversation


@adback03 commented Apr 3, 2020

The max/min counts in the default node pool should be in the range of 0 to 100, not 1 to 100. This is the same issue as #6020, but the fix was never applied to the default node pool.

I have also updated both sets of documentation to reflect these changes.

One minor update I also snuck into this PR is a fix to the documentation for vnet_subnet_id: it is optional, not required.

EDIT: This is potentially related to #6094 as well.
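
For context, here's a minimal sketch of the fields this PR touches on the default node pool (the resource name and values are illustrative only, not taken from the PR itself):

resource "azurerm_kubernetes_cluster" "example" {
  # ... name, location, resource_group_name, dns_prefix, etc. ...

  default_node_pool {
    name                = "default"
    vm_size             = "Standard_D2_v2"
    enable_auto_scaling = true
    min_count           = 0   # this PR would permit 0 here, matching the fix in #6020
    max_count           = 100
  }
}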

@tombuildsstuff
Contributor

hey @adback03

Thanks for this PR :)

That the min_count and max_count fields of the azurerm_kubernetes_cluster_node_pool resource can be set to 0 is actually a bug we're intending to fix in the v2.5 release. The root cause is this error message here, where this value should be set to null rather than 0 (the confusion stems from the error message being incorrect).

There have recently been some breaking changes in the AKS API, which means we'll unfortunately have to include some breaking changes in the v2.5 release of the Azure Provider. As such, we're going to take this opportunity to fix this bug and require that this field is set to null in both the azurerm_kubernetes_cluster and azurerm_kubernetes_cluster_node_pool resources (and fix the error messages).

These values can be set conditionally using null rather than 0 like so:

locals {
  enable_auto_scale = false
  fixed_count       = 2
  min_count         = 1
  max_count         = 4
}

resource "azurerm_kubernetes_cluster_node_pool" "test" {
  # ...
  enable_auto_scaling = local.enable_auto_scale

  # With auto-scaling enabled, node_count must be null and the min/max
  # bounds apply; with it disabled, the reverse holds.
  node_count          = local.enable_auto_scale ? null : local.fixed_count
  min_count           = local.enable_auto_scale ? local.min_count : null
  max_count           = local.enable_auto_scale ? local.max_count : null
}
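
The same conditional pattern should carry over to the default_node_pool block on the cluster resource itself, which is what this PR targets; a sketch reusing the locals above (the name and vm_size are illustrative, not confirmed in this thread):

resource "azurerm_kubernetes_cluster" "test" {
  # ...
  default_node_pool {
    name                = "default"
    vm_size             = "Standard_D2_v2"
    enable_auto_scaling = local.enable_auto_scale
    node_count          = local.enable_auto_scale ? null : local.fixed_count
    min_count           = local.enable_auto_scale ? local.min_count : null
    max_count           = local.enable_auto_scale ? local.max_count : null
  }
}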

As such, whilst I'd like to thank you for this contribution, since we're looking to take a different approach here I'm going to close this PR for the moment. However, the changes to the vnet_subnet_id field look good, so to get that fix in, I hope you don't mind that I'm going to pull that change into the other (upcoming) PR so that we can ship it.

Thanks!

tombuildsstuff added a commit to wagnst/terraform-provider-azurerm that referenced this pull request Apr 6, 2020

@ghost commented Apr 9, 2020

This has been released in version 2.5.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 2.5.0"
}
# ... other configuration ...


@ghost commented May 6, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost locked and limited conversation to collaborators May 6, 2020