Add kubernetes_version for nodepools. #4327
Hey @asubmani, also: how about adding a separate resource for the agent_pool, since it also exists in the API as its own resource type?
PR: #4355
Possible scenario I am looking to avoid is: ERROR: Orchestrator version cannot be higher than
I thought of something like that, but I didn't find any constraints in that regard. So I completely omitted validation, hoping that either the API validates it correctly and/or the user knows what he or she is doing. The only thing that seems valid to me would be to check if the
@asubmani @r0bnet the versions do need to adhere to some rules, defined here: https://docs.microsoft.com/en-us/azure/aks/use-multiple-node-pools#upgrade-a-cluster-control-plane-with-multiple-node-pools However, IMO it's better to defer the validation to AKS as the source of truth, so you're not constantly trying to stay in sync with AKS if the rules change over time.
To illustrate the moving target of rules for TF and a desire to avoid it: the supported window of versions for AKS is moving from N-3 to N-2 in a few months to align with Kubernetes supported windows (mentioned here: Azure/AKS#1235).
Can't be done according to MS support. Reason: #4355 (comment)
Do you know if this constraint is still valid, @r0bnet? As far as I can see in
There is also this comment from @tombuildsstuff: 659816f#diff-6947e91c40156730f5c531d15ca3d798R151
@evenh you mean the version constraints? I think so; at least the documentation states that. Regarding setting the version at all (even on the default agent pool), it might work, but I'm not sure. You should check if it works: just add the orchestrator version to the agent pool and deploy a cluster where the control plane has a different version.
Also not sure which constraint is being mentioned, but the node pool vs. control plane upgrade constraints are here:
The differences might be in the "default" nodepool vs. "additional node pools". The SDK/API allows it (I have done this using
> @jluk: regardless of N-3 or N-2, the nodepool (leaving aside the default pool created at cluster creation time) cannot be higher than the "control plane". So checking this during

az cli uses "kubernetes-version" to indicate the version of Kubernetes for the nodepool.
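To make the constraint concrete, here is a hedged HCL sketch (the `orchestrator_version` argument and resource names assume the azurerm provider's eventual shape of this feature; the exact version numbers are illustrative): an additional node pool may run an older Kubernetes version, but never a newer one than the control plane's `kubernetes_version`.

```hcl
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "exampleaks"
  kubernetes_version  = "1.17.3" # control plane version

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}

# An additional pool may lag behind the control plane,
# but its version must not exceed 1.17.3 above.
resource "azurerm_kubernetes_cluster_node_pool" "older" {
  name                  = "older"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id
  vm_size               = "Standard_DS2_v2"
  node_count            = 1
  orchestrator_version  = "1.16.7"
}
```

Setting `orchestrator_version` higher than the control plane would be rejected by the AKS API rather than by Terraform, in line with the "defer validation to AKS" position above.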
This has been released in version 2.14.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

```hcl
provider "azurerm" {
  version = "~> 2.14.0"
}

# ... other configuration ...
```
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
Description
AKS supports having nodepools running different versions of Kubernetes. Please add a property that allows setting the Kubernetes version for the agent_pool_profile block.
Affected Resource(s)
Potential Terraform Configuration
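A hedged sketch of what this could look like (the `kubernetes_version` attribute inside `agent_pool_profile` is this issue's proposal, not a shipped provider argument; all names and versions are illustrative):

```hcl
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "exampleaks"
  kubernetes_version  = "1.14.6" # control plane version

  # Proposed: a per-pool Kubernetes version on the pool profile.
  agent_pool_profile {
    name               = "pool1"
    count              = 2
    vm_size            = "Standard_DS2_v2"
    kubernetes_version = "1.13.10"
  }

  service_principal {
    client_id     = "00000000-0000-0000-0000-000000000000"
    client_secret = "example-secret"
  }
}
```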
References