fix: Ensure the correct service CIDR and IP family is used in the rendered user data #2963
Conversation
```
@@ -388,6 +389,9 @@ module "disabled_eks_managed_node_group" {
  source = "../../modules/eks-managed-node-group"

  create = false

  # Hard requirement
  cluster_service_cidr = ""
```
This is the first time we require something to be set for "disabled" modules, as far as I can see. Can we still make it optional, to be consistent?
I've come up with a way to make this required only when `create = true`, using a `precondition` on a `null_resource` that does nothing (but enforce the check): https://github.com/terraform-aws-modules/terraform-aws-eks/pull/2963/files#diff-87747f89f95b121dfc7a5f6e40ee0a56f1f98449581c02bd34f2171210c2f14dR1-R12
When `create = true` and you do not provide a `cluster_service_cidr`, you now get this error, which is what we (I) want, since the value is required when creating a node group:
I did have to bump the Terraform MSV from `1.3.0` to `1.3.2` to pick up this patch: hashicorp/terraform#31846
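Concretely, the MSV bump amounts to raising the module's `required_version` constraint (a minimal sketch; the exact constraint string in the module is assumed):

```hcl
terraform {
  # 1.3.2 includes the fix for hashicorp/terraform#31846
  # ("panic: duplicate checkable objects report" with nested preconditions)
  required_version = ">= 1.3.2"
}
```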
Why 6? and LGTM :)
modules/_user_data/main.tf (outdated)
```hcl
resource "null_resource" "validate_cluster_service_cidr" {
  lifecycle {
    precondition {
      condition = var.create ? length(local.cluster_service_cidr) > 6 : true # length(local.cluster_service_cidr) == 0
```
Where does `6` come from?
I just picked something 😅 - the smallest is 4 with IPv6 `::/0`, but that's not a valid service CIDR range. On the IPv4 side, the smallest is 11 with something like `10.0.0.0/16` - I didn't want to be overly restrictive or pedantic about the value, so I went with 6 🤷🏽♂️
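To illustrate the trade-off in the threshold (illustrative HCL, not from the PR): a length check screens out obviously-too-short strings but accepts any sufficiently long string, valid CIDR or not:

```hcl
locals {
  # length("::/0")        = 4  -> rejected (good: not a valid service CIDR)
  # length("10.0.0.0/16") = 11 -> accepted (good: a real IPv4 service CIDR)
  # length("not-a-cidr")  = 10 -> accepted (bad: length alone cannot validate)
  passes_length_check = length("10.0.0.0/16") > 6
}
```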
There's probably a better way, but the gist of the check is that something needs to be provided when `create = true`.
Cleaned up the commented out code and added a note on the arbitrary value of 6 in 1f2ef2a
Thanks for the explanation! Something like this is a bit more flexible (probably): `try(cidrhost(local.cluster_service_cidr, 0) != "::", false)`
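The suggested alternative uses Terraform's `cidrhost()` to actually parse the value rather than just measure its length (a sketch of how the expression behaves, reusing the local name from the PR):

```hcl
locals {
  # cidrhost() errors on anything that is not valid CIDR notation, and
  # try() converts that error into false. The comparison against "::"
  # additionally rejects the degenerate ::/0 range, whose host 0 is "::".
  cidr_is_valid = try(cidrhost(local.cluster_service_cidr, 0) != "::", false)
}
```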
## [20.8.3](v20.8.2...v20.8.3) (2024-03-12)

### Bug Fixes

* Ensure the correct service CIDR and IP family is used in the rendered user data ([#2963](#2963)) ([aeb9f0c](aeb9f0c))
This PR is included in version 20.8.3 🎉
I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description

- Makes `cluster_service_cidr` a hard requirement - because of this change in behavior, users who use disconnected node groups (those created outside the root module, using the sub-module directly) could be caught off guard by nodes not joining if they do not add this value. For users of the root module approach (most common), this is a no-op on their part, since the module ensures the necessary values are passed through to the child modules correctly
- `toset()` with static keys for node IAM role policy attachment #2962 - this was missed in that PR
- Bumps the Terraform MSV from `1.3.0` to `1.3.2` to pick up the patch for nested `precondition` (inside `for_each`) "panic: duplicate checkable objects report" in terraform apply hashicorp/terraform#31846

Motivation and Context
While working on #2960 (`AL2023_*` AMI types; ensure AL2023 user data receives cluster service CIDR), I noticed that not all nodes had an IPv6 internal IP - this is odd given that the cluster is an IPv6 cluster.

Before (#2960):
After (this PR):
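For users consuming the sub-module directly (the "disconnected node group" case from the Description), the new requirement looks roughly like this (a sketch only - the source path, names, and all values are illustrative, not taken from the PR):

```hcl
module "eks_managed_node_group" {
  source = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"

  name            = "standalone"
  cluster_name    = "my-cluster"
  cluster_version = "1.29"

  # Now a hard requirement - must match the cluster's service CIDR,
  # otherwise nodes may fail to join (illustrative value)
  cluster_service_cidr = "172.20.0.0/16"

  subnet_ids = ["subnet-abcde012", "subnet-bcde012a"]
}
```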
Breaking Changes
How Has This Been Tested?
- I have updated at least one of the `examples/*` to demonstrate and validate my change(s)
- I have tested and validated these changes using one or more of the provided `examples/*` projects
- I have executed `pre-commit run -a` on my pull request