aws_launch_template with eks managed node groups does not allow adding metadata_options #25298
Comments
Hey @lorelei-rupp-imprivata 👋 Thank you for taking the time to raise this! So that we have all of the necessary information in order to look into this, can you supply (redacted as necessary) debug logs as well?
Here is the TF debug output for the failure. It's really just reflecting what the AWS console says too: that the instances cannot connect to the cluster.
Similar issue: #25909
Still an issue; is there work slated to fix this?
This may actually be fixed now; with provider 4.46.0 I am seeing the workers join the cluster.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Terraform CLI and Terraform AWS Provider Version
Terraform 0.14.7
AWS Provider 3.73.3
Affected Resource(s)
aws_launch_template
Terraform Configuration Files
Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.
We have the resource defined like this:
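(The original snippet did not survive in this copy of the issue. What follows is a minimal, hypothetical sketch of a launch template with a metadata_options block; the resource name and values are illustrative assumptions, not the reporter's actual configuration.)

```hcl
# Illustrative sketch only: names, instance settings, and metadata values are
# assumptions, not the original configuration from this report.
resource "aws_launch_template" "eks_workers" {
  name_prefix   = "eks-app-workers-"
  instance_type = "m5.large"

  # Harden the instance metadata service by requiring IMDSv2 session tokens.
  metadata_options {
    http_endpoint               = "enabled"
    http_tokens                 = "required"
    http_put_response_hop_limit = 2
  }
}
```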
Expected Behavior
We are trying to lock down our instance metadata according to https://docs.bridgecrew.io/docs/bc_aws_general_31 and best practices.
We expected to be able to configure metadata_options on our launch template and have the worker nodes come online and connect to the cluster.
Actual Behavior
When adding the metadata options as in the snippet above, we see these errors:
error waiting for EKS Node Group (saas-checkovfi2-eks:saas-checkovfi2-eks-app-workers-us-west-2a) to create: unexpected state 'CREATE_FAILED', wanted target 'ACTIVE'. last error: 1 error occurred:
* i-09df7a27f0d426e1a, i-0fd09d65187ba7f7e: NodeCreationFailure: Instances failed to join the kubernetes cluster
Steps to Reproduce
Create an EKS managed node group with a launch template that sets metadata_options (see the sketch below).
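For context, a minimal, hypothetical sketch of how such a node group might reference the launch template; the cluster, IAM role, and subnet references are placeholders, not taken from the original report:

```hcl
# Illustrative sketch: cluster, role, and subnet references are placeholders.
resource "aws_eks_node_group" "app_workers" {
  cluster_name    = aws_eks_cluster.this.name
  node_group_name = "app-workers"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.private_subnet_ids

  # Point the managed node group at the launch template that sets metadata_options.
  launch_template {
    id      = aws_launch_template.eks_workers.id
    version = aws_launch_template.eks_workers.latest_version
  }

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }
}
```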