
ERROR Creating EKS Node Group: InvalidRequestException: Cannot specify instance types in launch template and API Request #1386

Closed
ivialex-mcd opened this issue May 24, 2021 · 19 comments


@ivialex-mcd

Description

When I try to create a managed node group using a launch template, I get the error below:

[screenshot: InvalidRequestException error message]

Versions

  • Terraform:

$ terraform -version
Terraform v0.15.0
on windows_amd64

  • Provider(s):

provider registry.terraform.io/hashicorp/aws v3.40.0
provider registry.terraform.io/hashicorp/cloudinit v2.2.0
provider registry.terraform.io/hashicorp/kubernetes v1.13.3
provider registry.terraform.io/hashicorp/local v2.1.0
provider registry.terraform.io/hashicorp/null v3.1.0
provider registry.terraform.io/hashicorp/random v3.1.0
provider registry.terraform.io/hashicorp/template v2.2.0
provider registry.terraform.io/terraform-aws-modules/http v2.4.0

  • Module:

terraform-aws-eks

Reproduction

Steps to reproduce the behavior:
I'm trying to deploy EKS and node groups using the example launch_templates_with_managed_node_groups as a model.
I'm using Git Bash to run the Terraform commands.

Code Snippet to Reproduce

Example launch_templates_with_managed_node_groups

Expected behavior

Actual behavior

After I ran the command terraform apply -auto-approve, it showed the error below:

[screenshot: InvalidRequestException error output]

Terminal Output Screenshot(s)

[screenshot: terminal output showing the error]

Additional context

N/A

@barryib
Member

barryib commented May 24, 2021

Can you please share your configuration? It sounds like you're passing the wrong instance types to the LT.

@ivialex-mcd
Author

Can you please share your configuration? It sounds like you're passing the wrong instance types to the LT.

Hi @barryib ,

Here are my .tf files:

launchtemplate.tf

resource "aws_kms_key" "ebs" {
  description = "EBS Secret Encryption Key"
}

resource "aws_launch_template" "node_group_gp" {
  name_prefix            = "lt-eks-node-group-gp-${local.cluster_name}-"
  description            = "EKS Launch-Template for Node Group General Purpose"
  update_default_version = true

  block_device_mappings {
    device_name = "/dev/xvda"

    ebs {
      volume_size           = 100
      volume_type           = "gp2"
      delete_on_termination = true
      encrypted             = true
      kms_key_id            = aws_kms_key.ebs.arn
    }
  }

  instance_type = "t3.medium"
  key_name      = ""

  monitoring {
    enabled = true
  }

  network_interfaces {
    associate_public_ip_address = false
    delete_on_termination       = true
    security_groups             = [module.eks.worker_security_group_id]
  }

  image_id = data.aws_ami.my_worker_ami.id

  user_data = base64encode(
    data.template_file.launch_template_userdata.rendered,
  )

  # Supplying custom tags to EKS instances is another use-case for Launch Templates
  tag_specifications {
    resource_type = "instance"

    tags = {
      EKSNodeTypeDescription = "EKS Node Group General Purpose"
    }
  }

  # Supplying custom tags to EKS instances' root volumes is another use-case for
  # Launch Templates (doesn't add tags to volumes dynamically provisioned via PVC, though)
  tag_specifications {
    resource_type = "volume"

    tags = {
      EKSNodeTypeDescription = "EKS Node Group General Purpose"
    }
  }

  # Tag the LT itself
  tags = {
    EKSNodeTypeDescription = "EKS Node Group General Purpose"
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_launch_template" "node_group_lg" {
  name_prefix            = "lt-eks-node-group-lg-${local.cluster_name}-"
  description            = "EKS Launch-Template for Node Group Larger Nodes"
  update_default_version = true

  block_device_mappings {
    device_name = "/dev/xvda"

    ebs {
      volume_size           = 200
      volume_type           = "io1"
      delete_on_termination = true
      encrypted             = true
      kms_key_id            = aws_kms_key.ebs.arn
    }
  }

  instance_type = "m5.large"
  key_name      = "key-pair-eks-cluster-rfm-dev"

  monitoring {
    enabled = true
  }

  network_interfaces {
    associate_public_ip_address = false
    delete_on_termination       = true
    security_groups             = [module.eks.worker_security_group_id]
  }

  image_id = data.aws_ami.my_worker_ami.id

  user_data = base64encode(
    data.template_file.launch_template_userdata.rendered,
  )

  # Supplying custom tags to EKS instances is another use-case for Launch Templates
  tag_specifications {
    resource_type = "instance"

    tags = {
      EKSNodeTypeDescription = "EKS Node Group Larger Nodes"
    }
  }

  # Supplying custom tags to EKS instances' root volumes is another use-case for
  # Launch Templates (doesn't add tags to volumes dynamically provisioned via PVC, though)
  tag_specifications {
    resource_type = "volume"

    tags = {
      EKSNodeTypeDescription = "EKS Node Group Larger Nodes"
    }
  }

  # Tag the LT itself
  tags = {
    EKSNodeTypeDescription = "EKS Node Group Larger Nodes"
  }

  lifecycle {
    create_before_destroy = true
  }
}

main.tf

# Modules

#-----------------------------------------------------------------------------------------------------------------#
# Terraform State Backend

module "tfstate-backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "0.33.0"

  namespace                          = local.tfstate_namespace
  stage                              = local.tfstate_stage
  name                               = local.tfstate_name
  attributes                         = local.tfstate_attributes
  force_destroy                      = local.tfstate_force_destroy
  terraform_backend_config_file_path = local.tfstate_terraform_backend_config_file_path
  terraform_backend_config_file_name = local.tfstate_terraform_backend_config_file_name
}

#-----------------------------------------------------------------------------------------------------------------#
# EKS Cluster

# Resources

resource "aws_kms_key" "eks" {
  description = "EKS Secret Encryption Key"
}

resource "aws_security_group_rule" "workers_ingress_bastion_host" {
  description              = "Allow workers group on port 22 to receive communication from the bastion host."
  protocol                 = "tcp"
  security_group_id        = module.eks.worker_security_group_id
  source_security_group_id = local.bastion_host_security_group_id
  from_port                = 22
  to_port                  = 22
  type                     = "ingress"
}

# Module

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "16.1.0"

  cluster_iam_role_name           = local.cluster_iam_role_name
  cluster_name                    = local.cluster_name
  cluster_security_group_id       = local.cluster_security_group_id
  cluster_version                 = local.cluster_version
  cluster_enabled_log_types       = local.cluster_enabled_log_types
  cluster_encryption_config       = local.cluster_encryption_config
  cluster_endpoint_private_access = local.cluster_endpoint_private_access
  manage_cluster_iam_resources    = local.cluster_manage_iam_resources
  subnets                         = data.aws_subnet_ids.my_private_subnets.ids
  vpc_id                          = data.aws_vpc.my_vpc.id

  node_groups = {
    node_group_gp = local.node_group_gp,
    node_group_lg = local.node_group_lg,
  }
}

versions.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.36"
    }

    http = {
      source  = "terraform-aws-modules/http"
      version = "2.4.1"
    }

    local      = ">= 1.4"
    null       = ">= 2.1"
    template   = ">= 2.1"
    random     = ">= 2.1"
    kubernetes = "~> 1.11"
  }
}

data.tf

Data Sources

#-----------------------------------------------------------------------------------------------------------------#
# VPC

data "aws_vpc" "my_vpc" {
  id = local.vpc_id
}

data "aws_subnet_ids" "my_private_subnets" {
  vpc_id = data.aws_vpc.my_vpc.id

  filter {
    name   = "tag:Name"
    values = local.vpc_private_subnets
  }
}

data "aws_subnet" "my_private_subnets" {
  count = length(data.aws_subnet_ids.my_private_subnets.ids)
  id    = tolist(data.aws_subnet_ids.my_private_subnets.ids)[count.index]
}

#-----------------------------------------------------------------------------------------------------------------#
# EKS Cluster

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

data "aws_ami" "my_worker_ami" {
  most_recent = true
  owners      = local.worker_ami_owners_id

  filter {
    name   = "name"
    values = local.worker_ami_names
  }
}

#-----------------------------------------------------------------------------------------------------------------#
# Launch Template

data "template_file" "launch_template_userdata" {
  template = file("${path.module}/templates/userdata.sh.tpl")

  vars = {
    cluster_name        = local.cluster_name
    endpoint            = module.eks.cluster_endpoint
    cluster_auth_base64 = module.eks.cluster_certificate_authority_data

    bootstrap_extra_args = ""
    kubelet_extra_args   = ""
  }
}

provider.tf

Providers

#-----------------------------------------------------------------------------------------------------------------#
# AWS

provider "aws" {
  region = var.region

  assume_role {
    # Amazon Resource Name (ARN) of the IAM Role to assume
    role_arn = "arn:aws:iam::${var.account_id}:role/${var.role_name}"
    # Session name to use when assuming the role.
    session_name = var.session_name
  }
}

# EKS Cluster

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}

locals.tf

Local variables

locals {
  #-----------------------------------------------------------------------------------------------------------------#
  # Terraform State Backend

  tfstate_namespace                          = "${var.org}-${var.product}"
  tfstate_stage                              = "dev"
  tfstate_name                               = "terraform-backend"
  tfstate_attributes                         = ["state"]
  tfstate_force_destroy                      = false
  tfstate_terraform_backend_config_file_path = "."
  tfstate_terraform_backend_config_file_name = "backend.tf"

  #-----------------------------------------------------------------------------------------------------------------#
  # EKS Cluster

  cluster_iam_role_name           = ""
  cluster_name                    = "eks-cluster-${var.org}-${var.product}-${var.env}"
  cluster_security_group_id       = "<SECURITY GROUP ID>"
  cluster_version                 = "1.19"
  cluster_manage_iam_resources    = false
  cluster_endpoint_private_access = true
  cluster_enabled_log_types       = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

  cluster_encryption_config = [
    {
      provider_key_arn = aws_kms_key.eks.arn
      resources        = ["secrets"]
    }
  ]

  bastion_host_security_group_id = ""
  worker_ami_owners_id           = [""]
  worker_ami_names               = [""]

  node_group_gp = {
    min_capacity     = 2
    desired_capacity = 2
    max_capacity     = 4
    iam_role_arn     = ""
    key_name         = ""

    launch_template_id      = aws_launch_template.node_group_gp.id
    launch_template_version = aws_launch_template.node_group_gp.default_version

    additional_tags = {
      EKSNodeTypeDescription = "EKS Node Group General Purpose"
    }
  }

  node_group_lg = {
    min_capacity     = 2
    desired_capacity = 2
    max_capacity     = 4
    iam_role_arn     = ""
    key_name         = ""

    launch_template_id      = aws_launch_template.node_group_lg.id
    launch_template_version = aws_launch_template.node_group_lg.default_version

    additional_tags = {
      EKSNodeTypeDescription = "EKS Node Group Larger Nodes"
    }
  }

  #-----------------------------------------------------------------------------------------------------------------#
  # VPC

  vpc_id              = ""
  vpc_private_subnets = ["<subnet 1a>", "<subnet 1b>"]
  vpc_public_subnets  = ["<subnet 1a>", "<subnet 1b>"]
}

@ivialex-mcd
Author

Hi @barryib ,

How are you doing?

Do you have any updates about this error?

Thanks.

@bryansakowski-mcd

This looks similar to #1211, and I'm able to reproduce the error with the current example code for 'launch_templates_with_managed_node_groups' with a custom launch template on module version v16.2.0. Removing the instance_type from the launch template and specifying it on the node_group instead resulted in a successful apply.
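That workaround can be sketched roughly as follows; the resource and node group names are illustrative, and the inputs assume the node_groups map of terraform-aws-eks v16.x:

```hcl
# Launch template with NO instance_type set, so the type is only
# supplied through the EKS API (via the node_groups input below)
resource "aws_launch_template" "example" {
  name_prefix = "eks-example-"
  # instance_type deliberately omitted
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "16.2.0"
  # ... cluster inputs elided ...

  node_groups = {
    example = {
      # Specify the instance type on the node group (API) side instead
      instance_types          = ["t3.medium"]
      launch_template_id      = aws_launch_template.example.id
      launch_template_version = aws_launch_template.example.default_version
    }
  }
}
```

The point is that the type is declared in exactly one place, which avoids the "Cannot specify instance types in launch template and API request" rejection.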

@barryib
Member

barryib commented May 28, 2021

It's hard for me to read your code because it's not formatted correctly.

I suspect that you're passing something which is not supported in a Managed Node Group launch template.

Furthermore, if your needs are simple, you can set create_launch_template in your node groups to let the module create the Launch Template for you. Please see the docs for more info.
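A minimal sketch of that suggestion, assuming the node_groups inputs supported by the module's node_groups submodule (all names and values are illustrative):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "16.2.0"
  # ... cluster inputs elided ...

  node_groups = {
    example = {
      desired_capacity = 2
      max_capacity     = 4
      min_capacity     = 2

      # Let the module create and manage the launch template,
      # instead of wiring in an externally defined one
      create_launch_template = true
      instance_types         = ["t3.medium"]
      disk_size              = 100
    }
  }
}
```

With create_launch_template = true you drop the launch_template_id / launch_template_version inputs entirely, so there is no external LT for the instance type to conflict with.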

@ivialex

ivialex commented May 31, 2021

It's hard for me to read your code because it's not formatted correctly.

I suspect that you're passing something which is not supported in a Managed Node Group launch template.

Furthermore, if your needs are simple, you can set create_launch_template in your node groups to let the module create the Launch Template for you. Please see the docs for more info.

Hi @barryib ,

I will test setting create_launch_template and will let you know about my tests ASAP.

Thanks.

@p53

p53 commented Jun 2, 2021

@barryib I tried creating a launch template containing only the instance type and am still getting the same error:

resource "aws_launch_template" "default" {
  name_prefix   = "eks-example-"
  instance_type = "t3.small"
}

  node_groups_defaults = {
    ami_release_version     = "1.20.4-20210519"
    desired_capacity        = 1
    max_capacity            = 10
    min_capacity            = 1
    launch_template_id      = aws_launch_template.default.id
    launch_template_version = aws_launch_template.default.default_version
  }

  node_groups = {
    "first" = {
      subnets = [module.vpc.public_subnets[0]]
    },
    "second" = {
      subnets = [module.vpc.public_subnets[1]]
    },
    "third" = {
      subnets = [module.vpc.public_subnets[2]]
    }
  }
Error:

aws_launch_template.default: Modifying... [id=lt-0257ba28ae1a3cd24]
aws_launch_template.default: Modifications complete after 1s [id=lt-0257ba28ae1a3cd24]
module.eks.module.node_groups.aws_eks_node_group.workers["second"]: Creating...
module.eks.module.node_groups.aws_eks_node_group.workers["third"]: Creating...
module.eks.module.node_groups.aws_eks_node_group.workers["first"]: Creating...

Error: error creating EKS Node Group (test-eks-NOLdp8gJ/test-eks-NOLdp8gJ-first-neutral-pup): InvalidRequestException: Cannot specify instance types in launch template and API request
{
  RespMetadata: {
    StatusCode: 400,
    RequestID: "897b63e5-2903-4a43-bb93-6fc44bb3ebdd"
  },
  ClusterName: "test-eks-NOLdp8gJ",
  Message_: "Cannot specify instance types in launch template and API request",
  NodegroupName: "test-eks-NOLdp8gJ-first-neutral-pup"
}

  on .terraform/modules/eks/modules/node_groups/node_groups.tf line 1, in resource "aws_eks_node_group" "workers":
   1: resource "aws_eks_node_group" "workers" {



Error: error creating EKS Node Group (test-eks-NOLdp8gJ/test-eks-NOLdp8gJ-second-curious-mudfish): InvalidRequestException: Cannot specify instance types in launch template and API request
{
  RespMetadata: {
    StatusCode: 400,
    RequestID: "5b38b7e1-49ff-425f-9d0c-2e747ce4a8de"
  },
  ClusterName: "test-eks-NOLdp8gJ",
  Message_: "Cannot specify instance types in launch template and API request",
  NodegroupName: "test-eks-NOLdp8gJ-second-curious-mudfish"
}

  on .terraform/modules/eks/modules/node_groups/node_groups.tf line 1, in resource "aws_eks_node_group" "workers":
   1: resource "aws_eks_node_group" "workers" {



Error: error creating EKS Node Group (test-eks-NOLdp8gJ/test-eks-NOLdp8gJ-third-golden-seasnail): InvalidRequestException: Cannot specify instance types in launch template and API request
{
  RespMetadata: {
    StatusCode: 400,
    RequestID: "46c2c088-9f78-47e1-bd66-9d45bebf6abc"
  },
  ClusterName: "test-eks-NOLdp8gJ",
  Message_: "Cannot specify instance types in launch template and API request",
  NodegroupName: "test-eks-NOLdp8gJ-third-golden-seasnail"
}

  on .terraform/modules/eks/modules/node_groups/node_groups.tf line 1, in resource "aws_eks_node_group" "workers":
   1: resource "aws_eks_node_group" "workers" {

@daroga0002
Contributor

The AWS API seems to be rejecting the requests. I propose you enable debug output in Terraform by setting the env variable TF_LOG=trace and then collect the output from that attempt (it will produce a lot of output, but we only need the snippet from node group creation). It's best to execute it with terraform apply -parallelism=1; it will take much longer since there will be just one thread, but the output will be very clear as the logs will be ordered.

More info about debug:
https://www.terraform.io/docs/internals/debugging.html

@p53

p53 commented Jun 9, 2021

@daroga0002 OK, I resolved it by adding instance_types = [] to node_groups_defaults.

@daroga0002
Contributor

So was this some misconfiguration on your side or a bug in the module?

Can you paste your module invocation code here, showing the change?

@p53

p53 commented Jun 9, 2021

Not sure if I would call it a misconfiguration; it isn't stated explicitly anywhere.

resource "aws_launch_template" "default" {
  name_prefix   = "eks-example-"
  instance_type = "t3.small"
}

  node_groups_defaults = {
    ami_release_version     = "1.20.4-20210519"
    instance_types          = [] # <--- added this
    desired_capacity        = 1
    max_capacity            = 10
    min_capacity            = 1
    launch_template_id      = aws_launch_template.default.id
    launch_template_version = aws_launch_template.default.default_version
  }

@william00179

I can confirm this also fixes the problem for me, using the launch_templates_with_managed_node_groups example.

@ahilmathew

ahilmathew commented Jun 23, 2021

Using instance_types = [] inside node_groups_defaults solved this issue for me. :)

@martijnvdp
Contributor

Yes, same here: instance_types = [] solved it.

@stale

stale bot commented Oct 1, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Oct 1, 2021
@stale

stale bot commented Oct 9, 2021

This issue has been automatically closed because it has not had recent activity since being marked as stale.

@stale stale bot closed this as completed Oct 9, 2021
@Art3mK

Art3mK commented Oct 28, 2021

instance_types = [] for a managed node group with a launch template defined solved the issue for me as well. I think this issue is still valid.

@jai

jai commented Jan 30, 2022

I'm still experiencing this issue in 17.1.0

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 15, 2022