
The key pair 'deployer-key' does not exist #1851

Closed
shubhambhartiya opened this issue May 7, 2015 · 9 comments
Labels
bug, provider/aws, waiting-response

Comments

@shubhambhartiya

I have written a Terraform configuration for AWS that creates a VPC, 2 instances, etc. using a cloud-config file, and it worked perfectly the first time. But when we tried to destroy it, it gave this error:

Error refreshing state: 1 error(s) occurred:
* 1 error(s) occurred:
* Error retrieving KeyPair: The key pair 'deployer-key' does not exist

Running terraform plan again gives the same error, even though the configuration is the same as what we applied before.


@catsby
Contributor

catsby commented May 7, 2015

Hey @shubhambhartiya, thanks for reporting this. Do you have a bare-minimum config file that can demonstrate this? If you do, that will help me; of course, please remove any secrets.

@shubhambhartiya
Author

Here is the basic config. I have not included the cloud-init file or the keys.

provider "aws" {
  access_key  = "${var.access_key}"
  secret_key  = "${var.secret_key}"
  region      = "${var.region}"
}
resource "aws_vpc" "default" {
  cidr_block = "${var.vpc_cidr}"
  enable_dns_hostnames = true
  tags {
    Name = "AJ"
  }
}
resource "aws_instance" "app" {
  count = 2
  ami = "${lookup(var.amis, var.region)}"
  instance_type = "t2.micro"
  subnet_id = "${aws_subnet.private.id}"
  security_groups = ["${aws_security_group.default.id}"]
  key_name = "${aws_key_pair.deployer.key_name}"
  source_dest_check = false
  user_data = "${file(\"cloud-config/app.yml\")}"
  tags = {
    Name = "AJ-app-${count.index}"
  }
}
resource "aws_elb" "app" {
  name = "AJ-elb"
  subnets = ["${aws_subnet.public.id}"]
  security_groups = ["${aws_security_group.default.id}", "${aws_security_group.web.id}"]
  listener {
    instance_port = 80
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }
  instances = ["${aws_instance.app.*.id}"]
}
resource "aws_key_pair" "deployer" {
  key_name = "deployer-key"
  public_key = "${file(\"ssh/insecure-deployer.pub\")}"
}
resource "aws_instance" "nat" {
  ami = "${lookup(var.amis, var.region)}"
  instance_type = "t2.micro"
  subnet_id = "${aws_subnet.public.id}"
  security_groups = ["${aws_security_group.default.id}", "${aws_security_group.nat.id}"]
  key_name = "${aws_key_pair.deployer.key_name}"
  source_dest_check = false
  tags = { 
    Name = "nat"
  }
  connection {
    user = "ubuntu"
    key_file = "ssh/insecure-deployer"
  }
  provisioner "remote-exec" {
    inline = [
      "sudo iptables -t nat -A POSTROUTING -j MASQUERADE",
      "echo 1 > /proc/sys/net/ipv4/conf/all/forwarding",
      "curl -sSL https://get.docker.com/ubuntu/ | sudo sh",
      "sudo mkdir -p /etc/openvpn",
      "sudo docker run --name ovpn-data -v /etc/openvpn busybox",
      "sudo docker run --volumes-from ovpn-data --rm gosuri/openvpn ovpn_genconfig -p ${var.vpc_cidr} -u udp://${aws_instance.nat.public_ip}"
    ]
  }
}
resource "aws_subnet" "private" {
  vpc_id            = "${aws_vpc.default.id}"
  cidr_block        = "${var.private_subnet_cidr}"
  availability_zone = "us-west-2b"
  map_public_ip_on_launch = false
  depends_on = ["aws_instance.nat"]
  tags { 
    Name = "private" 
  }
}
resource "aws_route_table" "private" {
  vpc_id = "${aws_vpc.default.id}"
  route {
    cidr_block = "0.0.0.0/0"
    instance_id = "${aws_instance.nat.id}"
  }
}
resource "aws_route_table_association" "private" {
  subnet_id = "${aws_subnet.private.id}"
  route_table_id = "${aws_route_table.private.id}"
}
resource "aws_internet_gateway" "default" {
  vpc_id = "${aws_vpc.default.id}"
}
resource "aws_subnet" "public" {
  vpc_id            = "${aws_vpc.default.id}"
  cidr_block        = "${var.public_subnet_cidr}"
  availability_zone = "us-west-2b"
  map_public_ip_on_launch = true
  depends_on = ["aws_internet_gateway.default"]
  tags { 
    Name = "public" 
  }
}
resource "aws_route_table" "public" {
  vpc_id = "${aws_vpc.default.id}"
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.default.id}"
  }
}
resource "aws_route_table_association" "public" {
  subnet_id = "${aws_subnet.public.id}"
  route_table_id = "${aws_route_table.public.id}"
}
resource "aws_security_group" "default" {
  name = "default-AJ"
  description = "Default security group that allows inbound and outbound traffic from all instances in the VPC"
  vpc_id = "${aws_vpc.default.id}"
  ingress {
    from_port   = "0"
    to_port     = "0"
    protocol    = "-1"
    self        = true
  }
  tags {
    Name = "AJ-default-vpc"
  }
}
resource "aws_security_group" "nat" {
  name = "nat-AJ"
  description = "Security group for nat instances that allows SSH and VPN traffic from internet"
  vpc_id = "${aws_vpc.default.id}"
  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port = 1194
    to_port   = 1194
    protocol  = "udp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags {
    Name = "nat-AJ"
  }
}
resource "aws_security_group" "web" {
  name = "web-AJ"
  description = "Security group for web that allows web traffic from internet"
  vpc_id = "${aws_vpc.default.id}"
  ingress {
    from_port = 80
    to_port   = 80
    protocol  = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port = 443
    to_port   = 443
    protocol  = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags {
    Name = "web-AJ"
  }
}
variable "access_key" { }

variable "secret_key" { }

variable "region"     { 
  description = "AWS region to host your network"
  default     = "us-west-2" 
}

variable "vpc_cidr" {
  description = "CIDR for VPC"
  default     = "10.128.0.0/16"
}

variable "public_subnet_cidr" {
  description = "CIDR for public subnet"
  default     = "10.128.0.0/24"
}

variable "private_subnet_cidr" {
  description = "CIDR for private subnet"
  default     = "10.128.1.0/24"
}

/* Ubuntu 14.04 amis by region */
variable "amis" {
  description = "Base AMI to launch the instances with"
  default = {
    us-west-2 = "ami-3389b803" 
  }
}

@catsby
Contributor

catsby commented May 12, 2015

Is there any chance that the KeyPair was destroyed via the console or other means?
I can only reproduce this issue if I create a KeyPair with Terraform, and then destroy it externally.

At that point, we look for the key and error because we can't find it.
We could alternatively simply mark the resource as destroyed and report "nothing to do".
Do you think that's better behavior?
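
A minimal Go sketch of the behavior proposed above: during refresh, if EC2 reports the key pair as missing, clear the resource ID so Terraform records it as destroyed instead of returning an error. The function name, import paths, and surrounding wiring here are illustrative assumptions, not the actual Terraform source:

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/hashicorp/terraform/helper/schema"
)

// refreshKeyPair looks the key pair up by name (its ID). If AWS answers
// InvalidKeyPair.NotFound, the key was deleted outside Terraform, so drop
// it from state rather than failing the whole refresh. (Sketch only; names
// are illustrative.)
func refreshKeyPair(d *schema.ResourceData, conn *ec2.EC2) error {
	resp, err := conn.DescribeKeyPairs(&ec2.DescribeKeyPairsInput{
		KeyNames: []*string{aws.String(d.Id())},
	})
	if err != nil {
		if awsErr, ok := err.(awserr.Error); ok && awsErr.Code() == "InvalidKeyPair.NotFound" {
			// Deleted externally: mark the resource as gone; the next plan
			// will simply propose creating it again.
			d.SetId("")
			return nil
		}
		return fmt.Errorf("Error retrieving KeyPair: %s", err)
	}

	if len(resp.KeyPairs) > 0 {
		d.Set("key_name", *resp.KeyPairs[0].KeyName)
		d.Set("fingerprint", *resp.KeyPairs[0].KeyFingerprint)
	}
	return nil
}

With that behavior, a terraform plan after an external deletion would show the key pair as needing to be recreated instead of aborting with "does not exist".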

@catsby catsby added the waiting-response label May 12, 2015
@phinze
Contributor

phinze commented May 14, 2015

Just hit something that looks eerily similar to this:

Error applying plan:

2 error(s) occurred:

* 1 error(s) occurred:

* 1 error(s) occurred:

* Error launching source instance: InvalidKeyPair.NotFound: The key pair '${module.base.key_name}' does not exist
* 1 error(s) occurred:

* 1 error(s) occurred:

* Error creating launch configuration: ValidationError: The key pair '${module.base.key_name}' does not exist

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

In my case, though, the interpolation was never performed (the literal ${module.base.key_name} appears in the error), which may mean this is pointing to a different issue.

@shubhambhartiya
Author

The key pair wasn't destroyed via the console. I can still see it in the configuration file.
Also, after deleting terraform.tfstate and terraform.tfvars and deleting the VPC from the console, I was not able to reproduce the above error.

bitglue pushed a commit to bitglue/terraform that referenced this issue May 18, 2015
When refreshing a keypair, update state appropriately rather than crash
if the keypair no longer exists on AWS.

Likely fixes hashicorp#1851.
@jhoblitt

I think I've seen this twice now, and the only way to resolve the error was to remove the keypair via the AWS console. I've tried to recreate the error and have been unable to. If it happens again, I'll save the state files and try a before/after comparison. Tainting the key pair resource and destroying/applying all resources did not clear the error.

@jhoblitt

I've figured out how to recreate the error (although I still don't understand how I first encountered it): apply the resources, delete the tfstate, then attempt to apply again. This will create duplicates of all of the other AWS resource types I've tested with, but SSH keys are special in that the "name" is the primary attribute.

@catsby catsby closed this as completed in 9e2ecaf May 21, 2015
@aking1012

For me, it was generating the resource and referring to it by resource name - the key was referenced before it got created, plus some tfstate had been saved to git and/or a destroy was bailed out of partway.

Removing the tfstate files altogether didn't fix it though, so I'm thinking it's a reference-before-assignment problem. I went back, added the key manually, and switched the order of the variable and resource declarations in vars.

@ghost

ghost commented Apr 2, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 2, 2020