The key pair 'deployer-key' does not exist #1851
I have written a script for AWS to create a VPC, 2 instances, etc. with a cloud-config file, which worked perfectly at first. But when we were destroying it, it gave this error:

"Error refreshing state: 1 error(s) occurred:
* 1 error(s) occurred:
* Error retrieving KeyPair: The key pair 'deployer-key' does not exist"

Again, on terraform plan, it gave the same error, while everything is the same as what we applied before.

Comments
Hey @shubhambhartiya, thanks for reporting this. Do you have a bare-minimum config file that can demonstrate this? If you do, that will help me; of course, please remove any secrets.
Here is the basic config. I have not included the cloud-init file and the keys.

```hcl
provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}

resource "aws_vpc" "default" {
  cidr_block           = "${var.vpc_cidr}"
  enable_dns_hostnames = true

  tags {
    Name = "AJ"
  }
}

resource "aws_instance" "app" {
  count             = 2
  ami               = "${lookup(var.amis, var.region)}"
  instance_type     = "t2.micro"
  subnet_id         = "${aws_subnet.private.id}"
  security_groups   = ["${aws_security_group.default.id}"]
  key_name          = "${aws_key_pair.deployer.key_name}"
  source_dest_check = false
  user_data         = "${file(\"cloud-config/app.yml\")}"

  tags = {
    Name = "AJ-app-${count.index}"
  }
}

resource "aws_elb" "app" {
  name            = "AJ-elb"
  subnets         = ["${aws_subnet.public.id}"]
  security_groups = ["${aws_security_group.default.id}", "${aws_security_group.web.id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  instances = ["${aws_instance.app.*.id}"]
}

resource "aws_key_pair" "deployer" {
  key_name   = "deployer-key"
  public_key = "${file(\"ssh/insecure-deployer.pub\")}"
}

resource "aws_instance" "nat" {
  ami               = "${lookup(var.amis, var.region)}"
  instance_type     = "t2.micro"
  subnet_id         = "${aws_subnet.public.id}"
  security_groups   = ["${aws_security_group.default.id}", "${aws_security_group.nat.id}"]
  key_name          = "${aws_key_pair.deployer.key_name}"
  source_dest_check = false

  tags = {
    Name = "nat"
  }

  connection {
    user     = "ubuntu"
    key_file = "ssh/insecure-deployer"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo iptables -t nat -A POSTROUTING -j MASQUERADE",
      "echo 1 > /proc/sys/net/ipv4/conf/all/forwarding",
      "curl -sSL https://get.docker.com/ubuntu/ | sudo sh",
      "sudo mkdir -p /etc/openvpn",
      "sudo docker run --name ovpn-data -v /etc/openvpn busybox",
      "sudo docker run --volumes-from ovpn-data --rm gosuri/openvpn ovpn_genconfig -p ${var.vpc_cidr} -u udp://${aws_instance.nat.public_ip}"
    ]
  }
}

resource "aws_subnet" "private" {
  vpc_id                  = "${aws_vpc.default.id}"
  cidr_block              = "${var.private_subnet_cidr}"
  availability_zone       = "us-west-2b"
  map_public_ip_on_launch = false
  depends_on              = ["aws_instance.nat"]

  tags {
    Name = "private"
  }
}

resource "aws_route_table" "private" {
  vpc_id = "${aws_vpc.default.id}"

  route {
    cidr_block  = "0.0.0.0/0"
    instance_id = "${aws_instance.nat.id}"
  }
}

resource "aws_route_table_association" "private" {
  subnet_id      = "${aws_subnet.private.id}"
  route_table_id = "${aws_route_table.private.id}"
}

resource "aws_internet_gateway" "default" {
  vpc_id = "${aws_vpc.default.id}"
}

resource "aws_subnet" "public" {
  vpc_id                  = "${aws_vpc.default.id}"
  cidr_block              = "${var.public_subnet_cidr}"
  availability_zone       = "us-west-2b"
  map_public_ip_on_launch = true
  depends_on              = ["aws_internet_gateway.default"]

  tags {
    Name = "public"
  }
}

resource "aws_route_table" "public" {
  vpc_id = "${aws_vpc.default.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.default.id}"
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = "${aws_subnet.public.id}"
  route_table_id = "${aws_route_table.public.id}"
}

resource "aws_security_group" "default" {
  name        = "default-AJ"
  description = "Default security group that allows inbound and outbound traffic from all instances in the VPC"
  vpc_id      = "${aws_vpc.default.id}"

  ingress {
    from_port = "0"
    to_port   = "0"
    protocol  = "-1"
    self      = true
  }

  tags {
    Name = "AJ-default-vpc"
  }
}

resource "aws_security_group" "nat" {
  name        = "nat-AJ"
  description = "Security group for nat instances that allows SSH and VPN traffic from internet"
  vpc_id      = "${aws_vpc.default.id}"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 1194
    to_port     = 1194
    protocol    = "udp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags {
    Name = "nat-AJ"
  }
}

resource "aws_security_group" "web" {
  name        = "web-AJ"
  description = "Security group for web that allows web traffic from internet"
  vpc_id      = "${aws_vpc.default.id}"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags {
    Name = "web-AJ"
  }
}

variable "access_key" {}
variable "secret_key" {}

variable "region" {
  description = "AWS region to host your network"
  default     = "us-west-2"
}

variable "vpc_cidr" {
  description = "CIDR for VPC"
  default     = "10.128.0.0/16"
}

variable "public_subnet_cidr" {
  description = "CIDR for public subnet"
  default     = "10.128.0.0/24"
}

variable "private_subnet_cidr" {
  description = "CIDR for private subnet"
  default     = "10.128.1.0/24"
}

/* Ubuntu 14.04 AMIs by region */
variable "amis" {
  description = "Base AMI to launch the instances with"

  default = {
    us-west-2 = "ami-3389b803"
  }
}
```
Is there any chance that the KeyPair was destroyed via the console or some other means? At that point, we look for the key and error out because we can't find it.
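That refresh lookup is essentially the same query you can run by hand with the AWS CLI: a key deleted out-of-band fails with `InvalidKeyPair.NotFound`, which is what surfaces as the error in this issue. A quick way to check (key name and region here match the config in this thread; adjust to your setup):

```sh
# Ask EC2 for the key pair Terraform has recorded in its state.
# If the key was deleted outside Terraform, this returns an
# InvalidKeyPair.NotFound error, the same condition that makes
# `terraform plan` / `terraform destroy` fail during refresh.
aws ec2 describe-key-pairs --key-names deployer-key --region us-west-2
```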
Just hit something that looks eerily similar to this:
My version has no interpolation performed though, which may mean it's pointing to another issue.
The key pair wasn't destroyed via the console. I can see it in the configuration file.
When refreshing a keypair, update state appropriately rather than crash if the keypair no longer exists on AWS. Likely fixes hashicorp#1851.
I think I've seen this twice now, and the only way to resolve the error was to remove the key pair via the AWS console. I've tried to recreate the error and have been unable to. If it happens again, I'll save the state files and try a before/after comparison. Tainting the key pair resource and destroying/applying all resources was unable to clear the error.
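For reference, the taint attempt described above would have been along these lines, using the resource address from the config posted earlier (shown only to document the failed workaround; presumably it fails because refresh errors out before anything can be recreated):

```sh
# Mark the key pair resource as tainted so the next apply
# destroys and recreates it. In this case it did not help.
terraform taint aws_key_pair.deployer
terraform apply
```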
I've figured out how to recreate the error (although I still don't understand how I first encountered it): apply the resources, delete the tfstate, then attempt to apply again. This will create duplicates of all of the other AWS resource types I've tested with, but SSH keys are special in that the name is the primary attribute.
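Spelled out as commands, that reproduction looks roughly like the following (a sketch, assuming local state files in the working directory):

```sh
terraform apply          # creates aws_key_pair.deployer and the other resources
rm terraform.tfstate*    # simulate losing the local state
terraform apply          # most resources are simply created a second time, but
                         # the key pair trips over the fact that the name
                         # "deployer-key" already exists on AWS
```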
For me, it was generating the resource and referring to it by resource name; it then got referenced before it was created, combined with some tfstate being saved to git and/or bailing out of a destroy. Removing the tfstate files altogether didn't fix it though, so I'm thinking it's referenced-before-assignment. I went back and added it manually, and switched the order of the variable and resource declarations in vars.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.