
aws_instance.volume_tags gets confused when > 1 volume attached #729

Closed
hashibot opened this issue Jun 13, 2017 · 45 comments · Fixed by #15474
Labels
bug Addresses a defect in current functionality. service/ec2 Issues and PRs that pertain to the ec2 service.

Comments

@hashibot

This issue was originally opened by @FransUrbo as hashicorp/terraform#14107. It was migrated here as part of the provider split. The original body of the issue is below.


It seems the new volume_tags (thanx btw!) gets confused when there's more than one volume attached to an instance.

Terraform Version

0.9.4

Affected Resource(s)

Please list the resources as a list, for example:

  • aws_instance

Terraform Configuration Files

resource "aws_instance" "instance" {
  root_block_device {
    volume_type               = "standard"
    volume_size               = 8
  }

  volume_tags {
    Name                      = "my-instance-root"
  }
}

resource "aws_ebs_volume" "instance" {
  tags {
    Name                      = "my-instance-db"
  }
}

resource "aws_volume_attachment" "instance" {
  instance_id                 = "${aws_instance.instance.id}"
  volume_id                   = "${aws_ebs_volume.instance.id}"
  device_name                 = "/dev/sdb"
}

Expected Behavior

No changes; repeated applies should produce an empty plan.

Actual Behavior

On alternating applies it modifies the aws_ebs_volume tags, then flips the aws_instance volume_tags back on the next run.

Steps to Reproduce

  1. terraform apply
@kpumuk

kpumuk commented Aug 18, 2017

👍 have the same issue

@rickhlx

rickhlx commented Oct 12, 2017

+1

@acutchin-bitpusher

still seeing this in 0.10.8

@e-carlin

e-carlin commented Dec 1, 2017

+1

@MrMojoRisin49

+1

@ddsdevon

ddsdevon commented Dec 5, 2017

Still an issue as of terraform 0.11.1
I added a second volume to an existing instance with its own tags, and now aws_instance and aws_ebs_volume take turns performing an update in place to toggle the tags back and forth.

@geekbass

geekbass commented Dec 6, 2017

Upgraded to version 0.11.1 and I am seeing this issue as well.

@ghost

ghost commented Dec 22, 2017

+1

@SYC1205

SYC1205 commented Jan 2, 2018

+1

version 0.11.1 still has this issue

@chan-alex

+1
Seeing this in version 0.11.1

@smastrorocco

+1
Seeing this in version 0.11.2

@radeksimko radeksimko added the service/ec2 Issues and PRs that pertain to the ec2 service. label Jan 27, 2018
@duhaas2015

I can confirm I'm seeing the same behavior in 0.11.3, adding the ignore for volumes tags to the lifecycle fixes the problem for now
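For reference, that lifecycle ignore looks something like this (a minimal sketch with placeholder names, using 0.11-era syntax to match the version above):

```hcl
resource "aws_instance" "instance" {
  # ... ami, instance_type, etc.

  volume_tags {
    Name = "my-instance-root"
  }

  lifecycle {
    # Prevent Terraform from flip-flopping volume_tags against
    # tags on separately managed aws_ebs_volume resources.
    ignore_changes = ["volume_tags"]
  }
}
```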

@gavD

gavD commented May 29, 2018

I am still getting this problem

$ terraform --version
Terraform v0.11.7
+ provider.aws v1.7.0

My code (simplified) looks like:

resource "aws_instance" "allinone" {
  ami                     = "${var.ami}"
  subnet_id               = "${var.subnet_id}"

  root_block_device {
    volume_type = "gp2"
  }

  volume_tags {
    MakeSnapshot   = "False"
  }
}

resource "aws_ebs_volume" "myebs" {

  type              = "gp2"
  size              = "${var.db_data_volume_size}"
  encrypted         = true

  tags {
    MakeSnapshot   = "True"
  }

}

I find that the tag MakeSnapshot on myebs vacillates between False and True with every terraform apply. I've tried making the EBS volume depends_on the EC2 instance and vice versa, and it doesn't help.

Hope that is a clear explanation - it would be great to get a fix on this! :)

@rulloa-accenture

rulloa-accenture commented Jun 4, 2018

$ terraform --version
Terraform v0.11.7
+ provider.aws v1.18.0

This issue comes up when you use volume_tags inside aws_instance together with the tags option on an aws_ebs_volume:
volume_tags { Name = "ebs-root-instance" }

If you use only tags on an aws_ebs_volume, it works like a charm.

@arunsandu1

Is there any solution for using volume_tags inside aws_instance without it overriding the EBS volume tags? I have no issues with EBS volume tagging itself; I just want to tag the instance's root volume using volume_tags.

terraform version:
Terraform v0.11.7

@ddsdevon

@roberulloa My current use case is that I am using "volume_tags" to ensure the root volume of the instance is properly tagged, and then I am using aws_ebs_volume on a separately created volume to ensure that specific volume is properly tagged.

I have not yet found a way to modify the root volume created by the aws_instance resource, and the instance's own ebs_block_device option does not appear to allow extra/alternate tags.

@mda590

mda590 commented Jun 21, 2018

+1!

Seeing this exact behavior in version:
Terraform v0.11.7

  • provider.aws v1.23.0

@n3ph
Contributor

n3ph commented Jun 21, 2018

Confirming on:

  • Terraform v0.11.7
  • provider.aws v1.24.0
provider "aws" {
  max_retries = 3
  region      = "eu-central-1"
  profile     = "devops"
}

data "aws_ami" "amzn2_base" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["137112412989"]
}

data "aws_vpc" "default" {
  default = true
}

data "aws_subnet_ids" "default" {
  vpc_id = "${data.aws_vpc.default.id}"
}

data "aws_subnet" "test" {
  id = "${data.aws_subnet_ids.default.ids[0]}"
}

resource "aws_instance" "test" {
  ami           = "${data.aws_ami.amzn2_base.id}"
  subnet_id     = "${data.aws_subnet.test.id}"
  instance_type = "t2.nano"

  root_block_device {
    volume_type = "gp2"
  }

  volume_tags {
    Name = "test1"
  }
}

resource "aws_ebs_volume" "test" {
  type              = "gp2"
  size              = "1"
  encrypted         = true
  availability_zone = "${data.aws_subnet.test.availability_zone}"

  tags {
    Name = "test2"
  }
}

resource "aws_volume_attachment" "test" {
  instance_id = "${aws_instance.test.id}"
  volume_id   = "${aws_ebs_volume.test.id}"
  device_name = "/dev/sdb"
}
21:22 n3ph@mag-xps ~/tmp/community_gardening_terraform ✔  terraform apply
data.aws_vpc.default: Refreshing state...
data.aws_ami.amzn2_base: Refreshing state...
data.aws_subnet_ids.default: Refreshing state...
data.aws_subnet.test: Refreshing state...
aws_instance.test: Refreshing state... (ID: i-064ce7c94874db85c)
aws_ebs_volume.test: Refreshing state... (ID: vol-037621e6930139f90)
aws_volume_attachment.test: Refreshing state... (ID: vai-2991852907)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ aws_ebs_volume.test
      tags.Name: "test1" => "test2"


Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_ebs_volume.test: Modifying... (ID: vol-037621e6930139f90)
  tags.Name: "test1" => "test2"
aws_ebs_volume.test: Modifications complete after 1s (ID: vol-037621e6930139f90)

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
21:23 n3ph@mag-xps ~/tmp/community_gardening_terraform ✔  terraform apply
data.aws_vpc.default: Refreshing state...
data.aws_ami.amzn2_base: Refreshing state...
data.aws_subnet_ids.default: Refreshing state...
data.aws_subnet.test: Refreshing state...
aws_ebs_volume.test: Refreshing state... (ID: vol-037621e6930139f90)
aws_instance.test: Refreshing state... (ID: i-064ce7c94874db85c)
aws_volume_attachment.test: Refreshing state... (ID: vai-2991852907)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  ~ aws_instance.test
      volume_tags.Name: "test2" => "test1"


Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_instance.test: Modifying... (ID: i-064ce7c94874db85c)
  volume_tags.Name: "test2" => "test1"
aws_instance.test: Modifications complete after 3s (ID: i-064ce7c94874db85c)

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
21:23 n3ph@mag-xps ~/tmp/community_gardening_terraform ✔  

@tdmalone
Contributor

tdmalone commented Jul 1, 2018

Related tickets: #770 & #884 (possibly all duplicates of the same issue?)

@justin-sunayu

+1 Getting this while trying to tag my root volume and data volumes

@gnalawade

same issue observed
[gnalawade@jhost db]$ terraform -v
Terraform v0.11.7

  • provider.aws v1.26.0

aws_volume_attachment.sql101-data01-attach: Refreshing state... (ID: vai-1256441354)


An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place

Terraform will perform the following actions:

~ aws_instance.sql101
volume_tags.Name: "SQL101.PROD-sdf" => "SQL101.PRDO-sda"

Plan: 0 to add, 1 to change, 0 to destroy.

@spommerening

spommerening commented Jul 27, 2018

Same issue here with

Terraform v0.11.7
+ provider.aws v1.29.0

Anyway I found a temporary work-around:

  • Create instance and use volume_tags for setting tag of root volume
  • Include volume_tags in ignore_changes of instance
  • Create and attach additional volumes which can be tagged now

I am very well aware that this work-around will not work for everyone
and that we still need a fix but maybe this helps someone.
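The steps above might look roughly like this (a sketch with placeholder names and values, in 0.11-era syntax to match the versions in this thread):

```hcl
# Step 1: tag the root volume at creation time via volume_tags.
resource "aws_instance" "web" {
  ami           = "${var.ami}"
  instance_type = "t2.micro"

  volume_tags {
    Name = "web-root"
  }

  # Step 2: then ignore volume_tags so Terraform stops reconciling
  # them against the separately tagged data volume below.
  lifecycle {
    ignore_changes = ["volume_tags"]
  }
}

# Step 3: additional volumes can now carry their own tags.
resource "aws_ebs_volume" "data" {
  availability_zone = "${aws_instance.web.availability_zone}"
  size              = 10

  tags {
    Name = "web-data"
  }
}

resource "aws_volume_attachment" "data" {
  device_name = "/dev/sdb"
  instance_id = "${aws_instance.web.id}"
  volume_id   = "${aws_ebs_volume.data.id}"
}
```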

@soar

soar commented Aug 6, 2018

Same problem with

Terraform v0.11.7
+ provider.aws v1.30.0

@tdmalone
Contributor

tdmalone commented Aug 6, 2018

Hi @soar @gnalawade @justin-sunayu @n3ph @mda590

The issue will still exist until it’s fixed. There’s no need to add additional comments confirming it’s still present - if you’re affected, add a thumbs up reaction to the initial post in this thread instead.

Comments that don't include additional information such as troubleshooting, workarounds, etc. add noise for subscribers and risk the thread being locked by the maintainers. They also make it harder to find discussion of the actual issue.

Thanks!

@vfoucault
Contributor

+1

@morokin

morokin commented May 1, 2019

In the meanwhile, year 2019,
terraform --version
Terraform v0.11.13

  • provider.aws v2.2.0

Issue still there

@tdmalone
Contributor

tdmalone commented May 1, 2019

@morokin Add a plus one on the initial post in this thread instead - Hashicorp report on those to determine which features to work on. As I mentioned in my above comment that you -1'ed, your post just adds needless noise for others in the same position who are subscribed to this thread, and doesn't actually help the issue get fixed or prioritised.

@bkmeneguello

My suggestion is to add a "tags" attribute to both "root_block_device" and "ebs_block_device" while keeping "volume_tags". That way, "volume_tags" could act as a fallback when no tags are defined on the individual blocks (and compatibility is retained). If someone doesn't want all of an instance's block devices tagged with "volume_tags", they can simply omit it and tag the blocks individually.
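Under that suggestion, a config might look something like this (hypothetical syntax at the time of writing; tags on these blocks did not yet exist in the provider):

```hcl
resource "aws_instance" "example" {
  # ... ami, instance_type, etc.

  ebs_block_device {
    device_name = "/dev/sdb"
    volume_size = 10
    tags = {                   # hypothetical: per-device tags would win...
      Name = "example-data"
    }
  }

  volume_tags {
    Name = "example-default"   # ...while volume_tags remains the fallback
  }
}
```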

@philiphope

2 years. How long do we have to wait for this bug to be fixed?

@rifelpet
Contributor

rifelpet commented Jun 6, 2019

Just for reference for anyone wishing to fix this: the problem stems from the fact that when updating volume_tags in place, Terraform sets tags on all volume IDs attached to the instance, rather than only the volumes created by the original ec2.RunInstances call that created the instance. Terraform can already distinguish the root_block_device from other block devices, but it cannot distinguish an ebs_block_device defined in an aws_instance from a separate aws_ebs_volume resource. The ec2.RunInstances response does contain the volume IDs of the other block devices, which could be used to differentiate an ebs_block_device from an aws_ebs_volume, but any fix that tracks this information would only help instances created with a provider version containing the fix, not existing or imported instances.

You can confirm this ambiguity with a terraform show. The definition of this aws_instance has a root_block_device block but no ebs_block_device block, rather it has separate aws_ebs_volume and aws_volume_attachment resources. The separate volume resource's volume ID is now stored as an attribute of the aws_instance:

aws_instance.foo:
  id = i-0e2ee1ae34f051b98
  ebs_block_device.# = 1
  ebs_block_device.4138763540.delete_on_termination = false
  ebs_block_device.4138763540.device_name = /dev/xvdq
  ebs_block_device.4138763540.volume_id = vol-07b923a8c02cc95f0
  ebs_block_device.4138763540.volume_size = 750
  ebs_block_device.4138763540.volume_type = gp2
  ebs_optimized = true
  ...
  root_block_device.# = 1
  root_block_device.0.delete_on_termination = true
  root_block_device.0.iops = 0
  root_block_device.0.volume_id = vol-0aed163a5aa8bf9d8
  root_block_device.0.volume_size = 20
  root_block_device.0.volume_type = standard
aws_ebs_volume.foo:
  id = vol-07b923a8c02cc95f0
  ...
aws_volume_attachment.foo:
  id = vai-1985794965
  device_name = /dev/xvdq
  instance_id = i-0e2ee1ae34f051b98
  skip_destroy = true
  volume_id = vol-07b923a8c02cc95f0

It would be pretty straightforward to use the readBlockDevicesFromInstance function to get the root_block_device's volume ID for setVolumeTags, but that wouldn't solve the ambiguity with ebs_block_device.

Personally I'm a fan of this partial fix.

Thoughts?

@slessardjr

I just overwrote the names of all my Kubernetes persistent EBS volumes due to this bug. Is there any plan to fix this? I think the suggestion by @rifelpet is a perfect middle ground, unless the provider separates root_volume_tags and ebs_volume_tags.

@muhmud

muhmud commented Apr 27, 2020

2 years. How long do we have to wait for this bug to be fixed?

+ another year.

I just started using Terraform, and it's fantastic, but can we not get some traction on this?

The suggestion by @bkmeneguello above seems like it could be added without breaking anything. In fact, it's what I intuitively tried first, before discovering it didn't work.

@bkmeneguello

I think the proper way is to submit a PR, it should not be that hard.

@binlab

binlab commented May 11, 2020

$ terraform version
Terraform v0.12.24
+ provider.aws v2.53.0

the problem still remains

@alanbantuit

$ terraform --version
Terraform v0.12.26

  • provider.aws v2.66.0

still wants to change tags on the non-root volume:

  ~ volume_tags                  = {
      ~ "Backup" = "dev_de_data" -> "dev_de_root"
      ~ "Name"   = "DEV|DE|Data" -> "DEV|DE|Root"
    }

That pull request, apparently just a one-line change, has still not been merged.

@thomasbiddle
Contributor

Running into this now, still.

Terraform v0.13.3
+ provider registry.terraform.io/-/aws v3.8.0
+ provider registry.terraform.io/hashicorp/aws v3.8.0

Having an option to just ignore all EBS volumes would be the best solution IMO.

@jtsoi

jtsoi commented Dec 11, 2020

I think #15474 is a good way to fix this.
Until then, here is a work around:

resource "aws_instance" "this" {
 ...
  # Volume
  root_block_device {
    volume_type           = "gp3"
    volume_size           = var.volume_size
    encrypted             = true
    delete_on_termination = true
  }
  # Instance tags
  tags = { ... }

  # Don't define 'volume_tags' here.
  ...
}

data "aws_ebs_volume" "boot_volume" {
  filter {
    name   = "attachment.device"
    values = ["/dev/sda1"]
  }
  filter {
    name   = "attachment.instance-id"
    values = [aws_instance.this.id]
  }
}

resource "aws_ec2_tag" "boot_volume_tags" {
  for_each    = var.boot_volume_tags
  resource_id = data.aws_ebs_volume.boot_volume.volume_id
  key         = each.key
  value       = each.value
}

Then attach additional volumes with 'aws_volume_attachment' and tags on those volumes will not change.

@bevanbennett

Just as a note, this is now an issue as of 0.14 (maybe 0.13, we came direct from 0.12) for people who have never set volume_tags.
All my plans are now alternating between the aws_instance volume_tags deleting tags off my extra volumes and the tags specified in aws_ebs_volume putting them back. I'm going to try that workaround, but it's a LOT of new annoyance to implement over our hundreds of modules.

@bevanbennett

Confirmed the workaround no longer works in 0.14.4.
Now plans alternate between aws_ec2_tag adding the tags and BOTH aws_ebs_volume (with no tags block) and aws_instance (with no volume_tags block) removing them.

@bevanbennett

I can get the old behavior by specifying tags in the aws_ebs_volume resource and putting ignore_changes = [ volume_tags ] into the aws_instance, but it feels like a hack.

@alanbantuit

alanbantuit commented Jan 12, 2021 via email

@YakDriver
Member

We have merged a fix to the volume_tags issue in #15474. We have added tests to cover the issues observed. Please note that using volume_tags in aws_instance is not compatible with using tags in aws_ebs_volume. You need to use one or the other. Prior to this fix, even following this rule, you would encounter errors. Along with the fix, we've added tags to the root_block_device and ebs_block_device configuration blocks in aws_instance.

Now that the fix is in place, if you find any problems with volume_tags, let us know by opening a new issue.
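A sketch of the post-fix pattern described above, tagging each volume on its own resource or block instead of via volume_tags (names and values are placeholders):

```hcl
resource "aws_instance" "example" {
  ami           = "ami-12345678"   # placeholder
  instance_type = "t3.micro"

  root_block_device {
    volume_type = "gp3"
    tags = {
      Name = "example-root"
    }
  }
  # No volume_tags here, so it cannot conflict with aws_ebs_volume tags.
}

resource "aws_ebs_volume" "data" {
  availability_zone = aws_instance.example.availability_zone
  size              = 10

  tags = {
    Name = "example-data"
  }
}
```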

@brettryan

@morokin Add a plus one on the initial post in this thread instead - Hashicorp report on those to determine which features to work on. As I mentioned in my above comment that you -1'ed, your post just adds needless noise for others in the same position who are subscribed to this thread, and doesn't actually help the issue get fixed or prioritised.

This is true; however, if I'm not mistaken, issues tend to get closed if you don't comment.

@brettryan

I am not using the volume_tags directive but am still experiencing this issue with just tags on the aws_instance and also on the additional volumes. I suspect this is due to me using the root_block_device directive:

resource "aws_instance" "engine" {
  ami = data.aws_ami.amp_centos_base.id
  instance_type = var.instance_type

  count = var.instance_count

  iam_instance_profile = data.aws_iam_instance_profile.service.name
  subnet_id = data.aws_subnet.private.id

  vpc_security_group_ids = [
    # ...
  ]

  root_block_device {
    volume_type = "gp3"
  }

  user_data = <<-EOF
    ...
  EOF

  tags = merge(local.common_tags, map(
    "Name", "${local.name_prefix}_${count.index}",
    "InstanceIndex", count.index
  ))
}

resource "aws_ebs_volume" "data" {
  size   = var.data_size
  count  = var.instance_count
  type   = "gp3"
  availability_zone = element(aws_instance.engine.*.availability_zone, count.index)

  tags = merge(local.common_tags, map(
    "Name", "${local.name_prefix}_${count.index}_data",
    "InstanceIndex", count.index
  ))
}

resource "aws_volume_attachment" "data_att" {
  device_name = "/dev/xvdf"
  volume_id   = element(aws_ebs_volume.data.*.id, count.index)
  instance_id = element(aws_instance.engine.*.id, count.index)
  count       = var.instance_count
}
Terraform v0.14.4
+ provider registry.terraform.io/hashicorp/aws v3.23.0
+ provider registry.terraform.io/hashicorp/null v3.0.0

@ghost

ghost commented Feb 13, 2021

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Feb 13, 2021