[Provider: google-cloud] Terraform should expect state changes when deleting instance with multiple disks #78

Closed
hashibot opened this issue Jun 13, 2017 · 3 comments

@hashibot

This issue was originally opened by @zopanix as hashicorp/terraform#13377. It was migrated here as part of the provider split. The original body of the issue is below.


Provider

Google Cloud

Type

Enhancement

Terraform Version

  • 0.9.1
  • 0.9.2

Affected Resource(s)

  • google_compute_instance
  • google_compute_disk

It doesn't seem to affect Terraform core; it's just the way Google Cloud works that breaks this.

Terraform Configuration Files

variable "gfsmst-0" {
  description = ""
  default     = {
    disk_type      = "pd-standard"
    disk_size      = "200"
    machine_type   = "n1-standard-2"
    image          = "centos-7"
    startup_script = ""
    scopes         = "compute-ro"
  }
}
resource "google_compute_disk" "gfsmst-0-data" {
  name         = "gfsmst-0-data"
  type         = "${var.gfsmst-0["disk_type"]}"
  zone         = "${var.zone}"
  size         = "${var.gfsmst-0["disk_size"]}"
}

resource "google_compute_instance" "gfsmst-0" {
  name           = "gfsmst-0"
  description    = "This resource is managed by Terraform, do not edit manually!"
  machine_type   = "${var.gfsmst-0["machine_type"]}"
  zone           = "${var.zone}"
  tags           = ["gfs"]
  can_ip_forward = "true"

  network_interface {
    network = "${google_compute_network.network.self_link}"
    access_config {}
  }
  disk {
    device_name = "boot"
    image       = "${var.gfsmst-0["image"]}"
    type        = "pd-standard"
    auto_delete = true
  }
  disk {
    device_name = "data"
    disk        = "${google_compute_disk.gfsmst-0-data.name}"
    auto_delete = true
  }
  service_account {
    scopes = ["${split(",", var.gfsmst-0["scopes"])}"]
  }
  metadata_startup_script = "${var.gfsmst-0["startup_script"]}"
}

Debug Output

There is no need for debug output since this is not a bug; when I saw the plan, I already expected Terraform to act this way.

Expected Behavior

Context:

I'm creating an instance with a data disk that does not persist when the server is deleted (auto_delete = true). I know it's a weird case, but basically I want the data on a separate disk for snapshot purposes, and I don't need it to be persistent since it's replicated and will get re-synced on instance update (deletion and recreation).

What I expect

Since the disk's auto_delete flag is set, I expect the plan to show the removal of both the instance and the disk, because the disk is going to be removed automatically. Terraform should then remove the instance, wait for the deletion of all disks that have auto_delete set to true to complete, and then recreate the data disks (in my case) before recreating the server.

Actual Behavior

What actually happened?
Terraform doesn't take into account that the disks flagged as auto_delete will disappear, and it outputs an error when applying the config: according to its state the disks still exist, but in reality they were deleted along with the instance. Terraform should be aware of that, since it has enough information.

Here are the plan output and the resulting apply error.

Plan output:

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

google_compute_disk.gfsmst-0-data: Refreshing state... (ID: gfsmst-0-data)
google_compute_disk.gfsmst-1-data: Refreshing state... (ID: gfsmst-1-data)
google_compute_instance.gfsmst-0: Refreshing state... (ID: gfsmst-0)
google_compute_instance.gfsmst-1: Refreshing state... (ID: gfsmst-1)
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

-/+ google_compute_instance.gfsmst-0
    can_ip_forward:                                      "true" => "true"
    create_timeout:                                     "4" => "4"
    description:                                         "This resource is managed by Terraform, do not edit manually!" => "This resource is managed by Terraform, do not edit manually!"
    disk.#:                                              "2" => "2"
    disk.0.auto_delete:                                  "true" => "true"
    disk.0.device_name:                                  "boot" => "boot"
    disk.0.disk_encryption_key_sha256:                   "" => "<computed>"
    disk.0.image:                                        "[PROJECT]/[IMAGE]" => "[PROJECT]/[NEW_IMAGE]" (forces new resource)
    disk.0.type:                                         "pd-standard" => "pd-standard"
    disk.1.auto_delete:                                  "true" => "true"
    disk.1.device_name:                                  "data" => "data"
    disk.1.disk:                                         "gfsmst-0-data" => "gfsmst-0-data"
    disk.1.disk_encryption_key_sha256:                   "" => "<computed>"
    machine_type:                                        "n1-standard-2" => "n1-standard-2"
    metadata_fingerprint:                                "MCuD6CiOu70=" => "<computed>"
    metadata_startup_script:                             "" => ""
    name:                                                "gfsmst-0" => "gfsmst-0"
    network_interface.#:                                 "1" => "1"
    network_interface.0.access_config.#:                 "1" => "1"
    network_interface.0.access_config.0.assigned_nat_ip: "35.190.148.105" => "<computed>"
    network_interface.0.address:                         "10.142.0.3" => "<computed>"
    network_interface.0.name:                            "nic0" => "<computed>"
    network_interface.0.network:                         "https://www.googleapis.com/compute/v1/projects/[PROJECT]/global/networks/network" => "https://www.googleapis.com/compute/v1/projects/[PROJECT]/global/networks/network"
    self_link:                                           "https://www.googleapis.com/compute/v1/projects/[PROJECT]/zones/us-east1-d/instances/gfsmst-0" => "<computed>"
    service_account.#:                                   "1" => "1"
    service_account.0.email:                             "foo@bar.com" => "<computed>"
    service_account.0.scopes.#:                          "1" => "1"
    service_account.0.scopes.2862113455:                 "https://www.googleapis.com/auth/compute.readonly" => "https://www.googleapis.com/auth/compute.readonly"
    tags.#:                                              "1" => "1"
    tags.1215490064:                                     "gfs" => "gfs"
    tags_fingerprint:                                    "nSOyn9VSvFA=" => "<computed>"
    zone:                                                "us-east1-d" => "us-east1-d"

-/+ google_compute_instance.gfsmst-1
    can_ip_forward:                                      "true" => "true"
    create_timeout:                                      "4" => "4"
    description:                                         "This resource is managed by Terraform, do not edit manually!" => "This resource is managed by Terraform, do not edit manually!"
    disk.#:                                              "2" => "2"
    disk.0.auto_delete:                                  "true" => "true"
    disk.0.device_name:                                  "boot" => "boot"
    disk.0.disk_encryption_key_sha256:                   "" => "<computed>"
    disk.0.image:                                        "[PROJECT]/[IMAGE]" => "[PROJECT]/[NEW_IMAGE]" (forces new resource)
    disk.0.type:                                         "pd-standard" => "pd-standard"
    disk.1.auto_delete:                                  "true" => "true"
    disk.1.device_name:                                  "data" => "data"
    disk.1.disk:                                         "gfsmst-1-data" => "gfsmst-1-data"
    disk.1.disk_encryption_key_sha256:                   "" => "<computed>"
    machine_type:                                        "n1-standard-2" => "n1-standard-2"
    metadata_fingerprint:                                "MCuD6CiOu70=" => "<computed>"
    metadata_startup_script:                             "" => ""
    name:                                                "gfsmst-1" => "gfsmst-1"
    network_interface.#:                                 "1" => "1"
    network_interface.0.access_config.#:                 "1" => "1"
    network_interface.0.access_config.0.assigned_nat_ip: "35.190.147.173" => "<computed>"
    network_interface.0.address:                         "10.142.0.8" => "<computed>"
    network_interface.0.name:                            "nic0" => "<computed>"
    network_interface.0.network:                         "https://www.googleapis.com/compute/v1/projects/[PROJECT]/global/networks/network" => "https://www.googleapis.com/compute/v1/projects/[PROJECT]/global/networks/network"
    self_link:                                           "https://www.googleapis.com/compute/v1/projects/[PROJECT]/zones/us-east1-d/instances/gfsmst-1" => "<computed>"
    service_account.#:                                   "1" => "1"
    service_account.0.email:                             "foo@bar" => "<computed>"
    service_account.0.scopes.#:                          "1" => "1"
    service_account.0.scopes.2862113455:                 "https://www.googleapis.com/auth/compute.readonly" => "https://www.googleapis.com/auth/compute.readonly"
    tags.#:                                              "1" => "1"
    tags.1215490064:                                     "gfs" => "gfs"
    tags_fingerprint:                                    "nSOyn9VSvFA=" => "<computed>"
    zone:                                                "us-east1-d" => "us-east1-d"


Plan: 2 to add, 0 to change, 2 to destroy.

Error Message

Error applying plan:

2 error(s) occurred:

* google_compute_instance.gfsmst-0: 1 error(s) occurred:

* google_compute_instance.gfsmst-0: Error loading disk 'gfsmst-0-data': googleapi: Error 404: The resource 'projects/[PROJECT]/zones/us-east1-d/disks/gfsmst-0-data' was not found, notFound
* google_compute_instance.gfsmst-1: 1 error(s) occurred:

* google_compute_instance.gfsmst-1: Error loading disk 'gfsmst-1-data': googleapi: Error 404: The resource 'projects/[PROJECT]/zones/us-east1-d/disks/gfsmst-1-data' was not found, notFound

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Steps to Reproduce


  1. terraform apply
  2. change the image in the variable (see the snippet below)
  3. terraform apply
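
For illustration, the second step amounts to editing the image entry in the variable's default map. The new value below is a placeholder, in the same style as the redacted plan output; any value different from the current one forces the instance to be recreated:

variable "gfsmst-0" {
  description = ""
  default     = {
    disk_type      = "pd-standard"
    disk_size      = "200"
    machine_type   = "n1-standard-2"
    image          = "[NEW_IMAGE]" # placeholder; changed from "centos-7"
    startup_script = ""
    scopes         = "compute-ro"
  }
}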

References

I didn't find any open issues related to this.

Workarounds that work

  • Apply once more. This allows Terraform to detect that the data disks aren't there anymore and then schedules them for recreation.
  • Taint the disks for deletion prior to recreation (see the example below).
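
For example (the resource addresses are illustrative and assume the configuration above): running terraform taint google_compute_disk.gfsmst-0-data and terraform taint google_compute_disk.gfsmst-1-data before the next terraform apply marks the disks for recreation, so Terraform plans to destroy and recreate them along with the instances instead of assuming they still exist.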

Workaround that doesn't work

  • Do not reference a separate disk. This could work if the disk were created from an existing image or snapshot, but you have to specify either a disk or a source for this to work.

Comments

I'll look into the code to see whether it's a quick fix and comment my findings below.

@zopanix
Contributor

zopanix commented Sep 11, 2017

@danawillow: Hey, just to notify you that I created this issue on Terraform core:
hashicorp/terraform#16065
I hope I was clear enough.

@danawillow
Contributor

Closing as out-of-date since the attached_disk field doesn't allow setting auto_delete.
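
For reference, here is a minimal sketch of how this setup maps onto the newer boot_disk / attached_disk syntax; the resource names and values are carried over from the original configuration and are illustrative only:

resource "google_compute_disk" "gfsmst-0-data" {
  name = "gfsmst-0-data"
  type = "pd-standard"
  zone = "${var.zone}"
  size = "200"
}

resource "google_compute_instance" "gfsmst-0" {
  name         = "gfsmst-0"
  machine_type = "n1-standard-2"
  zone         = "${var.zone}"

  boot_disk {
    auto_delete = true
    initialize_params {
      image = "centos-7"
    }
  }

  # attached_disk has no auto_delete argument, so the data disk's lifecycle is
  # owned entirely by the google_compute_disk resource above and Terraform's
  # state no longer drifts the way described in this issue.
  attached_disk {
    source      = "${google_compute_disk.gfsmst-0-data.self_link}"
    device_name = "data"
  }

  network_interface {
    network = "${google_compute_network.network.self_link}"
    access_config {}
  }
}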

@ghost

ghost commented Mar 30, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 30, 2020