
Order to change/destroy with dependent resources #20196

Closed
maxgio92 opened this issue Feb 1, 2019 · 14 comments · Fixed by #23252
Labels: bug, core, v0.11 (issues reported against v0.11 releases), v0.12 (issues reported against v0.12 releases)

Comments

@maxgio92

maxgio92 commented Feb 1, 2019

Hi all,
I'm trying to write a module for the AWS ElastiCache service, and currently I have two resources:

  • an "aws_elasticache_replication_group"
    that depends on:
  • an "aws_elasticache_parameter_group".

The dependency is explicit rather than implicit, because I want the custom parameter group to be optional: when enabled, the module creates it and configures the replication group to use it; otherwise the replication group uses the default parameter group.

During creation everything works fine, but once both resources exist and I want to restore the default parameter group (and thus also remove the custom one), the update and destroy operations do not run in the order I would have expected, despite the explicit dependency.

If I am wrong, sorry, I'm a TF newbie :-)

Terraform Version

0.11.8

Terraform Configuration Files

modules/aws_elasticache/variables.tf

# [...]
variable "parameter_group_name" {
  description = "The name of the parameter group to use"
  default     = ""
}

variable "create_parameter_group" {
  description = "True to create a custom parameter group"
  default     = false
}
# [...]

modules/aws_elasticache/main.tf

locals {
  # Ugly workaround: prefer the explicitly supplied name; otherwise use the
  # id of the custom parameter group, if one was created. Padding the list
  # with "" via concat() keeps element() from failing when count is 0.
  parameter_group_name          = "${coalesce(var.parameter_group_name, (length(aws_elasticache_parameter_group.this.*.id) == 0 ? "" : element(concat(list(""), aws_elasticache_parameter_group.this.*.id), 1)))}"

  # Only create the custom parameter group when no explicit name was given.
  enable_create_parameter_group = "${var.parameter_group_name == "" ? var.create_parameter_group : 0}"
}

# Cluster
resource "aws_elasticache_replication_group" "this" {
  depends_on                    = ["aws_elasticache_parameter_group.this"]

  # [...]

  parameter_group_name          = "${local.parameter_group_name}"

  # [...]
}

# Parameter group
resource "aws_elasticache_parameter_group" "this" {
  count = "${local.enable_create_parameter_group}"

  # [...]
}

main.tf

module "cache" {
  source = "./modules/aws_elasticache"

  # [...]

  #create_parameter_group = true
  #parameters             = "${local.workspace_lists["cache_parameters"]}"
  #parameter_group_family = "redis5.0"

  parameter_group_name = "default.redis5.0"

  # [...]
}

Expected Behavior

The replication group should be updated first, and only then should the parameter group be destroyed.

Actual Behavior

Terraform attempts to destroy the parameter group first, but since the replication group still depends on it, the destroy fails.

Steps to Reproduce

  1. main.tf:

module "cache" {
  source = "./modules/aws_elasticache"

  # [...]

  create_parameter_group = true

  # [...]
}

  2. terraform plan

  3. terraform apply
     It creates the two resources, with the replication group dependent on the parameter group.

  4. main.tf:

module "cache" {
  source = "./modules/aws_elasticache"

  # [...]

  parameter_group_name = "default.redis5.0"

  # [...]
}

  5. terraform plan
     The plan shows that the replication group's "parameter_group_name" field would be changed and that the parameter group resource would be destroyed.

  6. terraform apply
     Terraform attempts to destroy the parameter group first, before updating the "parameter_group_name" field of the replication group:
Error: Error applying plan:

1 error(s) occurred:

* module.cache.aws_elasticache_parameter_group.this (destroy): 1 error(s) occurred:

* aws_elasticache_parameter_group.this: InvalidCacheParameterGroupState: One or more cache clusters are still members of this parameter group custom-pg, so the group cannot be deleted.
	status code: 400, request id: ...

Thank you very much!

@createchange

createchange commented Feb 3, 2019

I am hitting this same problem right now. I create a network and then a firewall for that network, but the delete fails because Terraform tries to destroy the network first, even though the firewall relies on it:

* google_compute_network.default: The network resource 'projects/jumpserver/global/networks/test-network' is already being used by 'projects/jumpserver/global/firewalls/test-firewall'

I have to imagine there is a way to get it to destroy in the correct order without swapping the blocks in the config file. Help appreciated. :)
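For reference, the shape of the configuration is roughly the following sketch (trimmed; the firewall rule is illustrative, the names match the error above):

resource "google_compute_network" "default" {
  name = "test-network"
}

resource "google_compute_firewall" "default" {
  name    = "test-firewall"
  # Referencing the network here creates an implicit dependency, which
  # should also order the destroy: firewall first, then network.
  network = "${google_compute_network.default.name}"

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
}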

@gpanula

gpanula commented Feb 6, 2019

Maybe try setting the resource's lifecycle to create before destroy?

lifecycle {
  create_before_destroy = true
}

Useful reference
https://www.hashicorp.com/blog/zero-downtime-updates-with-terraform
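For instance, applied to the parameter group from the original module, it would look something like this (a sketch; create_before_destroy makes Terraform create the replacement and update references before destroying the old resource):

resource "aws_elasticache_parameter_group" "this" {
  count = "${local.enable_create_parameter_group}"

  # [...]

  lifecycle {
    create_before_destroy = true
  }
}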

@maxgio92
Author

maxgio92 commented Feb 7, 2019

Thank you @gpanula :-) but in my case there's no need to destroy and recreate a resource; I need to update one resource and only then destroy the resource it depends on.

@paultyng
Contributor

@maxgio92 have you confirmed the dependency graph in this case (using terraform graph)?
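For example, rendered with Graphviz (assuming dot is installed):

terraform graph | dot -Tsvg > graph.svg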

@maxgio92
Author

@paultyng yes

@mangobatao

@maxgio92 do you have a workaround for this problem?

@angelhvargas

I'm hitting the same issue with the OpenStack provider: during terraform destroy, Terraform tries to destroy the network first while the network still has floating IPs attached to it. I would expect Terraform to first destroy every resource that depends on the network. How can I work around this?

@dejwsz

dejwsz commented Jun 17, 2019

I had the same issue today targeting OpenStack with Terraform v0.12.2 and provider.openstack v1.19.0. TF tried to destroy the subnetwork first while the VM using it (and a related port) was still there. I had to remove the VM manually, and then everything went well.

@dejwsz

dejwsz commented Jun 17, 2019

After I added a "depends_on" clause to my compute definition, the destroy process ran in the proper order:

resource "openstack_compute_instance_v2" "my-vm" {
  # [...]

  depends_on = [
    openstack_networking_subnet_v2.my-subnet,
  ]

  # [...]
}

@maxgio92
Author

@mangobatao I'm sorry but not yet :-(

@jamesgoodhouse
Contributor

We are currently experiencing problems we believe are similar to this. We rely on Terraform's implicit dependencies to figure out the order for creating resources, and so far it does the right thing 100% of the time. When it comes time to destroy the resources, however, it seems like 50% of the time it deletes them in the wrong order: a resource that others still depend on is deleted first, and the delete fails.
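For reference, an implicit dependency is created whenever one resource's argument references another resource's attribute, and it is supposed to order destroys as well as creates. A minimal sketch (names are illustrative):

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "main" {
  # Referencing aws_vpc.main.id creates an implicit dependency: the subnet
  # is created after the VPC and should be destroyed before it.
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}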

@hashibot hashibot added v0.11 Issues (primarily bugs) reported against v0.11 releases v0.12 Issues (primarily bugs) reported against v0.12 releases labels Aug 29, 2019
@PhungXuanAnh

I encountered the same problem while destroying an aws_api_gateway_deployment. I received this error:

Error: error deleting API Gateway Deployment (tg332y): BadRequestException: Active stages pointing to this deployment must be moved or deleted status code: 400, request id: 24c14f76-766a-44c4-9610-26ca876ffe08

The error happened because the aws_api_gateway_base_path_mapping still existed, even though I had added a depends_on property pointing at the API Gateway Deployment.

It seems the destroy actions are not ordered correctly based on the depends_on property.
My workaround is to combine Terraform with the AWS CLI in a provisioner to remove the base path mapping before the deployment is destroyed.
I hope this issue will be solved as soon as possible; in the meantime, any other ideas are welcome!
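A sketch of that workaround (domain name and base path are illustrative; the destroy-time local-exec provisioner runs the AWS CLI before Terraform deletes the deployment):

resource "aws_api_gateway_deployment" "this" {
  rest_api_id = aws_api_gateway_rest_api.this.id

  # Runs on destroy, before this deployment is deleted, so the stage's
  # base path mapping is gone by the time the deployment is removed.
  provisioner "local-exec" {
    when    = destroy
    command = "aws apigateway delete-base-path-mapping --domain-name example.com --base-path example"
  }
}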

@rdettai

rdettai commented Sep 30, 2019

[screenshot attachment]
