
Incorrect order of destroying resources - Flaky/inconsistent behavior #23635

Closed
cvbarros opened this issue Dec 10, 2019 · 14 comments · Fixed by #24084
Assignees
Labels
bug core v0.12 Issues (primarily bugs) reported against v0.12 releases
Milestone

Comments

@cvbarros

Terraform Version

0.12.7

Terraform Configuration Files

resource "teamcity_project" "vcs_root_project" {
  name = "vcs_root_project"
}

resource "teamcity_vcs_root_git" "git_test" {
  name           = "application"
  project_id     = teamcity_project.vcs_root_project.id
  fetch_url      = "https://github.com/cvbarros/terraform-provider-teamcity"
  default_branch = "refs/head/master"
  branches = [
    "+:refs/(pull/*)/head",
    "+:refs/heads/develop",
  ]
  username_style              = "userid"
  submodule_checkout          = "checkout"
  enable_branch_spec_tags     = true
  modification_check_interval = 60
}

Debug Output

https://gist.github.com/cvbarros/cd508eaeb50c2a310fc2c1057ddc3832
Relevant extract:

2019/12/10 23:31:14 [TRACE] <root>: eval: *terraform.EvalApply
2019/12/10 23:31:14 [DEBUG] teamcity_project.vcs_root_project: applying the planned Delete change
2019/12/10 23:31:14 [TRACE] GRPCProvider: ApplyResourceChange
2019/12/10 23:31:14 [TRACE] <root>: eval: *terraform.EvalRequireState
2019/12/10 23:31:14 [TRACE] <root>: eval: *terraform.EvalApplyPre
2019/12/10 23:31:14 [TRACE] <root>: eval: *terraform.EvalIf
2019/12/10 23:31:14 [TRACE] <root>: eval: *terraform.EvalApplyProvisioners
2019/12/10 23:31:14 [TRACE] <root>: eval: *terraform.EvalIf
2019/12/10 23:31:14 [TRACE] <root>: eval: *terraform.EvalIf
2019/12/10 23:31:14 [TRACE] <root>: eval: *terraform.EvalApply
2019/12/10 23:31:14 [DEBUG] teamcity_vcs_root_git.git_test: applying the planned Delete change
2019/12/10 23:31:14 [TRACE] GRPCProvider: ApplyResourceChange
2019/12/10 23:31:14 [DEBUG]: resourceProjectDelete - Destroying project VcsRootProject
2019/12/10 23:31:14 [DEBUG]: resourceVcsRootGitDelete - Destroying vcs root VcsRootProject_Application
2019/12/10 23:31:14 [INFO]: resourceVcsRootGitDelete - Destroyed vcs root VcsRootProject_Application
2019/12/10 23:31:14 [DEBUG] teamcity_vcs_root_git.git_test: apply errored, but we're indicating that via the Error pointer rather than returning it: Error '404' when deleting vcsRoot: Responding with error, status code: 404 (Not Found).
Details: jetbrains.buildServer.server.rest.errors.NotFoundException: No VCS root found by internal or external id 'VcsRootProject_Application'.
Could not find the entity requested. Check the reference is correct and the user has permissions to access the entity.
2019/12/10 23:31:14 [TRACE] <root>: eval: *terraform.EvalWriteState
2019/12/10 23:31:14 [TRACE] EvalWriteState: writing current state object for teamcity_vcs_root_git.git_test
2019/12/10 23:31:14 [TRACE] <root>: eval: *terraform.EvalApplyPost
2019/12/10 23:31:14 [ERROR] <root>: eval: *terraform.EvalApplyPost, err: Error '404' when deleting vcsRoot: Responding with error, status code: 404 (Not Found).
Details: jetbrains.buildServer.server.rest.errors.NotFoundException: No VCS root found by internal or external id 'VcsRootProject_Application'.
Could not find the entity requested. Check the reference is correct and the user has permissions to access the entity.
2019/12/10 23:31:14 [ERROR] <root>: eval: *terraform.EvalSequence, err: Error '404' when deleting vcsRoot: Responding with error, status code: 404 (Not Found).
Details: jetbrains.buildServer.server.rest.errors.NotFoundException: No VCS root found by internal or external id 'VcsRootProject_Application'.
Could not find the entity requested. Check the reference is correct and the user has permissions to access the entity.
2019/12/10 23:31:14 [ERROR] <root>: eval: *terraform.EvalOpFilter, err: Error '404' when deleting vcsRoot: Responding with error, status code: 404 (Not Found).
Details: jetbrains.buildServer.server.rest.errors.NotFoundException: No VCS root found by internal or external id 'VcsRootProject_Application'.
Could not find the entity requested. Check the reference is correct and the user has permissions to access the entity.
2019/12/10 23:31:14 [TRACE] [walkDestroy] Exiting eval tree: teamcity_vcs_root_git.git_test (destroy)
2019/12/10 23:31:14 [TRACE] vertex "teamcity_vcs_root_git.git_test (destroy)": visit complete
2019/12/10 23:31:14 [TRACE] dag/walk: upstream of "teamcity_vcs_root_git.git_test (clean up state)" errored, so skipping
2019/12/10 23:31:14 [INFO]: resourceProjectDelete - Destroyed project VcsRootProject

Crash Output

Expected Behavior

Resource teamcity_project.vcs_root_project is a dependency of resource teamcity_vcs_root_git.git_test.
Thus, the destroy should happen in the sequence teamcity_vcs_root_git.git_test -> teamcity_project.vcs_root_project

Actual Behavior

teamcity_project.vcs_root_project was deleted before the destroy for teamcity_vcs_root_git.git_test was called.
For this provider, deleting the project also removes the VCS Root, so when Terraform then calls destroy for teamcity_vcs_root_git.git_test, the upstream API returns a 404.

Steps to Reproduce

The issue happens when running an "Import" acceptance test for the provider, both locally and on CI; it was first detected on CI.
Relevant test case:

func TestAccVcsRootGit_Import(t *testing.T) {
	resName := "teamcity_vcs_root_git.git_test"
	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckVcsRootGitDestroy,
		Steps: []resource.TestStep{
			{
				Config: testAccVcsRootGitBasic,
			},
			{
				ResourceName:      resName,
				ImportState:       true,
				ImportStateVerify: true,
			},
		},
	})
}

const testAccVcsRootGitBasic = `
resource "teamcity_project" "vcs_root_project" {
  name = "vcs_root_project"
}

resource "teamcity_vcs_root_git" "git_test" {
  name           = "application"
  project_id     = teamcity_project.vcs_root_project.id
  fetch_url      = "https://github.com/cvbarros/terraform-provider-teamcity"
  default_branch = "refs/head/master"
  branches = [
    "+:refs/(pull/*)/head",
    "+:refs/heads/develop",
  ]
  username_style              = "userid"
  submodule_checkout          = "checkout"
  enable_branch_spec_tags     = true
  modification_check_interval = 60
}
`

Full Source

Additional Context

There seems to be some sort of race condition happening, as these failures are intermittent. Below is a gist containing traces, where the same test was run with no changes (environment, code) and succeeds:
https://gist.github.com/cvbarros/97bd088ae8084c89d340fde2e9db54ea
Relevant extract:

2019/12/10 23:29:46 [DEBUG] teamcity_project.vcs_root_project: applying the planned Delete change
2019/12/10 23:29:46 [TRACE] GRPCProvider: ApplyResourceChange
2019/12/10 23:29:46 [DEBUG]: resourceProjectDelete - Destroying project VcsRootProject
2019/12/10 23:29:46 [TRACE] <root>: eval: *terraform.EvalRequireState
2019/12/10 23:29:46 [TRACE] <root>: eval: *terraform.EvalApplyPre
2019/12/10 23:29:46 [TRACE] <root>: eval: *terraform.EvalIf
2019/12/10 23:29:46 [TRACE] <root>: eval: *terraform.EvalApplyProvisioners
2019/12/10 23:29:46 [TRACE] <root>: eval: *terraform.EvalIf
2019/12/10 23:29:46 [TRACE] <root>: eval: *terraform.EvalIf
2019/12/10 23:29:46 [TRACE] <root>: eval: *terraform.EvalApply
2019/12/10 23:29:46 [DEBUG] teamcity_vcs_root_git.git_test: applying the planned Delete change
2019/12/10 23:29:46 [TRACE] GRPCProvider: ApplyResourceChange
2019/12/10 23:29:46 [DEBUG]: resourceVcsRootGitDelete - Destroying vcs root VcsRootProject_Application
2019/12/10 23:29:46 [INFO]: resourceVcsRootGitDelete - Destroyed vcs root VcsRootProject_Application
2019/12/10 23:29:46 [TRACE] <root>: eval: *terraform.EvalWriteState
2019/12/10 23:29:46 [TRACE] EvalWriteState: removing state object for teamcity_vcs_root_git.git_test
2019/12/10 23:29:46 [TRACE] <root>: eval: *terraform.EvalApplyPost
2019/12/10 23:29:46 [TRACE] <root>: eval: *terraform.EvalUpdateStateHook
2019/12/10 23:29:46 [TRACE] [walkDestroy] Exiting eval tree: teamcity_vcs_root_git.git_test (destroy)
2019/12/10 23:29:46 [TRACE] vertex "teamcity_vcs_root_git.git_test (destroy)": visit complete
2019/12/10 23:29:46 [TRACE] dag/walk: visiting "teamcity_vcs_root_git.git_test (clean up state)"
2019/12/10 23:29:46 [TRACE] vertex "teamcity_vcs_root_git.git_test (clean up state)": starting visit (*terraform.NodeDestroyResource)
2019/12/10 23:29:46 [TRACE] vertex "teamcity_vcs_root_git.git_test (clean up state)": evaluating
2019/12/10 23:29:46 [TRACE] [walkDestroy] Entering eval tree: teamcity_vcs_root_git.git_test (clean up state)
2019/12/10 23:29:46 [TRACE] <root>: eval: *terraform.EvalForgetResourceState
2019/12/10 23:29:46 [TRACE] EvalForgetResourceState: Pruned husk of teamcity_vcs_root_git.git_test from state
2019/12/10 23:29:46 [TRACE] [walkDestroy] Exiting eval tree: teamcity_vcs_root_git.git_test (clean up state)
2019/12/10 23:29:46 [TRACE] vertex "teamcity_vcs_root_git.git_test (clean up state)": visit complete
2019/12/10 23:29:46 [INFO]: resourceProjectDelete - Destroyed project VcsRootProject
2019/12/10 23:29:46 [TRACE] <root>: eval: *terraform.EvalWriteState
2019/12/10 23:29:46 [TRACE] EvalWriteState: removing state object for teamcity_project.vcs_root_project
2019/12/10 23:29:46 [TRACE] <root>: eval: *terraform.EvalApplyPost
2019/12/10 23:29:46 [TRACE] <root>: eval: *terraform.EvalUpdateStateHook
2019/12/10 23:29:46 [TRACE] [walkDestroy] Exiting eval tree: teamcity_project.vcs_root_project (destroy)
2019/12/10 23:29:46 [TRACE] vertex "teamcity_project.vcs_root_project (destroy)": visit complete
2019/12/10 23:29:46 [TRACE] dag/walk: visiting "teamcity_project.vcs_root_project (clean up state)"

References

@hashibot hashibot added bug core v0.12 Issues (primarily bugs) reported against v0.12 releases labels Dec 10, 2019
@teamterraform
Contributor

Hi @cvbarros!

In recent releases we have made adjustments to the handling of dependencies during destroy. Could you give this a try with the latest release of Terraform and see if that improves the behavior?

Looking at your trace log (thanks!) we can see the dependency graph Terraform built for the destroy step:

meta.count-boundary (EachMode fixup) - *terraform.NodeCountBoundary
  teamcity_project.vcs_root_project (clean up state) - *terraform.NodeDestroyResource
  teamcity_vcs_root_git.git_test (clean up state) - *terraform.NodeDestroyResource
provider.teamcity - *terraform.NodeApplyableProvider
provider.teamcity (close) - *terraform.graphNodeCloseProvider
  teamcity_project.vcs_root_project (clean up state) - *terraform.NodeDestroyResource
  teamcity_vcs_root_git.git_test (clean up state) - *terraform.NodeDestroyResource
root - terraform.graphNodeRoot
  meta.count-boundary (EachMode fixup) - *terraform.NodeCountBoundary
  provider.teamcity (close) - *terraform.graphNodeCloseProvider
teamcity_project.vcs_root_project (clean up state) - *terraform.NodeDestroyResource
  teamcity_project.vcs_root_project (destroy) - *terraform.NodeDestroyResourceInstance
teamcity_project.vcs_root_project (destroy) - *terraform.NodeDestroyResourceInstance
  provider.teamcity - *terraform.NodeApplyableProvider
teamcity_vcs_root_git.git_test (clean up state) - *terraform.NodeDestroyResource
  teamcity_vcs_root_git.git_test (destroy) - *terraform.NodeDestroyResourceInstance
teamcity_vcs_root_git.git_test (destroy) - *terraform.NodeDestroyResourceInstance
  provider.teamcity - *terraform.NodeApplyableProvider

It does indeed seem that Terraform hasn't represented the dependency from teamcity_vcs_root_git.git_test to teamcity_project.vcs_root_project, because both of them depend only on provider.teamcity here.

This sort of missing dependency during destroy does seem consistent with the behavior fixed by the recent changes, one of which you've referenced from your issue here and which first shipped in Terraform 0.12.14.

@cvbarros
Author

cvbarros commented Dec 10, 2019

Hi @teamterraform,

Thanks for looking into it - However, the provider is built against the latest release (v0.12.17), as seen here:
https://github.com/cvbarros/terraform-provider-teamcity/blob/f4f499544f593cd79b96bbeb993caf4f86bb082b/go.mod#L10

Also, I was looking at porting the provider to https://github.com/hashicorp/terraform-plugin-sdk, and these latest fixes weren't applied there, so I held that off for now.

Edit: Another thing worth mentioning - There is another situation where the graph is not built correctly, although it involves a more complicated scenario like #18408.

The test at:
https://github.com/cvbarros/terraform-provider-teamcity/blob/f4f499544f593cd79b96bbeb993caf4f86bb082b/teamcity/resource_build_config_test.go#L508
is a two-step operation to ensure that an update of the resource works. The first configuration is a graph of four resources:

resource "teamcity_project" "build_config_project_test" {
  name = "build_config_project_test"
}
resource "teamcity_build_config" "build_configuration_test" {
	name = "build config test"
	project_id = teamcity_project.build_config_project_test.id
	templates = [ teamcity_build_config.build_configuration_template1.id, teamcity_build_config.build_configuration_template2.id ]
	depends_on = [
		teamcity_build_config.build_configuration_template1,
		teamcity_build_config.build_configuration_template2,
	]
}
resource "teamcity_build_config" "build_configuration_template1" {
	name = "build template 1"
	is_template = true
	project_id = teamcity_project.build_config_project_test.id
}
resource "teamcity_build_config" "build_configuration_template2" {
	name = "build template 2"
	is_template = true
	project_id = teamcity_project.build_config_project_test.id
}

Depicted by:

             +---------+
             | project |
             +----+----+
                  |
         +--------v----------+
         |  build_config     |
         +-+---------------+-+
           |               |
           |               v
+----------v---+   +-------+------+
| template1    |   | template2    |
+--------------+   +--------------+

When updating to the new configuration, which removes template2 and adds a new resource and dependency (to update the build config), Terraform tries to destroy template2 prior to updating the dependent resource, build config.

Updated config graph:

             +---------+
             | project |
             +----+----+
                  |
         +--------v----------+
         |  build_config     +----------------+
         +-+---------------+-+                |
           |               |                  |
           |        XX     v     XX           |
+----------v---+   +-XXXX--+--XXX-+   +-------v------+
| template1    |   | temXXXtXXX   |   |  template3   |
+--------------+   +-----XXXXXX---+   +--------------+
                       XXX    XX
                      XX       XXX

In order to work around that in my tests I've kept the template2 on the updated config until a solution is devised:
https://github.com/cvbarros/terraform-provider-teamcity/blob/f4f499544f593cd79b96bbeb993caf4f86bb082b/teamcity/resource_build_config_test.go#L1002

If helpful, I can also provide diagnostic output on this case - but I've reported the first one as it is way simpler to reproduce/understand.

@jbardin jbardin self-assigned this Dec 11, 2019
@jbardin
Member

jbardin commented Dec 11, 2019

Hi @cvbarros, thanks for the extra info.

The dependencies are solely handled by core, so the important part here is what is running on the cli. A provider should not be built with terraform 0.12.17, and must use the terraform-plugin-sdk. The changes that you linked cannot be applied to that project, because they are not compatible with the legacy state and acceptance test formats.

That said, you seem to only have failures here during acceptance tests. Is this appearing as a regression when using terraform 0.12.17 (which would not be unexpected)? Is this also happening when applying and destroying directly from the cli?

If this is reproducible from the cli, the trace output from that execution would be more helpful. If this is only present when using the terraform-plugin-sdk, there's probably not much we can do other than work around the issue until the work on a new testing framework has progressed enough to take over the testing facilities for the plugin sdk.

@cvbarros
Author

Hi James, thanks a bunch for taking a look at it!

A provider should not be built with terraform 0.12.17

This helps a lot; I'll migrate to terraform-plugin-sdk and see if that fixes it, and I'll get back to you with results. In case this is a regression, I'll revert to TF 0.12.7. Would you happen to know which is the "cut" version for the terraform-plugin-sdk, where it branched off core to evolve independently?

Is this also happening when applying and destroying directly from the cli?

I'll run tests using these configurations from the CLI as well and report back the results.

@jbardin
Member

jbardin commented Dec 11, 2019

I think the cut is post 0.12.7. The Terraform Plugin docs only list a date, which is September 2019 and coincides roughly with 0.12.8.

@cvbarros
Author

cvbarros commented Dec 11, 2019

Provider built against Core v0.12.7

I've completed a set of tests here regarding the 2nd use case (the one with the graph, as per my comment above).
The issue happens deterministically, in both acceptance tests and via the CLI, on versions 0.12.7 and 0.12.17.

Given the configuration on the first apply:

provider "teamcity" {
  address  = var.teamcity_url
  username = var.teamcity_username
  password = var.teamcity_password

  version = "~> 0.5.3"
}

resource "teamcity_project" "build_config_project_test" {
  name = "build_config_project_test"
}

resource "teamcity_build_config" "build_configuration_test" {
  name = "build config test"
  project_id = teamcity_project.build_config_project_test.id

  templates = [ teamcity_build_config.build_configuration_template1.id, teamcity_build_config.build_configuration_template2.id ]
}

resource "teamcity_build_config" "build_configuration_template1" {
  name = "build template 1"
  is_template = true
  project_id = teamcity_project.build_config_project_test.id
}

resource "teamcity_build_config" "build_configuration_template2" {
  name = "build template 2"
  is_template = true
  project_id = teamcity_project.build_config_project_test.id
}

Apply works as expected.
0.12.17 TRACE
0.12.7 TRACE

When updating it to the next configuration, which:

  • Removes teamcity_build_config.build_configuration_template2,
  • Adds build_configuration_template3,
  • Updates the templates attribute of teamcity_build_config.build_configuration_test to [ teamcity_build_config.build_configuration_template1.id, teamcity_build_config.build_configuration_template3.id ]

provider "teamcity" {
  address  = var.teamcity_url
  username = var.teamcity_username
  password = var.teamcity_password

  version = "~> 0.5.3"
}

resource "teamcity_project" "build_config_project_test" {
  name = "build_config_project_test"
}

resource "teamcity_build_config" "build_configuration_test" {
  name = "build config test"
  project_id = teamcity_project.build_config_project_test.id

  templates = [ teamcity_build_config.build_configuration_template1.id, teamcity_build_config.build_configuration_template3.id ]
}

resource "teamcity_build_config" "build_configuration_template1" {
  name = "build template 1"
  is_template = true
  project_id = teamcity_project.build_config_project_test.id
}

resource "teamcity_build_config" "build_configuration_template3" {
  name = "build template 2"
  is_template = true
  project_id = teamcity_project.build_config_project_test.id
}

Apply results in error.
0.12.17 TRACE
0.12.7 TRACE

Error: Error '400' when deleting build type: Responding with error, status code: 400 (Bad Request).
Details: jetbrains.buildServer.server.rest.errors.BadRequestException: Cannot remove template with id 'BuildConfigProjectTest_BuildTemplate2': Build configuration template cannot be removed because it is being used in build configuration "build_config_project_test / build config test"
Invalid request. Please check the request URL and data are correct.

Output of terraform graph -type=plan prior to trying to apply the 2nd configuration:
(graph image attached to the original issue)

Terraform state at that point:
https://gist.github.com/cvbarros/fd0bfd5bca58da85669ca0d853dfc207

I didn't gather the traces building against terraform-plugin-sdk, as I had previously confirmed that this problem happens deterministically when using it as well.

@cvbarros
Author

I've also tested building the provider against 0.11.14 with the same CLI version 0.11.14, so this is most likely not a TF 0.12 issue.

Would it be the case that we need to keep the resource there and do a 2-phase change?

  1. Update the dependent resource to use the new one and keep the old one in the config, apply.
  2. Remove the old one from the config, apply.
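Under a hypothetical 2-phase change like that, the first apply could look like the following sketch (resource bodies abbreviated from the configurations above; this is only an illustration of the workaround, not a confirmed fix):

```hcl
# Phase 1: build_configuration_test already references only template1
# and template3, but template2 stays in the configuration so nothing
# tries to destroy it while it is still attached upstream.
resource "teamcity_build_config" "build_configuration_test" {
  name       = "build config test"
  project_id = teamcity_project.build_config_project_test.id

  templates = [
    teamcity_build_config.build_configuration_template1.id,
    teamcity_build_config.build_configuration_template3.id,
  ]
}

resource "teamcity_build_config" "build_configuration_template2" {
  name        = "build template 2"
  is_template = true
  project_id  = teamcity_project.build_config_project_test.id
}

# Phase 2 (a second apply): delete the template2 block above from the
# configuration; it is no longer referenced, so its destroy can succeed.
```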

@jbardin
Member

jbardin commented Dec 11, 2019

Thanks for the updates @cvbarros!

Terraform was never able to precisely track the destroy+update dependencies, so any correct destroy order was often accidental. The order now is well defined, but it seems it's not what is needed in this situation. I'll take a closer look here to see if this is a situation that can't be directly modeled by Terraform, or just that the update/destroy order has been set incorrectly.

@jbardin
Member

jbardin commented Dec 12, 2019

Some notes on this while it's fresh,

The graph is connecting build_config_test with build_config_template2 correctly, at least as it's currently defined:

teamcity_build_config.build_configuration_test 
  teamcity_build_config.build_configuration_template2 (destroy)
  teamcity_build_config.build_configuration_template3

This is the "natural" order, as it's equivalent to how it would be updated if template2 were being replaced, just with the create node skipped. Also note that if template2 were being replaced, making this connection in the other direction would be a cycle.

That second point brings up an interesting situation, because this type of resource (of which it is not the only one in its class) cannot work if the dependency is being replaced. That would require 2 independent updates to build_configuration_test, one to remove the old template, then one to add the newly created version. Terraform of course doesn't have a mechanism at the moment to deal with multiple updates like this.

Offhand I can think of a couple ways to model this, but none are great. There could be a separate build_config_template_registration resource which could handle the adding and removal of the templates, but that requires some hard-coded reference in order to avoid the dependency cycle between itself and the teamcity_build_config.

If the resource is inexpensive and not heavily referenced, requiring replacement when templates change is a slightly better option. This in effect creates a 2-step update, by first removing the entire resource, templates and all, then later creating a new resource with the new templates.

As for fixing the issue as reported here: in this partial case of an update and a destroy, we may be able to conditionally reverse the edge, provided no cycles are introduced; meaning there's no associated create node for the destroy, the dependent is only a resource update, and there are no intermediary dependencies. That's not optimal for a couple of reasons, but mostly because it makes the graph harder to understand for both maintainers and users. Having the flow go from a_destroy, a_create, b_update in a replacement scenario to b_update, a_destroy only when there's no a_create is surprising. It also obviously doesn't solve the replacement scenario, which is unsatisfying.

I don't have a decision on the situation yet, but that should at least document what we're dealing with.

@jbardin
Member

jbardin commented Dec 12, 2019

I just remembered the other approach: making template2 create_before_destroy. We don't have any way to enforce that right now other than documentation, but it's not unheard of in other resources.

This has never been an option for this situation, because create_before_destroy only exists in the configuration. This might be a good reason to store lifecycle information in the state, which I can look into.
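For reference, the configuration-side shape of that approach is the standard lifecycle block, sketched below on the template2 resource from the earlier configuration; note that, as described above, this only helps while the resource is still present in the configuration:

```hcl
resource "teamcity_build_config" "build_configuration_template2" {
  name        = "build template 2"
  is_template = true
  project_id  = teamcity_project.build_config_project_test.id

  lifecycle {
    # Create the replacement template (and update its dependents)
    # before destroying this instance.
    create_before_destroy = true
  }
}
```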

@cvbarros
Author

Thanks for the swift response!
I'm not very familiar with the inner workings of how Terraform builds these graphs 😅, so it's a bit challenging for me to understand/collaborate fully towards a solution. I can leave that to the experts 👍

As per the workaround suggestions:

This in effect creates a 2-step update, by first removing the entire resource, templates and all, then later creating a new resource with the new templates.

Requiring a 2-step update is also a fine workaround, and I can document it. However, it feels clunky from a usability perspective, like a limitation of the tool; it just feels counter-intuitive IMO.

There could be a separate build_config_template_registration resource which could handle the adding and removal of the templates

This could be a nice approach. I'll try creating something like a build_template_association resource and see how it looks from a config usability standpoint and whether it solves these issues. If it does, I'd be happy with it 💯!
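A hypothetical shape for such a resource might be the following sketch; the resource type and attribute names are invented for illustration, not an existing provider API, and as noted above one side might need a hard-coded ID to avoid a dependency cycle with the build config:

```hcl
# Hypothetical association resource: each instance manages one
# build-config-to-template attachment, so adding or removing a
# template never requires mutating the build config resource itself.
resource "teamcity_build_template_association" "test_template1" {
  build_config_id = teamcity_build_config.build_configuration_test.id
  template_id     = teamcity_build_config.build_configuration_template1.id
}
```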

I just remembered the other approach, making the template2 create_before_destroy. We don't have any way to enforce that right now other than documentation, but it's not unheard of in other resources.

I didn't understand whether this is a suggested workaround or a future plan for avoiding the issue (one that would require implementation). I tested it with create_before_destroy and it led to the same results (when removing the resource at the same time as removing the reference from the dependent).

@jbardin
Member

jbardin commented Dec 12, 2019

Sorry about the braindump there, there was a lot of extra info that was more for documentation of the issue than anything.

After giving it some thought, I think the create_before_destroy idea should be the path we want to go down (and in fact it is what other resources with similar constraints do). The reason being that it is the only way to produce the correct order for complete replacement of a template resource in a single apply step, where we get create, update then delete.

This takes a little effort in the provider resource, as you need to ensure that multiple instances can coexist during the update. If the resource has a field that must be unique, providers often generate a unique identifier for the resource, or append one to a configured prefix. You then just need to document that the resource must be create_before_destroy when using it as a template.

Though as you've found out, this doesn't work for the simple destroy case where the resource is removed from the configuration, because the information only exists in the config. This I believe is something we can remedy, and provide a general solution rather than core trying to guess when the order needs to be reversed.

In the meantime, if this is a common situation for you, I would suggest documenting a 2-step update as the workaround.

@jbardin jbardin added this to the v0.13.0 milestone Dec 13, 2019
@jbardin
Member

jbardin commented Dec 13, 2019

I've confirmed that the change is possible, and the order is enforced in the way that we want for this type of resource.

However, because of the way this is interacting with create_before_destroy, the method for connecting the dependencies won't appear until the next major release to prevent any possible regressions.

@ghost

ghost commented Apr 9, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 9, 2020