[ISSUE] Clusters persist and are not added to state after deployment fails due to provider issue #383
Comments
@jancschaefer what do you think should be the expected behavior here?
I would expect either that the clusters do not show up in Databricks if the deployment fails, or that the clusters are nonetheless added to the state and the provider tries to start the same existing cluster the next time. As it stands, if terraform apply fails twice and then succeeds, I end up with three clusters, although only one was defined in Terraform.
Okay, then the cluster should be deleted if it cannot be started. The change has to be added to the cluster create logic. This might qualify as a behavior change and might be put on hold until 0.3.
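A minimal Go sketch of that clean-up behavior, assuming a simplified clusters client; the interface and function names below are illustrative and are not the provider's actual code:

```go
package example

import (
	"errors"
	"fmt"
)

// clustersClient is a hypothetical, trimmed-down view of a Clusters API.
type clustersClient interface {
	Create(spec map[string]any) (string, error) // returns the new cluster ID
	WaitUntilRunning(clusterID string) error    // blocks until RUNNING, or fails
	PermanentDelete(clusterID string) error
}

// createAndStart deletes the cluster again if it cannot be started, so a
// failed apply does not leave an untracked cluster behind in the workspace.
func createAndStart(c clustersClient, spec map[string]any) (string, error) {
	id, err := c.Create(spec)
	if err != nil {
		return "", err
	}
	if err := c.WaitUntilRunning(id); err != nil {
		// best-effort cleanup; report both errors if the delete also fails
		if delErr := c.PermanentDelete(id); delErr != nil {
			return "", errors.Join(err, delErr)
		}
		return "", fmt.Errorf("cluster %s failed to start and was deleted: %w", id, err)
	}
	return id, nil
}
```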
Hey, adding my 2 cents here: this is occurring because the cluster create waits for the cluster to be in a running state before we register the ID. Unfortunately, Terraform will not be able to taint the resource if it is not aware of the ID. Terraform's typical behavior is to mark a resource whose creation fails as tainted and to destroy and recreate it on the next apply, but that only works if the provider has recorded the ID.
Another alternative would be to register the ID right after the /create call is made, before we wait for the cluster to reach a running state.
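A sketch of that alternative, reusing the hypothetical clustersClient interface from the sketch above; stateSetter stands in for the plugin SDK's resource data, and none of these names are the provider's actual code:

```go
// stateSetter is a stand-in for the part of Terraform's resource data needed
// here; the real provider would use *schema.ResourceData from the plugin SDK.
type stateSetter interface {
	SetId(id string)
}

// createRegisterThenWait sets the resource ID as soon as /create returns, so
// that a failure while waiting for RUNNING leaves a tainted resource in state
// instead of an untracked cluster in the workspace.
func createRegisterThenWait(d stateSetter, c clustersClient, spec map[string]any) error {
	id, err := c.Create(spec)
	if err != nil {
		return err
	}
	// register the ID before waiting on the cluster state
	d.SetId(id)
	return c.WaitUntilRunning(id)
}
```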
* Pre-release fixing
* Added NAT to BYOVPC terraform module
* added instance profile locks
* Added sync block for instance profiles integration tests
* Fix #383 Cleaning up clusters that fail to start
* Added log delivery use case docs
* Fix #382 - ignore changes to deployment_name
* Fix test and lints
* Fix #382 by ignoring incoming prefix for deployment_name for databricks_mws_workspaces
* Improve documentation to fix #368
* fix linting issues

Co-authored-by: Serge Smertin <serge.smertin@databricks.com>
Hi there,
Thank you for opening an issue. Please note that we try to keep the Databricks Provider issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.
Terraform Version
Affected Resource(s)
Please list the resources as a list, for example:
If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.
Environment variable names
Terraform Configuration Files
Panic Output
Expected Behavior
Either the clusters should not remain in the Databricks workspace if the deployment fails, or they should be added to the Terraform state so that a subsequent apply starts the existing clusters instead of creating new ones.
Actual Behavior
Databricks clusters were created and visible in the workspace, but were not added to the state because they failed to start up due to an issue with an Azure Policy. Since the clusters were not in the state, once the policy was fixed and terraform apply succeeded, we ended up with duplicate clusters with the same name.
Steps to Reproduce
Please list the steps required to reproduce the issue, for example:
terraform apply
Important Factoids
Azure policy: