[Fix] Fix deletion of dashboard if it was trashed out of band #4235
Conversation
If integration tests don't run automatically, an authorized user can run them manually by following the instructions below.

Checks will be approved automatically on success.

Test Details: go/deco-tests/11863430999
```go
// If the dashboard was already trashed, we'll get a 403 (Permission Denied) error.
// There may be other cases where we get a 403, so we first confirm that the
// dashboard state is actually trashed, and if so, return success.
if errors.Is(err, apierr.ErrPermissionDenied) {
```
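For context, a minimal sketch of how the complete guard could look against the Databricks Go SDK; the exact identifiers (`Lakeview.Trash`, `Lakeview.Get`, `LifecycleStateTrashed`) are assumptions about the SDK surface rather than a quote from this PR:

```go
import (
	"context"
	"errors"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/apierr"
	"github.com/databricks/databricks-sdk-go/service/dashboards"
)

// deleteDashboard trashes a Lakeview dashboard, treating "already trashed
// out of band" as success rather than an error.
func deleteDashboard(ctx context.Context, w *databricks.WorkspaceClient, id string) error {
	err := w.Lakeview.Trash(ctx, dashboards.TrashDashboardRequest{DashboardId: id})
	if err == nil {
		return nil
	}
	// Trashing an already-trashed dashboard surfaces as a 403, but a 403
	// can have other causes, so confirm the state before swallowing it.
	if errors.Is(err, apierr.ErrPermissionDenied) {
		dashboard, getErr := w.Lakeview.Get(ctx, dashboards.GetDashboardRequest{DashboardId: id})
		if getErr == nil && dashboard.LifecycleState == dashboards.LifecycleStateTrashed {
			return nil // already trashed: the desired end state is reached
		}
	}
	return err
}
```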
Thanks for the PR. Curious -- should we also check for 404, or is 403 the only response when a dashboard is already trashed? Mentioning this since the API doc https://docs.databricks.com/api/workspace/lakeview/trash lists 404 as one of the responses.
It will return 404 if it really doesn't exist (not even in the trash).
Good point, though. This fix works around the issue at deletion time, but we should also treat trashed as a 404 at read time so that if it was trashed, the next plan will show creation instead of recreation.
### New Features and Improvements
* Add `databricks_mws_network_connectivity_config` and `databricks_mws_network_connectivity_configs` data source ([#3665](#3665)).
* Add support partitions in policy data sources ([#4181](#4181)).
* Added `databricks_registered_model_versions` data source ([#4100](#4100)).
* Update databricks_permissions resource to support vector-search-endpoints ([#4209](#4209)).
* add `databricks_serving_endpoints` data source ([#4226](#4226)).

### Bug Fixes
* Add validation for `run_as_mode` in `databricks_query` ([#4233](#4233)).
* Correct handling of updates for empty comments and `force_destroy` in UC catalog, schema, registered models and volumes ([#4244](#4244)).
* Fix deletion of dashboard if it was trashed out of band ([#4235](#4235)).
* Fix waiting for `databricks_vector_search_index` readiness ([#4243](#4243)).
* Remove single-node validation from interactive clusters ([#4222](#4222)).
* Remove single-node validation from jobs clusters ([#4216](#4216)).
* Use cluster list API to determine pinned cluster status ([#4203](#4203)).
* fix issue caused by setting pause_status in update monitor ([#4242](#4242)).

### Documentation
* Clarify workspace provider config ([#4208](#4208)).
* Update "Databricks Workspace Creator" permissions on gcp-workspace.md ([#4201](#4201)).
* Update `grants.md` references ([#4246](#4246)).
* Update description of `group_id` in `databricks_mws_ncc_private_endpoint_rule` ([#4238](#4238)).
* remove subnet sharing limitation in AWS ([#4239](#4239)).

### Internal Changes
* Bump Go SDK to latest and generate TF structs ([#4249](#4249)).
* Mark TestUcAccModelServingProvisionedThroughput as flaky. to be rever… ([#4232](#4232)).
* Rename resources directory to products in pluginframework ([#4139](#4139)).
* Revert "mark TestUcAccModelServingProvisionedThroughput as flaky. to … ([#4240](#4240)).
* Set user agent in some resources implemented in plugin framework ([#4187](#4187)).
* make `ApplyAndExpectData` work with nested set ([#4237](#4237)).

### Dependency Updates
* Bump dependencies for Plugin Framework and SDK v2 ([#4215](#4215)).
* Bump github.com/hashicorp/hcl/v2 from 2.22.0 to 2.23.0 ([#4236](#4236)).
* Bump github.com/hashicorp/terraform-plugin-testing from 1.10.0 to 1.11.0 ([#4247](#4247)).

### Exporter
* Add `List` operation for `users` service ([#4204](#4204)).
* Fix interactive selection of services ([#4245](#4245)).
## Changes
This is a follow-up to #4235, where the deletion of a trashed dashboard was fixed. This change treats trashed dashboards as deleted at read time, such that the resource shows up in the plan as new instead of a delete+create.

## Tests
- [x] `make test` run locally
- [ ] relevant change in `docs/` folder
- [ ] covered with integration tests in `internal/acceptance`
- [ ] relevant acceptance tests are passing
- [ ] using Go SDK
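A sketch of what that read-time treatment could look like against the Databricks Go SDK; the helper shape and the identifiers (`Lakeview.Get`, `LifecycleStateTrashed`) are assumptions for illustration, not the PR's actual code:

```go
import (
	"context"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/dashboards"
)

// readDashboard returns the dashboard, or (nil, nil) when it is trashed,
// so the caller can drop it from state and the next plan proposes a clean
// create instead of a destroy-and-recreate.
func readDashboard(ctx context.Context, w *databricks.WorkspaceClient, id string) (*dashboards.Dashboard, error) {
	dashboard, err := w.Lakeview.Get(ctx, dashboards.GetDashboardRequest{DashboardId: id})
	if err != nil {
		return nil, err // a real 404 (not even in the trash) still propagates
	}
	if dashboard.LifecycleState == dashboards.LifecycleStateTrashed {
		return nil, nil // trashed out of band: treat as deleted
	}
	return dashboard, nil
}
```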
## Changes
If a dashboard created by TF was trashed out of band, it could no longer be deleted by TF.
Example TF configuration:
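The original snippet is not preserved on this page; a minimal `databricks_dashboard` resource along these lines reproduces the scenario (all values are placeholders):

```hcl
resource "databricks_dashboard" "this" {
  display_name = "Example dashboard"
  warehouse_id = "<sql-warehouse-id>"
  parent_path  = "/Shared/dashboards"
  # Minimal empty Lakeview dashboard definition.
  serialized_dashboard = jsonencode({ pages = [] })
}
```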
If you apply this configuration and then trash the dashboard out of band (for example through the workspace UI), the subsequent apply errors out with a 403 (Permission Denied).
It turns out that trashing an already-trashed dashboard returns this error.
This change guards against that condition and returns success if the underlying dashboard has already been trashed.
## Tests
- [x] `make test` run locally
- [ ] relevant change in `docs/` folder
- [ ] covered with integration tests in `internal/acceptance`