[Fix] Fix deletion of dashboard if it was trashed out of band #4235

Merged

merged 1 commit into main from dashboard-trash-delete on Nov 16, 2024

Conversation

@pietern (Contributor) commented Nov 15, 2024

Changes

If a dashboard created by TF was trashed out of band, it could no longer be deleted by TF.

Example TF configuration:

terraform {
  required_providers {
    databricks = {
      source = "databricks/databricks"
      version = "1.58.0"
    }
  }
}

data "databricks_current_user" "me" {
  // This data source is used to get the current user's username
}

resource "databricks_dashboard" "this" {
  display_name = "Terraform test"
  parent_path = "/Workspace/Users/${data.databricks_current_user.me.user_name}/tf-dashboard-oob-deletion"
  warehouse_id = "58aa1b363649e722"
  serialized_dashboard = file("nyc_taxi_trip_analysis.lvdash.json")
}

If you apply this, and then run:

databricks lakeview trash <dashboard id>

The subsequent apply would error out with:

Error: cannot delete dashboard: dashboard [01efa39479a1128aae017018e434a747] lifecycle state [TRASHED] is not among the accepted states [Set(ACTIVE)]

It turns out trashing an already trashed dashboard returns an error.

This change guards against this condition and returns success if the underlying dashboard has already been trashed.
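
To illustrate, here is a minimal sketch of such a guard in the resource's delete path, assuming the Go SDK's apierr and dashboards packages, a *databricks.WorkspaceClient named w, and the usual ctx/d resource plumbing (illustrative only, not necessarily the exact code merged in this PR):

err := w.Lakeview.Trash(ctx, dashboards.TrashDashboardRequest{DashboardId: d.Id()})
if errors.Is(err, apierr.ErrPermissionDenied) {
	// Trashing an already-trashed dashboard returns 403, so confirm the state
	// before treating the error as success.
	dashboard, getErr := w.Lakeview.GetByDashboardId(ctx, d.Id())
	if getErr != nil {
		return getErr
	}
	if dashboard.LifecycleState == dashboards.LifecycleStateTrashed {
		return nil // already trashed out of band; nothing left to delete
	}
}
return err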

Tests

  • `make test` run locally
  • relevant change in `docs/` folder
  • covered with integration tests in `internal/acceptance`
  • relevant acceptance tests are passing
  • using Go SDK

@pietern pietern requested review from a team as code owners November 15, 2024 21:04
@pietern pietern requested review from parthban-db and removed request for a team November 15, 2024 21:04

If integration tests don't run automatically, an authorized user can run them manually by following the instructions below:

Trigger:
go/deco-tests-run/terraform

Inputs:

  • PR number: 4235
  • Commit SHA: 16503fc763686a0e86e0ace80c365f9e6c2a8534

Checks will be approved automatically on success.

@eng-dev-ecosystem-bot (Collaborator)

Test Details: go/deco-tests/11863430999

@pietern pietern changed the title Fix deletion of dashboard if it was trashed out of band [Fix] Fix deletion of dashboard if it was trashed out of band Nov 15, 2024
@pietern pietern enabled auto-merge November 15, 2024 21:09
// If the dashboard was already trashed, we'll get a 403 (Permission Denied) error.
// There may be other cases where we get a 403, so we first confirm that the
// dashboard state is actually trashed, and if so, return success.
if errors.Is(err, apierr.ErrPermissionDenied) {
Review comment (Contributor):

Thanks for the PR. Curious: should we also check for 404, or is 403 the only response when a dashboard is already trashed? Mentioning this since the API doc https://docs.databricks.com/api/workspace/lakeview/trash lists 404 as one of the responses.

Reply from @pietern (Contributor, Author):

It will return 404 if it really doesn't exist (not even in the trash).

Good point, though. This fix works around the issue at deletion time, but we should also treat trashed as a 404 at read time so that if it was trashed, the next plan will show creation instead of recreation.
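
As a rough sketch of that read-time treatment, assuming an SDK v2 style Read callback and the same Go SDK types as in the sketch above (illustrative names, not the provider's actual implementation):

dashboard, err := w.Lakeview.GetByDashboardId(ctx, d.Id())
if err != nil {
	return err
}
if dashboard.LifecycleState == dashboards.LifecycleStateTrashed {
	// Treat a trashed dashboard as deleted so the next plan proposes creation.
	d.SetId("")
	return nil
}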

@pietern pietern added this pull request to the merge queue Nov 16, 2024
Merged via the queue into main with commit 27ff289 Nov 16, 2024
14 of 16 checks passed
@pietern pietern deleted the dashboard-trash-delete branch November 16, 2024 00:08
pietern added a commit that referenced this pull request Nov 20, 2024
hectorcast-db added a commit that referenced this pull request Nov 20, 2024
### New Features and Improvements

 * Add `databricks_mws_network_connectivity_config` and `databricks_mws_network_connectivity_configs` data source ([#3665](#3665)).
 * Add support partitions in policy data sources ([#4181](#4181)).
 * Added `databricks_registered_model_versions` data source ([#4100](#4100)).
 * Update databricks_permissions resource to support vector-search-endpoints ([#4209](#4209)).
 * add `databricks_serving_endpoints` data source ([#4226](#4226)).

### Bug Fixes

 * Add validation for `run_as_mode` in `databricks_query` ([#4233](#4233)).
 * Correct handling of updates for empty comments and `force_destroy` in UC catalog, schema, registered models and volumes ([#4244](#4244)).
 * Fix deletion of dashboard if it was trashed out of band ([#4235](#4235)).
 * Fix waiting for `databricks_vector_search_index` readiness ([#4243](#4243)).
 * Remove single-node validation from interactive clusters ([#4222](#4222)).
 * Remove single-node validation from jobs clusters ([#4216](#4216)).
 * Use cluster list API to determine pinned cluster status ([#4203](#4203)).
 * fix issue caused by setting pause_status in update monitor ([#4242](#4242)).

### Documentation

 * Clarify workspace provider config ([#4208](#4208)).
 * Update "Databricks Workspace Creator" permissions on gcp-workspace.md ([#4201](#4201)).
 * Update `grants.md` references ([#4246](#4246)).
 * Update description of `group_id` in `databricks_mws_ncc_private_endpoint_rule` ([#4238](#4238)).
 * remove subnet sharing limitation in AWS ([#4239](#4239)).

### Internal Changes

 * Bump Go SDK to latest and generate TF structs ([#4249](#4249)).
 * Mark TestUcAccModelServingProvisionedThroughput as flaky. to be rever… ([#4232](#4232)).
 * Rename resources directory to products in pluginframework ([#4139](#4139)).
 * Revert "mark TestUcAccModelServingProvisionedThroughput as flaky. to … ([#4240](#4240)).
 * Set user agent in some resources implemented in plugin framework ([#4187](#4187)).
 * make `ApplyAndExpectData` work with nested set ([#4237](#4237)).

### Dependency Updates

 * Bump dependencies for Plugin Framework and SDK v2 ([#4215](#4215)).
 * Bump github.com/hashicorp/hcl/v2 from 2.22.0 to 2.23.0 ([#4236](#4236)).
 * Bump github.com/hashicorp/terraform-plugin-testing from 1.10.0 to 1.11.0 ([#4247](#4247)).

### Exporter

 * Add `List` operation for `users` service ([#4204](#4204)).
 * Fix interactive selection of services ([#4245](#4245)).
hectorcast-db added a commit that referenced this pull request Nov 20, 2024
github-merge-queue bot pushed a commit that referenced this pull request Nov 20, 2024
## Changes

This is a follow-up to #4235, where the deletion of a trashed dashboard
was fixed. This change treats trashed dashboards as deleted at read
time, such that the resource shows up in the plan as new instead of a
delete+create.

## Tests

- [x] `make test` run locally
- [ ] relevant change in `docs/` folder
- [ ] covered with integration tests in `internal/acceptance`
- [ ] relevant acceptance tests are passing
- [ ] using Go SDK
github-merge-queue bot pushed a commit that referenced this pull request Nov 20, 2024