[Fix] Tolerate `databricks_workspace_conf` deletion failures #3737

Conversation
workspace/resource_workspace_conf.go (outdated diff)

    	tflog.Warn(ctx, fmt.Sprintf("Failed to delete databricks_workspace_conf: %s. The workspace configuration may not be fully restored to its original state. For more information, see %s", apiErr.Message, docs.ResourceDocumentationUrl("workspace_conf")))
    	return nil
    }
    return err
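For context, here is a minimal sketch of how this "warn and report success" branch could sit inside the resource's delete path. The package layout, imports, function name, and the error condition are assumptions, not the PR's literal diff; the real code presumably narrows the tolerated failures to specific error codes for invalid keys or values, which is not visible in this hunk. `resourceDocumentationURL` is a stand-in for the internal docs helper mentioned later in the description.

```go
// Sketch only, under the assumptions stated above.
package workspace

import (
	"context"
	"errors"
	"fmt"

	"github.com/databricks/databricks-sdk-go/apierr"
	"github.com/hashicorp/terraform-plugin-log/tflog"
)

// resourceDocumentationURL stands in for the internal docs helper added in
// this PR; the real package name and URL format may differ.
func resourceDocumentationURL(resource string) string {
	return "https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/" + resource
}

// tolerateDeleteError converts an API rejection of the delete-time reset into
// a warning plus a successful delete, instead of failing the destroy.
func tolerateDeleteError(ctx context.Context, err error) error {
	var apiErr *apierr.APIError
	if errors.As(err, &apiErr) {
		tflog.Warn(ctx, fmt.Sprintf(
			"Failed to delete databricks_workspace_conf: %s. "+
				"The workspace configuration may not be fully restored to its original state. "+
				"For more information, see %s",
			apiErr.Message, resourceDocumentationURL("workspace_conf")))
		return nil
	}
	return err
}
```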
Do you know if this could lead to a partial update, or if the conf API rejects everything when the payload contains at least one invalid setting?
I have the same question - if everything is rejected, should we set each setting separately?
IIUC, you're asking whether, if partial updates succeed for most settings but fail for a few, we could restore most settings to their original state and minimize manual work for customers. If the API partially applies the update and only fails for some settings, then we already get this desired behavior. Otherwise, the error message will include the settings that the customer needs to check manually. I will double-check with the backing team to understand what actually happens here.
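A hypothetical sketch of the per-key fallback the reviewers discuss above (not part of this PR): if the conf API rejects a batched update outright, each setting could be reset in its own request so that a single invalid key or value does not block the rest. The `setStatus` callback is a stand-in for whatever API call the provider actually uses; only the keys that still fail would need to be surfaced to the user.

```go
// Hypothetical fallback, not code from this PR.
package workspace

import "context"

// applyPerKey applies each setting separately via setStatus and returns the
// keys that still failed, so they can be reported for manual review.
func applyPerKey(ctx context.Context, setStatus func(context.Context, map[string]string) error, desired map[string]string) []string {
	var failed []string
	for key, value := range desired {
		if err := setStatus(ctx, map[string]string{key: value}); err != nil {
			failed = append(failed, key)
		}
	}
	return failed
}
```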
### New Features and Improvements

* Added `databricks_notification_destination` resource ([#3820](#3820)).
* Added support for `cloudflare_api_token` in `databricks_storage_credential` resource ([#3835](#3835)).
* Add `active` attribute to `databricks_user` data source ([#3733](#3733)).
* Add `workspace_path` attribute to `databricks_notebook` resource and data source ([#3885](#3885)).
* Mark attributes as sensitive in `databricks_mlflow_webhook` ([#3825](#3825)).

### Bug Fixes

* Automatically assign `IS_OWNER` permission to sql warehouse if not specified ([#3829](#3829)).
* Corrected kms arn format in `data_aws_unity_catalog_policy` ([#3823](#3823)).
* Fix crash when destroying `databricks_compliance_security_profile_workspace_setting` ([#3883](#3883)).
* Fixed read method of `databricks_entitlements` resource ([#3858](#3858)).
* Retry cluster update on "INVALID_STATE" ([#3890](#3890)).
* Save Pipeline resource to state in addition to spec ([#3869](#3869)).
* Tolerate `databricks_workspace_conf` deletion failures ([#3737](#3737)).
* Update Go SDK ([#3826](#3826)).
* cluster key update for `databricks_sql_table` should not force new ([#3824](#3824)).
* reading `databricks_metastore_assignment` when importing resource ([#3827](#3827)).

### Documentation

* Add troubleshooting instructions for `databricks OAuth is not supported for this host` error ([#3815](#3815)).
* Clarify setting of permissions for workspace objects ([#3884](#3884)).
* Document missing task attributes in `databricks_job` resource ([#3817](#3817)).
* Fixed documentation for `databricks_schemas` data source and `databricks_metastore_assignment` resource ([#3851](#3851)).
* clarified `spot_bid_max_price` option for `databricks_cluster` ([#3830](#3830)).
* marked `databricks_sql_dashboard` as legacy ([#3836](#3836)).

### Internal Changes

* Refactor exporter: split huge files into smaller ones ([#3870](#3870)).
* Refactored `client.ClientForHost` to use Go SDK method ([#3735](#3735)).
* Revert "Rewriting DLT pipelines using SDK" ([#3838](#3838)).
* Rewrite DLT pipelines using SDK ([#3839](#3839)).
* Rewriting DLT pipelines using SDK ([#3792](#3792)).
* Update Go SDK ([#3808](#3808)).
* refactored `databricks_mws_permission_assignment` to Go SDK ([#3831](#3831)).

### Dependency Updates

* Bump databricks-sdk-go to 0.44.0 ([#3896](#3896)).
* Bump github.com/zclconf/go-cty from 1.14.4 to 1.15.0 ([#3775](#3775)).

### Exporter

* Add retry on "Operation timed out" error ([#3897](#3897)).
* Add support for Vector Search assets ([#3828](#3828)).
* Add support for `databricks_notification_destination` ([#3861](#3861)).
* Add support for `databricks_online_table` ([#3816](#3816)).
* Don't export model serving endpoints with foundational models ([#3845](#3845)).
* Fix generation of `autotermination_minutes = 0` ([#3881](#3881)).
* Generate `databricks_workspace_binding` instead of legacy `databricks_catalog_workspace_binding` ([#3812](#3812)).
* Ignore DLT pipelines deployed via DABs ([#3857](#3857)).
* Improve exporting of `databricks_model_serving` ([#3821](#3821)).
* Refactoring: remove legacy code ([#3864](#3864)).
Changes
Currently, deleting `databricks_workspace_conf` can fail in some cases. The provider makes a best-effort attempt to reset config values to an "original" state by setting all boolean fields to `false` and all string fields to `""`. However, the provider isn't aware of three key bits of information: the set of allowed keys in the workspace configuration, the types of values accepted for those keys, and the default values for new workspaces (the last of which can vary from workspace to workspace).

As a result, for each key it tries to interpret the current value as a boolean, setting the key to `false` on delete if that succeeds, and otherwise resetting the value to `""`. This can fail: if a user has specified a value like `TRue`, the provider may not recognize it as a boolean, so it tries to reset the value to `""`, and the update is rejected.

This PR adds two layers of protection around deletion. First, it lower-cases the value before checking whether it is a boolean, to handle case-sensitivity issues with the values in `workspace_conf`. Second, if the configuration update fails due to invalid key types, the provider prints a warning message guiding users to our docs page to help them review their workspace config, but considers the resource to be successfully deleted.
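A minimal sketch of the delete-time reset described above; the function name and structure are illustrative, not the provider's actual code. Values that parse as booleans after lower-casing are reset to `false`, everything else to `""`.

```go
// Illustrative sketch only.
package workspace

import (
	"strconv"
	"strings"
)

// resetValues computes the "original" values to send on delete: booleans
// (recognized case-insensitively) become "false", all other values become "".
func resetValues(current map[string]string) map[string]string {
	reset := make(map[string]string, len(current))
	for key, value := range current {
		if _, err := strconv.ParseBool(strings.ToLower(value)); err == nil {
			// e.g. "TRue" lower-cases to "true", which ParseBool accepts.
			reset[key] = "false"
		} else {
			reset[key] = ""
		}
	}
	return reset
}
```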
As part of this, I added a small internal module to provide links to docs pages when the provider needs to print them in log messages.
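A sketch of what such an internal docs-link helper might look like, based on the `docs.ResourceDocumentationUrl("workspace_conf")` call shown in the diff. The package layout and the registry URL format are assumptions; the real helper may build its links differently.

```go
// Assumed shape of the internal docs helper; not the PR's actual code.
package docs

import "fmt"

// ResourceDocumentationUrl returns a link to the documentation page for a
// given provider resource, suitable for embedding in log messages.
func ResourceDocumentationUrl(resourceName string) string {
	return fmt.Sprintf("https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/%s", resourceName)
}
```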
Resolves #3316, #1802.
Tests

- `make test` run locally
- relevant change in `docs/` folder
- covered with integration tests in `internal/acceptance`