
Added support for Unity Catalog databricks_metastores data source #2017

Merged: 49 commits into databricks:master on Jul 19, 2023

Conversation

guillesd
Contributor

@guillesd guillesd commented Feb 17, 2023

Addresses #1651

This PR enables users to retrieve existing metastores, which allows them to look up the ID of a metastore by name. That ID can then be used to assign a metastore to a workspace, without hardcoding an ID for different workspace deployments.

Unit tests passing
Integration test passing
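The lookup this PR enables can be sketched in plain Go. This is a hedged illustration only: `MetastoreInfo` and its fields below are stand-ins for the API response shape, not the provider's actual types.

```go
package main

import "fmt"

// MetastoreInfo mirrors a subset of the fields returned when listing
// metastores; the field names here are assumptions for illustration.
type MetastoreInfo struct {
	Name        string
	MetastoreID string
}

// metastoreIDByName returns the ID of the metastore with the given name,
// or an error if no such metastore is in the listing.
func metastoreIDByName(metastores []MetastoreInfo, name string) (string, error) {
	for _, m := range metastores {
		if m.Name == name {
			return m.MetastoreID, nil
		}
	}
	return "", fmt.Errorf("metastore %q not found", name)
}

func main() {
	listed := []MetastoreInfo{
		{Name: "primary", MetastoreID: "abc-123"},
		{Name: "dev", MetastoreID: "def-456"},
	}
	id, err := metastoreIDByName(listed, "primary")
	fmt.Println(id, err) // abc-123 <nil>
}
```

This is the step that replaces a hardcoded ID in multi-workspace deployments: resolve by name once, then feed the ID into the metastore assignment.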

@nkvuong
Contributor

nkvuong commented Feb 17, 2023

@guillesd we should wait until the account API for Unity Catalog is available before implementing this data source.

for this PR, you also need to add the mapping to provider.go

@guillesd
Contributor Author

Hi @nkvuong! As discussed in issue #1651, every Unity Catalog endpoint uses the Workspace API, so I don't see why we shouldn't make this data source available. Even the databricks_metastore resource uses the Workspace API.

@guillesd
Contributor Author

@nkvuong I added the mapping to provider.go


This data source exports the following attributes:

* `metastores` - list of [databricks_metastore](../resources/share.md)
Contributor

it's pointing to the wrong file

Comment on lines 19 to 33
data.Name = append(data.Name, v.name)
data.StorageRoot = append(data.StorageRoot, v.storage_root)
data.DefaultDacID = append(data.DefaultDacID, v.default_data_access_config_id)
data.Owner = append(data.Owner, v.owner)
data.MetastoreID = append(data.MetastoreID, v.metastore_id)
data.Region = append(data.Region, v.region)
data.Cloud = append(data.Cloud, v.cloud)
data.GlobalMetastoreId = append(data.GlobalMetastoreId, v.global_metastore_id)
data.CreatedAt = append(data.CreatedAt, v.created_at)
data.CreatedBy = append(data.CreatedBy, v.created_by)
data.UpdatedAt = append(data.UpdatedAt, v.updated_at)
data.UpdatedBy = append(data.UpdatedBy, v.updated_by)
data.DeltaSharingScope = append(data.DeltaSharingScope, v.delta_sharing_scope)
data.DeltaSharingRecipientTokenLifetimeInSeconds = append(data.DeltaSharingRecipientTokenLifetimeInSeconds, v.delta_sharing_recipient_token_lifetime_in_seconds)
data.DeltaSharingOrganizationName = append(data.DeltaSharingOrganizationName, v.delta_sharing_organization_name)
Contributor

What does the data look like in use? I don't see where you expose the metastores attribute...

catalog/data_metastores_test.go (outdated; resolved)
@nfx
Contributor

nfx commented Feb 20, 2023

SDK is merged in #1848, please update the code accordingly

@guillesd
Contributor Author

@alexott I fixed:

* test/linting errors (sorry for this, my bad)
* wrong reference in docs

I also added:

* a specific data test

@nfx not sure what you mean by "SDK is merged update code accordingly"

@nfx
Contributor

nfx commented Feb 20, 2023

@guillesd it means that you have to use https://pkg.go.dev/github.com/databricks/databricks-sdk-go@v0.3.2/service/unitycatalog#MetastoresAPI

e.g.

w, err := c.WorkspaceClient()
..
w.Metastores....

@guillesd
Contributor Author

Ok @nfx, so if I understand correctly, all the API functions with this type of structure:

func (a MetastoresAPI) listMetastores() (mis Metastores, err error) {
	err = a.client.Get(a.context, "/unity-catalog/metastores", nil, &mis)
	return
}

will be replaced with their SDK counterpart?

@nfx
Contributor

nfx commented Feb 21, 2023

will be replaced with their SDK counterpart?

@guillesd that's correct

@guillesd
Contributor Author

Hi @nfx @alexott!

I made the requested change to implement this new data source using the databricks-go-sdk. I only did this for my specific change (the data_metastores.go data source). There are some tests failing that are unrelated to my PR.

I recommend opening an issue to update the contributing guide, since it still points developers to the old way (creating the functions that call the API endpoints).

Could you review and approve?

@nfx
Contributor

nfx commented Mar 17, 2023

@guillesd all tests must pass

nfx
nfx previously requested changes Mar 17, 2023

func DataSourceMetastores() *schema.Resource {
type MetastoresData struct {
Metastores []unitycatalog.MetastoreInfo `json:"metastores,omitempty" tf:"computed"`
Contributor

shouldn't we just return the list of metastore IDs and add a databricks_metastore data source to complement this one?

Contributor Author

I've seen that this is indeed done with other data sources, but to me it's not great for the user, since they would need two resource blocks to get the information for their metastores (first data_metastores, then data_metastore passing the specific ID of the metastore).
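As background, the `tf:"computed"` tag in the quoted `MetastoresData` snippet is read via reflection when the provider builds its Terraform schema. A minimal, simplified sketch of that tag-reading pattern (this is not the provider's actual schema builder, and the element type is simplified to string):

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// MetastoresData mirrors the quoted snippet, with the slice element
// type simplified for illustration.
type MetastoresData struct {
	Metastores []string `json:"metastores,omitempty" tf:"computed"`
}

// computedFields returns the JSON names of struct fields tagged tf:"computed".
func computedFields(v any) []string {
	var out []string
	t := reflect.TypeOf(v)
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		if f.Tag.Get("tf") == "computed" {
			// The json tag may carry options like ",omitempty"; keep the name only.
			out = append(out, strings.Split(f.Tag.Get("json"), ",")[0])
		}
	}
	return out
}

func main() {
	fmt.Println(computedFields(MetastoresData{})) // [metastores]
}
```

Marking the whole list computed is what lets the data source expose API-populated values without requiring user input.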

catalog/resource_metastore.go (outdated; resolved)
@codecov-commenter

codecov-commenter commented Mar 17, 2023

Codecov Report

Merging #2017 (c81d646) into master (b52eed9) will decrease coverage by 0.01%.
The diff coverage is 85.71%.


@@            Coverage Diff             @@
##           master    #2017      +/-   ##
==========================================
- Coverage   88.13%   88.13%   -0.01%     
==========================================
  Files         143      144       +1     
  Lines       11903    11917      +14     
==========================================
+ Hits        10491    10503      +12     
- Misses        941      942       +1     
- Partials      471      472       +1     
Impacted Files                 Coverage Δ
catalog/resource_metastore.go  100.00% <ø> (ø)
catalog/data_metastores.go     84.61% <84.61%> (ø)
provider/provider.go           94.04% <100.00%> (+0.03%) ⬆️

@guillesd
Contributor Author

guillesd commented Mar 17, 2023

@nfx the tests do pass in the CI job, so unless you think there are any open points that need to be addressed, we are good to go!

Contributor

@mgyucht mgyucht left a comment

Let's double check with UC team how they feel about data sources implemented with accounts client only before going forward with this.

@adamcain-db

Let's double check with UC team how they feel about data sources implemented with accounts client only before going forward with this.

On behalf of the UC team, we are quite in favor of the Terraform provider using the account-level UC Metastore CRUD/assign APIs instead of the workspace-level UC Metastore APIs. Thanks!

@tanmay-db tanmay-db requested a review from mgyucht June 30, 2023 17:01
Contributor

@mgyucht mgyucht left a comment

Almost good to go, but let's improve the integration test.

func TestUcAccDataSourceMetastore(t *testing.T) {
accountLevel(t, step{
Template: `
data "databricks_metastores" "this" {}
Contributor

This is a reasonable integration test for the "no metastores" case. Can we add an integration test for the "one or more metastores" case?

Contributor

@mgyucht not sure how easy it is to test both scenarios in a single account, unless we add a filter option to this data source

Contributor

We are going to add another data source for a specific metastore that contains the full information. Creating a specific metastore and getting its details will be part of the integration test for that data source. Marking this as non-blocking for this data source, since we have permanent metastores in our account. Also verified locally by looking at the results of using the resource.

Contributor

Check has been added in the test

@nkvuong
Contributor

nkvuong commented Jul 7, 2023

Account-level APIs for UC now work without the additional header 🎉

Acceptance tests passed

Contributor

@nkvuong nkvuong left a comment

it would make more sense to return a list of name -> id mappings; otherwise the id by itself is not very useful
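The name -> id mapping suggested here can be sketched as follows (field names are assumptions mirroring the API response):

```go
package main

import "fmt"

// MetastoreInfo holds the two fields relevant to the mapping; the names
// are illustrative, not the provider's exact types.
type MetastoreInfo struct {
	Name        string
	MetastoreID string
}

// idsByName builds the name -> id mapping, so callers can index
// metastores by name directly instead of scanning a list of bare IDs.
func idsByName(metastores []MetastoreInfo) map[string]string {
	out := make(map[string]string, len(metastores))
	for _, m := range metastores {
		out[m.Name] = m.MetastoreID
	}
	return out
}

func main() {
	m := idsByName([]MetastoreInfo{
		{Name: "primary", MetastoreID: "abc-123"},
		{Name: "dev", MetastoreID: "def-456"},
	})
	fmt.Println(m["primary"]) // abc-123
}
```

In Terraform terms, a map attribute lets users write a single expression to go from name to ID.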

@tanmay-db tanmay-db requested a review from nkvuong July 18, 2023 17:03
docs/data-sources/metastores.md (outdated; resolved)
@tanmay-db tanmay-db dismissed nfx’s stale review July 19, 2023 14:11

Received approvals; the requested changes were from a long time ago, and the integration test is passing.

@tanmay-db tanmay-db merged commit 0a23d5e into databricks:master Jul 19, 2023
nkvuong added a commit that referenced this pull request Jul 26, 2023
…2017)

* add documentation for databricks_metastores data source

* add API endpoint for listing metastores

* add metastores data resource

* add test for metastores data source

* add metastores datasource to resource mapping

* fix reference to wrong resource docs

* add a Metastores struct for the response of the API, use this in the data source

* update terraform specific object attributes

* add new data test

* remove slice_set property from MetastoreData

* use databricks-go-sdk for data_metastore.go

* removed listMetastores endpoint since it's unused

* make sure tests also use the unitycatalog.MetastoreInfo from the sdk

* remove redundant resource

* test -dev

* fix

* fmt

* cleanup

* Added AccountClient to DatabricksClient and AccountData

* upd

* cleanup

* accountLevel

* upd

* add example

* list

* cleanup

* docs

* remove dead code

* wip

* use maps

* upd

* cleanup

* comments

* -

* remove redundant test

---------

Co-authored-by: Tanmay Rustagi <tanmay.rustagi@databricks.com>
Co-authored-by: vuong-nguyen <44292934+nkvuong@users.noreply.github.com>
@mgyucht mgyucht mentioned this pull request Aug 1, 2023
github-merge-queue bot pushed a commit that referenced this pull request Aug 24, 2023
* first draft

* account client check

* account client check

* Fixed `databricks_service_principals` data source issue with empty filter (#2185)

* fix `databricks_service_principals` data source issue with empty filter

* fix acc tests

* Allow rotating `token` block in `databricks_mws_workspaces` resource by only changing `comment` field (#2114)

Tested manually for the following cases

Without this PR the provider recreates the entire workspace on a token update
With changes in this PR only the token is refreshed
When both token and storage_configuration_id are changed then the entire workspace is recreated
Additional unit tests also added that allow checks that patch workspace calls are not made when only token is changed

Also added an integration test to check tokens are successfully updated

* Excludes roles in scim API list calls to reduce load on databricks scim service (#2181)

* Exclude roles in scim API list calls

* more test fixes

* Update SDK to v0.6.0 (#2186)

* Update SDK to v0.6.0

* go mod tidy

* update sdk to 0.7.0

* add integration tests

* fix acceptance tests

* fix tests

* add account-level API support for `metastore_data_access`

* add account API support for `databricks_storage_credential`

* address feedback

* refactor to `WorkspaceOrAccountRequest`

* fix acceptance tests

* Release v1.21.0 (#2471)

Release v1.21.0 of the Terraform Provider for Databricks.

## Changes
 * Added condition_task to the [`databricks_job`](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) resource (private preview) ([#2459](#2459)).
 * Added `AccountData`, `AccountClient` and define generic databricks data utilites for defining workspace and account-level data sources ([#2429](#2429)).
 * Added documentation link to existing Databricks Terraform modules ([#2439](#2439)).
 * Added experimental compute field to [databricks_job](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) resource ([#2401](#2401)).
 * Added import example to doc for [databricks_group_member](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/group_member) resource ([#2453](#2453)).
 * Added support for subscriptions in dashboards & alert SQL tasks in [databricks_job](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) ([#2447](#2447)).
 * Fixed model serving integration test ([#2460](#2460), [#2461](#2461)).
 * Fixed [databricks_job](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) resource file arrival trigger parameter name ([#2438](#2438)).
 * Fixed catalog_workspace_binding_test ([#2463](#2463), [#2451](#2451)).

No breaking changes in this release.

* Install mlflow cluster using  in model serving test if the cluster is already running (#2470)

* Bump golang.org/x/mod from 0.11.0 to 0.12.0 (#2462)

Bumps [golang.org/x/mod](https://github.com/golang/mod) from 0.11.0 to 0.12.0.
- [Commits](golang/mod@v0.11.0...v0.12.0)

---
updated-dependencies:
- dependency-name: golang.org/x/mod
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Exporter: make resource names more unique to avoid duplicate resources errors (#2452)

This includes following changes:

* Add user ID to the `databricks_user` resource name to avoid clashes on names like, `user+1` and `user_1`
* Add user/sp/group ID to the name of the `databricks_group_member` resource
* Remove too aggressive name normalization pattern that also leads to the generation of
  the duplicate resource names for different resources

* Add documentation notes about legacy cluster type & data access (#2437)

* Add documentation notes about legacy cluster type & data access

* Update docs/resources/cluster.md

Co-authored-by: Miles Yucht <miles@databricks.com>

* Update docs/resources/mount.md

Co-authored-by: Miles Yucht <miles@databricks.com>

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Use random catalog name in SQL table integration tests (#2473)

The fixed value prevented concurrent integration test runs.

* Link model serving docs to top level README (#2474)

* Add one more item to the troubleshooting guide (#2477)

It's related to use OAuth for authentication but not providing `account_id` in the
provider configuration.

* Added `databricks_access_control_rule_set` resource for managing account-level access (#2371)

* Added `acl_principal_id` attribute to `databricks_user`, `databricks_group` & `databricks_service_principal` for easier use with `databricks_access_control_rule_set` (#2485)

It should simplify specification of principals in the `databricks_access_control_rule_set`
so instead of this (string with placeholders):

```
   grant_rules {
     principals = ["groups/${databricks_group.ds.display_name}"]
     role       = "roles/servicePrincipal.user"
   }
```

it will be simpler to refer like this:

```
   grant_rules {
     principals = [databricks_group.ds.acl_principal_id]
     role       = "roles/servicePrincipal.user"
   }
```

* Added support for Unity Catalog `databricks_metastores` data source  (#2017)


* Added support for Unity Catalog `databricks_metastore` data source (#2492)

Enable fetching account level metastore information through id for a single metastore.

* Supported new Delve binary name format (#2497)

https://github.com/go-delve/delve/blob/master/CHANGELOG.md#1210-2023-06-23 changes the naming of the delve debug binary. This PR changes isInDebug to accommodate old and new versions of Delve.

* Add code owners for Terraform (#2498)

* Removed unused dlvLoadConfig configuration from settings.json (#2499)

* Fix provider after updating SDK to 0.13 (#2494)

* Fix provider after updating SDK to 0.13

* add unit test

* split test

* Added `control_run_state` flag to the `databricks_job` resource for continuous jobs (#2466)

This PR introduces a new flag, control_run_state, to replace the always_running flag. This flag only applies to continuous jobs. Its behavior is described below:

For jobs with pause_status = PAUSED, it is a no-op on create and stops the active job run on update (if applicable).
For jobs with pause_status = UNPAUSED, it starts a job run on create and stops the active job run on update (if applicable).
The job does not need to be started, as that is handled by the Jobs service itself.

This fixes #2130.

* Added exporter for `databricks_workspace_file` resource (#2493)

* Preliminary changes to make workspace files implementation

- make `NotebooksAPI.List` to return directories as well when called in the recursive
  mode (same as non-recursive behavior)
- Because of that, remove the separate `ListDirectories`
- Extend `workspace.ObjectStatus` with additional fields (will be required for
  incremental notebooks export)
- Cache listing of all workspace objects, and then use it for all operations - list
  notebooks, list directories, list workspace files

* Added exporting of workspace files

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Supported boolean values in `databricks_sql_alert` alerts (#2506)

* Added more common issues for troubleshooting (#2486)

* add troubleshooting

* fix doc category

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Fixed handling of comments in `databricks_sql_table` resource (#2472)

* column comments and single quote escape

* Delimiter collision avoidance table comment

* compatible with user single quote escape

* unit tests for parseComment

* corrected fmt

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Added clarification that `databricks_schema` and `databricks_sql_table` should be imported by their full name, not just by name (#2491)

Co-authored-by: Miles Yucht <miles@databricks.com>

* Updated `databricks_user` with `force = true` to check for error message prefix (#2510)

This fixes #2500

* fix force delete

* remove orphaned code

* fix acceptance tests

* upgrade go sdk

* fix metastoreinfo struct

* docs update

* fix acceptance tests

* fix tests

* updated docs

* fix tests

* rename test

* update tests

* fix tests

* fix test

* add state upgrader

* fix struct

* fix tests

* feedback

* feedback

* fix acc test

* fix test

* fix test

* fix test

* feedback

* fix acc tests

* feedback

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
Co-authored-by: Serge Smertin <259697+nfx@users.noreply.github.com>
Co-authored-by: Miles Yucht <miles@databricks.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Alex Ott <alexey.ott@databricks.com>
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
Co-authored-by: Gautham Sunjay <gauthamsunjay17@gmail.com>
Co-authored-by: guillesd <74136033+guillesd@users.noreply.github.com>
Co-authored-by: Tanmay Rustagi <tanmay.rustagi@databricks.com>
Co-authored-by: Tanmay Rustagi <88379306+tanmay-db@users.noreply.github.com>
Co-authored-by: Fabian Jakobs <fabian.jakobs@databricks.com>
Co-authored-by: klus <lus.karol@gmail.com>
nkvuong added a commit that referenced this pull request Sep 7, 2023
nkvuong added a commit that referenced this pull request Oct 3, 2023
* first draft

* account client check

* account client check

* Fixed `databricks_service_principals` data source issue with empty filter (#2185)

* fix `databricks_service_principals` data source issue with empty filter

* fix acc tests

* Allow rotating `token` block in `databricks_mws_workspaces` resource by only changing `comment` field (#2114)

Tested manually for the following cases:

* Without this PR, the provider recreates the entire workspace on a token update
* With changes in this PR, only the token is refreshed
* When both `token` and `storage_configuration_id` are changed, the entire workspace is recreated

Additional unit tests check that patch-workspace calls are not made when only the token is changed. Also added an integration test to check that tokens are successfully updated.
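The rotation flow described above can be sketched in HCL roughly as follows (a minimal illustration; all arguments other than the `token` block are placeholders, not taken from this PR):

```
resource "databricks_mws_workspaces" "this" {
  # ... other workspace arguments unchanged ...

  token {
    # changing only the comment rotates the token in place,
    # without recreating the workspace
    comment = "rotated 2023-04"
  }
}
```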

* Excludes roles in scim API list calls to reduce load on databricks scim service (#2181)

* Exclude roles in scim API list calls

* more test fixes

* Update SDK to v0.6.0 (#2186)

* Update SDK to v0.6.0

* go mod tidy

* update sdk to 0.7.0

* add integration tests

* fix acceptance tests

* fix tests

* add account-level API support for `metastore_data_access`

* add account API support for `databricks_storage_credential`

* address feedback

* refactor to `WorkspaceOrAccountRequest`

* fix acceptance tests

* Release v1.21.0 (#2471)

Release v1.21.0 of the Terraform Provider for Databricks.

## Changes
 * Added condition_task to the [`databricks_job`](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) resource (private preview) ([#2459](#2459)).
 * Added `AccountData`, `AccountClient` and define generic databricks data utilites for defining workspace and account-level data sources ([#2429](#2429)).
 * Added documentation link to existing Databricks Terraform modules ([#2439](#2439)).
 * Added experimental compute field to [databricks_job](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) resource ([#2401](#2401)).
 * Added import example to doc for [databricks_group_member](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/group_member) resource ([#2453](#2453)).
 * Added support for subscriptions in dashboards & alert SQL tasks in [databricks_job](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) ([#2447](#2447)).
 * Fixed model serving integration test ([#2460](#2460), [#2461](#2461)).
 * Fixed [databricks_job](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/job) resource file arrival trigger parameter name ([#2438](#2438)).
 * Fixed catalog_workspace_binding_test ([#2463](#2463), [#2451](#2451)).

No breaking changes in this release.

* Install mlflow cluster using  in model serving test if the cluster is already running (#2470)

* Bump golang.org/x/mod from 0.11.0 to 0.12.0 (#2462)

Bumps [golang.org/x/mod](https://github.com/golang/mod) from 0.11.0 to 0.12.0.
- [Commits](golang/mod@v0.11.0...v0.12.0)

---
updated-dependencies:
- dependency-name: golang.org/x/mod
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Exporter: make resource names more unique to avoid duplicate resources errors (#2452)

This includes the following changes:

* Add user ID to the `databricks_user` resource name to avoid clashes on names like, `user+1` and `user_1`
* Add user/sp/group ID to the name of the `databricks_group_member` resource
* Remove too aggressive name normalization pattern that also leads to the generation of
  the duplicate resource names for different resources

* Add documentation notes about legacy cluster type & data access (#2437)

* Add documentation notes about legacy cluster type & data access

* Update docs/resources/cluster.md

Co-authored-by: Miles Yucht <miles@databricks.com>

* Update docs/resources/mount.md

Co-authored-by: Miles Yucht <miles@databricks.com>

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Use random catalog name in SQL table integration tests (#2473)

The fixed value prevented concurrent integration test runs.

* Link model serving docs to top level README (#2474)

* Add one more item to the troubleshooting guide (#2477)

It's related to using OAuth for authentication without providing `account_id` in the
provider configuration.
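A provider configuration that avoids this issue might look like the sketch below (the IDs and variable names are hypothetical, and the exact host depends on your cloud):

```
provider "databricks" {
  host       = "https://accounts.cloud.databricks.com"
  account_id = "00000000-0000-0000-0000-000000000000" # required with account-level OAuth

  # OAuth client credentials of a service principal
  client_id     = var.databricks_client_id
  client_secret = var.databricks_client_secret
}
```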

* Added `databricks_access_control_rule_set` resource for managing account-level access (#2371)

* Added `acl_principal_id` attribute to `databricks_user`, `databricks_group` & `databricks_service_principal` for easier use with `databricks_access_control_rule_set` (#2485)

It should simplify specification of principals in the `databricks_access_control_rule_set`
so instead of this (string with placeholders):

```
   grant_rules {
     principals = ["groups/${databricks_group.ds.display_name}"]
     role       = "roles/servicePrincipal.user"
   }
```

it is simpler to refer to the principal like this:

```
   grant_rules {
     principals = [databricks_group.ds.acl_principal_id]
     role       = "roles/servicePrincipal.user"
   }
```

* Added support for Unity Catalog `databricks_metastores` data source  (#2017)

* add documentation for databricks_metastores data source

* add API endpoint for listing metastores

* add metastores data resource

* add test for metastores data source

* add metastores datasource to resource mapping

* fix reference to wrong resource docs

* add a Metastores struct for the response of the API, use this in the dataSource

* update terraform specific object attributes

* add new data test

* remove slice_set property from MetastoreData

* use databricks-go-sdk for data_metastore.go

* removed listMetastores endpoint since it's unused

* make sure tests also use the unitycatalog.MetastoreInfo from the sdk

* remove redundant resource

* test -dev

* fix

* fmt

* cleanup

* Added AccountClient to DatabricksClient and AccountData

* upd

* cleanup

* accountLevel

* upd

* add example

* list

* cleanup

* docs

* remove dead code

* wip

* use maps

* upd

* cleanup

* comments

* -

* remove redundant test

---------

Co-authored-by: Tanmay Rustagi <tanmay.rustagi@databricks.com>
Co-authored-by: vuong-nguyen <44292934+nkvuong@users.noreply.github.com>
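Usage of the new data source can be sketched as below (a minimal example; the `ids` attribute, a map of metastore name to metastore ID, is assumed here as the shape of the result):

```
data "databricks_metastores" "all" {}

output "metastore_ids" {
  # map of metastore name => metastore id
  value = data.databricks_metastores.all.ids
}
```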

* Added support for Unity Catalog `databricks_metastore` data source (#2492)

Enable fetching account level metastore information through id for a single metastore.
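A minimal usage sketch, assuming the data source takes a `metastore_id` argument and exposes the metastore's properties (the ID below is hypothetical):

```
data "databricks_metastore" "this" {
  metastore_id = "11111111-2222-3333-4444-555555555555" # hypothetical ID
}
```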

* Supported new Delve binary name format (#2497)

https://github.com/go-delve/delve/blob/master/CHANGELOG.md#1210-2023-06-23 changes the naming of the delve debug binary. This PR changes isInDebug to accommodate old and new versions of Delve.

* Add code owners for Terraform (#2498)

* Removed unused dlvLoadConfig configuration from settings.json (#2499)

* Fix provider after updating SDK to 0.13 (#2494)

* Fix provider after updating SDK to 0.13

* add unit test

* split test

* Added `control_run_state` flag to the `databricks_job` resource for continuous jobs (#2466)

This PR introduces a new flag, control_run_state, to replace the always_running flag. This flag only applies to continuous jobs. Its behavior is described below:

* For jobs with `pause_status = PAUSED`, it is a no-op on create and stops the active job run on update (if applicable).
* For jobs with `pause_status = UNPAUSED`, it starts a job run on create and stops the active job run on update (if applicable).

The job does not need to be started, as that is handled by the Jobs service itself.

This fixes #2130.
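For a continuous job, the flag described above would be used roughly like this (a sketch; the job name and task details are placeholders):

```
resource "databricks_job" "this" {
  name              = "continuous-ingest"
  control_run_state = true # stop the active run when the job spec changes

  continuous {
    pause_status = "UNPAUSED" # start a run on create
  }

  task {
    task_key = "main"
    # task definition (notebook, cluster, etc.) goes here
  }
}
```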

* Added exporter for `databricks_workspace_file` resource (#2493)

* Preliminary changes to make workspace files implementation

- make `NotebooksAPI.List` return directories as well when called in recursive
  mode (same as the non-recursive behavior)
- Because of that, remove the separate `ListDirectories`
- Extend `workspace.ObjectStatus` with additional fields (will be required for
  incremental notebooks export)
- Cache listing of all workspace objects, and then use it for all operations - list
  notebooks, list directories, list workspace files

* Added exporting of workspace files

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Supported boolean values in `databricks_sql_alert` alerts (#2506)

* Added more common issues for troubleshooting (#2486)

* add troubleshooting

* fix doc category

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Fixed handling of comments in `databricks_sql_table` resource (#2472)

* column comments and single quote escape

* Delimiter collision avoidance table comment

* compatible with user single quote escape

* unit tests for parseComment

* corrected fmt

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Added clarification that `databricks_schema` and `databricks_sql_table` should be imported by their full name, not just by name (#2491)

Co-authored-by: Miles Yucht <miles@databricks.com>

* Updated `databricks_user` with `force = true` to check for error message prefix (#2510)

This fixes #2500

* fix force delete

* remove orphaned code

* fix acceptance tests

* upgrade go sdk

* fix metastoreinfo struct

* docs update

* fix acceptance tests

* fix tests

* updated docs

* fix tests

* rename test

* update tests

* fix tests

* fix test

* add state upgrader

* fix struct

* fix tests

* feedback

* feedback

* fix acc test

* fix test

* fix test

* fix test

* feedback

* fix acc tests

* feedback

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
Co-authored-by: Serge Smertin <259697+nfx@users.noreply.github.com>
Co-authored-by: Miles Yucht <miles@databricks.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Alex Ott <alexey.ott@databricks.com>
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
Co-authored-by: Gautham Sunjay <gauthamsunjay17@gmail.com>
Co-authored-by: guillesd <74136033+guillesd@users.noreply.github.com>
Co-authored-by: Tanmay Rustagi <tanmay.rustagi@databricks.com>
Co-authored-by: Tanmay Rustagi <88379306+tanmay-db@users.noreply.github.com>
Co-authored-by: Fabian Jakobs <fabian.jakobs@databricks.com>
Co-authored-by: klus <lus.karol@gmail.com>
github-merge-queue bot pushed a commit that referenced this pull request Dec 18, 2023
* refactor to support account-level ip acl

* add doc

* add acceptance tests

* Refactor `databricks_schema` to Go SDK (#2572)

* refactor `databricks_schema` to Go SDK

* clean up

* Refactor `databricks_external_location` to Go SDK (#2546)

* refactor external location to Go SDK

* keep force_new

* switch struct to sdk

* Refresh `databricks_grants` with latest permissible grants (#2567)

* refresh latest grants

* fix test

* fix typo

* add test

* Fixed databricks_access_control_rule_set integration test in Azure (#2591)

* Update Go SDK to v0.17.0 (#2599)

* Update Go SDK to 0.17.0

* go mod tidy

* RunJobTask job_id type fix (#2588)

* fix run job id bug

* update tests

* update exporter tests

* Add `databricks_connection` resource to support Lakehouse Federation (#2528)

* first draft

* add foreign catalog

* update doc

* Fixed `databricks_job` resource to clear instance-specific attributes when `instance_pool_id` is specified (#2507)

NodeTypeID cannot be set in jobsAPI.Update() if InstancePoolID is specified.
If both are specified, assume InstancePoolID takes precedence and NodeTypeID is only computed.

Closes #2502.
Closes #2141.

* Added `full_refresh` attribute to the `pipeline_task` in `databricks_job` (#2444)

This allows forcing a full refresh of the pipeline from the job.

This fixes #2362
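In job terms, the new attribute sits on the pipeline task, roughly as in this sketch (the pipeline reference is a placeholder):

```
resource "databricks_job" "this" {
  name = "dlt-refresh"

  task {
    task_key = "refresh_dlt"

    pipeline_task {
      pipeline_id  = databricks_pipeline.this.id
      full_refresh = true # force a full refresh instead of an incremental update
    }
  }
}
```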

* Configured merge queue for the provider (#2533)

* misc doc updates (#2516)

* Bump github.com/databricks/databricks-sdk-go from 0.13.0 to 0.14.1 (#2523)

Bumps [github.com/databricks/databricks-sdk-go](https://github.com/databricks/databricks-sdk-go) from 0.13.0 to 0.14.1.
- [Release notes](https://github.com/databricks/databricks-sdk-go/releases)
- [Changelog](https://github.com/databricks/databricks-sdk-go/blob/main/CHANGELOG.md)
- [Commits](databricks/databricks-sdk-go@v0.13.0...v0.14.1)

---
updated-dependencies:
- dependency-name: github.com/databricks/databricks-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Miles Yucht <miles@databricks.com>

* Fix IP ACL read (#2515)

* Add support for `USE_MARKETPLACE_ASSETS` privilege to metastore (#2505)

* Update docs to include USE_MARKETPLACE_ASSETS privilege

* Add USE_MARKETPLACE_ASSETS to metastore privileges

* Add git job_source to job resource (#2538)

* Add git job_source to job resource

* lint

* fix test

* Use go sdk type

* Allow search SQL Warehouses by name in `databricks_sql_warehouse` data source (#2458)

* Allow search SQL Warehouses by name in `databricks_sql_warehouse` data source

Right now it's possible to search only by the warehouse ID, which is not always convenient,
although the same result is achievable with the `databricks_sql_warehouses` data source plus
explicit filtering. This PR adds the capability to search by either SQL warehouse name or ID.

This fixes #2443
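With this change, a lookup by name becomes a one-liner (sketch; the warehouse name is a placeholder):

```
data "databricks_sql_warehouse" "this" {
  name = "Shared Warehouse"
}

# reference data.databricks_sql_warehouse.this.id elsewhere in the configuration
```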

* Update docs/data-sources/sql_warehouse.md

Co-authored-by: Miles Yucht <miles@databricks.com>

* Address review comments

also change documentation a bit to better match the data source - it was copied from the
resource as-is.

* More fixes from review

* code review comments

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Late jobs support (aka health conditions) in `databricks_job` resource (#2496)

* Late jobs support (aka health conditions) in `databricks_job` resource

Added support for the `health` block that is used to detect late jobs. This PR also includes
the following changes:

* Added `on_duration_warning_threshold_exceeded` attribute to email & webhook notifications (needed for late jobs support)
* Added `notification_settings` on a task level & use jobs & task notification structs from Go SDK
* Reorganized documentation for task block as it's getting more & more attributes
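A `health` block as described above looks roughly like this (a sketch; the threshold, job name, and notification address are illustrative):

```
resource "databricks_job" "this" {
  name = "nightly-etl"

  health {
    rules {
      metric = "RUN_DURATION_SECONDS"
      op     = "GREATER_THAN"
      value  = 3600 # warn when a run exceeds one hour
    }
  }

  email_notifications {
    on_duration_warning_threshold_exceeded = ["ops@example.com"]
  }

  task {
    task_key = "main"
    # task definition goes here
  }
}
```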

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Update docs/resources/job.md

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* address review comments

* add list of tasks

* more review changes

---------

Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>
Co-authored-by: Miles Yucht <miles@databricks.com>

* feedback

* update struct

* add suppress diff

* fix suppress diff

* fix acceptance tests

* test feedback

* make id a pair

* better sensitive options handling

* reorder id pair

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: marekbrysa <53767523+marekbrysa@users.noreply.github.com>
Co-authored-by: Alex Ott <alexey.ott@databricks.com>
Co-authored-by: Miles Yucht <miles@databricks.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: bvdboom <bvdboom@users.noreply.github.com>
Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>

* Documentation changes (#2576)

* add troubleshooting guide for grants/permissions config drifts

* update authentication in index

* feedback

* Exporter: Incremental export of notebooks, SQL objects and some other resources (#2563)

* First pass on incremental export of notebooks/files & SQL objects

* use optional modified_at field

* Add `-incremental` & `-update-since` command-line options

* Incremental generation of `import.sh` and `vars.tf`

* Incremental export of the TF resources themselves

* Added incremental support for the rest of the objects

also updated documentation & compatibility matrix

* Add tests for incremental generation

* Add clarification about periodic full export

* Store last run timestamp in the file on disk and use with `-incremental`

* Address initial review comments

* Incremental export of MLflow Webhooks, expanded tests to cover merge of variables

* fix test

* Fix reflection method marshallJSON for CMK in mws workspace (#2605)

* Fix reflection method marshallJSON for CMK in mws workspace

* Add UT

* Add missing documentation for CMK support on GCP (#2604)

* add CMK on GCP to docs

* feedback

* Add `owner` parameter to `databricks_share` resource (#2594)

* add `owner` parameter to `databricks_share`

* suppress diff

* Exporter: command-line option to control output format for notebooks (#2569)

New command-line option `-notebooksFormat` allows exporting notebooks in DBC and IPython formats.

This fixes #2568

* Fix creation of views with comments using `databricks_sql_table` (#2589)

* mark column type as omitempty

* add acc test

* escape names for sql

* add test & suppress_diff

* fix tests

* fix acc test

* update doc

* fix acc tests

* fix doc

* Add account-level API support for Unity Catalog objects (#2182)

* Fix UC acceptance test (#2613)

* fix acc test

* remove deprecated field from sdk

* Release v1.24.0 (#2614)

* Release v1.24.0

* Release v1.24.0

* Release v1.24.0

* Release v1.24.0

* Fixed verification of workspace reachability by using scim/me which is always available  (#2618)

* add flag to skip verification

* cleanup

* cleanup

* -

* test

* test

* test

* test

* -

* -

* Release v1.24.1 (#2625)

* Release v1.24.1

* go upd

* new line

* Add doc strings for ResourceFixtures (#2633)

* Add doc strings for ResourceFixtures

* fmt

* Update qa/testing.go

Co-authored-by: Miles Yucht <miles@databricks.com>

* Update qa/testing.go

Co-authored-by: Miles Yucht <miles@databricks.com>

---------

Co-authored-by: Miles Yucht <miles@databricks.com>

* Bump github.com/hashicorp/hcl/v2 from 2.17.0 to 2.18.0 (#2636)

Bumps [github.com/hashicorp/hcl/v2](https://github.com/hashicorp/hcl) from 2.17.0 to 2.18.0.
- [Release notes](https://github.com/hashicorp/hcl/releases)
- [Changelog](https://github.com/hashicorp/hcl/blob/main/CHANGELOG.md)
- [Commits](hashicorp/hcl@v2.17.0...v2.18.0)

---
updated-dependencies:
- dependency-name: github.com/hashicorp/hcl/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* terrafmt; updated share and share recipient docs (#2641)

* update documentation (#2644)

* fix response

* remove preview path

* rename test

* fix create call

* add wait for acceptance tests

* fix test

* feedback

* use golang struct

* add wait time for acc test

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Miles Yucht <miles@databricks.com>
Co-authored-by: Krishna Swaroop K <krishna.swaroop@databricks.com>
Co-authored-by: marekbrysa <53767523+marekbrysa@users.noreply.github.com>
Co-authored-by: Alex Ott <alexey.ott@databricks.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: bvdboom <bvdboom@users.noreply.github.com>
Co-authored-by: shreyas-goenka <88374338+shreyas-goenka@users.noreply.github.com>
Co-authored-by: Gabor Ratky <gabor.ratky@databricks.com>
Co-authored-by: Rohan Kabra <rohan.kabra@databricks.com>
Co-authored-by: Serge Smertin <259697+nfx@users.noreply.github.com>
Co-authored-by: Pieter Noordhuis <pieter.noordhuis@databricks.com>
Co-authored-by: Gautham Sunjay <gauthamsunjay17@gmail.com>
Co-authored-by: guillesd <74136033+guillesd@users.noreply.github.com>
Co-authored-by: Tanmay Rustagi <tanmay.rustagi@databricks.com>
Co-authored-by: Tanmay Rustagi <88379306+tanmay-db@users.noreply.github.com>
Co-authored-by: Fabian Jakobs <fabian.jakobs@databricks.com>
Co-authored-by: klus <lus.karol@gmail.com>
Co-authored-by: Oleh Mykolaishyn <owlleg6@gmail.com>