From 5662914b5357ff401c83d587d7ec3d9f1c8fa438 Mon Sep 17 00:00:00 2001
From: Serge Smertin
Date: Mon, 17 Jul 2023 18:52:41 +0200
Subject: [PATCH] Release 0.2.0

* Add Issue Templates ([#208](https://github.com/databricks/databricks-sdk-py/pull/208)).
* Fixed notebook native auth for jobs ([#209](https://github.com/databricks/databricks-sdk-py/pull/209)).
* Replace `datatime.timedelta()` with `datetime.timedelta()` in codebase ([#207](https://github.com/databricks/databricks-sdk-py/pull/207)).
* Support dod in python sdk ([#212](https://github.com/databricks/databricks-sdk-py/pull/212)).
* [DECO-1115] Add local implementation for `dbutils.widgets` ([#93](https://github.com/databricks/databricks-sdk-py/pull/93)).
* Fix error message, ExportFormat -> ImportFormat ([#220](https://github.com/databricks/databricks-sdk-py/pull/220)).
* Regenerate Python SDK using recent OpenAPI Specification ([#229](https://github.com/databricks/databricks-sdk-py/pull/229)).
* Make workspace client also return runtime dbutils when in dbr ([#210](https://github.com/databricks/databricks-sdk-py/pull/210)).
* Use .ConstantName defining target enum states for waiters ([#230](https://github.com/databricks/databricks-sdk-py/pull/230)).
* Fix enum deserialization ([#234](https://github.com/databricks/databricks-sdk-py/pull/234)).
* Fix enum deserialization, take 2 ([#235](https://github.com/databricks/databricks-sdk-py/pull/235)).
* Added toolchain configuration to `.codegen.json` ([#236](https://github.com/databricks/databricks-sdk-py/pull/236)).
* Make OpenAPI spec location configurable ([#237](https://github.com/databricks/databricks-sdk-py/pull/237)).

API Changes:

 * Added `update()` method for [w.tables](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/tables.html) workspace-level service.
 * Added `databricks.sdk.service.catalog.UpdateTableRequest` dataclass.
 * Added `schema` field for `databricks.sdk.service.iam.PartialUpdate`.
 * Added `databricks.sdk.service.iam.PatchSchema` dataclass.
 * Added `trigger_info` field for `databricks.sdk.service.jobs.BaseRun`.
 * Added `health` field for `databricks.sdk.service.jobs.CreateJob`.
 * Added `job_source` field for `databricks.sdk.service.jobs.GitSource`.
 * Added `on_duration_warning_threshold_exceeded` field for `databricks.sdk.service.jobs.JobEmailNotifications`.
 * Added `health` field for `databricks.sdk.service.jobs.JobSettings`.
 * Added `trigger_info` field for `databricks.sdk.service.jobs.Run`.
 * Added `run_job_output` field for `databricks.sdk.service.jobs.RunOutput`.
 * Added `run_job_task` field for `databricks.sdk.service.jobs.RunTask`.
 * Added `email_notifications` field for `databricks.sdk.service.jobs.SubmitRun`.
 * Added `health` field for `databricks.sdk.service.jobs.SubmitRun`.
 * Added `email_notifications` field for `databricks.sdk.service.jobs.SubmitTask`.
 * Added `health` field for `databricks.sdk.service.jobs.SubmitTask`.
 * Added `notification_settings` field for `databricks.sdk.service.jobs.SubmitTask`.
 * Added `health` field for `databricks.sdk.service.jobs.Task`.
 * Added `run_job_task` field for `databricks.sdk.service.jobs.Task`.
 * Added `on_duration_warning_threshold_exceeded` field for `databricks.sdk.service.jobs.TaskEmailNotifications`.
 * Added `on_duration_warning_threshold_exceeded` field for `databricks.sdk.service.jobs.WebhookNotifications`.
 * Added `databricks.sdk.service.jobs.JobSource` dataclass.
 * Added `databricks.sdk.service.jobs.JobSourceDirtyState` dataclass.
 * Added `databricks.sdk.service.jobs.JobsHealthMetric` dataclass.
 * Added `databricks.sdk.service.jobs.JobsHealthOperator` dataclass.
 * Added `databricks.sdk.service.jobs.JobsHealthRule` dataclass.
 * Added `databricks.sdk.service.jobs.JobsHealthRules` dataclass.
 * Added `databricks.sdk.service.jobs.RunJobOutput` dataclass.
 * Added `databricks.sdk.service.jobs.RunJobTask` dataclass.
 * Added `databricks.sdk.service.jobs.TriggerInfo` dataclass.
 * Added `databricks.sdk.service.jobs.WebhookNotificationsOnDurationWarningThresholdExceededItem` dataclass.
 * Removed `whl` field for `databricks.sdk.service.pipelines.PipelineLibrary`.
 * Changed `delete_personal_compute_setting()` method for [a.account_settings](https://databricks-sdk-py.readthedocs.io/en/latest/account/account_settings.html) account-level service with new required argument order.
 * Changed `read_personal_compute_setting()` method for [a.account_settings](https://databricks-sdk-py.readthedocs.io/en/latest/account/account_settings.html) account-level service with new required argument order.
 * Changed `etag` field for `databricks.sdk.service.settings.DeletePersonalComputeSettingRequest` to be required.
 * Changed `etag` field for `databricks.sdk.service.settings.ReadPersonalComputeSettingRequest` to be required.
 * Added [w.clean_rooms](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/clean_rooms.html) workspace-level service.
 * Added `databricks.sdk.service.sharing.CentralCleanRoomInfo` dataclass.
 * Added `databricks.sdk.service.sharing.CleanRoomAssetInfo` dataclass.
 * Added `databricks.sdk.service.sharing.CleanRoomCatalog` dataclass.
 * Added `databricks.sdk.service.sharing.CleanRoomCatalogUpdate` dataclass.
 * Added `databricks.sdk.service.sharing.CleanRoomCollaboratorInfo` dataclass.
 * Added `databricks.sdk.service.sharing.CleanRoomInfo` dataclass.
 * Added `databricks.sdk.service.sharing.CleanRoomNotebookInfo` dataclass.
 * Added `databricks.sdk.service.sharing.CleanRoomTableInfo` dataclass.
 * Added `databricks.sdk.service.sharing.ColumnInfo` dataclass.
 * Added `databricks.sdk.service.sharing.ColumnMask` dataclass.
 * Added `databricks.sdk.service.sharing.ColumnTypeName` dataclass.
 * Added `databricks.sdk.service.sharing.CreateCleanRoom` dataclass.
 * Added `databricks.sdk.service.sharing.DeleteCleanRoomRequest` dataclass.
 * Added `databricks.sdk.service.sharing.GetCleanRoomRequest` dataclass.
 * Added `databricks.sdk.service.sharing.ListCleanRoomsResponse` dataclass.
 * Added `databricks.sdk.service.sharing.UpdateCleanRoom` dataclass.
 * Changed `query` field for `databricks.sdk.service.sql.Alert` to `databricks.sdk.service.sql.AlertQuery` dataclass.
 * Changed `value` field for `databricks.sdk.service.sql.AlertOptions` to `any` dataclass.
 * Removed `is_db_admin` field for `databricks.sdk.service.sql.User`.
 * Removed `profile_image_url` field for `databricks.sdk.service.sql.User`.
 * Added `databricks.sdk.service.sql.AlertQuery` dataclass.
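
As a sketch of how the new job-health surface fits together (this example is not part of the patch; the job name, notebook path, cluster ID, and email address are placeholders):

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()

created = w.jobs.create(
    name='health-rules-example',
    tasks=[
        jobs.Task(task_key='main',
                  existing_cluster_id='0123-456789-abcdefgh',  # placeholder cluster ID
                  notebook_task=jobs.NotebookTask(notebook_path='/Users/me/example'))
    ],
    # New in 0.2.0: evaluate a health rule when any run of this job
    # exceeds a ten-minute duration threshold.
    health=jobs.JobsHealthRules(rules=[
        jobs.JobsHealthRule(metric=jobs.JobsHealthMetric.RUN_DURATION_SECONDS,
                            op=jobs.JobsHealthOperator.GREATER_THAN,
                            value=600)
    ]),
    # Also new in 0.2.0: email recipients for the duration warning.
    email_notifications=jobs.JobEmailNotifications(
        on_duration_warning_threshold_exceeded=['ops@example.com']))

print(created.job_id)
```

The same `health` argument is accepted by `submit()` for one-off runs, mirroring the `SubmitRun` and `SubmitTask` field additions above.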
OpenAPI SHA: e20d2b10a181b1e865716de25f42e86d7e3f0270, Date: 2023-07-17 --- CHANGELOG.md | 79 +++++++++++++++ databricks/sdk/service/compute.py | 8 +- databricks/sdk/service/iam.py | 2 +- databricks/sdk/service/jobs.py | 130 +++++++++++++++++++++++-- databricks/sdk/service/pipelines.py | 5 +- databricks/sdk/service/provisioning.py | 6 +- databricks/sdk/service/sql.py | 4 +- databricks/sdk/version.py | 2 +- docs/account/account-billing.rst | 6 +- docs/account/account-catalog.rst | 6 +- docs/account/account-iam.rst | 6 +- docs/account/account-oauth2.rst | 6 +- docs/account/account-provisioning.rst | 6 +- docs/account/account-settings.rst | 6 +- docs/account/groups.rst | 12 ++- docs/account/index.rst | 6 +- docs/account/service_principals.rst | 4 +- docs/account/settings.rst | 40 +++++--- docs/account/users.rst | 6 +- docs/workspace/alerts.rst | 6 +- docs/workspace/clusters.rst | 2 +- docs/workspace/command_execution.rst | 6 +- docs/workspace/dashboards.rst | 3 +- docs/workspace/groups.rst | 12 ++- docs/workspace/index.rst | 6 +- docs/workspace/instance_profiles.rst | 18 ++-- docs/workspace/jobs.rst | 28 ++++-- docs/workspace/policy_families.rst | 51 +++++++++- docs/workspace/queries.rst | 18 ++-- docs/workspace/service_principals.rst | 4 +- docs/workspace/serving_endpoints.rst | 14 +-- docs/workspace/tables.rst | 16 +++ docs/workspace/users.rst | 6 +- docs/workspace/workspace-catalog.rst | 6 +- docs/workspace/workspace-compute.rst | 6 +- docs/workspace/workspace-files.rst | 6 +- docs/workspace/workspace-iam.rst | 6 +- docs/workspace/workspace-jobs.rst | 6 +- docs/workspace/workspace-ml.rst | 6 +- docs/workspace/workspace-pipelines.rst | 6 +- docs/workspace/workspace-serving.rst | 6 +- docs/workspace/workspace-settings.rst | 6 +- docs/workspace/workspace-sharing.rst | 7 +- docs/workspace/workspace-sql.rst | 6 +- docs/workspace/workspace-workspace.rst | 6 +- 45 files changed, 450 insertions(+), 153 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index e36659872..fe78eaacf 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,84 @@ # Version changelog +## 0.2.0 + +* Add Issue Templates ([#208](https://github.com/databricks/databricks-sdk-py/pull/208)). +* Fixed notebook native auth for jobs ([#209](https://github.com/databricks/databricks-sdk-py/pull/209)). +* Replace `datatime.timedelta()` with `datetime.timedelta()` in codebase ([#207](https://github.com/databricks/databricks-sdk-py/pull/207)). +* Support dod in python sdk ([#212](https://github.com/databricks/databricks-sdk-py/pull/212)). +* [DECO-1115] Add local implementation for `dbutils.widgets` ([#93](https://github.com/databricks/databricks-sdk-py/pull/93)). +* Fix error message, ExportFormat -> ImportFormat ([#220](https://github.com/databricks/databricks-sdk-py/pull/220)). +* Regenerate Python SDK using recent OpenAPI Specification ([#229](https://github.com/databricks/databricks-sdk-py/pull/229)). +* Make workspace client also return runtime dbutils when in dbr ([#210](https://github.com/databricks/databricks-sdk-py/pull/210)). +* Use .ConstantName defining target enum states for waiters ([#230](https://github.com/databricks/databricks-sdk-py/pull/230)). +* Fix enum deserialization ([#234](https://github.com/databricks/databricks-sdk-py/pull/234)). +* Fix enum deserialization, take 2 ([#235](https://github.com/databricks/databricks-sdk-py/pull/235)). +* Added toolchain configuration to `.codegen.json` ([#236](https://github.com/databricks/databricks-sdk-py/pull/236)). 
+* Make OpenAPI spec location configurable ([#237](https://github.com/databricks/databricks-sdk-py/pull/237)). + +API Changes: + + * Added `update()` method for [w.tables](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/tables.html) workspace-level service. + * Added `databricks.sdk.service.catalog.UpdateTableRequest` dataclass. + * Added `schema` field for `databricks.sdk.service.iam.PartialUpdate`. + * Added `databricks.sdk.service.iam.PatchSchema` dataclass. + * Added `trigger_info` field for `databricks.sdk.service.jobs.BaseRun`. + * Added `health` field for `databricks.sdk.service.jobs.CreateJob`. + * Added `job_source` field for `databricks.sdk.service.jobs.GitSource`. + * Added `on_duration_warning_threshold_exceeded` field for `databricks.sdk.service.jobs.JobEmailNotifications`. + * Added `health` field for `databricks.sdk.service.jobs.JobSettings`. + * Added `trigger_info` field for `databricks.sdk.service.jobs.Run`. + * Added `run_job_output` field for `databricks.sdk.service.jobs.RunOutput`. + * Added `run_job_task` field for `databricks.sdk.service.jobs.RunTask`. + * Added `email_notifications` field for `databricks.sdk.service.jobs.SubmitRun`. + * Added `health` field for `databricks.sdk.service.jobs.SubmitRun`. + * Added `email_notifications` field for `databricks.sdk.service.jobs.SubmitTask`. + * Added `health` field for `databricks.sdk.service.jobs.SubmitTask`. + * Added `notification_settings` field for `databricks.sdk.service.jobs.SubmitTask`. + * Added `health` field for `databricks.sdk.service.jobs.Task`. + * Added `run_job_task` field for `databricks.sdk.service.jobs.Task`. + * Added `on_duration_warning_threshold_exceeded` field for `databricks.sdk.service.jobs.TaskEmailNotifications`. + * Added `on_duration_warning_threshold_exceeded` field for `databricks.sdk.service.jobs.WebhookNotifications`. + * Added `databricks.sdk.service.jobs.JobSource` dataclass. + * Added `databricks.sdk.service.jobs.JobSourceDirtyState` dataclass. + * Added `databricks.sdk.service.jobs.JobsHealthMetric` dataclass. + * Added `databricks.sdk.service.jobs.JobsHealthOperator` dataclass. + * Added `databricks.sdk.service.jobs.JobsHealthRule` dataclass. + * Added `databricks.sdk.service.jobs.JobsHealthRules` dataclass. + * Added `databricks.sdk.service.jobs.RunJobOutput` dataclass. + * Added `databricks.sdk.service.jobs.RunJobTask` dataclass. + * Added `databricks.sdk.service.jobs.TriggerInfo` dataclass. + * Added `databricks.sdk.service.jobs.WebhookNotificationsOnDurationWarningThresholdExceededItem` dataclass. + * Removed `whl` field for `databricks.sdk.service.pipelines.PipelineLibrary`. + * Changed `delete_personal_compute_setting()` method for [a.account_settings](https://databricks-sdk-py.readthedocs.io/en/latest/account/account_settings.html) account-level service with new required argument order. + * Changed `read_personal_compute_setting()` method for [a.account_settings](https://databricks-sdk-py.readthedocs.io/en/latest/account/account_settings.html) account-level service with new required argument order. + * Changed `etag` field for `databricks.sdk.service.settings.DeletePersonalComputeSettingRequest` to be required. + * Changed `etag` field for `databricks.sdk.service.settings.ReadPersonalComputeSettingRequest` to be required. + * Added [w.clean_rooms](https://databricks-sdk-py.readthedocs.io/en/latest/workspace/clean_rooms.html) workspace-level service. + * Added `databricks.sdk.service.sharing.CentralCleanRoomInfo` dataclass. 
+ * Added `databricks.sdk.service.sharing.CleanRoomAssetInfo` dataclass. + * Added `databricks.sdk.service.sharing.CleanRoomCatalog` dataclass. + * Added `databricks.sdk.service.sharing.CleanRoomCatalogUpdate` dataclass. + * Added `databricks.sdk.service.sharing.CleanRoomCollaboratorInfo` dataclass. + * Added `databricks.sdk.service.sharing.CleanRoomInfo` dataclass. + * Added `databricks.sdk.service.sharing.CleanRoomNotebookInfo` dataclass. + * Added `databricks.sdk.service.sharing.CleanRoomTableInfo` dataclass. + * Added `databricks.sdk.service.sharing.ColumnInfo` dataclass. + * Added `databricks.sdk.service.sharing.ColumnMask` dataclass. + * Added `databricks.sdk.service.sharing.ColumnTypeName` dataclass. + * Added `databricks.sdk.service.sharing.CreateCleanRoom` dataclass. + * Added `databricks.sdk.service.sharing.DeleteCleanRoomRequest` dataclass. + * Added `databricks.sdk.service.sharing.GetCleanRoomRequest` dataclass. + * Added `databricks.sdk.service.sharing.ListCleanRoomsResponse` dataclass. + * Added `databricks.sdk.service.sharing.UpdateCleanRoom` dataclass. + * Changed `query` field for `databricks.sdk.service.sql.Alert` to `databricks.sdk.service.sql.AlertQuery` dataclass. + * Changed `value` field for `databricks.sdk.service.sql.AlertOptions` to `any` dataclass. + * Removed `is_db_admin` field for `databricks.sdk.service.sql.User`. + * Removed `profile_image_url` field for `databricks.sdk.service.sql.User`. + * Added `databricks.sdk.service.sql.AlertQuery` dataclass. + +OpenAPI SHA: e20d2b10a181b1e865716de25f42e86d7e3f0270, Date: 2023-07-17 + ## 0.1.12 * Beta release ([#198](https://github.com/databricks/databricks-sdk-py/pull/198)). diff --git a/databricks/sdk/service/compute.py b/databricks/sdk/service/compute.py index af2f6e6b6..28520e17b 100755 --- a/databricks/sdk/service/compute.py +++ b/databricks/sdk/service/compute.py @@ -209,8 +209,8 @@ def from_dict(cls, d: Dict[str, any]) -> 'CloudProviderNodeInfo': class CloudProviderNodeStatus(Enum): - NOT_AVAILABLE_IN_REGION = 'NotAvailableInRegion' - NOT_ENABLED_ON_SUBSCRIPTION = 'NotEnabledOnSubscription' + NOTAVAILABLEINREGION = 'NotAvailableInRegion' + NOTENABLEDONSUBSCRIPTION = 'NotEnabledOnSubscription' @dataclass @@ -3061,8 +3061,8 @@ class TerminationReasonCode(Enum): INVALID_SPARK_IMAGE = 'INVALID_SPARK_IMAGE' IP_EXHAUSTION_FAILURE = 'IP_EXHAUSTION_FAILURE' JOB_FINISHED = 'JOB_FINISHED' - K8S_AUTOSCALING_FAILURE = 'K8S_AUTOSCALING_FAILURE' - K8S_DBR_CLUSTER_LAUNCH_TIMEOUT = 'K8S_DBR_CLUSTER_LAUNCH_TIMEOUT' + KS_AUTOSCALING_FAILURE = 'K8S_AUTOSCALING_FAILURE' + KS_DBR_CLUSTER_LAUNCH_TIMEOUT = 'K8S_DBR_CLUSTER_LAUNCH_TIMEOUT' METASTORE_COMPONENT_UNHEALTHY = 'METASTORE_COMPONENT_UNHEALTHY' NEPHOS_RESOURCE_MANAGEMENT = 'NEPHOS_RESOURCE_MANAGEMENT' NETWORK_CONFIGURATION_FAILURE = 'NETWORK_CONFIGURATION_FAILURE' diff --git a/databricks/sdk/service/iam.py b/databricks/sdk/service/iam.py index abd2692fb..5240e6d36 100755 --- a/databricks/sdk/service/iam.py +++ b/databricks/sdk/service/iam.py @@ -537,7 +537,7 @@ class PatchOp(Enum): class PatchSchema(Enum): - URN_IETF_PARAMS_SCIM_API_MESSAGES20_PATCH_OP = 'urn:ietf:params:scim:api:messages:2.0:PatchOp' + URN_IETF_PARAMS_SCIM_API_MESSAGES_PATCHOP = 'urn:ietf:params:scim:api:messages:2.0:PatchOp' @dataclass diff --git a/databricks/sdk/service/jobs.py b/databricks/sdk/service/jobs.py index eb4a9f017..580405a4a 100755 --- a/databricks/sdk/service/jobs.py +++ b/databricks/sdk/service/jobs.py @@ -257,6 +257,7 @@ class CreateJob: email_notifications: Optional['JobEmailNotifications'] = 
None format: Optional['Format'] = None git_source: Optional['GitSource'] = None + health: Optional['JobsHealthRules'] = None job_clusters: Optional['List[JobCluster]'] = None max_concurrent_runs: Optional[int] = None name: Optional[str] = None @@ -279,6 +280,7 @@ def as_dict(self) -> dict: if self.email_notifications: body['email_notifications'] = self.email_notifications.as_dict() if self.format is not None: body['format'] = self.format.value if self.git_source: body['git_source'] = self.git_source.as_dict() + if self.health: body['health'] = self.health.as_dict() if self.job_clusters: body['job_clusters'] = [v.as_dict() for v in self.job_clusters] if self.max_concurrent_runs is not None: body['max_concurrent_runs'] = self.max_concurrent_runs if self.name is not None: body['name'] = self.name @@ -301,6 +303,7 @@ def from_dict(cls, d: Dict[str, any]) -> 'CreateJob': email_notifications=_from_dict(d, 'email_notifications', JobEmailNotifications), format=_enum(d, 'format', Format), git_source=_from_dict(d, 'git_source', GitSource), + health=_from_dict(d, 'health', JobsHealthRules), job_clusters=_repeated(d, 'job_clusters', JobCluster), max_concurrent_runs=d.get('max_concurrent_runs', None), name=d.get('name', None), @@ -498,14 +501,14 @@ class GetRunRequest: class GitProvider(Enum): - AWS_CODE_COMMIT = 'awsCodeCommit' - AZURE_DEV_OPS_SERVICES = 'azureDevOpsServices' - BITBUCKET_CLOUD = 'bitbucketCloud' - BITBUCKET_SERVER = 'bitbucketServer' - GIT_HUB = 'gitHub' - GIT_HUB_ENTERPRISE = 'gitHubEnterprise' - GIT_LAB = 'gitLab' - GIT_LAB_ENTERPRISE_EDITION = 'gitLabEnterpriseEdition' + AWSCODECOMMIT = 'awsCodeCommit' + AZUREDEVOPSSERVICES = 'azureDevOpsServices' + BITBUCKETCLOUD = 'bitbucketCloud' + BITBUCKETSERVER = 'bitbucketServer' + GITHUB = 'gitHub' + GITHUBENTERPRISE = 'gitHubEnterprise' + GITLAB = 'gitLab' + GITLABENTERPRISEEDITION = 'gitLabEnterpriseEdition' @dataclass @@ -625,6 +628,7 @@ def from_dict(cls, d: Dict[str, any]) -> 'JobCompute': @dataclass class JobEmailNotifications: no_alert_for_skipped_runs: Optional[bool] = None + on_duration_warning_threshold_exceeded: Optional['List[str]'] = None on_failure: Optional['List[str]'] = None on_start: Optional['List[str]'] = None on_success: Optional['List[str]'] = None @@ -633,6 +637,10 @@ def as_dict(self) -> dict: body = {} if self.no_alert_for_skipped_runs is not None: body['no_alert_for_skipped_runs'] = self.no_alert_for_skipped_runs + if self.on_duration_warning_threshold_exceeded: + body['on_duration_warning_threshold_exceeded'] = [ + v for v in self.on_duration_warning_threshold_exceeded + ] if self.on_failure: body['on_failure'] = [v for v in self.on_failure] if self.on_start: body['on_start'] = [v for v in self.on_start] if self.on_success: body['on_success'] = [v for v in self.on_success] @@ -641,6 +649,8 @@ def as_dict(self) -> dict: @classmethod def from_dict(cls, d: Dict[str, any]) -> 'JobEmailNotifications': return cls(no_alert_for_skipped_runs=d.get('no_alert_for_skipped_runs', None), + on_duration_warning_threshold_exceeded=d.get('on_duration_warning_threshold_exceeded', + None), on_failure=d.get('on_failure', None), on_start=d.get('on_start', None), on_success=d.get('on_success', None)) @@ -731,6 +741,7 @@ class JobSettings: email_notifications: Optional['JobEmailNotifications'] = None format: Optional['Format'] = None git_source: Optional['GitSource'] = None + health: Optional['JobsHealthRules'] = None job_clusters: Optional['List[JobCluster]'] = None max_concurrent_runs: Optional[int] = None name: Optional[str] = None @@ 
-751,6 +762,7 @@ def as_dict(self) -> dict: if self.email_notifications: body['email_notifications'] = self.email_notifications.as_dict() if self.format is not None: body['format'] = self.format.value if self.git_source: body['git_source'] = self.git_source.as_dict() + if self.health: body['health'] = self.health.as_dict() if self.job_clusters: body['job_clusters'] = [v.as_dict() for v in self.job_clusters] if self.max_concurrent_runs is not None: body['max_concurrent_runs'] = self.max_concurrent_runs if self.name is not None: body['name'] = self.name @@ -772,6 +784,7 @@ def from_dict(cls, d: Dict[str, any]) -> 'JobSettings': email_notifications=_from_dict(d, 'email_notifications', JobEmailNotifications), format=_enum(d, 'format', Format), git_source=_from_dict(d, 'git_source', GitSource), + health=_from_dict(d, 'health', JobsHealthRules), job_clusters=_repeated(d, 'job_clusters', JobCluster), max_concurrent_runs=d.get('max_concurrent_runs', None), name=d.get('name', None), @@ -816,6 +829,54 @@ class JobSourceDirtyState(Enum): NOT_SYNCED = 'NOT_SYNCED' +class JobsHealthMetric(Enum): + """Specifies the health metric that is being evaluated for a particular health rule.""" + + RUN_DURATION_SECONDS = 'RUN_DURATION_SECONDS' + + +class JobsHealthOperator(Enum): + """Specifies the operator used to compare the health metric value with the specified threshold.""" + + GREATER_THAN = 'GREATER_THAN' + + +@dataclass +class JobsHealthRule: + metric: Optional['JobsHealthMetric'] = None + op: Optional['JobsHealthOperator'] = None + value: Optional[int] = None + + def as_dict(self) -> dict: + body = {} + if self.metric is not None: body['metric'] = self.metric.value + if self.op is not None: body['op'] = self.op.value + if self.value is not None: body['value'] = self.value + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> 'JobsHealthRule': + return cls(metric=_enum(d, 'metric', JobsHealthMetric), + op=_enum(d, 'op', JobsHealthOperator), + value=d.get('value', None)) + + +@dataclass +class JobsHealthRules: + """An optional set of health rules that can be defined for this job.""" + + rules: Optional['List[JobsHealthRule]'] = None + + def as_dict(self) -> dict: + body = {} + if self.rules: body['rules'] = [v.as_dict() for v in self.rules] + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> 'JobsHealthRules': + return cls(rules=_repeated(d, 'rules', JobsHealthRule)) + + @dataclass class ListJobsRequest: """List jobs""" @@ -2068,6 +2129,7 @@ class SubmitRun: access_control_list: Optional['List[iam.AccessControlRequest]'] = None email_notifications: Optional['JobEmailNotifications'] = None git_source: Optional['GitSource'] = None + health: Optional['JobsHealthRules'] = None idempotency_token: Optional[str] = None notification_settings: Optional['JobNotificationSettings'] = None run_name: Optional[str] = None @@ -2081,6 +2143,7 @@ def as_dict(self) -> dict: body['access_control_list'] = [v.as_dict() for v in self.access_control_list] if self.email_notifications: body['email_notifications'] = self.email_notifications.as_dict() if self.git_source: body['git_source'] = self.git_source.as_dict() + if self.health: body['health'] = self.health.as_dict() if self.idempotency_token is not None: body['idempotency_token'] = self.idempotency_token if self.notification_settings: body['notification_settings'] = self.notification_settings.as_dict() if self.run_name is not None: body['run_name'] = self.run_name @@ -2094,6 +2157,7 @@ def from_dict(cls, d: Dict[str, any]) -> 
'SubmitRun': return cls(access_control_list=_repeated(d, 'access_control_list', iam.AccessControlRequest), email_notifications=_from_dict(d, 'email_notifications', JobEmailNotifications), git_source=_from_dict(d, 'git_source', GitSource), + health=_from_dict(d, 'health', JobsHealthRules), idempotency_token=d.get('idempotency_token', None), notification_settings=_from_dict(d, 'notification_settings', JobNotificationSettings), run_name=d.get('run_name', None), @@ -2123,6 +2187,7 @@ class SubmitTask: depends_on: Optional['List[TaskDependency]'] = None email_notifications: Optional['JobEmailNotifications'] = None existing_cluster_id: Optional[str] = None + health: Optional['JobsHealthRules'] = None libraries: Optional['List[compute.Library]'] = None new_cluster: Optional['compute.ClusterSpec'] = None notebook_task: Optional['NotebookTask'] = None @@ -2141,6 +2206,7 @@ def as_dict(self) -> dict: if self.depends_on: body['depends_on'] = [v.as_dict() for v in self.depends_on] if self.email_notifications: body['email_notifications'] = self.email_notifications.as_dict() if self.existing_cluster_id is not None: body['existing_cluster_id'] = self.existing_cluster_id + if self.health: body['health'] = self.health.as_dict() if self.libraries: body['libraries'] = [v.as_dict() for v in self.libraries] if self.new_cluster: body['new_cluster'] = self.new_cluster.as_dict() if self.notebook_task: body['notebook_task'] = self.notebook_task.as_dict() @@ -2161,6 +2227,7 @@ def from_dict(cls, d: Dict[str, any]) -> 'SubmitTask': depends_on=_repeated(d, 'depends_on', TaskDependency), email_notifications=_from_dict(d, 'email_notifications', JobEmailNotifications), existing_cluster_id=d.get('existing_cluster_id', None), + health=_from_dict(d, 'health', JobsHealthRules), libraries=_repeated(d, 'libraries', compute.Library), new_cluster=_from_dict(d, 'new_cluster', compute.ClusterSpec), notebook_task=_from_dict(d, 'notebook_task', NotebookTask), @@ -2185,6 +2252,7 @@ class Task: description: Optional[str] = None email_notifications: Optional['TaskEmailNotifications'] = None existing_cluster_id: Optional[str] = None + health: Optional['JobsHealthRules'] = None job_cluster_key: Optional[str] = None libraries: Optional['List[compute.Library]'] = None max_retries: Optional[int] = None @@ -2212,6 +2280,7 @@ def as_dict(self) -> dict: if self.description is not None: body['description'] = self.description if self.email_notifications: body['email_notifications'] = self.email_notifications.as_dict() if self.existing_cluster_id is not None: body['existing_cluster_id'] = self.existing_cluster_id + if self.health: body['health'] = self.health.as_dict() if self.job_cluster_key is not None: body['job_cluster_key'] = self.job_cluster_key if self.libraries: body['libraries'] = [v.as_dict() for v in self.libraries] if self.max_retries is not None: body['max_retries'] = self.max_retries @@ -2242,6 +2311,7 @@ def from_dict(cls, d: Dict[str, any]) -> 'Task': description=d.get('description', None), email_notifications=_from_dict(d, 'email_notifications', TaskEmailNotifications), existing_cluster_id=d.get('existing_cluster_id', None), + health=_from_dict(d, 'health', JobsHealthRules), job_cluster_key=d.get('job_cluster_key', None), libraries=_repeated(d, 'libraries', compute.Library), max_retries=d.get('max_retries', None), @@ -2280,12 +2350,17 @@ def from_dict(cls, d: Dict[str, any]) -> 'TaskDependency': @dataclass class TaskEmailNotifications: + on_duration_warning_threshold_exceeded: Optional['List[str]'] = None on_failure: 
Optional['List[str]'] = None on_start: Optional['List[str]'] = None on_success: Optional['List[str]'] = None def as_dict(self) -> dict: body = {} + if self.on_duration_warning_threshold_exceeded: + body['on_duration_warning_threshold_exceeded'] = [ + v for v in self.on_duration_warning_threshold_exceeded + ] if self.on_failure: body['on_failure'] = [v for v in self.on_failure] if self.on_start: body['on_start'] = [v for v in self.on_start] if self.on_success: body['on_success'] = [v for v in self.on_success] @@ -2293,7 +2368,9 @@ def as_dict(self) -> dict: @classmethod def from_dict(cls, d: Dict[str, any]) -> 'TaskEmailNotifications': - return cls(on_failure=d.get('on_failure', None), + return cls(on_duration_warning_threshold_exceeded=d.get('on_duration_warning_threshold_exceeded', + None), + on_failure=d.get('on_failure', None), on_start=d.get('on_start', None), on_success=d.get('on_success', None)) @@ -2470,12 +2547,18 @@ def from_dict(cls, d: Dict[str, any]) -> 'Webhook': @dataclass class WebhookNotifications: + on_duration_warning_threshold_exceeded: Optional[ + 'List[WebhookNotificationsOnDurationWarningThresholdExceededItem]'] = None on_failure: Optional['List[Webhook]'] = None on_start: Optional['List[Webhook]'] = None on_success: Optional['List[Webhook]'] = None def as_dict(self) -> dict: body = {} + if self.on_duration_warning_threshold_exceeded: + body['on_duration_warning_threshold_exceeded'] = [ + v.as_dict() for v in self.on_duration_warning_threshold_exceeded + ] if self.on_failure: body['on_failure'] = [v.as_dict() for v in self.on_failure] if self.on_start: body['on_start'] = [v.as_dict() for v in self.on_start] if self.on_success: body['on_success'] = [v.as_dict() for v in self.on_success] @@ -2483,11 +2566,28 @@ def as_dict(self) -> dict: @classmethod def from_dict(cls, d: Dict[str, any]) -> 'WebhookNotifications': - return cls(on_failure=_repeated(d, 'on_failure', Webhook), + return cls(on_duration_warning_threshold_exceeded=_repeated( + d, 'on_duration_warning_threshold_exceeded', + WebhookNotificationsOnDurationWarningThresholdExceededItem), + on_failure=_repeated(d, 'on_failure', Webhook), on_start=_repeated(d, 'on_start', Webhook), on_success=_repeated(d, 'on_success', Webhook)) +@dataclass +class WebhookNotificationsOnDurationWarningThresholdExceededItem: + id: Optional[str] = None + + def as_dict(self) -> dict: + body = {} + if self.id is not None: body['id'] = self.id + return body + + @classmethod + def from_dict(cls, d: Dict[str, any]) -> 'WebhookNotificationsOnDurationWarningThresholdExceededItem': + return cls(id=d.get('id', None)) + + class JobsAPI: """The Jobs API allows you to create, edit, and delete jobs. @@ -2588,6 +2688,7 @@ def create(self, email_notifications: Optional[JobEmailNotifications] = None, format: Optional[Format] = None, git_source: Optional[GitSource] = None, + health: Optional[JobsHealthRules] = None, job_clusters: Optional[List[JobCluster]] = None, max_concurrent_runs: Optional[int] = None, name: Optional[str] = None, @@ -2621,6 +2722,8 @@ def create(self, :param git_source: :class:`GitSource` (optional) An optional specification for a remote repository containing the notebooks used by this job's notebook tasks. + :param health: :class:`JobsHealthRules` (optional) + An optional set of health rules that can be defined for this job. :param job_clusters: List[:class:`JobCluster`] (optional) A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. 
You must declare dependent libraries in task settings. @@ -2681,6 +2784,7 @@ def create(self, email_notifications=email_notifications, format=format, git_source=git_source, + health=health, job_clusters=job_clusters, max_concurrent_runs=max_concurrent_runs, name=name, @@ -3299,6 +3403,7 @@ def submit(self, access_control_list: Optional[List[iam.AccessControlRequest]] = None, email_notifications: Optional[JobEmailNotifications] = None, git_source: Optional[GitSource] = None, + health: Optional[JobsHealthRules] = None, idempotency_token: Optional[str] = None, notification_settings: Optional[JobNotificationSettings] = None, run_name: Optional[str] = None, @@ -3320,6 +3425,8 @@ def submit(self, :param git_source: :class:`GitSource` (optional) An optional specification for a remote repository containing the notebooks used by this job's notebook tasks. + :param health: :class:`JobsHealthRules` (optional) + An optional set of health rules that can be defined for this job. :param idempotency_token: str (optional) An optional token that can be used to guarantee the idempotency of job run requests. If a run with the provided token already exists, the request does not create a new run but returns the ID of the @@ -3354,6 +3461,7 @@ def submit(self, request = SubmitRun(access_control_list=access_control_list, email_notifications=email_notifications, git_source=git_source, + health=health, idempotency_token=idempotency_token, notification_settings=notification_settings, run_name=run_name, @@ -3372,6 +3480,7 @@ def submit_and_wait( access_control_list: Optional[List[iam.AccessControlRequest]] = None, email_notifications: Optional[JobEmailNotifications] = None, git_source: Optional[GitSource] = None, + health: Optional[JobsHealthRules] = None, idempotency_token: Optional[str] = None, notification_settings: Optional[JobNotificationSettings] = None, run_name: Optional[str] = None, @@ -3382,6 +3491,7 @@ def submit_and_wait( return self.submit(access_control_list=access_control_list, email_notifications=email_notifications, git_source=git_source, + health=health, idempotency_token=idempotency_token, notification_settings=notification_settings, run_name=run_name, diff --git a/databricks/sdk/service/pipelines.py b/databricks/sdk/service/pipelines.py index e232ddf3e..465ec36b1 100755 --- a/databricks/sdk/service/pipelines.py +++ b/databricks/sdk/service/pipelines.py @@ -624,7 +624,6 @@ class PipelineLibrary: jar: Optional[str] = None maven: Optional['compute.MavenLibrary'] = None notebook: Optional['NotebookLibrary'] = None - whl: Optional[str] = None def as_dict(self) -> dict: body = {} @@ -632,7 +631,6 @@ def as_dict(self) -> dict: if self.jar is not None: body['jar'] = self.jar if self.maven: body['maven'] = self.maven.as_dict() if self.notebook: body['notebook'] = self.notebook.as_dict() - if self.whl is not None: body['whl'] = self.whl return body @classmethod @@ -640,8 +638,7 @@ def from_dict(cls, d: Dict[str, any]) -> 'PipelineLibrary': return cls(file=_from_dict(d, 'file', FileLibrary), jar=d.get('jar', None), maven=_from_dict(d, 'maven', compute.MavenLibrary), - notebook=_from_dict(d, 'notebook', NotebookLibrary), - whl=d.get('whl', None)) + notebook=_from_dict(d, 'notebook', NotebookLibrary)) @dataclass diff --git a/databricks/sdk/service/provisioning.py b/databricks/sdk/service/provisioning.py index b911e4500..c66aa98da 100755 --- a/databricks/sdk/service/provisioning.py +++ b/databricks/sdk/service/provisioning.py @@ -432,8 +432,8 @@ class ErrorType(Enum): network ACL.""" CREDENTIALS = 'credentials' 
- NETWORK_ACL = 'networkAcl' - SECURITY_GROUP = 'securityGroup' + NETWORKACL = 'networkAcl' + SECURITYGROUP = 'securityGroup' SUBNET = 'subnet' VPC = 'vpc' @@ -988,7 +988,7 @@ class VpcStatus(Enum): class WarningType(Enum): """The AWS resource associated with this warning: a subnet or a security group.""" - SECURITY_GROUP = 'securityGroup' + SECURITYGROUP = 'securityGroup' SUBNET = 'subnet' diff --git a/databricks/sdk/service/sql.py b/databricks/sdk/service/sql.py index 261a0822b..f5cf70937 100755 --- a/databricks/sdk/service/sql.py +++ b/databricks/sdk/service/sql.py @@ -2229,8 +2229,8 @@ class TerminationReasonCode(Enum): INVALID_SPARK_IMAGE = 'INVALID_SPARK_IMAGE' IP_EXHAUSTION_FAILURE = 'IP_EXHAUSTION_FAILURE' JOB_FINISHED = 'JOB_FINISHED' - K8S_AUTOSCALING_FAILURE = 'K8S_AUTOSCALING_FAILURE' - K8S_DBR_CLUSTER_LAUNCH_TIMEOUT = 'K8S_DBR_CLUSTER_LAUNCH_TIMEOUT' + KS_AUTOSCALING_FAILURE = 'K8S_AUTOSCALING_FAILURE' + KS_DBR_CLUSTER_LAUNCH_TIMEOUT = 'K8S_DBR_CLUSTER_LAUNCH_TIMEOUT' METASTORE_COMPONENT_UNHEALTHY = 'METASTORE_COMPONENT_UNHEALTHY' NEPHOS_RESOURCE_MANAGEMENT = 'NEPHOS_RESOURCE_MANAGEMENT' NETWORK_CONFIGURATION_FAILURE = 'NETWORK_CONFIGURATION_FAILURE' diff --git a/databricks/sdk/version.py b/databricks/sdk/version.py index e6d0c4f45..7fd229a32 100644 --- a/databricks/sdk/version.py +++ b/databricks/sdk/version.py @@ -1 +1 @@ -__version__ = '0.1.12' +__version__ = '0.2.0' diff --git a/docs/account/account-billing.rst b/docs/account/account-billing.rst index 6b369368b..ea434aa29 100644 --- a/docs/account/account-billing.rst +++ b/docs/account/account-billing.rst @@ -1,12 +1,12 @@ Billing ======= - + Configure different aspects of Databricks billing and usage. - + .. toctree:: :maxdepth: 1 - + billable_usage budgets log_delivery \ No newline at end of file diff --git a/docs/account/account-catalog.rst b/docs/account/account-catalog.rst index 98ddf2f74..d235579af 100644 --- a/docs/account/account-catalog.rst +++ b/docs/account/account-catalog.rst @@ -1,12 +1,12 @@ Unity Catalog ============= - + Configure data governance with Unity Catalog for metastores, catalogs, schemas, tables, external locations, and storage credentials - + .. toctree:: :maxdepth: 1 - + metastore_assignments metastores storage_credentials \ No newline at end of file diff --git a/docs/account/account-iam.rst b/docs/account/account-iam.rst index 3cf39e0b3..1c74cd15a 100644 --- a/docs/account/account-iam.rst +++ b/docs/account/account-iam.rst @@ -1,12 +1,12 @@ Identity and Access Management ============================== - + Manage users, service principals, groups and their permissions in Accounts and Workspaces - + .. toctree:: :maxdepth: 1 - + access_control groups service_principals diff --git a/docs/account/account-oauth2.rst b/docs/account/account-oauth2.rst index f8fc02ff4..f504ce4c0 100644 --- a/docs/account/account-oauth2.rst +++ b/docs/account/account-oauth2.rst @@ -1,12 +1,12 @@ OAuth ===== - + Configure OAuth 2.0 application registrations for Databricks - + .. toctree:: :maxdepth: 1 - + custom_app_integration o_auth_enrollment published_app_integration diff --git a/docs/account/account-provisioning.rst b/docs/account/account-provisioning.rst index a9c3f4aa8..5107ab3ad 100644 --- a/docs/account/account-provisioning.rst +++ b/docs/account/account-provisioning.rst @@ -1,12 +1,12 @@ Provisioning ============ - + Resource management for secure Databricks Workspace deployment, cross-account IAM roles, storage, encryption, networking and private access. - + .. 
toctree:: :maxdepth: 1 - + credentials encryption_keys networks diff --git a/docs/account/account-settings.rst b/docs/account/account-settings.rst index e96f7c83d..1feecca16 100644 --- a/docs/account/account-settings.rst +++ b/docs/account/account-settings.rst @@ -1,11 +1,11 @@ Settings ======== - + Manage security settings for Accounts and Workspaces - + .. toctree:: :maxdepth: 1 - + ip_access_lists settings \ No newline at end of file diff --git a/docs/account/groups.rst b/docs/account/groups.rst index 4e16cc266..4595ed453 100644 --- a/docs/account/groups.rst +++ b/docs/account/groups.rst @@ -9,7 +9,7 @@ Account Groups instead of to users individually. All Databricks account identities can be assigned as members of groups, and members inherit permissions that are assigned to their group. - .. py:method:: create( [, display_name, entitlements, external_id, groups, id, members, roles]) + .. py:method:: create( [, display_name, entitlements, external_id, groups, id, members, meta, roles]) Usage: @@ -38,6 +38,8 @@ Account Groups :param id: str (optional) Databricks group ID :param members: List[:class:`ComplexValue`] (optional) + :param meta: :class:`ResourceMeta` (optional) + Container for the group identifier. Workspace local versus account. :param roles: List[:class:`ComplexValue`] (optional) :returns: :class:`Group` @@ -127,7 +129,7 @@ Account Groups :returns: Iterator over :class:`Group` - .. py:method:: patch(id [, operations]) + .. py:method:: patch(id [, operations, schema]) Update group details. @@ -136,11 +138,13 @@ Account Groups :param id: str Unique ID for a group in the Databricks account. :param operations: List[:class:`Patch`] (optional) + :param schema: List[:class:`PatchSchema`] (optional) + The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. - .. py:method:: update(id [, display_name, entitlements, external_id, groups, members, roles]) + .. py:method:: update(id [, display_name, entitlements, external_id, groups, members, meta, roles]) Replace a group. @@ -154,6 +158,8 @@ Account Groups :param external_id: str (optional) :param groups: List[:class:`ComplexValue`] (optional) :param members: List[:class:`ComplexValue`] (optional) + :param meta: :class:`ResourceMeta` (optional) + Container for the group identifier. Workspace local versus account. :param roles: List[:class:`ComplexValue`] (optional) diff --git a/docs/account/index.rst b/docs/account/index.rst index 82c2f6f61..8993d2120 100644 --- a/docs/account/index.rst +++ b/docs/account/index.rst @@ -1,12 +1,12 @@ Account APIs ============ - + These APIs are available from AccountClient - + .. toctree:: :maxdepth: 1 - + account-iam account-catalog account-settings diff --git a/docs/account/service_principals.rst b/docs/account/service_principals.rst index 667b7eda1..497ec8dbf 100644 --- a/docs/account/service_principals.rst +++ b/docs/account/service_principals.rst @@ -130,7 +130,7 @@ Account Service Principals :returns: Iterator over :class:`ServicePrincipal` - .. py:method:: patch(id [, operations]) + .. py:method:: patch(id [, operations, schema]) Update service principal details. @@ -139,6 +139,8 @@ Account Service Principals :param id: str Unique ID for a service principal in the Databricks account. :param operations: List[:class:`Patch`] (optional) + :param schema: List[:class:`PatchSchema`] (optional) + The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. 
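The `schema` parameter added to the SCIM `patch()` methods above pairs with the new `iam.PatchSchema` enum. A minimal sketch of a group patch under that shape — the group ID, entitlement name, and the SCIM PatchOp payload layout are assumptions, not taken from this patch:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import iam

w = WorkspaceClient()

# '123' stands in for a real group ID.
w.groups.patch(
    id='123',
    operations=[iam.Patch(op=iam.PatchOp.ADD,
                          value={'entitlements': [{'value': 'allow-cluster-create'}]})],
    # The SCIM 2.0 PatchOp schema URN, now expressible via the enum.
    schema=[iam.PatchSchema.URN_IETF_PARAMS_SCIM_API_MESSAGES_PATCHOP])
```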
diff --git a/docs/account/settings.rst b/docs/account/settings.rst index 3aede4118..08ec2dae9 100644 --- a/docs/account/settings.rst +++ b/docs/account/settings.rst @@ -1,29 +1,43 @@ -Personal Compute setting -======================== +Personal Compute Enablement +=========================== .. py:class:: AccountSettingsAPI - TBD + The Personal Compute enablement setting lets you control which users can use the Personal Compute default + policy to create compute resources. By default all users in all workspaces have access (ON), but you can + change the setting to instead let individual workspaces configure access control (DELEGATE). + + There is only one instance of this setting per account. Since this setting has a default value, this + setting is present on all accounts even though it's never set on a given account. Deletion reverts the + value of the setting back to the default value. - .. py:method:: delete_personal_compute_setting( [, etag]) + .. py:method:: delete_personal_compute_setting(etag) Delete Personal Compute setting. - TBD + Reverts back the Personal Compute setting value to default (ON) - :param etag: str (optional) - TBD + :param etag: str + etag used for versioning. The response is at least as fresh as the eTag provided. This is used for + optimistic concurrency control as a way to help prevent simultaneous writes of a setting overwriting + each other. It is strongly suggested that systems make use of the etag in the read -> delete pattern + to perform setting deletions in order to avoid race conditions. That is, get an etag from a GET + request, and pass it with the DELETE request to identify the rule set version you are deleting. :returns: :class:`DeletePersonalComputeSettingResponse` - .. py:method:: read_personal_compute_setting( [, etag]) + .. py:method:: read_personal_compute_setting(etag) Get Personal Compute setting. - TBD + Gets the value of the Personal Compute setting. - :param etag: str (optional) - TBD + :param etag: str + etag used for versioning. The response is at least as fresh as the eTag provided. This is used for + optimistic concurrency control as a way to help prevent simultaneous writes of a setting overwriting + each other. It is strongly suggested that systems make use of the etag in the read -> delete pattern + to perform setting deletions in order to avoid race conditions. That is, get an etag from a GET + request, and pass it with the DELETE request to identify the rule set version you are deleting. :returns: :class:`PersonalComputeSetting` @@ -32,10 +46,10 @@ Personal Compute setting Update Personal Compute setting. - TBD + Updates the value of the Personal Compute setting. :param allow_missing: bool (optional) - TBD + This should always be set to true for Settings RPCs. Added for AIP compliance. :param setting: :class:`PersonalComputeSetting` (optional) :returns: :class:`PersonalComputeSetting` diff --git a/docs/account/users.rst b/docs/account/users.rst index 39c35b2f2..d78121f5a 100644 --- a/docs/account/users.rst +++ b/docs/account/users.rst @@ -116,7 +116,7 @@ Account Users all_users = w.users.list(attributes="id,userName", sort_by="userName", - sort_order=iam.ListSortOrder.descending) + sort_order=iam.ListSortOrder.DESCENDING) List users. @@ -146,7 +146,7 @@ Account Users :returns: Iterator over :class:`User` - .. py:method:: patch(id [, operations]) + .. py:method:: patch(id [, operations, schema]) Update user details. @@ -155,6 +155,8 @@ Account Users :param id: str Unique ID for a user in the Databricks account. 
:param operations: List[:class:`Patch`] (optional) + :param schema: List[:class:`PatchSchema`] (optional) + The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. diff --git a/docs/workspace/alerts.rst b/docs/workspace/alerts.rst index 194eecdba..0778ee939 100644 --- a/docs/workspace/alerts.rst +++ b/docs/workspace/alerts.rst @@ -45,9 +45,9 @@ Alerts :param options: :class:`AlertOptions` Alert configuration options. :param query_id: str - ID of the query evaluated by the alert. + Query ID. :param parent: str (optional) - The identifier of the workspace folder containing the alert. The default is ther user's home folder. + The identifier of the workspace folder containing the object. :param rearm: int (optional) Number of seconds after being triggered before the alert rearms itself and can be triggered again. If `null`, alert will never be triggered again. @@ -167,7 +167,7 @@ Alerts :param options: :class:`AlertOptions` Alert configuration options. :param query_id: str - ID of the query evaluated by the alert. + Query ID. :param alert_id: str :param rearm: int (optional) Number of seconds after being triggered before the alert rearms itself and can be triggered again. diff --git a/docs/workspace/clusters.rst b/docs/workspace/clusters.rst index 32da3f537..eb511eb5d 100644 --- a/docs/workspace/clusters.rst +++ b/docs/workspace/clusters.rst @@ -421,7 +421,7 @@ Clusters cluster_id = os.environ["TEST_DEFAULT_CLUSTER_ID"] - context = w.command_execution.create(cluster_id=cluster_id, language=compute.Language.python).result() + context = w.command_execution.create(cluster_id=cluster_id, language=compute.Language.PYTHON).result() w.clusters.ensure_cluster_is_running(cluster_id) diff --git a/docs/workspace/command_execution.rst b/docs/workspace/command_execution.rst index 988a86a5b..f2d15635a 100644 --- a/docs/workspace/command_execution.rst +++ b/docs/workspace/command_execution.rst @@ -63,7 +63,7 @@ Command Execution cluster_id = os.environ["TEST_DEFAULT_CLUSTER_ID"] - context = w.command_execution.create(cluster_id=cluster_id, language=compute.Language.python).result() + context = w.command_execution.create(cluster_id=cluster_id, language=compute.Language.PYTHON).result() # cleanup w.command_execution.destroy(cluster_id=cluster_id, context_id=context.id) @@ -110,11 +110,11 @@ Command Execution cluster_id = os.environ["TEST_DEFAULT_CLUSTER_ID"] - context = w.command_execution.create(cluster_id=cluster_id, language=compute.Language.python).result() + context = w.command_execution.create(cluster_id=cluster_id, language=compute.Language.PYTHON).result() text_results = w.command_execution.execute(cluster_id=cluster_id, context_id=context.id, - language=compute.Language.python, + language=compute.Language.PYTHON, command="print(1)").result() # cleanup diff --git a/docs/workspace/dashboards.rst b/docs/workspace/dashboards.rst index cbfdad098..6983366c1 100644 --- a/docs/workspace/dashboards.rst +++ b/docs/workspace/dashboards.rst @@ -33,8 +33,7 @@ Dashboards :param name: str (optional) The title of this dashboard that appears in list views and at the top of the dashboard page. :param parent: str (optional) - The identifier of the workspace folder containing the dashboard. The default is the user's home - folder. + The identifier of the workspace folder containing the object. 
:param tags: List[str] (optional) :returns: :class:`Dashboard` diff --git a/docs/workspace/groups.rst b/docs/workspace/groups.rst index 3990e9c0a..58a5c4b84 100644 --- a/docs/workspace/groups.rst +++ b/docs/workspace/groups.rst @@ -9,7 +9,7 @@ Groups instead of to users individually. All Databricks workspace identities can be assigned as members of groups, and members inherit permissions that are assigned to their group. - .. py:method:: create( [, display_name, entitlements, external_id, groups, id, members, roles]) + .. py:method:: create( [, display_name, entitlements, external_id, groups, id, members, meta, roles]) Usage: @@ -38,6 +38,8 @@ Groups :param id: str (optional) Databricks group ID :param members: List[:class:`ComplexValue`] (optional) + :param meta: :class:`ResourceMeta` (optional) + Container for the group identifier. Workspace local versus account. :param roles: List[:class:`ComplexValue`] (optional) :returns: :class:`Group` @@ -127,7 +129,7 @@ Groups :returns: Iterator over :class:`Group` - .. py:method:: patch(id [, operations]) + .. py:method:: patch(id [, operations, schema]) Update group details. @@ -136,11 +138,13 @@ Groups :param id: str Unique ID for a group in the Databricks workspace. :param operations: List[:class:`Patch`] (optional) + :param schema: List[:class:`PatchSchema`] (optional) + The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"]. - .. py:method:: update(id [, display_name, entitlements, external_id, groups, members, roles]) + .. py:method:: update(id [, display_name, entitlements, external_id, groups, members, meta, roles]) Replace a group. @@ -154,6 +158,8 @@ Groups :param external_id: str (optional) :param groups: List[:class:`ComplexValue`] (optional) :param members: List[:class:`ComplexValue`] (optional) + :param meta: :class:`ResourceMeta` (optional) + Container for the group identifier. Workspace local versus account. :param roles: List[:class:`ComplexValue`] (optional) diff --git a/docs/workspace/index.rst b/docs/workspace/index.rst index b19b3aabf..caf3d2f81 100644 --- a/docs/workspace/index.rst +++ b/docs/workspace/index.rst @@ -1,12 +1,12 @@ Workspace APIs ============== - + These APIs are available from WorkspaceClient - + .. toctree:: :maxdepth: 1 - + workspace-workspace workspace-compute workspace-jobs diff --git a/docs/workspace/instance_profiles.rst b/docs/workspace/instance_profiles.rst index cb7bddc2c..b67b63a66 100644 --- a/docs/workspace/instance_profiles.rst +++ b/docs/workspace/instance_profiles.rst @@ -40,11 +40,10 @@ Instance Profiles [Databricks SQL Serverless]: https://docs.databricks.com/sql/admin/serverless.html :param is_meta_instance_profile: bool (optional) - By default, Databricks validates that it has sufficient permissions to launch instances with the - instance profile. This validation uses AWS dry-run mode for the RunInstances API. If validation - fails with an error message that does not indicate an IAM related permission issue, (e.g. `Your - requested instance type is not supported in your requested availability zone`), you can pass this - flag to skip the validation and forcibly add the instance profile. + Boolean flag indicating whether the instance profile should only be used in credential passthrough + scenarios. If true, it means the instance profile contains an meta IAM role which could assume a + wide range of roles. Therefore it should always be used with authorization. This field is optional, + the default value is `false`. 
:param skip_validation: bool (optional) By default, Databricks validates that it has sufficient permissions to launch instances with the instance profile. This validation uses AWS dry-run mode for the RunInstances API. If validation @@ -95,11 +94,10 @@ Instance Profiles [Databricks SQL Serverless]: https://docs.databricks.com/sql/admin/serverless.html :param is_meta_instance_profile: bool (optional) - By default, Databricks validates that it has sufficient permissions to launch instances with the - instance profile. This validation uses AWS dry-run mode for the RunInstances API. If validation - fails with an error message that does not indicate an IAM related permission issue, (e.g. `Your - requested instance type is not supported in your requested availability zone`), you can pass this - flag to skip the validation and forcibly add the instance profile. + Boolean flag indicating whether the instance profile should only be used in credential passthrough + scenarios. If true, it means the instance profile contains an meta IAM role which could assume a + wide range of roles. Therefore it should always be used with authorization. This field is optional, + the default value is `false`. diff --git a/docs/workspace/jobs.rst b/docs/workspace/jobs.rst index 92eefc530..e6f25ca57 100644 --- a/docs/workspace/jobs.rst +++ b/docs/workspace/jobs.rst @@ -110,7 +110,7 @@ Jobs See :method:wait_get_run_job_terminated_or_skipped for more details. - .. py:method:: create( [, access_control_list, compute, continuous, email_notifications, format, git_source, job_clusters, max_concurrent_runs, name, notification_settings, run_as, schedule, tags, tasks, timeout_seconds, trigger, webhook_notifications]) + .. py:method:: create( [, access_control_list, compute, continuous, email_notifications, format, git_source, health, job_clusters, max_concurrent_runs, name, notification_settings, parameters, run_as, schedule, tags, tasks, timeout_seconds, trigger, webhook_notifications]) Usage: @@ -161,6 +161,8 @@ Jobs :param git_source: :class:`GitSource` (optional) An optional specification for a remote repository containing the notebooks used by this job's notebook tasks. + :param health: :class:`JobsHealthRules` (optional) + An optional set of health rules that can be defined for this job. :param job_clusters: List[:class:`JobCluster`] (optional) A list of job cluster specifications that can be shared and reused by tasks of this job. Libraries cannot be declared in a shared job cluster. You must declare dependent libraries in task settings. @@ -183,6 +185,8 @@ Jobs :param notification_settings: :class:`JobNotificationSettings` (optional) Optional notification settings that are used when sending notifications to each of the `email_notifications` and `webhook_notifications` for this job. + :param parameters: List[:class:`JobParameterDefinition`] (optional) + Job-level parameter definitions :param run_as: :class:`JobRunAs` (optional) Write-only setting, available only in Create/Update/Reset and Submit calls. Specifies the user or service principal that the job runs as. If not specified, the job runs as the user who created the @@ -434,8 +438,8 @@ Jobs :param expand_tasks: bool (optional) Whether to include task and cluster details in the response. :param limit: int (optional) - The number of jobs to return. This value must be greater than 0 and less or equal to 25. The default - value is 20. + The number of jobs to return. This value must be greater than 0 and less or equal to 100. The + default value is 20. 
:param name: str (optional) A filter on the list based on the exact (case insensitive) job name. :param offset: int (optional) @@ -488,7 +492,7 @@ Jobs :returns: Iterator over :class:`BaseRun` - .. py:method:: repair_run(run_id [, dbt_commands, jar_params, latest_repair_id, notebook_params, pipeline_params, python_named_params, python_params, rerun_all_failed_tasks, rerun_tasks, spark_submit_params, sql_params]) + .. py:method:: repair_run(run_id [, dbt_commands, jar_params, latest_repair_id, notebook_params, pipeline_params, python_named_params, python_params, rerun_all_failed_tasks, rerun_dependent_tasks, rerun_tasks, spark_submit_params, sql_params]) Usage: @@ -584,7 +588,10 @@ Jobs [Task parameter variables]: https://docs.databricks.com/jobs.html#parameter-variables :param rerun_all_failed_tasks: bool (optional) - If true, repair all failed tasks. Only one of rerun_tasks or rerun_all_failed_tasks can be used. + If true, repair all failed tasks. Only one of `rerun_tasks` or `rerun_all_failed_tasks` can be used. + :param rerun_dependent_tasks: bool (optional) + If true, repair all tasks that depend on the tasks in `rerun_tasks`, even if they were previously + successful. Can be also used in combination with `rerun_all_failed_tasks`. :param rerun_tasks: List[str] (optional) The task keys of the task runs to repair. :param spark_submit_params: List[str] (optional) @@ -665,7 +672,7 @@ Jobs - .. py:method:: run_now(job_id [, dbt_commands, idempotency_token, jar_params, notebook_params, pipeline_params, python_named_params, python_params, spark_submit_params, sql_params]) + .. py:method:: run_now(job_id [, dbt_commands, idempotency_token, jar_params, job_parameters, notebook_params, pipeline_params, python_named_params, python_params, spark_submit_params, sql_params]) Usage: @@ -729,6 +736,8 @@ Jobs Use [Task parameter variables](/jobs.html"#parameter-variables") to set parameters containing information about job runs. + :param job_parameters: List[Dict[str,str]] (optional) + Job-level parameters used in the run :param notebook_params: Dict[str,str] (optional) A map from keys to values for jobs with notebook task, for example `"notebook_params": {"name": "john doe", "age": "35"}`. The map is passed to the notebook and is accessible through the @@ -789,7 +798,7 @@ Jobs See :method:wait_get_run_job_terminated_or_skipped for more details. - .. py:method:: submit( [, access_control_list, git_source, idempotency_token, notification_settings, run_name, tasks, timeout_seconds, webhook_notifications]) + .. py:method:: submit( [, access_control_list, email_notifications, git_source, health, idempotency_token, notification_settings, run_name, tasks, timeout_seconds, webhook_notifications]) Usage: @@ -826,9 +835,14 @@ Jobs :param access_control_list: List[:class:`AccessControlRequest`] (optional) List of permissions to set on the job. + :param email_notifications: :class:`JobEmailNotifications` (optional) + An optional set of email addresses notified when the run begins or completes. The default behavior + is to not send any emails. :param git_source: :class:`GitSource` (optional) An optional specification for a remote repository containing the notebooks used by this job's notebook tasks. + :param health: :class:`JobsHealthRules` (optional) + An optional set of health rules that can be defined for this job. :param idempotency_token: str (optional) An optional token that can be used to guarantee the idempotency of job run requests. 
diff --git a/docs/workspace/policy_families.rst b/docs/workspace/policy_families.rst
index 4f6481f3d..326612738 100644
--- a/docs/workspace/policy_families.rst
+++ b/docs/workspace/policy_families.rst
@@ -10,4 +10,53 @@ Policy Families
     Policy families cannot be used directly to create clusters. Instead, you create cluster policies
     using a policy family. Cluster policies created using a policy family inherit the policy family's policy
-    definition.
\ No newline at end of file
+    definition.
+
+    .. py:method:: get(policy_family_id)
+
+        Usage:
+
+        .. code-block::
+
+            from databricks.sdk import WorkspaceClient
+            from databricks.sdk.service import compute
+
+            w = WorkspaceClient()
+
+            all = list(w.policy_families.list(compute.ListPolicyFamiliesRequest()))
+
+            first_family = w.policy_families.get(policy_family_id=all[0].policy_family_id)
+
+        Get policy family information.
+
+        Retrieve the information for a policy family based on its identifier.
+
+        :param policy_family_id: str
+
+        :returns: :class:`PolicyFamily`
+
+
+    .. py:method:: list( [, max_results, page_token])
+
+        Usage:
+
+        .. code-block::
+
+            from databricks.sdk import WorkspaceClient
+            from databricks.sdk.service import compute
+
+            w = WorkspaceClient()
+
+            all = w.policy_families.list(compute.ListPolicyFamiliesRequest())
+
+        List policy families.
+
+        Retrieve a list of policy families. This API is paginated.
+
+        :param max_results: int (optional)
+          The max number of policy families to return.
+        :param page_token: str (optional)
+          A token that can be used to get the next page of results.
+
+        :returns: Iterator over :class:`PolicyFamily`
\ No newline at end of file
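Because the policy families `list()` above is paginated and returns an iterator, the SDK can follow `page_token` for you. A sketch, where `max_results` is an illustrative page size and the printed `name` field is assumed to exist on `PolicyFamily`:

.. code-block::

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import compute

    w = WorkspaceClient()

    # The iterator requests further pages on demand via page_token.
    for family in w.policy_families.list(compute.ListPolicyFamiliesRequest(max_results=20)):
        print(family.policy_family_id, family.name)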
diff --git a/docs/workspace/queries.rst b/docs/workspace/queries.rst
index b74d53f01..9c015020d 100644
--- a/docs/workspace/queries.rst
+++ b/docs/workspace/queries.rst
@@ -40,19 +40,19 @@ Queries / Results
          **Note**: You cannot add a visualization until you create the query.
 
        :param data_source_id: str (optional)
-         The ID of the data source / SQL warehouse where this query will run.
+         Data source ID.
        :param description: str (optional)
-         General description that can convey additional information about this query such as usage notes.
+         General description that conveys additional information about this query such as usage notes.
        :param name: str (optional)
-         The name or title of this query to display in list views.
+         The title of this query that appears in list views, widget headings, and on the query page.
        :param options: Any (optional)
          Exclusively used for storing a list parameter definitions. A parameter is an object with `title`,
          `name`, `type`, and `value` properties. The `value` field here is the default value. It can be
          overridden at runtime.
        :param parent: str (optional)
-         The identifier of the workspace folder containing the query. The default is the user's home folder.
+         The identifier of the workspace folder containing the object.
        :param query: str (optional)
-         The text of the query.
+         The text of the query to be run.
 
        :returns: :class:`Query`
@@ -181,17 +181,17 @@ Queries / Results
 
        :param query_id: str
        :param data_source_id: str (optional)
-         The ID of the data source / SQL warehouse where this query will run.
+         Data source ID.
        :param description: str (optional)
-         General description that can convey additional information about this query such as usage notes.
+         General description that conveys additional information about this query such as usage notes.
        :param name: str (optional)
-         The name or title of this query to display in list views.
+         The title of this query that appears in list views, widget headings, and on the query page.
        :param options: Any (optional)
          Exclusively used for storing a list parameter definitions. A parameter is an object with `title`,
          `name`, `type`, and `value` properties. The `value` field here is the default value. It can be
          overridden at runtime.
        :param query: str (optional)
-         The text of the query.
+         The text of the query to be run.
 
        :returns: :class:`Query`
\ No newline at end of file
diff --git a/docs/workspace/service_principals.rst b/docs/workspace/service_principals.rst
index ee0027e2a..34cbb0c06 100644
--- a/docs/workspace/service_principals.rst
+++ b/docs/workspace/service_principals.rst
@@ -130,7 +130,7 @@ Service Principals
 
        :returns: Iterator over :class:`ServicePrincipal`
 
-    .. py:method:: patch(id [, operations])
+    .. py:method:: patch(id [, operations, schema])
 
         Update service principal details.
 
@@ -139,6 +139,8 @@ Service Principals
        :param id: str
          Unique ID for a service principal in the Databricks workspace.
        :param operations: List[:class:`Patch`] (optional)
+       :param schema: List[:class:`PatchSchema`] (optional)
+         The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"].
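A sketch of the extended service principals `patch()` call above. The service principal ID is a placeholder, and the `PatchSchema` member name below is an assumption derived from the SDK's constant-naming convention for the required `urn:ietf:params:scim:api:messages:2.0:PatchOp` value:

.. code-block::

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import iam

    w = WorkspaceClient()

    # Rename a service principal via a SCIM PatchOp (placeholder ID;
    # the PatchSchema member name is assumed).
    w.service_principals.patch(
        id="1234567890",
        operations=[iam.Patch(op=iam.PatchOp.REPLACE, path="displayName", value="renamed-sp")],
        schema=[iam.PatchSchema.URN_IETF_PARAMS_SCIM_API_MESSAGES_2_0_PATCH_OP],
    )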
diff --git a/docs/workspace/serving_endpoints.rst b/docs/workspace/serving_endpoints.rst
index cceb24d31..699d6be75 100644
--- a/docs/workspace/serving_endpoints.rst
+++ b/docs/workspace/serving_endpoints.rst
@@ -4,13 +4,13 @@ Serving endpoints
 
     The Serving Endpoints API allows you to create, update, and delete model serving endpoints.
 
-    You can use a serving endpoint to serve models from the Databricks Model Registry. Endpoints expose the
-    underlying models as scalable REST API endpoints using serverless compute. This means the endpoints and
-    associated compute resources are fully managed by Databricks and will not appear in your cloud account. A
-    serving endpoint can consist of one or more MLflow models from the Databricks Model Registry, called
-    served models. A serving endpoint can have at most ten served models. You can configure traffic settings
-    to define how requests should be routed to your served models behind an endpoint. Additionally, you can
-    configure the scale of resources that should be applied to each served model.
+    You can use a serving endpoint to serve models from the Databricks Model Registry or from Unity Catalog.
+    Endpoints expose the underlying models as scalable REST API endpoints using serverless compute. This means
+    the endpoints and associated compute resources are fully managed by Databricks and will not appear in your
+    cloud account. A serving endpoint can consist of one or more MLflow models from the Databricks Model
+    Registry, called served models. A serving endpoint can have at most ten served models. You can configure
+    traffic settings to define how requests should be routed to your served models behind an endpoint.
+    Additionally, you can configure the scale of resources that should be applied to each served model.
 
     .. py:method:: build_logs(name, served_model_name)
diff --git a/docs/workspace/tables.rst b/docs/workspace/tables.rst
index 8609cb03b..7508cfcf8 100644
--- a/docs/workspace/tables.rst
+++ b/docs/workspace/tables.rst
@@ -170,4 +170,20 @@ Tables
          A sql LIKE pattern (% and _) for table names. All tables will be returned if not set or empty.
 
        :returns: Iterator over :class:`TableSummary`
+
+
+    .. py:method:: update(full_name [, owner])
+
+        Update a table owner.
+
+        Change the owner of the table. The caller must be the owner of the parent catalog, have the
+        **USE_CATALOG** privilege on the parent catalog and be the owner of the parent schema, or be the owner
+        of the table and have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
+        privilege on the parent schema.
+
+        :param full_name: str
+          Full name of the table.
+        :param owner: str (optional)
+
\ No newline at end of file
diff --git a/docs/workspace/users.rst b/docs/workspace/users.rst
index 571df325d..c7b89d793 100644
--- a/docs/workspace/users.rst
+++ b/docs/workspace/users.rst
@@ -116,7 +116,7 @@ Users
             all_users = w.users.list(attributes="id,userName",
                                      sort_by="userName",
-                                     sort_order=iam.ListSortOrder.descending)
+                                     sort_order=iam.ListSortOrder.DESCENDING)
 
         List users.
 
@@ -146,7 +146,7 @@ Users
 
        :returns: Iterator over :class:`User`
 
-    .. py:method:: patch(id [, operations])
+    .. py:method:: patch(id [, operations, schema])
 
         Update user details.
 
@@ -155,6 +155,8 @@ Users
        :param id: str
          Unique ID for a user in the Databricks workspace.
        :param operations: List[:class:`Patch`] (optional)
+       :param schema: List[:class:`PatchSchema`] (optional)
+         The schema of the patch request. Must be ["urn:ietf:params:scim:api:messages:2.0:PatchOp"].
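The new `tables.update()` documented above takes the table's full three-level name. A minimal sketch, with placeholder values for both arguments:

.. code-block::

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()

    # full_name is catalog.schema.table; both values below are placeholders.
    w.tables.update(full_name="main.default.my_table", owner="data-eng-team")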
diff --git a/docs/workspace/workspace-catalog.rst b/docs/workspace/workspace-catalog.rst
index 15e4e8c41..3b0e859ff 100644
--- a/docs/workspace/workspace-catalog.rst
+++ b/docs/workspace/workspace-catalog.rst
@@ -1,12 +1,12 @@
 Unity Catalog
 =============
-
+
 Configure data governance with Unity Catalog for metastores, catalogs, schemas, tables, external
 locations, and storage credentials
-
+
 .. toctree::
    :maxdepth: 1
-
+
    catalogs
    connections
    external_locations
diff --git a/docs/workspace/workspace-compute.rst b/docs/workspace/workspace-compute.rst
index 32b215b35..cbb4bb833 100644
--- a/docs/workspace/workspace-compute.rst
+++ b/docs/workspace/workspace-compute.rst
@@ -1,12 +1,12 @@
 Compute
 =======
-
+
 Use and configure compute for Databricks
-
+
 .. toctree::
    :maxdepth: 1
-
+
    cluster_policies
    clusters
    command_execution
diff --git a/docs/workspace/workspace-files.rst b/docs/workspace/workspace-files.rst
index 8c93a0044..88530a6b8 100644
--- a/docs/workspace/workspace-files.rst
+++ b/docs/workspace/workspace-files.rst
@@ -1,10 +1,10 @@
 File Management
 ===============
-
+
 Manage files on Databricks in a filesystem-like interface
-
+
 .. toctree::
    :maxdepth: 1
-
+
    dbfs
\ No newline at end of file
diff --git a/docs/workspace/workspace-iam.rst b/docs/workspace/workspace-iam.rst
index 4468aaaf1..021ff539d 100644
--- a/docs/workspace/workspace-iam.rst
+++ b/docs/workspace/workspace-iam.rst
@@ -1,12 +1,12 @@
 Identity and Access Management
 ==============================
-
+
 Manage users, service principals, groups and their permissions in Accounts and Workspaces
-
+
 .. toctree::
    :maxdepth: 1
-
+
    account_access_control_proxy
    current_user
    groups
diff --git a/docs/workspace/workspace-jobs.rst b/docs/workspace/workspace-jobs.rst
index 0da2f655f..a1a53a955 100644
--- a/docs/workspace/workspace-jobs.rst
+++ b/docs/workspace/workspace-jobs.rst
@@ -1,10 +1,10 @@
 Jobs
 ====
-
+
 Schedule automated jobs on Databricks Workspaces
-
+
 .. toctree::
    :maxdepth: 1
-
+
    jobs
\ No newline at end of file
diff --git a/docs/workspace/workspace-ml.rst b/docs/workspace/workspace-ml.rst
index d1f926f93..e701cfd1d 100644
--- a/docs/workspace/workspace-ml.rst
+++ b/docs/workspace/workspace-ml.rst
@@ -1,11 +1,11 @@
 Machine Learning
 ================
-
+
 Create and manage experiments, features, and other machine learning artifacts
-
+
 .. toctree::
    :maxdepth: 1
-
+
    experiments
    model_registry
\ No newline at end of file
diff --git a/docs/workspace/workspace-pipelines.rst b/docs/workspace/workspace-pipelines.rst
index 4e4a82371..8213f87ef 100644
--- a/docs/workspace/workspace-pipelines.rst
+++ b/docs/workspace/workspace-pipelines.rst
@@ -1,10 +1,10 @@
 Delta Live Tables
 =================
-
+
 Manage pipelines, runs, and other Delta Live Table resources
-
+
 .. toctree::
    :maxdepth: 1
-
+
    pipelines
\ No newline at end of file
diff --git a/docs/workspace/workspace-serving.rst b/docs/workspace/workspace-serving.rst
index 0d20fba39..34c347516 100644
--- a/docs/workspace/workspace-serving.rst
+++ b/docs/workspace/workspace-serving.rst
@@ -1,10 +1,10 @@
 Real-time Serving
 =================
-
+
 Use real-time inference for machine learning
-
+
 .. toctree::
    :maxdepth: 1
-
+
    serving_endpoints
\ No newline at end of file
diff --git a/docs/workspace/workspace-settings.rst b/docs/workspace/workspace-settings.rst
index 8174e8a78..71e66ac16 100644
--- a/docs/workspace/workspace-settings.rst
+++ b/docs/workspace/workspace-settings.rst
@@ -1,12 +1,12 @@
 Settings
 ========
-
+
 Manage security settings for Accounts and Workspaces
-
+
 .. toctree::
    :maxdepth: 1
-
+
    ip_access_lists
    token_management
    tokens
diff --git a/docs/workspace/workspace-sharing.rst b/docs/workspace/workspace-sharing.rst
index e4b3b7e7e..5ba08d21b 100644
--- a/docs/workspace/workspace-sharing.rst
+++ b/docs/workspace/workspace-sharing.rst
@@ -1,12 +1,13 @@
 Delta Sharing
 =============
-
+
 Configure data sharing with Unity Catalog for providers, recipients, and shares
-
+
 .. toctree::
    :maxdepth: 1
-
+
+   clean_rooms
    providers
    recipient_activation
    recipients
diff --git a/docs/workspace/workspace-sql.rst b/docs/workspace/workspace-sql.rst
index aa24ed626..bd49e65de 100644
--- a/docs/workspace/workspace-sql.rst
+++ b/docs/workspace/workspace-sql.rst
@@ -1,12 +1,12 @@
 Databricks SQL
 ==============
-
+
 Manage Databricks SQL assets, including warehouses, dashboards, queries and query history, and alerts
-
+
 .. toctree::
    :maxdepth: 1
-
+
    alerts
    dashboards
    data_sources
diff --git a/docs/workspace/workspace-workspace.rst b/docs/workspace/workspace-workspace.rst
index 17348c9f8..7845b7784 100644
--- a/docs/workspace/workspace-workspace.rst
+++ b/docs/workspace/workspace-workspace.rst
@@ -1,12 +1,12 @@
 Databricks Workspace
 ====================
-
+
 Manage workspace-level entities that include notebooks, Git checkouts, and secrets
-
+
 .. toctree::
    :maxdepth: 1
-
+
    git_credentials
    repos
    secrets