---
subcategory: "Databricks SQL"
---
This resource is used to manage Databricks SQL warehouses. To create SQL warehouses you must have `databricks_sql_access` on your `databricks_group` or `databricks_user`.
data "databricks_current_user" "me" {}
resource "databricks_sql_endpoint" "this" {
name = "Endpoint of ${data.databricks_current_user.me.alphanumeric}"
cluster_size = "Small"
max_num_clusters = 1
tags {
custom_tags {
key = "City"
value = "Amsterdam"
}
}
}
The following arguments are supported:
- `name` - (Required) Name of the SQL warehouse. Must be unique.
- `cluster_size` - (Required) The size of the clusters allocated to the endpoint: "2X-Small", "X-Small", "Small", "Medium", "Large", "X-Large", "2X-Large", "3X-Large", or "4X-Large".
- `min_num_clusters` - Minimum number of clusters available when a SQL warehouse is running. The default is `1`.
- `max_num_clusters` - Maximum number of clusters available when a SQL warehouse is running. This field is required. If multi-cluster load balancing is not enabled, this defaults to `1`.
- `auto_stop_mins` - Time in minutes until an idle SQL warehouse terminates all clusters and stops. This field is optional. The default is `120`; set it to `0` to disable auto stop.
- `tags` - Databricks tags all endpoint resources with these tags.
- `spot_instance_policy` - The spot policy to use for allocating instances to clusters: `COST_OPTIMIZED` or `RELIABILITY_OPTIMIZED`. This field is optional. Default is `COST_OPTIMIZED`.
- `enable_photon` - Whether to enable Photon. This field is optional and is enabled by default.
- `enable_serverless_compute` - Whether this SQL warehouse is a serverless endpoint. See below for details about the default values. To avoid ambiguity, especially for organizations with many workspaces, Databricks recommends that you always set this field explicitly.
  - For AWS, if omitted, the default is `false` for most workspaces. However, if this workspace used the SQL Warehouses API to create a warehouse between September 1, 2022 and April 30, 2023, the default remains the previous behavior, which is `true` if the workspace is enabled for serverless and fits the requirements for serverless SQL warehouses. If your account needs updated terms of use, workspace admins are prompted in the Databricks SQL UI. A workspace must meet the requirements and might require an update to its instance profile role to add a trust relationship.
  - For Azure, if omitted, the default is `false` for most workspaces. However, if this workspace used the SQL Warehouses API to create a warehouse between November 1, 2022 and May 19, 2023, the default remains the previous behavior, which is `true` if the workspace is enabled for serverless and fits the requirements for serverless SQL warehouses. A workspace must meet the requirements and might require an update to its Azure storage firewall.
- `channel` block, consisting of the following fields:
  - `name` - Name of the Databricks SQL release channel. Possible values are: `CHANNEL_NAME_PREVIEW` and `CHANNEL_NAME_CURRENT`. Default is `CHANNEL_NAME_CURRENT`.
- `warehouse_type` - SQL warehouse type. See the SQL warehouse types documentation for AWS or Azure. Set to `PRO` or `CLASSIC`. If the field `enable_serverless_compute` has the value `true`, either explicitly or through the default logic (see that field above for details), the default is `PRO`, which is required for serverless SQL warehouses. Otherwise, the default is `CLASSIC`.
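For example, a minimal sketch of a serverless warehouse that combines several of the arguments above; the name and sizing values are illustrative, and serverless compute must be available in your workspace:

```hcl
resource "databricks_sql_endpoint" "serverless" {
  name                      = "Serverless warehouse" # illustrative name
  cluster_size              = "Small"
  max_num_clusters          = 1
  auto_stop_mins            = 30
  enable_serverless_compute = true
  warehouse_type            = "PRO" # serverless requires the PRO type

  channel {
    name = "CHANNEL_NAME_CURRENT"
  }
}
```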
In addition to all arguments above, the following attributes are exported:
- `jdbc_url` - JDBC connection string.
- `odbc_params` - ODBC connection params: `odbc_params.hostname`, `odbc_params.path`, `odbc_params.protocol`, and `odbc_params.port`.
- `data_source_id` - ID of the data source for this endpoint. This is used to bind a Databricks SQL query to an endpoint.
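These attributes can be surfaced as Terraform outputs; a minimal sketch, assuming the `databricks_sql_endpoint.this` resource from the example above:

```hcl
# Expose the JDBC connection string and data source ID of the warehouse.
output "warehouse_jdbc_url" {
  value = databricks_sql_endpoint.this.jdbc_url
}

output "warehouse_data_source_id" {
  value = databricks_sql_endpoint.this.data_source_id
}
```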
- `databricks_permissions` can control which groups or individual users have the *Can Use* or *Can Manage* permission on SQL warehouses.
- `databricks_sql_access` on `databricks_group` or `databricks_user` grants access to Databricks SQL.
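A sketch of granting usage rights with `databricks_permissions`, assuming the warehouse from the example above; the group name is a placeholder:

```hcl
resource "databricks_permissions" "endpoint_usage" {
  sql_endpoint_id = databricks_sql_endpoint.this.id

  access_control {
    group_name       = "sql-users" # placeholder group
    permission_level = "CAN_USE"
  }
}
```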
The `timeouts` block allows you to specify `create` timeouts. It usually takes 10-20 minutes to provision a Databricks SQL warehouse.

```hcl
timeouts {
  create = "30m"
}
```
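The block is placed inside the resource definition, for example:

```hcl
resource "databricks_sql_endpoint" "this" {
  name             = "Endpoint of ${data.databricks_current_user.me.alphanumeric}"
  cluster_size     = "Small"
  max_num_clusters = 1

  timeouts {
    create = "30m"
  }
}
```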
You can import a `databricks_sql_endpoint` resource with an ID like the following:

```bash
$ terraform import databricks_sql_endpoint.this <endpoint-id>
```
The following resources are often used in the same context:
- End to end workspace management guide.
- databricks_instance_profile to manage AWS EC2 instance profiles that users can use to launch databricks_cluster and access data, like databricks_mount.
- databricks_sql_dashboard to manage Databricks SQL Dashboards.
- databricks_sql_global_config to configure the security policy, databricks_instance_profile, and data access properties for all databricks_sql_endpoint of a workspace.
- databricks_sql_permissions to manage data object access control lists in Databricks workspaces for things like tables, views, databases, and more.