Document sql_task configuration block in databricks_job resource
alexott committed Sep 9, 2022
1 parent d1d468a commit abe8d7a
Showing 1 changed file with 10 additions and 0 deletions.
10 changes: 10 additions & 0 deletions docs/resources/job.md
@@ -187,6 +187,16 @@ You can invoke Spark submit tasks only on new clusters. **In the `new_cluster` s

You also need to include a `git_source` block to configure the repository that contains the dbt project.

### sql_task Configuration Block

One of the `query`, `dashboard`, or `alert` blocks must be provided, as shown in the example after this list.

* `warehouse_id` - (Required) ID of the SQL warehouse (the [databricks_sql_endpoint](sql_endpoint.md)) that will be used to execute the task. Only serverless warehouses are supported right now.
* `parameters` - (Optional) (Map) parameters to be used for each run of this task. The SQL alert task does not support custom parameters.
* `query` - (Optional) block consisting of a single string field: `query_id` - identifier of the Databricks SQL Query ([databricks_sql_query](sql_query.md)).
* `dashboard` - (Optional) block consisting of a single string field: `dashboard_id` - identifier of the Databricks SQL Dashboard ([databricks_sql_dashboard](sql_dashboard.md)).
* `alert` - (Optional) block consisting of a single string field: `alert_id` - identifier of the Databricks SQL Alert.
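
As an illustration, here is a minimal `sql_task` sketch that runs a saved query on a serverless warehouse. The `databricks_sql_endpoint.example` and `databricks_sql_query.example` references are hypothetical resources assumed to be defined elsewhere, and the task is shown inside a multi-task `task` block:

```hcl
resource "databricks_job" "sql_example" {
  name = "SQL task example"

  task {
    task_key = "run_query"

    sql_task {
      # serverless SQL warehouse that executes the task (hypothetical reference)
      warehouse_id = databricks_sql_endpoint.example.id

      # one of query, dashboard, or alert; here a saved Databricks SQL query
      query {
        query_id = databricks_sql_query.example.id
      }

      # optional per-run parameters (not supported for alert tasks)
      parameters = {
        run_date = "2022-09-09"
      }
    }
  }
}
```

A `dashboard` or `alert` block would take the place of the `query` block in the same position.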

### email_notifications Configuration Block

* `on_failure` - (Optional) (List) list of emails to notify on failure
