initial support for GCP preemptible executors
alexott committed Mar 23, 2021
1 parent ecf402e commit 71de544
Showing 3 changed files with 27 additions and 3 deletions.
9 changes: 9 additions & 0 deletions compute/model.go
@@ -140,6 +140,13 @@ type AzureAttributes struct {
SpotBidMaxPrice float64 `json:"spot_bid_max_price,omitempty" tf:"computed"`
}

// GcpAttributes encapsulates GCP-specific attributes
// https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#clustergcpattributes
type GcpAttributes struct {
UsePreemptibleExecutors bool `json:"use_preemptible_executors,omitempty" tf:"computed"`
GoogleServiceAccount string `json:"google_service_account,omitempty" tf:"computed"`
}

// DbfsStorageInfo contains the destination string for DBFS
type DbfsStorageInfo struct {
Destination string `json:"destination"`
@@ -257,6 +264,7 @@ type Cluster struct {
PolicyID string `json:"policy_id,omitempty"`
AwsAttributes *AwsAttributes `json:"aws_attributes,omitempty" tf:"conflicts:instance_pool_id"`
AzureAttributes *AzureAttributes `json:"azure_attributes,omitempty" tf:"conflicts:instance_pool_id"`
GcpAttributes *GcpAttributes `json:"gcp_attributes,omitempty" tf:"conflicts:instance_pool_id"`
AutoterminationMinutes int32 `json:"autotermination_minutes,omitempty"`

SparkConf map[string]string `json:"spark_conf,omitempty"`
@@ -292,6 +300,7 @@ type ClusterInfo struct {
SparkConf map[string]string `json:"spark_conf,omitempty"`
AwsAttributes *AwsAttributes `json:"aws_attributes,omitempty"`
AzureAttributes *AzureAttributes `json:"azure_attributes,omitempty"`
GcpAttributes *GcpAttributes `json:"gcp_attributes,omitempty"`
NodeTypeID string `json:"node_type_id,omitempty"`
DriverNodeTypeID string `json:"driver_node_type_id,omitempty"`
SSHPublicKeys []string `json:"ssh_public_keys,omitempty"`
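As an illustration (not part of the commit), here is a self-contained sketch of how the new field serializes into a Clusters API request body. The structs are trimmed-down local copies of the ones in `compute/model.go`, and the node type value is a placeholder; `omitempty` keeps unset cloud blocks out of the payload:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// GcpAttributes mirrors the new struct above (tf tags omitted for brevity).
type GcpAttributes struct {
	UsePreemptibleExecutors bool   `json:"use_preemptible_executors,omitempty"`
	GoogleServiceAccount    string `json:"google_service_account,omitempty"`
}

// Cluster is reduced to the fields relevant to this example.
type Cluster struct {
	NodeTypeID    string         `json:"node_type_id,omitempty"`
	GcpAttributes *GcpAttributes `json:"gcp_attributes,omitempty"`
}

func main() {
	c := Cluster{
		NodeTypeID:    "n1-standard-4",
		GcpAttributes: &GcpAttributes{UsePreemptibleExecutors: true},
	}
	body, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(body))
	// Output:
	// {
	//   "node_type_id": "n1-standard-4",
	//   "gcp_attributes": {
	//     "use_preemptible_executors": true
	//   }
	// }
}
```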
8 changes: 6 additions & 2 deletions compute/resource_cluster.go
@@ -68,8 +68,9 @@ func resourceClusterSchema() map[string]*schema.Schema {
Optional: true,
Computed: true,
}
s["aws_attributes"].ConflictsWith = []string{"azure_attributes"}
s["azure_attributes"].ConflictsWith = []string{"aws_attributes"}
s["aws_attributes"].ConflictsWith = []string{"azure_attributes", "gcp_attributes"}
s["azure_attributes"].ConflictsWith = []string{"aws_attributes", "gcp_attributes"}
s["gcp_attributes"].ConflictsWith = []string{"aws_attributes", "azure_attributes"}
s["is_pinned"] = &schema.Schema{
Type: schema.TypeBool,
Optional: true,
@@ -326,6 +327,9 @@ func modifyClusterRequest(clusterModel *Cluster) {
if clusterModel.AzureAttributes != nil {
clusterModel.AzureAttributes = nil
}
if clusterModel.GcpAttributes != nil {
clusterModel.GcpAttributes = nil
}
clusterModel.EnableElasticDisk = false
clusterModel.NodeTypeID = ""
clusterModel.DriverNodeTypeID = ""
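A minimal sketch of the pattern the hunk above extends: `modifyClusterRequest` clears cloud-specific blocks, now including `GcpAttributes`, from the request. The instance-pool guard and trimmed types here are assumptions for illustration, not the provider's actual code:

```go
package main

import "fmt"

// Trimmed stand-ins for the provider's types.
type GcpAttributes struct {
	UsePreemptibleExecutors bool
}

type Cluster struct {
	InstancePoolID string
	NodeTypeID     string
	GcpAttributes  *GcpAttributes
}

// modifyClusterRequest drops node- and cloud-specific settings when the
// cluster is backed by an instance pool, since the pool already fixes them.
func modifyClusterRequest(c *Cluster) {
	if c.InstancePoolID == "" {
		return
	}
	c.GcpAttributes = nil // mirrors the nil-out added in this commit
	c.NodeTypeID = ""
}

func main() {
	c := &Cluster{
		InstancePoolID: "pool-123",
		NodeTypeID:     "n1-standard-4",
		GcpAttributes:  &GcpAttributes{UsePreemptibleExecutors: true},
	}
	modifyClusterRequest(c)
	fmt.Printf("%+v\n", *c) // node type and GCP attributes are cleared
}
```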
13 changes: 12 additions & 1 deletion docs/resources/cluster.md
@@ -196,7 +196,7 @@ cluster_log_conf {

There are a few more advanced attributes for S3 log delivery (see the example after this list):

* `destination` - S3 destination, e.g. `s3://my-bucket/some-prefix`. You must configure the cluster with an instance profile, and the instance profile must have write access to the destination. You cannot use AWS keys.
* `region` - (Optional) S3 region, e.g. `us-west-2`. Either `region` or `endpoint` must be set. If both are set, the endpoint is used.
* `endpoint` - (Optional) S3 endpoint, e.g. `https://s3-us-west-2.amazonaws.com`. Either `region` or `endpoint` must be set. If both are set, the endpoint is used.
* `enable_encryption` - (Optional) Enable server-side encryption, false by default.
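For reference, a sketch of how these options typically combine in a `cluster_log_conf` block; the bucket, prefix, and region are placeholders, and the `s3` sub-block name is an assumption based on the provider's schema:

```hcl
cluster_log_conf {
  s3 {
    destination       = "s3://my-bucket/some-prefix"
    region            = "us-west-2"
    enable_encryption = true
  }
}
```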
@@ -299,6 +299,17 @@ The following options are available:
* `first_on_demand` - (Optional) The first `first_on_demand` nodes of the cluster will be placed on on-demand instances. If this value is greater than 0, the cluster driver node will be placed on an on-demand instance. If this value is greater than or equal to the current cluster size, all nodes will be placed on on-demand instances. If this value is less than the current cluster size, `first_on_demand` nodes will be placed on on-demand instances, and the remainder will be placed on availability instances. This value does not affect cluster size and cannot be mutated over the lifetime of a cluster.
* `spot_bid_max_price` - (Optional) The max price for Azure spot instances.

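For example, a hypothetical `azure_attributes` block using these settings; the `availability` attribute and its value are assumptions, since they are not shown above:

```hcl
azure_attributes {
  availability       = "SPOT_WITH_FALLBACK_AZURE"
  first_on_demand    = 1
  spot_bid_max_price = -1 # assumed convention: -1 caps the bid at the on-demand price
}
```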
## gcp_attributes

The optional `gcp_attributes` configuration block contains attributes related to [clusters running on GCP](https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#clustergcpattributes).

-> **Note** *(GCP only)* Please specify an empty configuration block (`gcp_attributes {}`) even if you're not setting any custom values. This prevents resource update issues.

The following options are available (see the combined example below):

* `use_preemptible_executors` - (Optional, bool) Whether to use preemptible executors ([GCP documentation](https://cloud.google.com/compute/docs/instances/preemptible)).
* `google_service_account` - (Optional, string) Google Service Account email address that the cluster uses to authenticate with Google Identity. This field is used for authentication with the GCS and BigQuery data sources.

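Putting the new options together, a minimal `gcp_attributes` block might look like this (the service account address is a placeholder):

```hcl
gcp_attributes {
  use_preemptible_executors = true
  google_service_account    = "cluster-sa@my-project.iam.gserviceaccount.com"
}
```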
## docker_image

[Databricks Container Services](https://docs.databricks.com/clusters/custom-containers.html) lets you specify a Docker image when you create a cluster. You need to enable Container Services on the *Admin Console / Advanced* page in the user interface. By enabling this feature, you acknowledge and agree that your usage of this feature is subject to the [applicable additional terms](http://www.databricks.com/product-specific-terms).
