Reformat code examples (#999)
Run of `terrafmt fmt -p '*.md' .`
nfx committed Dec 22, 2021
1 parent 2ce4496 commit 1a7ddaa
Showing 60 changed files with 651 additions and 647 deletions.
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/provider-issue.md
@@ -39,4 +39,4 @@ TF_LOG=DEBUG terraform plan 2>&1 | grep databricks | sed -E 's/^.* plugin[^:]+: 
If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`.

### Important Factoids
Is there anything atypical about your accounts that we should know?
4 changes: 4 additions & 0 deletions Makefile
@@ -6,6 +6,10 @@ fmt:
@echo "✓ Formatting source code with gofmt ..."
@gofmt -w $(shell find . -type f -name '*.go' -not -path "./vendor/*")

fmt-docs:
@echo "✓ Formatting code samples in documentation"
@terrafmt fmt -p '*.md' .

lint: vendor
@echo "✓ Linting source code with https://staticcheck.io/ ..."
@staticcheck ./...
4 changes: 2 additions & 2 deletions README.md
@@ -69,7 +69,7 @@ If you use Terraform 0.13 or newer, please refer to instructions specified at [r
terraform {
required_providers {
databricks = {
source = "databrickslabs/databricks"
version = "0.4.1"
}
}
@@ -80,7 +80,7 @@ Then create a small sample file, named `main.tf` with approximately following co

```terraform
provider "databricks" {
host = "https://abc-defg-024.cloud.databricks.com/"
token = "<your PAT token>"
}
2 changes: 1 addition & 1 deletion docs/data-sources/aws_assume_role_policy.md
@@ -55,4 +55,4 @@ resource "databricks_mws_credentials" "this" {

In addition to all arguments above, the following attributes are exported:

* `json` - AWS IAM Policy JSON document
8 changes: 4 additions & 4 deletions docs/data-sources/aws_bucket_policy.md
@@ -9,8 +9,8 @@ This datasource configures a simple access policy for AWS S3 buckets, so that Da

```hcl
resource "aws_s3_bucket" "this" {
bucket = "<unique_bucket_name>"
acl = "private"
force_destroy = true
}
@@ -19,8 +19,8 @@ data "databricks_aws_bucket_policy" "stuff" {
}
resource "aws_s3_bucket_policy" "this" {
bucket = aws_s3_bucket.this.id
policy = data.databricks_aws_bucket_policy.this.json
}
```

6 changes: 3 additions & 3 deletions docs/data-sources/clusters.md
@@ -13,16 +13,16 @@ Retrieve all clusters on this workspace on AWS or GCP:

```hcl
data "databricks_clusters" "all" {
depends_on = [databricks_mws_workspaces.this]
}
```

Retrieve all clusters with "Shared" in their cluster name on this Azure Databricks workspace:

```hcl
data "databricks_clusters" "all_shared" {
depends_on = [azurerm_databricks_workspace.this]
cluster_name_contains = "shared"
}
```
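
The matching cluster IDs can then be referenced elsewhere in the configuration. A minimal sketch, assuming the data source's exported `ids` attribute (not shown in this hunk):

```hcl
# Expose every cluster ID returned by the "all_shared" lookup above, e.g. for
# other modules or a quick `terraform output`.
output "shared_cluster_ids" {
  value = data.databricks_clusters.all_shared.ids
}
```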

6 changes: 3 additions & 3 deletions docs/data-sources/dbfs_file.md
@@ -11,8 +11,8 @@ This data source allows to get file content from DBFS

```hcl
data "databricks_dbfs_file" "report" {
path = "dbfs:/reports/some.csv"
limit_file_size = 10240
}
```
## Argument Reference
@@ -25,4 +25,4 @@ data "databricks_dbfs_file" "report" {
This data source exports the following attributes:

* `content` - base64-encoded file contents
* `file_size` - size of the file in bytes
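
Because `content` is base64-encoded, it usually needs decoding before use. A minimal sketch using Terraform's built-in `base64decode` with the `report` example above:

```hcl
# Decode the base64-encoded DBFS file fetched by the data source above.
output "report_csv" {
  value = base64decode(data.databricks_dbfs_file.report.content)
}
```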
6 changes: 3 additions & 3 deletions docs/data-sources/dbfs_file_paths.md
@@ -11,8 +11,8 @@ This data source allows to get list of file names from DBFS

```hcl
data "databricks_dbfs_file_paths" "partitions" {
path = "dbfs:/user/hive/default.db/table"
recursive = false
}
```
## Argument Reference
@@ -24,4 +24,4 @@ data "databricks_dbfs_file_paths" "partitions" {

This data source exports the following attributes:

* `path_list` - returns list of objects with `path` and `file_size` attributes in each
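
To work with just the file names, the `path_list` objects can be projected with a `for` expression. A minimal sketch based on the `partitions` example above:

```hcl
# Collect only the `path` attribute from each object in `path_list`.
output "partition_paths" {
  value = [for f in data.databricks_dbfs_file_paths.partitions.path_list : f.path]
}
```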
6 changes: 3 additions & 3 deletions docs/data-sources/group.md
@@ -13,15 +13,15 @@ Adding user to administrative group

```hcl
data "databricks_group" "admins" {
display_name = "admins"
}
resource "databricks_user" "me" {
user_name = "me@example.com"
}
resource "databricks_group_member" "my_member_a" {
group_id = data.databricks_group.admins.id
member_id = databricks_user.me.id
}
```
26 changes: 13 additions & 13 deletions docs/data-sources/node_type.md
@@ -13,26 +13,26 @@ Gets the smallest node type for [databricks_cluster](../resources/cluster.md) th

```hcl
data "databricks_node_type" "with_gpu" {
local_disk = true
min_cores = 16
gb_per_core = 1
min_gpus = 1
}
data "databricks_spark_version" "gpu_ml" {
gpu = true
ml = true
}
resource "databricks_cluster" "research" {
cluster_name = "Research Cluster"
spark_version = data.databricks_spark_version.gpu_ml.id
node_type_id = data.databricks_node_type.with_gpu.id
autotermination_minutes = 20
autoscale {
min_workers = 1
max_workers = 50
}
}
```

6 changes: 3 additions & 3 deletions docs/data-sources/notebook.md
@@ -11,8 +11,8 @@ This data source allows to export a notebook from workspace

```hcl
data "databricks_notebook" "features" {
path = "/Production/Features"
format = "SOURCE"
}
```

@@ -28,4 +28,4 @@ This data source exports the following attributes:
* `content` - notebook content in selected format
* `language` - notebook language
* `object_id` - notebook object ID
* `object_type` - notebook object type
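
These attributes can be referenced like any other data source result. A minimal sketch based on the `features` example above:

```hcl
# Surface the exported notebook metadata for use elsewhere in the configuration.
output "features_notebook" {
  value = {
    language  = data.databricks_notebook.features.language
    object_id = data.databricks_notebook.features.object_id
  }
}
```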
4 changes: 2 additions & 2 deletions docs/data-sources/notebook_paths.md
@@ -11,8 +11,8 @@ This data source allows to list notebooks in the workspace

```hcl
data "databricks_notebook_paths" "prod" {
path = "/Production"
recursive = true
}
```

26 changes: 13 additions & 13 deletions docs/data-sources/spark_version.md
@@ -13,26 +13,26 @@ Gets Databricks Runtime (DBR) version that could be used for `spark_version` par

```hcl
data "databricks_node_type" "with_gpu" {
local_disk = true
min_cores = 16
gb_per_core = 1
min_gpus = 1
}
data "databricks_spark_version" "gpu_ml" {
gpu = true
ml = true
}
resource "databricks_cluster" "research" {
cluster_name = "Research Cluster"
spark_version = data.databricks_spark_version.gpu_ml.id
node_type_id = data.databricks_node_type.with_gpu.id
autotermination_minutes = 20
autoscale {
min_workers = 1
max_workers = 50
}
}
```

6 changes: 3 additions & 3 deletions docs/data-sources/user.md
@@ -14,15 +14,15 @@ Adding user to administrative group

```hcl
data "databricks_group" "admins" {
display_name = "admins"
}
data "databricks_user" "me" {
user_name = "me@example.com"
}
resource "databricks_group_member" "my_member_a" {
group_id = data.databricks_group.admins.id
member_id = data.databricks_user.me.id
}
```
52 changes: 26 additions & 26 deletions docs/guides/aws-e2-firewall-hub-and-spoke.md
@@ -61,17 +61,17 @@ variable "prefix" {
}
locals {
prefix = "${var.prefix}${random_string.naming.result}"
spoke_db_private_subnets_cidr = [cidrsubnet(var.spoke_cidr_block, 3, 0), cidrsubnet(var.spoke_cidr_block, 3, 1)]
spoke_tgw_private_subnets_cidr = [cidrsubnet(var.spoke_cidr_block, 3, 2), cidrsubnet(var.spoke_cidr_block, 3, 3)]
hub_tgw_private_subnets_cidr = [cidrsubnet(var.hub_cidr_block, 3, 0)]
hub_nat_public_subnets_cidr = [cidrsubnet(var.hub_cidr_block, 3, 1)]
hub_firewall_subnets_cidr = [cidrsubnet(var.hub_cidr_block, 3, 2)]
sg_egress_ports = [443, 3306, 6666]
sg_ingress_protocol = ["tcp", "udp"]
sg_egress_protocol = ["tcp", "udp"]
availability_zones = ["${var.region}a", "${var.region}b"]
db_root_bucket = "${var.prefix}${random_string.naming.result}-rootbucket.s3.amazonaws.com"
}
```
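
For reference, `cidrsubnet(prefix, newbits, netnum)` carves subnet number `netnum` out of `prefix` after extending its mask by `newbits` bits, so with a `/16` parent block each result above is a `/19`. A minimal sketch with an illustrative CIDR (the real values come from `var.spoke_cidr_block` and `var.hub_cidr_block`):

```hcl
# Illustrative only: assumes spoke_cidr_block = "10.1.0.0/16".
output "cidrsubnet_example" {
  value = [
    cidrsubnet("10.1.0.0/16", 3, 0), # "10.1.0.0/19"  -> first spoke DB subnet
    cidrsubnet("10.1.0.0/16", 3, 1), # "10.1.32.0/19" -> second spoke DB subnet
    cidrsubnet("10.1.0.0/16", 3, 2), # "10.1.64.0/19" -> first spoke TGW subnet
  ]
}
```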

@@ -94,9 +94,9 @@ terraform {
version = "0.4.1"
}
aws = {
source = "hashicorp/aws"
version = "3.49.0"
}
}
}
}
@@ -269,12 +269,12 @@ module "vpc_endpoints" {
endpoints = {
s3 = {
service = "s3"
service_type = "Gateway"
route_table_ids = flatten([
aws_route_table.spoke_db_private_rt.id
])
tags = {
Name = "${local.prefix}-s3-vpc-endpoint"
}
},
@@ -348,7 +348,7 @@ resource "aws_subnet" "hub_firewall_subnet" {
cidr_block = element(local.hub_firewall_subnets_cidr, count.index)
availability_zone = element(local.availability_zones, count.index)
map_public_ip_on_launch = false
tags = merge(var.tags, {
Name = "${local.prefix}-hub-firewall-public-${element(local.availability_zones, count.index)}"
})
}
@@ -486,8 +486,8 @@ resource "aws_ec2_transit_gateway_vpc_attachment" "hub" {
transit_gateway_default_route_table_association = true
transit_gateway_default_route_table_propagation = true
tags = merge(var.tags, {
Name = "${local.prefix}-hub"
Purpose = "Transit Gateway Attachment - Hub VPC"
})
}
@@ -501,8 +501,8 @@ resource "aws_ec2_transit_gateway_vpc_attachment" "spoke" {
transit_gateway_default_route_table_association = true
transit_gateway_default_route_table_propagation = true
tags = merge(var.tags, {
Name = "${local.prefix}-spoke"
Purpose = "Transit Gateway Attachment - Spoke VPC"
})
}
@@ -559,14 +559,14 @@ resource "aws_networkfirewall_rule_group" "databricks_fqdns_rg" {
rules_source_list {
generated_rules_type = "ALLOWLIST"
target_types = ["TLS_SNI", "HTTP_HOST"]
targets = concat([var.db_web_app, var.db_tunnel, var.db_rds, local.db_root_bucket], var.whitelisted_urls)
}
}
rule_variables {
ip_sets {
key = "HOME_NET"
ip_set {
definition = [var.spoke_cidr_block, var.hub_cidr_block]
}
}
}
@@ -594,7 +594,7 @@ resource "aws_networkfirewall_rule_group" "allow_db_cpl_protocols_rg" {
ip_sets {
key = "HOME_NET"
ip_set {
definition = [var.spoke_cidr_block, var.hub_cidr_block]
}
}
}
@@ -636,7 +636,7 @@ resource "aws_networkfirewall_rule_group" "deny_protocols_rg" {
ip_sets {
key = "HOME_NET"
ip_set {
definition = [var.spoke_cidr_block, var.hub_cidr_block]
}
}
}
@@ -738,4 +738,4 @@ resource "aws_route" "db_igw_nat_firewall" {
```

## Troubleshooting
If the Databricks clusters cannot reach DBFS, or if VPC endpoints do not work as intended (for example, your data sources are inaccessible or traffic is bypassing the endpoints), please visit [Troubleshoot regional endpoints](https://docs.databricks.com/administration-guide/cloud-configurations/aws/customer-managed-vpc.html#troubleshoot-regional-endpoints)