
Correct names for nested object properties. #132

Merged
merged 24 commits into from
Apr 4, 2019
Changes from 12 commits
- `597a39f` Inspec cloudfunction (slevenick, Mar 19, 2019)
- `4ba4a00` Merge pull request #131 from modular-magician/codegen-pr-1545 (slevenick, Mar 19, 2019)
- `3f8cf92` Remove duplicated method due to merges (slevenick, Mar 19, 2019)
- `4d13b4c` Merge pull request #132 from modular-magician/codegen-pr-1548 (slevenick, Mar 19, 2019)
- `a9bf9f8` Adding support for backend buckets in InSpec (slevenick, Mar 20, 2019)
- `79c4cd0` Merge pull request #133 from modular-magician/codegen-pr-1549 (slevenick, Mar 20, 2019)
- `51b1cee` Adding support for backend buckets in InSpec (slevenick, Mar 20, 2019)
- `cc64fa6` Merge pull request #134 from modular-magician/codegen-pr-1549 (slevenick, Mar 20, 2019)
- `c9d5493` Use out_name instead of name for nested properties in markdown doc (slevenick, Mar 22, 2019)
- `f18b3f5` Merge pull request #135 from modular-magician/codegen-pr-1566 (slevenick, Mar 22, 2019)
- `7a0a9b6` Use environment variable instead of MM-specific for cloudfunction region (slevenick, Mar 28, 2019)
- `d1f6550` Merge branch 'backend-buckets-cloudfunctions' (slevenick, Mar 28, 2019)
- `ee9f99f` Refactor cloudfunction region variable (slevenick, Mar 28, 2019)
- `26e6965` Merge pull request #136 from modular-magician/codegen-pr-1585 (slevenick, Mar 28, 2019)
- `5224af4` Add backend bucket signed URL key (for CDN) support (emilymye, Mar 28, 2019)
- `3e1ee00` Fix reference to variable in gcp-mm.tf (slevenick, Mar 29, 2019)
- `2931a8f` Merge pull request #141 from modular-magician/codegen-pr-1593 (slevenick, Mar 29, 2019)
- `a09ef08` Update region for cloudfunction; europe not working (slevenick, Apr 1, 2019)
- `fa2f7a2` Merge pull request #127 from modular-magician/codegen-pr-1504 (slevenick, Apr 1, 2019)
- `a7dd338` More Cloudbuild Trigger Step support (#143) (modular-magician, Apr 2, 2019)
- `e99a3e2` Merge branch 'master' of https://github.com/inspec/inspec-gcp (slevenick, Apr 2, 2019)
- `bfa10b2` Missing comma (slevenick, Apr 2, 2019)
- `61a327c` Add fingerprint, security_policy to BackendService (#142) (modular-magician, Apr 2, 2019)
- `0f1f0d5` Set size to 1 to preserve resources, variablize the node pool size (slevenick, Apr 3, 2019)
1 change: 1 addition & 0 deletions README.md
@@ -305,6 +305,7 @@ export GCP_LB_ZONE_MIG3="us-central1-c"
export GCP_KUBE_CLUSTER_ZONE="us-central1-a"
export GCP_KUBE_CLUSTER_ZONE_EXTRA1="us-central1-b"
export GCP_KUBE_CLUSTER_ZONE_EXTRA2="us-central1-c"
export GCP_CLOUD_FUNCTION_REGION="us-central1"
```

Other regions can be targeted by updating the above. For example, see [https://cloud.google.com/compute/docs/regions-zones/](https://cloud.google.com/compute/docs/regions-zones/) for suitable values.
10 changes: 5 additions & 5 deletions docs/resources/google_bigquery_dataset.md
@@ -39,23 +39,23 @@ Properties that can be accessed from the `google_bigquery_dataset` resource:

* `domain`: A domain to grant access to. Any users signed in with the domain specified will be granted the specified access

* `groupByEmail`: An email address of a Google Group to grant access to
* `group_by_email`: An email address of a Google Group to grant access to

* `role`: Describes the rights granted to the user specified by the other member of the access object

* `specialGroup`: A special group to grant access to.
* `special_group`: A special group to grant access to.

* `userByEmail`: An email address of a user to grant access to. For example: fred@example.com
* `user_by_email`: An email address of a user to grant access to. For example: fred@example.com

* `view`: A view from a different dataset to grant access to. Queries executed against that view will have read access to tables in this dataset. The role field is not required when this field is set. If that view is updated by any user, access to the view needs to be granted again via an update operation.

* `creation_time`: The time when this dataset was created, in milliseconds since the epoch.

* `dataset_reference`: A reference that identifies the dataset.

* `datasetId`: A unique ID for this dataset, without the project name. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
* `dataset_id`: A unique ID for this dataset, without the project name. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.

* `projectId`: The ID of the project containing this dataset.
* `project_id`: The ID of the project containing this dataset.

* `default_table_expiration_ms`: The default lifetime of all tables in the dataset, in milliseconds
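
The renames above all follow one mechanical rule: the generated docs now show the Ruby-style snake_case accessor names instead of the API's camelCase field names. A minimal Ruby sketch of that rule — the method name and sample fields are illustrative, not taken from the generator's code:

```ruby
# Illustrative sketch of the camelCase -> snake_case rename rule
# applied throughout this PR; not the generator's actual implementation.
def snake_case(name)
  name.gsub(/([a-z0-9])([A-Z])/, '\1_\2').downcase
end

%w[groupByEmail specialGroup userByEmail datasetId projectId].each do |field|
  puts "#{field} -> #{snake_case(field)}"
end
# prints e.g. "groupByEmail -> group_by_email"
```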

34 changes: 17 additions & 17 deletions docs/resources/google_bigquery_table.md
@@ -26,11 +26,11 @@ Properties that can be accessed from the `google_bigquery_table` resource:

* `table_reference`: Reference describing the ID of this table

* `datasetId`: The ID of the dataset containing this table
* `dataset_id`: The ID of the dataset containing this table

* `projectId`: The ID of the project containing this table
* `project_id`: The ID of the project containing this table

* `tableId`: The ID of the the table
* `table_id`: The ID of the table

* `creation_time`: The time when this dataset was created, in milliseconds since the epoch.

@@ -58,31 +58,31 @@ Properties that can be accessed from the `google_bigquery_table` resource:

* `view`: The view definition.

* `useLegacySql`: Specifies whether to use BigQuery's legacy SQL for this view
* `use_legacy_sql`: Specifies whether to use BigQuery's legacy SQL for this view

* `userDefinedFunctionResources`: Describes user-defined function resources used in the query.
* `user_defined_function_resources`: Describes user-defined function resources used in the query.

* `time_partitioning`: If specified, configures time-based partitioning for this table.

* `expirationMs`: Number of milliseconds for which to keep the storage for a partition.
* `expiration_ms`: Number of milliseconds for which to keep the storage for a partition.

* `type`: The only type supported is DAY, which will generate one partition per day.

* `streaming_buffer`: Contains information regarding this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.

* `estimatedBytes`: A lower-bound estimate of the number of bytes currently in the streaming buffer.
* `estimated_bytes`: A lower-bound estimate of the number of bytes currently in the streaming buffer.

* `estimatedRows`: A lower-bound estimate of the number of rows currently in the streaming buffer.
* `estimated_rows`: A lower-bound estimate of the number of rows currently in the streaming buffer.

* `oldestEntryTime`: Contains the timestamp of the oldest entry in the streaming buffer, in milliseconds since the epoch, if the streaming buffer is available.
* `oldest_entry_time`: Contains the timestamp of the oldest entry in the streaming buffer, in milliseconds since the epoch, if the streaming buffer is available.

* `schema`: Describes the schema of this table

* `fields`: Describes the fields in a table.

* `encryption_configuration`: Custom encryption configuration

* `kmsKeyName`: Describes the Cloud KMS encryption key that will be used to protect destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
* `kms_key_name`: Describes the Cloud KMS encryption key that will be used to protect destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.

* `expiration_time`: The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely.

@@ -92,21 +92,21 @@ Properties that can be accessed from the `google_bigquery_table` resource:

* `compression`: The compression type of the data source

* `ignoreUnknownValues`: Indicates if BigQuery should allow extra values that are not represented in the table schema
* `ignore_unknown_values`: Indicates if BigQuery should allow extra values that are not represented in the table schema

* `maxBadRecords`: The maximum number of bad records that BigQuery can ignore when reading data
* `max_bad_records`: The maximum number of bad records that BigQuery can ignore when reading data

* `sourceFormat`: The data format
* `source_format`: The data format

* `sourceUris`: The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
* `source_uris`: The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.

* `schema`: The schema for the data. Schema is required for CSV and JSON formats

* `googleSheetsOptions`: Additional options if sourceFormat is set to GOOGLE_SHEETS.
* `google_sheets_options`: Additional options if sourceFormat is set to GOOGLE_SHEETS.

* `csvOptions`: Additional properties to set if sourceFormat is set to CSV.
* `csv_options`: Additional properties to set if sourceFormat is set to CSV.

* `bigtableOptions`: Additional options if sourceFormat is set to BIGTABLE.
* `bigtable_options`: Additional options if sourceFormat is set to BIGTABLE.

* `dataset`: Name of the dataset
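
Because this change targets *nested* object properties — the keys under `table_reference`, `view`, `time_partitioning`, and `external_data_configuration` above — the rename has to be applied recursively, not just to top-level fields. A hedged plain-Ruby sketch of that idea, using illustrative sample data rather than the generator's real code:

```ruby
# Recursively snake_case the keys of a nested API-response hash.
# Sample data below is illustrative only.
def snake_case_keys(obj)
  case obj
  when Hash
    obj.each_with_object({}) do |(key, value), out|
      new_key = key.to_s.gsub(/([a-z0-9])([A-Z])/, '\1_\2').downcase
      out[new_key] = snake_case_keys(value)
    end
  when Array
    obj.map { |item| snake_case_keys(item) }
  else
    obj
  end
end

table = { 'tableReference' => { 'datasetId' => 'ds', 'projectId' => 'p', 'tableId' => 't' } }
p snake_case_keys(table)
```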

10 changes: 5 additions & 5 deletions docs/resources/google_cloudbuild_trigger.md
@@ -43,17 +43,17 @@ Properties that can be accessed from the `google_cloudbuild_trigger` resource:

* `trigger_template`: Template describing the types of source changes to trigger a build. Branch and tag names in trigger templates are interpreted as regular expressions. Any branch or tag change that matches that regular expression will trigger a build.

* `projectId`: ID of the project that owns the Cloud Source Repository. If omitted, the project ID requesting the build is assumed.
* `project_id`: ID of the project that owns the Cloud Source Repository. If omitted, the project ID requesting the build is assumed.

* `repoName`: Name of the Cloud Source Repository. If omitted, the name "default" is assumed.
* `repo_name`: Name of the Cloud Source Repository. If omitted, the name "default" is assumed.

* `dir`: Directory, relative to the source root, in which to run the build. This must be a relative path. If a step's dir is specified and is an absolute path, this value is ignored for that step's execution.

* `branchName`: Name of the branch to build. Exactly one a of branch name, tag, or commit SHA must be provided.
* `branch_name`: Name of the branch to build. Exactly one of a branch name, tag, or commit SHA must be provided.

* `tagName`: Name of the tag to build. Exactly one of a branch name, tag, or commit SHA must be provided.
* `tag_name`: Name of the tag to build. Exactly one of a branch name, tag, or commit SHA must be provided.

* `commitSha`: Explicit commit SHA to build. Exactly one of a branch name, tag, or commit SHA must be provided.
* `commit_sha`: Explicit commit SHA to build. Exactly one of a branch name, tag, or commit SHA must be provided.

* `build`: Contents of the build template. Either a filename or build template must be provided.

78 changes: 78 additions & 0 deletions docs/resources/google_cloudfunctions_cloud_function.md
@@ -0,0 +1,78 @@
---
title: About the google_cloudfunctions_cloud_function resource
platform: gcp
---

## Syntax
A `google_cloudfunctions_cloud_function` is used to test a Google CloudFunction resource

## Examples
```
describe google_cloudfunctions_cloud_function(project: 'chef-gcp-inspec', location: 'europe-west1', name: 'inspec-gcp-function') do
it { should exist }
its('description') { should eq 'A description of the function' }
its('available_memory_mb') { should eq '128' }
its('https_trigger.url') { should match /\/inspec-gcp-function$/ }
its('entry_point') { should eq 'hello' }
its('environment_variables') { should include('MY_ENV_VAR' => 'val1') }
end

describe google_cloudfunctions_cloud_function(project: 'chef-gcp-inspec', location: 'europe-west1', name: 'nonexistent') do
it { should_not exist }
end
```

## Properties
Properties that can be accessed from the `google_cloudfunctions_cloud_function` resource:

* `name`: A user-defined name of the function. Function names must be unique globally and match pattern `projects/*/locations/*/functions/*`.

* `description`: User-provided description of a function.

* `status`: Status of the function deployment.

* `entry_point`: The name of the function (as defined in source code) that will be executed. Defaults to the resource name suffix, if not specified. For backward compatibility, if a function with the given name is not found, the system will try to use a function named "function". For Node.js this is the name of a function exported by the module specified in source_location.

* `runtime`: The runtime in which the function is going to run. If empty, defaults to Node.js 6.

* `timeout`: The function execution timeout. Execution is considered failed and can be terminated if the function is not completed at the end of the timeout period. Defaults to 60 seconds.

* `available_memory_mb`: The amount of memory in MB available for a function.

* `service_account_email`: The email of the service account for this function.

* `update_time`: The last update timestamp of a Cloud Function

* `version_id`: The version identifier of the Cloud Function. Each deployment attempt results in a new version of a function being created.

* `labels`: A set of key/value label pairs associated with this Cloud Function.

* `environment_variables`: Environment variables that shall be available during function execution.

* `source_archive_url`: The Google Cloud Storage URL, starting with gs://, pointing to the zip archive which contains the function.

* `source_upload_url`: The Google Cloud Storage signed URL used for source uploading.

* `source_repository`: The source repository where a function is hosted.

* `url`: The URL pointing to the hosted repository where the function is defined

* `deployed_url`: The URL pointing to the hosted repository where the function was defined at the time of deployment.

* `https_trigger`: An HTTPS endpoint type of source that can be triggered via URL.

* `url`: The deployed url for the function.

* `event_trigger`: A source that fires events in response to a condition in another service.

* `event_type`: The type of event to observe. For example: `providers/cloud.storage/eventTypes/object.change` and `providers/cloud.pubsub/eventTypes/topic.publish`.

* `resource`: The resource(s) from which to observe events, for example, `projects/_/buckets/myBucket.`

* `service`: The hostname of the service that should be observed.



## GCP Permissions

Ensure the [Cloud Functions API](https://console.cloud.google.com/apis/library/cloudfunctions.googleapis.com/) is enabled for the current project.
45 changes: 45 additions & 0 deletions docs/resources/google_cloudfunctions_cloud_functions.md
@@ -0,0 +1,45 @@
---
title: About the google_cloudfunctions_cloud_functions resource
platform: gcp
---

## Syntax
A `google_cloudfunctions_cloud_functions` resource is used to test multiple Google CloudFunction resources

## Examples
```
describe google_cloudfunctions_cloud_functions(project: 'chef-gcp-inspec', location: 'europe-west1') do
its('descriptions') { should include 'A description of the function' }
its('entry_points') { should include 'hello' }
end
```

## Properties
Properties that can be accessed from the `google_cloudfunctions_cloud_functions` resource:

See [google_cloudfunctions_cloud_function.md](google_cloudfunctions_cloud_function.md) for more detailed information
* `names`: an array of `google_cloudfunctions_cloud_function` name
* `descriptions`: an array of `google_cloudfunctions_cloud_function` description
* `statuses`: an array of `google_cloudfunctions_cloud_function` status
* `entry_points`: an array of `google_cloudfunctions_cloud_function` entry_point
* `runtimes`: an array of `google_cloudfunctions_cloud_function` runtime
* `timeouts`: an array of `google_cloudfunctions_cloud_function` timeout
* `available_memory_mbs`: an array of `google_cloudfunctions_cloud_function` available_memory_mb
* `service_account_emails`: an array of `google_cloudfunctions_cloud_function` service_account_email
* `update_times`: an array of `google_cloudfunctions_cloud_function` update_time
* `version_ids`: an array of `google_cloudfunctions_cloud_function` version_id
* `labels`: an array of `google_cloudfunctions_cloud_function` labels
* `environment_variables`: an array of `google_cloudfunctions_cloud_function` environment_variables
* `source_archive_urls`: an array of `google_cloudfunctions_cloud_function` source_archive_url
* `source_upload_urls`: an array of `google_cloudfunctions_cloud_function` source_upload_url
* `source_repositories`: an array of `google_cloudfunctions_cloud_function` source_repository
* `https_triggers`: an array of `google_cloudfunctions_cloud_function` https_trigger
* `event_triggers`: an array of `google_cloudfunctions_cloud_function` event_trigger

## Filter Criteria
This resource supports all of the above properties as filter criteria, which can be used
with `where` as a block or a method.
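
As a rough illustration of what `where` does with those criteria — this is plain Ruby over illustrative sample rows, not the InSpec implementation:

```ruby
# Plain-Ruby sketch of filtering a plural resource's collected rows;
# the struct fields and sample data are illustrative only.
Function = Struct.new(:name, :status, :runtime, keyword_init: true)

rows = [
  Function.new(name: 'inspec-gcp-function', status: 'ACTIVE',  runtime: 'nodejs6'),
  Function.new(name: 'old-function',        status: 'OFFLINE', runtime: 'nodejs6'),
]

# `where(status: 'ACTIVE')` behaves roughly like:
active = rows.select { |r| r.status == 'ACTIVE' }
puts active.map(&:name).inspect
# ["inspec-gcp-function"]
```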

## GCP Permissions

Ensure the [Cloud Functions API](https://console.cloud.google.com/apis/library/cloudfunctions.googleapis.com/) is enabled for the current project.
12 changes: 6 additions & 6 deletions docs/resources/google_compute_autoscaler.md
@@ -36,17 +36,17 @@ Properties that can be accessed from the `google_compute_autoscaler` resource:

* `autoscaling_policy`: The configuration parameters for the autoscaling algorithm. You can define one or more of the policies for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization. If none of these are specified, the default will be to autoscale based on cpuUtilization to 0.6 or 60%.

* `minNumReplicas`: The minimum number of replicas that the autoscaler can scale down to. This cannot be less than 0. If not provided, autoscaler will choose a default value depending on maximum number of instances allowed.
* `min_num_replicas`: The minimum number of replicas that the autoscaler can scale down to. This cannot be less than 0. If not provided, autoscaler will choose a default value depending on maximum number of instances allowed.

* `maxNumReplicas`: The maximum number of instances that the autoscaler can scale up to. This is required when creating or updating an autoscaler. The maximum number of replicas should not be lower than minimal number of replicas.
* `max_num_replicas`: The maximum number of instances that the autoscaler can scale up to. This is required when creating or updating an autoscaler. The maximum number of replicas should not be lower than minimal number of replicas.

* `coolDownPeriodSec`: The number of seconds that the autoscaler should wait before it starts collecting information from a new instance. This prevents the autoscaler from collecting information when the instance is initializing, during which the collected usage would not be reliable. The default time autoscaler waits is 60 seconds. Virtual machine initialization times might vary because of numerous factors. We recommend that you test how long an instance may take to initialize. To do this, create an instance and time the startup process.
* `cool_down_period_sec`: The number of seconds that the autoscaler should wait before it starts collecting information from a new instance. This prevents the autoscaler from collecting information when the instance is initializing, during which the collected usage would not be reliable. The default time autoscaler waits is 60 seconds. Virtual machine initialization times might vary because of numerous factors. We recommend that you test how long an instance may take to initialize. To do this, create an instance and time the startup process.

* `cpuUtilization`: Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.
* `cpu_utilization`: Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.

* `customMetricUtilizations`: Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.
* `custom_metric_utilizations`: Configuration parameters for autoscaling based on one or more custom metrics.

* `loadBalancingUtilization`: Configuration parameters of autoscaling based on a load balancer.
* `load_balancing_utilization`: Configuration parameters of autoscaling based on a load balancer.

* `target`: URL of the managed instance group that this autoscaler will scale.
