
MM updates #280

Merged
38 commits, merged Sep 29, 2020
Changes from all commits (38 commits)
59d32bb
fix typo (#3369) (#413)
modular-magician Jun 19, 2020
6300d63
Fix check for serial port disabled (#3695) (#415)
modular-magician Jun 19, 2020
b34219f
Add mode enum and scale down controls for Compute AutoScaler (#3693) …
modular-magician Jun 23, 2020
495a08f
added support for shielded nodes in container (#3639) (#417)
modular-magician Jul 7, 2020
2eb472b
fix memcache_parameters (#3733) (#418)
modular-magician Jul 8, 2020
b9fd5a0
add tiers and nfs_export_options (#3766) (#419)
modular-magician Jul 22, 2020
92356e2
Add skip enum value generation (#3767) (#420)
modular-magician Jul 23, 2020
b4396e6
Backend service support for internet NEG backend (#3782) (#421)
modular-magician Jul 25, 2020
3a9f29a
add firewall logging controls (#3780) (#422)
modular-magician Jul 28, 2020
932ef02
Fix colon in doc notes (#3796) (#423)
modular-magician Jul 29, 2020
9ee8357
Add persistence_iam_identity to Redis Instance (#3805) (#424)
modular-magician Aug 3, 2020
64ec4dc
Org Security Policies (Hierarchical Firewalls) (#3626) (#425)
modular-magician Aug 4, 2020
660eaf7
Adding Missing Cloud Build Attributes (#3627) (#426)
modular-magician Aug 5, 2020
55b6655
Add additional fields to Memcached Instance (#3821) (#427)
modular-magician Aug 5, 2020
bd1fe66
Convert inboundServices to an enum. (#3820) (#428)
modular-magician Aug 6, 2020
8baa33b
add source_image and source_snapshot to google_compute_image (#3799) …
modular-magician Aug 7, 2020
a60496e
Collection fixes for release (#3831) (#430)
modular-magician Aug 10, 2020
b4e8635
Add new field filter to pubsub. (#3759) (#431)
modular-magician Aug 11, 2020
b8772a7
Add archive class to gcs (#3867) (#432)
modular-magician Aug 14, 2020
706f919
Add support for gRPC healthchecks (#3825) (#433)
modular-magician Aug 17, 2020
5ae9667
Add enableMessageOrdering to Pub/Sub Subscription (#3872) (#434)
modular-magician Aug 17, 2020
8f497d6
Specify possible values for arg only once (#3874) (#435)
modular-magician Aug 17, 2020
aef0355
use {product}.googleapis.com endpoints (#3755) (#436)
modular-magician Aug 17, 2020
97d1dd4
Add vpcAccessConnector property on google_app_engine_standard_app_ver…
modular-magician Aug 19, 2020
62c0823
retrypolicy attribute added (#3843) (#438)
modular-magician Aug 21, 2020
4740dc9
add discovery endpoint (#3891) (#439)
modular-magician Aug 24, 2020
a7532ad
Advanced logging config options in google_compute_subnetwork (#3603) …
modular-magician Aug 25, 2020
5e860a3
Add Erase Windows VSS support to compute disk (#3898) (#441)
modular-magician Aug 27, 2020
7173605
Add Snapshot location to compute snapshot (#3896) (#442)
modular-magician Sep 4, 2020
054a8cd
Added missing 'all' option for protocol firewall rule (#3962) (#443)
modular-magician Sep 9, 2020
76663bb
Revert `eraseWindowsVssSignature` field and test (#3959) (#444)
modular-magician Sep 9, 2020
40c7ac9
Added support GRPC for google_compute_(region)_backend_service.protoc…
modular-magician Sep 11, 2020
f722276
Added properties of options & artifacts on google_cloudbuild_trigger …
modular-magician Sep 16, 2020
a5798d0
products/container: Add datapath provider field (#3956) (#447)
modular-magician Sep 16, 2020
3edd18f
Add SEV_CAPABLE option to google_compute_image (#3994) (#448)
modular-magician Sep 21, 2020
da36f4c
Add network peerings for inspec (#4002) (#449)
modular-magician Sep 22, 2020
37f3853
Update docs for pubsub targets in cloud scheduler (#4008) (#450)
modular-magician Sep 23, 2020
e08a6b8
Merge branch 'master' into master
Sep 29, 2020
10 changes: 7 additions & 3 deletions docs/resources/google_appengine_standard_app_version.md
@@ -22,13 +22,17 @@ Properties that can be accessed from the `google_appengine_standard_app_version`

* `name`: Full path to the Version resource in the API. Example, "v1".

* `version_id`: Relative name of the version within the service. For example, `v1`. Version names can contain only lowercase letters, numbers, or hyphens. Reserved names: "default", "latest", and any name with the prefix "ah-".

* `runtime`: Desired runtime. Example python27.

* `threadsafe`: Whether multiple requests can be dispatched to this version at once.

* `vpc_access_connector`: Enables VPC connectivity for standard apps.

* `name`: Full Serverless VPC Access Connector name, e.g. `/projects/my-project/locations/us-central1/connectors/c1`.

* `inbound_services`: A list of the types of messages that this application is able to receive.

* `instance_class`: Instance class that is used to run this version. Valid values are AutomaticScaling: F1, F2, F4, F4_1G BasicScaling or ManualScaling: B1, B2, B4, B4_1G, B8 Defaults to F1 for AutomaticScaling and B2 for ManualScaling and BasicScaling. If no scaling is specified, AutomaticScaling is chosen.
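The instance-class rules above can be restated as a small sketch (not part of this PR; constant and method names are hypothetical, the class lists and defaults are the ones documented above):

```ruby
# Sketch of the instance_class rules described above; constant names are
# hypothetical, values are the ones listed in the documentation.
VALID_INSTANCE_CLASSES = {
  automatic_scaling: %w[F1 F2 F4 F4_1G],
  basic_scaling:     %w[B1 B2 B4 B4_1G B8],
  manual_scaling:    %w[B1 B2 B4 B4_1G B8],
}.freeze

# Documented defaults: F1 for AutomaticScaling, B2 for Basic/ManualScaling.
DEFAULT_INSTANCE_CLASS = {
  automatic_scaling: 'F1',
  basic_scaling:     'B2',
  manual_scaling:    'B2',
}.freeze

# Returns true when the class is allowed for the given scaling type.
def valid_instance_class?(scaling, klass)
  VALID_INSTANCE_CLASSES.fetch(scaling).include?(klass)
end
```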

@@ -62,7 +66,7 @@ Properties that can be accessed from the `google_appengine_standard_app_version`

* `manual_scaling`: A service with manual scaling runs continuously, allowing you to perform complex initialization and rely on the state of its memory over time.

* `instances`: Number of instances to assign to the service at the start. **Note:** When managing the number of instances at runtime through the App Engine Admin API or the (now deprecated) Python 2 Modules API set_num_instances() you must use `lifecycle.ignore_changes = ["manual_scaling"[0].instances]` to prevent drift detection.


## GCP Permissions
1 change: 1 addition & 0 deletions docs/resources/google_appengine_standard_app_versions.md
@@ -22,6 +22,7 @@ See [google_appengine_standard_app_version.md](google_appengine_standard_app_ver
* `version_ids`: an array of `google_appengine_standard_app_version` version_id
* `runtimes`: an array of `google_appengine_standard_app_version` runtime
* `threadsaves`: an array of `google_appengine_standard_app_version` threadsafe
* `vpc_access_connectors`: an array of `google_appengine_standard_app_version` vpc_access_connector
* `inbound_services`: an array of `google_appengine_standard_app_version` inbound_services
* `instance_classes`: an array of `google_appengine_standard_app_version` instance_class
* `automatic_scalings`: an array of `google_appengine_standard_app_version` automatic_scaling
4 changes: 2 additions & 2 deletions docs/resources/google_cloud_scheduler_job.md
@@ -31,7 +31,7 @@ Properties that can be accessed from the `google_cloud_scheduler_job` resource:

* `time_zone`: Specifies the time zone to be used in interpreting schedule. The value of this field must be a time zone name from the tz database.

* `attempt_deadline`: The deadline for job attempts. If the request handler does not respond by this deadline then the request is cancelled and the attempt is marked as a DEADLINE_EXCEEDED failure. The failed attempt can be viewed in execution logs. Cloud Scheduler will retry the job according to the RetryConfig. The allowed duration for this deadline is: * For HTTP targets, between 15 seconds and 30 minutes. * For App Engine HTTP targets, between 15 seconds and 24 hours. * **Note**: For PubSub targets, this field is ignored - setting it will introduce an unresolvable diff. A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s"
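The duration format noted above (seconds with up to nine fractional digits, terminated by 's') can be checked with a small sketch like this (the constant and helper names are illustrative, not part of the resource):

```ruby
# Validates strings like "3.5s" or "1800s": integer seconds, an optional
# fraction of up to nine digits, and a trailing 's'.
ATTEMPT_DEADLINE_RE = /\A\d+(\.\d{1,9})?s\z/

def valid_attempt_deadline?(value)
  !!(ATTEMPT_DEADLINE_RE.match(value))
end
```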

* `retry_config`: By default, if a job does not complete successfully, meaning that an acknowledgement is not received from the handler, then it will be retried with exponential backoff according to the settings

@@ -47,7 +47,7 @@ Properties that can be accessed from the `google_cloud_scheduler_job` resource:

* `pubsub_target`: Pub/Sub target. If the job provides a Pub/Sub target, the cron will publish a message to the provided topic.

* `topic_name`: The full resource name for the Cloud Pub/Sub topic to which messages will be published when a job is delivered. ~>**NOTE:** The topic name must be in the same format as required by PubSub's PublishRequest.name, e.g. `projects/my-project/topics/my-topic`.
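A sketch of checking the PublishRequest.name format mentioned in the note (the regex and helper are illustrative assumptions, not part of the resource):

```ruby
# Matches the documented form "projects/my-project/topics/my-topic".
TOPIC_NAME_RE = %r{\Aprojects/[^/]+/topics/[^/]+\z}

def valid_topic_name?(name)
  !!(TOPIC_NAME_RE.match(name))
end
```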

* `data`: The message payload for PubsubMessage. Pubsub message must contain either non-empty data, or at least one attribute.

108 changes: 108 additions & 0 deletions docs/resources/google_cloudbuild_trigger.md
@@ -32,6 +32,8 @@ Properties that can be accessed from the `google_cloudbuild_trigger` resource:

* `description`: Human-readable description of the trigger.

* `tags`: Tags for annotation of a BuildTrigger.

* `disabled`: Whether the trigger is disabled or not. If true, the trigger will never result in a build.

* `create_time`: Time when the trigger was created.
@@ -87,12 +89,52 @@ Properties that can be accessed from the `google_cloudbuild_trigger` resource:

* `build`: Contents of the build template. Either a filename or build template must be provided.

* `source`: The location of the source files to build.

* `storage_source`: Location of the source in an archive file in Google Cloud Storage.

* `bucket`: Google Cloud Storage bucket containing the source.

* `object`: Google Cloud Storage object containing the source. This object must be a gzipped archive file (.tar.gz) containing source to build.

* `generation`: Google Cloud Storage generation for the object. If the generation is omitted, the latest generation will be used.
> **@rmoles** commented on Sep 23, 2020:
>
> There's a bunch of instances of double-spacing throughout the resource documentation; this would probably need to be updated upstream though, right? It's not a huge issue at all.

> Reply: Suggest we proceed with what's here, but you're right, the spacing could be tweaked in a future MM PR to ensure consistency.

* `repo_source`: Location of the source in a Google Cloud Source Repository.

* `project_id`: ID of the project that owns the Cloud Source Repository. If omitted, the project ID requesting the build is assumed.

* `repo_name`: Name of the Cloud Source Repository.

* `dir`: Directory, relative to the source root, in which to run the build. This must be a relative path. If a step's dir is specified and is an absolute path, this value is ignored for that step's execution.

* `invert_regex`: Only trigger a build if the revision regex does NOT match the revision.

* `substitutions`: Substitutions to use in a triggered build. Should only be used with triggers.run

* `branch_name`: Regex matching branches to build. Exactly one of branch name, tag, or commit SHA must be provided. The syntax of the regular expressions accepted is the syntax accepted by RE2 and described at https://github.com/google/re2/wiki/Syntax

* `tag_name`: Regex matching tags to build. Exactly one of branch name, tag, or commit SHA must be provided. The syntax of the regular expressions accepted is the syntax accepted by RE2 and described at https://github.com/google/re2/wiki/Syntax

* `commit_sha`: Explicit commit SHA to build. Exactly one of branch name, tag, or commit SHA must be provided.

* `tags`: Tags for annotation of a Build. These are not docker tags.

* `images`: A list of images to be pushed upon the successful completion of all build steps. The images are pushed using the builder service account's credentials. The digests of the pushed images will be stored in the Build resource's results field. If any of the images fail to be pushed, the build status is marked FAILURE.

* `substitutions`: Substitutions data for Build resource.

* `queue_ttl`: TTL in queue for this build. If provided and the build is enqueued longer than this value, the build will expire and the build status will be EXPIRED. The TTL starts ticking from createTime. A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s".

* `logs_bucket`: Google Cloud Storage bucket where logs should be written. Logs file names will be of the format ${logsBucket}/log-${build_id}.txt.
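For illustration, the documented `${logsBucket}/log-${build_id}.txt` naming can be reproduced with a one-liner (the helper name is hypothetical):

```ruby
# Builds the documented ${logsBucket}/log-${build_id}.txt object name.
def log_object_name(logs_bucket, build_id)
  "#{logs_bucket}/log-#{build_id}.txt"
end
```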

* `timeout`: Amount of time that this build should be allowed to run, to second granularity. If this amount of time elapses, work on the build will cease and the build status will be TIMEOUT. This timeout must be equal to or greater than the sum of the timeouts for build steps within the build. The expected format is the number of seconds followed by s. Default time is ten minutes (600s).

* `secrets`: Secrets to decrypt using Cloud Key Management Service.

* `kms_key_name`: Cloud KMS key name to use to decrypt these envs.

* `secret_env`: Map of environment variable name to its encrypted value. Secret environment variables must be unique across all of a build's secrets, and must be used by at least one build step. Values can be at most 64 KB in size. There can be at most 100 secret values across all of a build's secrets.

* `steps`: The operations to be performed on the workspace.

* `name`: The name of the container image that will run this particular build step. If the image is available in the host's Docker daemon's cache, it will be run directly. If not, the host will attempt to pull the image first, using the builder service account's credentials if necessary. The Docker daemon's cache will already have the latest versions of all of the officially supported build steps (https://github.com/GoogleCloudPlatform/cloud-builders). The Docker daemon will also have cached many of the layers for some popular images, like "ubuntu", "debian", but they will be refreshed at the time you attempt to use them. If you built an image in a previous build step, it will be stored in the host's Docker daemon's cache and is available to use as the name for a later build step.
@@ -121,6 +163,72 @@ Properties that can be accessed from the `google_cloudbuild_trigger` resource:

* `wait_for`: The ID(s) of the step(s) that this build step depends on. This build step will not start until all the build steps in `wait_for` have completed successfully. If `wait_for` is empty, this build step will start when all previous build steps in the `Build.Steps` list have completed successfully.

* `artifacts`: Artifacts produced by the build that should be uploaded upon successful completion of all build steps.

* `images`: A list of images to be pushed upon the successful completion of all build steps. The images will be pushed using the builder service account's credentials. The digests of the pushed images will be stored in the Build resource's results field. If any of the images fail to be pushed, the build is marked FAILURE.

* `objects`: A list of objects to be uploaded to Cloud Storage upon successful completion of all build steps. Files in the workspace matching specified paths globs will be uploaded to the Cloud Storage location using the builder service account's credentials. The location and generation of the uploaded objects will be stored in the Build resource's results field. If any objects fail to be pushed, the build is marked FAILURE.

* `location`: Cloud Storage bucket and optional object path, in the form "gs://bucket/path/to/somewhere/". Files in the workspace matching any path pattern will be uploaded to Cloud Storage with this location as a prefix.

* `paths`: Path globs used to match files in the build's workspace.

* `timing`: Output only. Stores timing information for pushing all artifact objects.

* `start_time`: Start of time span. A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".

* `end_time`: End of time span. A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: "2014-10-02T15:01:23Z" and "2014-10-02T15:01:23.045123456Z".
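Both timestamp examples above are plain RFC3339 UTC "Zulu" strings, so Ruby's standard library parses them directly (a sketch, not part of the resource; the variable names are illustrative):

```ruby
require 'time'

# Both documented examples parse with Time.iso8601; the second carries
# nanosecond precision in its sub-second part.
start_time = Time.iso8601('2014-10-02T15:01:23Z')
end_time   = Time.iso8601('2014-10-02T15:01:23.045123456Z')
elapsed    = end_time - start_time # seconds, as a Float
```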

* `options`: Special options for this build.

* `source_provenance_hash`: Requested hash for SourceProvenance.

* `requested_verify_option`: Requested verifiability options.
Possible values:
* NOT_VERIFIED
* VERIFIED

* `machine_type`: Compute Engine machine type on which to run the build.
Possible values:
* UNSPECIFIED
* N1_HIGHCPU_8
* N1_HIGHCPU_32

* `disk_size_gb`: Requested disk size for the VM that runs the build. Note that this is NOT "disk free"; some of the space will be used by the operating system and build utilities. Also note that this is the minimum disk size that will be allocated for the build -- the build may run with a larger disk than requested. At present, the maximum disk size is 1000GB; builds that request more than the maximum are rejected with an error.

* `substitution_option`: Option to specify behavior when there is an error in the substitution checks. NOTE: this is always set to ALLOW_LOOSE for triggered builds and cannot be overridden in the build configuration file.
Possible values:
* MUST_MATCH
* ALLOW_LOOSE

* `dynamic_substitutions`: Option to specify whether or not to apply bash style string operations to the substitutions. NOTE: this is always enabled for triggered builds and cannot be overridden in the build configuration file.

* `log_streaming_option`: Option to define build log streaming behavior to Google Cloud Storage.
Possible values:
* STREAM_DEFAULT
* STREAM_ON
* STREAM_OFF

* `worker_pool`: Option to specify a WorkerPool for the build. Format: `projects/{project}/workerPools/{workerPool}`. This field is experimental.

* `logging`: Option to specify the logging mode, which determines if and where build logs are stored.
Possible values:
* LOGGING_UNSPECIFIED
* LEGACY
* GCS_ONLY
* STACKDRIVER_ONLY
* NONE

* `env`: A list of global environment variable definitions that will exist for all build steps in this build. If a variable is defined both globally and in a build step, the variable will use the build step value. The elements are of the form "KEY=VALUE" for the environment variable "KEY" being given the value "VALUE".

* `secret_env`: A list of global environment variables, which are encrypted using a Cloud Key Management Service crypto key. These values must be specified in the build's Secret. These variables will be available to all build steps in this build.

* `volumes`: Global list of volumes to mount for ALL build steps. Each volume is created as an empty volume prior to starting the build process. Upon completion of the build, volumes and their contents are discarded. Global volume names and paths cannot conflict with the volumes defined in a build step. Using a global volume in a build with only one step is not valid as it is indicative of a build request with an incorrect configuration.

* `name`: Name of the volume to mount. Volume names must be unique per build step and must be valid names for Docker volumes. Each named volume must be used by at least two build steps.

* `path`: Path at which to mount the volume. Paths must be absolute and cannot conflict with other volume paths on the same build step or with certain reserved volume paths.


## GCP Permissions

1 change: 1 addition & 0 deletions docs/resources/google_cloudbuild_triggers.md
@@ -29,6 +29,7 @@ See [google_cloudbuild_trigger.md](google_cloudbuild_trigger.md) for more detail
* `ids`: an array of `google_cloudbuild_trigger` id
* `names`: an array of `google_cloudbuild_trigger` name
* `descriptions`: an array of `google_cloudbuild_trigger` description
* `tags`: an array of `google_cloudbuild_trigger` tags
* `disableds`: an array of `google_cloudbuild_trigger` disabled
* `create_times`: an array of `google_cloudbuild_trigger` create_time
* `substitutions`: an array of `google_cloudbuild_trigger` substitutions
16 changes: 16 additions & 0 deletions docs/resources/google_compute_autoscaler.md
@@ -47,6 +47,22 @@ Properties that can be accessed from the `google_compute_autoscaler` resource:

* `cool_down_period_sec`: The number of seconds that the autoscaler should wait before it starts collecting information from a new instance. This prevents the autoscaler from collecting information when the instance is initializing, during which the collected usage would not be reliable. The default time autoscaler waits is 60 seconds. Virtual machine initialization times might vary because of numerous factors. We recommend that you test how long an instance may take to initialize. To do this, create an instance and time the startup process.

* `mode`: Defines operating mode for this policy.
Possible values:
* OFF
* ONLY_UP
* ON

* `scale_down_control`: (Beta only) Defines scale down controls to reduce the risk of response latency and outages due to abrupt scale-in events.

* `max_scaled_down_replicas`: A nested object resource

* `fixed`: Specifies a fixed number of VM instances. This must be a positive integer.

* `percent`: Specifies a percentage of instances between 0 to 100%, inclusive. For example, specify 80 for 80%.

* `time_window_sec`: How long back autoscaling should look when computing recommendations to include directives regarding slower scale down, as described above.
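How `fixed` and `percent` interact is not spelled out above; one plausible reading, sketched below with hypothetical names and labeled as an assumption, caps the number of instances removed per window at whichever limit allows more:

```ruby
# Hypothetical sketch: cap the number of instances removed in one scale-in
# window using max_scaled_down_replicas (a fixed count and/or a percent of
# the current group size). Combining them via max() is an assumption of this
# sketch, not something the documentation above states.
def max_scale_down(current_replicas, fixed: 0, percent: 0)
  by_percent = (current_replicas * percent / 100.0).floor
  [fixed, by_percent].max
end
```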

* `cpu_utilization`: Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.

* `utilization_target`: The target CPU utilization that the autoscaler should maintain. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales down the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales up until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.