Merge branch 'main' into renovate/sinon-17.x
d-goog authored Dec 9, 2024
2 parents c77803d + 62687be commit bc34b73
Showing 30 changed files with 2,867 additions and 953 deletions.
4 changes: 2 additions & 2 deletions packages/google-cloud-automl/README.md
Original file line number Diff line number Diff line change
@@ -44,7 +44,7 @@ Google APIs Client Libraries, in [Client Libraries Explained][explained].
1. [Select or create a Cloud Platform project][projects].
1. [Enable billing for your project][billing].
1. [Enable the Cloud AutoML API][enable_api].
1. [Set up authentication with a service account][auth] so you can access the
1. [Set up authentication][auth] so you can access the
API from your local workstation.

### Installing the client library
@@ -250,4 +250,4 @@ See [LICENSE](https://github.com/googleapis/google-cloud-node/blob/main/LICENSE)
[projects]: https://console.cloud.google.com/project
[billing]: https://support.google.com/cloud/answer/6293499#enable-billing
[enable_api]: https://console.cloud.google.com/flows/enableapi?apiid=automl.googleapis.com
[auth]: https://cloud.google.com/docs/authentication/getting-started
[auth]: https://cloud.google.com/docs/authentication/external/set-up-adc-local
86 changes: 37 additions & 49 deletions packages/google-cloud-automl/protos/google/cloud/automl/v1/io.proto
@@ -25,7 +25,8 @@ option java_package = "com.google.cloud.automl.v1";
option php_namespace = "Google\\Cloud\\AutoMl\\V1";
option ruby_package = "Google::Cloud::AutoML::V1";

// Input configuration for [AutoMl.ImportData][google.cloud.automl.v1.AutoMl.ImportData] action.
// Input configuration for
// [AutoMl.ImportData][google.cloud.automl.v1.AutoMl.ImportData] action.
//
// The input format depends on the dataset_metadata of the Dataset into
// which the import is happening. As input source the
@@ -41,10 +42,10 @@ option ruby_package = "Google::Cloud::AutoML::V1";
// The formats are represented in EBNF with commas being literal and with
// non-terminal symbols defined near the end of this comment. The formats are:
//
// <h4>AutoML Vision</h4>
// #### AutoML Vision
//
//
// <div class="ds-selector-tabs"><section><h5>Classification</h5>
// ##### Classification
//
// See [Preparing your training
// data](https://cloud.google.com/vision/automl/docs/prepare) for more
@@ -81,7 +82,7 @@ option ruby_package = "Google::Cloud::AutoML::V1";
// UNASSIGNED,gs://folder/image4.jpg
//
//
// </section><section><h5>Object Detection</h5>
// ##### Object Detection
// See [Preparing your training
// data](https://cloud.google.com/vision/automl/object-detection/docs/prepare)
// for more information.
@@ -123,10 +124,10 @@ option ruby_package = "Google::Cloud::AutoML::V1";
// </div>
//
//
// <h4>AutoML Video Intelligence</h4>
// #### AutoML Video Intelligence
//
//
// <div class="ds-selector-tabs"><section><h5>Classification</h5>
// ##### Classification
//
// See [Preparing your training
// data](https://cloud.google.com/video-intelligence/automl/docs/prepare) for
@@ -169,7 +170,7 @@ option ruby_package = "Google::Cloud::AutoML::V1";
//
//
//
// </section><section><h5>Object Tracking</h5>
// ##### Object Tracking
//
// See [Preparing your training
// data](/video-intelligence/automl/object-tracking/docs/prepare) for more
@@ -220,14 +221,12 @@ option ruby_package = "Google::Cloud::AutoML::V1";
// gs://folder/video1.avi,bike,,12.50,.45,.45,,,.55,.55,,
// gs://folder/video2.avi,car,1,0,.1,.9,,,.9,.1,,
// gs://folder/video2.avi,,,,,,,,,,,
// </section>
// </div>
//
//
// <h4>AutoML Natural Language</h4>
// #### AutoML Natural Language
//
//
// <div class="ds-selector-tabs"><section><h5>Entity Extraction</h5>
// ##### Entity Extraction
//
// See [Preparing your training
// data](/natural-language/automl/entity-analysis/docs/prepare) for more
@@ -410,7 +409,7 @@ option ruby_package = "Google::Cloud::AutoML::V1";
//
//
//
// </section><section><h5>Classification</h5>
// ##### Classification
//
// See [Preparing your training
// data](https://cloud.google.com/natural-language/automl/docs/prepare) for more
@@ -457,7 +456,7 @@ option ruby_package = "Google::Cloud::AutoML::V1";
//
//
//
// </section><section><h5>Sentiment Analysis</h5>
// ##### Sentiment Analysis
//
// See [Preparing your training
// data](https://cloud.google.com/natural-language/automl/docs/prepare) for more
@@ -514,13 +513,10 @@ option ruby_package = "Google::Cloud::AutoML::V1";
// gs://folder/content.txt,3
// TEST,gs://folder/document.pdf
// VALIDATE,gs://folder/text_files.zip,2
// </section>
// </div>
//
//
//
// <h4>AutoML Tables</h4><div class="ui-datasection-main"><section
// class="selected">
// #### AutoML Tables
//
// See [Preparing your training
// data](https://cloud.google.com/automl-tables/docs/prepare) for more
@@ -559,8 +555,6 @@ option ruby_package = "Google::Cloud::AutoML::V1";
// and between 1,000 and 100,000,000 rows, inclusive. At most 5 import data
// operations run in parallel.
//
// </section>
// </div>
//
//
// **Input field definitions:**
@@ -641,16 +635,17 @@ message InputConfig {
// The source of the input.
oneof source {
// The Google Cloud Storage location for the input content.
// For [AutoMl.ImportData][google.cloud.automl.v1.AutoMl.ImportData], `gcs_source` points to a CSV file with
// a structure described in [InputConfig][google.cloud.automl.v1.InputConfig].
// For [AutoMl.ImportData][google.cloud.automl.v1.AutoMl.ImportData],
// `gcs_source` points to a CSV file with a structure described in
// [InputConfig][google.cloud.automl.v1.InputConfig].
GcsSource gcs_source = 1;
}

  // Additional domain-specific parameters describing the semantics of the
  // imported data; any string must be at most 25000
  // characters long.
//
// <h4>AutoML Tables</h4>
// #### AutoML Tables
//
// `schema_inference_version`
// : (integer) This value must be supplied.
@@ -671,8 +666,8 @@ message InputConfig {
// non-terminal symbols defined near the end of this comment. The formats
// are:
//
// <h4>AutoML Vision</h4>
// <div class="ds-selector-tabs"><section><h5>Classification</h5>
// #### AutoML Vision
// ##### Classification
//
// One or more CSV files where each line is a single column:
//
@@ -688,7 +683,7 @@ message InputConfig {
// gs://folder/image2.gif
// gs://folder/image3.png
//
// </section><section><h5>Object Detection</h5>
// ##### Object Detection
//
// One or more CSV files where each line is a single column:
//
@@ -703,11 +698,9 @@ message InputConfig {
// gs://folder/image1.jpeg
// gs://folder/image2.gif
// gs://folder/image3.png
// </section>
// </div>
//
// <h4>AutoML Video Intelligence</h4>
// <div class="ds-selector-tabs"><section><h5>Classification</h5>
// #### AutoML Video Intelligence
// ##### Classification
//
// One or more CSV files where each line is a single column:
//
@@ -726,7 +719,7 @@ message InputConfig {
// gs://folder/video1.mp4,20,60
// gs://folder/vid2.mov,0,inf
//
// </section><section><h5>Object Tracking</h5>
// ##### Object Tracking
//
// One or more CSV files where each line is a single column:
//
@@ -744,11 +737,9 @@ message InputConfig {
// gs://folder/video1.mp4,10,40
// gs://folder/video1.mp4,20,60
// gs://folder/vid2.mov,0,inf
// </section>
// </div>
//
// <h4>AutoML Natural Language</h4>
// <div class="ds-selector-tabs"><section><h5>Classification</h5>
// #### AutoML Natural Language
// ##### Classification
//
// One or more CSV files where each line is a single column:
//
@@ -765,7 +756,7 @@ message InputConfig {
// gs://folder/text2.pdf
// gs://folder/text3.tif
//
// </section><section><h5>Sentiment Analysis</h5>
// ##### Sentiment Analysis
// One or more CSV files where each line is a single column:
//
// GCS_FILE_PATH
@@ -781,7 +772,7 @@ message InputConfig {
// gs://folder/text2.pdf
// gs://folder/text3.tif
//
// </section><section><h5>Entity Extraction</h5>
// ##### Entity Extraction
//
// One or more JSONL (JSON Lines) files that either provide inline text or
// documents. You can only use one format, either inline text or documents,
@@ -851,11 +842,8 @@ message InputConfig {
// }
// }
// }
// </section>
// </div>
//
// <h4>AutoML Tables</h4><div class="ui-datasection-main"><section
// class="selected">
// #### AutoML Tables
//
// See [Preparing your training
// data](https://cloud.google.com/automl-tables/docs/predict-batch) for more
@@ -901,8 +889,6 @@ message InputConfig {
// input feature column specs must contain values compatible with the
// column spec's data types. Prediction on all the rows of the table
// will be attempted.
// </section>
// </div>
//
// **Input field definitions:**
//
@@ -984,9 +970,10 @@ message DocumentInputConfig {
message OutputConfig {
// The destination of the output.
oneof destination {
// Required. The Google Cloud Storage location where the output is to be written to.
// For Image Object Detection, Text Extraction, Video Classification and
// Tables, in the given directory a new directory will be created with name:
// Required. The Google Cloud Storage location where the output is to be
// written to. For Image Object Detection, Text Extraction, Video
// Classification and Tables, in the given directory a new directory will be
// created with name:
// export_data-<dataset-display-name>-<timestamp-of-export-call> where
// timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All export
// output will be written into that directory.
@@ -1246,8 +1233,8 @@ message OutputConfig {
message BatchPredictOutputConfig {
// The destination of the output.
oneof destination {
// Required. The Google Cloud Storage location of the directory where the output is to
// be written to.
// Required. The Google Cloud Storage location of the directory where the
// output is to be written to.
GcsDestination gcs_destination = 1 [(google.api.field_behavior) = REQUIRED];
}
}
@@ -1256,8 +1243,9 @@ message BatchPredictOutputConfig {
message ModelExportOutputConfig {
// The destination of the output.
oneof destination {
// Required. The Google Cloud Storage location where the model is to be written to.
// This location may only be set for the following model formats:
// Required. The Google Cloud Storage location where the model is to be
// written to. This location may only be set for the following model
// formats:
// "tflite", "edgetpu_tflite", "tf_saved_model", "tf_js", "core_ml".
//
// Under the directory given as the destination a new one with name
3 changes: 3 additions & 0 deletions packages/google-cloud-automl/protos/protos.json

Some generated files are not rendered by default.

35 changes: 22 additions & 13 deletions packages/google-cloud-batch/protos/google/cloud/batch/v1/job.proto
@@ -90,13 +90,15 @@ message Job {
repeated JobNotification notifications = 14;
}

// LogsPolicy describes how outputs from a Job's Tasks (stdout/stderr) will be
// preserved.
// LogsPolicy describes if and how a job's logs are preserved. Logs include
// information that is automatically written by the Batch service agent and any
// information that you configured the job's runnables to write to the `stdout`
// or `stderr` streams.
message LogsPolicy {
// `CloudLoggingOption` contains additional settings for Cloud Logging logs
// generated by Batch job.
message CloudLoggingOption {
// Optional. Set this flag to true to change the [monitored resource
// Optional. Set this field to `true` to change the [monitored resource
// type](https://cloud.google.com/monitoring/api/resources) for
// Cloud Logging logs generated by this Batch job from
// the
@@ -110,26 +112,32 @@ message LogsPolicy {

// The destination (if any) for logs.
enum Destination {
// Logs are not preserved.
// (Default) Logs are not preserved.
DESTINATION_UNSPECIFIED = 0;

// Logs are streamed to Cloud Logging.
// Logs are streamed to Cloud Logging. Optionally, you can configure
// additional settings in the `cloudLoggingOption` field.
CLOUD_LOGGING = 1;

// Logs are saved to a file path.
// Logs are saved to the file path specified in the `logsPath` field.
PATH = 2;
}

// Where logs should be saved.
// If and where logs should be saved.
Destination destination = 1;

// The path to which logs are saved when the destination = PATH. This can be a
// local file path on the VM, or under the mount point of a Persistent Disk or
// Filestore, or a Cloud Storage path.
// When `destination` is set to `PATH`, you must set this field to the path
// where you want logs to be saved. This path can point to a local directory
    // on the VM or (if configured) a directory under the mount path of any
// Cloud Storage bucket, network file system (NFS), or writable persistent
// disk that is mounted to the job. For example, if the job has a bucket with
// `mountPath` set to `/mnt/disks/my-bucket`, you can write logs to the
// root directory of the `remotePath` of that bucket by setting this field to
// `/mnt/disks/my-bucket/`.
string logs_path = 2;

// Optional. Additional settings for Cloud Logging. It will only take effect
// when the destination of `LogsPolicy` is set to `CLOUD_LOGGING`.
// Optional. When `destination` is set to `CLOUD_LOGGING`, you can optionally
// set this field to configure additional settings for Cloud Logging.
CloudLoggingOption cloud_logging_option = 3
[(google.api.field_behavior) = OPTIONAL];
}
@@ -429,7 +437,8 @@ message AllocationPolicy {
// Named the field as 'instance_template' instead of 'template' to avoid
// C++ keyword conflict.
//
// Batch only supports global instance templates.
// Batch only supports global instance templates from the same project as
// the job.
// You can specify the global instance template as a full or partial URL.
string instance_template = 2;
}