feat: Client libraries for the Dataform API (#1221)
* chore(ruby): Initial generation of google-iam-v1

PiperOrigin-RevId: 448073008

Source-Link: googleapis/googleapis@d664bc5

Source-Link: googleapis/googleapis-gen@73f6abc
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiNzNmNmFiYzE0MWQyYmI3MjZjMDdjODExZmQ4MWRlMDY3Zjk3ZjY0ZSJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* feat: add display_name and metadata to ModelEvaluation in aiplatform model_evaluation.proto

PiperOrigin-RevId: 448160148

Source-Link: googleapis/googleapis@936ab35

Source-Link: googleapis/googleapis-gen@f841b8e
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiZjg0MWI4ZTIzZDgxNmU5OThmOWU4ZTZjMGUwNGJhOTJiN2Y2YjgzNSJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* feat: refreshes Bigtable Admin API(s) protos

PiperOrigin-RevId: 448988001

Source-Link: googleapis/googleapis@b6fa58e

Source-Link: googleapis/googleapis-gen@fc8b8db
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiZmM4YjhkYmM3ZGExYTc4NDVkNTcxMzRmNDExMzAyZDEwNmVhMmVmMiJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* Synchronize new proto/yaml changes.

PiperOrigin-RevId: 449052112

Source-Link: googleapis/googleapis@3150afa

Source-Link: googleapis/googleapis-gen@9945a36
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiOTk0NWEzNjZlNGE1ZWZiMDBmYzQwOTg1ZjAzYzEwN2JkZWU4OWRmMiJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* docs: fix docstring formatting

Committer: parthea
PiperOrigin-RevId: 449545643

Source-Link: googleapis/googleapis@1bed8a0

Source-Link: googleapis/googleapis-gen@d4ccc5f
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiZDRjY2M1ZmM2MTJjMjYwNTQ2YmNlY2VmMjg3MzU3NTY4OGEwMmQ4YiJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* feat: add Examples to Explanation related messages in aiplatform v1beta1 explanation.proto

PiperOrigin-RevId: 449620845

Source-Link: googleapis/googleapis@117f86b

Source-Link: googleapis/googleapis-gen@1d670db
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiMWQ2NzBkYmIxYTlmZjE1NWNmNjg2ZTk1NDRmZWZmYjc1MDIxYTJmNSJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* feat: update protos to include InvalidateApprovalRequest and GetAccessApprovalServiceAccount APIs

PiperOrigin-RevId: 449820922

Source-Link: googleapis/googleapis@9682584

Source-Link: googleapis/googleapis-gen@09360c9
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiMDkzNjBjOTVjYTEyYWZmMDBiY2QyY2ZhMmY3NjE0YmJiM2UyOWJmMyJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* chore: remove unused imports

PiperOrigin-RevId: 450372109

Source-Link: googleapis/googleapis@942691f

Source-Link: googleapis/googleapis-gen@609a369
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiNjA5YTM2OTY2YzQwZjcyNmZkMGRjNzFlOTUzZGM4M2Y4ZTUyZmVmMSJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* feat: add latent_space_source to ExplanationMetadata in aiplatform v1 explanation_metadata.proto
feat: add scaling to OnlineServingConfig in aiplatform v1 featurestore.proto
feat: add template_metadata to PipelineJob in aiplatform v1 pipeline_job.proto

PiperOrigin-RevId: 450687287

Source-Link: googleapis/googleapis@058bff3

Source-Link: googleapis/googleapis-gen@f072bfe
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiZjA3MmJmZTc3MDRkOTk2YzQxZDc0YWMzYWExZDg1MTRmNDY0YzRmZiJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* feat: add failure_policy to PipelineJob in aiplatform v1 & v1beta1 pipeline_job.proto

PiperOrigin-RevId: 450704795

Source-Link: googleapis/googleapis@c875f2b

Source-Link: googleapis/googleapis-gen@e606d62
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiZTYwNmQ2MmFiMzJiODU0MzQ2OTc2NzhiMzAwNGYyMjA2ZDVjMDJhZSJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* feat: add IAM policy to aiplatform_v1beta1.yaml
feat: add preset configuration for example-based explanations in aiplatform v1beta1 explanation.proto
feat: add latent_space_source to ExplanationMetadata in aiplatform v1beta1 explanation_metadata.proto
feat: add successful_forecast_point_count to CompletionStats in completion_stats.proto

PiperOrigin-RevId: 450727462

Source-Link: googleapis/googleapis@665682d

Source-Link: googleapis/googleapis-gen@34cddbe
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiMzRjZGRiZWYzOWMxN2M1OGY5NmY1ZmZlYmY2MDY1MTM2YjZkNTcxOSJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* chore: use gapic-generator-python 1.0.0

PiperOrigin-RevId: 451250442

Source-Link: googleapis/googleapis@cca5e81

Source-Link: googleapis/googleapis-gen@0b219da
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiMGIyMTlkYTE2MWE4YmRjYzNjNmY3YjJlZmNkODIxMDUxODJhMzBjYSJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

* feat: Client libraries for the Dataform API

This is the first release of the Public Dataform API client libraries.

PiperOrigin-RevId: 451825930

Source-Link: googleapis/googleapis@34c6901

Source-Link: googleapis/googleapis-gen@68f6624
Copy-Tag: eyJwIjoiLmdpdGh1Yi8uT3dsQm90LnlhbWwiLCJoIjoiNjhmNjYyNDgyOTdhMjJiNmJlYjY0Njc3ZDEyNmFkMDAwNDczMmU1NyJ9

* 🦉 Updates from OwlBot post-processor

See https://github.com/googleapis/repo-automation-bots/blob/main/packages/owl-bot/README.md

Co-authored-by: Owl Bot <gcf-owl-bot[bot]@users.noreply.github.com>
Co-authored-by: gcf-merge-on-green[bot] <60162190+gcf-merge-on-green[bot]@users.noreply.github.com>
Co-authored-by: Yu-Han Liu <yuhanliu@google.com>
4 people authored and rosiezou committed Jun 16, 2022
1 parent c8f6f35 commit 3e987a3
Showing 75 changed files with 1,724 additions and 79 deletions.
7 changes: 7 additions & 0 deletions docs/definition_v1/types.rst
@@ -0,0 +1,7 @@
Types for Google Cloud Aiplatform V1 Schema Trainingjob Definition v1 API
=========================================================================

.. automodule:: google.cloud.aiplatform.v1.schema.trainingjob.definition_v1.types
:members:
:undoc-members:
:show-inheritance:
7 changes: 7 additions & 0 deletions docs/definition_v1beta1/types.rst
@@ -0,0 +1,7 @@
Types for Google Cloud Aiplatform V1beta1 Schema Trainingjob Definition v1beta1 API
===================================================================================

.. automodule:: google.cloud.aiplatform.v1beta1.schema.trainingjob.definition_v1beta1.types
:members:
:undoc-members:
:show-inheritance:
7 changes: 7 additions & 0 deletions docs/instance_v1/types.rst
@@ -0,0 +1,7 @@
Types for Google Cloud Aiplatform V1 Schema Predict Instance v1 API
===================================================================

.. automodule:: google.cloud.aiplatform.v1.schema.predict.instance_v1.types
:members:
:undoc-members:
:show-inheritance:
7 changes: 7 additions & 0 deletions docs/instance_v1beta1/types.rst
@@ -0,0 +1,7 @@
Types for Google Cloud Aiplatform V1beta1 Schema Predict Instance v1beta1 API
=============================================================================

.. automodule:: google.cloud.aiplatform.v1beta1.schema.predict.instance_v1beta1.types
:members:
:undoc-members:
:show-inheritance:
7 changes: 7 additions & 0 deletions docs/params_v1/types.rst
@@ -0,0 +1,7 @@
Types for Google Cloud Aiplatform V1 Schema Predict Params v1 API
=================================================================

.. automodule:: google.cloud.aiplatform.v1.schema.predict.params_v1.types
:members:
:undoc-members:
:show-inheritance:
7 changes: 7 additions & 0 deletions docs/params_v1beta1/types.rst
@@ -0,0 +1,7 @@
Types for Google Cloud Aiplatform V1beta1 Schema Predict Params v1beta1 API
===========================================================================

.. automodule:: google.cloud.aiplatform.v1beta1.schema.predict.params_v1beta1.types
:members:
:undoc-members:
:show-inheritance:
7 changes: 7 additions & 0 deletions docs/prediction_v1/types.rst
@@ -0,0 +1,7 @@
Types for Google Cloud Aiplatform V1 Schema Predict Prediction v1 API
=====================================================================

.. automodule:: google.cloud.aiplatform.v1.schema.predict.prediction_v1.types
:members:
:undoc-members:
:show-inheritance:
7 changes: 7 additions & 0 deletions docs/prediction_v1beta1/types.rst
@@ -0,0 +1,7 @@
Types for Google Cloud Aiplatform V1beta1 Schema Predict Prediction v1beta1 API
===============================================================================

.. automodule:: google.cloud.aiplatform.v1beta1.schema.predict.prediction_v1beta1.types
:members:
:undoc-members:
:show-inheritance:
4 changes: 4 additions & 0 deletions google/cloud/aiplatform_v1/__init__.py
@@ -362,10 +362,12 @@
from .types.model_service import UploadModelResponse
from .types.operation import DeleteOperationMetadata
from .types.operation import GenericOperationMetadata
from .types.pipeline_failure_policy import PipelineFailurePolicy
from .types.pipeline_job import PipelineJob
from .types.pipeline_job import PipelineJobDetail
from .types.pipeline_job import PipelineTaskDetail
from .types.pipeline_job import PipelineTaskExecutorDetail
from .types.pipeline_job import PipelineTemplateMetadata
from .types.pipeline_service import CancelPipelineJobRequest
from .types.pipeline_service import CancelTrainingPipelineRequest
from .types.pipeline_service import CreatePipelineJobRequest
@@ -829,12 +831,14 @@
"NearestNeighborSearchOperationMetadata",
"NfsMount",
"PauseModelDeploymentMonitoringJobRequest",
"PipelineFailurePolicy",
"PipelineJob",
"PipelineJobDetail",
"PipelineServiceClient",
"PipelineState",
"PipelineTaskDetail",
"PipelineTaskExecutorDetail",
"PipelineTemplateMetadata",
"Port",
"PredefinedSplit",
"PredictRequest",
18 changes: 9 additions & 9 deletions google/cloud/aiplatform_v1/services/migration_service/client.py
@@ -192,40 +192,40 @@ def parse_annotated_dataset_path(path: str) -> Dict[str, str]:
     @staticmethod
     def dataset_path(
         project: str,
+        location: str,
         dataset: str,
     ) -> str:
         """Returns a fully-qualified dataset string."""
-        return "projects/{project}/datasets/{dataset}".format(
+        return "projects/{project}/locations/{location}/datasets/{dataset}".format(
             project=project,
+            location=location,
             dataset=dataset,
         )
 
     @staticmethod
     def parse_dataset_path(path: str) -> Dict[str, str]:
         """Parses a dataset path into its component segments."""
-        m = re.match(r"^projects/(?P<project>.+?)/datasets/(?P<dataset>.+?)$", path)
+        m = re.match(
+            r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)/datasets/(?P<dataset>.+?)$",
+            path,
+        )
         return m.groupdict() if m else {}
 
     @staticmethod
     def dataset_path(
         project: str,
-        location: str,
         dataset: str,
     ) -> str:
         """Returns a fully-qualified dataset string."""
-        return "projects/{project}/locations/{location}/datasets/{dataset}".format(
+        return "projects/{project}/datasets/{dataset}".format(
             project=project,
-            location=location,
             dataset=dataset,
         )
 
     @staticmethod
     def parse_dataset_path(path: str) -> Dict[str, str]:
         """Parses a dataset path into its component segments."""
-        m = re.match(
-            r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)/datasets/(?P<dataset>.+?)$",
-            path,
-        )
+        m = re.match(r"^projects/(?P<project>.+?)/datasets/(?P<dataset>.+?)$", path)
         return m.groupdict() if m else {}
 
     @staticmethod
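The hunk above reorders the two `dataset_path` / `parse_dataset_path` helper pairs between the location-scoped and flat dataset resource-name formats. Below is a small, self-contained sketch of the location-scoped shape, mirroring the regex in the diff rather than calling the generated client; all names and values are illustrative only.

```python
import re
from typing import Dict


def dataset_path(project: str, location: str, dataset: str) -> str:
    """Returns a fully-qualified, location-scoped dataset string."""
    return f"projects/{project}/locations/{location}/datasets/{dataset}"


def parse_dataset_path(path: str) -> Dict[str, str]:
    """Parses a location-scoped dataset path into its component segments."""
    m = re.match(
        r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)/datasets/(?P<dataset>.+?)$",
        path,
    )
    return m.groupdict() if m else {}


path = dataset_path("my-project", "us-central1", "my-dataset")
assert parse_dataset_path(path) == {
    "project": "my-project",
    "location": "us-central1",
    "dataset": "my-dataset",
}
```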
3 changes: 3 additions & 0 deletions google/cloud/aiplatform_v1/types/__init__.py
@@ -423,6 +423,7 @@
PipelineJobDetail,
PipelineTaskDetail,
PipelineTaskExecutorDetail,
PipelineTemplateMetadata,
)
from .pipeline_service import (
CancelPipelineJobRequest,
@@ -883,10 +884,12 @@
"UploadModelResponse",
"DeleteOperationMetadata",
"GenericOperationMetadata",
"PipelineFailurePolicy",
"PipelineJob",
"PipelineJobDetail",
"PipelineTaskDetail",
"PipelineTaskExecutorDetail",
"PipelineTemplateMetadata",
"CancelPipelineJobRequest",
"CancelTrainingPipelineRequest",
"CreatePipelineJobRequest",
8 changes: 4 additions & 4 deletions google/cloud/aiplatform_v1/types/endpoint.py
@@ -221,10 +221,10 @@ class DeployedModel(proto.Message):
             This value should be 1-10 characters, and valid characters
             are /[0-9]/.
         model (str):
-            Required. The name of the Model that this is
-            the deployment of. Note that the Model may be in
-            a different location than the DeployedModel's
-            Endpoint.
+            Required. The resource name of the Model that
+            this is the deployment of. Note that the Model
+            may be in a different location than the
+            DeployedModel's Endpoint.
         display_name (str):
             The display name of the DeployedModel. If not provided upon
             creation, the Model's display_name is used.
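The reworded docstring clarifies that `model` takes the Model's full resource name, which may live in a different location than the Endpoint. A hedged sketch, assuming the `aiplatform_v1` types shown in this diff; all identifiers and values are placeholders.

```python
from google.cloud.aiplatform_v1 import types

# `model` is a full resource name; its location may differ from the Endpoint's.
deployed_model = types.DeployedModel(
    id="1234567890",  # 1-10 characters, digits only, per the docstring above
    model="projects/my-project/locations/us-central1/models/9876543210",
    display_name="my-deployed-model",
)
```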
7 changes: 7 additions & 0 deletions google/cloud/aiplatform_v1/types/explanation_metadata.py
@@ -69,6 +69,9 @@ class ExplanationMetadata(proto.Message):
including the URI scheme, than the one given on input. The
output URI will point to a location where the user only has
a read access.
latent_space_source (str):
Name of the source to generate embeddings for
example based explanations.
"""

class InputMetadata(proto.Message):
@@ -457,6 +460,10 @@ class OutputMetadata(proto.Message):
proto.STRING,
number=3,
)
latent_space_source = proto.Field(
proto.STRING,
number=5,
)


__all__ = tuple(sorted(__protobuf__.manifest))
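A minimal sketch of setting the new `latent_space_source` field on `ExplanationMetadata`, assuming the `aiplatform_v1` types in this diff; the placeholder value is an assumption, since the diff does not specify the expected format.

```python
from google.cloud.aiplatform_v1 import types

explanation_metadata = types.ExplanationMetadata(
    inputs={},   # per-feature InputMetadata entries would normally go here
    outputs={},  # per-output OutputMetadata entries would normally go here
    # Source used to generate embeddings for example-based explanations
    # (placeholder value; this diff does not specify the expected format).
    latent_space_source="projects/my-project/locations/us-central1/models/my-embedding-model",
)
```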
34 changes: 34 additions & 0 deletions google/cloud/aiplatform_v1/types/featurestore.py
@@ -92,12 +92,46 @@ class OnlineServingConfig(proto.Message):
set to 0, the featurestore will not have an
online store and cannot be used for online
serving.
scaling (google.cloud.aiplatform_v1.types.Featurestore.OnlineServingConfig.Scaling):
Online serving scaling configuration. Only one of
``fixed_node_count`` and ``scaling`` can be set. Setting one
will reset the other.
"""

class Scaling(proto.Message):
r"""Online serving scaling configuration. If min_node_count and
max_node_count are set to the same value, the cluster will be
configured with the fixed number of node (no auto-scaling).
Attributes:
min_node_count (int):
Required. The minimum number of nodes to
scale down to. Must be greater than or equal to
1.
max_node_count (int):
The maximum number of nodes to scale up to. Must be greater
than min_node_count, and less than or equal to 10 times of
'min_node_count'.
"""

min_node_count = proto.Field(
proto.INT32,
number=1,
)
max_node_count = proto.Field(
proto.INT32,
number=2,
)

fixed_node_count = proto.Field(
proto.INT32,
number=2,
)
scaling = proto.Field(
proto.MESSAGE,
number=4,
message="Featurestore.OnlineServingConfig.Scaling",
)

name = proto.Field(
proto.STRING,
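A minimal sketch of the new `scaling` option, assuming the `Featurestore.OnlineServingConfig` types added above; `fixed_node_count` and `scaling` are alternatives, so each config below sets only one of them.

```python
from google.cloud.aiplatform_v1.types import featurestore

# Autoscaled online serving: the node pool scales between 1 and 5 nodes.
autoscaled_config = featurestore.Featurestore.OnlineServingConfig(
    scaling=featurestore.Featurestore.OnlineServingConfig.Scaling(
        min_node_count=1,
        max_node_count=5,
    ),
)

# Fixed-size online serving: exactly 2 nodes, no autoscaling.
fixed_config = featurestore.Featurestore.OnlineServingConfig(
    fixed_node_count=2,
)
```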
google/cloud/aiplatform_v1/types/featurestore_online_service.py
@@ -236,7 +236,7 @@ class StreamingReadFeatureValuesRequest(proto.Message):
 
 class FeatureValue(proto.Message):
     r"""Value for a feature.
-    NEXT ID: 15
+    (-- NEXT ID: 15 --)
     This message has `oneof`_ fields (mutually exclusive fields).
     For each oneof, at most one member field can be set at the same time.
google/cloud/aiplatform_v1/types/manual_batch_tuning_parameters.py
@@ -38,7 +38,7 @@ class ManualBatchTuningParameters(proto.Message):
             value will result in a whole batch not fitting
             in a machine's memory, and the whole operation
             will fail.
-            The default value is 4.
+            The default value is 64.
         """
 
     batch_size = proto.Field(
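A hedged example of overriding the documented default of 64 with a smaller batch size, assuming the `ManualBatchTuningParameters` type shown above.

```python
from google.cloud.aiplatform_v1.types import manual_batch_tuning_parameters

# A smaller batch keeps per-machine memory use down, at the cost of throughput.
tuning = manual_batch_tuning_parameters.ManualBatchTuningParameters(
    batch_size=16,
)
```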
16 changes: 16 additions & 0 deletions google/cloud/aiplatform_v1/types/model_evaluation.py
@@ -37,6 +37,8 @@ class ModelEvaluation(proto.Message):
name (str):
Output only. The resource name of the
ModelEvaluation.
display_name (str):
The display name of the ModelEvaluation.
metrics_schema_uri (str):
Points to a YAML file stored on Google Cloud Storage
describing the
@@ -92,6 +94,11 @@
[ExplanationSpec][google.cloud.aiplatform.v1.ExplanationSpec]
that are used for explaining the predicted values on the
evaluated data.
metadata (google.protobuf.struct_pb2.Value):
The metadata of the ModelEvaluation. For the ModelEvaluation
uploaded from Managed Pipeline, metadata contains a
structured value with keys of "pipeline_job_id",
"evaluation_dataset_type", "evaluation_dataset_path".
"""

class ModelEvaluationExplanationSpec(proto.Message):
@@ -123,6 +130,10 @@ class ModelEvaluationExplanationSpec(proto.Message):
proto.STRING,
number=1,
)
display_name = proto.Field(
proto.STRING,
number=10,
)
metrics_schema_uri = proto.Field(
proto.STRING,
number=2,
@@ -159,6 +170,11 @@ class ModelEvaluationExplanationSpec(proto.Message):
number=9,
message=ModelEvaluationExplanationSpec,
)
metadata = proto.Field(
proto.MESSAGE,
number=11,
message=struct_pb2.Value,
)


__all__ = tuple(sorted(__protobuf__.manifest))
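A sketch of populating the new `display_name` and `metadata` fields, assuming the `ModelEvaluation` type above; the metadata keys follow the docstring, while the concrete values are placeholders.

```python
from google.protobuf import struct_pb2

from google.cloud.aiplatform_v1.types import model_evaluation

# Structured metadata with the keys named in the docstring above.
metadata = struct_pb2.Value(
    struct_value=struct_pb2.Struct(
        fields={
            "pipeline_job_id": struct_pb2.Value(string_value="my-pipeline-job"),
            "evaluation_dataset_type": struct_pb2.Value(string_value="gcs"),
            "evaluation_dataset_path": struct_pb2.Value(
                string_value="gs://my-bucket/eval.jsonl"
            ),
        }
    )
)

evaluation = model_evaluation.ModelEvaluation(
    display_name="my-model-evaluation",
    metadata=metadata,
)
```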
2 changes: 2 additions & 0 deletions google/cloud/aiplatform_v1/types/model_monitoring.py
@@ -84,6 +84,8 @@ class TrainingDataset(proto.Message):
"csv"
The source file is a CSV file.
"jsonl"
The source file is a JSONL file.
target_field (str):
The target field name the model is to
predict. This field will be excluded when doing
41 changes: 41 additions & 0 deletions google/cloud/aiplatform_v1/types/pipeline_failure_policy.py
@@ -0,0 +1,41 @@
# -*- coding: utf-8 -*-
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import proto # type: ignore


__protobuf__ = proto.module(
package="google.cloud.aiplatform.v1",
manifest={
"PipelineFailurePolicy",
},
)


class PipelineFailurePolicy(proto.Enum):
r"""Represents the failure policy of a pipeline. Currently, the default
of a pipeline is that the pipeline will continue to run until no
more tasks can be executed, also known as
PIPELINE_FAILURE_POLICY_FAIL_SLOW. However, if a pipeline is set to
PIPELINE_FAILURE_POLICY_FAIL_FAST, it will stop scheduling any new
tasks when a task has failed. Any scheduled tasks will continue to
completion.
"""
PIPELINE_FAILURE_POLICY_UNSPECIFIED = 0
PIPELINE_FAILURE_POLICY_FAIL_SLOW = 1
PIPELINE_FAILURE_POLICY_FAIL_FAST = 2


__all__ = tuple(sorted(__protobuf__.manifest))
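A hedged sketch of selecting the fail-fast policy when building a `PipelineJob`. Placing `failure_policy` on `PipelineJob.RuntimeConfig` is an assumption here; this page shows only the enum itself plus the commit note that `failure_policy` was added to `PipelineJob`.

```python
from google.cloud.aiplatform_v1 import types

# Stop scheduling new tasks as soon as one task fails; already-scheduled
# tasks still run to completion, per the enum docstring above.
job = types.PipelineJob(
    display_name="my-pipeline",
    runtime_config=types.PipelineJob.RuntimeConfig(  # assumed field location
        failure_policy=types.PipelineFailurePolicy.PIPELINE_FAILURE_POLICY_FAIL_FAST,
    ),
)
```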