
feat: add ModelEvaluation support #1167

Merged: 18 commits into googleapis:main on Apr 20, 2022
Conversation

sararob (Contributor) commented Apr 13, 2022

Added initial support for the ModelEvaluation gapic client:

  • ModelEvaluation class defined in model_evaluation/
  • Exposed ModelEvaluation class in models.py
  • Added related tests

Fixes b/224573576 🦕

@product-auto-label product-auto-label bot added the size: l Pull request size is large. label Apr 13, 2022
@sararob sararob requested a review from sasha-gitg April 13, 2022 23:07

def list_model_evaluations(
    self,
    project: Optional[str] = None,
Member:

project, location, and credentials shouldn't be necessary since they can be referenced from this Model instance.
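The point here is a general API-design pattern: a method on a resource object should default to the instance's own project, location, and credentials rather than accept them as parameters. A minimal, self-contained sketch of that pattern (class and attribute names are illustrative, not the SDK's actual implementation):

```python
# Illustrative sketch only: an instance method reuses the resource's own
# project/location/credentials instead of taking them as parameters.

class Model:
    def __init__(self, resource_name, project, location, credentials=None):
        self.resource_name = resource_name
        self.project = project
        self.location = location
        self.credentials = credentials

    def list_model_evaluations(self):
        # No project/location/credentials parameters: the values are
        # already available on this Model instance.
        return {
            "parent": self.resource_name,
            "project": self.project,
            "location": self.location,
            "credentials": self.credentials,
        }

m = Model("projects/p/locations/us-central1/models/123", "p", "us-central1")
print(m.list_model_evaluations()["parent"])
```

This keeps the public signature small and guarantees the child call is scoped to the same project and location as the Model it was invoked on.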

google/cloud/aiplatform/models.py (outdated; resolved)
0
].resource_name
else:
# TODO: validate evaluation_id
Member:

Preference to remove this as we can rely on the service to validate.

else:
    # TODO: validate evaluation_id
    evaluation_resource_name = (
        f"{self.resource_name}/evaluations/{evaluation_id}"
Member:

Preference to rely on the Model resource-name parser and formatter. Something like:

    parts = self._parse_resource_name(self.resource_name)
    eval_resource_name = ModelEvaluation._format_resource_name(**parts, evaluation=evaluation_id)
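The parse-then-format pattern the reviewer suggests can be illustrated with a self-contained sketch; the regex and function names below are stand-ins for the SDK's internal `_parse_resource_name` / `_format_resource_name` helpers, not the real implementations:

```python
import re

# Hypothetical stand-ins for the SDK's resource-name helpers: derive the
# evaluation name from parsed Model name parts instead of concatenating
# strings onto self.resource_name.

MODEL_PATTERN = re.compile(
    r"projects/(?P<project>[^/]+)/locations/(?P<location>[^/]+)"
    r"/models/(?P<model>[^/]+)"
)

def parse_model_resource_name(name):
    # Returns {"project": ..., "location": ..., "model": ...}.
    return MODEL_PATTERN.fullmatch(name).groupdict()

def format_evaluation_resource_name(project, location, model, evaluation):
    return (
        f"projects/{project}/locations/{location}"
        f"/models/{model}/evaluations/{evaluation}"
    )

parts = parse_model_resource_name(
    "projects/my-proj/locations/us-central1/models/123"
)
print(format_evaluation_resource_name(**parts, evaluation="456"))
# prints projects/my-proj/locations/us-central1/models/123/evaluations/456
```

Going through a parser and formatter validates the input name and keeps the path template in one place, rather than scattered across f-strings.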

ModelEvaluation resource.
"""
if not evaluation_id:
evaluation_resource_name = self.list_model_evaluations(self.resource_name)[
Member:

Preference to warn if more than one model evaluation is present and log that we are returning this particular model eval resource name.

Additionally, this can just be returned directly.

@sararob sararob requested a review from sasha-gitg April 18, 2022 17:50
@sararob sararob requested a review from a team as a code owner April 19, 2022 19:41
@@ -41,6 +41,7 @@
from google.cloud.aiplatform.metadata import metadata
from google.cloud.aiplatform.models import Endpoint
from google.cloud.aiplatform.models import Model
from google.cloud.aiplatform.model_evaluation import ModelEvaluation
Member:

Copyright year update.

_format_resource_name_method = "model_evaluation_path"

@property
def metrics(self) -> Optional[Dict[str, Any]]:
Member:

I think this return type is struct_pb2.Value based on the proto definition.

List[model_evaluation.ModelEvaluation]: List of ModelEvaluation resources
for the model.
"""
parent = utils.full_resource_name(
Member:

I don't think this should be necessary. The resource name can be passed directly.


return model_evaluation.ModelEvaluation._list(
    parent=parent,
)
Member:

self.credentials should be forwarded here.
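Forwarding the instance credentials means the nested list call authenticates exactly as the Model object does. A minimal sketch of the idea (the `_list` signature and class here are illustrative, not the SDK's actual API):

```python
# Illustrative sketch: pass the Model instance's credentials through to
# the child call so both use the same identity.

def _list(parent, credentials=None):
    # Stand-in for ModelEvaluation._list; just echoes its arguments.
    return {"parent": parent, "credentials": credentials}

class Model:
    def __init__(self, resource_name, credentials=None):
        self.resource_name = resource_name
        self.credentials = credentials

    def list_model_evaluations(self):
        # Forward self.credentials rather than letting the child call
        # fall back to ambient/default credentials.
        return _list(parent=self.resource_name,
                     credentials=self.credentials)

m = Model("projects/p/locations/l/models/1", credentials="creds-token")
print(m.list_model_evaluations()["credentials"])
```

Without the forwarding, a Model constructed with explicit credentials would silently list evaluations under a different identity.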

ModelEvaluation resource.
"""
if not evaluation_id:
evaluation_resource_name = self.list_model_evaluations()[0].resource_name
Member:

This can throw an exception if there are no model evaluations. Also, there is no need to do the additional GET call below:

    evals = self.list_model_evaluations()

    if len(evals) > 1:
        _LOGGER.warning(
            f"Your model has more than one model evaluation, this is returning only one evaluation resource: {evals[0].resource_name}"
        )

    return evals[0] if evals else evals
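The suggested logic (list once, warn when several evaluations exist, return the first without a second GET, and tolerate an empty list) can be sketched as a standalone function; the name and list-of-strings argument are illustrative, since in the SDK this would live on the Model class and operate on ModelEvaluation objects:

```python
import logging

_LOGGER = logging.getLogger(__name__)

def get_first_model_evaluation(evals):
    # Sketch of the reviewer's suggestion: a single list call, a warning
    # when the model has multiple evaluations, and a None-safe return
    # instead of indexing into a possibly empty list.
    if len(evals) > 1:
        _LOGGER.warning(
            "Your model has more than one model evaluation; returning "
            "only the first one: %s", evals[0]
        )
    return evals[0] if evals else None

print(get_first_model_evaluation(["eval-1", "eval-2"]))
# prints eval-1 (plus a warning on stderr)
```

Returning `None` for the empty case (rather than letting `evals[0]` raise `IndexError`) matches the concern raised at the start of the comment.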

Contributor (author):

Good point re: models w/ no evaluations. I think I've fixed this.

@sararob sararob requested a review from sasha-gitg April 19, 2022 23:46
@sasha-gitg sasha-gitg merged commit 10f95cd into googleapis:main Apr 20, 2022