feat: add ModelEvaluation support #1167
Conversation
google/cloud/aiplatform/models.py
Outdated
def list_model_evaluations(
    self,
    project: Optional[str] = None,
project, location, and credentials shouldn't be necessary since they can be referenced from this Model instance.
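A minimal sketch of the slimmer signature being suggested (body elided; the attribute names on the Model instance are assumptions here):
from typing import List

def list_model_evaluations(
    self,
) -> List["model_evaluation.ModelEvaluation"]:
    # project, location, and credentials are read off this Model instance
    # (self.project, self.location, self.credentials), so callers no
    # longer need to pass them in.
    ...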
google/cloud/aiplatform/models.py
Outdated
    0
].resource_name
else:
    # TODO: validate evaluation_id
Preference to remove this as we can rely on the service to validate.
google/cloud/aiplatform/models.py
Outdated
else:
    # TODO: validate evaluation_id
    evaluation_resource_name = (
        f"{self.resource_name}/evaluations/{evaluation_id}"
Preference to rely on the Model resource name parsers and formatters. Something like:
parts = self._parse_resource_name(self.resource_name)
eval_resource_name = ModelEvaluation._format_resource_name(**parts, evaluation=evaluation_id)
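In context, that could look roughly like this (a sketch only; the exact keyword accepted by the path formatter is an assumption):
else:
    # Derive project/location/model from this Model's own resource name,
    # then let the ModelEvaluation path formatter assemble the full name.
    parts = self._parse_resource_name(self.resource_name)
    evaluation_resource_name = model_evaluation.ModelEvaluation._format_resource_name(
        evaluation=evaluation_id, **parts
    )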
google/cloud/aiplatform/models.py
Outdated
    ModelEvaluation resource.
"""
if not evaluation_id:
    evaluation_resource_name = self.list_model_evaluations(self.resource_name)[
Preference to warn if more than one model evaluation is present and log which model evaluation resource name we are returning.
Additionally, this can just be returned directly.
@@ -41,6 +41,7 @@
from google.cloud.aiplatform.metadata import metadata
from google.cloud.aiplatform.models import Endpoint
from google.cloud.aiplatform.models import Model
from google.cloud.aiplatform.model_evaluation import ModelEvaluation
Copyright year update.
_format_resource_name_method = "model_evaluation_path"

@property
def metrics(self) -> Optional[Dict[str, Any]]:
I think this return type is struct_pb2.Value, based on the proto definition.
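A sketch of what that would look like, assuming the wrapped proto is held on self._gca_resource as it is for other resource classes in this SDK:
from google.protobuf import struct_pb2

@property
def metrics(self) -> struct_pb2.Value:
    # The ModelEvaluation proto defines `metrics` as google.protobuf.Value,
    # so the property surfaces that type rather than a plain dict.
    return self._gca_resource.metrics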
google/cloud/aiplatform/models.py
Outdated
    List[model_evaluation.ModelEvaluation]: List of ModelEvaluation resources
    for the model.
"""
parent = utils.full_resource_name(
I don't think this should be necessary. The resource name can be passed directly.
return model_evaluation.ModelEvaluation._list(
    parent=parent,
)
self.credentials should be forwarded here.
google/cloud/aiplatform/models.py
Outdated
    ModelEvaluation resource.
"""
if not evaluation_id:
    evaluation_resource_name = self.list_model_evaluations()[0].resource_name
This can throw an exception if there are no model evaluations. Also, there is no need to do the additional GET call below:
evals = self.list_model_evaluations()
if len(evals) > 1:
    _LOGGER.warning(
        f"Your model has more than one model evaluation, this is returning only one evaluation resource: {evals[0].resource_name}"
    )
return evals[0] if evals else evals
Good point re: models w/ no evaluations. I think I've fixed this.
Added initial support for the ModelEvaluation gapic client:
- ModelEvaluation class defined in model_evaluation/
- ModelEvaluation class in models.py
Fixes b/224573576 🦕
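For reference, a hypothetical end-to-end usage sketch of the surface discussed above (project, location, and model ID are placeholders; names follow the diffs in this review):
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

model = aiplatform.Model("1234567890")  # placeholder model ID

# List every ModelEvaluation attached to this model.
for evaluation in model.list_model_evaluations():
    print(evaluation.resource_name)

# Fetch a single evaluation (the first one when no evaluation_id is given).
evaluation = model.get_model_evaluation()
print(evaluation.metrics)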