Fix metadata_update for verified evaluations #1214
Conversation
The documentation is not available anymore as the PR was closed or merged.
src/huggingface_hub/repocard_data.py
Outdated
@@ -514,12 +516,22 @@ def eval_results_to_model_index(
     # Here, we make a map of those pairs and the associated EvalResults.
     task_and_ds_types_map = defaultdict(list)
     for eval_result in eval_results:
-        task_and_ds_pair = (eval_result.task_type, eval_result.dataset_type)
+        task_and_ds_pair = (
+            eval_result.task_type,
The unique combination for evaluation results is (task, dataset, config, split), so we extend the logic here to avoid erasing this information from the evaluation results.
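To make this concrete, here is a rough sketch of the extended grouping (the diff above is truncated); `group_eval_results` is just a hypothetical helper name used for illustration, not the function in the PR:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def group_eval_results(eval_results: List["EvalResult"]) -> Dict[Tuple, List["EvalResult"]]:
    """Group eval results on the full identifying tuple instead of only
    (task_type, dataset_type), so that config and split are not erased
    when results are merged into the model index."""
    task_and_ds_types_map = defaultdict(list)
    for eval_result in eval_results:
        key = (
            eval_result.task_type,
            eval_result.dataset_type,
            eval_result.dataset_config,
            eval_result.dataset_split,
        )
        task_and_ds_types_map[key].append(eval_result)
    return task_and_ds_types_map
```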
Nice work solving this bug @lewtun! Do you see any way we could avoid a breakage in the future if another field is added to EvalResult? Or is that unlikely to happen?
Just had a look at the code; I guess it would make sense to add dataset_revision and dataset_args as well, right?
Maybe we could have an EvalResult.unique_identifier property that would be a tuple/hash depending on the task and dataset properties: task_type, dataset_type, dataset_config, ... Because in the end, an EvalResult is "just" a config (task + dataset) and an associated value, right (plus a few attributes for verification)? I don't know how we would name it, but it would also be very convenient for the is_equal_except_value method.
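For reference, here is a minimal sketch of what such a property could look like on EvalResult (field names are taken from this thread, the dataclass is heavily trimmed, and the exact shape in the actual code may differ):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class EvalResult:
    task_type: str
    dataset_type: str
    dataset_config: Optional[str] = None
    dataset_split: Optional[str] = None
    dataset_revision: Optional[str] = None
    # ... other fields (task_name, dataset_name, metric_type, metric_value, verified, ...)

    @property
    def unique_identifier(self) -> Tuple:
        """Tuple identifying the (task, dataset) combination this result was computed on."""
        return (
            self.task_type,
            self.dataset_type,
            self.dataset_config,
            self.dataset_split,
            self.dataset_revision,
        )
```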
cc @nateraw who worked on that
Good idea! Added in 8e4eae2
Hi @lewtun, thanks for reporting the bugs and sending a PR for them. Sorry for not answering earlier; it took me some time to dig into it and understand how eval results work in the card metadata. I like your suggested solution, but before merging I'd like to discuss a bit how we could (hopefully) avoid this type of issue in the future.
And I agree about the "add regression test" item from the PR todo list. I feel we definitely need more tests in the eval results module. Sorry that you are the one experiencing the issues here, but I think it's really good that you are catching them now rather than noticing too late that we mixed up some metadata 😄
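To illustrate the kind of regression test being discussed, here is a rough sketch; it assumes the EvalResult fields and the eval_results_to_model_index helper from repocard_data.py, and the dataset/metric values are made up:

```python
from huggingface_hub.repocard_data import EvalResult, eval_results_to_model_index

def test_results_with_different_configs_are_not_merged():
    # Two results share task_type/dataset_type but differ in dataset_config:
    # before this fix they were collapsed into a single model-index entry.
    results = [
        EvalResult(
            task_type="text-classification",
            dataset_type="glue",
            dataset_name="GLUE",
            dataset_config="mnli",
            dataset_split="validation",
            metric_type="accuracy",
            metric_value=0.9,
        ),
        EvalResult(
            task_type="text-classification",
            dataset_type="glue",
            dataset_name="GLUE",
            dataset_config="sst2",
            dataset_split="validation",
            metric_type="accuracy",
            metric_value=0.8,
        ),
    ]
    model_index = eval_results_to_model_index("dummy-model", results)
    # One model entry, with two distinct (task, dataset) result entries.
    assert len(model_index) == 1
    assert len(model_index[0]["results"]) == 2
```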
Sorry for the delay on this - got stuck with travelling last week. Should now be good for another review based on our discussion offline!
@@ -127,7 +127,7 @@
       value: 0.2662102282047272
       name: Accuracy
       config: default
-      verified: false
+      verified: true
For simplicity I decided to make this dummy eval "verified" so I could reuse the existing test cases of ModelCardTest. Let me know if you'd prefer a separate test class :)
Oh, some tests are failing from my changes - will take a look!
            self.dataset_type,
            self.dataset_config,
            self.dataset_split,
            self.dataset_revision,
I decided to exclude dataset_args from this tuple because it's generally a dict, which causes hashing errors when we try to use it to access keys here. IMO dataset_args is unlikely to be used much for defining a model evaluation, since the config, split, etc. capture the salient info.
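A minimal illustration of the hashing problem (not taken from the PR):

```python
# dataset_args is typically a dict, and a tuple containing a dict cannot be
# used as a dictionary key because dicts are unhashable.
identifier = ("text-classification", "glue", {"max_samples": 100})
try:
    lookup = {identifier: "some eval result"}
except TypeError as err:
    print(err)  # -> unhashable type: 'dict'
```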
Totally fine with me 👍
Fixed! Not sure what to do about …
Codecov Report
Base: 84.37% // Head: 84.39% // Increases project coverage by +0.01%.

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #1214      +/-   ##
==========================================
+ Coverage   84.37%   84.39%   +0.01%
==========================================
  Files          44       44
  Lines        4365     4370       +5
==========================================
+ Hits         3683     3688       +5
  Misses        682      682

☔ View full report at Codecov.
Thanks @lewtun for making the changes and fixing the tests!
I made one last suggestion, but apart from that it's all good :)
src/huggingface_hub/repocard_data.py
Outdated
     for eval_result_identifier, results in task_and_ds_types_map.items():
         data = {
             "task": {
-                "type": task_type,
+                "type": eval_result_identifier[0],
                 "name": results[0].task_name,
             },
             "dataset": {
                 "name": results[0].dataset_name,
-                "type": dataset_type,
-                "config": results[0].dataset_config,
-                "split": results[0].dataset_split,
-                "revision": results[0].dataset_revision,
+                "type": eval_result_identifier[1],
+                "config": eval_result_identifier[2],
+                "split": eval_result_identifier[3],
+                "revision": eval_result_identifier[4],
@lewtun Small change, but here I would take all the information from the first item of results. I think it's clearer which metadata we are using (instead of eval_result_identifier[...]). And imagine in the future we return a hash instead of a tuple (for example if dataset_config has to be hashed): we wouldn't have to change this snippet. What do you think?
Suggested change:

-    for eval_result_identifier, results in task_and_ds_types_map.items():
-        data = {
-            "task": {
-                "type": eval_result_identifier[0],
-                "name": results[0].task_name,
-            },
-            "dataset": {
-                "name": results[0].dataset_name,
-                "type": eval_result_identifier[1],
-                "config": eval_result_identifier[2],
-                "split": eval_result_identifier[3],
-                "revision": eval_result_identifier[4],
+    for results in task_and_ds_types_map.values():
+        # All items from `results` share same metadata
+        sample_result = results[0]
+        data = {
+            "task": {
+                "type": sample_result.task_type,
+                "name": sample_result.task_name,
+            },
+            "dataset": {
+                "name": sample_result.dataset_name,
+                "type": sample_result.dataset_type,
+                "config": sample_result.dataset_config,
+                "split": sample_result.dataset_split,
+                "revision": sample_result.dataset_revision,
Good idea! Added in b2bb6d6 because GitHub wouldn't let me commit your suggestion in the UI
Nice! CI is green, so feel free to merge :) Thanks again for fixing this!
Fixes #1210 and #1208
TODO