Fix metadata_update for verified evaluations #1214

Merged

lewtun merged 11 commits into main from fix-metadata-update on Nov 29, 2022

Conversation

lewtun (Member) commented Nov 21, 2022

Fixes #1210 and #1208

TODO

  • Add regression tests
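
For context, issue #1210 (linked above) describes metadata_update() duplicating metrics when a single field is updated from None. Below is a rough sketch of the kind of call that triggered it, with a made-up repo id and metric values; this is not the regression test added in this PR:

    from huggingface_hub import metadata_update

    # Hypothetical example: flipping a single field such as `verified` on an eval
    # result that already exists in the model card. Before this fix, results were
    # grouped only by (task_type, dataset_type), so config/split information could
    # be lost and metrics duplicated instead of updated in place.
    metadata_update(
        "user/my-cool-model",  # hypothetical repo id
        {
            "model-index": [
                {
                    "name": "my-cool-model",
                    "results": [
                        {
                            "task": {"type": "text-classification", "name": "Text Classification"},
                            "dataset": {"name": "IMDb", "type": "imdb", "config": "plain_text", "split": "test"},
                            "metrics": [{"type": "accuracy", "value": 0.91, "name": "Accuracy", "verified": True}],
                        }
                    ],
                }
            ]
        },
        overwrite=True,
    )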

HuggingFaceDocBuilderDev commented Nov 21, 2022

The documentation is not available anymore as the PR was closed or merged.

lewtun changed the title from "Fix metadata_update for verified evaluations" to "[WIP] Fix metadata_update for verified evaluations" on Nov 21, 2022

@@ -514,12 +516,22 @@ def eval_results_to_model_index(
     # Here, we make a map of those pairs and the associated EvalResults.
     task_and_ds_types_map = defaultdict(list)
     for eval_result in eval_results:
-        task_and_ds_pair = (eval_result.task_type, eval_result.dataset_type)
+        task_and_ds_pair = (
+            eval_result.task_type,

lewtun (Member, Author):

The unique combination for evaluation results is (task, dataset, config, split), so we extend the logic here to avoid erasing this information from the evaluation results
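
For reference, a minimal sketch of this grouping change, written as a standalone helper (group_eval_results is a hypothetical name used here for illustration; the merged code keeps the logic inline and later moves the key into an EvalResult property):

    from collections import defaultdict

    def group_eval_results(eval_results):
        # Bucket eval results by the full (task, dataset, config, split) combination
        # instead of just (task_type, dataset_type), so config/split information is
        # no longer collapsed away when rebuilding the model index.
        task_and_ds_types_map = defaultdict(list)
        for eval_result in eval_results:
            key = (
                eval_result.task_type,
                eval_result.dataset_type,
                eval_result.dataset_config,
                eval_result.dataset_split,
            )
            task_and_ds_types_map[key].append(eval_result)
        return task_and_ds_types_map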

Wauplin (Contributor):

Nice work solving this bug @lewtun! Do you see any way we could avoid breaking this again in the future if another field is added to EvalResult? Or is that most likely not going to happen?

Wauplin (Contributor):

Just had a look at the code; I guess it would make sense to add dataset_revision and dataset_args as well, right?

Maybe we could have an EvalResult.unique_identifier property that would be a tuple/hash depending on the task and dataset properties: task_type, dataset_type, dataset_config, ... Because in the end, an EvalResult is "just" a config (task + dataset) plus a value associated with it, right (plus a few attributes for verification)? I don't know how we should name it, but it would also be very convenient for the is_equal_except_value method.

Wauplin (Contributor):

cc @nateraw who worked on that

lewtun (Member, Author):

Good idea! Added in 8e4eae2

Wauplin (Contributor) left a comment:

Hi @lewtun, thanks for reporting the bugs and sending a PR for them. Sorry for not answering earlier; it took me some time to dig into it and understand how eval results work in the card metadata. I like your suggested solution, but before merging I'd like to discuss a bit how we could (hopefully) avoid this type of issue in the future.

And I agree about the "Add regression tests" item from the PR todo list: we definitely need more tests in the eval results module. Sorry that you are the one experiencing the issues here, but it's really good that you are catching them now rather than noticing too late that we mixed up some metadata 😄

lewtun (Member, Author) left a comment:

Sorry for the delay on this - got stuck with travelling last week. Should now be good for another review based on our discussion offline!

@@ -127,7 +127,7 @@
   value: 0.2662102282047272
   name: Accuracy
   config: default
-  verified: false
+  verified: true

lewtun (Member, Author):

For simplicity I decided to make this dummy eval "verified" so I could reuse the existing test cases of ModelCardTest. Let me know if you'd prefer a separate test class :)
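
To make the test intent concrete, here is a rough sketch of the kind of round-trip check this fixture enables (illustrative only, not the actual test from this PR; the task, dataset and model names are made up, while the metric values mirror the fixture above):

    from huggingface_hub.repocard_data import (
        EvalResult,
        eval_results_to_model_index,
        model_index_to_eval_results,
    )

    # A verified eval result similar to the dummy fixture above.
    result = EvalResult(
        task_type="text-classification",   # hypothetical task
        task_name="Text Classification",
        dataset_type="imdb",               # hypothetical dataset
        dataset_name="IMDb",
        metric_type="accuracy",
        metric_value=0.2662102282047272,
        metric_name="Accuracy",
        metric_config="default",
        verified=True,
    )

    # Round-trip through the model index and check nothing is lost or duplicated.
    index = eval_results_to_model_index("my-cool-model", [result])
    _, round_tripped = model_index_to_eval_results(index)
    assert len(round_tripped) == 1
    assert round_tripped[0].verified is True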

lewtun changed the title from "[WIP] Fix metadata_update for verified evaluations" to "Fix metadata_update for verified evaluations" on Nov 28, 2022

lewtun (Member, Author) commented Nov 28, 2022

Oh, some tests are failing from my changes - will take a look!

            self.dataset_type,
            self.dataset_config,
            self.dataset_split,
            self.dataset_revision,

lewtun (Member, Author):

I decided to exclude dataset_args from this tuple because it's generally a dict, which causes hashing errors when we try to use it to access keys here.

IMO dataset_args is unlikely to be used much for defining a model evaluation, since the config, split, etc. capture the salient info.
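
For reference, a rough sketch of what such a key property can look like, using a simplified stand-in for the real EvalResult dataclass (illustrative only, not a verbatim copy of the merged code):

    from dataclasses import dataclass
    from typing import Any, Dict, Optional, Tuple

    @dataclass
    class EvalResult:
        task_type: str
        dataset_type: str
        dataset_config: Optional[str] = None
        dataset_split: Optional[str] = None
        dataset_revision: Optional[str] = None
        dataset_args: Optional[Dict[str, Any]] = None
        # ... other fields (metric_type, metric_value, verified, ...) omitted here

        @property
        def unique_identifier(self) -> Tuple:
            # Hashable key for the (task, dataset) combination this result belongs to.
            # dataset_args is deliberately excluded: it is generally a dict, and dicts
            # are unhashable, so the key could not be used to index a mapping.
            return (
                self.task_type,
                self.dataset_type,
                self.dataset_config,
                self.dataset_split,
                self.dataset_revision,
            )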

Wauplin (Contributor):

Totally fine with me 👍

lewtun (Member, Author) commented Nov 28, 2022

Fixed! Not sure what to do about codecov though ...

codecov bot commented Nov 28, 2022

Codecov Report

Base: 84.37% // Head: 84.39% // Increases project coverage by +0.01% 🎉

Coverage data is based on head (7ec1406) compared to base (b33c1f2).
Patch coverage: 100.00% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1214      +/-   ##
==========================================
+ Coverage   84.37%   84.39%   +0.01%     
==========================================
  Files          44       44              
  Lines        4365     4370       +5     
==========================================
+ Hits         3683     3688       +5     
  Misses        682      682              
Impacted Files                           Coverage Δ
src/huggingface_hub/repocard.py          95.74% <100.00%> (+0.04%) ⬆️
src/huggingface_hub/repocard_data.py     98.52% <100.00%> (+0.03%) ⬆️

Wauplin (Contributor) left a comment:

Thanks @lewtun for making the changes and fixing the tests!
I made one last suggestion, but apart from that it's all good :)

Comment on lines 534 to 545

     for eval_result_identifier, results in task_and_ds_types_map.items():
         data = {
             "task": {
-                "type": task_type,
+                "type": eval_result_identifier[0],
                 "name": results[0].task_name,
             },
             "dataset": {
                 "name": results[0].dataset_name,
-                "type": dataset_type,
-                "config": results[0].dataset_config,
-                "split": results[0].dataset_split,
-                "revision": results[0].dataset_revision,
+                "type": eval_result_identifier[1],
+                "config": eval_result_identifier[2],
+                "split": eval_result_identifier[3],
+                "revision": eval_result_identifier[4],

Wauplin (Contributor):

@lewtun Small change, but here I would take all the information from the first item of results. I think it makes it clearer which metadata we are using (instead of eval_result_identifier[...]). And if in the future we return a hash instead of a tuple (for example if dataset_config has to be hashed), we wouldn't have to change this snippet. What do you think?

Suggested change

-    for eval_result_identifier, results in task_and_ds_types_map.items():
-        data = {
-            "task": {
-                "type": eval_result_identifier[0],
-                "name": results[0].task_name,
-            },
-            "dataset": {
-                "name": results[0].dataset_name,
-                "type": eval_result_identifier[1],
-                "config": eval_result_identifier[2],
-                "split": eval_result_identifier[3],
-                "revision": eval_result_identifier[4],
+    for results in task_and_ds_types_map.values():
+        # All items from `results` share same metadata
+        sample_result = results[0]
+        data = {
+            "task": {
+                "type": sample_result.task_type,
+                "name": sample_result.task_name,
+            },
+            "dataset": {
+                "name": sample_result.dataset_name,
+                "type": sample_result.dataset_type,
+                "config": sample_result.dataset_config,
+                "split": sample_result.dataset_split,
+                "revision": sample_result.dataset_revision,

lewtun (Member, Author):

Good idea! Added in b2bb6d6 because GitHub wouldn't let me commit your suggestion in the UI

Wauplin (Contributor):

Nice! CI is green so feel free to merge :) Thanks again for fixing this!

lewtun merged commit 169e99d into main on Nov 29, 2022
lewtun deleted the fix-metadata-update branch on November 29, 2022, 05:47

Successfully merging this pull request may close these issues:

  • metadata_update() duplicates metrics when updating a single field from None
3 participants