
FIX overwriting metadata when both verified and unverified reported values #1186

Merged
Wauplin merged 2 commits into main from 1185-fix-metadata-update-insert-verified-true on Nov 15, 2022

Conversation

@Wauplin (Contributor) commented Nov 14, 2022

Should fix #1185.

With this PR, two eval results are considered to describe the same object if and only if all of their attributes are equal except the metric value itself (see `is_equal_except_value`). Otherwise, the value is not overwritten.
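For illustration, here is a minimal sketch of that comparison logic, using a simplified stand-in for `EvalResult`; the real class in `huggingface_hub` has more fields and its implementation details may differ:

```python
from dataclasses import dataclass, fields
from typing import Optional


@dataclass
class EvalResult:
    # Simplified stand-in for huggingface_hub's EvalResult (the real class has more fields).
    task_type: str
    dataset_type: str
    dataset_name: str
    metric_type: str
    metric_value: float
    verified: Optional[bool] = None

    def is_equal_except_value(self, other: "EvalResult") -> bool:
        """Return True if `other` describes the same metric, ignoring the value itself."""
        for field in fields(self):
            if field.name == "metric_value":
                continue
            if getattr(self, field.name) != getattr(other, field.name):
                return False
        return True


# A self-reported and a "verified" accuracy result differ in `verified`,
# so they are treated as distinct entries and neither overwrites the other.
reported = EvalResult("text-classification", "glue", "GLUE", "accuracy", 0.91)
verified = EvalResult("text-classification", "glue", "GLUE", "accuracy", 0.90, verified=True)
assert not reported.is_equal_except_value(verified)
```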

cc @lewtun for review

@Wauplin requested review from lewtun and nateraw on November 14, 2022 16:49
@HuggingFaceDocBuilder commented Nov 14, 2022

The documentation is not available anymore as the PR was closed or merged.

@lewtun (Member) left a comment

Thanks a lot for the quick fix on this bug @Wauplin 🔥 !

I've tested the PR against my own model and can confirm it works as expected: https://huggingface.co/autoevaluate/binary-classification/discussions/58/files

The code itself also LGTM!

Inline review on the following lines of the test model card in the PR diff:

verified: true
---

This is a test model card.
A reviewer (Member) commented:

Small suggestion to help community developers understand what this is about:

Suggested change:
- This is a test model card.
+ This is a test model card containing one self-reported metric and one "verified" metric in the format produced by Hugging Face's [model evaluation service](https://huggingface.co/spaces/autoevaluate/model-evaluator).

@nateraw (Contributor) commented Nov 14, 2022

Seeing failing test here:

FAILED ../tests/test_repocard.py::RepocardMetadataUpdateTest::test_update_existing_result_without_overwrite - AssertionError: Regex pattern did not match.
 Regex: "You passed a new value for the existing metric 'name: Accuracy, type: accuracy'. Set `overwrite=True` to overwrite existing metrics."
 Input: "You passed a new value for the existing metric 'name: Accuracy, type:  accuracy'. Set `overwrite=True` to overwrite existing metrics."

@codecov (bot) commented Nov 15, 2022

Codecov Report

Base: 84.00% // Head: 84.03% // Increases project coverage by +0.02% 🎉

Coverage data is based on head (706c629) compared to base (711f688).
Patch coverage: 100.00% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1186      +/-   ##
==========================================
+ Coverage   84.00%   84.03%   +0.02%     
==========================================
  Files          44       44              
  Lines        4321     4327       +6     
==========================================
+ Hits         3630     3636       +6     
  Misses        691      691              
| Impacted Files | Coverage Δ |
|---|---|
| src/huggingface_hub/repocard.py | 95.69% <100.00%> (-0.03%) ⬇️ |
| src/huggingface_hub/repocard_data.py | 98.49% <100.00%> (+0.08%) ⬆️ |


@Wauplin merged commit 131fd35 into main on Nov 15, 2022
@Wauplin deleted the 1185-fix-metadata-update-insert-verified-true branch on November 15, 2022 08:49
@julien-c (Member) commented

Pinging @coyotte508 and @allendorf to make sure this is also the behavior that makes sense, considering the server-side implementation of verified eval results.

(I'm a bit fuzzy on this subject myself.)

@lewtun (Member) commented Nov 15, 2022

> Pinging @coyotte508 and @allendorf to make sure this is also the behavior that makes sense, considering the server-side implementation of verified eval results.
>
> (I'm a bit fuzzy on this subject myself.)

I think this will become clearer once I refactor AutoTrain to include the new model card API in https://github.com/huggingface/autotrain-backend/pull/823 :)

Wauplin pushed a commit that referenced this pull request Jan 25, 2024
* Disable tqdm progress bar if no TTY attached

When dockerized applications write to STDOUT/STDERR, the applications can block due to logging back pressure (see https://docs.docker.com/config/containers/logging/configure/#configure-the-delivery-mode-of-log-messages-from-container-to-log-driver).

HuggingFace's TGI container is one such example (see huggingface/text-generation-inference#1186).

Setting tqdm's `disable=None` will disable the progress bar if no TTY is
attached and helps resolve TGI's issue #1186 (see the sketch after this commit message).

References:
    huggingface/text-generation-inference#1186 (comment)
    huggingface/text-generation-inference#1186 (comment)

* Disable tqdm progress bar if no TTY attached in lfs.py
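A minimal sketch of the tqdm behavior the commit above relies on; the loop and the stream check are illustrative only. With `disable=None`, tqdm shows a progress bar only when the output stream is attached to a TTY:

```python
import sys

from tqdm.auto import tqdm

# With disable=None, tqdm checks whether the output stream is a TTY and silently
# disables the bar when it is not (e.g. inside a Docker container whose logs are
# piped), avoiding the logging back pressure described in the commit message.
for _ in tqdm(range(1000), disable=None):
    pass

print("stderr is a TTY:", sys.stderr.isatty())
```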
Development

Successfully merging this pull request may close these issues.

metadata_update() function inserts verified=True in self-reported evaluations (#1185)