[Pipelines] Add revision tag to all default pipelines #17667
Conversation
The documentation is not available anymore as the PR was closed or merged.
LGTM
Perfect, LGTM!
Co-authored-by: Julien Chaumond <julien@huggingface.co>
…to revision_tags_for_default_pipeline
…om/patrickvonplaten/transformers into revision_tags_for_default_pipeline
"type": "text", | ||
}, | ||
"zero-shot-classification": { | ||
"impl": ZeroShotClassificationPipeline, | ||
"tf": (TFAutoModelForSequenceClassification,) if is_tf_available() else (), | ||
"pt": (AutoModelForSequenceClassification,) if is_torch_available() else (), | ||
"default": { | ||
"model": {"pt": "facebook/bart-large-mnli", "tf": "roberta-large-mnli"}, | ||
"config": {"pt": "facebook/bart-large-mnli", "tf": "roberta-large-mnli"}, | ||
"tokenizer": {"pt": "facebook/bart-large-mnli", "tf": "roberta-large-mnli"}, |
@Narsil before merging this PR, I'd love to have your opinion here. I've checked the pipeline function and I don't think a "default" tokenizer is ever used. If no repo_id is provided, it seems that for the tokenizer or feature extractor it's always the model id that is used, never the tokenizer id => to me this looks like dead code here. Can you confirm?
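For readers following along, a simplified sketch of the fallback being described; this is an illustration of the claimed behavior, not the literal pipeline() source, and the helper name is made up:

def resolve_tokenizer_id(model_id, tokenizer_id=None):
    # Hypothetical helper illustrating the described fallback.
    if tokenizer_id is not None:
        # An explicitly passed tokenizer repo id wins ...
        return tokenizer_id
    # ... otherwise the model repo id is reused, which is why a
    # per-task "tokenizer" default entry would never be consulted.
    return model_id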
@slow
@require_torch
def test_load_default_pipelines_pt(self):
I test here that calling pipeline(<task_name>) indeed loads the corresponding default model. Weights are compared to be sure it's actually exactly the same model.
This test is run for all pipelines and should in general serve as a good check that all default pipelines work as expected.
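A self-contained sketch of the idea, assuming the zero-shot default shown earlier in this diff; the real test in this PR loops over every supported task, while this spot-checks a single one:

import torch
from transformers import AutoModelForSequenceClassification, pipeline

# Load the task's default pipeline, then load the expected checkpoint
# directly and verify the weights are identical tensor by tensor.
pipe = pipeline("zero-shot-classification", framework="pt")
reference = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")

for p_pipe, p_ref in zip(pipe.model.parameters(), reference.parameters()):
    assert torch.equal(p_pipe, p_ref), "default pipeline loaded unexpected weights"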
@slow
@require_tf
@require_tensorflow_probability
def test_load_default_pipelines_tf_table_qa(self):
I split the test here into table_qa and non-table_qa variants because of the scatter and tensorflow_probability dependencies, which are quite annoying; in case one of them is not installed, I still want to run all the other tasks. A sketch of the complementary variant follows.
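Under that split, the non-table-QA variant would look roughly like this (the method body is elided; the skeleton is illustrative):

from transformers.testing_utils import require_tf, slow

# Deliberately omits @require_tensorflow_probability, so a missing
# optional dependency only skips the table-QA variant, not this one.
@slow
@require_tf
def test_load_default_pipelines_tf(self):
    ...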
@@ -187,9 +190,8 @@
"tf": (TFAutoModelForTableQuestionAnswering,) if is_tf_available() else (),
"default": {
    "model": {
        "pt": "google/tapas-base-finetuned-wtq",
        "tokenizer": "google/tapas-base-finetuned-wtq",
deleted tokenizer because of https://github.com/huggingface/transformers/pull/17667/files#r910987282
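For readers skimming the diff, a hedged sketch of the shape a task default takes after this PR: the model id is paired with a pinned revision, and the separate "tokenizer"/"config" entries are gone. The short hash is a placeholder, not the revision actually pinned:

# Excerpt-style illustration only; the real mapping lives in
# src/transformers/pipelines/__init__.py.
SUPPORTED_TASKS_EXCERPT = {
    "table-question-answering": {
        "default": {
            "model": {
                "pt": ("google/tapas-base-finetuned-wtq", "abc1234"),  # placeholder hash
                "tf": ("google/tapas-base-finetuned-wtq", "abc1234"),  # placeholder hash
            },
        },
    },
}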
"tokenizer": "dandelin/vilt-b32-finetuned-vqa", | ||
"feature_extractor": "dandelin/vilt-b32-finetuned-vqa", |
deleted tokenizer and feature extractor because of https://github.com/huggingface/transformers/pull/17667/files#r910987282
PR is good to go for me. @Narsil could you please take a look at this comment: #17667 (comment) before merging? I think there is some dead code; the default tokenizer never seems to be called. Also cc @sgugger, the PR should be ready otherwise.
Thanks for working on this!
@@ -607,3 +617,125 @@ def add(number, extra=0):

outputs = [item for item in dataset]
self.assertEqual(outputs, [[{"id": 2}, {"id": 3}, {"id": 4}, {"id": 5}]])

def check_models_equal_pt(self, model1, model2):
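A plausible body for the helper above, assuming parameter-wise comparison (the PR's actual implementation may differ):

import torch

def check_models_equal_pt(self, model1, model2):
    # Two models count as equal when every corresponding parameter
    # tensor matches exactly, name and value.
    for (name1, p1), (name2, p2) in zip(model1.named_parameters(), model2.named_parameters()):
        self.assertEqual(name1, name2)
        self.assertTrue(torch.equal(p1, p2), f"parameter {name1} differs")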
You are adding all of this in a Tester that is under require_pt. I think you need to make a new Tester class, since there are TensorFlow tests too.
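A sketch of the suggested split, with illustrative class and method names; separate Tester classes let the PyTorch and TensorFlow checks be skipped independently depending on which framework is installed:

import unittest
from transformers.testing_utils import require_tf, require_torch, slow

@require_torch
class DefaultPipelinePTTests(unittest.TestCase):
    @slow
    def test_load_default_pipelines_pt(self):
        ...  # PyTorch-only weight checks go here

@require_tf
class DefaultPipelineTFTests(unittest.TestCase):
    @slow
    def test_load_default_pipelines_tf(self):
        ...  # TensorFlow-only weight checks go here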
Good catch - thanks!
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
@Narsil approved offline (tokenizer_default code is dead indeed) => merging!
* trigger test failure
* upload revision poc
* Update src/transformers/pipelines/base.py
Co-authored-by: Julien Chaumond <julien@huggingface.co>
* up
* add test
* correct some stuff
* Update src/transformers/pipelines/__init__.py
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
* correct require flag

Co-authored-by: Julien Chaumond <julien@huggingface.co>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
What does this PR do?
Fixes #17666
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.