4.39.3; ZeroShotClassificationPipeline broken. #30181
Labels
Core: Pipeline
Internals of the library; Pipeline.
Comments

Anyone? It's definitely a big blocker. It prevents us from using newer models.

cc @zucchini-nlp -- this issue is probably related to #29614
System Info

transformers version: 4.39.3
Using device_map and manually/algorithmically constructing max_memory.

Who can help?
@Narsil @ArthurZucker @gante
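The report mentions manually/algorithmically constructing max_memory alongside device_map. In Transformers/Accelerate, max_memory is a plain dict mapping device indices (plus the literal key "cpu") to memory budgets such as "22GiB" or "22118MiB". A minimal sketch of building one algorithmically; the helper name, sizes, and headroom fraction are illustrative assumptions, not taken from this issue:

```python
def build_max_memory(gpu_mib, n_gpus, cpu_mib, headroom=0.9):
    """Build a max_memory dict for from_pretrained(device_map=..., max_memory=...).

    Reserves a fraction of each GPU (headroom) to leave room for activations.
    Keys are GPU indices (ints) plus the literal string "cpu".
    """
    budget = int(gpu_mib * headroom)
    mm = {i: f"{budget}MiB" for i in range(n_gpus)}
    mm["cpu"] = f"{cpu_mib}MiB"
    return mm

max_memory = build_max_memory(gpu_mib=24576, n_gpus=2, cpu_mib=65536)
# e.g. {0: "22118MiB", 1: "22118MiB", "cpu": "65536MiB"}
```

The resulting dict can be passed as the max_memory argument to from_pretrained together with device_map="auto".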
Information

Tasks

An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
When upgrading from Transformers 4.38.2 to the latest release (4.39.3), the below error can be observed, wherein "discriminatorPipeline" is a zero-shot classification pipeline (pl = pipeline("zero-shot-classification", model=model, tokenizer=model)). "model" is an AutoModelForCausalLM, obtained with:

"tokenizer" is an AutoTokenizer, obtained with:

And finally, the pipeline is called by doing:
The error didn't happen at all in 4.38.2, and reverting from 4.39 to 4.38 fixes the issue. However, I'd like to use the /c4ai-command-r-plus model, and it seems to require 4.39.

Expected behavior
The above code should work as is, yet it does not in the newest version.
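For context when triaging: the zero-shot-classification pipeline reduces classification to NLI by pairing the input sequence with one hypothesis per candidate label (the pipeline's default hypothesis template is "This example is {}."). A pure-Python sketch of that pairing step, independent of the issue's model and code, with a hypothetical helper name:

```python
def build_nli_pairs(sequence, candidate_labels,
                    hypothesis_template="This example is {}."):
    """Expand one input into (premise, hypothesis) pairs, one per candidate
    label, mirroring how the zero-shot pipeline feeds an NLI model."""
    return [(sequence, hypothesis_template.format(label))
            for label in candidate_labels]

pairs = build_nli_pairs("The GPU ran out of memory", ["hardware", "politics"])
# [("The GPU ran out of memory", "This example is hardware."),
#  ("The GPU ran out of memory", "This example is politics.")]
```

Each pair is then tokenized and scored by the underlying model, so a regression in how these pairs are batched or tokenized between 4.38 and 4.39 would surface exactly as a pipeline-level breakage like this one.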