Import error of the newest version of transformers #29847
Comments
After I downgrade transformers to an older version, the import error goes away.
Looks like the latest release (>= 4.39) breaks compatibility with pytorch 1.13.1, because it imports LRScheduler from torch.optim.lr_scheduler, a name that only exists in torch >= 2.0 (older releases only provide the internal _LRScheduler).
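The incompatibility can be checked directly against torch, independent of transformers (a minimal diagnostic sketch, assuming only that torch is installed):

```python
# Probe which scheduler base-class name this torch build exposes.
try:
    from torch.optim.lr_scheduler import LRScheduler  # public name, added in torch 2.0
    print("LRScheduler is available (torch >= 2.0)")
except ImportError:
    from torch.optim.lr_scheduler import _LRScheduler  # internal name used before torch 2.0
    print("only _LRScheduler is available (torch < 2.0)")
```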
You are right. Since my CUDA version is 11.6, I can't check whether a newer PyTorch version has the same problem 😂
That's a regression, @younesbelkada: #29588 broke this 😢 sorry.
Yes, that would be great.
Closing as resolved in #29919. Working from a source install should work in the meantime whilst we prepare a patch release:
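A source install can be done with, for example, `pip install git+https://github.com/huggingface/transformers`.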
Change LRScheduler to _LRScheduler; on older torch versions the class is only defined under that internal name.
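A sketch of the guarded alias this suggests, so downstream code can use one name on both old and new torch (an illustration of the pattern, not the actual patch in #29919; the helper name is hypothetical):

```python
import torch
from packaging import version

# torch >= 2.0 exposes the scheduler base class as LRScheduler; older releases
# only define the internal _LRScheduler, so alias it under one name either way.
if version.parse(torch.__version__) >= version.parse("2.0"):
    from torch.optim.lr_scheduler import LRScheduler
else:
    from torch.optim.lr_scheduler import _LRScheduler as LRScheduler

def is_lr_scheduler(obj) -> bool:
    """Hypothetical helper showing the alias in use."""
    return isinstance(obj, LRScheduler)
```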
This issue seems to be back again in 4.48.0. Our pipeline started failing today and downgrading to 4.47.1 resolved the issue.
Hi @rphes, this may be caused by your version of torch being older than the minimum we now support. Can you confirm your torch version?
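For reference, the installed torch version can be printed with:

```python
import torch

print(torch.__version__)  # e.g. 1.13.1 in the setup reported above
```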
Appreciate the reply @Rocketknight1. Indeed, like the original submitter, I'm on torch 1.13.1. I can see why you would deprecate torch 1.x, but I'm surprised it has seemingly stopped working after a minor version upgrade. In any case, the issue is easily resolved by pinning the version of transformers, but I just wanted to let you know I ran into this.
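For anyone in the same situation, pinning can be as simple as constraining the requirement to `transformers<4.48` (or `transformers==4.47.1`, as above) until torch can be upgraded.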
System Info
system: Linux
transformers version: 4.39.1
torch version: 1.13.1
cuda version: 11.6
Who can help?
@muellerzr and @pacman100
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
Reproduction
Expected behavior
Just finish the task.