
Import error of the newest version of transformers #29847

Closed
2 of 4 tasks
smartliuhw opened this issue Mar 25, 2024 · 10 comments

Comments

@smartliuhw

System Info

system: Linux
transformers version: 4.39.1
torch version: 1.13.1
CUDA version: 11.6

Who can help?

@muellerzr and @pacman100

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

  1. use the lm-evaluation-harness to run evaluation tasks
  2. run Llama2-7b-hf on any of those tasks: nq_open, triviaqa, truthfulqa
  3. get those results:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1472, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/usr/lib64/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/usr/local/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 47, in <module>
    from .audio_classification import AudioClassificationPipeline
  File "/usr/local/lib/python3.8/site-packages/transformers/pipelines/audio_classification.py", line 21, in <module>
    from .base import Pipeline, build_pipeline_init_args
  File "/usr/local/lib/python3.8/site-packages/transformers/pipelines/base.py", line 34, in <module>
    from ..modelcard import ModelCard
  File "/usr/local/lib/python3.8/site-packages/transformers/modelcard.py", line 48, in <module>
    from .training_args import ParallelMode
  File "/usr/local/lib/python3.8/site-packages/transformers/training_args.py", line 75, in <module>
    from .trainer_pt_utils import AcceleratorConfig
  File "/usr/local/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 37, in <module>
    from torch.optim.lr_scheduler import LRScheduler
ImportError: cannot import name 'LRScheduler' from 'torch.optim.lr_scheduler' (/usr/local/lib64/python3.8/site-packages/torch/optim/lr_scheduler.py)

Expected behavior

Just finish the task.

@smartliuhw
Author

After I downgraded transformers to version 4.38.2, the code works well. Maybe there's a bug in the newest version.

@Phynon

Phynon commented Mar 25, 2024

Looks like the latest release (>= 4.39) breaks compatibility with PyTorch 1.13.1: torch/optim/lr_scheduler.py in that version does not export the name LRScheduler, but transformers/trainer_pt_utils.py imports it explicitly:

from torch.optim.lr_scheduler import LRScheduler

@smartliuhw
Author

> Looks like the latest release (>= 4.39) breaks compatibility with pytorch 1.13.1 because torch/optim/lr_scheduler.py in that version does not export the name LRScheduler but transformers/trainer_pt_utils.py explicitly imports it.
>
> from torch.optim.lr_scheduler import LRScheduler

You are right. Since my CUDA version is 11.6, I can't check whether a newer PyTorch version has the same problem 😂

@ArthurZucker
Collaborator

That's a regression: @younesbelkada, #29588 broke this 😢 sorry.
Do you want to open a fix for this, protecting the import behind a torch version check?
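A version-gated import like the one suggested above could look roughly like this (a minimal sketch: the helper names are illustrative, not transformers' actual code, and it assumes torch >= 2.0 exports the public name `LRScheduler` while 1.x only defines the private `_LRScheduler`):

```python
def parse_version(v: str) -> tuple:
    # "1.13.1+cu116" -> (1, 13, 1): drop any local build suffix,
    # keep at most three numeric parts, pad short versions with zeros
    parts = [int(p) for p in v.split("+")[0].split(".")[:3]]
    while len(parts) < 3:
        parts.append(0)
    return tuple(parts)

def scheduler_import_name(torch_version: str) -> str:
    # Decide which name is safe to import from torch.optim.lr_scheduler
    if parse_version(torch_version) >= (2, 0, 0):
        return "LRScheduler"   # public name, torch >= 2.0
    return "_LRScheduler"      # private name, torch 1.x

print(scheduler_import_name("1.13.1+cu116"))  # the reporter's setup
print(scheduler_import_name("2.1.0"))
```

The real fix would then import under whichever name the check selects, instead of importing `LRScheduler` unconditionally.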

@smartliuhw
Author

> That's a regression, @younesbelkada #29588 broke this 😢 sorry. Do you want to open a fix for this? Protecting the import with some torch version available?

Yes, that would be great.

@amyeroberts
Collaborator

Closing as resolved in #29919.

Working from a source install should work in the meantime whilst we prepare a patch release: pip install git+https://github.com/huggingface/transformers

@xlinsz

xlinsz commented Aug 1, 2024

Change LRScheduler to _LRScheduler; in torch 1.x the class is only defined under that private name.

@rphes

rphes commented Jan 10, 2025

This issue seems to be back again in 4.48.0. Our pipeline started failing today and downgrading to 4.47.1 resolved the issue.

@Rocketknight1
Member

Hi @rphes, this may be caused by your version of torch being older than 2.0. We have almost fully deprecated support for PyTorch 1.x at this point, and if you're on an older version we strongly recommend updating!

Can you confirm your torch version? If you're getting these issues with torch >= 2.0 then we'll investigate, but if not then we're unlikely to fix this.

@rphes

rphes commented Jan 13, 2025

Appreciate the reply @Rocketknight1. Indeed, like the original submitter, I'm on torch 1.13.1. I can see why you would deprecate torch 1.x, but I'm surprised it has seemingly stopped working after a minor version upgrade. In any case, the issue is easily resolved by pinning the version of transformers, but I just wanted to let you know I ran into this.
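The pinning described above can be captured in a requirements file so the working combination stays reproducible (a sketch using only the versions reported in this thread):

```
# Last combination reported working together in this thread;
# transformers 4.48.0 reintroduced the LRScheduler import that
# torch 1.x does not export.
transformers==4.47.1
torch==1.13.1
```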
