AttributeError: module 'torch' has no attribute 'float8_e4m3fn' #32185
Hi @Perpetue237, thanks for raising an issue! Please make sure to follow the issue template and provide:
1. The output of transformers-cli env ("Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.")
2. A code snippet that reproduces the error.
@Perpetue237 Can you share the full error traceback as well, please?
AttributeError                            Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:564, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:3903, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)
File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:4377, in PreTrainedModel._load_pretrained_model(cls, model, state_dict, loaded_keys, resolved_archive_file, pretrained_model_name_or_path, ignore_mismatched_sizes, sharded_metadata, _fast_init, low_cpu_mem_usage, device_map, offload_folder, offload_state_dict, dtype, hf_quantizer, keep_in_fp32_modules, gguf_path)
File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:871, in _load_state_dict_into_meta_model(model, state_dict, loaded_state_dict_keys, start_prefix, expected_keys, device_map, offload_folder, offload_index, state_dict_folder, state_dict_index, dtype, hf_quantizer, is_safetensors, keep_in_fp32_modules, unexpected_keys)
AttributeError: module 'torch' has no attribute 'float8_e4m3fn'
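For context: torch.float8_e4m3fn was only added in PyTorch 2.1, so on older builds the attribute lookup itself fails before any weights are loaded. A minimal way to check (a sketch, not part of the original report):

import torch

# On torch < 2.1 this prints False, and any direct reference to
# torch.float8_e4m3fn raises the AttributeError shown above.
print(hasattr(torch, "float8_e4m3fn"))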
Oh, my bad, I will fix it! It is only available after torch 2.1.
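A minimal sketch of the kind of guard this implies (not the actual patch in the PR): only touch the fp8 dtype when the running torch is new enough.

import torch
from packaging import version

# float8_e4m3fn exists only on torch >= 2.1, so gate the lookup.
if version.parse(torch.__version__) >= version.parse("2.1"):
    fp8_dtype = torch.float8_e4m3fn
else:
    fp8_dtype = None  # fp8 checkpoints cannot be materialized on this torch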
LMK if the above PR fixes it, @Perpetue237. We will patch it soon!
How do I update?
Same problem. How can I get the new version?
Solved it by running the upgrade to the newest transformers version.
Thank you! It worked. I must have missed the new version.
I have run pip install transformers==4.43.2, but the error still exists. torch version:
Did you reboot your kernel after the install?
The same error happened when I tried to load Llama 3.1 405B with fp8, with torch 2.4.0 (as in the issue mentioned above). I tried transformers 4.43.2 but it does not seem to be solved.
It works for me now with the newest version of transformers. I restarted the kernel and cleared the cache. And since I am working in a docker container, I rebuilt the container.
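For anyone in a similar containerized setup, the steps described here boil down to roughly the following (assuming pip and a Jupyter kernel; adapt to your container workflow):

pip install -U transformers   # pull the patched release
pip cache purge               # drop any stale cached wheels
# then restart the Jupyter kernel, or rebuild the docker container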
@Perpetue237 Which cuda version are you using?
cuda 11.7
Was the cuda version too old?
Yes, @kungfu-eric.
I am using cuda_12.1 but I still have this problem. I found one solution: https://www.reddit.com/r/comfyui/comments/1ettg44/comment/lipkvex/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
Hit this problem again. I am using transformers 4.45.2, cuda_12.2, torch 12.1.
torch 2.4.1+cu124
What's the error you are getting, with the traceback, @LukeLIN-web @cckuailong? It's strange that you are getting it if you've installed the latest transformers, as I switched the check to a version-dependent one.
I met the same error.
Then I'm pretty sure this is because you are using an older version of transformers. If you send me the full traceback, I will be able to confirm that!
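A quick way to confirm which versions the running kernel actually imports (a sketch; a stale kernel can report older versions than a fresh pip list does):

import torch
import transformers

# If transformers still reports < 4.43.2 after upgrading, the kernel
# is importing a stale install and needs a restart.
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)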
I can only use torch==2.0.1. How can I solve it then?
Just install the latest transformers, @Boltzmachine!
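Worth noting: on torch 2.0.1 the dtype simply does not exist, so even a patched transformers cannot materialize fp8 weights; a guard like this sketch makes the constraint explicit before loading:

import torch

# torch added float8 dtypes in 2.1; fail early with a clear message.
if not hasattr(torch, "float8_e4m3fn"):
    raise RuntimeError(
        f"torch {torch.__version__} predates float8 dtypes; "
        "loading fp8 weights requires torch >= 2.1"
    )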
Did you resolve it? I use transformers==4.46.0 and the error still exists.
System Info
AutoModelForCausalLM.from_pretrained now raises the error above.
Who can help?
No response
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
pip install transformers

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# bnb_config was not shown in the report; any quantization config reaches the same code path
bnb_config = BitsAndBytesConfig(load_in_4bit=True)

model = AutoModelForCausalLM.from_pretrained(
    model_path,           # placeholder: path or hub id of the checkpoint
    cache_dir=cache_dir,  # placeholder: local cache directory
    device_map="auto",
    quantization_config=bnb_config,
)
Expected behavior
Expected to load the model.