
AttributeError: 'Catcher' object has no attribute 'self_attn' #29352 #29783

Closed
2 of 4 tasks
andinus opened this issue Mar 21, 2024 · 10 comments · Fixed by #29934 or #29965
Labels
Core: Modeling Internals of the library; Models.

Comments

@andinus

andinus commented Mar 21, 2024

System Info

  • transformers version: 4.39 (downgrading to 4.38.2 fixes this)

  • Platform: Linux-5.4.0-163-generic-x86_64-with-glibc2.35

  • Python version: 3.10.12

  • Huggingface_hub version: 0.21.4

  • Safetensors version: 0.4.2

  • Accelerate version: 0.28.0

  • Accelerate config: not found

  • PyTorch version (GPU?): 2.1.2+cu121 (True)

  • Tensorflow version (GPU?): not installed (NA)

  • Flax version (CPU?/GPU?/TPU?): not installed (NA)

  • Jax version: not installed

  • JaxLib version: not installed

  • Using GPU in script?: yes

  • Using distributed or parallel set-up in script?: parallel

Related: #29352

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

Same as #29352

Expected behavior

Same as #29352 (downgrading to 4.38.2 fixes this)

[Screenshot of the traceback, taken 2024-03-21]

@amyeroberts
Collaborator

Hi @andinus, thanks for raising an issue!

Could you:

  • provide a minimal code snippet to reproduce the error?
  • share the full traceback as text, rather than a screenshot? This makes the errors searchable and enables us to more easily debug as we can copy-paste segments.

cc @ArthurZucker as it seems like a possible regression
cc @younesbelkada as it seems possibly quantization related

@ArthurZucker
Collaborator

It's not really a regression. As I mentioned on the other PR, autoawq removes the self_attn modules entirely, which we don't expect. Let's open an issue in AWQ; we accommodated this last time because the release was coming, but long term they are breaking the API!
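The breakage described here can be reproduced in miniature. Note that `Catcher` and `DecoderLayer` below are simplified mocks for illustration, not AutoAWQ's or transformers' actual classes; the point is that `hasattr()` only guards the lookup of its second argument, so the `.self_attn` access on the wrapper raises first:

```python
from torch import nn

class Catcher(nn.Module):
    """Mock of AutoAWQ's Catcher: wraps the first decoder layer to capture
    calibration inputs, hiding the original layer's submodules."""
    def __init__(self, module: nn.Module):
        super().__init__()
        self.module = module  # the original layer now lives under a new name

class DecoderLayer(nn.Module):
    """Minimal mock of a transformers decoder layer."""
    def __init__(self):
        super().__init__()
        self.self_attn = nn.Linear(4, 4)  # placeholder for the attention block

layers = nn.ModuleList([Catcher(DecoderLayer())])

# transformers 4.39's _update_causal_mask does roughly:
#     hasattr(self.layers[0].self_attn, "past_key_value")
# hasattr() only suppresses errors from the "past_key_value" lookup; the
# ".self_attn" access itself hits nn.Module.__getattr__ on the Catcher
# and raises before hasattr can intervene.
try:
    hasattr(layers[0].self_attn, "past_key_value")
except AttributeError as err:
    print(err)  # 'Catcher' object has no attribute 'self_attn'
```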

@andinus
Author

andinus commented Mar 22, 2024

* provide a minimal code snippet to reproduce the error?

* share the full traceback as text, rather than a screenshot? This makes the errors searchable and enables us to more easily debug as we can copy-paste segments.

Hello, I'm very sorry, I won't be able to provide these immediately.

OCR of the traceback:

Exception: 'Catcher' object has no attribute 'self_attn'
Traceback (most recent call last):
  File "/root/qex/framework/run.py", line 318, in child_process
    Generator(input_queue, output_queue).run()
  File "/root/qex/framework/run.py", line 284, in run
    self.quantize()
  File "/root/qex/framework/run.py", line 189, in quantize
    self.finetuningmodel_engine.quantize()
  File "/root/qex/framework/engine_vilm.py", line 129, in quantize
    model.quantize(tokenizer, quant_config=quant_config)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/awq/models/base.py", line 161, in quantize
    self.quantizer = AwqQuantizer(
  File "/usr/local/lib/python3.10/dist-packages/awq/quantize/quantizer.py", line 59, in __init__
    self.modules, self.module_kwargs, self.inps = self.init_quant()
  File "/usr/local/lib/python3.10/dist-packages/awq/quantize/quantizer.py", line 478, in init_quant
    self.model(samples.to(next(self.model.parameters()).device))
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py", line 1196, in forward
    outputs = self.model(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py", line 998, in forward
    causal_mask = self._update_causal_mask(attention_mask, inputs_embeds, cache_position)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/llama/modeling_llama.py", line 1067, in _update_causal_mask
    if hasattr(self.layers[0].self_attn, "past_key_value"):  # static cache
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1695, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'Catcher' object has no attribute 'self_attn'

@ArthurZucker
Collaborator

cc @casper-hansen is this what you mentioned in your tweet about breaking change?

@casper-hansen

Hi @ArthurZucker, yes this is one of the issues. I have released 0.2.4 which has pinned transformers<=4.38.2 as a temporary fix for quantization and inference. On the inference issue, I am not sure how to patch it without replacing the whole LlamaForCausalLM which is a big task.

This kind of pattern of accessing modules will break most (if not all) packages that try to utilize transformers to patch/optimize certain parts of the model. I would recommend creating some abstractions that avoid such direct access to modules.

past_key_values = getattr(self.model.layers[0].self_attn, "past_key_value", None)

I fixed the quantization issue, but there was another issue with inference following quantization that I did not have time to resolve. Reference: casper-hansen/AutoAWQ#407 (comment)

@ArthurZucker
Collaborator

I'll have a look.
We can fix this on our side as well; it's just a bit hard for us to anticipate that some modules will be removed 😓 Sorry anyway, this should not have happened.

Given AWQ's huge user base, it makes sense to cut another patch release to fix both issues!

@casper-hansen

Thanks @ArthurZucker, I appreciate the collaboration here to make the best of quantized models. For the time being, I will not be able to provide support for quantizing newer models (e.g. Qwen2MoE) due to these breaking changes.

Do you have an idea of when a fix could be implemented?

@ArthurZucker
Collaborator

In around 12h I'll do a fix + a patch with #29895

@ANBAYM

ANBAYM commented Mar 28, 2024

In around 12h I'll do a fix + a patch with #29895

Hi! I also hit the same issue when using AWQ to quantize the Gemma model. Please let me know when you release a fixed version! Thanks for your help.

@ArthurZucker ArthurZucker added the Core: Modeling Internals of the library; Models. label Mar 28, 2024
@TechxGenus
Contributor

This issue seems to still be unresolved.
Inference with AWQ models is now back to normal, but errors still occur when trying to quantize the Llama or Gemma models.
