
"tiiuae/falcon-7b" ValueError: The current architecture does not support Flash Attention 2.0. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new #26443

Closed
nirdoshrawal009 opened this issue Sep 27, 2023 · 7 comments

Comments

@nirdoshrawal009

System Info


ValueError Traceback (most recent call last)
Cell In[8], line 11
1 bnb_config = BitsAndBytesConfig(
2 load_in_4bit=True,
3 bnb_4bit_use_double_quant=True,
4 bnb_4bit_quant_type="nf4",
5 bnb_4bit_compute_dtype=torch.bfloat16 #4-bit quantized part of the model will be using the bfloat16 format for computations, but not other parts of the model
6 )
8 # If you want to use bfloat16 for other parts of the model as well, you should set the --bf16 flag in the training arguments.
9 # This will ensure that the relevant portions of the model, such as the language model head and embedding layers, are also converted to bfloat16,
---> 11 model = AutoModelForCausalLM.from_pretrained(
12 "tiiuae/falcon-7b", #tiiuae/falcon-7b
13 quantization_config=bnb_config,
14 device_map={"": 0},
15 trust_remote_code=True,
16 use_flash_attention_2=True,
17 )
18 model.config.pretraining_tp = 1
20 tokenizer = AutoTokenizer.from_pretrained(
21 "tiiuae/falcon-7b",
22 trust_remote_code=True
23 )

File /opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:558, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
556 else:
557 cls.register(config.__class__, model_class, exist_ok=True)
--> 558 return model_class.from_pretrained(
559 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
560 )
561 elif type(config) in cls._model_mapping.keys():
562 model_class = _get_model_class(config, cls._model_mapping)

File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:3064, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)
3061 init_contexts.append(init_empty_weights())
3063 if use_flash_attention_2:
-> 3064 config = cls._check_and_enable_flash_attn_2(config, torch_dtype=torch_dtype, device_map=device_map)
3066 with ContextManagers(init_contexts):
3067 model = cls(config, *model_args, **model_kwargs)

File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:1265, in PreTrainedModel._check_and_enable_flash_attn_2(cls, config, torch_dtype, device_map)
1250 """
1251 If you don't know about Flash Attention, check out the official repository of flash attention:
1252 https://github.com/Dao-AILab/flash-attention
(...)
1262 can initialize the correct attention module
1263 """
1264 if not cls._supports_flash_attn_2:
-> 1265 raise ValueError(
1266 "The current architecture does not support Flash Attention 2.0. Please open an issue on GitHub to "
1267 "request support for this architecture: https://github.com/huggingface/transformers/issues/new style="color:rgb(175,0,0)">"
1268 )
1270 if not is_flash_attn_available():
1271 raise ImportError(
1272 "Flash Attention 2.0 is not available. Please refer to the documentation of https://github.com/Dao-AILab/flash-attention for"
1273 " installing it."
1274 )

ValueError: The current architecture does not support Flash Attention 2.0. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new
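
For readability, here is the code from the failing cell, reconstructed from the traceback above (a sketch, not a verified script; the comments paraphrase the originals):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # only the 4-bit quantized weights compute in bfloat16
)

# This call raises the ValueError shown above: with trust_remote_code=True the
# remote Falcon code is used, and it does not declare Flash Attention 2 support
# to transformers.
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    device_map={"": 0},
    trust_remote_code=True,
    use_flash_attention_2=True,
)
model.config.pretraining_tp = 1

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b", trust_remote_code=True)
```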

Who can help?

SFT training using Flash Attention 2

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

SFT training using Flash Attention 2

Expected behavior

SFT training using Flash Attention 2

nirdoshrawal009 changed the title on Sep 27, 2023 to prefix it with "tiiuae/falcon-7b".
@LysandreJik
Member

cc @younesbelkada :)

@LysandreJik
Member

LysandreJik commented Sep 28, 2023

Hey @nirdoshrawal009, it seems you're using the remote code for Falcon, which indeed doesn't have Flash Attention supported through the library's flag (though the remote code seems to enable it by default).

Could you try using the Falcon implementation in transformers instead, by removing the trust_remote_code=True flag in your model instantiation?
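
A minimal sketch of that suggestion (assuming a transformers version whose in-library Falcon implementation declares Flash Attention 2 support); it mirrors the call in the traceback, just without trust_remote_code, with the quantization config omitted and a bfloat16 dtype used for brevity:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Without trust_remote_code, the library's own Falcon implementation is used,
# which is the one that works with the use_flash_attention_2 flag.
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    torch_dtype=torch.bfloat16,
    device_map={"": 0},
    use_flash_attention_2=True,
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
```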

@younesbelkada
Contributor

Hi @nirdoshrawal009
I second what Lysandre said: please make sure to remove trust_remote_code and to use the main branch of transformers:

pip install -U git+https://github.com/huggingface/transformers.git

Also make sure that your hardware is listed among the supported hardware in the official Flash Attention repository:

[Screenshot: supported hardware list from the Flash Attention repository]
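
A quick way to check both points locally (flash-attn installed, GPU recent enough); this is only a sketch, and the compute-capability check reflects the requirements listed in the Flash Attention README at the time (Ampere, Ada, or Hopper, i.e. compute capability 8.0 or higher):

```python
import importlib.util
import torch

# Is the flash-attn package importable?
print("flash-attn installed:", importlib.util.find_spec("flash_attn") is not None)

# Is the GPU recent enough for Flash Attention 2?
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")
    print("Meets the FA2 requirement (>= 8.0):", (major, minor) >= (8, 0))
else:
    print("No CUDA device visible.")
```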

@nirdoshrawal009
Author

Hi @LysandreJik, I tried removing trust_remote_code but it doesn't work.

@younesbelkada
Contributor

Hi @nirdoshrawal009
Can you try out the instructions detailed in #26443 (comment)? Make sure to use transformers from the main branch.

@knoopx

knoopx commented Sep 28, 2023

Hi @LysandreJik, I tried removing trust_remote_code but it doesn't work.

It does work, but you also need to install the flash-attn package (pip install flash-attn --no-build-isolation).
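
Putting the thread together, the working setup is: install transformers from the main branch, install flash-attn, and load the model without trust_remote_code. A small sketch to verify the installation after running the commands quoted from the comments above (flash_attn and transformers both expose a standard __version__ attribute):

```python
# Install steps, as given in the comments above:
#   pip install -U git+https://github.com/huggingface/transformers.git
#   pip install flash-attn --no-build-isolation

import flash_attn
import transformers

print("transformers:", transformers.__version__)
print("flash-attn:", flash_attn.__version__)
```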

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions bot closed this as completed on Nov 5, 2023.