Phi3Transformer does not support an attention implementation #12

Open
mixoadrian opened this issue Nov 3, 2024 · 3 comments

Comments

@mixoadrian

I do not know how to make sense of this error, so I am pasting it here:

Fetching 10 files: 100%|███████████████████████████████████████████████████████████████| 10/10 [04:55<00:00, 29.52s/it]
!!! Exception during processing !!! Phi3Transformer does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: huggingface/transformers#28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument attn_implementation="eager" meanwhile. Example: model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")
Traceback (most recent call last):
File "E:\comfy\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfy\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfy\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in map_node_over_list
process_inputs(input_dict, i)
File "E:\comfy\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\OmniGen-ComfyUI_init
.py", line 116, in gen
pipe = OmniGenPipeline.from_pretrained(omnigen_dir)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\OmniGen-ComfyUI\OmniGen\pipeline.py", line 82, in from_pretrained
model = OmniGen.from_pretrained(model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\OmniGen-ComfyUI\OmniGen\model.py", line 197, in from_pretrained
model = cls(config)
^^^^^^^^^^^
File "E:\comfy\ComfyUI_windows_portable\ComfyUI\custom_nodes\OmniGen-ComfyUI\OmniGen\model.py", line 186, in init
self.llm = Phi3Transformer(config=transformer_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfy\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\phi3\modeling_phi3.py", line 948, in init
super().init(config)
File "E:\comfy\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 1388, in init
config = self._autoset_attn_implementation(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfy\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 1565, in _autoset_attn_implementation
config = cls._check_and_enable_sdpa(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfy\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\modeling_utils.py", line 1731, in _check_and_enable_sdpa
raise ValueError(
ValueError: Phi3Transformer does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: huggingface/transformers#28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument attn_implementation="eager" meanwhile. Example: model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")

Prompt executed in 298.22 seconds
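
For context: the failure happens while the OmniGen pipeline is being constructed, inside transformers' `_check_and_enable_sdpa`. The `Phi3Transformer` subclass that OmniGen builds does not declare SDPA support, so a config requesting the "sdpa" attention backend aborts model construction. The error text itself shows the generic workaround, which is requesting the "eager" backend at load time. A minimal sketch of that call, using the example model id from the error message rather than OmniGen's own loading path:

```python
# Minimal sketch of the workaround named in the error message: force the
# "eager" attention backend so the SDPA support check never triggers.
# "openai/whisper-tiny" is just the example model id from the error text.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "openai/whisper-tiny",
    attn_implementation="eager",
)
```

In this trace, however, OmniGen builds `Phi3Transformer` directly from its local config rather than passing keyword arguments through `from_pretrained`, which is why the fix suggested below edits config.json instead.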

@thephimart

thephimart commented Nov 4, 2024

Go to \ComfyUI_windows_portable\ComfyUI\models\AIFSH\Shitao\OmniGen-v1
Open config.json
Scroll to the very bottom
Change this:
"_attn_implementation": "sdpa"
to this:
"_attn_implementation": "eager"

Worked for me, hope it helps.
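
If you prefer to script that edit rather than change the file by hand, here is a rough sketch. The path is assembled from this thread (drive letter from the traceback, model folder from the comment above), so adjust it to your own install:

```python
# Sketch: switch OmniGen's bundled config from the "sdpa" attention backend
# to "eager". The path below is an example based on this thread; point it at
# the OmniGen-v1 folder inside your own ComfyUI models directory.
import json
from pathlib import Path

config_path = Path(
    r"E:\comfy\ComfyUI_windows_portable\ComfyUI\models\AIFSH\Shitao\OmniGen-v1\config.json"
)

config = json.loads(config_path.read_text(encoding="utf-8"))
config["_attn_implementation"] = "eager"  # was "sdpa"
# Note: this rewrites the whole file with 2-space indentation.
config_path.write_text(json.dumps(config, indent=2), encoding="utf-8")
print("Patched", config_path)
```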

@emimix

emimix commented Nov 4, 2024

Go to \ComfyUI_windows_portable\ComfyUI\models\AIFSH\Shitao\OmniGen-v1 Open config.json Scroll to the very bottom change this: "_attn_implementation": "sdpa" to this: "_attn_implementation": "eager"

worked for me hope it helps

That fixed it for me...thank you!

@xgs87762GH

Go to \ComfyUI_windows_portable\ComfyUI\models\AIFSH\Shitao\OmniGen-v1 Open config.json Scroll to the very bottom change this: "_attn_implementation": "sdpa" to this: "_attn_implementation": "eager"

worked for me hope it helps

Solved it, thanks!
