
enable_xformers_memory_efficient_attention is not supported #18

Open
JarvisFei opened this issue Aug 30, 2023 · 2 comments

Comments


JarvisFei commented Aug 30, 2023

File "fastcomposer/fastcomposer/model.py", line 571, in forward
localization_loss = get_object_localization_loss(
File "fastcomposer/model.py", line 416, in get_object_localization_loss
return loss / num_layers
ZeroDivisionError: division by zero

Is there a solution to this problem when I am using enable_xformers_memory_efficient_attention?
@Guangxuan-Xiao
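
For context, the localization loss averages over cross-attention maps collected by the model's custom attention processors. When enable_xformers_memory_efficient_attention() swaps those processors for xformers ones, most likely no maps get recorded, so num_layers is 0 and the division fails. Below is a minimal sketch of a defensive guard; only the final `loss / num_layers` line comes from the traceback, the signature and loop are assumptions:

```python
import torch

# Hypothetical guard inside get_object_localization_loss (fastcomposer/model.py).
# Only the final division is from the traceback; the rest is a sketch.
def get_object_localization_loss(cross_attention_scores, *loss_args, loss_fn=None):
    num_layers = len(cross_attention_scores)
    if num_layers == 0:
        # With xformers processors installed, no attention maps are recorded.
        # Return a zero loss instead of dividing by zero. Note this silently
        # disables the localization objective rather than making it work.
        return torch.tensor(0.0)
    loss = sum(loss_fn(attn_map, *loss_args)
               for attn_map in cross_attention_scores.values())
    return loss / num_layers
```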

JarvisFei (Author) commented

In model.py, why is the code written like this:

```python
if isinstance(module.processor, AttnProcessor2_0):
    module.set_processor(AttnProcessor())
```
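
Presumably this downgrade exists because AttnProcessor2_0 routes attention through torch.nn.functional.scaled_dot_product_attention, which never materializes the attention probabilities, while the localization loss needs those maps; the classic AttnProcessor computes them explicitly so a hook can read them. A sketch of the same downgrade applied across the whole UNet (the `unet` variable is assumed):

```python
from diffusers.models.attention_processor import AttnProcessor, AttnProcessor2_0

# Walk every attention module in the UNet and replace PyTorch-2.0 SDPA
# processors with the classic processor, which materializes attention
# probabilities that a hook can capture for the localization loss.
for module in unet.modules():
    if hasattr(module, "set_processor") and isinstance(
        getattr(module, "processor", None), AttnProcessor2_0
    ):
        module.set_processor(AttnProcessor())
```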

And how can I accelerate the training process with torch.compile() when I am using PyTorch 2.0?
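
For the torch.compile() part, a minimal sketch (not from this repo; `unet` is assumed to be the diffusers UNet being trained):

```python
import torch

# Compile the UNet's forward pass with PyTorch 2.0's default backend.
# Note: with the classic AttnProcessor in place (see above), you keep the
# attention maps but give up the fused SDPA kernel's speedup.
if hasattr(torch, "compile"):
    unet = torch.compile(unet)
```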

xilanhua12138 commented

Same problem here. Did you solve it? Can you share your solution?
