enable_xformers_memory_efficient_attention is not supported #18
Comments
In the model.py file, why is the code written like this:
How can I accelerate the training process with torch.compile() when I am using PyTorch 2.0?
Same problem here. Did you solve it? If so, can you share the fix?
File "fastcomposer/fastcomposer/model.py", line 571, in forward
    localization_loss = get_object_localization_loss(
File "fastcomposer/model.py", line 416, in get_object_localization_loss
    return loss / num_layers
ZeroDivisionError: division by zero
May I ask if there is a solution to this problem when I am using enable_xformers_memory_efficient_attention?
@Guangxuan-Xiao
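The division by zero suggests that no cross-attention layers were collected before averaging, which is consistent with xFormers' memory-efficient attention bypassing the custom attention processors that record the maps the localization loss needs. A minimal sketch of the failing pattern and a defensive guard follows; `average_localization_loss` is a hypothetical, simplified stand-in for the averaging step in `get_object_localization_loss`, which in the real code operates on attention maps rather than precomputed per-layer losses.

```python
def average_localization_loss(per_layer_losses):
    """Average the localization loss over the attention layers collected.

    Hypothetical simplification of the `loss / num_layers` step in
    model.py. If attention maps are never recorded (e.g. because
    xFormers replaced the attention processors), the list is empty and
    the unguarded division raises ZeroDivisionError.
    """
    num_layers = len(per_layer_losses)
    if num_layers == 0:
        # No attention maps were captured: skip the localization term
        # rather than crash. Note this silently disables the loss, so
        # disabling xFormers is likely the more faithful fix.
        return 0.0
    return sum(per_layer_losses) / num_layers

print(average_localization_loss([]))          # 0.0
print(average_localization_loss([1.0, 3.0]))  # 2.0
```

Guarding this way only avoids the crash; training with the localization loss effectively off may change results, so the safer workaround is to not call enable_xformers_memory_efficient_attention during training.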