
derivative for aten::_scaled_dot_product_efficient_attention_backward is not implemented #429

Darius888 opened this issue Jun 4, 2024 · 3 comments


Darius888 commented Jun 4, 2024

Hello,

When trying to apply the Sine Wave example's approach to a transformer-based model, I get the following error:

File "/usr/local/lib/python3.10/dist-packages/torch/autograd/graph.py", line 767, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: derivative for aten::_scaled_dot_product_efficient_attention_backward is not implemented

The setup is a regression task with multiple sequences.

Is it possible to work around this somehow?

Thank you.
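
For reference, a minimal sketch that reproduces this class of error (hypothetical shapes; it assumes a CUDA device where the memory-efficient SDPA backend gets selected, so it may not trigger on every setup):

```python
# Minimal sketch (hypothetical shapes): second-order autograd through
# the fused scaled-dot-product-attention kernel. Assumes CUDA, where
# the memory-efficient backend is typically selected for these inputs.
import torch
import torch.nn.functional as F

q = torch.randn(1, 4, 16, 8, device="cuda", requires_grad=True)
k = torch.randn(1, 4, 16, 8, device="cuda", requires_grad=True)
v = torch.randn(1, 4, 16, 8, device="cuda", requires_grad=True)

out = F.scaled_dot_product_attention(q, k, v)

# First backward with create_graph=True, which is what a second-order
# method (e.g. MAML with first_order=False) does internally.
(grad_q,) = torch.autograd.grad(out.sum(), q, create_graph=True)

# Second backward: this raises the RuntimeError above, because the
# fused kernel's backward has no registered double-backward derivative.
grad_q.sum().backward()
```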

@JingminSun

I think this happens when you set first_order = False, so the simplest fix is to set first_order = True.

If you really want second-order gradients, see pytorch/pytorch#117974.
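
If second order really is needed, a general workaround is to force the math SDPA backend, which decomposes attention into primitive ops that do support double backward, at the cost of the fused kernel's speed and memory savings. A minimal sketch, assuming PyTorch 2.3+ where torch.nn.attention.sdpa_kernel is available (older releases expose a similar torch.backends.cuda.sdp_kernel context manager):

```python
# Sketch: run attention under the math backend so double backward works.
# Assumes torch.nn.attention.sdpa_kernel (PyTorch >= 2.3) and a CUDA device.
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

q = torch.randn(1, 4, 16, 8, device="cuda", requires_grad=True)
k = torch.randn(1, 4, 16, 8, device="cuda", requires_grad=True)
v = torch.randn(1, 4, 16, 8, device="cuda", requires_grad=True)

with sdpa_kernel(SDPBackend.MATH):
    out = F.scaled_dot_product_attention(q, k, v)

# Both orders of differentiation now succeed: the math backend is a
# composition of ops that all have double-backward rules.
(grad_q,) = torch.autograd.grad(out.sum(), q, create_graph=True)
grad_q.sum().backward()
```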

@Darius888 (Author)

This was exactly it, thank you so much! @JingminSun

@renhl717445

> I think this happens when you set first_order = False, so the simplest fix is to set first_order = True.
> If you really want second-order gradients, see pytorch/pytorch#117974.

How do I make this modification specifically?
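
For anyone landing here with the same question: in MAML-style training loops, first_order is usually a flag on the meta-learning wrapper or the adaptation step. A hypothetical sketch, assuming a learn2learn-style MAML wrapper; the flag's actual location depends on the codebase in question, and build_transformer, loss_fn, and the support/query tensors are placeholders:

```python
# Hypothetical sketch, assuming a learn2learn-style MAML wrapper.
# build_transformer, loss_fn, and the data tensors are placeholders.
import learn2learn as l2l

model = build_transformer()
maml = l2l.algorithms.MAML(model, lr=0.01, first_order=True)  # was first_order=False

learner = maml.clone()
support_loss = loss_fn(learner(support_x), support_y)
learner.adapt(support_loss)                     # inner-loop update, now first order

meta_loss = loss_fn(learner(query_x), query_y)  # outer-loop loss
meta_loss.backward()                            # no second derivative through SDPA
```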
