varlen attention tutorial #3660
base: main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/3660. Note: links to docs will display an error until the docs builds have completed. ⏳ No failures, 30 pending as of commit 935d96d with merge base 00d8b24. (This comment was automatically generated by Dr. CI and updates every 15 minutes.)
@svekars any tips for reviewing this code in a more human-friendly way?
svekars left a comment:
Can you please submit this as a .py? All our tutorials are either .py or .rst, and .py tutorials are converted into HTML and .ipynb on build. You can follow this template: https://github.com/pytorch/tutorials/blob/main/beginner_source/template_tutorial.py
There is more info here: https://github.com/pytorch/tutorials#contributing, and there is a script that will help you convert your .ipynb to .py.
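The converter script itself isn't named in the thread. Purely as an illustration of what such a conversion does (this is a hypothetical minimal converter, not the repo's script), a notebook's JSON cells can be mapped to sphinx-gallery-style blocks like this: markdown cells become `#`-prefixed reST comment blocks, code cells are emitted verbatim.

```python
import json

def ipynb_to_py(nb_json: str) -> str:
    """Rough .ipynb -> .py sketch: markdown cells become sphinx-gallery
    comment blocks, code cells are copied through unchanged."""
    nb = json.loads(nb_json)
    out = []
    for cell in nb["cells"]:
        src = "".join(cell["source"])
        if cell["cell_type"] == "markdown":
            # Every line of the block, including blank ones, must start with '#'.
            block = "\n".join("# " + line if line else "#"
                              for line in src.splitlines())
            out.append("#" * 70 + "\n" + block)
        elif cell["cell_type"] == "code":
            out.append(src)
    return "\n\n".join(out) + "\n"

# Tiny demo notebook with one markdown cell and one code cell.
demo = json.dumps({"cells": [
    {"cell_type": "markdown", "source": ["Overview\n", "\n", "Some text."]},
    {"cell_type": "code", "source": ["print(1 + 1)"]},
]})
converted = ipynb_to_py(demo)
print(converted)
```

The real script also has to handle raw cells, magics, and metadata; prefer it over this sketch for actual contributions.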
svekars left a comment:
Looking good overall. You need to add it to compilers_index.rst, and I added some suggestions that would help make it more discoverable on the website/Google. You can also add it to the "What's new in the PyTorch tutorials" section of index.rst.
There is also an error here: https://github.com/pytorch/tutorials/actions/runs/19864783846/job/56924391840?pr=3660#step:9:4582 - let me check how we should test against 2.10.
@AlannaBurke, can you please review this 2.10 tutorial?
Quoted lines from the tutorial source:

    # return_aux: AuxRequest | None = None,
    # ) -> torch.Tensor | tuple[torch.Tensor, torch.Tensor]:

    # ``query``, ``key``, and ``value`` correspond to the ``q``, ``k``, and
Same as above, need to add a # at the beginning of the empty line: https://docs-preview.pytorch.org/pytorch/tutorials-nightly-preview/3660/intermediate/variable_length_attention_tutorial.html#definition
Quoted lines from the tutorial source:

    # is_causal: bool = False,
    # return_aux: AuxRequest | None = None,
    # ) -> torch.Tensor | tuple[torch.Tensor, torch.Tensor]:
Suggested line:

    #
Quoted lines from the tutorial source:

    # and ``max_k`` are the maximum sequence lengths of query and key,
    # respectively. ``is_causal`` applies causal masking if set to True and
    # ``return_aux`` specifies which auxiliary outputs to return (e.g. ``lse``).
Suggested line:

    #
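Piecing together the fragments quoted in this review, the API under discussion appears to have roughly the shape below. This is a sketch reconstructed from the review excerpts only: the function name, the `cu_seq_q`/`cu_seq_k` parameter names, and the defaults for `max_q`/`max_k` are assumptions, not the final PyTorch API.

```python
from typing import Optional

class AuxRequest:
    """Placeholder for the auxiliary-output request type quoted in the review."""
    def __init__(self, lse: bool = False):
        self.lse = lse  # whether to also return the log-sum-exp values

def varlen_attn_sketch(
    query,                      # packed query tensor (batch dim collapsed)
    key,                        # packed key tensor
    value,                      # packed value tensor
    cu_seq_q=None,              # cumulative sequence offsets for query (assumed name)
    cu_seq_k=None,              # cumulative sequence offsets for key (assumed name)
    max_q: int = 0,             # maximum query sequence length
    max_k: int = 0,             # maximum key sequence length
    is_causal: bool = False,    # apply causal masking if True
    return_aux: Optional[AuxRequest] = None,  # which aux outputs to return (e.g. lse)
):
    """Illustrative signature only; raises rather than computing attention."""
    raise NotImplementedError("signature sketch, not an implementation")
```

The return type quoted in the review, `torch.Tensor | tuple[torch.Tensor, torch.Tensor]`, reflects that the extra tensor (e.g. `lse`) is only returned when requested via `return_aux`.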
Quoted lines from the tutorial source:

    # Variable length attention handles sequences of varying length by
    # **packing** the tensors in a batch together and essentially collapsing
    # the batch dimension.
Yes, the "Variable length attention" part got fixed, but the paragraph below still shows as a code block: https://docs-preview.pytorch.org/pytorch/tutorials-nightly-preview/3660/intermediate/variable_length_attention_tutorial.html#overview-of-variable-length-attention
You need to make sure every empty line starts with a #. If you add a new reST block, the new block needs to start with a line of ###### and then your text. The figure below is not showing either. Here is more about sphinx-gallery syntax: https://sphinx-gallery.github.io/stable/syntax.html#embed-rest-in-your-example-python-files
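For reference, a minimal sphinx-gallery `.py` tutorial skeleton looks like the sketch below (title and section names are made up). The key points are the ones raised in this review: every line of a reST block, including blank ones, starts with `#`, and a long run of `#` characters opens a new block.

```python
"""
My Tutorial Title
=================

The module docstring becomes the rendered page header.
"""

######################################################################
# Overview
# --------
#
# Text lives in reST comment blocks. Every line, including "blank"
# ones, must start with ``#``; a truly empty line ends the block, and
# whatever follows is rendered as a literal code block instead.

# Ordinary comments attached to code stay with the code when rendered.
total = sum(range(5))
print(total)

######################################################################
# A line of ``#`` characters starts the next reST block.
```

When built, the comment blocks render as prose and the code between them executes, with its output captured into the page.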
This PR is a tutorial for a variable length attention API.