How is TransformerFAM different from Landmark Attention? #15

Open
Rock-Anderson opened this issue Apr 16, 2024 · 0 comments
Comments

@Rock-Anderson

Not exactly an issue (please feel free to close it), but I wanted to get the authors' opinion on the new TransformerFAM paper and how it differs from Landmark Attention.

My understanding was that by introducing landmark tokens at the end of blocks, we could potentially scale to infinite-length sequences.
However, the TransformerFAM paper simply states, in reference to Landmark Attention, that "in those papers, the information was not propagated infinitely".
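To make the distinction I am asking about concrete, here is a rough, illustrative sketch (my own toy code, not taken from either paper's implementation): Landmark Attention builds one summary key per block from that block alone, while TransformerFAM maintains feedback-memory tokens that attend to both the current block and their own previous state. The `attend` helper, all shapes, and the hyperparameters below are assumptions for illustration only.

```python
import numpy as np

d, block_len, n_blocks, n_fam = 16, 8, 4, 2
rng = np.random.default_rng(0)

def attend(q, kv):
    """Toy single-head attention: softmax(q kv^T / sqrt(d)) kv."""
    scores = q @ kv.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ kv

blocks = [rng.normal(size=(block_len, d)) for _ in range(n_blocks)]

# Landmark-Attention-style summaries: one landmark representation per block,
# computed from that block only, so it carries no information from earlier blocks.
landmark_keys = np.stack([blk.mean(axis=0) for blk in blocks])  # (n_blocks, d)

# TransformerFAM-style feedback memory: the FAM tokens attend to the current
# block *and* to their own previous state, so information from every earlier
# block can keep propagating through the compressed FAM state.
fam = rng.normal(size=(n_fam, d))                                # initial FAM state
for blk in blocks:
    fam = attend(fam, np.concatenate([blk, fam], axis=0))        # feedback update

print("landmark keys:", landmark_keys.shape)  # independent per-block summaries
print("FAM state:", fam.shape)                # recurrent summary of all blocks
```

If my reading is right, it is this recurrent update of the FAM state, rather than per-block, non-recurrent landmark keys, that the paper means by information being "propagated infinitely".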

Could the authors (or someone else) clarify what modifications Landmark Attention would need in order to achieve what the TransformerFAM paper proposes?
Thanks.
