Not exactly an issue (please feel free to close it), but I wanted to get the authors' opinion on the new TransformerFAM paper and how it differs from Landmark Attention.
My original understanding was that by introducing landmark tokens at the end of blocks, we could potentially scale to infinite-length sequences.
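To make my understanding concrete, here is a rough sketch of the landmark-token insertion step as I picture it. The `BLOCK_SIZE` and `LANDMARK_ID` values are made-up placeholders, not the paper's or this repo's actual configuration:

```python
import torch

# Hypothetical values for illustration only; the real block size and
# landmark token id depend on the model / tokenizer configuration.
BLOCK_SIZE = 64
LANDMARK_ID = 32000  # assumed extra vocabulary slot for the landmark token

def insert_landmarks(input_ids: torch.Tensor) -> torch.Tensor:
    """Append a landmark token after every BLOCK_SIZE tokens.

    input_ids: 1-D tensor of token ids for a single sequence.
    Returns a new 1-D tensor with landmark tokens interleaved, so each
    block of up to BLOCK_SIZE tokens is followed by a landmark that can
    later summarize it for block retrieval via attention.
    """
    blocks = input_ids.split(BLOCK_SIZE)
    landmark = torch.tensor([LANDMARK_ID], dtype=input_ids.dtype)
    pieces = [piece for block in blocks for piece in (block, landmark)]
    return torch.cat(pieces)

# Example: a 130-token sequence yields 3 blocks, hence 3 landmarks.
ids = torch.arange(130)
print(insert_landmarks(ids).shape)  # torch.Size([133])
```

Since attention to a landmark gates attention to its whole block, the context length seems bounded only by how many landmarks can be kept around, which is why I assumed the scheme could extend indefinitely.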
But the above paper concludes, in reference to Landmark Attention: "However, in those papers, the information was not propagated infinitely."
Can the authors (or someone else) please clarify what modifications Landmark Attention would need in order to achieve what the TransformerFAM paper proposes?
Thanks.