Is there a way to implement relative positional encodings with Hyena similar to what was done in the Transformer-XL paper? Any tips on how to implement that?
Are you interested in trying relative encodings for the implicit long convolution filter (HyenaFilter), or a more traditional implementation of encodings that would work at the HyenaOperator level? In our experience, the latter does not appear to affect performance much: unlike attention, Hyena is not permutation equivariant, so the operator is already sensitive to token position without explicit encodings.
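One way to see the filter-level option: because the long convolution is translation-covariant, the implicit filter is already a function of the relative offset `t = i - j`, so conditioning it on relative-position features is the natural analogue of Transformer-XL's relative encodings. Below is a minimal sketch of such a filter, loosely modeled on HyenaFilter; the class name, `emb_dim`, `hidden`, and the specific sinusoidal feature construction are illustrative assumptions, not the repository's exact API or defaults.

```python
import math
import torch
import torch.nn as nn

class RelativeImplicitFilter(nn.Module):
    """Sketch: an implicit long-convolution filter h(t) parameterized by
    sinusoidal features of the relative offset t, in the spirit of
    HyenaFilter. Layer sizes here are illustrative, not the repo's."""

    def __init__(self, d_model: int, emb_dim: int = 33,
                 hidden: int = 64, seq_len: int = 1024):
        super().__init__()
        # Sinusoidal features of relative offsets t = 0 .. seq_len - 1.
        t = torch.arange(seq_len, dtype=torch.float32)[:, None]  # (L, 1)
        bands = (emb_dim - 1) // 2
        freqs = torch.linspace(1e-4, 1.0, bands)[None, :]        # (1, bands)
        z = torch.cat([t / seq_len,
                       torch.sin(2 * math.pi * freqs * t / seq_len),
                       torch.cos(2 * math.pi * freqs * t / seq_len)],
                      dim=-1)
        self.register_buffer("z", z)                             # (L, emb_dim)
        # Small FFN maps relative-position features to filter values.
        self.ffn = nn.Sequential(
            nn.Linear(emb_dim, hidden), nn.GELU(),
            nn.Linear(hidden, d_model),
        )

    def forward(self, l: int) -> torch.Tensor:
        # Returns a filter of length l, one channel per model dim: (l, d_model).
        return self.ffn(self.z[:l])
```

Framed this way, a separate Transformer-XL style relative bias is largely redundant: the convolution kernel itself already plays the role of a learned function of `i - j`.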
We received requests for a version with KERPLE positional embeddings, so that might be something to consider.
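If you want to experiment in that direction, note that KERPLE's log-variant bias, `-r1 * log(1 + r2 * |i - j|)` on attention logits, has no direct home in Hyena since there is no attention matrix. One hedged adaptation (my own, not something the repository or the KERPLE paper prescribes) is to exponentiate the bias and use it as a learnable multiplicative decay on the implicit filter, i.e. `h(t) <- h(t) / (1 + r2 * t)^r1`:

```python
import torch
import torch.nn as nn

class KerpleDecay(nn.Module):
    """Sketch: KERPLE log-variant bias repurposed as a per-channel
    multiplicative decay on an implicit convolution filter. In KERPLE the
    attention-logit bias is -r1 * log(1 + r2 * |i - j|); here we
    exponentiate it, giving h(t) <- h(t) * (1 + r2 * t)^(-r1).
    This is an adaptation, not the paper's original setting."""

    def __init__(self, d_model: int):
        super().__init__()
        # One (r1, r2) pair per channel; softplus keeps them positive.
        self.r1 = nn.Parameter(torch.zeros(d_model))
        self.r2 = nn.Parameter(torch.zeros(d_model))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (l, d_model) filter values at relative offsets t = 0 .. l - 1.
        l, _ = h.shape
        t = torch.arange(l, dtype=h.dtype, device=h.device)[:, None]  # (l, 1)
        r1 = nn.functional.softplus(self.r1)[None, :]                 # (1, d)
        r2 = nn.functional.softplus(self.r2)[None, :]
        decay = torch.exp(-r1 * torch.log1p(r2 * t))                  # (l, d)
        return h * decay
```

With a filter module like the sketch above, you would apply the decay before the FFT convolution, e.g. `kernel = decay(filt(l))`.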