
[Paper Suggestion] Mamba: Linear-Time Sequence Modeling with Selective State Spaces #24

Open
jdavisp3 opened this issue Nov 11, 2024 · 0 comments


Paper Suggestion

Content Summary

Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the
Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention,
gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address
Transformers’ computational inefficiency on long sequences, but they have not performed as well as attention on important
modalities such as language. We identify that a key weakness of such models is their inability to perform content-based
reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses
their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the
sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient
convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a
simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast
inference (5× higher throughput than Transformers) and linear scaling in sequence length, and its performance improves
on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art
performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model
outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.
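
For quick intuition before reading the paper, below is a minimal sequential sketch of the selection mechanism described above, where the SSM parameters (Δ, B, C) are computed from each input token. This is only an illustration in plain NumPy under assumed shapes and weight names (`W_delta`, `W_B`, `W_C`, `A`), with a simplified zero-order-hold discretization; it is not the paper's hardware-aware parallel scan or the full Mamba block.

```python
# Illustrative sketch of a selective SSM recurrence (sequential form).
# NOT the paper's hardware-aware implementation; weight names, shapes, and
# the simplified discretization are assumptions for illustration only.
import numpy as np

def selective_ssm_scan(x, A, W_delta, W_B, W_C):
    """Run a selective state-space recurrence over a sequence.

    x:        (L, D)  input sequence of length L with D channels
    A:        (D, N)  fixed (diagonal per channel) state matrix with N states
    W_delta:  (D, D)  projects x_t to a per-channel step size delta_t
    W_B, W_C: (D, N)  project x_t to input-dependent B_t and C_t
    Returns y with shape (L, D).
    """
    L, D = x.shape
    N = A.shape[1]
    h = np.zeros((D, N))          # hidden state per channel
    y = np.zeros((L, D))
    for t in range(L):
        xt = x[t]                                   # (D,)
        delta = np.log1p(np.exp(xt @ W_delta))      # softplus -> (D,) step sizes
        Bt = xt @ W_B                               # (N,) input-dependent B
        Ct = xt @ W_C                               # (N,) input-dependent C
        A_bar = np.exp(delta[:, None] * A)          # (D, N) discretized A
        B_bar = delta[:, None] * Bt[None, :]        # (D, N) simplified ZOH
        h = A_bar * h + B_bar * xt[:, None]         # selective recurrence
        y[t] = h @ Ct                               # (D,) readout
    return y

# Tiny usage example with random weights.
rng = np.random.default_rng(0)
L, D, N = 8, 4, 16
x = rng.standard_normal((L, D))
A = -np.exp(rng.standard_normal((D, N)))            # negative-real state matrix
y = selective_ssm_scan(x, A,
                       rng.standard_normal((D, D)) * 0.1,
                       rng.standard_normal((D, N)) * 0.1,
                       rng.standard_normal((D, N)) * 0.1)
print(y.shape)  # (8, 4)
```

Because A_bar and B_bar depend on the current token, the state can be propagated or reset per position, which is the content-based selectivity the abstract contrasts with time-invariant SSMs; the price is that the recurrence can no longer be computed as a global convolution, motivating the paper's hardware-aware parallel scan.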
