- The CUDA kernel has been updated and must be recompiled.
- A few parameters inside the S4(D) kernels have had their names changed
To address differences between models trained on earlier versions and the current V4:
- The CUDA kernel should be re-compiled if moving between versions of this codebase.
- The script `checkpoints/port_v3_to_v4.py` can be used to convert models (see below).
- S4ND
- Recent new models based on or closely related to S4, such as GSS and Mega
- Other long convolution kernels, such as a simple "wide kernel CNN" baseline (`model.layer.mode=conv`); a minimal sketch of such a long-convolution layer is given below.
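For intuition about what this baseline computes, here is a minimal sketch of an FFT-based long convolution with an explicitly parameterized kernel. The function name and conventions are illustrative only, not the repository's actual module.

```python
import torch

def long_conv(u, k):
    """Causal convolution of input u with an explicit kernel k of the same length L,
    computed in O(L log L) via the FFT. In a "wide kernel CNN" baseline, k is a
    directly learned parameter instead of being generated by an SSM."""
    L = u.shape[-1]
    u_f = torch.fft.rfft(u, n=2 * L)   # zero-pad to length 2L to avoid circular wrap-around
    k_f = torch.fft.rfft(k, n=2 * L)
    y = torch.fft.irfft(u_f * k_f, n=2 * L)
    return y[..., :L]                  # keep the causal (linear-convolution) part
```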
- Information about specific papers and models (e.g. model description, overview of code, documentation of experiments) has been moved into the `models/` folder.
- The standalone S4 module has been moved from `src/models/s4/` to `models/s4/`.
- The general sequence modeling framework under `src/models/sequence/` has been reorganized. The old state space modules `src/models/sequence/ss/` have been removed; the S4 module has been broken into a generic convolution block in `src/models/sequence/modules/` and the inner linear SSM kernel has moved to `src/models/sequence/kernels/`.
- More experiments have been added to `configs/experiments/`, with improved structuring.
- The Cauchy CUDA kernel has been updated and must be recompiled.
- There is now a CUDA kernel for the Vandermonde operation of S4D, speeding it up over the naive and `pykeops` versions (a sketch of the underlying operation is given below). S4D should now be faster than S4 in all versions (naive, pykeops, or CUDA kernel).
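For reference, the Vandermonde operation in question is the contraction sketched below, shown here under a ZOH discretization with illustrative parameter names; it is not the repository's exact kernel interface.

```python
import torch

def s4d_kernel_naive(A, B, C, dt, L):
    """Naive version of the Vandermonde contraction behind the S4D convolution kernel.
    A, B, C: (N,) complex parameters of a diagonal SSM; dt: step size.
    Returns a real convolution kernel of length L."""
    dA = torch.exp(dt * A)                            # discretized diagonal state matrix
    dB = (dA - 1) / A * B                             # ZOH-discretized input projection
    V = torch.exp(dt * A[:, None] * torch.arange(L))  # Vandermonde matrix V[n, l] = dA[n] ** l
    K = torch.einsum('n,nl->l', C * dB, V)            # contract over the N state dimensions
    return 2 * K.real                                 # 2x real part accounts for conjugate pairs
```

Roughly speaking, the pykeops and CUDA versions avoid materializing the full (N, L) Vandermonde matrix that the naive version builds explicitly.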
- The `/checkpoints/` folder can be used to store checkpoints and contains several scripts for working with them. See `/checkpoints/README.md` for detailed usage.
  - `/checkpoints/evaluate.py` takes a trained model and prints metrics on evaluation datasets.
  - `/checkpoints/port_v3_to_v4.py` converts a model from V3 to V4 code.
- `model.layer.measure` has been renamed to `model.layer.init`. The name `measure` originally referred to approximation measures in the HiPPO theory, but they are only used as initializations in trainable SSM models. There are also many more initializations not based on the HiPPO theory, such as the simple S4D-Lin initialization used by the minimal S4D standalone (sketched below).
- TODO: document some of the new features
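As a concrete example of an initialization that is not derived from a HiPPO measure, here is a sketch of the S4D-Lin initialization from the S4D paper. The helper name and the conjugate-pair convention are illustrative; the repository's own initializer may differ in details.

```python
import math
import torch

def s4d_lin_init(N):
    # S4D-Lin: A_n = -1/2 + i*pi*n for n = 0, ..., N/2 - 1.
    # Only half of the eigenvalues are materialized; the other half are their complex conjugates.
    n = torch.arange(N // 2)
    return -0.5 + 1j * math.pi * n
```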
- Updated version of S4 module, including new measures and theory from [How to Train Your HiPPO] (#21, #54)
- Complete version of S4D module from [On the Parameterization and Initialization of Diagonal State Space Models]
- State forwarding (#49, #56)
- Support for S4 variants including DSS and GSS (documentation)
- PyTorch 1.11 had a Dropout bug which is now avoided with a custom Dropout implementation (#42, #22)
- Conjugated tensors API change in PyTorch 1.10 (#35)
- Release of SaShiMi+DiffWave model (#46). Can be found at `albertfgu/diffwave-sashimi`
- Improved generation script for any models trained using this repository (#38)
- Re-trained SaShiMi models with the latest version of S4 (#37, #32)
- New WikiText-103 checkpoint with generation functionality (#5, #19)
- Release of new notebook (and equivalent .py file) visualizing HiPPO function reconstruction. Includes animations used in HTTYH, the Annotated S4D, and various S4 talks.
- Improved configs for Long Range Arena reported in HTTYH and S4D papers
- New datasets and ablation experiments from the S4D paper
Note that there have been various refactors and miscellaneous changes which may affect results slightly, but results should be close and general trends should hold. Feel free to file an issue for any results which do not match the papers.
- Reorganized the README and added much more documentation for using this codebase
- Minor updates to S4 modules
- By default, S4 no longer requires installing Pykeops or a custom CUDA kernel.
- New S4D (S4-diagonal) standalone model found at `src/models/sequence/ss/standalone/s4d.py`. Simple variant using diagonal SSMs that recovers S4's performance on most tasks. Can be run with any existing experiment config with the additional flag `model/layer=s4d` on the command line.
- New LRA configs for updated S4 code, with an average score of ~86
Code release for SaShiMi audio model
Added configs for time series datasets from the Informer paper (#4)
First release of this repository containing the S4 module and configs to reproduce sCIFAR, Speech Commands, Long Range Arena, and WikiText-103 results