
Exploring Long-Sequence Masked Autoencoders

This is the code release of the paper Exploring Long-Sequence Masked Autoencoders:

@Article{hu2022exploring,
  author  = {Ronghang Hu and Shoubhik Debnath and Saining Xie and Xinlei Chen},
  journal = {arXiv:2210.07224},
  title   = {Exploring Long-Sequence Masked Autoencoders},
  year    = {2022},
}
- This repo is a modification of the MAE repo and supports long-sequence pre-training on both GPUs and TPUs using PyTorch.

- This repo is based on timm==0.4.12, which can be installed via `pip3 install timm==0.4.12` (see the quick check below).
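
As a quick sanity check after installation (a minimal sketch, not part of this repo), you can confirm that the pinned timm version is the one visible in your environment:

```python
import timm

# The codebase is pinned to timm==0.4.12; other versions are not guaranteed
# to work with the model definitions inherited from the original MAE repo.
assert timm.__version__ == "0.4.12", (
    f"expected timm 0.4.12, got {timm.__version__}"
)
```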

Fine-tuning with pre-trained checkpoints

The following table provides the pre-trained checkpoints used in the paper:

| Model (pre-trained with L=784, image size 448, patch size 16) | ViT-Base | ViT-Large |
| --- | --- | --- |
| COCO (train2017 + unlabeled2017), 4000 epochs | download | download |
| ImageNet-1k, 800 epochs | download | download |
| ImageNet-1k, 1600 epochs | download | download |
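
Before fine-tuning, it can be useful to inspect a downloaded checkpoint. The sketch below is illustrative only: the filename is hypothetical, and storing the encoder weights under a "model" key follows the convention of the original MAE releases, which is an assumption about these checkpoints.

```python
import torch

# Load a downloaded checkpoint on CPU (hypothetical filename).
ckpt = torch.load("vitb_coco_4000ep.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt)  # assumes MAE-style {"model": ...} layout

# With image size 448 and patch size 16, the encoder operates on
# (448 // 16) ** 2 = 784 patch tokens, i.e. the L=784 in the table above.
print(len(state_dict), "tensors, for example:")
for name, tensor in list(state_dict.items())[:5]:
    print(f"  {name}: {tuple(tensor.shape)}")
```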

Using the codebase

This codebase remains compatible with the original MAE repo. Follow README_MAE.md to use its original features (such as fine-tuning on image classification).

License

This project is under the CC-BY-NC 4.0 license. See LICENSE for details.
