
AOT (Associating Objects with Transformers for Video Object Segmentation) + cycle (Delving into the Cyclic Mechanism in Semi-supervised Video Object Segmentation) in PyTorch

Requirements

  • Python3
  • pytorch >= 1.7.0 and torchvision
  • opencv-python
  • Pillow
  • Pytorch Correlation (recommended to install from source instead of via pip; the project also works without this module, but the short-term attention becomes less efficient)

Optional:

  • scikit-image (required only for running the demo)
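
For a pip-based setup, the following commands are one way to install the requirements; the exact versions and the source repository for Pytorch Correlation are assumptions rather than choices made by this project:

  # Core requirements (pick the torch build matching your CUDA version)
  pip install "torch>=1.7.0" torchvision opencv-python Pillow

  # Optional: scikit-image, needed only for the demo
  pip install scikit-image

  # Pytorch Correlation, built from source as recommended above
  # (assumed to be the Pytorch-Correlation-extension package; skip it if
  # slower short-term attention is acceptable)
  git clone https://github.com/ClementPinard/Pytorch-Correlation-extension.git
  cd Pytorch-Correlation-extension && python setup.py install && cd ..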

Model Zoo and Results

Pre-trained models, benchmark scores, and pre-computed results reproduced by this project can be found in MODEL_ZOO.md.

Getting Started

  1. Prepare a valid environment following the requirements above.

  2. Prepare datasets:

    Please follow the instructions below to prepare each dataset in its corresponding folder; an example directory layout is sketched after the list.

    • Static

      datasets/Static: pre-training dataset with static images. Guidance can be found in AFB-URR, which we referred to in the implementation of the pre-training.

    • YouTube-VOS

      A commonly-used large-scale VOS dataset.

      datasets/YTB/2019: the 2019 version (download link). The train split is required for training; valid (6 fps) and valid_all_frames (30 fps, optional) are used for evaluation.

      datasets/YTB/2018: the 2018 version (download link). Only valid (6 fps) and valid_all_frames (30 fps, optional) are required for this project and are used for evaluation.

    • DAVIS

      A commonly-used small-scale VOS dataset.

      datasets/DAVIS: TrainVal (480p) contains both the training and validation splits. Test-Dev (480p) contains the Test-dev split. The full-resolution version is also supported for training and evaluation but is not required.
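
    For reference, a datasets/ layout consistent with the paths above looks like the sketch below; the contents of each split follow the official dataset releases.

      datasets/
      ├── Static/                  # pre-training images (prepared as in AFB-URR)
      ├── YTB/
      │   ├── 2019/
      │   │   ├── train/
      │   │   ├── valid/
      │   │   └── valid_all_frames/
      │   └── 2018/
      │       ├── valid/
      │       └── valid_all_frames/
      └── DAVIS/                   # TrainVal (480p) / Test-Dev (480p), official DAVIS structure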

  3. Download models: download the following checkpoints and put them into the folder 'pretrain_models' (example commands are given below the list):

    • https://drive.google.com/file/d/1jQ42MVhaX2oUJdYIdMeiAQXgmu3Kt5TJ/view?usp=sharing
    • https://drive.google.com/file/d/1o9JXgyDg7xBzlkAZARXSWPPVAqiHx15I/view?usp=sharing
    • https://drive.google.com/file/d/1wfuD4Q73E3N4fmiisYoLwKOUQexCxc_4/view?usp=sharing
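
    If you prefer the command line, the checkpoints can be fetched with gdown (pip install gdown); the file IDs below are taken from the links above, and each file keeps its original name on Google Drive:

      pip install gdown
      mkdir -p pretrain_models && cd pretrain_models
      gdown "https://drive.google.com/uc?id=1jQ42MVhaX2oUJdYIdMeiAQXgmu3Kt5TJ"
      gdown "https://drive.google.com/uc?id=1o9JXgyDg7xBzlkAZARXSWPPVAqiHx15I"
      gdown "https://drive.google.com/uc?id=1wfuD4Q73E3N4fmiisYoLwKOUQexCxc_4"
      cd ..

    The evaluation commands in step 4 expect pretrain_models/AOTT.pth and pretrain_models/AOTT_cycle.pth; if the downloaded files are named differently, rename them accordingly.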

  4. Evaluation

    • cycle
    python tools/eval.py --exp_name aott_cycle --model aott --dataset davis2017 --split val --gpu_num 1 --ckpt_path pretrain_models/AOTT.pth --cycle pretrain_models/AOTT_cycle.pth
    • cycle+gc
    python tools/eval.py --exp_name aott_cycle_gc --model aott --dataset davis2017 --split val --gpu_num 1 --ckpt_path pretrain_models/AOTT.pth --cycle pretrain_models/AOTT_cycle.pth --gc

TODO

  • Training cycle

Citations

Please consider citing the related paper(s) in your publications if it helps your research.

@article{yang2021aost,
  title={Associating Objects with Scalable Transformers for Video Object Segmentation},
  author={Yang, Zongxin and Miao, Jiaxu and Wang, Xiaohan and Wei, Yunchao and Yang, Yi},
  journal={arXiv preprint arXiv:2203.11442},
  year={2022}
}
@inproceedings{yang2021aot,
  title={Associating Objects with Transformers for Video Object Segmentation},
  author={Yang, Zongxin and Wei, Yunchao and Yang, Yi},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2021}
}
@article{li2022exploring,
  title={Exploring the Semi-Supervised Video Object Segmentation Problem from a Cyclic Perspective},
  author={Li, Yuxi and Xu, Ning and Yang, Wenjie and See, John and Lin, Weiyao},
  journal={International Journal of Computer Vision},
  pages={1--17},
  year={2022},
  publisher={Springer}
}

License

This project is released under the BSD-3-Clause license. See LICENSE for additional details.
