Jiachen Li, Roberto Henschel, Vidit Goel, Marianna Ohanyan, Shant Navasardyan, Humphrey Shi
11/02/2023: Code and the arXiv paper are released.
Step 1: Clone this repo
git clone https://github.com/SHI-Labs/VIM.git
Step 2: Create conda environment
conda create --name vim python=3.9
conda activate vim
Step 3: Install PyTorch and torchvision
conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.7 -c pytorch -c nvidia
Step 4: Install dependencies
pip install -r requirements.txt
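As an optional sanity check (not part of the original instructions), you can confirm that PyTorch, torchvision, and CUDA are visible from the new environment:
python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"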
Run inference on the VIM50 benchmark with MTRCNN mask guidance:
CUDA_VISIBLE_DEVICES=0 python infer_vim_clip.py --config config/VIM.toml --checkpoint /path/to/msgvim.pth --image-dir /path/to/VIM50 --tg-mask-dir /path/to/MTRCNN/tg_masks/ --re-mask-dir /path/to/MTRCNN/re_masks/ --output outputs/MTRCNN_msgvim
Evaluate the results:
CUDA_VISIBLE_DEVICES=0 python metrics_vim.py --gt-dir /path/to/VIM50 --output-dir /path/to/outputs/MTRCNN_msgvim
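Note that the --output directory written by infer_vim_clip.py is the directory metrics_vim.py reads via --output-dir. The two steps can be chained with shared path variables (a sketch with placeholder paths, assuming the commands above):
VIM50=/path/to/VIM50
MASKS=/path/to/MTRCNN
OUT=outputs/MTRCNN_msgvim
CUDA_VISIBLE_DEVICES=0 python infer_vim_clip.py --config config/VIM.toml --checkpoint /path/to/msgvim.pth --image-dir $VIM50 --tg-mask-dir $MASKS/tg_masks/ --re-mask-dir $MASKS/re_masks/ --output $OUT
CUDA_VISIBLE_DEVICES=0 python metrics_vim.py --gt-dir $VIM50 --output-dir $OUT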
Run inference on the VIM50 benchmark with SeqFormer mask guidance:
CUDA_VISIBLE_DEVICES=0 python infer_vim_clip.py --config config/VIM.toml --checkpoint /path/to/msgvim.pth --image-dir /path/to/VIM50 --tg-mask-dir /path/to/SeqFormer/tg_masks/ --re-mask-dir /path/to/SeqFormer/re_masks/ --output outputs/SeqFormer_msgvim
Evaluate the results:
CUDA_VISIBLE_DEVICES=0 python metrics_vim.py --gt-dir /path/to/VIM50 --output-dir /path/to/outputs/SeqFormer_msgvim
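To run both mask-guidance settings back to back, a simple loop over the two detector names works (a sketch; it assumes the tg_masks/ and re_masks/ folders for MTRCNN and SeqFormer sit under a common parent directory):
for DET in MTRCNN SeqFormer; do
  CUDA_VISIBLE_DEVICES=0 python infer_vim_clip.py --config config/VIM.toml --checkpoint /path/to/msgvim.pth --image-dir /path/to/VIM50 --tg-mask-dir /path/to/$DET/tg_masks/ --re-mask-dir /path/to/$DET/re_masks/ --output outputs/${DET}_msgvim
  CUDA_VISIBLE_DEVICES=0 python metrics_vim.py --gt-dir /path/to/VIM50 --output-dir outputs/${DET}_msgvim
done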
@article{li2023vim,
title={Video Instance Matting},
author={Jiachen Li and Roberto Henschel and Vidit Goel and Marianna Ohanyan and Shant Navasardyan and Humphrey Shi},
journal={arXiv preprint},
year={2023},
}
This repo is based on MGMatting. Thanks to its authors for open-sourcing their work.