
DSNet: A Flexible Detect-to-Summarize Network for Video Summarization [paper]


[Figure: overview of the DSNet framework]

A PyTorch implementation of our paper DSNet: A Flexible Detect-to-Summarize Network for Video Summarization by Wencheng Zhu, Jiwen Lu, Jiahao Li, and Jie Zhou, published in IEEE Transactions on Image Processing.

Getting Started

This project is developed on Ubuntu 16.04 with CUDA 9.0.176.

First, clone this project to your local environment.

git clone https://github.com/li-plus/DSNet.git

Create a virtual environment with Python 3.6, preferably using Anaconda.

conda create --name dsnet python=3.6
conda activate dsnet

Install the Python dependencies.

pip install -r requirements.txt

Datasets Preparation

Download the pre-processed datasets into the datasets/ folder, including the TVSum, SumMe, OVP, and YouTube datasets.

mkdir -p datasets/ && cd datasets/
wget https://www.dropbox.com/s/tdknvkpz1jp6iuz/dsnet_datasets.zip
unzip dsnet_datasets.zip

If the Dropbox link is unavailable to you, try downloading from the mirror links below.

The dataset structure should now look like:

DSNet
└── datasets/
    ├── eccv16_dataset_ovp_google_pool5.h5
    ├── eccv16_dataset_summe_google_pool5.h5
    ├── eccv16_dataset_tvsum_google_pool5.h5
    ├── eccv16_dataset_youtube_google_pool5.h5
    └── readme.txt
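
To sanity-check the download, you can open one of the h5 files with h5py, as in the sketch below. The key names (features, gtscore, user_summary, ...) are assumptions based on the common eccv16 pre-processed format; datasets/readme.txt is the authoritative reference.

import h5py

# Minimal sketch: list a few videos and the arrays stored for each.
# Key names are assumptions; see datasets/readme.txt for the exact schema.
with h5py.File('datasets/eccv16_dataset_tvsum_google_pool5.h5', 'r') as f:
    for name in list(f.keys())[:3]:
        video = f[name]
        print(name, {key: video[key].shape for key in video.keys()})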

Pre-trained Models

Our pre-trained models are available online. You may download them for evaluation, or skip this section and train new models from scratch.

mkdir -p models && cd models
# anchor-based model
wget https://www.dropbox.com/s/0jwn4c1ccjjysrz/pretrain_ab_basic.zip
unzip pretrain_ab_basic.zip
# anchor-free model
wget https://www.dropbox.com/s/2hjngmb0f97nxj0/pretrain_af_basic.zip
unzip pretrain_af_basic.zip

To evaluate our pre-trained models, run the following (the relative paths assume your working directory is the project's source directory):

# evaluate anchor-based model
python evaluate.py anchor-based --model-dir ../models/pretrain_ab_basic/ --splits ../splits/tvsum.yml ../splits/summe.yml
# evaluate anchor-free model
python evaluate.py anchor-free --model-dir ../models/pretrain_af_basic/ --splits ../splits/tvsum.yml ../splits/summe.yml --nms-thresh 0.4

If everything works, you should get F-score results similar to the following.

Model          TVSum    SumMe
Anchor-based   62.05    50.19
Anchor-free    61.86    51.18
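
For reference, the F-score compares the predicted binary summary with each annotator's user summary: precision is the overlap divided by the predicted summary length, recall is the overlap divided by the annotation length, and F = 2PR / (P + R). Below is a minimal sketch of this metric (the function name and averaging are illustrative; evaluate.py is the reference implementation).

import numpy as np

def f_score(pred, user_summary):
    # pred: (N,) binary machine summary; user_summary: (U, N) binary matrix.
    scores = []
    for gt in user_summary:
        overlap = float(np.logical_and(pred, gt).sum())
        precision = overlap / (pred.sum() + 1e-8)
        recall = overlap / (gt.sum() + 1e-8)
        scores.append(2 * precision * recall / (precision + recall + 1e-8))
    # By convention, TVSum averages over annotators while SumMe takes the max.
    return np.mean(scores), np.max(scores)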

Training

Anchor-based

To train the anchor-based attention model on the TVSum and SumMe datasets with canonical settings, run:

python train.py anchor-based --model-dir ../models/ab_basic --splits ../splits/tvsum.yml ../splits/summe.yml

To train on the augmented and transfer datasets, run:

python train.py anchor-based --model-dir ../models/ab_tvsum_aug/ --splits ../splits/tvsum_aug.yml
python train.py anchor-based --model-dir ../models/ab_summe_aug/ --splits ../splits/summe_aug.yml
python train.py anchor-based --model-dir ../models/ab_tvsum_trans/ --splits ../splits/tvsum_trans.yml
python train.py anchor-based --model-dir ../models/ab_summe_trans/ --splits ../splits/summe_trans.yml

To train with an LSTM, Bi-LSTM, or GCN feature extractor, set the --base-model argument to lstm, bilstm, or gcn. For example:

python train.py anchor-based --model-dir ../models/ab_basic --splits ../splits/tvsum.yml ../splits/summe.yml --base-model lstm

Anchor-free

Similar to the anchor-based models, to train on the canonical TVSum and SumMe splits, run:

python train.py anchor-free --model-dir ../models/af_basic --splits ../splits/tvsum.yml ../splits/summe.yml --nms-thresh 0.4

Note that the NMS threshold is set to 0.4 for anchor-free models.
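
For intuition, the anchor-free head predicts overlapping temporal segments with confidence scores, and NMS keeps the highest-scoring segments while discarding any remaining segment whose temporal IoU with a kept one exceeds the threshold. A minimal sketch, with illustrative names rather than the repo's API:

import numpy as np

def temporal_nms(segments, scores, thresh=0.4):
    # segments: (K, 2) array of [start, end]; scores: (K,). Returns kept indices.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # temporal IoU between segment i and all remaining segments
        start = np.maximum(segments[i, 0], segments[order[1:], 0])
        end = np.minimum(segments[i, 1], segments[order[1:], 1])
        inter = np.maximum(0.0, end - start)
        union = (segments[i, 1] - segments[i, 0]) \
              + (segments[order[1:], 1] - segments[order[1:], 0]) - inter
        order = order[1:][inter / (union + 1e-8) <= thresh]
    return keep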

Evaluation

To evaluate your anchor-based models, run

python evaluate.py anchor-based --model-dir ../models/ab_basic/ --splits ../splits/tvsum.yml ../splits/summe.yml

For anchor-free models, remember to set the NMS threshold to 0.4:

python evaluate.py anchor-free --model-dir ../models/af_basic/ --splits ../splits/tvsum.yml ../splits/summe.yml --nms-thresh 0.4

Generating Shots with KTS

Based on the public datasets provided by DR-DSN, we apply the KTS algorithm to generate video shots for the OVP and YouTube datasets. Note that the pre-processed datasets already contain these shots. To re-generate them, run:

python make_shots.py --dataset ../datasets/eccv16_dataset_ovp_google_pool5.h5
python make_shots.py --dataset ../datasets/eccv16_dataset_youtube_google_pool5.h5
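
KTS partitions each video into shots, stored as [start, end] frame ranges (change points). Downstream, per-frame importance scores are typically pooled into per-shot scores over these ranges. A hedged sketch with a hypothetical helper name, assuming the inclusive-end convention of the eccv16 datasets:

import numpy as np

def shot_level_scores(frame_scores, change_points):
    # frame_scores: (N,) per-frame importance; change_points: (S, 2) with
    # inclusive [start, end] frame indices per shot.
    return np.array([frame_scores[start:end + 1].mean()
                     for start, end in change_points])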

Using Custom Videos

Training & Validation

We provide scripts to pre-process custom video data, such as the raw videos in the custom_data folder.

First, create an h5 dataset. Here --video-dir contains several MP4 videos, and --label-dir contains the ground-truth user summary for each video. The user summary of a video is a U x N binary matrix, where U is the number of annotators and N is the number of frames in the original video.

python make_dataset.py --video-dir ../custom_data/videos --label-dir ../custom_data/labels \
  --save-path ../custom_data/custom_dataset.h5 --sample-rate 15
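
For reference, a U x N user-summary matrix can be assembled with numpy as below. This only sketches the data shape; the annotator count, frame ranges, and on-disk label format are hypothetical, so consult make_dataset.py for what --label-dir actually expects.

import numpy as np

U, N = 5, 3000  # hypothetical: 5 annotators, a 3000-frame video
user_summary = np.zeros((U, N), dtype=np.int32)
user_summary[0, 120:300] = 1    # annotator 0 selects frames 120-299
user_summary[1, 2400:2550] = 1  # annotator 1 selects frames 2400-2549
# ... one row per annotator; save in the format expected by make_dataset.py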

Then split the dataset into training and validation sets and generate a split file to index them.

python make_split.py --dataset ../custom_data/custom_dataset.h5 \
  --train-ratio 0.67 --save-path ../custom_data/custom.yml
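
A split file records which videos belong to the training and validation sets of each fold. The sketch below generates one random split with PyYAML; the train_keys/test_keys layout is an assumption modeled on common split files, so inspect the actual output of make_split.py for the exact schema.

import random
import h5py
import yaml

with h5py.File('../custom_data/custom_dataset.h5', 'r') as f:
    keys = list(f.keys())  # hypothetical: one key per video

random.shuffle(keys)
n_train = int(0.67 * len(keys))
splits = [{'train_keys': keys[:n_train], 'test_keys': keys[n_train:]}]

with open('../custom_data/custom.yml', 'w') as f:
    yaml.safe_dump(splits, f)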

Now you may train on your custom videos using the split file.

python train.py anchor-based --model-dir ../models/custom --splits ../custom_data/custom.yml
python evaluate.py anchor-based --model-dir ../models/custom --splits ../custom_data/custom.yml

Inference

To predict the summary of a raw video, use infer.py. For example, run

python infer.py anchor-based --ckpt-path ../models/custom/checkpoint/custom.yml.0.pt \
  --source ../custom_data/videos/EE-bNr36nyA.mp4 --save-path ./output.mp4
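
Internally, producing a summary from model outputs usually amounts to scoring shots and selecting a subset whose total length stays within a budget (commonly about 15% of the video), i.e. a 0/1 knapsack problem. A hedged sketch of that selection step, with illustrative names:

import numpy as np

def select_shots(shot_scores, shot_lengths, budget_ratio=0.15):
    # Dynamic-programming 0/1 knapsack over a frame budget.
    budget = int(budget_ratio * sum(shot_lengths))
    n = len(shot_scores)
    dp = np.zeros((n + 1, budget + 1))
    for i in range(1, n + 1):
        w, v = shot_lengths[i - 1], shot_scores[i - 1]
        for c in range(budget + 1):
            dp[i, c] = dp[i - 1, c]
            if w <= c:
                dp[i, c] = max(dp[i, c], dp[i - 1, c - w] + v)
    # Backtrack to recover the chosen shot indices.
    chosen, c = [], budget
    for i in range(n, 0, -1):
        if dp[i, c] != dp[i - 1, c]:
            chosen.append(i - 1)
            c -= shot_lengths[i - 1]
    return sorted(chosen)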

Acknowledgments

We gratefully thank the open-source repositories below, which greatly benefited our research.

  • Thanks to KTS for the effective shot-generation algorithm.
  • Thanks to DR-DSN for the pre-processed public datasets.
  • Thanks to VASNet for the training and evaluation pipeline.

Citation

If you find our code or paper helpful, please consider citing:

@article{zhu2020dsnet,
  title={DSNet: A Flexible Detect-to-Summarize Network for Video Summarization},
  author={Zhu, Wencheng and Lu, Jiwen and Li, Jiahao and Zhou, Jie},
  journal={IEEE Transactions on Image Processing},
  volume={30},
  pages={948--962},
  year={2020}
}