Learning target-focusing convolutional regression model for visual object tracking
Abstract: Discriminative correlation filters (DCFs) have been widely used in the tracking community recently. DCF-based trackers train a ridge regression model on samples generated by circularly shifting an image patch, and estimate the target location from a response map generated by the correlation filters. However, the generated samples introduce negative effects and the response map is vulnerable to noise interference, both of which degrade tracking performance. In this paper, to address these drawbacks, we propose a target-focusing convolutional regression (CR) model for visual object tracking (called TFCR). This model uses a target-focusing loss function to alleviate the influence of background noise on the response map of the current frame, which effectively improves tracking accuracy. In particular, it can effectively balance the imbalance between positive and negative samples by reducing the effect of negative samples on the object appearance model. Extensive experimental results illustrate that our TFCR tracker achieves competitive performance compared with state-of-the-art trackers.
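For intuition, the idea of focusing a regression loss on the target region can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the function names (`gaussian_label`, `target_focusing_loss`), the Gaussian soft label, and the `alpha` weighting parameter are all assumptions made for the sketch.

```python
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    """Soft regression label: a Gaussian peak centered on the target."""
    ys, xs = np.ogrid[:h, :w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def target_focusing_loss(response, label, alpha=0.9):
    """Weighted ridge-style regression loss (illustrative only).

    Pixels near the target (large label values) get a weight close to 1,
    while background pixels get weight (1 - alpha), which damps the
    contribution of the many negative (background) samples.
    """
    weight = (1 - alpha) + alpha * label
    return np.sum(weight * (response - label) ** 2) / label.size

# A clean label vs. a response corrupted by background noise:
label = gaussian_label(17, 17)
noisy = label + 0.1 * np.random.default_rng(0).standard_normal(label.shape)
print(target_focusing_loss(noisy, label))
```

Down-weighting the background in this way is one simple reading of "reducing the effect of negative samples on the appearance model"; the actual loss used by TFCR is defined in the paper.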
The MATLAB code for the TFCR tracker can be downloaded here[google] or here[baidu(password:nvb5)].
1. If you want to compare against our results in your experiments, just download the raw experimental results.
2. If you want to test our experiment:
   - 2.1 Download the code and unzip it on your computer.
   - 2.2 Run `demo.m` to test a tracking sequence with the default model.
   - 2.3 Run `run_TFCR.m` to evaluate performance on the OTB, TC, or UAV benchmarks.
Prerequisites: Ubuntu 18, MATLAB R2017, GTX 1080 Ti, CUDA 8.0.
| Dataset | OTB2013 | OTB2015 | TC128 | UAV123 |
| --- | --- | --- | --- | --- |
| Prec. | 0.871 | 0.876 | 0.776 | 0.715 |
| AUC | 0.671 | 0.665 | 0.564 | 0.512 |
If you find the code useful, please cite:
@article{TFCR,
title={Learning target-focusing convolutional regression model for visual object tracking},
author={Yuan, Di and Fan, Nana and He, Zhenyu},
journal={Knowledge-Based Systems},
doi={10.1016/j.knosys.2020.105526},
year={2020}
}
@inproceedings{song-iccv17-CREST,
author={Song, Yibing and Ma, Chao and Gong, Lijun and Zhang, Jiawei and Lau, Rynson and Yang, Ming-Hsuan},
title={CREST: Convolutional Residual Learning for Visual Tracking},
booktitle={IEEE International Conference on Computer Vision},
pages={2555--2564},
year={2017}
}
Feedback and comments are welcome! Feel free to contact us at dyuanhit@gmail.com
Some of the parameter settings and functions are borrowed from CREST (https://github.com/ybsong00/CREST-Release).