| paper | project website |
- 2023-11-12: This paper has been accepted by ICCV 2023. The code is still being actively updated 🌝.
Although convolutional neural networks (CNNs) have been proposed to remove adverse weather from single images with a single set of pretrained weights, they fail on videos because they cannot exploit temporal information. Moreover, existing methods for removing adverse weather (e.g., rain, fog, and snow) from videos handle only one weather type. In this work, we propose the first framework for restoring videos under all adverse weather conditions: a video adverse-weather-component suppression network (ViWS-Net). To achieve this, we first devise a weather-agnostic video transformer encoder with multiple transformer stages. We then design a long short-term temporal modeling mechanism for weather messengers to fuse adjacent input frames early and learn weather-specific information. We further introduce a weather discriminator with gradient reversal that adversarially predicts weather types, preserving weather-invariant common information while suppressing weather-specific information in pixel features. Finally, we develop a messenger-driven video transformer decoder that retrieves the residual weather-specific features, aggregates them spatiotemporally with hierarchical pixel features, and refines the result to predict the clean target frame. Experimental results on benchmark datasets and real-world weather videos demonstrate that ViWS-Net outperforms current state-of-the-art methods in restoring videos degraded by any weather condition.
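The gradient reversal used by the weather discriminator can be illustrated with a minimal sketch: the forward pass is the identity, while the backward pass flips the gradient sign (scaled by a factor λ), so training the discriminator to predict weather types pushes the shared encoder toward weather-invariant features. The functions below are illustrative pure-Python stand-ins with hand-rolled backprop, not the repository's actual implementation.

```python
def grl_forward(x):
    """Gradient reversal layer, forward pass: the identity mapping."""
    return x

def grl_backward(grad_output, lam=1.0):
    """Backward pass: flip the gradient sign and scale by lambda."""
    return [-lam * g for g in grad_output]

# Features flow unchanged through the GRL on the forward pass...
features = [0.5, -1.2, 3.0]
assert grl_forward(features) == features

# ...but the discriminator's gradient is reversed before reaching the
# encoder, so minimizing the discriminator's weather-classification loss
# maximizes weather confusion in the shared pixel features.
grad_from_discriminator = [0.1, -0.2, 0.3]
grad_to_encoder = grl_backward(grad_from_discriminator, lam=0.5)
```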
```shell
conda env create -f environment.yaml
```
| RainMotion | REVIDE | KITTI-snow |
| Pretrained-weights | Checkpoint |
- Training

```shell
python main_multi.py --batchSize 4 --data_dir Dataset/RainMotion/Test --save_folder weights
```
- Testing

```shell
python eval_derain.py --data_dir Dataset/RainMotion/Test --model weights/model_motion.pth --output Results
python eval_psnr_ssim.py --dataset RainMotion
```
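The evaluation step reports PSNR (and SSIM) between restored and ground-truth frames. For reference, a minimal sketch of the PSNR computation; the function below is illustrative, not the repository's `eval_psnr_ssim.py` implementation, and assumes frames normalized to [0, 1].

```python
import numpy as np

def psnr(clean, restored, max_val=1.0):
    """Peak signal-to-noise ratio (in dB) between two frames."""
    mse = np.mean((clean.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy usage on a synthetic 4x4 frame with a constant error of 0.1,
# so mse = 0.01 and PSNR = 10 * log10(1 / 0.01) = 20 dB.
clean = np.ones((4, 4))
restored = clean - 0.1
print(round(psnr(clean, restored), 2))  # → 20.0
```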
If you find this code useful, please cite:

```bibtex
@inproceedings{yang2023video,
  title={Video Adverse-Weather-Component Suppression Network via Weather Messenger and Adversarial Backpropagation},
  author={Yang, Yijun and Aviles-Rivero, Angelica I and Fu, Huazhu and Liu, Ye and Wang, Weiming and Zhu, Lei},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={13200--13210},
  year={2023}
}
```