Prepare the GazeFollow and VideoAttentionTarget datasets before training.
- Download GazeFollow.
- If you train with the auxiliary regression, run
  `scripts/gen_gazefollow_head_masks.py`
  to generate head masks.
- Download VideoAttentionTarget.
Modify `DATA_ROOT` in the configs under `ViTGaze/configs/common/dataloader` so it points to your dataset directory.
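As a sketch, the change amounts to editing a line like the one below in the dataloader config (the exact filename, variable layout, and path are assumptions; adjust them to match the actual config):

```python
# In a config file under ViTGaze/configs/common/dataloader/ (exact file
# and surrounding structure are assumptions, not the repo's literal code):
DATA_ROOT = "/path/to/datasets"  # directory holding GazeFollow and VideoAttentionTarget
```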
Run

```shell
bash val.sh configs/gazefollow_518.py ${Path2checkpoint} gf
```

to evaluate on GazeFollow.
Run

```shell
bash val.sh configs/videoattentiontarget.py ${Path2checkpoint} vat
```

to evaluate on VideoAttentionTarget.