
PaddlePaddle and PyTorch implementation of APViT and TransFER


APViT: Vision Transformer With Attentive Pooling for Robust Facial Expression Recognition

APViT is a simple and efficient Transformer-based method for facial expression recognition (FER). It builds on TransFER but introduces two attentive pooling (AP) modules that do not require any learnable parameters. These modules help the model focus on the most expressive features and ignore the less relevant ones. You can read more about our method in our paper.
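To make the idea concrete, here is a minimal, hedged sketch of parameter-free attentive pooling. It is not the paper's exact module: the saliency score here is simply each token's L2 norm, an illustrative stand-in for the attention-derived scores APViT uses, and all shapes and names are assumptions for the example.

```python
import numpy as np

def attentive_pool(tokens: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Parameter-free pooling sketch: score each token without learnable
    weights (here by its L2 norm, an illustrative choice) and keep only
    the top-scoring fraction, preserving the original token order."""
    scores = np.linalg.norm(tokens, axis=-1)          # (N,) saliency per token
    k = max(1, int(round(len(tokens) * keep_ratio)))  # how many tokens survive
    keep = np.argsort(scores)[::-1][:k]               # indices of top-k tokens
    return tokens[np.sort(keep)]                      # restore spatial order

tokens = np.random.rand(196, 384)                     # e.g. 14x14 ViT patch tokens
print(attentive_pool(tokens, keep_ratio=0.25).shape)  # (49, 384)
```

Because the pooling adds no parameters, it can be dropped between Transformer stages without changing the training recipe; only the number of tokens passed downstream shrinks.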

Update

  • 2023-05-16: Added a Colab demo for inference, testing, and training on RAF-DB: Open In Colab
  • 2023-03-31: Added a notebook demo for inference.

Installation

This project is based on MMClassification and PaddleClas, please refer to their repos for installation.

Notably, our method does not rely on custom CUDA operations in mmcv-full.

The pre-trained IR-50 weights were downloaded from face.evoLVe, and the ViT-Small weights from pytorch-image-models.

Data preparation

  1. We use the face alignment code in face.evoLVe to align the face images first.
  2. The downloaded RAF-DB dataset is reorganized as follows:
data/
├─ RAF-DB/
│  ├─ basic/
│  │  ├─ EmoLabel/
│  │  │  ├─ train.txt
│  │  │  ├─ test.txt
│  │  ├─ Image/
│  │  │  ├─ aligned/
│  │  │  ├─ aligned_224/  # re-aligned by MTCNN
  3. We also provide a preprocessed version of RAF-DB, which can be downloaded from [here](https://github.com/youqingxiaozhua/APViT/releases/download/V1.0.0/RAF-DB.zip). The password of the zip file is the sum of the pixel values of the RAF-DB/basic/Image/aligned/test_0001_aligned.jpg image. To obtain the password, use the following code:
import cv2
print(cv2.imread('data/RAF-DB/basic/Image/aligned/test_0001_aligned.jpg').sum())
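Once the data is in place, the label lists under EmoLabel can be read with a few lines. This is a hedged sketch assuming each line of train.txt/test.txt pairs an image name with an integer expression label (e.g. "train_00001.jpg 5"), as in the original RAF-DB release; the function name is our own.

```python
import io

def load_labels(fh):
    """Parse RAF-DB style label lines of the form '<image_name> <label>'."""
    return [(name, int(lbl))
            for name, lbl in (line.split() for line in fh if line.strip())]

# Usage with an in-memory stand-in for EmoLabel/train.txt:
demo = io.StringIO("train_00001.jpg 5\ntrain_00002.jpg 1\n")
print(load_labels(demo))  # [('train_00001.jpg', 5), ('train_00002.jpg', 1)]
```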

PaddlePaddle Version

The PaddlePaddle version of TransFER is included in the Paddle folder.

Training

To train an APViT model with two GPUs, use:

python -m torch.distributed.launch --nproc_per_node=2 \
    train.py configs/apvit/RAF.py \
    --launcher pytorch

Evaluation

To evaluate the model with a given checkpoint, use:

PYTHONPATH=$(pwd):$PYTHONPATH \
python -m torch.distributed.launch --nproc_per_node=2 \
    tools/test.py configs/apvit/RAF.py \
    weights/APViT_RAF-3eeecf7d.pth \   # your checkpoint
    --launcher pytorch

Pretrained checkpoints

Model   RAF-DB Acc.   Config   Download
APViT   91.98%        config   model

License

This project is released under the Apache 2.0 license.

Reference

If you use APViT or TransFER, please cite the paper:

@article{xue2022vision,
  title={Vision Transformer with Attentive Pooling for Robust Facial Expression Recognition},
  author={Xue, Fanglei and Wang, Qiangchang and Tan, Zichang and Ma, Zhongsong and Guo, Guodong},
  journal={IEEE Transactions on Affective Computing},
  year={2022},
  publisher={IEEE}
}

@inproceedings{xue2021transfer,
  title={TransFER: Learning Relation-aware Facial Expression Representations with Transformers},
  author={Xue, Fanglei and Wang, Qiangchang and Guo, Guodong},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={3601--3610},
  year={2021}
}
