**GPE**

| # Person | Number | Url |
|---|---|---|
| 1 | 12,884 | https://reurl.cc/EzXD8R |
| 2 | 18,879 | https://reurl.cc/ygmX1a |
| 3 | 27,694 | https://reurl.cc/e8Wa5W |
| >=4 | 28,178 | https://reurl.cc/m9ZXd7 |
| Cross Period | X | X |
| Total | 87,635 | X |

**SPE**

| Action | Number | Url |
|---|---|---|
| Walk | 78,852 | https://reurl.cc/OqEL9D |
| Wave | 77,431 | https://reurl.cc/Z71vGQ |
| Jump | 40,670 | https://reurl.cc/j5RYNM |
| Run | 41,238 | https://reurl.cc/EzXL3m |
| Cross Period | X | https://reurl.cc/ld4mZq |
| Total | 238,191 | X |
$ git clone https://github.com/fingerk28/EASFN.git
- Create the virtual environment
$ cd EASFN/
$ python3 -m venv env
- Activate the virtual environment
$ source env/bin/activate
- Install the necessary packages (errors related to the gdown package can be ignored)
$ pip install -r requirements.txt
- Download the SPE or GPE dataset
$ chmod +x download_SPE.sh
$ ./download_SPE.sh
$ chmod +x download_GPE.sh
$ ./download_GPE.sh
When the download finishes, the script may not exit on its own; press Ctrl+C to terminate it yourself.
If errors occur, you can download the files manually using the links in the tables above.
Please make sure that all four zip files are present in EASFN/dataset/SPE or EASFN/dataset/GPE; a quick sanity check is sketched below.
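If you want to verify the download programmatically, a minimal check like the following works (it only counts .zip files, since the individual archive names are not listed here; the path assumes you run it from the directory containing EASFN/):

```python
from pathlib import Path

# Count the downloaded archives; each dataset directory should hold
# four zip files. Swap "SPE" for "GPE" depending on what you downloaded.
dataset_dir = Path("EASFN/dataset/SPE")
zips = sorted(dataset_dir.glob("*.zip"))
print(f"Found {len(zips)} archive(s): {[z.name for z in zips]}")
assert len(zips) == 4, "Expected four zip files; re-download the missing ones."
```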
- Unzip the dataset (you can substitute GPE for SPE); if the script fails on your platform, see the Python fallback after the commands
$ cd EASFN/dataset/SPE
$ chmod +x SPE.sh
$ ./SPE.sh
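If the helper script does not work on your system, plain extraction can be done in Python instead (this assumes the script only unzips the archives in place, which may not cover everything it does):

```python
import zipfile
from pathlib import Path

# Extract every archive in the dataset directory, in place.
dataset_dir = Path("EASFN/dataset/SPE")  # or EASFN/dataset/GPE
for archive in sorted(dataset_dir.glob("*.zip")):
    print(f"Extracting {archive.name} ...")
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dataset_dir)
```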
You can adjust the training parameters in config/args.py (a rough sketch of what such a config typically looks like follows the test commands below)
$ python3 train.py --dataset=SPE
$ python3 train.py --dataset=GPE
You can adjust the testing parameters in config/args.py
$ python3 test.py --dataset=SPE
$ python3 test.py --dataset=GPE
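The real options live in config/args.py in the repository; as a rough illustration only, an argparse-style config of this kind usually looks like the sketch below. Only --dataset is confirmed by the commands above; every other option and default here is a hypothetical placeholder.

```python
import argparse

# Hypothetical sketch of an argparse-style config file. Only --dataset
# is confirmed by the train/test commands; the rest are placeholders.
def get_args():
    parser = argparse.ArgumentParser(description="EASFN training/testing")
    parser.add_argument("--dataset", choices=["SPE", "GPE"], default="SPE")
    parser.add_argument("--epochs", type=int, default=50)        # placeholder
    parser.add_argument("--batch-size", type=int, default=32)    # placeholder
    parser.add_argument("--lr", type=float, default=1e-3)        # placeholder
    return parser.parse_args()

if __name__ == "__main__":
    print(get_args())
```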
(Qualitative comparison of EASFN, PIW, and WiSPPN on Walk, Wave, Jump, and Run sequences; the per-action visualizations are omitted here.)
From this comparison we can see that EASFN outperforms the camera-based method in dark environments. Under poor illumination, OpenPose tends to lose keypoints such as the wrists and ankles, whereas EASFN correctly returns all keypoints.
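To make the "lost keypoints" claim concrete: OpenPose emits one JSON file per frame, storing an (x, y, confidence) triple for every keypoint, so undetected joints show up with near-zero confidence. A small check along these lines counts them (the 0.1 cutoff and the file name are assumptions):

```python
import json

# Count keypoints OpenPose failed to detect in one output frame.
# OpenPose stores flat [x, y, confidence, ...] triples per person;
# undetected joints come back with confidence ~0 (0.1 is an assumed cutoff).
def count_missing_keypoints(json_path, threshold=0.1):
    with open(json_path) as f:
        frame = json.load(f)
    missing = 0
    for person in frame["people"]:
        confidences = person["pose_keypoints_2d"][2::3]
        missing += sum(1 for c in confidences if c < threshold)
    return missing

print(count_missing_keypoints("frame_000000_keypoints.json"))
```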
- Comparisons on the proposed benchmarks:
| Metric | Benchmark | WiSPPN [1] | PIW [2] | EASFN |
|---|---|---|---|---|
| MPJPE | SPE | 44.16 | 78.88 | 37.34 |
| MPJPE | GPE | X | 119.60 | 44.14 |
| PCK@20 | SPE | 21.86% | 32.96% | 50.05% |
| PCK@20 | GPE | X | 27.64% | 43.98% |
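For reference, MPJPE is the mean Euclidean distance between predicted and ground-truth joints (lower is better), and PCK@20 is the percentage of keypoints whose error falls under a threshold of 20 (higher is better; the exact normalization of the threshold follows each benchmark's protocol). A minimal NumPy sketch, assuming (J, 2) pixel-coordinate arrays:

```python
import numpy as np

# Mean per-joint position error: average Euclidean distance between
# predicted and ground-truth joints; pred and gt are (J, 2) arrays.
def mpjpe(pred, gt):
    return np.linalg.norm(pred - gt, axis=1).mean()

# Percentage of Correct Keypoints: share of joints with error below
# `threshold` (using 20 for PCK@20 is an assumption about the protocol).
def pck(pred, gt, threshold=20.0):
    return (np.linalg.norm(pred - gt, axis=1) < threshold).mean() * 100.0

# Toy usage with a random 14-joint pose (the joint count is illustrative).
rng = np.random.default_rng(0)
gt = rng.uniform(0, 100, size=(14, 2))
pred = gt + rng.normal(0, 5, size=(14, 2))
print(f"MPJPE: {mpjpe(pred, gt):.2f}, PCK@20: {pck(pred, gt):.2f}%")
```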
- Comparative results against other methods on our SPE benchmark (PCK@20):
| Action | WiSPPN | PIW | EASFN |
|---|---|---|---|
| Walk | 23.58% | 39.87% | 61.14% |
| Wave | 25.92% | 33.06% | 45.81% |
| Run | 22.12% | 37.91% | 58.11% |
| Jump | 15.82% | 20.99% | 35.15% |
- Comparative results against other methods on our GPE benchmark (PCK@20):
| # Person | PIW | EASFN |
|---|---|---|
| 1-person | 53.37% | 72.65% |
| 2-person | 49.31% | 65.39% |
| 3-person | 19.54% | 34.75% |
| >=4 person | 22.66% | 39.12% |
- Ablation study on our SPE benchmark:
| Model | Architecture | PCK@20 |
|---|---|---|
| 1 | SFN | 44.99% |
| 2 | ASFN | 45.95% |
| 3 | EASFN, D=1 | 48.08% |
| 4 | EASFN, D=3 (Proposed) | 50.05% |
| 5 | EASFN, D=5 | 45.95% |
- [1] Fei Wang, Stanislav Panev, Ziyi Dai, Jinsong Han, and Dong Huang. 2019. Can WiFi estimate person pose? arXiv preprint arXiv:1904.00277 (2019).
- [2] Fei Wang, Sanping Zhou, Stanislav Panev, Jinsong Han, and Dong Huang. 2019. Person-in-WiFi: Fine-grained person perception using WiFi. In Proceedings of the IEEE International Conference on Computer Vision. 5452–5461.