This repository contains partial implementations of our PR 2017 and CVPR 2018 papers.
The code is based on Matlab R2015b, Python 2.7.14, and PyTorch 0.3.0.
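A quick way to confirm the Python side of the environment (an illustrative snippet, not part of the repository):

```python
# Print the interpreter and PyTorch versions to compare against the pinned
# environment above (Python 2.7.14, PyTorch 0.3.0); %-formatting keeps the
# snippet valid under both Python 2 and 3.
import sys
import torch

print("Python:  %s" % sys.version.split()[0])
print("PyTorch: %s" % torch.__version__)
```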
- `$ git clone https://github.com/nkliuyifang/Skeleton-based-Human-Action-Recognition.git`
- Download and unzip the datasets (UTD-MHAD, Northwestern-UCLA, and NTU RGB+D).
- Open Matlab and run `run.m`.
- `$ python run.py` (an illustrative training sketch follows this list).
- `$ python show.py`
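For readers unfamiliar with the PyTorch side, the sketch below shows roughly what a training step like `run.py` involves: a CNN classifier trained on the images produced by the Matlab preprocessing. It is a minimal sketch under stated assumptions (the directory layout, backbone, class count, image size, and hyperparameters are all illustrative); consult `run.py` itself for the actual procedure.

```python
# Illustrative sketch only: a plain image-classification training loop over the
# skeleton visualizations written out by the Matlab step. The data directory,
# image size, backbone, class count, and hyperparameters are assumptions for
# illustration, not the repository's actual run.py.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

NUM_CLASSES = 27  # e.g. UTD-MHAD contains 27 action classes

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes one image per skeleton sequence, stored in per-class sub-folders
# (hypothetical path).
train_set = datasets.ImageFolder("data/train_images", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# A generic ImageNet-pretrained backbone with a new classification head.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

# Note: written in current PyTorch style; with the pinned PyTorch 0.3.0 the
# batches would additionally be wrapped in torch.autograd.Variable.
for epoch in range(30):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```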
We report recognition accuracy averaged over ten runs:
Method | UTD-MHAD Cross Subject (%) | Northwestern-UCLA Cross View (%) | NTU RGB+D Cross View (%) |
---|---|---|---|
Single CNN (PR 2017) | 87.63 | 73.98 | 83.42 |
Single CNN + View Transform (PR 2017) | 89.74 | 84.30 | 87.13 |
Pose Evolution Image (CVPR 2018) | 88.84 | 75.65 | 84.72 |
Pose Evolution Image + View Transform | 88.14 | 86.61 | 86.38 |
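The numbers above are averages over ten independent runs. A minimal sketch of that averaging, assuming each run appends its accuracy to a plain text file (the file name and format are hypothetical):

```python
# Illustrative only: average per-run accuracies stored one value per line.
import numpy as np

accuracies = np.loadtxt("results/accuracy_per_run.txt")  # hypothetical path
print("mean accuracy over %d runs: %.2f%%" % (accuracies.size, accuracies.mean()))
```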
Please cite the following papers if you use this repository in your research.
@article{PR2017,
title={Enhanced Skeleton Visualization for View Invariant Human Action Recognition},
author={Liu, Mengyuan and Liu, Hong and Chen, Chen},
journal={Pattern Recognition (PR)},
volume={68},
pages={346--362},
year={2017}
}
@inproceedings{CVPR2018,
title={Recognizing Human Actions as the Evolution of Pose Estimation Maps},
author={Liu, Mengyuan and Yuan, Junsong},
booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
pages={1159--1168},
year={2018}
}