
Online skeleton-based action recognition with multi-feature early fusion

Introduction

Existing skeleton-based action recognition methods take a whole segmented action sequence as input and adopt late fusion to integrate multi-stream results, which incurs a large amount of computation and is not suitable for online application. This paper proposes an online skeleton-based action recognition method with multi-feature early fusion. The approach integrates different types of input features through an early embedding layer and combines max pooling with hierarchical pooling to extract multi-semantic spatial information. Moreover, the selection strategy for skeleton sequences is carefully designed. A new 3D skeleton dataset, NTU-GAST Skeleton, with 17 joint points is provided to be compatible with existing 3D human pose estimation methods for online action recognition. Experiments on the two benchmark datasets NTU60 and NTU120 RGB+D indicate that the proposed method achieves performance competitive with state-of-the-art results at more than 50× less computational complexity.
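
To make the early-fusion idea concrete, here is a minimal PyTorch-style sketch (the module name, dimensions, and sum-fusion choice are illustrative assumptions, not the paper's exact implementation): joint positions and frame-to-frame velocities are embedded and fused at the network input, so a single stream replaces late fusion of per-stream scores.

import torch
import torch.nn as nn

class EarlyFusionEmbedding(nn.Module):
    # Hypothetical sketch: fuse position and velocity features at the input.
    def __init__(self, in_dim=3, embed_dim=64):
        super().__init__()
        self.pos_embed = nn.Linear(in_dim, embed_dim)  # joint coordinates
        self.vel_embed = nn.Linear(in_dim, embed_dim)  # frame-to-frame motion

    def forward(self, joints):  # joints: (N, T, V, 3)
        velocity = torch.zeros_like(joints)
        velocity[:, 1:] = joints[:, 1:] - joints[:, :-1]  # temporal difference
        # Early fusion: sum the two embeddings so one stream sees both
        # feature types, instead of late-fusing scores of separate streams.
        return self.pos_embed(joints) + self.vel_embed(velocity)

x = torch.randn(2, 20, 17, 3)  # batch, frames, 17 joints, xyz
print(EarlyFusionEmbedding()(x).shape)  # torch.Size([2, 20, 17, 64])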

Online Skeleton-based Action Recognition with a Single RGB Camera


Prerequisites

The code is built with the following libraries:

Data Preparation

NTU-GAST Skeleton dataset

[Figure: samples of the NTU-GAST Skeleton dataset]
  • Download the NTU-GAST Skeleton dataset from Baidu drive (extraction code: 3bid) or from Google drive: NTU-GAST Skeleton
  • Extract the dataset to ./data/NTU-GAST-Skeleton
  • We use GAST60 (NTU-GAST60 Skeleton) as an example; a sketch of the conversion idea follows the commands below.
    cd ./data/gast60/
    # Convert the .h5 skeleton file to a .pkl file
    python preprocess_gast60.py
    # Generate the training and evaluation data
    python protocol_gast60.py
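
For reference, a minimal sketch of what such an .h5-to-.pkl conversion could look like (the file names and flat key layout are assumptions; preprocess_gast60.py handles the real format):

import pickle
import h5py

# Hypothetical sketch of an .h5 -> .pkl conversion; the file names and
# flat key layout are assumptions, not the repo's actual format.
with h5py.File('gast60_skeletons.h5', 'r') as f:
    data = {name: f[name][:] for name in f.keys()}  # load every dataset

with open('gast60_skeletons.pkl', 'wb') as out:
    pickle.dump(data, out)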

NTU RGB+D dataset

We use NTU60 RGB+D as an example. First, download the NTU RGB+D dataset.

  • Extract the dataset to ./data/ntu/nturgb+d_skeletons/
  • Process the data (a sketch of the final centering step follows the commands)
    cd ./data/ntu
    # Get the skeletons of each performer
    python get_raw_skes_data.py
    # Remove bad skeletons
    python get_raw_denoised_data.py
    # Translate each skeleton sequence to the center of its first frame
    python seq_transformation.py
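
For intuition, the last step amounts to translating a sequence so it is centered on its first frame; a minimal NumPy sketch of that idea (the (T, V, 3) layout and root-joint index are assumptions; seq_transformation.py does the real preprocessing):

import numpy as np

def center_on_first_frame(skeleton, root_joint=0):
    # Hypothetical sketch: subtract the first frame's root-joint position
    # from the whole (T, V, 3) sequence.
    origin = skeleton[0, root_joint]  # (3,) position in frame 0
    return skeleton - origin          # broadcasts over frames and joints

seq = np.random.randn(30, 25, 3)      # 30 frames, 25 NTU joints, xyz
centered = center_on_first_frame(seq)
assert np.allclose(centered[0, 0], 0.0)  # frame-0 root is now the origin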

Training

# For the CS (cross-subject) setting
python main.py --network ESN --train 1 --case 0
# For the CV (cross-view) setting
python main.py --network ESN --train 1 --case 1

Testing

  • Test the pre-trained models (./results/NTU60/SGN/)
# For the CS (cross-subject) setting
python main.py --network ESN --train 0 --case 0
# For the CV (cross-view) setting
python main.py --network ESN --train 0 --case 1

Reference

  • The NTU-GAST Skeleton dataset was generated with GAST-Net.

  • Note that some of the code references SGN.
