Action-Net is a dataset containing images of 16 different human actions, collected to ensure that machine learning systems can be trained to understand human actions, gestures and activities. This is part of DeepQuest AI's mission to train machine learning systems to perceive, understand and act accordingly in solving problems in any environment they are deployed in.
This is the first release of the Action-Net dataset. It contains 19,200 images that cover 16 classes. The classes
included in this release are:
- Calling
- Clapping
- Cycling
- Dancing
- Drinking
- Eating
- Fighting
- Hugging
- Kissing
- Laughing
- Listening to Music
- Running
- Sitting
- Sleeping
- Texting
- Using Laptop
There are 1,200 images for each category, with 1,000 images for training and 200 images for testing. We are working on adding more
categories in the future and will continue to improve the dataset.
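The per-category split above can be sketched in plain Python. This is an illustrative helper, not part of the released codebase; the filenames are placeholders:

```python
import random

def split_class_images(filenames, train_count=1000, test_count=200, seed=42):
    """Partition one class's 1,200 images into the dataset's
    1,000-train / 200-test split, shuffled reproducibly."""
    assert len(filenames) == train_count + test_count
    rng = random.Random(seed)
    shuffled = list(filenames)
    rng.shuffle(shuffled)
    return shuffled[:train_count], shuffled[train_count:]

# Example with placeholder filenames for one class
images = [f"eating-{i}.jpg" for i in range(1200)]
train, test = split_class_images(images)
print(len(train), len(test))  # 1000 200
```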
>>> DOWNLOAD, TRAINING AND PREDICTION:
The Action-Net dataset is provided for download in the release section of this repository.
You can download the dataset via the link below.
https://github.com/OlafenwaMoses/Action-Net/releases/download/v1/action_net_v1.zip
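Fetching and unpacking the archive can be done with the Python standard library alone. A minimal sketch (the destination folder name is an assumption):

```python
import os
import urllib.request
import zipfile

DATASET_URL = "https://github.com/OlafenwaMoses/Action-Net/releases/download/v1/action_net_v1.zip"

def download(url, dest_path):
    """Download a release archive, skipping the fetch if it already exists."""
    if not os.path.exists(dest_path):
        urllib.request.urlretrieve(url, dest_path)
    return dest_path

def extract(zip_path, dest_dir):
    """Unpack the dataset archive into dest_dir."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
    return dest_dir

# Usage (fetches the full multi-GB dataset, so run deliberately):
# archive = download(DATASET_URL, "action_net_v1.zip")
# extract(archive, "action_net")
```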
We have also provided a Python codebase to download the images, train ResNet50 on the images
and perform prediction using a pretrained model (also using ResNet50) provided in the release section of this repository.
The Python codebase is contained in the action_net.py file, and the model class labels for prediction are provided in the
model_class.json file. The pretrained ResNet50 model is available for download via the link below.
https://github.com/OlafenwaMoses/Action-Net/releases/download/v1/action_net_ex-060_acc-0.745313.h5
This pretrained model was trained for only 60 epochs, yet it achieved over 74% accuracy on the 3,200 test images. You can see the prediction results on new images that were not part of the dataset in the Prediction Results section below. Further training experiments will enhance the accuracy of the model.
Running the training experiment or prediction requires that you have TensorFlow, Keras, OpenCV and ImageAI installed. You can install these dependencies via the commands below.
- TensorFlow 1.4.0 (or a later version), install via pip:
pip3 install --upgrade tensorflow
- OpenCV, install via pip:
pip3 install opencv-python
- Keras 2.x, install via pip:
pip3 install keras
- ImageAI 2.0.3, install via pip:
pip3 install imageai
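Before running the experiment, you can confirm the four dependencies resolve. A small sketch (note that the pip package opencv-python installs under the import name cv2):

```python
import importlib.util

def missing_dependencies(modules):
    """Return the subset of import names that cannot be found.
    find_spec only searches the import path; nothing is imported."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# Import names, not pip package names
required = ["tensorflow", "keras", "cv2", "imageai"]
print(missing_dependencies(required))  # [] means everything is installed
```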
>>> Video & Prediction Results
Click below to watch the video demonstration of the trained model at work.
eating : 100.0 drinking : 3.92037860508232e-09 using-laptop : 6.944534465709584e-11 calling : 5.7910951424891555e-12
eating : 99.44907426834106 drinking : 0.5508399568498135 using-phone : 5.766927415606915e-05 sitting : 1.1222620344142342e-05
fighting : 99.97442364692688 running : 0.01658390392549336 dancing : 0.008970857743406668 sitting : 7.210289965087213e-06
laughing : 99.99998807907104 clapping : 1.3144966715117334e-05 calling : 4.0294068526236515e-06 eating : 4.981405066217803e-07
running : 99.99852180480957 calling : 0.0009251662959286477 listening-to-music : 0.0002909338491008384 cycling : 0.00024121977730828803
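Each line above is the model's raw top-4 output for one sample image, as repeated `label : probability` pairs. A small helper (hypothetical, not part of action_net.py) can turn such a line into a sorted top-k list:

```python
def parse_predictions(raw):
    """Parse 'label : prob label : prob ...' output into
    (label, probability) pairs sorted by descending probability.
    Assumes labels contain no spaces (e.g. 'using-laptop')."""
    tokens = raw.split()
    pairs = []
    # Tokens come in triples: label, ':', probability
    for i in range(0, len(tokens), 3):
        label, _, prob = tokens[i], tokens[i + 1], tokens[i + 2]
        pairs.append((label, float(prob)))
    return sorted(pairs, key=lambda p: p[1], reverse=True)

raw = ("eating : 100.0 drinking : 3.92037860508232e-09 "
       "using-laptop : 6.944534465709584e-11 calling : 5.7910951424891555e-12")
top = parse_predictions(raw)
print(top[0])  # ('eating', 100.0)
```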
>>> References
- He, K. et al., Deep Residual Learning for Image Recognition
https://arxiv.org/abs/1512.03385