Realtime Activity Classifier based on PoseEstimation-CoreML

This app enables real-time detection of the activities "nothing", "waveLeft", "waveRight", "callingLeft", and "callingRight". It builds on the iOS app from https://github.com/tucan9389/PoseEstimation-CoreML. Within the app, the live image of the integrated camera is captured as a pixel stream and processed frame by frame. Each frame is scaled to the model's input resolution of 192 x 192 pixels using the Vision framework and passed to the model for image-based pose estimation, which returns the coordinates of all recognized body keypoints for the current frame. The integrated model is optimized for real-time applications on mobile devices and tailored to resource-constrained environments, so a classification rate of 15 frames per second can be achieved on an iPhone X.

We therefore use a window size of 15 frames for estimating activities, which allows one prediction per second. A dedicated class collects the extracted body keypoints within the app, and all values are transformed and normalized so that the actual movements can be identified independently of the image resolution. Finally, the activity classifier detects the performed activity. The required model was exported with the Turi Create framework into the iOS-compatible format (.mlmodel); it receives an array with the individual 2D coordinates of the body keypoints and returns the predicted activity.
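
The per-frame pose estimation step could look roughly like the following sketch. It assumes the pose model bundled with PoseEstimation-CoreML is available as a plain MLModel (the concrete model class name comes from that project and is not shown here); Vision handles the scaling to the 192 x 192 input mentioned above.

```swift
import Vision
import CoreML
import CoreVideo

// Builds a Vision request that runs the pose estimation model on each frame.
// `poseMLModel` stands for the model bundled with PoseEstimation-CoreML
// (loaded elsewhere); the callback receives the raw heatmap multi-array,
// from which the keypoint coordinates are decoded downstream.
func makePoseRequest(with poseMLModel: MLModel,
                     onHeatmaps: @escaping (MLMultiArray) -> Void) throws -> VNCoreMLRequest {
    let vnModel = try VNCoreMLModel(for: poseMLModel)
    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        if let observation = request.results?.first as? VNCoreMLFeatureValueObservation,
           let heatmaps = observation.featureValue.multiArrayValue {
            onHeatmaps(heatmaps)
        }
    }
    // Vision resizes every camera frame to the model's 192 x 192 input resolution.
    request.imageCropAndScaleOption = .scaleFill
    return request
}

// Called for every CVPixelBuffer delivered by the capture session.
func estimatePose(on pixelBuffer: CVPixelBuffer, using request: VNCoreMLRequest) {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```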

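The keypoint-collecting class and the classification step could be sketched as follows. The window size of 15 frames and the normalization to the unit square follow the description above; the input layout and the feature names ("features", "activity") are assumptions and have to match the interface Xcode generates for the exported Turi Create model, which typically also expects LSTM state inputs.

```swift
import CoreML
import CoreGraphics

// Hedged sketch of the class that collects keypoints over a 15-frame window,
// normalizes them, and feeds them to the Turi Create activity classifier.
final class ActivityWindow {
    private let windowSize = 15            // 15 frames ≈ one prediction per second at 15 fps
    private var frames: [[CGPoint]] = []

    /// Normalizes keypoints to [0, 1] so movements are recognized
    /// independently of the image resolution.
    private func normalize(_ keypoints: [CGPoint], in imageSize: CGSize) -> [CGPoint] {
        keypoints.map { CGPoint(x: $0.x / imageSize.width, y: $0.y / imageSize.height) }
    }

    /// Collects one frame of keypoints and, once the window is full, passes the
    /// flattened 2D coordinates to `model` (the .mlmodel exported with Turi Create).
    func append(_ keypoints: [CGPoint], imageSize: CGSize, model: MLModel) -> String? {
        frames.append(normalize(keypoints, in: imageSize))
        guard frames.count == windowSize else { return nil }
        defer { frames.removeAll() }

        // Flatten the window into one array of x/y coordinates.
        let flat = frames.flatMap { frame in frame.flatMap { [Double($0.x), Double($0.y)] } }
        guard let input = try? MLMultiArray(shape: [NSNumber(value: flat.count)], dataType: .double) else {
            return nil
        }
        for (index, value) in flat.enumerated() {
            input[index] = NSNumber(value: value)
        }

        // Assumed input/output feature names; check the generated model interface.
        guard let provider = try? MLDictionaryFeatureProvider(dictionary: ["features": input]),
              let output = try? model.prediction(from: provider) else { return nil }
        return output.featureValue(for: "activity")?.stringValue
    }
}
```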