Human action classification for video, offline and natively on iOS via Core ML
Uses the kinetics-i3d model to classify videos into one of the 400 action classes defined in Kinetics 400.
Reference: See the accompanying blog post.
To install via Swift Package Manager, add `VisualActionKit` to the dependencies in your `Package.swift` file. Alternatively, add the package directly from Xcode.
```swift
let package = Package(
    ...
    dependencies: [
        .package(url: "https://github.com/lukereichold/VisualActionKit.git", from: "0.1.0")
    ],
    ...
)
```
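If you're building a SwiftPM target, the package also needs to be listed as a dependency of the target that uses it. A minimal sketch, assuming the product is named `VisualActionKit`; the target name `MyApp` is a placeholder for your own target:

```swift
targets: [
    .target(
        name: "MyApp", // placeholder; substitute your own target name
        dependencies: ["VisualActionKit"]
    )
]
```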
Then `import VisualActionKit` wherever you’d like to use it:
```swift
import AVFoundation
import VisualActionKit

// The resource URL is optional, so unwrap it before building the asset.
guard let url = Bundle.module.url(forResource: "writing", withExtension: "mp4") else {
    fatalError("Missing bundled resource: writing.mp4")
}
let asset = AVAsset(url: url)
try Classifier.shared.classify(asset) { predictions in
    print(predictions)
}
```
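The exact shape of `predictions` isn't shown above. Assuming it is a sequence of label/probability pairs (an assumption, not the library's documented callback type), you could surface just the top-scoring action like this:

```swift
try Classifier.shared.classify(asset) { predictions in
    // Assumes each prediction is a (label: String, probability: Double) pair;
    // check the library's actual callback type before relying on this.
    if let best = predictions.max(by: { $0.probability < $1.probability }) {
        print("Most likely action: \(best.label) (\(best.probability))")
    }
}
```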
Contributions welcome. Please check out the issues.