
Object Detection with PyTorch, Core ML, and Vision on iOS #91

Open
wants to merge 4 commits into master

Conversation

hietalajulius

Thank you for maintaining such a great project; keep it up!

Here's a proposal PR for an example app that shows how to use a PyTorch model (YOLOv5) for object detection on iOS with Apple's Core ML and Vision frameworks. The app itself is adapted from the one presented in this Apple tutorial: https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture

The example app does not depend on libtorch; instead, it uses coremltools to export the PyTorch model to a Core ML model and then runs inference through Core ML and Vision. One (anecdotal) benefit of this approach is that many developers are hesitant to add CocoaPods dependencies to production apps. However, I understand that skipping libtorch may not align with the purposes of this repository.
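
For reference, here is a minimal sketch of what the coremltools export step could look like. It is not the exact script in this PR: the torch.hub loading, the yolov5s variant, the fixed 640x640 input size, and the 1/255 pixel scale are all assumptions for illustration, and the real export may handle the detection head and post-processing differently.

```python
# Minimal sketch of a PyTorch -> Core ML export for YOLOv5.
# Assumptions (not taken from this PR): the model is loaded from torch.hub,
# the input is a fixed 640x640 RGB image, and pixels are scaled to [0, 1].
import torch
import coremltools as ct

# Load a pretrained YOLOv5 model without the AutoShape wrapper so it can be traced.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", autoshape=False)
model.eval()

# Trace the model with a dummy input; coremltools converts TorchScript models.
example_input = torch.zeros(1, 3, 640, 640)
traced_model = torch.jit.trace(model, example_input)

# Convert to Core ML, treating the input as an image and scaling pixels to [0, 1].
mlmodel = ct.convert(
    traced_model,
    inputs=[ct.ImageType(name="image", shape=example_input.shape, scale=1.0 / 255.0)],
)
mlmodel.save("yolov5s.mlmodel")
```

The saved model can then be added to the Xcode project and wrapped in a VNCoreMLModel, letting Vision feed it camera frames as in the linked tutorial.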

Let me know whether this contribution would be useful and whether you have any suggestions for improving its quality!
