Object Detection with PyTorch, Core ML, and Vision on iOS #91
Thank you for maintaining such a great project; keep it up!
Here's a proposal PR for an example app showing how to use a PyTorch model (YOLOv5) for object detection on iOS with Apple's Core ML and Vision frameworks. The app itself has been adapted from the app presented in this tutorial: https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture
The example app does not depend on `libtorch`; instead, it only uses Core ML, together with `coremltools` for exporting the PyTorch model to a Core ML model. The benefit of this approach is the (anecdotal) fact that many developers are hesitant to use CocoaPods in production apps. However, I understand that this may not align with the purposes of this repository.

Let me know whether this contribution could be useful and if there are any suggestions for improving its quality!