This repository has been archived by the owner on Sep 25, 2023. It is now read-only.
I also get a bad access (EXC_BAD_ACCESS) in `VNPixelBufferObservation`. I think it is caused by errors on the Vision framework side, since using the Core ML model directly works fine. I can suggest a simple workaround:

I created a quick and straightforward example using CoreMLHelpers (for demo purposes only; you should handle errors more carefully): https://gist.github.com/opedge/1e3a80528e2d30d2238bc7b18e0a2020

Please note that you need to add the bias back to the output image and convert it from BGR to RGB manually.
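The bias-and-channel-swap step above could be sketched as follows. This assumes the model's output is a `(3, height, width)` `MLMultiArray` of doubles in BGR order; the bias value of 127.5 and the layout are illustrative assumptions, not taken from the thread (check your converted model's actual preprocessing parameters):

```swift
import CoreML

/// Hypothetical post-processing for a style-transfer output stored as a
/// (3, height, width) MLMultiArray in BGR order.
/// The bias (127.5 here) and channel order are assumptions, not from the thread.
func rgbaPixels(from output: MLMultiArray, bias: Double = 127.5) -> [UInt8] {
    let channels = output.shape[0].intValue   // expected: 3
    let height = output.shape[1].intValue
    let width = output.shape[2].intValue
    let cStride = output.strides[0].intValue
    let hStride = output.strides[1].intValue
    let wStride = output.strides[2].intValue
    let ptr = UnsafeMutablePointer<Double>(OpaquePointer(output.dataPointer))
    var rgba = [UInt8](repeating: 255, count: width * height * 4)
    for y in 0..<height {
        for x in 0..<width {
            for c in 0..<channels {
                // Add the bias back, then clamp into the displayable 0...255 range.
                let value = ptr[c * cStride + y * hStride + x * wStride] + bias
                // BGR -> RGB: channel 0 (blue) goes to slot 2, channel 2 (red) to slot 0.
                rgba[(y * width + x) * 4 + (2 - c)] = UInt8(max(0, min(255, value)))
            }
        }
    }
    return rgba
}
```

The resulting RGBA byte array can then be wrapped in a `CGImage` for display; CoreMLHelpers provides utilities for exactly this kind of conversion.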
I am using the Vision framework, but the program crashes as soon as I access the `pixelBuffer` property of the result observation. This is my code:

`fns` is a .mlmodel converted with this project and dragged into my Xcode project.

Interestingly, the models offered here do not work in Xcode 9.1: it complains about a missing header file (although the file is there), so I was not able to test with .mlmodel files from alternative sources.
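For reference, the crash is reported at the `pixelBuffer` access inside a `VNCoreMLRequest` completion handler. A minimal sketch of that usage pattern, assuming a generated model class named `fns` as in the issue (the input image and the handler body are placeholders, not the reporter's actual code):

```swift
import CoreML
import Vision

// Minimal sketch of the Vision + Core ML call path described above.
// `fns` is the model class name from the issue; everything else is illustrative.
func runStyleTransfer(on cgImage: CGImage) throws {
    let visionModel = try VNCoreMLModel(for: fns().model)
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let observation = request.results?.first as? VNPixelBufferObservation else {
            return
        }
        // The reported EXC_BAD_ACCESS occurs when reading this property:
        let buffer = observation.pixelBuffer
        print(CVPixelBufferGetWidth(buffer), CVPixelBufferGetHeight(buffer))
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}
```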