# CoreMLExample

In this example, we use AVFoundation to continuously capture image data from the back camera, and a pre-trained VGG16 Core ML model to detect the dominant objects present in each frame.
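The capture-and-classify loop described above can be sketched roughly as follows. This is a hedged illustration rather than the repository's actual code: the `CameraClassifier` class and queue label are invented, it assumes Xcode has generated a `VGG16` class from the downloaded `VGG16.mlmodel`, and it routes frames through the Vision framework, whereas the project may call the Core ML model directly.

```swift
import AVFoundation
import CoreML
import Vision

// Hypothetical sketch: feed back-camera frames into a VGG16 Core ML model.
final class CameraClassifier: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    // Assumes Xcode generated a `VGG16` class from VGG16.mlmodel in the app bundle.
    private lazy var visionModel: VNCoreMLModel? = try? VNCoreMLModel(for: VGG16().model)

    func start() {
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        // Deliver every captured frame to this object on a background queue.
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        guard session.canAddOutput(output) else { return }
        session.addOutput(output)
        session.startRunning()
    }

    // Called once per camera frame; runs the model and logs the top label.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let model = visionModel,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let request = VNCoreMLRequest(model: model) { request, _ in
            guard let top = (request.results as? [VNClassificationObservation])?.first
                else { return }
            print("\(top.identifier): \(top.confidence)")
        }
        // Vision handles scaling the frame to the model's expected input size.
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }
}
```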

## Setup

To run this project, you need a pre-trained VGG16 model (it isn't included in the repository because the file is larger than 100 MB). You can download it by running setup.sh in the root folder, which fetches the pre-trained model from Apple's website.

```sh
git clone https://github.com/alaphao/CoreMLExample.git
cd CoreMLExample
./setup.sh
```

If you prefer, you can download the model here and move it to the CoreMLExample folder.
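The download step can be sketched as a one-line script like the following. This is an assumption about what setup.sh does, not its actual contents, and the model URL shown is Apple's published Core ML model location at the time, which may have changed; check setup.sh in the repository for the real one.

```shell
#!/bin/sh
# Hypothetical sketch of setup.sh: fetch the pre-trained VGG16 model
# into the CoreMLExample folder, skipping the download if it already exists.
MODEL_URL="https://docs-assets.developer.apple.com/coreml/models/VGG16.mlmodel"
[ -f CoreMLExample/VGG16.mlmodel ] || curl -L -o CoreMLExample/VGG16.mlmodel "$MODEL_URL"
```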

## Requirements

- Xcode 9 beta
- Swift 4
- iOS 11

## Useful Links
