VisionCam

👁📸 Easily add computer vision to your SwiftUI app

VisionCam simplifies building SwiftUI camera apps that use computer vision. It handles most of the boilerplate AVCaptureSession setup and input/output connections, as well as the UIKit -> SwiftUI integration via UIViewControllerRepresentable.
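
For reference, doing this by hand typically means writing something like the sketch below and then feeding the video output's frames into Vision yourself. This is hypothetical boilerplate for illustration, not VisionCam's actual implementation:

    import SwiftUI
    import UIKit
    import AVFoundation

    // An AVCaptureSession-backed view controller, bridged into SwiftUI by hand
    final class ManualCameraController: UIViewController {
        private let session = AVCaptureSession()

        override func viewDidLoad() {
            super.viewDidLoad()

            // Wire up the camera input and a video output for Vision to consume
            if let device = AVCaptureDevice.default(for: .video),
               let input = try? AVCaptureDeviceInput(device: device),
               session.canAddInput(input) {
                session.addInput(input)
            }
            let output = AVCaptureVideoDataOutput()
            if session.canAddOutput(output) {
                session.addOutput(output)
            }

            // Show the live camera preview
            let preview = AVCaptureVideoPreviewLayer(session: session)
            preview.frame = view.bounds
            view.layer.addSublayer(preview)

            DispatchQueue.global(qos: .userInitiated).async {
                self.session.startRunning()
            }
        }
    }

    // The UIKit -> SwiftUI bridge
    struct ManualCameraView: UIViewControllerRepresentable {
        func makeUIViewController(context: Context) -> ManualCameraController {
            ManualCameraController()
        }

        func updateUIViewController(_ controller: ManualCameraController, context: Context) {}
    }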

Note: Alpha version

Usage

Face Detection and Tracking

To easily detect and track all faces within the camera feed, use the FaceCam view.

You can use it like any other SwiftUI view (e.g. by setting it as the root view of a UIHostingController in your SceneDelegate).

FaceCam takes a ViewBuilder closure and passes the detected face observations to it as a parameter.

    // In the scene(_:willConnectTo:options:) method of your SceneDelegate class,
    // add the FaceCam view to the root UIHostingController
    
    //...
    
    let cam = FaceCam { observations in
        if let firstObservation = observations.first {
            FacePathView(for: firstObservation)
        }
    }

    let window = UIWindow(windowScene: windowScene)
    window.rootViewController = UIHostingController(rootView: cam)
    window.makeKeyAndVisible()
    
    //...

The above uses the provided FacePathView and displays it over the first VNFaceObservation returned by the system.
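
If you want to overlay a FacePathView on every detected face rather than just the first, you can iterate over the observations. A minimal sketch in the same setting as the example above, assuming the closure receives an array of VNFaceObservation (each observation's uuid makes a convenient ForEach id):

    let cam = FaceCam { observations in
        ForEach(observations, id: \.uuid) { observation in
            // Draw a FacePathView over every face the system reports
            FacePathView(for: observation)
        }
    }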

You can also create your own custom views for face tracking using the provided helper extensions on VNFaceObservation and Path.

    //...

    let cam = FaceCam { observations in
        GeometryReader { geo in
            if let obs = observations.first,
               let rect = obs.boxRect(size: geo.size) {
                Rectangle()
                    .path(in: rect)
                    .faceTransform(newY: geo.size.height)
                    .stroke(Color.red)
            }
        }
    }
    
    //...

In the above example, once we unwrap our first observation, the .boxRect(size:) method (provided by VisionCam) projects the observation's bounding box from the normalized coordinate space into the image coordinate space and returns the proper face CGRect.

Note: we use GeometryReader to get the parent view's size, which is needed for the coordinate space conversion.

The call to faceTransform(newY:) is also important: it translates and scales the rect to the proper size and position in the preview's coordinate space.
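
For intuition, these helpers are needed because Vision reports bounding boxes in a normalized coordinate space with a bottom-left origin, while SwiftUI draws in absolute points with a top-left origin. The following is a rough, hypothetical sketch of what such helpers might look like (the sketch-prefixed names are placeholders, and this is not VisionCam's actual implementation):

    import SwiftUI
    import Vision

    extension VNFaceObservation {
        // Hypothetical: scale the normalized bounding box up to the view's size
        func sketchBoxRect(size: CGSize) -> CGRect {
            VNImageRectForNormalizedRect(boundingBox, Int(size.width), Int(size.height))
        }
    }

    extension Path {
        // Hypothetical: flip the path vertically so Vision's bottom-left origin
        // lines up with SwiftUI's top-left origin (newY is the view's height)
        func sketchFaceTransform(newY: CGFloat) -> Path {
            applying(
                CGAffineTransform(scaleX: 1, y: -1)
                    .translatedBy(x: 0, y: -newY)
            )
        }
    }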

Installation

Swift Package Manager (SPM)

The Swift Package Manager can be used to install VisionCam.

Follow the instructions for adding package dependencies provided in the Apple documentation.

Alternatively, if you already have a Package.swift set up, you can add VisionCam to your dependencies array:

    //...

    dependencies: [
        .package(
            url: "https://github.com/stevewight/VisionCam.git"
            from: "0.0.1"
        )
    ]

    //...
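
Depending on your project layout, you may also need to list the package in the dependencies of the target that uses it. A minimal sketch, assuming your target is named "MyApp" and the product name is "VisionCam":

    //...

    targets: [
        .target(
            name: "MyApp",
            dependencies: ["VisionCam"]
        )
    ]

    //...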

Update Info.plist

In the app's Info.plist, make sure to add the NSCameraUsageDescription key (Privacy - Camera Usage Description), or you will get an error when VisionCam attempts to access the device's camera.
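
For example, viewed as source, the Info.plist entry looks like this (the description string below is just a placeholder; use wording that fits your app):

    <key>NSCameraUsageDescription</key>
    <string>This app uses the camera to detect and track faces.</string>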

Road Map

  • Face detection and tracking
  • Text detection and tracking
  • Body and hand pose detection and tracking
  • Custom model detection and tracking (with CoreML)

License

VisionCam is released under the MIT license. See LICENSE for details.
