May I check whether the object detection code in D2Go can run under the SwiftUI framework? I modified the existing D2Go project by removing the delegate and storyboard files and creating new ones for SwiftUI. The backend code in the Inference and Utils folders remains unchanged. However, when I load a picture and run model inference, the model does not give me correct outputs.
Okay, it seems that I had to define a class-level variable for the pixel buffer before handing it over to the C++ side. I think the variable gets deallocated off the stack when the inference code is called, leading to memory problems when torch::from_blob() is called.
import SwiftUI
import UIKit

struct ContentView: View {
    var inferencer = ObjectDetector()

    // The pixel buffer has to be declared as a property of the view (not as a
    // local inside runInference()), otherwise it does not work properly: the
    // buffer's memory must stay valid while the C++ side reads it.
    @State var pixelBuffer: [Float32] = []

    private func runInference() {
        // Load the test image and resize it to the model's expected input size.
        let image = UIImage(named: "test1.png")!
        let resizedImage = image.resized(
            to: CGSize(width: CGFloat(PrePostProcessor.inputWidth),
                       height: CGFloat(PrePostProcessor.inputHeight)))
        self.pixelBuffer = resizedImage.normalized()!

        // Run inference off the main thread.
        DispatchQueue.global().async {
            guard let outputs = self.inferencer.module.detect(image: &self.pixelBuffer) else {
                return
            }
            print(outputs)
        }
    }

    var body: some View {
        Button(action: {
            runInference()
        }) {
            Text("Submit Drawing").bold()
        }
    }
}
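For context, torch::from_blob() wraps the memory it is handed without copying it or taking ownership, so the Float32 buffer passed in from Swift has to stay alive until the C++ side is finished with the resulting tensor. Below is a minimal sketch of that behaviour (illustrative only, not the actual D2Go bridging code; the function name and tensor shape are assumptions):

#include <torch/script.h>

// Illustrative helper, not the actual D2Go bridge: torch::from_blob() creates
// a tensor that aliases `pixelBuffer` directly. No copy is made and the tensor
// does not own the memory, so the caller must keep the buffer valid until any
// use of the tensor has finished.
torch::Tensor tensorFromPixelBuffer(float* pixelBuffer, int64_t height, int64_t width) {
    return torch::from_blob(pixelBuffer, {3, height, width}, torch::kFloat);
}

That matches the symptom above: if the buffer's storage is released before the C++ side has finished with it, the tensor ends up reading invalid data.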