Enabling Inference using OpenVINO and GPU #212
Replies: 4 comments 3 replies
-
Hi, to update this discussion: I took some time to read the FastPathology paper, and it answered my curiosity! FAST can run specifically on the OpenVINO GPU device. Since the paper has benchmark data, I also tried to find the benchmark script the authors used, so that I can reproduce it or learn how to enable it, but I could not find it (I might not have looked deep enough). I am tagging @smistad and @andreped again, in case you two missed my discussion.
-
Hi. By default, FAST's OpenVINO engine automatically selects which device to use; I'm not sure what it uses to determine which device is best. To see which device it picks, you can add a reporter line above your `segmentation = fast.SegmentationNetwork.create` line, and you should then see output showing the selected device.
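A minimal sketch of such a reporter line, assuming pyFAST exposes FAST's C++ `Reporter` API (treat the exact call and the model path as assumptions and check the FAST documentation):

```python
import fast

# Assumption: pyFAST mirrors FAST's C++ Reporter API. This prints FAST's INFO
# messages, including which device the OpenVINO engine selected, to stdout.
fast.Reporter.setGlobalReportMethod(fast.Reporter.COUT)

# Placeholder model path for illustration
segmentation = fast.SegmentationNetwork.create("model.onnx")
```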
To my surprise, I noticed that OpenVINO now automatically selects the CPU for this specific model, and not the GPU. Maybe OpenVINO believes this model will run faster on the CPU; I'm not sure. Unfortunately, I don't think there is a way of selecting which device to use from Python right now. This is something we should add, though.
-
@andreped thank you for sharing the benchmark script, now I can see how to implement it :) @smistad thank you for this; yeah, I checked on my side as well and it runs on the CPU. So it is confirmed it can be implemented in C++. I will try to migrate the Python pipeline to C++ and will share it here for anyone's reference. Bear with me, my C++ is rusty haha :D
-
Here is the C++ implementation (I also added the reporter line). Unfortunately, it is still running on the CPU; the reporter log confirms it.
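A pipeline along these lines, based on FAST's documented C++ examples; class names and the `create()`/`connect()` pattern follow those examples, but the include paths and exact parameters are assumptions and may need adjusting for your FAST version:

```cpp
// Sketch only: verify include paths and signatures against your FAST install.
#include <FAST/Reporter.hpp>
#include <FAST/Importers/WholeSlideImageImporter.hpp>
#include <FAST/Algorithms/TissueSegmentation/TissueSegmentation.hpp>
#include <FAST/Algorithms/ImagePatch/PatchGenerator.hpp>
#include <FAST/Algorithms/ImagePatch/PatchStitcher.hpp>
#include <FAST/Algorithms/NeuralNetwork/SegmentationNetwork.hpp>
#include <FAST/Visualization/SegmentationRenderer/SegmentationRenderer.hpp>
#include <FAST/Visualization/SimpleWindow2D.hpp>

using namespace fast;

int main() {
    // The reporter line: print FAST's INFO messages to stdout so the
    // OpenVINO device selection shows up in the terminal
    Reporter::setGlobalReportMethod(Reporter::COUT);

    // Placeholder input files; substitute your own WSI and model
    auto importer = WholeSlideImageImporter::create("WSI/A05.svs");

    // Coarse tissue mask so patches are only generated over tissue
    auto tissue = TissueSegmentation::create()->connect(importer);

    // Patch size is illustrative
    auto generator = PatchGenerator::create(512, 512)
            ->connect(importer)
            ->connect(1, tissue);

    // No inference device specified: FAST's OpenVINO engine picks one itself
    auto segmentation = SegmentationNetwork::create("model.onnx")
            ->connect(generator);

    // Stitch patch-wise predictions back into a full-slide segmentation
    auto stitcher = PatchStitcher::create()->connect(segmentation);

    auto renderer = SegmentationRenderer::create()->connect(stitcher);
    SimpleWindow2D::create()->connect(renderer)->run();
    return 0;
}
```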
Double-confirmed: the CPU and GPU utilization are still the same as previously shared. Can someone reproduce this and see whether it behaves the same? I tried with another model as well. Thank you, guys.
-
Hi, I have a question: is it possible to use OpenVINO as the inference engine and run the inference purely on the GPU/iGPU?
Right now I have managed to run the segmentation inference example with this script (without specifying any inference device):
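(For reference, a pipeline along these lines, reconstructed from FAST's published Python examples; the file paths and parameters are placeholders, not the exact script:)

```python
import fast

# Placeholder files; substitute your own whole-slide image and ONNX model
importer = fast.WholeSlideImageImporter.create("WSI/A05.svs")

# Coarse tissue mask so patches are only generated over tissue
tissue = fast.TissueSegmentation.create().connect(importer)

# 512x512 patches at 10x magnification (parameters are illustrative)
generator = fast.PatchGenerator.create(512, 512, magnification=10) \
    .connect(importer) \
    .connect(1, tissue)

# No inference device specified: FAST's OpenVINO engine picks one itself
segmentation = fast.SegmentationNetwork.create("model.onnx").connect(generator)

# Stitch patch-wise predictions back into a full-slide segmentation and show it
stitcher = fast.PatchStitcher.create().connect(segmentation)
renderer = fast.SegmentationRenderer.create().connect(stitcher)
fast.SimpleWindow2D.create().connect(renderer).run()
```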
And here is the snapshot of the CPU/GPU utilization; I don't know whether this already counts as utilizing the GPU or not.
I think this can be optimized by running the inference purely on the GPU, since OpenVINO supports inference on multiple devices (CPU/GPU/NPU). To add: I am testing on an Intel Core Ultra 7 155H, which has an iGPU and an NPU, and I wanted to test the inference performance on each separately.
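As a side note, independent of FAST, the standalone OpenVINO runtime can report which devices it detects on the machine; a quick check with OpenVINO's own Python API:

```python
import openvino as ov

core = ov.Core()
# e.g. ['CPU', 'GPU', 'NPU'] on an Intel Core Ultra with an iGPU and NPU
print(core.available_devices)

# Full device names, e.g. the iGPU's marketing name
for device in core.available_devices:
    print(device, "->", core.get_property(device, "FULL_DEVICE_NAME"))
```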
I did try to add `.setDevice('GPU')` on the `SegmentationNetwork` (a blind try, sorry for the silly approach), and I got this error:

```
TypeError: ProcessObject.setDevice() missing 1 required positional argument: 'device'
```

This is as far as I can get. Now I know it needs two arguments, `deviceNumber` and `device`, but I don't know the right values to pass for these arguments. I hope this approach is close to the solution. Can anyone lead me to the right path/answer? @smistad @andreped. Thank you
Thank you too for this amazing work!