High inference time using r1.0 and master #28
2. This is my config.yml for master:

```yml
# Inference Config
VIDEO_INPUT: 0            # Input Must be OpenCV readable

# Testing
IMAGE_PATH: 'test_images' # path for test_*.py test_images

# Object_Detection
WIDTH: 600                # OpenCV only supports 4:3 formats others will be converted
# speed hack
SPLIT_MODEL: True         # Splits Model into a GPU and CPU session (currently only works for ssd_mobilenets)
# Tracking
USE_TRACKER: False        # Use a Tracker (currently only works properly WITHOUT split_model)
# Model
OD_MODEL_NAME: 'ssd_mobilenet_v11_coco'

# DeepLab
ALPHA: 0.3                # mask overlay factor (also for mask_rcnn)
# Model
DL_MODEL_NAME: 'deeplabv3_mnv2_pascal_train_aug_2018_01_29'
```
Thanks again.
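For reference, a config like the one above can be read with PyYAML. This is only an illustrative sketch with an assumed file path and is not necessarily how the repository itself loads the file:

```python
# Illustrative sketch only: load a config.yml like the one above with PyYAML.
# The path and key names mirror the config shown here, not necessarily the
# repository's actual loading code.
import yaml

with open("config.yml", "r") as f:
    cfg = yaml.safe_load(f)

print(cfg["OD_MODEL_NAME"])  # e.g. 'ssd_mobilenet_v11_coco'
print(cfg["SPLIT_MODEL"])    # e.g. True
```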
I ran the script test_objectdetection.py, and what I observed is that the GPU is used while loading the model, but during detection GPU usage stays at 0%.
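To check whether TensorFlow actually sees the GPU and places ops on it (rather than only allocating memory at load time), something along these lines could help. The toy ops below are placeholders, not the real detection graph:

```python
# Sketch (TF 1.x): list visible devices and log op placement.
# The toy ops are stand-ins; the real graph comes from the frozen detection
# model used by test_objectdetection.py.
import tensorflow as tf
from tensorflow.python.client import device_lib

# Should list a GPU device on the TX2 if TensorFlow was built with CUDA support
print(device_lib.list_local_devices())

config = tf.ConfigProto(log_device_placement=True)  # print the device chosen for every op
with tf.Session(config=config) as sess:
    a = tf.constant([1.0, 2.0], name="a")
    b = tf.constant([3.0, 4.0], name="b")
    print(sess.run(a + b))  # the placement log shows whether the add ran on /device:GPU:0
```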
Hi @gustavz
The model ran successfully on the Jetson TX2, but the inference time was quite slow. I tried both the r1.0 branch and the master branch; the inference times were:
For master:
18.15, 2.39, 2.62, 2.53 seconds
While for r1.0:
22.34, 0.27, 0.17, 0.13 seconds
for 4 images respectively.
Visualization was switched off.
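For what it's worth, the first of those numbers most likely includes one-time costs (graph loading, CUDA/cuDNN initialization), so steady-state per-image time is usually measured after a warm-up run. A rough sketch of that pattern, using a dummy matmul graph rather than the actual detection graph:

```python
# Sketch (TF 1.x): measure steady-state inference time after an explicit warm-up run.
# Uses a toy matmul graph; substitute the detection graph and feed dict from
# test_objectdetection.py in practice.
import time
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[1, 1024])
w = tf.Variable(tf.random_normal([1024, 1024]))
y = tf.matmul(x, w)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(1, 1024).astype(np.float32)

    sess.run(y, feed_dict={x: batch})  # warm-up: one-time CUDA/cuDNN init happens here

    for i in range(4):
        start = time.time()
        sess.run(y, feed_dict={x: batch})
        print("run %d: %.4f s" % (i, time.time() - start))
```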
Is there anything I'm missing that makes it this slow?
Thanks