Thanks for your interest.
Note that our FADNet only reaches ~15 fps even on a desktop/server GPU. On mobile GPUs such as the Jetson devices, the unoptimized code can be very slow; tools like TensorRT might help.
You could also try our latest FADNet++, which provides variants tailored to different computing devices.
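As a starting point for TensorRT, the model first has to be exported to ONNX. A minimal sketch (the `networks.FADNet` import path, the checkpoint key, and the 6-channel concatenated stereo input are assumptions; check them against the repo's actual loading code):

```python
import torch
from networks.FADNet import FADNet  # hypothetical import path

model = FADNet()  # hypothetical constructor; may need config arguments
checkpoint = torch.load("fadnet.pth", map_location="cpu")
model.load_state_dict(checkpoint["state_dict"])  # key name may differ
model.eval()

# Assumed input convention: left/right images concatenated on the channel
# axis (6 channels total) at 512x256 -- verify against the repo's dataloader.
dummy = torch.randn(1, 6, 256, 512)

torch.onnx.export(
    model, dummy, "fadnet.onnx",
    input_names=["stereo_pair"], output_names=["disparity"],
    opset_version=13,
)
# Then on the Jetson, e.g.:  trtexec --onnx=fadnet.onnx --fp16
```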
We actually tried FADNet++ as well, but the posted pre-trained model was still the original FADNet. Any chance you could post a FADNet++ pre-trained model? We would really love to try it out.
Settings:
Device: NVIDIA Jetson Xavier NX
GPU: NVIDIA Volta
Power mode: 20W 6 Core
PyTorch 1.13
CUDA: 11.4
——————
We were able to run FADNet offline on a sample dataset, but it was extremely slow (generating only 4~5 frames per second) at an input resolution of 512x256.
We'd like to know what the bottleneck is that makes it run this slowly; FADNet claims to be fast and accurate, so we must have done something wrong…
I can provide the code if needed, but I first want a general idea of how fast FADNet can ideally run.
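For reference, this is roughly how we time it; a minimal sketch (the `networks.FADNet` import path and the 6-channel input layout are assumptions, and real weights should be loaded in practice). The `torch.cuda.synchronize()` calls matter because CUDA kernels launch asynchronously, so timing without them under-counts the actual work:

```python
import time
import torch
from networks.FADNet import FADNet  # hypothetical import path

device = torch.device("cuda")
net = FADNet().to(device).eval()  # hypothetical: load trained weights here
x = torch.randn(1, 6, 256, 512, device=device)  # assumed stereo pair layout

with torch.no_grad():
    for _ in range(10):        # warm-up: first calls pay allocation/autotune cost
        net(x)
    torch.cuda.synchronize()   # wait for warm-up kernels before starting the clock

    n = 50
    start = time.perf_counter()
    for _ in range(n):
        net(x)
    torch.cuda.synchronize()   # wait for all timed kernels before stopping the clock

elapsed = time.perf_counter() - start
print(f"{n / elapsed:.1f} fps at 512x256")
```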
—————