Inference time on PyTorch version #129
Hi, I'm having trouble configuring the Python environment for the Torch version. Could you please explain how to set it up?
You can refer to the RAFT environment requirements for configuration.
I'm sorry, what is "the RAFT environment"? I followed the steps in Torch/readme.md, but it seems the CUDA 11.2 on my RTX 3060 doesn't match the CUDA version the code needs.
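A quick way to diagnose this kind of mismatch is to compare the CUDA version your PyTorch build was compiled against with the GPU's compute capability. This is a generic sketch, not specific to this repo: an RTX 3060 is an Ampere card (compute capability 8.6), and PyTorch wheels built against CUDA 10.x cannot run kernels on it, so a CUDA 11.x build is needed.

```python
import torch

# Compatibility check: an RTX 3060 (Ampere, compute capability 8.6)
# requires a PyTorch build compiled with CUDA 11.x; builds against
# CUDA 10.x lack the sm_86 kernels it needs.
print("torch version:", torch.__version__)
print("built with CUDA:", torch.version.cuda)  # None for CPU-only builds
if torch.cuda.is_available():
    print("device capability:", torch.cuda.get_device_capability(0))
```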
RAFT is a well-known optical flow estimation algorithm, which is also open source.
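For reference, a RAFT-style conda environment can be sketched as follows. The package versions are assumptions based on what RAFT's README listed at the time of writing and may have changed; note that the CUDA toolkit version is bumped here because of the Ampere GPU mentioned above.

```shell
# Sketch of a RAFT-style conda environment; exact versions are assumptions
# taken from RAFT's README and may have changed.
conda create --name raft python=3.8
conda activate raft
# RAFT's README pins an older cudatoolkit (10.x), but an RTX 3060 (Ampere)
# needs a PyTorch build with CUDA 11.x, e.g.:
conda install pytorch torchvision cudatoolkit=11.3 -c pytorch
pip install matplotlib scipy opencv-python
```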
Thanks! You saved my day!
Hi, sorry to bother you again. I have successfully configured the RAFT environment. The problem is that RAFT is based on Python 3, and I still can't run pwcnet-torch in this environment. By the way, here is my email: hogfeg@zju.edu.cn; if it's inconvenient to explain here, we can communicate via email. Thanks for your generous help again!
I used the PyTorch version of PWC-Net to test the inference time. The image resolution is 1024 x 512, the graphics card is an RTX 3060, and the inference time is 0.08 s. There is a big gap between this and the inference time reported in the paper. Is this because the PyTorch version is slower than the Caffe version?
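One thing worth checking before comparing against the paper is how the timing is measured: CUDA calls are asynchronous, so timing with `time.time()` around `model(x)` without synchronization can be misleading, and the first few iterations include kernel/cuDNN setup cost. A minimal sketch of a fairer GPU benchmark, using a stand-in `Conv2d` model (the actual PWC-Net model is assumed to be loaded separately):

```python
import time

import torch

# Stand-in for the real model; in practice load PWC-Net here instead.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1).eval()
x = torch.randn(1, 3, 512, 1024)  # 1024 x 512 input, as in the question

with torch.no_grad():
    if torch.cuda.is_available():
        model, x = model.cuda(), x.cuda()
        for _ in range(10):           # warm-up: exclude kernel/cuDNN setup cost
            model(x)
        torch.cuda.synchronize()      # CUDA is async; sync before timing
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        for _ in range(100):
            model(x)
        end.record()
        torch.cuda.synchronize()      # wait for all queued kernels to finish
        print(f"mean inference time: {start.elapsed_time(end) / 100:.3f} ms")
    else:
        t0 = time.perf_counter()
        for _ in range(100):
            model(x)
        print(f"mean inference time: {(time.perf_counter() - t0) * 10:.3f} ms")
```

Even with correct timing, some gap versus the paper is plausible: framework overhead differs between Caffe and PyTorch, and the paper's numbers were measured on different hardware.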