This code is able to reproduce similar results to those in the original paper #22
Comments
Hi, Zhenyu. Thank you very much for sharing the results and your feedback! It is nice to hear that the original results could be reproduced with our PyTorch implementation.
@JesseZhang92 I faced the same problems you mentioned in your last issue about performance. Could you tell me what you did to reproduce the same results as the original paper?
Hi @AnwarLabib, I just followed the settings in the original paper. I think the problem may be caused by the evaluation code. Last time I didn't follow the evaluation method in the original project (https://github.com/mrharicot/monodepth/tree/master/utils) and the performance was always worse. This time I used the correct evaluation metrics and the results are satisfactory. You may want to check your evaluation code to see whether it exactly matches the original one.
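The metrics computed by the original monodepth evaluation utilities are the standard depth metrics from Eigen's evaluation protocol. A minimal NumPy sketch of what that evaluation computes (the function name and exact thresholds are the conventional ones; check your copy of the original script for the details that matter, such as depth capping and cropping):

```python
import numpy as np

def compute_errors(gt, pred):
    """Standard depth metrics: abs_rel, sq_rel, rmse, rmse_log, delta thresholds."""
    # Accuracy under thresholds delta < 1.25, 1.25^2, 1.25^3
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()
    a2 = (thresh < 1.25 ** 2).mean()
    a3 = (thresh < 1.25 ** 3).mean()

    abs_rel = np.mean(np.abs(gt - pred) / gt)
    sq_rel = np.mean(((gt - pred) ** 2) / gt)
    rmse = np.sqrt(((gt - pred) ** 2).mean())
    rmse_log = np.sqrt(((np.log(gt) - np.log(pred)) ** 2).mean())
    return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3
```

Note that the original evaluation also clips predicted and ground-truth depths to a valid range (e.g. up to 50 m or 80 m) before computing these numbers, which changes the results noticeably; a mismatch there is a common source of "worse" metrics.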
How long did it take you to train for 50 epochs? I found that it was very slow to train.
Hi @AI-slam, do you mean it is slow to run one epoch, or that the speed of convergence is slow? In my environment one epoch usually doesn't take too long: the running time is between 1000 and 2500 seconds. The network usually performs well after 20 epochs, so it takes nearly one day to finish training. For the learning rate, 1e-4 is a good choice if you use Adam, as too small a learning rate will lead to slow convergence.
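As a quick sanity check on the numbers above, the 1000-2500 s/epoch range over 20 epochs works out to roughly 6-14 hours of wall-clock time, consistent with "nearly one day":

```python
# Rough wall-clock estimate from the per-epoch times reported in this thread.
def training_hours(epochs, secs_per_epoch):
    return epochs * secs_per_epoch / 3600.0

low = training_hours(20, 1000)   # fast end of the reported range
high = training_hours(20, 2500)  # slow end of the reported range
print(f"{low:.1f}-{high:.1f} hours")  # prints "5.6-13.9 hours"
```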
Thank you so much. That was my problem.
Thanks for your reply, @JesseZhang92; it is slow to run one epoch on my machine. Which experiment were you able to reproduce with almost the same results as Godard's paper: the KITTI split or the Eigen split? Please give more specific instructions.
Hi @AI-slam, I used the code to reproduce almost the same results as Godard's paper on the Eigen split. If a full training procedure is too slow, you can try 10 epochs; the results are also satisfactory.
@JesseZhang92 What if I resize the images to 128 x 416? Can we reproduce the same performance?
Hi @carpdm, I haven't tried a different input size. If you multiply the disparity by the correct width, I think you may reproduce similar results.
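The reason the width matters: in this family of models the network predicts disparity as a fraction of the image width, so converting it to metric depth requires multiplying by the width the image was resized to. A hedged NumPy sketch of that conversion (the KITTI constants here, a 0.54 m stereo baseline and a focal length scaled from a 1242 px full-resolution image, follow the conventions of the original evaluation script; treat the exact values as assumptions to be checked against your calibration files):

```python
import numpy as np

def disp_to_depth(disp_frac, width, baseline=0.54, focal_full=721.0, full_width=1242.0):
    """Convert network disparity (fraction of image width) to metric depth."""
    # Disparity in pixels at the resized resolution.
    disp_px = disp_frac * width
    # Focal length scales linearly with the horizontal resize factor.
    focal = focal_full * width / full_width
    # Standard stereo relation: depth = baseline * focal / disparity.
    return baseline * focal / np.maximum(disp_px, 1e-6)
```

A useful property of scaling both the disparity and the focal length by the same width is that the resulting depth is independent of the resize resolution, which is exactly why multiplying by the "right" width recovers similar results at a different input size.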
Well, I cannot get the performance described in the paper (only about ~0.16 on abs_rel). I think small images reduce the information fed to the network. What do you think? @JesseZhang92
Hi @JesseZhang92, could you please tell me whether you set the model to eval mode (model.eval()) or train mode when you calculate abs_rel? I think my model produces worse results when I use model.eval(), because batch normalization behaves differently in train and in eval mode.
Hi @AnwarLabib, I use model.train() while training and model.eval() while testing. When you use model.eval(), you actually use the running_mean and running_var stored in the buffers. It is the standard setting for training and testing most networks. For abs_rel, my result is 0.1415 (if my memory is right) within 50 m on the Eigen split. Maybe you could compare all of the metrics to see whether on some metrics this PyTorch version obtains better results than the original paper. As differences between PyTorch and TensorFlow do exist, getting exactly the same numbers may not be easy.
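What model.eval() changes for batch norm can be illustrated with a toy NumPy sketch: in train mode the layer normalizes with the current batch's statistics while updating its running buffers; in eval mode it reads the buffers instead. (This is an illustrative simplification, not the library internals; the 0.1 momentum mirrors PyTorch's default, and PyTorch additionally uses unbiased variance for the running estimate.)

```python
import numpy as np

class TinyBatchNorm:
    """Toy 1-D batch norm: batch stats in train mode, running stats in eval mode."""
    def __init__(self, momentum=0.1, eps=1e-5):
        self.momentum, self.eps = momentum, eps
        self.running_mean, self.running_var = 0.0, 1.0
        self.training = True  # analogue of model.train() / model.eval()

    def __call__(self, x):
        if self.training:
            mean, var = x.mean(), x.var()
            # Update the buffers that eval mode will later read.
            self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean
            self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        else:
            mean, var = self.running_mean, self.running_var
        return (x - mean) / np.sqrt(var + self.eps)
```

If the running buffers haven't converged to statistics representative of the test data (for example, after too little training), eval-mode outputs can look much worse than train-mode outputs, which is one plausible explanation for the discrepancy described above.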
@carpdm According to your results, I agree with your opinion. |
@JesseZhang92 Hi, thank you very much for sharing the inspiring conclusion that this code can reproduce results similar to those in the original paper. I want to know whether you have tried the stereo setting and got similar results (Abs Rel: 0.068 on the KITTI 2015 stereo 200 training set). If so, how did you set the hyperparameters such as the learning rate? Looking forward to your reply.
Same here! It is also slow to run one epoch on my machine; every epoch takes an average of 12,000 seconds. What did you do?
Hello @everyone, if anyone has tried this model, can you share the inference time it takes, or the FPS you are getting at the default resolution?
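For anyone measuring this, a generic timing harness along these lines gives comparable FPS numbers; `infer` is a placeholder for a zero-argument callable wrapping your model's forward pass on a fixed input (names and defaults here are illustrative, not from the repo):

```python
import time

def measure_fps(infer, n_warmup=10, n_iters=100):
    """Average frames per second for a single-image inference callable."""
    for _ in range(n_warmup):   # warm-up: caches, allocator, GPU init, etc.
        infer()
    start = time.perf_counter()
    for _ in range(n_iters):
        infer()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed
```

One caveat when timing a GPU model: CUDA kernel launches are asynchronous, so call torch.cuda.synchronize() before reading the clock on either side of the loop, or the measurement reflects only launch overhead rather than actual inference time.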
Hi, months ago I opened an issue about reproducing the performance, and I forgot to give answers and feedback. Really sorry for that. Now that the previous issue has been closed, I am opening this one to tell users that this code is able to reproduce almost the same results as Godard's paper. The parameter settings are suitable. Using the evaluation code in https://github.com/mrharicot/monodepth/tree/master/utils makes it possible to evaluate performance on the depth metrics. Thanks again for your impressive work!
Best,
Zhenyu