
Comments (16)

JesseZhang92 commented on May 23, 2024

@JesseZhang92 I faced the same problems you mentioned in your last issue about performance. Can I ask what you did to reproduce the same results as the original paper?

Hi @AnwarLabib, I just followed the settings in the original paper. I think the problem may be caused by the evaluation code. Last time I didn't follow the evaluation method in the original project (https://github.com/mrharicot/monodepth/tree/master/utils) and the results were consistently worse. This time I used the correct evaluation metrics and the results are satisfactory. You may want to check your evaluation code to see whether it exactly matches the original one.
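For reference, here is a minimal sketch of the standard KITTI depth metrics computed by the original monodepth evaluation utilities; the function name and metric set follow the usual convention, but treat this as an illustration rather than the exact script from that repo:

```python
import numpy as np

def compute_errors(gt, pred):
    # gt and pred are 1-D arrays of depths in meters, already masked to
    # valid pixels and capped (e.g. to 50 m for Eigen's split).
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()          # accuracy under threshold 1.25
    a2 = (thresh < 1.25 ** 2).mean()
    a3 = (thresh < 1.25 ** 3).mean()

    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))

    abs_rel = np.mean(np.abs(gt - pred) / gt)    # the metric discussed here
    sq_rel = np.mean(((gt - pred) ** 2) / gt)

    return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3

# Tiny smoke test with synthetic depths:
gt = np.random.uniform(1.0, 50.0, size=10000)
pred = gt * np.random.uniform(0.8, 1.2, size=10000)
print(compute_errors(gt, pred))
```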


JesseZhang92 commented on May 23, 2024

Hi @JesseZhang92, could you please tell me, when you calculate abs_rel, do you set the model to eval mode (model.eval()) or train mode? I think my model produces worse results when I use model.eval(), because batch normalization behaves differently in train and eval mode.

Hi @AnwarLabib, I use model.train() while training and model.eval() while testing. When you call model.eval(), batch normalization uses the running_mean and running_var stored in its buffers. This is the standard setting for training and testing most networks. For abs_rel, my result is 0.1415 (if my memory is right) within 50 m on Eigen's split. You could compare all of the metrics to see whether this PyTorch version beats the original paper on some of them. Since there are inherent differences between PyTorch and TensorFlow, getting exactly the same numbers may not be easy.
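As an aside, here is a minimal, self-contained sketch of the mode switch being described; the two-layer model below is a hypothetical stand-in for the monodepth network:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the depth network; only the mode switch matters here.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8))

model.train()  # BatchNorm normalizes with per-batch statistics and updates its buffers
_ = model(torch.randn(4, 3, 256, 512))

model.eval()   # BatchNorm now uses the stored running_mean / running_var
with torch.no_grad():
    out = model(torch.randn(1, 3, 256, 512))

print(model[1].running_mean.shape)  # the buffers consulted in eval mode
```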


NikolasEnt commented on May 23, 2024

Hi, Zhenyu. Thank you very much for sharing the results and your feedback! It is nice to hear that the original results could be reproduced with our PyTorch implementation.


AnwarLabib commented on May 23, 2024

@JesseZhang92 I faced the same problems you mentioned in your last issue about performance. Can I ask what you did to reproduce the same results as the original paper?


AI-slam commented on May 23, 2024

How long did it take you to train for 50 epochs? I found training to be very slow.
@JesseZhang92 @AnwarLabib


JesseZhang92 commented on May 23, 2024

How long did it take you to train for 50 epochs? I found training to be very slow.
@JesseZhang92 @AnwarLabib

Hi @AI-slam, do you mean that running one epoch is slow, or that convergence is slow? In my environment one epoch doesn't take too long: the running time is between 1000 and 2500 seconds. The network usually performs well after 20 epochs, so the full training takes nearly one day. For the learning rate, 1e-4 is a good choice if you use Adam; too small a learning rate leads to slow convergence.
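For concreteness, a sketch of the setting being discussed; the model, loss, and data loader below are placeholders, and only the Adam learning rate of 1e-4 comes from the thread:

```python
import time
import torch
import torch.nn as nn

model = nn.Conv2d(3, 2, 3, padding=1)  # placeholder for the monodepth network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr suggested above

start = time.time()
for batch in (torch.randn(8, 3, 256, 512) for _ in range(10)):  # stand-in loader
    optimizer.zero_grad()
    loss = model(batch).abs().mean()  # dummy loss in place of the monodepth loss
    loss.backward()
    optimizer.step()
print(f"epoch time: {time.time() - start:.1f} s")  # compare with the 1000-2500 s above
```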


AnwarLabib commented on May 23, 2024

@JesseZhang92 I faced the same problems you mentioned in your last issue about performance. Can I ask what you did to reproduce the same results as the original paper?

Hi @AnwarLabib, I just followed the settings in the original paper. I think the problem may be caused by the evaluation code. Last time I didn't follow the evaluation method in the original project (https://github.com/mrharicot/monodepth/tree/master/utils) and the results were consistently worse. This time I used the correct evaluation metrics and the results are satisfactory. You may want to check your evaluation code to see whether it exactly matches the original one.

Thank you so much. That was my problem.


AI-slam commented on May 23, 2024

Thanks for your reply, @JesseZhang92. It is slow to run one epoch on my machine. Which experiment did you run that reproduces almost the same results as Godard's paper: the KITTI split or the Eigen split? Could you give more specific instructions?


JesseZhang92 commented on May 23, 2024

Hi @AI-slam, I used the code to reproduce almost the same results as Godard's paper on the Eigen split. If a full training run is too slow, you can try 10 epochs; the results are still satisfactory.


carpdm commented on May 23, 2024

@JesseZhang92 What if I resize the images to 128 x 416? Can we reproduce the same performance?


JesseZhang92 commented on May 23, 2024

@JesseZhang92 What if I resize the images to 128 x 416? Can we reproduce the same performance?

Hi @carpdm, I haven't tried different input sizes. If you multiply the disparity by the correct width, I think you may reproduce similar results.
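A sketch of the width scaling being suggested, assuming the network outputs disparity as a fraction of image width; the focal length and the KITTI baseline of roughly 0.54 m below are illustrative values, not settings from this repo:

```python
import numpy as np

def disp_to_depth(disp_normalized, width, focal_px, baseline_m=0.54):
    # disp_normalized: HxW network output, disparity as a fraction of image width
    # width:           width in pixels at the evaluation resolution
    # focal_px:        horizontal focal length at that resolution, in pixels
    disp_px = disp_normalized * width  # disparity in pixels at this size
    return baseline_m * focal_px / np.maximum(disp_px, 1e-6)

# e.g. a 128 x 416 prediction; KITTI's ~720 px focal rescaled from 1242 px width:
disp = np.random.uniform(0.01, 0.3, size=(128, 416)).astype(np.float32)
depth = disp_to_depth(disp, width=416, focal_px=720.0 * 416 / 1242)
print(depth.min(), depth.max())
```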


carpdm commented on May 23, 2024

Well, I cannot get the performance described in the paper (only about 0.16 on abs_rel). I think small images reduce the information fed to the network. What do you think? @JesseZhang92


AnwarLabib commented on May 23, 2024

Hi @JesseZhang92, could you please tell me, when you calculate abs_rel, do you set the model to eval mode (model.eval()) or train mode? I think my model produces worse results when I use model.eval(), because batch normalization behaves differently in train and eval mode.


JesseZhang92 commented on May 23, 2024

Well, I cannot get the performance described in the paper (only about 0.16 on abs_rel). I think small images reduce the information fed to the network. What do you think? @JesseZhang92

@carpdm Judging by your results, I agree with your opinion.


wanghao14 commented on May 23, 2024

@JesseZhang92 Hi, thank you very much for sharing the encouraging conclusion that this code can reproduce results similar to those in the original paper. I would like to know whether you have tried the stereo variant and obtained similar results (Abs Rel: 0.068 on the KITTI 2015 stereo 200 training set). If so, how did you set hyperparameters such as the learning rate? Looking forward to your reply.


lilingge commented on May 23, 2024

Thanks for your reply, @JesseZhang92. It is slow to run one epoch on my machine. Which experiment did you run that reproduces almost the same results as Godard's paper: the KITTI split or the Eigen split? Could you give more specific instructions?

Same! It is also slow to run one epoch on my machine: every epoch takes an average of 12,000 seconds. What did you do?

