Comments (16)
@JesseZhang92 I faced the same problems you mentioned in your last issue about performance. Could you tell me what you did to reproduce the same results as the original paper?
Hi @AnwarLabib, I just followed the settings in the original paper. I think the problem may be caused by the evaluation code. Previously I didn't follow the evaluation method in the original project https://github.com/mrharicot/monodepth/tree/master/utils and the performance was always worse. This time I used the correct evaluation metrics and the performance is satisfactory. You may want to check whether your evaluation code exactly matches the original one.
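For reference, the metrics in question (abs_rel, rmse, and the delta < 1.25 accuracy) are the standard ones from Godard's evaluation utilities. This is a hedged NumPy re-sketch, not the original code; the depth values and the 50 m cap below are illustrative:

```python
import numpy as np

def compute_errors(gt, pred):
    """Standard monocular-depth metrics: abs_rel, rmse, and the
    fraction of pixels with max(gt/pred, pred/gt) < 1.25."""
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()
    abs_rel = np.mean(np.abs(gt - pred) / gt)
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    return abs_rel, rmse, a1

# Example: clip predictions and mask ground truth to 50 m,
# as in the capped evaluation on the Eigen split.
gt = np.array([10.0, 20.0, 40.0])
pred = np.clip(np.array([11.0, 19.0, 60.0]), 1e-3, 50.0)
mask = gt <= 50.0
abs_rel, rmse, a1 = compute_errors(gt[mask], pred[mask])
```

Small deviations here (e.g. clipping vs masking, median scaling) are exactly the kind of mismatch that shifts abs_rel, so comparing against the original script line by line is worthwhile.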
from monodepth-pytorch.
Hi @JesseZhang92, could you please tell me whether you set the model to eval mode (model.eval()) or train mode when you calculate abs_rel? My model produces worse results with model.eval(), I think because batch normalization behaves differently in eval and train mode.
Hi @AnwarLabib, I use model.train() while training and model.eval() while testing. With model.eval(), the network actually uses the running_mean and running_var stored in the buffers; this is the standard setting for training and testing most networks. For abs_rel, my result is 0.1415 (if my memory is right) within 50m on Eigen's split. Maybe you could compare all of the metrics to see whether this PyTorch version obtains better results than the original paper on some of them. Since there are inherent differences between PyTorch and TensorFlow, getting exactly the same numbers may not be easy.
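The train/eval distinction can be illustrated without PyTorch: in train mode, batch norm normalizes with the current batch's statistics, while in eval mode it uses the buffered running statistics. A minimal NumPy sketch (the running-stat buffers here are made-up numbers, not values from any trained model):

```python
import numpy as np

def batchnorm(x, running_mean, running_var, training, eps=1e-5):
    # Train mode: normalize with statistics of the current batch.
    # Eval mode: normalize with the buffered running statistics,
    # which is what model.eval() switches BatchNorm layers to.
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)
    else:
        mean, var = running_mean, running_var
    return (x - mean) / np.sqrt(var + eps)

x = np.array([[1.0], [3.0]])                          # tiny batch, one feature
run_mean, run_var = np.array([0.0]), np.array([1.0])  # hypothetical buffers
train_out = batchnorm(x, run_mean, run_var, training=True)
eval_out = batchnorm(x, run_mean, run_var, training=False)
# The two modes only agree when the buffers happen to match the
# batch statistics, so eval-mode results degrading usually means the
# running stats never converged to the true feature statistics.
```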
Hi, Zhenyu. Thank you very much for sharing the results and your feedback! It is nice to hear that the original results could be reproduced with our PyTorch implementation.
How long did it take you to train for 50 epochs? I found training very slow.
@JesseZhang92 @AnwarLabib
Hi @AI-slam, do you mean that running one epoch is slow, or that convergence is slow? In my environment one epoch doesn't take too long, between 1000 and 2500 seconds. The network usually performs well after 20 epochs, so the full training takes nearly one day. For the learning rate, 1e-4 is a good choice if you use Adam, as too small a learning rate leads to slow convergence.
Thank you so much, @JesseZhang92. The evaluation code was exactly my problem.
Thanks for your reply, @JesseZhang92. It is slow to run one epoch on my machine. Which experiment reproduced almost the same results as Godard's paper, the KITTI split or the Eigen split? Could you give more specific instructions?
Hi @AI-slam, I used the code to reproduce almost the same results as Godard's paper on the Eigen split. If a full training run is too slow, you can try 10 epochs; the results are also satisfactory.
@JesseZhang92 What if I resize the images to 128 x 416? Can we reproduce the same performance?
Hi @carpdm, I haven't tried different input sizes. If you multiply the disparity by the correct width, I think you can reproduce similar results.
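Multiplying by the width matters because the network predicts disparity as a fraction of image width, so the output must be rescaled to pixels at the evaluation resolution before converting to depth. A hedged sketch with assumed values (416 is the width from the question above; the focal length is a placeholder, and 0.54 m is the usual KITTI stereo baseline):

```python
import numpy as np

# Network output: disparity normalized by image width, so the same
# weights can be evaluated at a different resolution after rescaling.
width = 416                                 # assumed input width (128 x 416)
norm_disp = np.array([0.02, 0.05, 0.10])    # example network outputs

disp_px = norm_disp * width                 # disparity in pixels

# Stereo depth from pixel disparity: depth = focal * baseline / disparity.
# focal is a hypothetical focal length in pixels; 0.54 m is the KITTI baseline.
focal, baseline = 720.0, 0.54
depth = focal * baseline / disp_px
```

If the width factor is wrong (e.g. the training width instead of the evaluation width), every depth is scaled by the same constant, which inflates abs_rel even when the disparity maps look fine.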
Well, I cannot get the performance described in the paper (only about ~0.16 on abs_rel). I think small images reduce the information fed to the network. What do you think? @JesseZhang92
@carpdm Given your results, I agree with you.
@JesseZhang92 Hi, thank you very much for sharing the encouraging conclusion that this code can reproduce results similar to those in the original paper. Have you also tried the stereo setting and obtained a similar result (Abs Rel: 0.068 on the KITTI 2015 stereo 200 training set)? If so, how did you set hyperparameters such as the learning rate? Looking forward to your reply.
Same here, it is also slow to run one epoch on my machine; every epoch takes an average of 12,000 seconds. What did you do?