
guangmingw / doplearning

81 stars · 13 forks · 985 KB

Code for T-ITS paper "Unsupervised Learning of Depth, Optical Flow and Pose with Occlusion from 3D Geometry" and for ICRA paper "Unsupervised Learning of Monocular Depth and Ego-Motion Using Multiple Masks".

License: MIT License

Python 100.00%
deep-learning depth-prediction optical-flow self-supervised-learning unsupervised-learning visual-odometry

doplearning's People

Contributors: guangmingw

doplearning's Issues

Modifying depth_evaluation_utils for a custom dataset

I'd like to test the pretrained networks you provide on a custom, 82-frame dataset with no scene changes. I looked at the code in stillbox_eval/depth_evaluation_utils.py, but much remains unclear. For instance:

  • What does the code mean by scene, ref_imgs, and tgt?
  • Should the first frame of my sequence be the tgt, with the rest listed sequentially in ref_imgs?
  • What if I don't have any depth maps to provide? I just want to test the pretrained networks, not train them myself.

In short, could you provide a simple overview of the steps needed to load a custom dataset?
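In case it helps, here is a minimal sketch of the SfMLearner-style windowing that repositories like this one typically use. The tgt/ref_imgs names come from the question above; the exact window layout (middle frame as target, neighbors as references) is an assumption, not something confirmed from depth_evaluation_utils.py:

```python
def make_samples(frames, seq_length=3):
    """Group an ordered frame list into (tgt, ref_imgs) samples.

    Assumes the common SfMLearner convention: the middle frame of each
    sliding window is the target (tgt) and the surrounding frames are
    the references (ref_imgs). Verify against the repo's own loader.
    """
    half = seq_length // 2
    samples = []
    for i in range(half, len(frames) - half):
        tgt = frames[i]
        ref_imgs = frames[i - half:i] + frames[i + 1:i + half + 1]
        samples.append((tgt, ref_imgs))
    return samples
```

For an 82-frame sequence with seq_length=3 this yields 80 samples, each pairing one frame with its two neighbors; no ground-truth depth is needed just to run inference.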

About pretrained models

Hi! @guangmingw, thanks for your code.
I can't download the pretrained models; I guess the URL cannot be accessed from outside your network. Can you give a link that is publicly accessible?

About the scale consistency problem

Hi! Thanks a lot for releasing such useful code. Since this is an unsupervised method, how do you deal with the scale problem in depth estimation? That is, during testing, how do you ensure that the scale of the estimated depth is consistent across every monocular video?
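For reference, unsupervised monocular methods are usually evaluated with per-image median scaling rather than by enforcing a globally consistent scale. A minimal sketch of that standard practice (this is the common evaluation convention, not code taken from this repository):

```python
import numpy as np

def median_scale(pred_depth, gt_depth, mask=None):
    """Align a predicted depth map to ground truth by the ratio of medians.

    This per-image median scaling is the usual evaluation protocol for
    scale-ambiguous monocular methods; it does not make scale consistent
    across frames, it only aligns each frame before computing metrics.
    """
    if mask is None:
        mask = gt_depth > 0  # ignore pixels without valid ground truth
    scale = np.median(gt_depth[mask]) / np.median(pred_depth[mask])
    return pred_depth * scale
```

Achieving scale consistency across a whole video (rather than per-frame alignment) generally needs an extra constraint during training, e.g. a geometry-consistency loss between adjacent depth predictions.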

How to train FlowNet

Hello, after training PoseNet and DispNet with the command in the README, how do I train the FlowNet?

There are 7 losses in train.py, but the command in the README only uses w1 and w3. How should the other losses be used? Is their performance not good enough?

cannot import name 'build_flow_not_intersect' from 'loss_functions_summary'

I ran train.py according to the tutorial, but the following error is reported:

Traceback (most recent call last):
  File "/home/lcj/lcj/SLAM/DOPlearning-master/train.py", line 21, in <module>
    from loss_functions_summary import compute_errors, compute_epe, compute_all_epes, flow_diff, spatial_normalize_max, spatial_normalize_mean, new_static_mask, build_flow_not_intersect
ImportError: cannot import name 'build_flow_not_intersect' from 'loss_functions_summary' (/home/lcj/lcj/SLAM/DOPlearning-master/loss_functions_summary.py)
I checked loss_functions_summary.py; there is no build_flow_not_intersect function defined there.
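A quick way to confirm which names a module actually exports, useful for diagnosing this kind of ImportError (a generic diagnostic, not specific to this repository):

```python
import importlib
import inspect

def exported_functions(module_name):
    """Return the sorted names of all plain functions a module defines.

    Comparing this list against the names in the failing import line
    shows exactly which symbols are missing.
    """
    mod = importlib.import_module(module_name)
    return sorted(n for n, obj in inspect.getmembers(mod, inspect.isfunction))
```

Running exported_functions('loss_functions_summary') from the repository root would show whether build_flow_not_intersect is truly absent or merely defined under a different name.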

output of test_pose is not good

Hello, thanks for your code.
I used your pretrained model with test_pose, but it did not perform well.

(attached image: sequences09_pretrained.png)
I used evo to evaluate the trajectory. The "gt" is sample['poses'] in test_pose.py line 91, and the "predictions" are your output. (Since both are relative transforms, I manually converted them into absolute transforms.)

So I trained the model according to your tutorial, but it got worse after 2000 epochs.
I do not know why. Do you have any sample output from the PoseNet, or can you give me some advice?

Hello, thanks for sharing your code.
I ran test_pose with the model you provided, but the plotted trajectories look very poor. (I tried all of the kitti_odometry sequences.)
I used the evo tool to plot the trajectories. Since the code outputs relative transforms, I manually converted the predictions and sample['poses'] from line 91 of test_pose.py into absolute transforms for plotting; both were converted the same way, so the problem should not be there.
Then I trained the model myself with the parameters you provided, changing only the paths, but even after 2000 epochs the trajectories are still wrong.
I am not sure whether I did something wrong. Could you share a visualization of your PoseNet output, or offer some other help?
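For anyone reproducing this evaluation, chaining relative poses into an absolute trajectory can be sketched as below. This is a generic composition of 4x4 SE(3) matrices; the frame-to-frame convention actually used by test_pose.py (which frame maps to which) is an assumption that should be checked against the code:

```python
import numpy as np

def relative_to_absolute(rel_poses):
    """Compose a list of 4x4 relative SE(3) transforms into absolute poses.

    Assumes rel_poses[i] maps frame i into frame i+1; if test_pose.py
    uses the inverse convention, invert each matrix before chaining.
    Returns len(rel_poses) + 1 absolute poses, starting at identity.
    """
    abs_poses = [np.eye(4)]
    for T in rel_poses:
        abs_poses.append(abs_poses[-1] @ T)
    return abs_poses
```

Applying the same chaining to both the predictions and sample['poses'] keeps the comparison fair, since any convention mismatch then affects both trajectories equally.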
