Comments (8)

shaohua0116 commented on August 29, 2024

Our flow module is similar to AFN, except that the architecture is fully convolutional (i.e., without FC layers). We do not use DOAFN because it requires 3D supervision (e.g., surface normals), which makes it hard to apply to real-world scenes.

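To make the AFN-style flow idea concrete, here is a minimal, hypothetical sketch (my own illustration in TensorFlow 2, not the repository's code) of a fully-convolutional module that predicts a per-pixel 2D flow field and bilinearly warps the source view with it; the layer sizes and image shapes are arbitrary.

```python
import tensorflow as tf

def build_flow_predictor():
    # Fully convolutional: no FC layers, so it runs at any input resolution.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(2, 3, padding="same"),  # per-pixel (dx, dy)
    ])

def bilinear_warp(image, flow):
    """Warp `image` [B, H, W, C] by per-pixel offsets `flow` [B, H, W, 2]."""
    h, w = tf.shape(image)[1], tf.shape(image)[2]
    gy, gx = tf.meshgrid(tf.range(h), tf.range(w), indexing="ij")
    gx = tf.cast(gx, tf.float32)[None]   # [1, H, W] base grid (x coordinates)
    gy = tf.cast(gy, tf.float32)[None]   # [1, H, W] base grid (y coordinates)
    x = gx + flow[..., 0]
    y = gy + flow[..., 1]
    x0, y0 = tf.floor(x), tf.floor(y)
    wx, wy = x - x0, y - y0

    def gather(yy, xx):
        # Clamp to the image border and look up pixels per batch element.
        yy = tf.clip_by_value(tf.cast(yy, tf.int32), 0, h - 1)
        xx = tf.clip_by_value(tf.cast(xx, tf.int32), 0, w - 1)
        return tf.gather_nd(image, tf.stack([yy, xx], axis=-1), batch_dims=1)

    # Bilinear interpolation of the four neighbouring source pixels.
    top = gather(y0, x0) * (1 - wx)[..., None] + gather(y0, x0 + 1) * wx[..., None]
    bot = gather(y0 + 1, x0) * (1 - wx)[..., None] + gather(y0 + 1, x0 + 1) * wx[..., None]
    return top * (1 - wy)[..., None] + bot * wy[..., None]

# Usage: predict a flow from the source view, then warp the source with it.
source = tf.random.uniform([1, 64, 64, 3])
flow = build_flow_predictor()(source)
warped = bilinear_warp(source, flow)
```
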
ehabhelmy82 commented on August 29, 2024

Thank you very much for your reply.
One more question: on what basis did you choose the number of training iterations for each dataset? For example, I noticed that you trained your models as follows:
Synthia: model_checkpoint_path: "model-33001"
Car: model_checkpoint_path: "model-140001"
Chair: model_checkpoint_path: "model-180001"
KITTI: model_checkpoint_path: "model-41001"
Why did you stop KITTI at 41001 but train the chair model up to 180001?
Thanks again

ehabhelmy82 commented on August 29, 2024

???

shaohua0116 commented on August 29, 2024

It takes longer to train models for cars and chairs, partially because we train those models using four source views.

ehabhelmy82 commented on August 29, 2024

I understand this point, but my question is why you stopped Synthia at model 33001 rather than, say, 70001.
On what basis do you decide to terminate training at a particular checkpoint? Why that specific iteration count, and why not the same one for all datasets?

shaohua0116 commented on August 29, 2024

I stopped training each model once its training curves on TensorBoard converged.
Why not look at a validation set? The main reason is that the training set is extremely large (considering all the permutations of multiple source/target views), so it is unlikely that a model would simply overfit. Also, I follow the training and test splits proposed in prior work, where no validation sets were provided.

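For readers wondering what "stop once the training curve converges" could look like in practice, here is a small hypothetical helper (my own sketch, not the authors' procedure) that flags convergence when the smoothed training loss stops improving; the window size and tolerance are arbitrary.

```python
def has_converged(losses, window=1000, tol=1e-3):
    """Return True if the mean loss over the last `window` steps improved by
    less than `tol` (relative) compared with the previous `window` steps."""
    if len(losses) < 2 * window:
        return False
    prev = sum(losses[-2 * window:-window]) / window
    last = sum(losses[-window:]) / window
    return (prev - last) / max(abs(prev), 1e-8) < tol

# One could check this periodically in the training loop, e.g.:
#   if step % 1000 == 0 and has_converged(loss_history):
#       break  # then keep the most recent checkpoint (hypothetical usage)
```
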
ehabhelmy82 commented on August 29, 2024

Many thanks.
Could you kindly share the command you used to monitor the training curves?

shaohua0116 commented on August 29, 2024

You can launch TensorBoard on ./train_dir (using tensorboard --logdir . --port 6006); the generated images then appear under the IMAGES tab and the losses under the SCALARS tab.

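For completeness, here is a minimal sketch of how scalars and images end up under those TensorBoard tabs, using the TF2 tf.summary API (the repository itself may log summaries differently); the tag names and values below are placeholders.

```python
import tensorflow as tf

# Write summaries into the directory that TensorBoard is pointed at.
writer = tf.summary.create_file_writer("./train_dir")
with writer.as_default():
    for step in range(3):
        loss = 1.0 / (step + 1)                       # placeholder loss value
        preview = tf.random.uniform([1, 64, 64, 3])   # placeholder image batch
        tf.summary.scalar("train/loss", loss, step=step)      # SCALARS tab
        tf.summary.image("train/prediction", preview, step=step)  # IMAGES tab
writer.flush()

# Then view the curves with: tensorboard --logdir ./train_dir --port 6006
```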