
bambnet's People

Contributors

junjun-jiang


bambnet's Issues

Hello, a question about the code

Hello,
If I understand correctly, in your paper the defocus mask is used to split one image into four images, each of which is fed into a preset sub-restorer. However, this does not seem to be reflected in your model, including the training and testing files. I am not sure whether I have misunderstood or missed part of the paper. Could you please clarify?
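For context, the routing this question describes, splitting an image by a defocus mask and restoring each region with its own sub-restorer, could be sketched roughly as below. This is an illustration only, with assumed names (MaskRoutedRestorer, make_sub_restorer, a 4-level integer mask), not the repository's code.

```python
import torch
import torch.nn as nn

class MaskRoutedRestorer(nn.Module):
    """Illustrative sketch: split an image into 4 regions by a defocus mask
    and restore each region with its own sub-restorer (names are hypothetical)."""
    def __init__(self, make_sub_restorer):
        super().__init__()
        # four sub-restorers, one per blur-level region
        self.sub_restorers = nn.ModuleList([make_sub_restorer() for _ in range(4)])

    def forward(self, img, defocus_mask):
        # defocus_mask: (B, 1, H, W) integer labels in {0, 1, 2, 3}
        out = torch.zeros_like(img)
        for k, restorer in enumerate(self.sub_restorers):
            region = (defocus_mask == k).float()          # binary mask for region k
            out = out + restorer(img * region) * region    # restore and paste back region k
        return out
```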

About the pretrained model?

Thanks for providing the code for BaMBNet; I think it's a very interesting method!
However, it can only be trained on a single GPU, so reproducing the reported performance takes a lot of time.
I just want to see the results generated by the model. Could you provide your pretrained model?
Thanks again for your help!

Input for model in test.py

Many thanks for your work. I am confused about the input to the model in test.py. It seems that the COC maps are not used as model input in test.py, while they ARE used as model input in train.py. Is this intended for an ablation study, or is it a mistake? Many thanks.
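For readers skimming the thread, the asymmetry being described is roughly the following paraphrased call sites, with assumed variable names rather than the repository's exact lines:

```python
# train.py (paraphrased): the COC/blur map b_img is part of the model input
pred = model(l_img, r_img, b_img)

# test.py (paraphrased): no COC map is passed; r_img appears twice
pred = model(l_img, r_img, r_img)
```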

The model input in train.py and test.py

  1. In train.py, the model input is (l_img, r_img, b_img). Is b_img the COC map of the training dataset? b_img is not produced by your blur_train.py, so do I need to modify your blur_test.py to obtain it?
  2. In test.py, the model input is (l_img, r_img, r_img). Is that a mistake? Why do you concatenate two r_img? (See the sketch after this list.)
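One way to probe the second point is to compare the output when a real COC map is passed as the third argument versus when r_img is duplicated. This is a sketch only; it assumes a loaded model and tensors named as in the question, which may not match the repository exactly.

```python
import torch

model.eval()
with torch.no_grad():
    out_train_style = model(l_img, r_img, b_img)   # third input: COC map, as in train.py
    out_test_style = model(l_img, r_img, r_img)    # third input: r_img again, as in test.py

# if the difference is non-negligible, the third argument is not a dummy input
print((out_train_style - out_test_style).abs().mean().item())
```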

How to update the thresholds r2, r3, r4 in your code?

In Section III-C of your paper, the thresholds r2–r4 are updated by adaptively learning from a small amount of meta-data. However, I can't find this part in your code; I only see that you set four static_kernel_size values. The paper says "Combined with the model training process, generating the defocus mask is a nested optimization." Where is the nested optimization?
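As background on why learning such thresholds needs care: hard thresholding is not differentiable, so one common trick is a soft (sigmoid) partition whose boundaries are learnable parameters. The sketch below illustrates that general idea only; the class name, initial values, and sharpness factor are assumptions, and this is not the paper's meta-learning procedure.

```python
import torch
import torch.nn as nn

class ThresholdMask(nn.Module):
    """Illustrative sketch: turn a COC map into a 4-region soft defocus mask
    using three learnable thresholds (ordering r2 < r3 < r4 is not enforced)."""
    def __init__(self, init=(0.1, 0.3, 0.6)):
        super().__init__()
        self.r = nn.Parameter(torch.tensor(init))   # r2, r3, r4 (hypothetical init)

    def forward(self, coc):
        # coc: (B, H, W) COC magnitude; sigmoid sharpness 50 keeps gradients alive
        m = [torch.sigmoid((coc - r) * 50.0) for r in self.r]
        region0 = 1.0 - m[0]       # coc below r2
        region1 = m[0] - m[1]      # between r2 and r3
        region2 = m[1] - m[2]      # between r3 and r4
        region3 = m[2]             # above r4
        return torch.stack([region0, region1, region2, region3], dim=1)  # (B, 4, H, W)
```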

About the first-step COC-map training?

Thanks again for the project!
I used the config you provided to train the COC-map stage for 20 epochs on the Canon dataset, after preprocessing with 'image_to_patch_filter.py'. The inference results are shown in Figure 1 below: all pixels look black, with values above 1000. I have several questions:
1. The pixel values of the results are uint16, and the sub-regions described in the paper cannot be seen. Is something wrong with my training, or do you observe the same behavior?
2. Why are the results three-channel rather than single-channel, and what do the pixel values in each result image mean?
In addition, I changed the config to "niter: 500000 epoch: 300 #20" and trained to about epoch 45 (70,000 iterations), at which point the unsupervised loss became 0. The test results are shown in Figure 2. Is that normal?
[Figure 1 and Figure 2: attached result images]
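On the "all black" observation: a uint16 image whose values are around 1000 (out of 65535) renders as nearly black in most viewers, so the values may be fine even if the picture looks empty. A quick way to inspect and rescale a result for viewing is sketched below; the file names are placeholders, not paths from the repository.

```python
import cv2
import numpy as np

# load the 16-bit result without any conversion (placeholder path)
img = cv2.imread("result_patch.png", cv2.IMREAD_UNCHANGED)
print(img.dtype, img.shape, img.min(), img.max())   # check bit depth, channel count, value range

# stretch to 8-bit for viewing; values near 1000/65535 are otherwise almost invisible
vis = cv2.normalize(img.astype(np.float32), None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite("result_patch_vis.png", vis.astype(np.uint8))
```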

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. 📊📈🎉

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google ❤️ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.