
joeylitalien / noise2noise-pytorch

PyTorch Implementation of Noise2Noise (Lehtinen et al., 2018)

License: MIT License

Python 91.48% Shell 8.52%
deep-learning denoising pytorch

noise2noise-pytorch's People

Contributors: joeylitalien

noise2noise-pytorch's Issues

just for the test.py

Hello, I would like to ask: the test image I input is 1024 × 1024, but the output is 256 × 256, and I did not see where the picture gets cropped. I look forward to your reply, thank you.
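
For context, a plausible explanation (an assumption about the released scripts, not confirmed code): the data pipeline crops every image to a fixed crop size, 256 by default, before it reaches the network, so the saved outputs come back at 256 × 256. A minimal sketch of that kind of behaviour:

    # Hypothetical sketch of a fixed-size crop in the data pipeline
    # (function and parameter names are illustrative, not the repo's).
    import torchvision.transforms.functional as tvF

    def prepare_for_test(img, crop_size=256):
        """Center-crop to crop_size, then convert to a tensor."""
        if crop_size > 0:
            img = tvF.center_crop(img, [crop_size, crop_size])
        return tvF.to_tensor(img)

If the script exposes a crop-size option, setting it to the full resolution (or disabling the crop) should keep 1024 × 1024 inputs intact, assuming the network is fully convolutional.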

About U-Net model and data pre-processing

I am curious about the differences from the paper's model:

  1. Why did you choose to use ConvTranspose instead of upsampling?
  2. You used ReLU everywhere and a leaky ReLU after the last layer, but from the paper I think the authors meant leaky ReLU everywhere except the last layer, which has only a linear activation.

An unrelated question:

  3. Did you use any pre-processing on the images, e.g. mean subtraction or normalization? I think that would be needed since there are no BN layers.

I am trying to implement the model in TensorFlow and am running into an INF loss starting in the 2nd epoch, so I hope your answers will help me. Thank you in advance!

Update: I seem to have solved the problem by adding batch-norm layers to the U-Net model, but I am still puzzled how the authors managed stable training without batch norm.
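
As a rough illustration of the fix described in the update above (not the repository's actual code, and the layer sizes are assumed), one U-Net encoder block with BatchNorm added after each convolution might look like this:

    # Illustrative only: an encoder block with nn.BatchNorm2d inserted after
    # each convolution, the change reported to stabilize the TensorFlow port.
    import torch.nn as nn

    def enc_block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.1, inplace=True),
            nn.MaxPool2d(2))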

OpenEXR error while running the training files

    Traceback (most recent call last):
      File "train.py", line 7, in <module>
        from datasets import load_dataset
      File "/Users/snehagathani/Desktop/noise/src/datasets.py", line 9, in <module>
        from utils import load_hdr_as_tensor
      File "/Users/snehagathani/Desktop/noise/src/utils.py", line 12, in <module>
        import OpenEXR
    ImportError: dlopen(/anaconda3/lib/python3.6/site-packages/OpenEXR.cpython-36m-darwin.so, 2): Symbol not found: __ZN7Imf_2_314TypedAttributeISsE13readValueFromERNS_7IStreamEii
      Referenced from: /anaconda3/lib/python3.6/site-packages/OpenEXR.cpython-36m-darwin.so
      Expected in: flat namespace
      in /anaconda3/lib/python3.6/site-packages/OpenEXR.cpython-36m-darwin.so

This error comes for all the training files.
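
A "Symbol not found: __ZN7Imf_2_3..." message typically means the OpenEXR Python bindings were compiled against a different OpenEXR/IlmBase build (or C++ standard library) than the one installed, so rebuilding or reinstalling the bindings against the local libraries is the usual fix. As a stopgap when HDR/Monte Carlo data is not needed, the import could be made optional; a hedged sketch (this guard is not part of the original utils.py):

    # Hypothetical guard: defer the OpenEXR import so the other noise types
    # can still be used even when the bindings fail to load.
    try:
        import OpenEXR  # only needed for HDR (.exr) images
    except ImportError:
        OpenEXR = None

    def require_openexr():
        """Raise a clear error only when HDR loading is actually requested."""
        if OpenEXR is None:
            raise ImportError('OpenEXR bindings are unavailable; rebuild them '
                              'against the installed OpenEXR/IlmBase libraries.')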

datasets.py issues?

Hello, there may be a problem here:

    raise ValueError('Invalid noise type: {}'.format(noise_type))

should be

    raise ValueError('Invalid noise type: {}'.format(self.noise_type))
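
For context, a minimal sketch of where the corrected line would sit (the surrounding method and helpers are assumed for illustration, not copied from the repository):

    # Illustrative fragment: inside the dataset class the noise type is an
    # attribute, so the error message must reference self.noise_type.
    def _corrupt(self, img):
        if self.noise_type == 'gaussian':
            return self._add_gaussian_noise(img)   # hypothetical helper
        elif self.noise_type == 'poisson':
            return self._add_poisson_noise(img)    # hypothetical helper
        else:
            raise ValueError('Invalid noise type: {}'.format(self.noise_type))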

Confused about the unet.py code, is it reasonable?

Hi, I checked the code and I am confused by this part of unet.py:

    def forward(self, x):
        """Through encoder, then decoder by adding U-skip connections. """

        # Encoder
        pool1 = self._block1(x)
        pool2 = self._block2(pool1)
        pool3 = self._block2(pool2)
        pool4 = self._block2(pool3)
        pool5 = self._block2(pool4)

pool2–pool5 are all computed by self._block2, so they reuse the same convolution weights and biases. Is that acceptable? I think the U-Net should use different convolution weights at different layers.
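
The underlying PyTorch behaviour is worth spelling out: calling the same nn.Module instance several times reuses its parameters, so the snippet above really does share one set of convolution weights across those stages. A small sketch contrasting the two patterns (layer sizes are illustrative):

    # Minimal demonstration: reusing one module instance shares its weights;
    # separate instances have separate weights.
    import torch
    import torch.nn as nn

    x = torch.randn(1, 48, 64, 64)

    shared = nn.Conv2d(48, 48, 3, padding=1)
    y = shared(shared(x))            # both calls use the same weights

    conv_a = nn.Conv2d(48, 48, 3, padding=1)
    conv_b = nn.Conv2d(48, 48, 3, padding=1)
    z = conv_b(conv_a(x))            # each stage has its own parameters

Whether the sharing in unet.py is intentional is a question for the author; the paper's reference architecture appears to list enc_conv1–enc_conv5 as separate layers.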

Using my own noisy-noisy pairs

Dear Authors,

There are no instructions on how to use the code with my own noisy-noisy pairs. The current code takes a dataset of images as input and applies two independent noise realizations to each instance, producing noisy-noisy training pairs. But what if I already have my own pairs of noisy-noisy images and want to train the network on them? Is this possible with the current architecture?

Best,
Ali.
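
One way to do this without changing the network is to replace the on-the-fly corruption with a dataset that simply loads two noisy images per sample; a hedged sketch (the directory layout and class name are assumptions, not part of the repository):

    # Hypothetical paired dataset: expects source/ and target/ subfolders
    # holding noisy images with matching filenames.
    import os
    from PIL import Image
    import torchvision.transforms.functional as tvF
    from torch.utils.data import Dataset

    class NoisyPairDataset(Dataset):
        def __init__(self, root):
            self.src_dir = os.path.join(root, 'source')
            self.tgt_dir = os.path.join(root, 'target')
            self.names = sorted(os.listdir(self.src_dir))

        def __len__(self):
            return len(self.names)

        def __getitem__(self, idx):
            name = self.names[idx]
            src = Image.open(os.path.join(self.src_dir, name)).convert('RGB')
            tgt = Image.open(os.path.join(self.tgt_dir, name)).convert('RGB')
            return tvF.to_tensor(src), tvF.to_tensor(tgt)

Such a dataset could then be wrapped in a standard DataLoader and fed to the existing training loop in place of the current noise-injecting dataset.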

GPU not being used

I used --cuda on Google Colab, but the GPU is not being used. How do I fix it?
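
A quick, generic check (not specific to this repository's flags) to confirm whether the Colab runtime actually exposes a GPU to PyTorch:

    # If this prints False, switch the Colab runtime to a GPU
    # (Runtime -> Change runtime type) before rerunning with --cuda.
    import torch
    print(torch.cuda.is_available())
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))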

About image reconstruction

Hello, I want to know whether you have code for image reconstruction, or whether this code can be used for image reconstruction.
