some general notes and approaches on deep image prior
- the structure of a generator network is sufficient to capture image statistics
- for INVERSE PROBLEMS, a randomly initialized network can be used as a prior
- THE STRUCTURE OF THE NETWORK MUST RESONATE WITH THE STRUCTURE OF THE DATA
- you need a generator network
- I start with random weights in the NN and iteratively update THEM
- updating by comparing the network output to the original (degraded) image; for super resolution this is reasonable
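the fit-the-weights-to-one-image idea above can be sketched as a minimal PyTorch loop; the tiny conv net, sizes, and random target here are placeholders I made up, not the paper's actual hourglass architecture:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# hypothetical tiny "generator": fixed random input code -> image
net = nn.Sequential(
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
)

z = torch.rand(1, 8, 32, 32)       # fixed random input, never optimized
target = torch.rand(1, 3, 32, 32)  # stand-in for the corrupted image

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss0 = None
for step in range(200):
    opt.zero_grad()
    out = net(z)                      # only the WEIGHTS are updated
    loss = ((out - target) ** 2).mean()
    loss.backward()
    opt.step()
    if loss0 is None:
        loss0 = loss.item()           # loss at the first step, for comparison
```

note the input `z` stays fixed; all the fitting happens in the weights, which is the whole point.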
- an untrained network descends much faster towards a "real" image than towards random noise.
- in inpainting, I'm just calculating the loss on the non-mask pixels! sounds pretty reasonable!
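the masked loss above is easy to sketch; a minimal NumPy version, with the convention (my assumption) that mask == 1 on known pixels and 0 inside the hole:

```python
import numpy as np

def masked_mse(output, target, mask):
    # mask is 1 on known pixels, 0 in the hole: the hole contributes nothing,
    # so the network is free to hallucinate there while matching the rest
    diff = (output - target) ** 2
    return (diff * mask).sum() / mask.sum()
```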
- in general I should tune the network for every specific image..
- for LARGE HOLE INPAINTING, an initialization with a meshgrid gradient rather than uniform noise
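a meshgrid input is just two smooth coordinate channels instead of noise; a sketch of how I'd build it (the [0, 1] range and channel order are my choice):

```python
import numpy as np

def meshgrid_input(h, w):
    # two channels holding x and y coordinates in [0, 1]:
    # a smooth gradient input that biases the net towards smooth fills
    ys, xs = np.meshgrid(np.linspace(0, 1, h),
                         np.linspace(0, 1, w),
                         indexing="ij")
    return np.stack([xs, ys])  # shape (2, h, w)
```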
- noise based regularization, perturb the input
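the input perturbation above would look something like this each iteration; the sigma value is an assumption on my part, not something I checked against the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(z, sigma=1 / 30):
    # jitter the (otherwise fixed) input code a little every iteration;
    # acts as a regularizer so the fit doesn't lock onto the corruption
    return z + sigma * rng.standard_normal(z.shape)
```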
- the convolutional operation imposes self-similarity on the generated images
- worth considering how readily a network rebuilds random vs non-random images. maybe related to some kind of distance from randomness..
- what if we sample the random creations of a CNN and use them instead of random inputs? huh?