Comments (32)
In fact, a "PatchGAN" is just a convnet! Or you could say all convnets are patchnets: the power of convnets is that they process each image patch identically and independently, which makes things very cheap (# params, time, memory), and, amazingly, turns out to work.
The difference between a PatchGAN and a regular GAN discriminator is that the regular GAN maps from a 256x256 image to a single scalar output, which signifies "real" or "fake", whereas the PatchGAN maps from 256x256 to an NxN array of outputs X, where each X_ij signifies whether the patch ij in the image is real or fake. Which patch in the input is patch ij? Well, output X_ij is just a neuron in a convnet, and we can trace back its receptive field to see which input pixels it is sensitive to. In the CycleGAN architecture, the receptive fields of the discriminator turn out to be 70x70 patches in the input image!
This is all mathematically equivalent to if we had manually chopped up the image into 70x70 overlapping patches, run a regular discriminator over each patch, and averaged the results.
Maybe it would have been better if we called it a "Fully Convolutional GAN" like in FCNs... it's the same idea :)
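To make the shapes concrete, here is a rough sketch (mine, not the repo's code) that walks the discriminator forward, assuming the usual pix2pix configuration of five 4x4 convolutions with strides 2, 2, 2, 1, 1 and zero padding 1:

```python
# Forward shape walk through the 70x70 PatchGAN discriminator, assuming
# five 4x4 convs with strides 2, 2, 2, 1, 1 and zero padding 1 on each side.
def conv_out(size, ksize=4, stride=2, pad=1):
    """Spatial output size of one convolution layer."""
    return (size + 2 * pad - ksize) // stride + 1

size = 256
for stride in [2, 2, 2, 1, 1]:
    size = conv_out(size, stride=stride)

print(size)  # 30: the NxN grid of real/fake scores is 30x30 for a 256x256 input
```

Each of those 30x30 outputs is a neuron whose receptive field, traced back through the same five layers, covers a 70x70 patch of the input.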
from pytorch-cyclegan-and-pix2pix.
Here is a visual receptive field calculator: https://fomoro.com/tools/receptive-fields/#
I converted the math into python to make it easier to understand:
def f(output_size, ksize, stride):
    return (output_size - 1) * stride + ksize

last_layer = f(output_size=1, ksize=4, stride=1)
# Receptive field: 4
fourth_layer = f(output_size=last_layer, ksize=4, stride=1)
# Receptive field: 7
third_layer = f(output_size=fourth_layer, ksize=4, stride=2)
# Receptive field: 16
second_layer = f(output_size=third_layer, ksize=4, stride=2)
# Receptive field: 34
first_layer = f(output_size=second_layer, ksize=4, stride=2)
# Receptive field: 70
print(first_layer)
-
The "70" is implicit: it's not written anywhere in the code but instead emerges as a mathematical consequence of the network architecture.
-
The math is here: https://github.com/phillipi/pix2pix/blob/master/scripts/receptive_field_sizes.m
That's a good point! Batchnorm does have this property. So to be precise we should say the PatchGAN architecture is equivalent to chopping up the image into 70x70 patches, making a big batch out of these patches, running a discriminator on each patch with batchnorm applied across the batch, and then averaging the results.
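As a concrete (and purely illustrative) numpy sketch of that equivalence, assuming the 70x70 receptive fields and an effective stride of 8 between neighboring outputs (the product of the conv strides 2, 2, 2, 1, 1): one can chop the image into the patch batch explicitly. Note that without the zero padding at the borders this yields a 24x24 grid of patches rather than the 30x30 scores the real network produces.

```python
import numpy as np

# Extract the overlapping 70x70 patches that the PatchGAN output grid
# effectively sees; neighboring receptive fields are 8 pixels apart.
img = np.zeros((256, 256))
patches = np.lib.stride_tricks.sliding_window_view(img, (70, 70))[::8, ::8]
print(patches.shape)  # (24, 24, 70, 70)

# One big batch of patches: run a per-patch discriminator on this batch
# (with batchnorm across the batch), then average the per-patch scores.
batch = patches.reshape(-1, 70, 70)
print(batch.shape)  # (576, 70, 70)
```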
Edit: see defineD
Hi @phillipi @junyanz ,
I understood how the patch size is calculated implicitly by tracing back the receptive fields of successive convolutional layers. But don't you think batch normalization sort of harms the overall idea of the PatchGAN discriminator? I mean, theoretically each element X_ij of the final NxN output should depend only on some 70x70 patch of the original image, and any change outside that 70x70 patch should not change the value of X_ij. But if we use batch normalization, that won't necessarily be true, right?
I have a question.
-
I looked at the code (class NLayerDiscriminator(nn.Module)), but I do not see the number 70 anywhere. So why is it called a 70x70 PatchGAN? That is, why the number 70?
-
The output of the code is 30x30x1 (the X_ij), yet the patches of the PatchGAN are said to be 70x70 (the ij). You said you traced back and found that patch ij is 70x70; how did you do that?
Thanks. Then what difference does removing the sigmoid make to the output of D? For example, in LSGAN, if the output of D is very large (far from 1 or 0), can the loss function still work, since the real labels are still set to 1 and the fake labels to 0?
I would like to share some thoughts on why the receptive field (patch size) is computed as:
(output_size - 1) * stride + ksize
Here is what I think. For any i (input feature map size), k (kernel size), p (zero padding size) and s (stride), the output feature map size o is:
o = floor((i + 2*p - k) / s) + 1
When calculating the receptive field, it is assumed that p = 0, so the receptive-field calculation above is simply the inverse of this output-size formula.
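A quick sanity check (my own sketch) that the two formulas are indeed inverses when p = 0:

```python
# With p = 0, the receptive-field formula inverts the output-size formula:
# feeding rf(o, k, s) pixels into a conv layer yields exactly o outputs.
def out_size(i, k, s, p=0):
    return (i + 2 * p - k) // s + 1

def rf(o, k, s):
    return (o - 1) * s + k

for k, s in [(4, 1), (4, 2), (3, 1), (5, 2)]:
    for o in range(1, 10):
        assert out_size(rf(o, k, s), k, s) == o
print("ok")  # the round trip holds for every case tried
```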
I think the padding was a holdover from the DCGAN architecture. I can't remember if there is a good reason for it. It might have been to make a 256x256 input map to a 1x1 output in the DCGAN discriminator.
Zero padding also has the effect that it helps localize where you are in the image, since you can see this border of zeros when you are near an image boundary. That can sometimes be beneficial.
thanks @emilwallner
Great picture, like it!
Can you tell me which line in the code represents patchGAN?
Yes that would be a better explanation! And thanks for your response to this.
Hello phillipi,
thanks for your explanation and for sharing your implementation!
I'm also trying to better understand the PatchGAN discriminator, and I see that it is equivalent to a convnet from a design point of view. In other words, if I had to implement a PatchGAN discriminator, I should do it as you did.
But what happens if I already have a (pre-trained) neural network that accepts as input a receptive-field-sized crop (in this case 70x70) of a bigger image (e.g. 1024x1024)? I couldn't figure out how such a network could be integrated efficiently, or rewritten using convolutional layers, without modifying the architecture of the pre-trained network.
P.S. I'm trying to implement this in TensorFlow, but I don't think it's a platform-related issue.
Thank you!
TensorFlow's extract_image_patches is a differentiable function and can be used during training.
Well, I understand how the PatchGAN works now! Thanks.
Hi, I am wondering why a sigmoid activation is not used in the PatchGAN, since the output for a real patch should be close to 1 and for a fake patch close to 0.
The sigmoid is contained in the loss function here. But note that some variants of GAN discriminators don't use a sigmoid (e.g., see LSGANs or WGANs).
I believe in LSGAN the loss is the squared distance from the labels. So if the output of D is very large, D will get a large penalty and will learn to produce a smaller output. Eventually, D should learn to output the correct labels, since those minimize the loss (and the loss is nice and smooth, just squared distance).
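A tiny numerical sketch of that point (my own illustration of the usual least-squares objective, with real label 1 and fake label 0):

```python
# LSGAN discriminator loss: squared distance of D's outputs from the labels.
def lsgan_d_loss(d_real, d_fake):
    return (d_real - 1.0) ** 2 + (d_fake - 0.0) ** 2

print(lsgan_d_loss(d_real=10.0, d_fake=-10.0))  # 181.0: huge outputs are punished
print(lsgan_d_loss(d_real=1.0, d_fake=0.0))     # 0.0: outputs at the labels are optimal
```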
Why is a padding of 1 used in every convolution of the discriminator? If we feed the discriminator an image of size 70x70, we get a 6x6 output. Wouldn't it make more sense to use no padding and instead get a single 1x1 output for a 70x70 input?
Can you tell me which line in the code represents patchGAN?
It is here: line 538 of networks.py.
70
Thank you!
But what does 'output_size' mean here?
It just means the width/height of the output feature map.
We can calculate the receptive field of the prior layer from its output_size.
Hi, since the discriminator outputs a 30x30x1 matrix, does that mean the 70x70 patch was moved over the input image 30 times in each direction (horizontal and vertical), with each position mapped to a single output?
Answered at #1106.
Hello phillipi,
I am wondering whether 'padding' is necessary in the conv layers?
I doubt it has a big effect. You could try removing it and see what happens.
Thanks. I also wonder whether the 'PatchGAN' discriminator (in fact a convnet, as you responded) would still work when applied to a 3-D model (C-H-W-L, 4 dims in code)?
If so, should conv3d() be used instead, so that this '3-D PatchGAN' could discriminate which local regions of the 3-D model are real or fake?
The one thing I'm struggling to understand is that the discriminator looks at 70x70 patches. But if I understand correctly, its input is the conditional image concatenated with either the real image or the synthesized image. So if it's only looking at small patches at a time, how does it learn the relationship between the two images? How does it check that the conditional input has actually informed the generated image?
Most of the applications in the paper only require local color and texture transfer. In these cases, 70x70 patches might be enough (for a 256x256 input image). Later work (e.g., pix2pixHD) has explored multi-scale discriminators, which can look at more pixels.
If this structure were added to the generator, would it have a good effect? Is there any ablation experiment in this regard?