
Comments (32)

phillipi commented on June 23, 2024

In fact, a "PatchGAN" is just a convnet! Or you could say all convnets are patchnets: the power of convnets is that they process each image patch identically and independently, which makes things very cheap (# params, time, memory), and, amazingly, turns out to work.

The difference between a PatchGAN and a regular GAN discriminator is that the regular GAN maps from a 256x256 image to a single scalar output, which signifies "real" or "fake", whereas the PatchGAN maps from 256x256 to an NxN array of outputs X, where each X_ij signifies whether the patch ij in the image is real or fake. Which patch is patch ij in the input? Well, output X_ij is just a neuron in a convnet, and we can trace back its receptive field to see which input pixels it is sensitive to. In the CycleGAN architecture, the receptive fields of the discriminator turn out to be 70x70 patches in the input image!

This is all mathematically equivalent to if we had manually chopped up the image into 70x70 overlapping patches, run a regular discriminator over each patch, and averaged the results.
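This equivalence is easy to check numerically. Below is a toy NumPy sketch (not the actual pix2pix code; batchnorm and the final averaging are omitted): a two-layer convolutional "discriminator" run over the full image gives, at each output location, exactly the value obtained by running the same layers on the corresponding input crop.

```python
import numpy as np

def conv2d_valid(x, k, stride=1):
    # Plain "valid" cross-correlation of a single-channel image with kernel k.
    kh, kw = k.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i*stride:i*stride+kh, j*stride:j*stride+kw] * k)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
k1 = rng.standard_normal((4, 4))
k2 = rng.standard_normal((4, 4))

# "Discriminator" = two stride-2 convs, run fully convolutionally:
full = conv2d_valid(conv2d_valid(img, k1, stride=2), k2, stride=2)

# Receptive field of each output: (1-1)*2+4 = 4, then (4-1)*2+4 = 10.
# Running the same layers on just the top-left 10x10 crop gives the same value:
patch_out = conv2d_valid(conv2d_valid(img[:10, :10], k1, stride=2), k2, stride=2)

print(np.allclose(full[0, 0], patch_out[0, 0]))  # True
```

Each output full[i, j] likewise equals the same two layers applied to the 10x10 crop at offset (4*i, 4*j), since the two strides multiply to an effective stride of 4.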

Maybe it would have been better if we called it a "Fully Convolutional GAN" like in FCNs... it's the same idea :)

from pytorch-cyclegan-and-pix2pix.

emilwallner commented on June 23, 2024

Here is a visual receptive field calculator: https://fomoro.com/tools/receptive-fields/#

I converted the math into Python to make it easier to understand:

def f(output_size, ksize, stride):
    # Map a layer's output size back to the input extent it sees:
    # each output unit covers (output_size - 1) * stride + ksize input pixels.
    return (output_size - 1) * stride + ksize

last_layer = f(output_size=1, ksize=4, stride=1)
# Receptive field: 4
fourth_layer = f(output_size=last_layer, ksize=4, stride=1)
# Receptive field: 7
third_layer = f(output_size=fourth_layer, ksize=4, stride=2)
# Receptive field: 16
second_layer = f(output_size=third_layer, ksize=4, stride=2)
# Receptive field: 34
first_layer = f(output_size=second_layer, ksize=4, stride=2)
# Receptive field: 70

print(first_layer)


phillipi commented on June 23, 2024
  1. The "70" is implicit: it's not written anywhere in the code but instead emerges as a mathematical consequence of the network architecture.

  2. The math is here: https://github.com/phillipi/pix2pix/blob/master/scripts/receptive_field_sizes.m


phillipi commented on June 23, 2024

That's a good point! Batchnorm does have this property. So to be precise we should say the PatchGAN architecture is equivalent to chopping up the image into 70x70 patches, making a big batch out of these patches, and running a discriminator on each patch, with batchnorm applied across the batch, then averaging the results.


phillipi commented on June 23, 2024

Edit: see defineD


utkarshojha commented on June 23, 2024

Hi @phillipi @junyanz ,
I understood how patch sizes are calculated implicitly by tracing back the receptive field sizes of successive convolutional layers. But don't you think batch normalization somewhat undermines the overall idea of the PatchGAN discriminator? Theoretically, each element X_ij of the final NxN output should depend only on some 70x70 patch of the original image, and any change outside that 70x70 patch should not change the value of X_ij. But if we use batch normalization, that won't necessarily be true, right?
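The coupling is easy to demonstrate with a toy example (a sketch, independent of the actual code): when normalization statistics are computed across a batch of patch scores, changing one patch shifts the normalized score of every other patch.

```python
import numpy as np

def batchnorm(x):
    # Normalize using mean/std computed over the whole batch.
    return (x - x.mean()) / (x.std() + 1e-5)

scores = np.array([1.0, 2.0, 3.0, 4.0])   # pretend per-patch activations
changed = scores.copy()
changed[3] = 100.0                        # modify a "distant" patch only

# The first patch's normalized value changes even though its own input didn't:
print(batchnorm(scores)[0], batchnorm(changed)[0])
```

Note that at test time batchnorm typically uses fixed running statistics rather than batch statistics, so strict patch locality is restored at inference.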


taki0112 commented on June 23, 2024

I have a question.

  1. I saw the code (class NLayerDiscriminator(nn.Module)), but I do not see the number 70 anywhere.
    So why is it called a 70x70 PatchGAN?
    That is, why the number 70?

  2. The output of the code is 30x30x1 (the X_ij).
    The patch of the PatchGAN was said to be 70x70 (the ij).
    You said you traced back and found that patch ij is 70x70. How did you do that?


daifeng2016 commented on June 23, 2024

Thanks. Then what does the output of D mean without a sigmoid? For example, in LSGAN, if the output of D is very large (far from 1 or 0), can the loss function still work, since the real labels are still set to 1 and the fake labels to 0?


FunkyKoki commented on June 23, 2024

I would like to share some thoughts on why the receptive field is computed as:
(output_size - 1) * stride + ksize

Here is what I think. For any i (input feature map size), k (kernel size), p (zero padding size) and s (stride), the output feature map size o is:
o = floor((i + 2*p - k)/s) + 1

When calculating the receptive field, it is assumed that p = 0, so the receptive-field calculation is simply the inverse of the output-size formula above.
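To make the forward/inverse relationship concrete, here is a small sketch (using the kernel sizes and strides from the receptive-field code above) that walks a 70x70 input forward through the stack and confirms it collapses to a single output unit:

```python
def out_size(i, k, s, p=0):
    # Forward formula: o = floor((i + 2p - k) / s) + 1
    return (i + 2 * p - k) // s + 1

def receptive(o, k, s):
    # Inverse (with p = 0): input extent seen by o output units
    return (o - 1) * s + k

i = 70
for k, s in [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]:
    i = out_size(i, k, s)
print(i)  # 1: a 70x70 patch maps to exactly one output unit

# Round trip: out_size inverts receptive for any output size
print(out_size(receptive(5, 4, 2), 4, 2))  # 5
```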


phillipi commented on June 23, 2024

I think the padding was a holdover from the DCGAN architecture. I can't remember if there is a good reason for it. Might have been to make a 256x256 input map to a 1x1 output, in the DCGAN discriminator.

Zero padding also has the effect that it helps localize where you are in the image, since you can see this border of zeros when you are near an image boundary. That can sometimes be beneficial.


CHENHUI-X commented on June 23, 2024

> thanks @emilwallner [screenshot]

Great picture, I like it!


taki0112 commented on June 23, 2024

Can you tell me which line in the code represents the PatchGAN?


utkarshojha commented on June 23, 2024

Yes that would be a better explanation! And thanks for your response to this.


edoardogiacomello commented on June 23, 2024

Hello phillipi,
thanks for your explanation and for sharing your implementation!
I'm also trying to better understand the PatchGAN discriminator, and I understand that it is equivalent to a convnet from a design point of view. In other words, if I had to implement a PatchGAN discriminator, I would do as you did.
But what happens if I already have a (pre-trained) network that accepts as input the receptive field (in this case, 70x70 images) of a bigger image (e.g. 1024x1024)? I couldn't figure out how such a network could be integrated efficiently, or rewritten using convolutional layers, without modifying its architecture.
P.S. I'm trying to implement this in TensorFlow, but I don't think it's a platform-related issue.

Thank you!


iperov commented on June 23, 2024

TensorFlow's extract_image_patches is a differentiable function and can be used during training.


huicongzhang commented on June 23, 2024

Well, now I understand how the PatchGAN works. Thanks!


daifeng2016 commented on June 23, 2024

Hi, I am wondering why a sigmoid activation is not used for the PatchGAN, since the output for a real patch should be close to 1, while for a fake patch it should be close to 0.


phillipi commented on June 23, 2024

The sigmoid is contained in the loss function here. But note that some variants of GAN discriminators don't use a sigmoid (e.g., see LSGANs or WGANs).


phillipi commented on June 23, 2024

I believe in LSGAN the loss is squared distance from the labels. So if the output of D is very large, D will get a large penalty and it will learn to make a smaller output. Eventually, D should learn to output the correct labels, since those minimize the loss (and the loss is nice and smooth, just squared distance).
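As a toy illustration (not the repository's exact loss code, which also averages over the 30x30 outputs; the 0.5 factor and the 1/0 labels are common conventions), the per-output least-squares discriminator loss looks like:

```python
def lsgan_d_loss(d_real, d_fake):
    # Least-squares GAN discriminator loss, labels 1 (real) and 0 (fake).
    return 0.5 * ((d_real - 1.0) ** 2 + (d_fake - 0.0) ** 2)

# A far-off output incurs a large penalty; outputs at the labels incur none:
print(lsgan_d_loss(5.0, -3.0))  # 12.5
print(lsgan_d_loss(1.0, 0.0))   # 0.0
```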


serwansj commented on June 23, 2024

Why is a padding of 1 used in every convolution in the discriminator? If we feed the discriminator an image of size 70x70, we get a 6x6 output. Wouldn't it make more sense to use no padding and instead get a single 1x1 output for a 70x70 input?


JustinAsdz commented on June 23, 2024

> Can you tell me which line in the code represents the PatchGAN?

It's here: line 538 of networks.py.


xcc13 commented on June 23, 2024

> 70

Thank you!
But what does 'output_size' mean here?


JustinAsdz commented on June 23, 2024

> 70
>
> Thank you!
> But what does 'output_size' mean here?

It just means the width/height of the output feature map. We can calculate the receptive field of the prior layer from its output_size.


shaurov2253 commented on June 23, 2024

Hi, since the discriminator outputs a 30x30x1 matrix, does that mean the 70x70 patch was moved over the input image 30 times in each direction (horizontal and vertical), with each position mapped to a single output?


junyanz commented on June 23, 2024

Answered at #1106.
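For concreteness, the 30x30 output size follows directly from the layer hyperparameters. A small sketch, assuming the default 5-conv discriminator stack used in the receptive-field calculation earlier in this thread (4x4 kernels, padding 1, strides 2, 2, 2, 1, 1):

```python
def out_size(i, k, p, s):
    # o = floor((i + 2p - k) / s) + 1
    return (i + 2 * p - k) // s + 1

i = 256
for s in [2, 2, 2, 1, 1]:
    i = out_size(i, k=4, p=1, s=s)
print(i)  # 30
```

Since the strides multiply to 8, neighboring outputs correspond to 70x70 windows shifted by only 8 input pixels, so the patches overlap heavily rather than the window taking 30 disjoint steps per direction.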


yfwang-master commented on June 23, 2024

Hello phillipi,
I am wondering whether 'padding' is necessary in the conv layers?


phillipi commented on June 23, 2024

I doubt it has a big effect. You could try removing it and see what happens.


yfwang-master commented on June 23, 2024

Thanks. I also wonder whether the 'PatchGAN' discriminator (in fact a convnet, as you explained) still works when applied to 3-D data (C-H-W-L, i.e. 4-D tensors in the code)? If so, we would use conv3d() instead, right? And would this '3-D PatchGAN' then discriminate whether local regions of the 3-D volume are real or fake?


johndpope commented on June 23, 2024

thanks @emilwallner
[screenshot]


emcrobert commented on June 23, 2024

The one thing I'm struggling to understand is that the discriminator looks at 70x70 patches. But if I understand correctly, its input is the conditional image concatenated with either the real image or the synthesized image. So if it's only looking at small patches at a time, how does it learn the relationship between the two images? How does it check that the conditional input has actually informed the generated image?


junyanz commented on June 23, 2024

Most of the applications used in the paper only require local color and texture transfer. In these cases, 70x70 patches might be enough (for a 256x input image). Later work (e.g., pix2pixHD) has explored using multi-scale discriminators, which can look at more pixels.


yearep7 commented on June 23, 2024

If this structure is added to the generator, will it have a good effect? Is there any ablation experiment in this regard?

