Comments (10)

mseitzer commented on July 17, 2024

Hi,

I made a similar script to yours, and I get nearly identical feature maps between Pytorch and Tensorflow for identical inputs.

I don't think you are using the Inception model from this repository with my latest changes, but rather the original Inception from torchvision. That implementation does not correspond to the one used in Tensorflow's FID. Could you please run your tests again, but with the newest model (and weights) provided here?
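
For reference, here is a minimal sketch of the kind of comparison script I mean, assuming this repository's inception.py with its InceptionV3 wrapper and BLOCK_INDEX_BY_DIM mapping (check your local copy, the details may differ):

```python
# Minimal sketch: extract pool3 features with the patched model
# (InceptionV3 and BLOCK_INDEX_BY_DIM are assumed from this repo's inception.py)
import torch
from inception import InceptionV3

block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[2048]  # pool3 = 2048-dim block
model = InceptionV3([block_idx])
model.eval()

# identical input for both frameworks: NCHW float32 in [0, 1]
x = torch.rand(1, 3, 299, 299)

with torch.no_grad():
    pool3 = model(x)[0].squeeze()  # shape (2048,)

print('=> Pytorch pool3:')
print(pool3[:6].numpy())
```

Feeding the same image through the Tensorflow graph should then give nearly identical pool3 values.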

AtlantixJJ commented on July 17, 2024

I downloaded your model and ran the script with the command python test_inception.py --load_path <your_model>, so I did use your weights.
Could you please provide your script?

AtlantixJJ commented on July 17, 2024

What's more, in my results the difference is concentrated at the edges. Perhaps the padding is not consistent.
(Attachment: tf_pytorch_error.pdf, a plot of the absolute difference between the two feature maps.)

AtlantixJJ commented on July 17, 2024

Sorry, you are right. I didn't use your model. Now my results are:

=> Pytorch pool3:
[0.13110925 0.5254472  0.22908828 0.02930989 0.24280292 0.51165456]
=> Tensorflow pool3:
[0.1308305  0.52602327 0.22930455 0.02920746 0.24395223 0.5096492 ]
=> Mean abs difference
0.0011506503

I believe your repo is correct, but I am wondering why the official Pytorch Inception is different. Is the official network architecture different from yours? I ported the Tensorflow weights directly, so the weights I used should be exactly the same.

mseitzer commented on July 17, 2024

Yes, I discovered that there are some minor differences between the Pytorch implementation and the Inception model used by FID. There are different Inception graphs/weights published. If you open the graph in Tensorboard, you can see some differences.

You can see the necessary changes to the model in this commit: f64228c. The changes are marked in comments beginning with Patch.
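
To give a feel for what those patches look like, here is a rough sketch in the style of one of them, assuming torchvision's InceptionA (the class name and exact code here are my illustration, not the commit itself):

```python
# Illustration of the patch style: subclass torchvision's InceptionA and
# exclude padded zeros from the branch-pool average, as TF's graph does
import torch
import torch.nn.functional as F
from torchvision.models.inception import InceptionA

class PatchedInceptionA(InceptionA):
    def forward(self, x):
        branch1x1 = self.branch1x1(x)
        branch5x5 = self.branch5x5_2(self.branch5x5_1(x))
        branch3x3dbl = self.branch3x3dbl_3(
            self.branch3x3dbl_2(self.branch3x3dbl_1(x)))
        # Patch: count_include_pad=False matches TF's average pooling
        branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1,
                                   count_include_pad=False)
        branch_pool = self.branch_pool(branch_pool)
        return torch.cat([branch1x1, branch5x5, branch3x3dbl, branch_pool], 1)
```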

By the way, I think you can get an even lower mean difference if you use exactly the same input tensor for both Tensorflow and Pytorch. So resize your image only once, with PIL, and then skip the resize in the Tensorflow graph (feed the input key FID_Inception_Net/Mul:0, but be aware that this requires your inputs to be in the range [-1, 1]).
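
Something along these lines should work (a TF1-style sketch; the graph file and the pool_3 tensor name follow TTUR's usual setup and are assumptions on my part):

```python
# Sketch: resize once with PIL, then bypass the graph's internal resize by
# feeding FID_Inception_Net/Mul:0 directly with values in [-1, 1]
import numpy as np
import tensorflow as tf  # TF1-style API
from PIL import Image

with tf.gfile.FastGFile('classify_image_graph_def.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='FID_Inception_Net')

img = Image.open('sample.png').convert('RGB').resize((299, 299), Image.BILINEAR)
batch = np.asarray(img, dtype=np.float32)[None] / 127.5 - 1.0  # NHWC, [-1, 1]

with tf.Session() as sess:
    pool3 = sess.run('FID_Inception_Net/pool_3:0',
                     {'FID_Inception_Net/Mul:0': batch})
print(pool3.squeeze()[:6])
```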

AtlantixJJ commented on July 17, 2024

Yes, it is now at 1e-5. Thank you for your work!

I also discovered where the difference in my network comes from. The TF InceptionA module's average pooling does not count the padded zeros at the edges, while Pytorch counts them. Also, in the last layer, TF's implementation uses max pooling (which is incorrect) rather than the average pooling used in Pytorch.
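
Both differences are easy to reproduce with plain torch.nn.functional calls (a small self-contained demo; the layer context is described above, the code is mine):

```python
# Demo of the two pooling discrepancies between TF's FID graph and torchvision
import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 8, 8)

# 1) InceptionA branch pool: torchvision includes padded zeros in the
#    average, TF's graph excludes them; only border values differ,
#    matching the edge-concentrated error above
with_pad = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
without_pad = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1,
                           count_include_pad=False)
print((with_pad - without_pad).abs()[0, 0])  # nonzero only at the edges

# 2) last-layer branch pool: TF's graph uses max pooling where
#    torchvision uses average pooling
avg = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
mx = F.max_pool2d(x, kernel_size=3, stride=1, padding=1)
print((avg - mx).abs().mean())
```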

AtlantixJJ commented on July 17, 2024

@mseitzer But do you really think we should reproduce TTUR's results exactly? According to your patch, they do not use the correct Inception code. Maybe the official Pytorch weights are better for evaluation.

mseitzer commented on July 17, 2024

Well, I don't think there is much choice here. The authors of FID chose this version of Inception, and to stay comparable, everyone has to use this version now. I guess the FID authors just took the same model that Inception Score uses: https://github.com/openai/improved-gan/blob/master/inception_score/model.py

Also, this Inception model is not "wrong"; it just does not directly correspond to the paper. Its feature maps should still contain what the authors of FID and Inception Score want to measure.

But I do think it is a bit concerning that the FID score gives wildly different results if you use a different version of Inception. In my opinion, the metric should not be sensitive to that.

mseitzer commented on July 17, 2024

Fixed by #16

vibss2397 commented on July 17, 2024

Hey @AtlantixJJ, can you tell me what change you made to the test_inception.py script that led to better results?

P.S. Never mind, the images I was feeding in were corrupted, which is why I was getting wrong values. It works perfectly, thanks 💯
