
PyTorch - FID calculation with proper image resizing and quantization steps [CVPR 2022]

Home Page: https://www.cs.cmu.edu/~clean-fid/

License: MIT License



clean-fid for Evaluating Generative Models



Project | Paper | Slides | Colab-FID | Colab-Resize | Leaderboard Tables
Quick start: Calculate FID | Calculate KID

[New] Computing the FID using CLIP features [Kynkäänniemi et al, 2022] is now supported. See here for more details.

The FID calculation involves many steps that can produce inconsistencies in the final metric. As shown below, different implementations use different low-level image quantization and resizing functions, the latter of which are often implemented incorrectly.

We provide an easy-to-use library to address the above issues and make the FID scores comparable across different methods, papers, and groups.

FID Steps


Corresponding Manuscript

On Aliased Resizing and Surprising Subtleties in GAN Evaluation
Gaurav Parmar, Richard Zhang, Jun-Yan Zhu
CVPR, 2022
CMU and Adobe

If you find this repository useful for your research, please cite the following work.

@inproceedings{parmar2021cleanfid,
  title={On Aliased Resizing and Surprising Subtleties in GAN Evaluation},
  author={Parmar, Gaurav and Zhang, Richard and Zhu, Jun-Yan},
  booktitle={CVPR},
  year={2022}
}


Aliased Resizing Operations

Resizing functions are mathematically defined and should not depend on the library being used. Unfortunately, implementations differ across commonly used libraries, and several popular ones implement resizing incorrectly. Try out the different resizing implementations in the Google Colab notebook here.


The inconsistencies among implementations can have a drastic effect on the evaluation metrics. The table below shows that FFHQ dataset images resized with the bicubic implementation from other libraries (OpenCV, PyTorch, TensorFlow) have a large FID score (≥ 6) when compared to the same images resized with the correctly implemented PIL-bicubic filter. Other correctly implemented filters from PIL (Lanczos, bilinear, box) all result in relatively small FID scores (≤ 0.75). Note that since TF 2.0, the new flag antialias (default: False) can produce results close to PIL. However, it was not used in the existing TF-FID repo and is left at its default of False.
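The kind of error an aliased resize introduces can be seen without any FID machinery. Below is a small illustrative numpy sketch (ours, not the library's code): naive strided subsampling of a high-frequency pattern, which is effectively what an aliased resizer does, destroys the signal, while a box-filtered (prefiltered) resize preserves the image's average intensity.

```python
import numpy as np

# A 256x256 image with the highest possible horizontal frequency:
# columns alternate between 0 and 255.
img = np.tile((np.arange(256) % 2) * 255.0, (256, 1))

# "Aliased" 8x downsampling: pick every 8th pixel, no prefiltering.
naive = img[::8, ::8]

# Antialiased 8x downsampling: average over each 8x8 block (a box filter).
box = img.reshape(32, 8, 32, 8).mean(axis=(1, 3))

print(naive.mean())  # 0.0   -- the pattern aliases to solid black
print(box.mean())    # 127.5 -- the correct average intensity is preserved
```

The naive subsampler lands on even columns only and reports an all-black image; the prefiltered version keeps the correct mean brightness, which is why the choice of resizer changes the Inception features and hence the FID.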

JPEG Image Compression

Image compression can have a surprisingly large effect on FID. Images that are perceptually indistinguishable from each other can still have a large FID score. The FID scores under the images are calculated between all FFHQ images saved using the corresponding JPEG format and the PNG format.

Below, we study the effect of JPEG compression for StyleGAN2 models trained on the FFHQ dataset (left) and the LSUN outdoor Church dataset (right). Note that LSUN dataset images were collected with JPEG compression (quality 75), whereas FFHQ images were collected as PNG. Interestingly, for the LSUN dataset, the best FID score (3.48) is obtained when the generated images are compressed with JPEG quality 87.
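To see how lossy JPEG encoding perturbs pixels even at high quality settings, one can round-trip an image through JPEG in memory. A small illustrative sketch (assuming Pillow is installed; the quality values are arbitrary):

```python
import io

import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
img = Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))

def jpeg_roundtrip(image, quality):
    """Encode to JPEG at the given quality and decode back to a float array."""
    buf = io.BytesIO()
    image.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf), dtype=np.float64)

ref = np.asarray(img, dtype=np.float64)
err75 = np.abs(jpeg_roundtrip(img, 75) - ref).mean()  # mean absolute pixel error at quality 75
err95 = np.abs(jpeg_roundtrip(img, 95) - ref).mean()  # smaller, but still non-zero
```

These per-pixel perturbations are what make the FID between JPEG- and PNG-saved copies of the same dataset non-zero.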


Quick Start

  • Install the library
    pip install clean-fid
    

Computing FID

  • Compute FID between two image folders
    from cleanfid import fid
    score = fid.compute_fid(fdir1, fdir2)
    
  • Compute FID between one folder of images and pre-computed datasets statistics (e.g., FFHQ)
    from cleanfid import fid
    score = fid.compute_fid(fdir1, dataset_name="FFHQ", dataset_res=1024, dataset_split="trainval70k")
    
  • Compute FID using a generative model and pre-computed dataset statistics:
    from cleanfid import fid
    # function that accepts a latent and returns an image in range[0,255]
    gen = lambda z: GAN(latent=z, ... , <other_flags>)
    score = fid.compute_fid(gen=gen, dataset_name="FFHQ",
            dataset_res=256, num_gen=50_000, dataset_split="trainval70k")
    

Computing CLIP-FID

To use the CLIP features when computing the FID [Kynkäänniemi et al, 2022], specify the flag model_name="clip_vit_b_32"

  • e.g., to compute the CLIP-FID between two folders of images, use the following command:
    from cleanfid import fid
    score = fid.compute_fid(fdir1, fdir2, mode="clean", model_name="clip_vit_b_32")
    

Computing KID

The KID score can be computed using an interface similar to FID's. The dataset statistics for KID are precomputed only for the smaller datasets AFHQ, BreCaHAD, and MetFaces.

  • Compute KID between two image folders
    from cleanfid import fid
    score = fid.compute_kid(fdir1, fdir2)
    
  • Compute KID between one folder of images and pre-computed datasets statistics
    from cleanfid import fid
    score = fid.compute_kid(fdir1, dataset_name="brecahad", dataset_res=512, dataset_split="train")
    
  • Compute KID using a generative model and pre-computed dataset statistics:
    from cleanfid import fid
    # function that accepts a latent and returns an image in range[0,255]
    gen = lambda z: GAN(latent=z, ... , <other_flags>)
    score = fid.compute_kid(gen=gen, dataset_name="brecahad", dataset_res=512, num_gen=50_000, dataset_split="train")
    

Supported Precomputed Datasets

We provide precomputed statistics for the following commonly used configurations. Please contact us if you want us to add statistics for your new datasets.

| Task | Dataset | Resolution | Reference Split | # Reference Images | mode |
|---|---|---|---|---|---|
| Image Generation | cifar10 | 32 | train | 50,000 | clean, legacy_tensorflow, legacy_pytorch |
| Image Generation | cifar10 | 32 | test | 10,000 | clean, legacy_tensorflow, legacy_pytorch |
| Image Generation | ffhq | 1024, 256 | trainval | 50,000 | clean, legacy_tensorflow, legacy_pytorch |
| Image Generation | ffhq | 1024, 256 | trainval70k | 70,000 | clean, legacy_tensorflow, legacy_pytorch |
| Image Generation | lsun_church | 256 | train | 50,000 | clean, legacy_tensorflow, legacy_pytorch |
| Image Generation | lsun_church | 256 | trainfull | 126,227 | clean |
| Image Generation | lsun_horse | 256 | train | 50,000 | clean, legacy_tensorflow, legacy_pytorch |
| Image Generation | lsun_horse | 256 | trainfull | 2,000,340 | clean |
| Image Generation | lsun_cat | 256 | train | 50,000 | clean, legacy_tensorflow, legacy_pytorch |
| Image Generation | lsun_cat | 256 | trainfull | 1,657,264 | clean, legacy_tensorflow, legacy_pytorch |
| Few Shot Generation | afhq_cat | 512 | train | 5153 | clean, legacy_tensorflow, legacy_pytorch |
| Few Shot Generation | afhq_dog | 512 | train | 4739 | clean, legacy_tensorflow, legacy_pytorch |
| Few Shot Generation | afhq_wild | 512 | train | 4738 | clean, legacy_tensorflow, legacy_pytorch |
| Few Shot Generation | brecahad | 512 | train | 1944 | clean, legacy_tensorflow, legacy_pytorch |
| Few Shot Generation | metfaces | 1024 | train | 1336 | clean, legacy_tensorflow, legacy_pytorch |
| Image to Image | horse2zebra | 256 | test | 140 | clean, legacy_tensorflow, legacy_pytorch |
| Image to Image | cat2dog | 256 | test | 500 | clean, legacy_tensorflow, legacy_pytorch |

Using precomputed statistics: To compute the FID score with the precomputed dataset statistics, use the corresponding options. For instance, to compute the clean-fid score on generated 256x256 FFHQ images, use the command:

fid_score = fid.compute_fid(fdir1, dataset_name="ffhq", dataset_res=256,  mode="clean", dataset_split="trainval70k")

Create Custom Dataset Statistics

  • dataset_path: folder where the dataset images are stored

  • custom_name: name to be used for the statistics

  • Generating custom statistics (saved to local cache)

    from cleanfid import fid
    fid.make_custom_stats(custom_name, dataset_path, mode="clean")
    
  • Using the generated custom statistics

    from cleanfid import fid
    score = fid.compute_fid("folder_fake", dataset_name=custom_name,
              mode="clean", dataset_split="custom")
    
  • Removing the custom stats

    from cleanfid import fid
    fid.remove_custom_stats(custom_name, mode="clean")
    
  • Check if a custom statistic already exists

    from cleanfid import fid
    fid.test_stats_exists(custom_name, mode)
    

Backwards Compatibility

We provide two flags to reproduce the legacy FID score.

  • mode="legacy_pytorch"
    This flag is equivalent to using the popular PyTorch FID implementation provided here.
    The difference between clean-fid with this option and that code is ~2e-06.
    See doc for how the methods are compared.

  • mode="legacy_tensorflow"
    This flag is equivalent to using the official implementation of FID released by the authors.
    The difference between clean-fid with this option and that code is ~2e-05.
    See doc for detailed steps on how the methods are compared.


Building clean-fid locally from source

python setup.py bdist_wheel
pip install dist/*

CleanFID Leaderboard for common tasks

We compute the FID scores using the corresponding methods used in the original papers and using the Clean-FID proposed here. All values are computed using 10 evaluation runs. We provide an API to query the results shown in the tables below directly from the pip package.

If you would like to add new numbers and models to our leaderboard, feel free to contact us.

CIFAR-10 (few shot)

The test set is used as the reference distribution and compared to 10k generated images.

100% data (unconditional)

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2 (+ada + tuning) [Karras et al, 2020] | - † | - † | 8.20 ± 0.10 |
| stylegan2 (+ada) [Karras et al, 2020] | - † | - † | 9.26 ± 0.06 |
| stylegan2 (diff-augment) [Zhao et al, 2020] [ckpt] | 9.89 | 9.90 ± 0.09 | 10.85 ± 0.10 |
| stylegan2 (mirror-flips) [Karras et al, 2020] [ckpt] | 11.07 | 11.07 ± 0.10 | 12.96 ± 0.07 |
| stylegan2 (without-flips) [Karras et al, 2020] | - † | - † | 14.53 ± 0.13 |
| AutoGAN (config A) [Gong et al, 2019] | - † | - † | 21.18 ± 0.12 |
| AutoGAN (config B) [Gong et al, 2019] | - † | - † | 22.46 ± 0.15 |
| AutoGAN (config C) [Gong et al, 2019] | - † | - † | 23.62 ± 0.30 |

† These methods use the training set as the reference distribution and compare to 50k generated images

20% data

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2-diff-augment [Zhao et al, 2020] [ckpt] | 12.15 | 12.12 ± 0.15 | 14.18 ± 0.13 |
| stylegan2-mirror-flips [Karras et al, 2020] [ckpt] | 23.08 | 23.01 ± 0.19 | 29.49 ± 0.17 |

10% data

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2-diff-augment [Zhao et al, 2020] [ckpt] | 14.50 | 14.53 ± 0.12 | 16.98 ± 0.18 |
| stylegan2-mirror-flips [Karras et al, 2020] [ckpt] | 36.02 | 35.94 ± 0.17 | 43.60 ± 0.17 |

CIFAR-100 (few shot)

The test set is used as the reference distribution and compared to 10k generated images.

100% data

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2-mirror-flips [Karras et al, 2020] [ckpt] | 16.54 | 16.44 ± 0.19 | 18.44 ± 0.24 |
| stylegan2-diff-augment [Zhao et al, 2020] [ckpt] | 15.22 | 15.15 ± 0.13 | 16.80 ± 0.13 |

20% data

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2-mirror-flips [Karras et al, 2020] [ckpt] | 32.30 | 32.26 ± 0.19 | 34.88 ± 0.14 |
| stylegan2-diff-augment [Zhao et al, 2020] [ckpt] | 16.65 | 16.74 ± 0.10 | 18.49 ± 0.08 |

10% data

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2-mirror-flips [Karras et al, 2020] [ckpt] | 45.87 | 45.97 ± 0.20 | 46.77 ± 0.19 |
| stylegan2-diff-augment [Zhao et al, 2020] [ckpt] | 20.75 | 20.69 ± 0.12 | 23.40 ± 0.09 |

FFHQ

all images @ 1024x1024
Values are computed using 50k generated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID | Reference Split |
|---|---|---|---|---|
| stylegan1 (config A) [Karras et al, 2020] | 4.4 | 4.39 ± 0.03 | 4.77 ± 0.03 | trainval |
| stylegan2 (config B) [Karras et al, 2020] | 4.39 | 4.43 ± 0.03 | 4.89 ± 0.03 | trainval |
| stylegan2 (config C) [Karras et al, 2020] | 4.38 | 4.40 ± 0.02 | 4.79 ± 0.02 | trainval |
| stylegan2 (config D) [Karras et al, 2020] | 4.34 | 4.34 ± 0.02 | 4.78 ± 0.03 | trainval |
| stylegan2 (config E) [Karras et al, 2020] | 3.31 | 3.33 ± 0.02 | 3.79 ± 0.02 | trainval |
| stylegan2 (config F) [Karras et al, 2020] [ckpt] | 2.84 | 2.83 ± 0.03 | 3.06 ± 0.02 | trainval |
| stylegan2 [Karras et al, 2020] [ckpt] | N/A | 2.76 ± 0.03 | 2.98 ± 0.03 | trainval70k |

140k images @ 256x256 (entire training set with horizontal flips)
The 70k images from the trainval70k split are used as the reference images and compared to 50k generated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| zCR [Zhao et al, 2020] | 3.45 ± 0.19 | 3.29 ± 0.01 | 3.40 ± 0.01 |
| stylegan2 [Karras et al, 2020] | 3.66 ± 0.10 | 3.57 ± 0.03 | 3.73 ± 0.03 |
| PA-GAN [Zhang and Khoreva et al, 2019] | 3.78 ± 0.06 | 3.67 ± 0.03 | 3.81 ± 0.03 |
| stylegan2-ada [Karras et al, 2020] | 3.88 ± 0.13 | 3.84 ± 0.02 | 3.93 ± 0.02 |
| Auxiliary rotation [Chen et al, 2019] | 4.16 ± 0.05 | 4.10 ± 0.02 | 4.29 ± 0.03 |
| Adaptive Dropout [Karras et al, 2020] | 4.16 ± 0.05 | 4.09 ± 0.02 | 4.20 ± 0.02 |
| Spectral Norm [Miyato et al, 2018] | 4.60 ± 0.19 | 4.43 ± 0.02 | 4.65 ± 0.02 |
| WGAN-GP [Gulrajani et al, 2017] | 6.54 ± 0.37 | 6.19 ± 0.03 | 6.62 ± 0.03 |

† reported by [Karras et al, 2020]

30k images @ 256x256 (Few Shot Generation)
The 70k images from the trainval70k split are used as the reference images and compared to 50k generated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2 [Karras et al, 2020] [ckpt] | 6.16 | 6.14 ± 0.064 | 6.49 ± 0.068 |
| DiffAugment-stylegan2 [Zhao et al, 2020] [ckpt] | 5.05 | 5.07 ± 0.030 | 5.18 ± 0.032 |

10k images @ 256x256 (Few Shot Generation)
The 70k images from the trainval70k split are used as the reference images and compared to 50k generated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2 [Karras et al, 2020] [ckpt] | 14.75 | 14.88 ± 0.070 | 16.04 ± 0.078 |
| DiffAugment-stylegan2 [Zhao et al, 2020] [ckpt] | 7.86 | 7.82 ± 0.045 | 8.12 ± 0.044 |

5k images @ 256x256 (Few Shot Generation)
The 70k images from the trainval70k split are used as the reference images and compared to 50k generated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2 [Karras et al, 2020] [ckpt] | 26.60 | 26.64 ± 0.086 | 28.17 ± 0.090 |
| DiffAugment-stylegan2 [Zhao et al, 2020] [ckpt] | 10.45 | 10.45 ± 0.047 | 10.99 ± 0.050 |

1k images @ 256x256 (Few Shot Generation)
The 70k images from the trainval70k split are used as the reference images and compared to 50k generated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2 [Karras et al, 2020] [ckpt] | 62.16 | 62.14 ± 0.108 | 64.17 ± 0.113 |
| DiffAugment-stylegan2 [Zhao et al, 2020] [ckpt] | 25.66 | 25.60 ± 0.071 | 27.26 ± 0.077 |

LSUN Categories

100% data
The 50k images from the train split are used as the reference images and compared to 50k generated images.

| Category | Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|---|
| Outdoor Churches | stylegan2 [Karras et al, 2020] [ckpt] | 3.86 | 3.87 ± 0.029 | 4.08 ± 0.028 |
| Horses | stylegan2 [Karras et al, 2020] [ckpt] | 3.43 | 3.41 ± 0.021 | 3.62 ± 0.023 |
| Cat | stylegan2 [Karras et al, 2020] [ckpt] | 6.93 | 7.02 ± 0.039 | 7.47 ± 0.035 |

LSUN CAT - 30k images (Few Shot Generation)
All 1,657,264 images from the trainfull split are used as the reference images and compared to 50k generated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2-mirror-flips [Karras et al, 2020] [ckpt] | 10.12 | 10.15 ± 0.04 | 10.87 ± 0.04 |
| stylegan2-diff-augment [Zhao et al, 2020] [ckpt] | 9.68 | 9.70 ± 0.07 | 10.25 ± 0.07 |

LSUN CAT - 10k images (Few Shot Generation)
All 1,657,264 images from the trainfull split are used as the reference images and compared to 50k generated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2-mirror-flips [Karras et al, 2020] [ckpt] | 17.93 | 17.98 ± 0.09 | 18.71 ± 0.09 |
| stylegan2-diff-augment [Zhao et al, 2020] [ckpt] | 12.07 | 12.04 ± 0.08 | 12.53 ± 0.08 |

LSUN CAT - 5k images (Few Shot Generation)
All 1,657,264 images from the trainfull split are used as the reference images and compared to 50k generated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2-mirror-flips [Karras et al, 2020] [ckpt] | 34.69 | 34.66 ± 0.12 | 35.85 ± 0.12 |
| stylegan2-diff-augment [Zhao et al, 2020] [ckpt] | 16.11 | 16.11 ± 0.09 | 16.79 ± 0.09 |

LSUN CAT - 1k images (Few Shot Generation)
All 1,657,264 images from the trainfull split are used as the reference images and compared to 50k generated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2-mirror-flips [Karras et al, 2020] [ckpt] | 182.85 | 182.80 ± 0.21 | 185.86 ± 0.21 |
| stylegan2-diff-augment [Zhao et al, 2020] [ckpt] | 42.26 | 42.07 ± 0.16 | 43.12 ± 0.16 |

AFHQ (Few Shot Generation)

AFHQ Dog
All 4739 images from the train split are used as the reference images and compared to 50k generated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2 [Karras et al, 2020] [ckpt] | 19.37 | 19.34 ± 0.08 | 20.10 ± 0.08 |
| stylegan2-ada [Karras et al, 2020] [ckpt] | 7.40 | 7.41 ± 0.02 | 7.61 ± 0.02 |

AFHQ Wild
All 4738 images from the train split are used as the reference images and compared to 50k generated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| stylegan2 [Karras et al, 2020] [ckpt] | 3.48 | 3.55 ± 0.03 | 3.66 ± 0.02 |
| stylegan2-ada [Karras et al, 2020] [ckpt] | 3.05 | 3.01 ± 0.02 | 3.03 ± 0.02 |

BreCaHAD (Few Shot Generation)

All 1944 images from the train split are used as the reference images and compared to 50k generated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID | Legacy-KID ×10³ (reported) | Legacy-KID ×10³ (reproduced) | Clean-KID ×10³ |
|---|---|---|---|---|---|---|
| stylegan2 [Karras et al, 2020] [ckpt] | 97.72 | 97.46 ± 0.17 | 98.35 ± 0.17 | 89.76 | 89.90 ± 0.31 | 92.51 ± 0.32 |
| stylegan2-ada [Karras et al, 2020] [ckpt] | 15.71 | 15.70 ± 0.06 | 15.63 ± 0.06 | 2.88 | 2.93 ± 0.08 | 3.08 ± 0.08 |

MetFaces (Few Shot Generation)

All 1336 images from the train split are used as the reference images and compared to 50k generated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID | Legacy-KID ×10³ (reported) | Legacy-KID ×10³ (reproduced) | Clean-KID ×10³ |
|---|---|---|---|---|---|---|
| stylegan2 [Karras et al, 2020] [ckpt] | 57.26 | 57.36 ± 0.10 | 65.74 ± 0.11 | 35.66 | 35.69 ± 0.16 | 40.90 ± 0.14 |
| stylegan2-ada [Karras et al, 2020] [ckpt] | 18.22 | 18.18 ± 0.03 | 19.60 ± 0.03 | 2.41 | 2.38 ± 0.05 | 2.86 ± 0.04 |

Horse2Zebra (Image to Image Translation)

All 140 images from the test split are used as the reference images and compared to 120 translated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| CUT [Park et al, 2020] | 45.5 | 45.51 | 43.71 |
| Distance [Benaim and Wolf et al, 2017], reported by [Park et al, 2020] | 72.0 | 71.97 | 71.01 |
| FastCUT [Park et al, 2020] | 73.4 | 73.38 | 72.53 |
| CycleGAN [Zhu et al, 2017], reported by [Park et al, 2020] | 77.2 | 77.20 | 75.17 |
| SelfDistance [Benaim and Wolf et al, 2017], reported by [Park et al, 2020] | 80.8 | 80.78 | 79.28 |
| GCGAN [Fu et al, 2019], reported by [Park et al, 2020] | 86.7 | 85.86 | 83.65 |
| MUNIT [Huang et al, 2018], reported by [Park et al, 2020] | 133.8 | - † | 120.48 |
| DRIT [Lee et al, 2017], reported by [Park et al, 2020] | 140.0 | - † | 99.56 |

† The translated images for these methods were initially compared by [Park et al, 2020] using JPEG compression. We retrain these two methods using the same protocol and generate the images as PNG for a fair comparison.


Cat2Dog (Image to Image Translation)

All 500 images from the test split are used as the reference images and compared to 500 translated images.

| Model | Legacy-FID (reported) | Legacy-FID (reproduced) | Clean-FID |
|---|---|---|---|
| CUT [Park et al, 2020] | 76.2 | 76.21 | 77.58 |
| FastCUT [Park et al, 2020] | 94.0 | 93.95 | 95.37 |
| GCGAN [Fu et al, 2019], reported by [Park et al, 2020] | 96.6 | 96.61 | 96.49 |
| MUNIT [Huang et al, 2018], reported by [Park et al, 2020] | 104.4 | - † | 123.73 |
| DRIT [Lee et al, 2017], reported by [Park et al, 2020] | 123.4 | - † | 127.21 |
| SelfDistance [Benaim and Wolf et al, 2017], reported by [Park et al, 2020] | 144.4 | 144.42 | 147.23 |
| Distance [Benaim and Wolf et al, 2017], reported by [Park et al, 2020] | 155.3 | 155.34 | 158.39 |

† The translated images for these methods were initially compared by [Park et al, 2020] using JPEG compression. We retrain these two methods using the same protocol and generate the images as PNG for a fair comparison.


Related Projects

torch-fidelity: High-fidelity performance metrics for generative models in PyTorch.
TTUR: Two time-scale update rule for training GANs.
LPIPS: Perceptual Similarity Metric and Dataset.


Licenses

All material in this repository is made available under the MIT License.

inception_pytorch.py is derived from the PyTorch implementation of FID provided by Maximilian Seitzer. These files were originally shared under the Apache 2.0 License.

inception-2015-12-05.pt is a torchscript model of the pre-trained Inception-v3 network by Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. The network was originally shared under the Apache 2.0 license on the TensorFlow Models repository. The torchscript wrapper is provided by Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila, and is released under the Nvidia Source Code License.

clean-fid's People

Contributors

bruce-willis, chenwu98, dave-epstein, erenyesilyurt, gaparmar, junyanz, lmxyy, nupurkmr9, shlomostept, tiankaihang


clean-fid's Issues

A possible bug

Hi Gaurav, thanks for sharing this amazing tool! I spotted a block of suspicious lines that might be worth your attention. Specifically, in this resizing function of the default "clean" resizer:

elif library == "PIL" and not quantize_after:
    s1, s2 = output_size
    def resize_single_channel(x_np):
        img = Image.fromarray(x_np.astype(np.float32), mode='F')
        img = img.resize(output_size, resample=dict_name_to_filter[library][filter])
        return np.asarray(img).reshape(s1, s2, 1)
    def func(x):
        x = [resize_single_channel(x[:, :, idx]) for idx in range(3)]
        x = np.concatenate(x, axis=2).astype(np.float32)
        return x

It seems to me that output_size in L47 (s1, s2) is supposed to be a (w, h) tuple, while L48 expects it as (h, w). What do you think? This might mean that the default resizer only works for square output resolutions.

Shouldn't be a big problem since it does not affect default behavior.
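For context, the two conventions the report refers to are easy to mix up: PIL sizes are (width, height), while numpy array shapes are (height, width). A quick check:

```python
import numpy as np
from PIL import Image

arr = np.zeros((10, 20), dtype=np.float32)       # numpy shape: (height, width)
img = Image.fromarray(arr, mode="F")
print(img.size)                                   # PIL size: (width, height) -> (20, 10)

resized = img.resize((5, 4), resample=Image.BILINEAR)  # resize() takes (width, height)
print(np.asarray(resized).shape)                  # back to numpy: (height, width) -> (4, 5)
```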

Images in a big numpy npy file rather than in a folder

Thank you for your great work!! It's really helpful.

I wonder if we can calculate the FID between a numpy file (.npy) that contains an array of shape (B, C, H, W) and pre-computed dataset statistics?

Massive thanks in advance.
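The library's folder-based interface shown in the quick start does not read .npy files directly, but a simple workaround (a sketch, assuming the array holds uint8 images in (B, C, H, W) layout; `npy_to_png_folder` is our own illustrative helper, not part of cleanfid) is to dump the array to a folder of PNGs first:

```python
import os
import tempfile

import numpy as np
from PIL import Image

def npy_to_png_folder(npy_path, out_dir=None):
    """Write a (B, C, H, W) uint8 array stored in a .npy file to out_dir as PNGs."""
    arr = np.load(npy_path)
    out_dir = out_dir or tempfile.mkdtemp()
    for i, img_chw in enumerate(arr):
        # (C, H, W) -> (H, W, C) for PIL
        Image.fromarray(img_chw.transpose(1, 2, 0)).save(
            os.path.join(out_dir, f"{i:06d}.png"))
    return out_dir

# Afterwards the usual folder interface applies, e.g.:
# from cleanfid import fid
# score = fid.compute_fid(npy_to_png_folder("images.npy"), dataset_name="FFHQ",
#                         dataset_res=1024, dataset_split="trainval70k")
```

Saving as PNG (lossless) rather than JPEG matters here, for the compression reasons discussed earlier in the README.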

resize function that can backpropagate gradient

Dear clean-fid group,

Thank you for sharing this great work, I really like it.

You mention in the paper that the PIL implementation is anti-aliased, but the PIL library cannot backpropagate gradients. Do you plan to implement a resize function that supports both anti-aliasing and backpropagation? Or could you point me to the right way to do this?

I believe this would be very useful for the community. For example, when we invert a real image into the latent space of a GAN, we usually do not use the full-resolution image. The generated image must be downsampled and then compared to the real image (LPIPS), and the gradient backpropagated. This is the implementation in Stylegan-ada (https://github.com/NVlabs/stylegan2-ada-pytorch/blob/main/projector.py#L97); it uses the torch resize function, which does not anti-alias, as you mentioned in the paper.

Thank you for your help.

Best Wishes,

Alex

A better way to compute the FID

Hello, I think the following implementation of the Fréchet distance is faster than the current one and would allow dropping the scipy dependency.

import torch
from torch import Tensor

def frechet_distance(mu_x: Tensor, sigma_x: Tensor, mu_y: Tensor, sigma_y: Tensor) -> Tensor:
    a = (mu_x - mu_y).square().sum(dim=-1)
    b = sigma_x.trace() + sigma_y.trace()
    c = torch.linalg.eigvals(sigma_x @ sigma_y).sqrt().real.sum(dim=-1)

    return a + b - 2 * c

The implementation is based on two facts:

  1. The trace of $A$ equals the sum of its eigenvalues.
  2. The eigenvalues of $\sqrt{A}$ are the square-roots of the eigenvalues of $A$.
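Both facts are straightforward to verify numerically; a quick numpy check (independent of the proposal above):

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a random symmetric positive-definite matrix.
m = rng.standard_normal((4, 4))
a = m @ m.T + 4 * np.eye(4)

eigvals = np.linalg.eigvalsh(a)

# Fact 1: trace(A) equals the sum of A's eigenvalues.
print(np.isclose(np.trace(a), eigvals.sum()))   # True

# Fact 2: the eigenvalues of sqrt(A) are the square roots of A's eigenvalues.
w, v = np.linalg.eigh(a)
sqrt_a = (v * np.sqrt(w)) @ v.T                 # matrix square root via eigendecomposition
print(np.allclose(np.sort(np.linalg.eigvalsh(sqrt_a)),
                  np.sort(np.sqrt(eigvals))))   # True
```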

Cannot compute fid for a generator, using images in fdir2.

This code is from fid.py line 459-470.

elif gen is not None:
    if not verbose:
        print(f"compute FID of a model with {dataset_name}-{dataset_res} statistics")
    score = fid_model(gen, dataset_name, dataset_res, dataset_split,
            model=feat_model, z_dim=z_dim, num_gen=num_gen,
            mode=mode, num_workers=num_workers, batch_size=batch_size,
            device=device, verbose=verbose)
    return score

# compute fid for a generator, using images in fdir2
elif gen is not None and fdir2 is not None:

There is no way we enter the last elif, so I can't compare my generator with images in fdir2. Is this intentional?
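Indeed, since `elif gen is not None` is tested first, the stricter `elif gen is not None and fdir2 is not None` below it can never match. The usual fix is to order branches from most to least specific, as in this schematic sketch (dummy return strings, not the library's actual code):

```python
def dispatch(gen=None, fdir1=None, fdir2=None):
    # Check the most specific combination first; otherwise the broader
    # `gen is not None` branch shadows it and the combined case is unreachable.
    if gen is not None and fdir2 is not None:
        return "fid(generator vs folder)"
    elif gen is not None:
        return "fid(generator vs precomputed stats)"
    elif fdir1 is not None and fdir2 is not None:
        return "fid(folder vs folder)"
    raise ValueError("no valid input combination")

print(dispatch(gen=object(), fdir2="real_images/"))  # fid(generator vs folder)
```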

urllib.error.HTTPError: HTTP Error 404: Not Found

I found that when calculating statistics for a custom dataset, using an UPPERCASE custom_name such as "dragan_Rs" raises an error. When I change it to "dragan_rs", it works.
I don't know why.

The complete error as follows:
Traceback (most recent call last):
  File "/home/user/duzongwei/Projects/FSGAN/metrics/fid.py", line 46, in <module>
    compute_fid_kid(fake_fdir, custom_name)
  File "/home/user/duzongwei/Projects/FSGAN/metrics/fid.py", line 15, in compute_fid_kid
    score_fid = fid.compute_fid(fake_fdir, dataset_name=custom_name, mode=mode, dataset_split=dataset_split)
  File "/home/user/anaconda3/envs/dzw_gan/lib/python3.7/site-packages/cleanfid/fid.py", line 456, in compute_fid
    batch_size=batch_size, device=device, verbose=verbose)
  File "/home/user/anaconda3/envs/dzw_gan/lib/python3.7/site-packages/cleanfid/fid.py", line 179, in fid_folder
    mode=mode, seed=0, split=dataset_split)
  File "/home/user/anaconda3/envs/dzw_gan/lib/python3.7/site-packages/cleanfid/features.py", line 58, in get_reference_statistics
    fpath = check_download_url(local_folder=stats_folder, url=url)
  File "/home/user/anaconda3/envs/dzw_gan/lib/python3.7/site-packages/cleanfid/downloads_helper.py", line 36, in check_download_url
    with urllib.request.urlopen(url) as response, open(local_path, 'wb') as f:
  File "/home/user/anaconda3/envs/dzw_gan/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/home/user/anaconda3/envs/dzw_gan/lib/python3.7/urllib/request.py", line 531, in open
    response = meth(req, response)
  File "/home/user/anaconda3/envs/dzw_gan/lib/python3.7/urllib/request.py", line 641, in http_response
    'http', request, response, code, msg, hdrs)

FID clip crashes when evaluated on `cpu` device

Passing device="cpu" and model_name="clip_vit_b_32" crashes with the following error:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper___slow_conv2d_forward)

Not friendly to Windows :-)

The default save path fails with:
No such file or directory: '/tmp\\inception-2015-12-05.pt'
which is not suitable for Windows ~

Different statistic of FFHQ256 with precompute statistic.

We follow StyleGAN-ADA to extract FFHQ256 from the tfrecord file. However, when I compute the statistics, the result differs from your pre-computed trainval70k statistics. Have the calculation steps changed?

Can KID be negative number

I am trying to compute KID, but it is generating negative values. Can KID be a negative number?

Here is the code that I used:

from cleanfid import fid
fdir1 = my_folder1_path
fdir2 = my_folder2_path

kid_score = fid.compute_kid(fdir1, fdir2)

Each folder has only 6 images. My kid_score is -0.0406.

Could someone please help me understand why the KID is less than zero?

Thank you,
Chandrakanth
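Yes, KID can legitimately be negative: it is based on an unbiased estimator of the squared MMD, whose expectation is zero when the two distributions match, so individual estimates can dip below zero, especially with as few as 6 images per folder. The toy sketch below (illustrative only, not cleanfid's exact implementation) contrasts the always-nonnegative biased (V-statistic) estimator with the unbiased one:

```python
import numpy as np

def poly_kernel(x, y, d=3):
    """Polynomial kernel commonly used for KID-style MMD estimates."""
    return (x @ y.T / x.shape[1] + 1.0) ** d

def mmd2(x, y, unbiased=True):
    kxx, kyy, kxy = poly_kernel(x, x), poly_kernel(y, y), poly_kernel(x, y)
    n, m = len(x), len(y)
    if unbiased:
        # Dropping the diagonal terms k(x_i, x_i) makes the estimator unbiased,
        # but single estimates can then dip below zero.
        term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))
        term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    else:
        term_x, term_y = kxx.mean(), kyy.mean()
    return term_x + term_y - 2 * kxy.mean()

rng = np.random.default_rng(0)
x, y = rng.standard_normal((6, 8)), rng.standard_normal((6, 8))
print(mmd2(x, y, unbiased=False) >= 0)  # the biased (V-statistic) estimate is always >= 0
print(mmd2(x, y, unbiased=True))        # may be negative for small samples
```

With more images per folder the variance of the unbiased estimate shrinks, and negative values become rare.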

General question

Is the teaser figure depicting the problems of downsampling still accurate given the new PyTorch 2.0 update (i.e., is the issue the same for the current implementation of PyTorch bilinear resizing)?

I know it is not a direct issue on the repo, but I could not find the answer or any updates about this.

Thanks for the great work!

AttributeError: Can't pickle local object 'make_resizer.<locals>.func'

Env: python 3.9.6, clean-fid: 0.1.15
My code:

from cleanfid import fid
score = fid.compute_fid('./fake_images', '../real_images')

Error:

  File "C:\Users\10034\AppData\Local\Programs\Python\Python39\lib\site-packages\cleanfid\fid.py", line 389, in compute_fid
    score = compare_folders(fdir1, fdir2, feat_model,
  File "C:\Users\10034\AppData\Local\Programs\Python\Python39\lib\site-packages\cleanfid\fid.py", line 238, in compare_folders
    np_feats1 = get_folder_features(fdir1, feat_model, num_workers=num_workers,
  File "C:\Users\10034\AppData\Local\Programs\Python\Python39\lib\site-packages\cleanfid\fid.py", line 131, in get_folder_features
    np_feats = get_files_features(files, model, num_workers=num_workers,
  File "C:\Users\10034\AppData\Local\Programs\Python\Python39\lib\site-packages\cleanfid\fid.py", line 109, in get_files_features
    for batch in tqdm(dataloader, desc=description):
  File "C:\Users\10034\AppData\Local\Programs\Python\Python39\lib\site-packages\tqdm\std.py", line 1180, in __iter__
    for obj in iterable:
  File "C:\Users\10034\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 359, in __iter__
    return self._get_iterator()
  File "C:\Users\10034\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 305, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\10034\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\utils\data\dataloader.py", line 918, in __init__
    w.start()
  File "C:\Users\10034\AppData\Local\Programs\Python\Python39\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\10034\AppData\Local\Programs\Python\Python39\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\10034\AppData\Local\Programs\Python\Python39\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\10034\AppData\Local\Programs\Python\Python39\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\10034\AppData\Local\Programs\Python\Python39\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'make_resizer.<locals>.func'

How to speed up the fid?

Hi, thanks for sharing your work.
In my case, I have a reference folder (fdir1) and about 50 target folders (fdir2), each containing hundreds of images. It takes a long time to calculate the score with the default fid.compute_fid(fdir1, fdir2), and it seems bottlenecked by the CPU. Is there any way to speed it up?

Uppercase JPEG extension ignored by get_folder_features

Hi!
I've noticed this issue while making custom statistics from a folder with .JPEG files: uppercase JPEG extensions are ignored by get_folder_features.
It seems like it has been thought of here for processing .zip

files = [x for x in files if os.path.splitext(x)[1].lower()[1:] in EXTENSIONS]

but not here for processing folders

clean-fid/cleanfid/fid.py

Lines 140 to 141 in b1d8934

files = sorted([file for ext in EXTENSIONS
for file in glob(os.path.join(fdir, f"**/*.{ext}"), recursive=True)])

Probably the easiest fix is to expand EXTENSIONS with the upper-case variants, or to lower-case each file's extension before matching.
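A sketch of the second approach, matching on the lower-cased extension instead of enumerating upper-case variants (list_images and the EXTENSIONS subset here are illustrative, not the library's code):

```python
import os
from glob import glob

EXTENSIONS = {"png", "jpg", "jpeg", "bmp", "webp"}  # illustrative subset

def list_images(fdir):
    # Glob everything once, then filter on the lower-cased extension,
    # so .JPEG, .Png, etc. are picked up as well.
    files = glob(os.path.join(fdir, "**", "*"), recursive=True)
    return sorted(f for f in files
                  if os.path.splitext(f)[1].lower().lstrip(".") in EXTENSIONS)
```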

Reproducing clean-fid score on StyleGANv2

Hi,

Thanks for your contribution on GAN research!

We tried to compute the clean-fid score for images generated by StyleGANv2 using your code. However, the score was about 24, which is very different from the results reported in the paper. We generated 50k images using the official checkpoint and evaluated them against the pre-computed dataset statistics.

How to solve it?

Best,
Jungeun Kim

device=torch.device('cpu')

/home/m11113013/.local/lib/python3.8/site-packages/scipy/init.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.24.3)
warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
compute FID between two folders
Found 8091 images in the folder /dataset/flickr/images/
FID : 0%| | 0/506 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/m11113013/ProjectCode/MasterProject4/model/metric.py", line 12, in <module>
    score = calculate_fid(p1, p1)
  File "/home/m11113013/ProjectCode/MasterProject4/model/metric.py", line 5, in calculate_fid
    return fid.compute_fid(x_dir, y_dir, mode='clean', num_workers=0, batch_size=16, device=torch.device("cpu"))
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/cleanfid/fid.py", line 478, in compute_fid
    score = compare_folders(fdir1, fdir2, feat_model,
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/cleanfid/fid.py", line 269, in compare_folders
    np_feats1 = get_folder_features(fdir1, feat_model, num_workers=num_workers,
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/cleanfid/fid.py", line 147, in get_folder_features
    np_feats = get_files_features(files, model, num_workers=num_workers,
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/cleanfid/fid.py", line 119, in get_files_features
    l_feats.append(get_batch_features(batch, model, device))
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/cleanfid/fid.py", line 88, in get_batch_features
    feat = model(batch.to(device))
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/cleanfid/features.py", line 25, in model_fn
    def model_fn(x): return model(x)
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 154, in forward
    raise RuntimeError("module must have its parameters and buffers "
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu

import torch
from cleanfid import fid

def calculate_fid(x_dir, y_dir):
    return fid.compute_fid(x_dir, y_dir, mode='clean', num_workers=0, batch_size=16, device=torch.device("cpu"))

if __name__ == "__main__":
    p1 = '/dataset/flickr/images/'
    score = calculate_fid(p1, p1)
    print(score)

python version: 3.8.16
pytorch version: 1.12.1
cuda version: 11.3
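The traceback shows the feature model wrapped in torch.nn.DataParallel, which asserts its parameters live on cuda:0 even when a CPU device is requested. A sketch of a workaround, assuming your installed clean-fid version exposes the use_dataparallel flag (it appears in recent releases):

```python
def calculate_fid_cpu(x_dir, y_dir):
    # Lazy imports so the sketch stays self-contained.
    import torch
    from cleanfid import fid
    # use_dataparallel=False skips the DataParallel wrapper that
    # otherwise requires parameters on cuda:0.
    return fid.compute_fid(x_dir, y_dir, mode='clean', num_workers=0,
                           batch_size=16, device=torch.device("cpu"),
                           use_dataparallel=False)
```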

Fail to create custom dataset statistics for KID computation

Can I create custom dataset statistics to compute KID?

In my trial, after make_custom_stats, calling compute_fid works fine, but compute_kid tries to download the statistics:

>>> fid.compute_kid('sunglasses/0', dataset_name='sunglasses', mode='clean', dataset_res=256, dataset_split='custom')
compute KID of a folder with sunglasses statistics
downloading statistics to /home/brando/.local/lib/python3.7/site-packages/cleanfid/stats/sunglasses_clean_custom_na_kid.npz
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/brando/.local/lib/python3.7/site-packages/cleanfid/fid.py", line 335, in compute_kid
    mode=mode, seed=0, split=dataset_split, metric="KID")
  File "/home/brando/.local/lib/python3.7/site-packages/cleanfid/features.py", line 95, in get_reference_statistics
    fpath = check_download_url(local_folder=stats_folder, url=url)
  File "/home/brando/.local/lib/python3.7/site-packages/cleanfid/downloads_helper.py", line 27, in check_download_url
    with urllib.request.urlopen(url) as response, open(local_path, 'wb') as f:
  File "/usr/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.7/urllib/request.py", line 531, in open
    response = meth(req, response)
  File "/usr/lib/python3.7/urllib/request.py", line 641, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python3.7/urllib/request.py", line 569, in error
    return self._call_chain(*args)
  File "/usr/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.7/urllib/request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found

Detailed context can be found in #9.

Resolution in image folders

Hi,

I am a little confused: should the images in the two folders have the same resolution, or can they differ (folder1: 256x256, folder2: 1024x1024)? If they need to match, should we resize with PIL or with a torchvision resize transform?

Many thanks.

About the resize function used by different libraries

Recently, I came across a post on LinkedIn describing how to carefully choose the right resize function, stressing that different libraries/frameworks give different results. So I decided to test it myself.
Click here to find the post that I took inspiration from.

The following is the code snippet that I've edited (using this colab notebook) to show the correct way of using resize methods in different frameworks.

import numpy as np
import torch
import torchvision.transforms.functional as F
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode
from PIL import Image
import tensorflow as tf
import cv2
import matplotlib.pyplot as plt
from skimage import draw

image = np.ones((128, 128), dtype=np.float64)
rr, cc = draw.circle_perimeter(64, 64, radius=45, shape=image.shape)
image[rr, cc] = 0
plt.imshow(image, cmap='gray')
print(f"Unique values of image: {np.unique(image)}")
print(image.dtype)
output_size = 17
def inspect_img(*, img):
    plt.imshow(img, cmap='gray')
    print(f"Value of pixel with coordinates (14,9): {img[14, 9]}")

def resize_PIL(*, img, output_size):
    img = Image.fromarray(img)
    img = img.resize((output_size, output_size), resample=Image.BICUBIC)
    img = np.asarray(img,dtype=np.float64)
    inspect_img(img=img)
    return img
def resize_pytorch(*, img, output_size):
    img = F.resize(Image.fromarray(np.float64(img)), # Provide a PIL image rather than a Tensor.
                   size=output_size, 
                   interpolation=InterpolationMode.BICUBIC)
    img = np.asarray(img, dtype=np.float64) 
    inspect_img(img=img)
    return img
def resize_tensorflow(*, img, output_size):
    img = img[tf.newaxis, ..., tf.newaxis]
    img = tf.image.resize(img, size = [output_size] * 2, method="bicubic", antialias=True)
    img = img[0, ..., 0].numpy()
    inspect_img(img=img)
    return img
image_PIL = resize_PIL(img=image, output_size=output_size)
image_pytorch = resize_pytorch(img=image, output_size=output_size)
image_tensorflow = resize_tensorflow(img=image, output_size=output_size)
assert np.array_equal(image_PIL, image_pytorch), 'Not Identical!'
# assert np.array_equal(image_PIL, image_tensorflow), 'Not Identical!'  --> fails
assert np.allclose(image_PIL, image_tensorflow), 'Not Close!'
# assert np.array_equal(image_tensorflow, image_pytorch), 'Not Identical!'  --> fails
assert np.allclose(image_tensorflow, image_pytorch), 'Not Close!'
# tensorflow gives a slightly different values than pytorch and PIL.

which gives us the following results:

result

Therefore, TensorFlow, PyTorch, and PIL give similar results if the resize method is used properly, as in the code snippet above.

You can read my comments on LinkedIn to find out how I arrived at this solution.

The only remaining library is OpenCV, which I'll test in the future.

Have a great day/night!

clean-fid build_resizer Import error

I am working on the vid2vid GAN model from the Nvidia-Imaginaire library, which uses the clean-fid library. While running the model on Google Colab, I encountered this error related to clean-fid.

from imaginaire.evaluation import compute_fid
File "/content/imaginaire/imaginaire/evaluation/init.py", line 5, in
from .fid import compute_fid, compute_fid_data
File "/content/imaginaire/imaginaire/evaluation/fid.py", line 10, in
from imaginaire.evaluation.common import load_or_compute_activations
File "/content/imaginaire/imaginaire/evaluation/common.py", line 14, in
from cleanfid.resize import build_resizer
File "/usr/local/lib/python3.7/dist-packages/cleanfid/resize.py", line 10, in
from cleanfid.utils import *
File "/usr/local/lib/python3.7/dist-packages/cleanfid/utils.py", line 5, in
from cleanfid.resize import build_resizer
ImportError: cannot import name 'build_resizer' from 'cleanfid.resize' (/usr/local/lib/python3.7/dist-packages/cleanfid/resize.py)

The issue is that build_resizer cannot be imported from cleanfid.resize (the traceback suggests a circular import between cleanfid.resize and cleanfid.utils).
I didn't have this issue two days ago; is it due to the new version?

Thanks in advance

Custom inception model requirement for 2D content.

I'm very new to all this, so this question might not make sense, but am I right in assuming that FID relies on some pretrained model to evaluate the difference between the "fake" and "real" folders? If so, I assume such a model was trained only on generic photo content, which would limit its effectiveness for drawn pictures, and anime in particular. Or is having a large enough "real" folder sufficient?

If not, do I have to fine-tune said model for this specific content, or are there pretrained models for 2D images that I could use here?

HTTPError: HTTP Error 404: Not Found

I tried the fid.compute_fid function with the cifar10 dataset.
It worked perfectly until last week.
It seems the dataset statistics URL is no longer available.
Does anyone have the same error as me?

compute FID of a folder with cifar10 statistics
downloading statistics to /usr/local/lib/python3.10/dist-packages/cleanfid/stats/cifar10_clean_train_1024.npz

HTTPError Traceback (most recent call last)
in <cell line: 1>()
----> 1 score_clean = fid.compute_fid("folder_real", dataset_name="cifar10")
2 print(f"clean-fid score is {score_clean:.3f}")

9 frames
/usr/local/lib/python3.10/dist-packages/cleanfid/fid.py in compute_fid(fdir1, fdir2, gen, mode, model_name, num_workers, batch_size, device, dataset_name, dataset_res, dataset_split, num_gen, z_dim, custom_feat_extractor, verbose, custom_image_tranform, custom_fn_resize, use_dataparallel)
488 if verbose:
489 print(f"compute FID of a folder with {dataset_name} statistics")
--> 490 score = fid_folder(fdir1, dataset_name, dataset_res, dataset_split,
491 model=feat_model, mode=mode, model_name=model_name,
492 custom_fn_resize=custom_fn_resize, custom_image_tranform=custom_image_tranform,

/usr/local/lib/python3.10/dist-packages/cleanfid/fid.py in fid_folder(fdir, dataset_name, dataset_res, dataset_split, model, mode, model_name, num_workers, batch_size, device, verbose, custom_image_tranform, custom_fn_resize)
171 custom_image_tranform=None, custom_fn_resize=None):
172 # Load reference FID statistics (download if needed)
--> 173 ref_mu, ref_sigma = get_reference_statistics(dataset_name, dataset_res,
174 mode=mode, model_name=model_name, seed=0, split=dataset_split)
175 fbname = os.path.basename(fdir)

/usr/local/lib/python3.10/dist-packages/cleanfid/features.py in get_reference_statistics(name, res, mode, model_name, seed, split, metric)
64 mod_path = os.path.dirname(cleanfid.__file__)
65 stats_folder = os.path.join(mod_path, "stats")
---> 66 fpath = check_download_url(local_folder=stats_folder, url=url)
67 stats = np.load(fpath)
68 mu, sigma = stats["mu"], stats["sigma"]

/usr/local/lib/python3.10/dist-packages/cleanfid/downloads_helper.py in check_download_url(local_folder, url)
34 os.makedirs(local_folder, exist_ok=True)
35 print(f"downloading statistics to {local_path}")
---> 36 with urllib.request.urlopen(url) as response, open(local_path, 'wb') as f:
37 shutil.copyfileobj(response, f)
38 return local_path

/usr/lib/python3.10/urllib/request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
214 else:
215 opener = _opener
--> 216 return opener.open(url, data, timeout)
217
218 def install_opener(opener):

/usr/lib/python3.10/urllib/request.py in open(self, fullurl, data, timeout)
523 for processor in self.process_response.get(protocol, []):
524 meth = getattr(processor, meth_name)
--> 525 response = meth(req, response)
526
527 return response

/usr/lib/python3.10/urllib/request.py in http_response(self, request, response)
632 # request was successfully received, understood, and accepted.
633 if not (200 <= code < 300):
--> 634 response = self.parent.error(
635 'http', request, response, code, msg, hdrs)
636

/usr/lib/python3.10/urllib/request.py in error(self, proto, *args)
561 if http_err:
562 args = (dict, 'default', 'http_error_default') + orig_args
--> 563 return self._call_chain(*args)
564
565 # XXX probably also want an abstract factory that knows when it makes

/usr/lib/python3.10/urllib/request.py in _call_chain(self, chain, kind, meth_name, *args)
494 for handler in handlers:
495 func = getattr(handler, meth_name)
--> 496 result = func(*args)
497 if result is not None:
498 return result

/usr/lib/python3.10/urllib/request.py in http_error_default(self, req, fp, code, msg, hdrs)
641 class HTTPDefaultErrorHandler(BaseHandler):
642 def http_error_default(self, req, fp, code, msg, hdrs):
--> 643 raise HTTPError(req.full_url, code, msg, hdrs, fp)
644
645 class HTTPRedirectHandler(BaseHandler):

HTTPError: HTTP Error 404: Not Found

Resize per channel reasons

Thanks for sharing this great work. I'm wondering about the reasons for resizing the image per channel rather than as a single 3-channel array. I'm referring to this piece of code in the file resize.py:

def resize_single_channel(x_np, output_size):
    s1, s2 = output_size
    img = Image.fromarray(x_np.astype(np.float32), mode='F')
    img = img.resize(output_size, resample=Image.Resampling.BICUBIC)
    return np.asarray(img).clip(0, 255).reshape(s2, s1, 1)
def resize_channels(x, new_size):
    x = [resize_single_channel(x[:, :, idx], new_size) for idx in range(3)]
    x = np.concatenate(x, axis=2).astype(np.uint8)
    return x

I hope this is the right way to ask questions here, since it's my first time asking a question in Issues instead of submitting a bug report.

Thanks for your understanding.

FID error

RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory

How to solve it?

Feature request for centercrop

My dataset has non-square samples, so during training I use random crops, and for validation I compute FID on a center-cropped copy of the dataset.
It would be great to have an option for passing my own transform, for example.
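One possible route is the custom_image_tranform hook (spelled that way in the library) that compute_fid applies to each numpy image before resizing. A sketch, assuming HxWxC uint8 inputs; the helper name and crop size are made up:

```python
import numpy as np

def center_crop(np_img, size=256):
    # Crop the central size x size patch from an HxWxC image array.
    h, w = np_img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return np_img[top:top + size, left:left + size]

# Hypothetical usage with clean-fid's per-image hook:
# from cleanfid import fid
# score = fid.compute_fid(fdir1, fdir2, mode="clean",
#                         custom_image_tranform=center_crop)
```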

Extracting features of varying dimensionality from the Inception-V3 model

Thanks for this excellent repository. Compared with https://github.com/mseitzer/pytorch-fid, I would like to extract features from different pooling layers, such as the first max pooling features (64), second max pooling features (192), pre-aux classifier features (768), and final average pooling features (2048), and compare FID scores. I believe the default in your case is to extract features from the final average pooling layer; correct me if I am wrong.

from cleanfid import fid
fdir1 = '/content/gdrive/MyDrive/syn'
fdir2 = '/content/gdrive/MyDrive/orig'
score = fid.compute_fid(fdir1, fdir2)
print(score)

Is there an option to modify the function call to extract features from different layers and compare the scores? Thanks in advance.
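compute_fid exposes a custom_feat_extractor argument, so one hedged route is to wrap a backbone and read out an intermediate layer with a forward hook. The class below is an illustrative sketch, not clean-fid code, and the layer names must match your Inception implementation:

```python
import torch
import torch.nn as nn

class TruncatedExtractor(nn.Module):
    """Return spatially averaged features from an intermediate layer of a backbone."""
    def __init__(self, backbone, layer_name):
        super().__init__()
        self.backbone = backbone
        self._feat = None
        # Capture the chosen layer's output during the forward pass.
        layer = dict(backbone.named_modules())[layer_name]
        layer.register_forward_hook(lambda m, i, o: setattr(self, "_feat", o))

    def forward(self, x):
        self.backbone(x)  # full forward; the hook stores the intermediate map
        return torch.flatten(nn.functional.adaptive_avg_pool2d(self._feat, 1), 1)
```

The wrapped module could then be passed as custom_feat_extractor=TruncatedExtractor(inception, "Mixed_5d") or similar (the layer name here is hypothetical).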

ImageNet-1k statistics?

Hello, thanks for the great work and the package.
Are there any plans to release ImageNet-1k statistics?
if not, I can try to do it, and provide the steps to reproduce.

Imaginary component with more than 2048 images

Hi,

I am facing the Imaginary component issue unless I have more than 2048 images in each folder.

I use this line of code to compute it.
fid_score = fid.compute_fid(source, target, mode="clean", verbose=True, dataset_split="custom", use_dataparallel=False)

The weird thing is that it happens only on the validation set; on a subset of the training set it works well, unless I perform the exact same operations on both, such as saving model predictions and targets using torchvision.

I am using clean-fid==0.1.35
CUDA 11.8
Python 3.10.13
numpy==1.26
torch==2.0.1
Debian 11

Thanks for your help in advance ;)

open-cv is missing from the dependencies

However, it is required when using fid.make_custom_stats(). I installed with pip.

Also, the README instructs one to pip install -r requirements.txt, but no such file is present.

Error for InceptionV3W ~ is it different from regular InceptionV3?

I want to cleanly calculate the FID score because I had some differences due to resizing. I appreciate the work you have done to make the process more consistent.

I am running the module on PyTorch 1.9.0, CUDA 11.2, and get the following error when calculating the FID score in the "clean" mode using the "torchscript_inception" model from https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt:

RuntimeError: MALFORMED INPUT: lanes dont match
On the following line:

features = self.layers.forward(x2, ).view((bs, 2048))

It is a very peculiar error coming from this file:
https://github.com/pytorch/pytorch/blob/fce85480b97d8d79e92e11fbcd3c03a25ae446e0/torch/csrc/jit/tensorexpr/types.h#L35

I'm not familiar with TorchScript, but maybe it has something to do with the model being exported in another PyTorch version. It is probably better to ask in the PyTorch GitHub repo what causes this.

However, I want to ask whether using the regular InceptionV3 model provided here would give the same FID score and would also be good practice. In short: calculate FID in "clean" mode but use the "pytorch_inception" model.

Why pinned requests==2.25.1?

This version is pretty old... I just want to know why it is being pinned, as it conflicts with many of the other libraries I have.

cuda error?

UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.24.3)
warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
compute FID between two folders
Found 8091 images in the folder /dataset/flickr/images/
FID : 0%| | 0/506 [00:02<?, ?it/s]
Traceback (most recent call last):
  File "/home/m11113013/ProjectCode/MasterProject4/model/metric.py", line 11, in <module>
    score = calculate_fid(p1, p1)
  File "/home/m11113013/ProjectCode/MasterProject4/model/metric.py", line 4, in calculate_fid
    return fid.compute_fid(x_dir, y_dir, mode='clean', num_workers=0, batch_size=16)
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/cleanfid/fid.py", line 478, in compute_fid
    score = compare_folders(fdir1, fdir2, feat_model,
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/cleanfid/fid.py", line 269, in compare_folders
    np_feats1 = get_folder_features(fdir1, feat_model, num_workers=num_workers,
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/cleanfid/fid.py", line 147, in get_folder_features
    np_feats = get_files_features(files, model, num_workers=num_workers,
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/cleanfid/fid.py", line 119, in get_files_features
    l_feats.append(get_batch_features(batch, model, device))
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/cleanfid/fid.py", line 88, in get_batch_features
    feat = model(batch.to(device))
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/cleanfid/features.py", line 25, in model_fn
    def model_fn(x): return model(x)
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
    output.reraise()
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/_utils.py", line 461, in reraise
    raise exception
RuntimeError: Caught RuntimeError in replica 1 on device 1.
Original Traceback (most recent call last):
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
    output = module(*input, **kwargs)
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/m11113013/miniconda3/envs/pytorch/lib/python3.8/site-packages/cleanfid/inception_torchscript.py", line 54, in forward
    features = self.layers.forward(x2, ).view((bs, 2048))
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/torch/torch/nn/modules/container/___torch_mangle_9.py", line 45, in forward
    _17 = self.mixed_10
    _18 = self.pool2
    input0 = (_0).forward(input, )
             ~~~~~~~~~~~ <--- HERE
    input1 = (_1).forward(input0, )
    input2 = (_2).forward(input1, )
  File "code/torch.py", line 74, in forward
    _22 = self.stride
    _23 = self.padding
    x3 = torch.conv2d(x, _21, None, [_22, _22], [_23, _23], [1, 1], 1)
         ~~~~~~~~~~~~ <--- HERE
    x4 = _20(x3, self.mean, self.var, None, self.beta, False, 0.10000000000000001, 0.001, )
    x5 = torch.torch.nn.functional.relu(x4, False, )

Traceback of TorchScript, original code (most recent call last):
  File "C:\Users\tkarras\Anaconda3\lib\site-packages\torch\nn\modules\container.py", line 117, in forward
    def forward(self, input):
        for module in self:
            input = module(input)
                    ~~~~~~ <--- HERE
        return input
  File "c:\p4research\research\tkarras\dnn\gan3support\feature_detectors\inception.py", line 28, in forward
    def forward(self, x):
        x = torch.nn.functional.conv2d(x, self.weight.to(x.dtype), stride=self.stride, padding=self.padding)
            ~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
        x = torch.nn.functional.batch_norm(x, running_mean=self.mean, running_var=self.var, bias=self.beta, eps=1e-3)
        x = torch.nn.functional.relu(x)
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED

from cleanfid import fid

def calculate_fid(x_dir, y_dir):
    return fid.compute_fid(x_dir, y_dir, mode='clean', num_workers=0, batch_size=16)

if __name__ == "__main__":
    p1 = '/dataset/flickr/images/'
    score = calculate_fid(p1, p1)
    print(score)

python version: 3.8.16
pytorch version: 1.12.1
cuda version: 11.3

'z_batch' size BUG for Generator with 4D/3D tensors as inputs

I was trying to load a generator from a ProGAN implementation in PyTorch. This generator receives a tensor of size (batch_size, Z_DIM, 1, 1) as input, where Z_DIM is the latent vector dimension (e.g. 512). The code in cleanfid/fid.py, line 214, builds the latent as z_batch = torch.randn((batch_size, z_dim)).to(device), which is a 2D tensor, suitable only for generators that take flat latent vectors (as in a vanilla GAN or other examples). I think the code should also support 4D tensors, as most GAN implementations accept 4D tensors as inputs to their generators.

[Error]:

compute FID of a model with <name-of-precomputed-statistics-file>
FID model:   0%|                                                                       | 0/1563 [00:00<?, ?it/s]
Traceback (most recent call last):
....
....
  File "/home/user/Documents/GANS/metrics/cleanfid/fid.py", line 251, in fid_model
    np_feats = get_model_features(G, model, mode=mode,
  File "/home/user/Documents/GANS/metrics/cleanfid/fid.py", line 214, in get_model_features
    img_batch = G(z_batch)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/container.py", line 204, in forward
    input = module(input)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py", line 956, in forward
    return F.conv_transpose2d(
RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv_transpose2d, but got input of size: [32, 512]

code snippet [fid.py]:

"""
Compute the FID stats from a generator model
"""
def get_model_features(G, model, mode="clean", z_dim=512,
        num_gen=50_000, batch_size=128, device=torch.device("cuda"),
        desc="FID model: ", verbose=True, return_z=False,
        custom_image_tranform=None, custom_fn_resize=None):
    if custom_fn_resize is None:
        fn_resize = build_resizer(mode)
    else:
        fn_resize = custom_fn_resize
    
    # Generate test features
    num_iters = int(np.ceil(num_gen / batch_size))
    l_feats = []
    latents = []
    if verbose:
        pbar = tqdm(range(num_iters), desc=desc)
    else:
        pbar = range(num_iters)
    for idx in pbar:
        with torch.no_grad():
            z_batch = torch.randn((batch_size, z_dim)).to(device)
            if return_z:
                latents.append(z_batch)
            # generated image is in range [0,255]
            img_batch = G(z_batch)
            # split into individual batches for resizing if needed
            if mode != "legacy_tensorflow":
                l_resized_batch = []
                for idx in range(batch_size):
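
Until such support lands, a hedged workaround is to wrap the generator so it reshapes clean-fid's 2D z_batch into the 4D latent the generator expects (the wrapper below is an illustrative sketch, not part of the library):

```python
import torch
import torch.nn as nn

class ReshapeZ(nn.Module):
    """Adapt a generator expecting (B, z_dim, 1, 1) latents to a 2D z_batch."""
    def __init__(self, G, z_dim):
        super().__init__()
        self.G = G
        self.z_dim = z_dim

    def forward(self, z):
        # clean-fid samples z as (batch_size, z_dim); add the spatial dims back.
        return self.G(z.view(-1, self.z_dim, 1, 1))

# Hypothetical usage: fid.compute_fid(gen=ReshapeZ(G, 512), ...)
```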
