iceclear / stablesr

[IJCV2024] Exploiting Diffusion Prior for Real-World Image Super-Resolution

Home Page: https://iceclear.github.io/projects/stablesr/

License: Other

Python 100.00%
stable-diffusion super-resolution stablesr

stablesr's Issues

Google Colab not working even with Pro and high RAM

Traceback (most recent call last):
File "/content/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_old.py", line 319, in
main()
File "/content/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_old.py", line 238, in main
img_list_ori = os.listdir(opt.init_img)
NotADirectoryError: [Errno 20] Not a directory: 'inputs/user_upload/frame162.jpg'
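
The traceback suggests --init-img was given a single file, while the script calls os.listdir on it and therefore expects a folder. A minimal workaround, assuming the paths above, is to wrap the file in a directory and pass that directory instead:

import os
import shutil

init_img = 'inputs/user_upload/frame162.jpg'   # the single file from the traceback
input_dir = 'inputs/user_upload_dir'           # hypothetical wrapper directory

if os.path.isfile(init_img):
    os.makedirs(input_dir, exist_ok=True)
    shutil.copy(init_img, input_dir)
# then pass --init-img inputs/user_upload_dir to the script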

color correction>>>>>>>>>>>
Use adain color correction

Loading model from ./vqgan_cfw_00011.ckpt
Global Step: 18000
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 1.13.1 with CUDA None (you have 2.0.1+cu117)
Python 3.10.11 (you have 3.10.10)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
/usr/local/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be removed in 0.17. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 64, 64) = 16384 dimensions.
making attention of type 'vanilla' with 512 in_channels
/usr/local/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/usr/local/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=VGG16_Weights.IMAGENET1K_V1. You can also use weights=VGG16_Weights.DEFAULT to get the most up-to-date weights.
warnings.warn(msg)
loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips/vgg.pth

missing>>>>>>>>>>>>>>>>>>>

What do "latent" and "sample" mean?

I am trying to train the model, but the dataloader expects inputs named gt, lq, latent, and sample. I don't understand how to obtain the latents and samples. How do I generate them for my custom data?

Problem when running inference with the 768v model

I updated the code to run inference with the latest SD 2.1-768v model,
but I get all None/NaN values after diffusion sampling.
Is something wrong?
My command:
python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas.py --config configs/stableSRNew/v2-finetune_text_T_768v.yaml
--ckpt ./stablesr_768v_000139.ckpt --vqgan_ckpt vqgan_cfw_00011.ckpt
--init-img inputs/test_example --outdir results_srgb/0706_v768 --ddpm_steps 200
--dec_w 0.0 --colorfix_type nofix --n_samples 1 --input_size 768
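
Not an official diagnosis, but SD 2.1 768-v is a v-prediction checkpoint while the logs of the 512 setup elsewhere on this page report "Running in eps-prediction mode", so a mismatched prediction parameterization in the config is one plausible cause of NaN outputs. A small debugging sketch to confirm the outputs really are NaN:

import torch

def report_nans(name, tensor):
    # print the fraction of NaN entries in a tensor coming out of the sampler
    frac = torch.isnan(tensor).float().mean().item()
    print(f'{name}: NaN fraction = {frac:.3f}')

# usage inside the sampling script (variable name is hypothetical):
# report_nans('samples', samples)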

CFW module weight didn't work as expected.

Hello, I ran test command 3
python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt CKPT_PATH --vqgan_ckpt VQGANCKPT_PATH --init-img INPUT_PATH --outdir OUT_DIR --ddpm_steps 200 --dec_w 0.5 --colorfix_type adain
to run inference on an image. When I vary the weight dec_w over {0.2, 0.5, 0.8}, I get almost the same result, which I think lacks fidelity. How can I get an SR result that is more similar to the original image, as claimed in the paper?
Here is my result:
Top left is the original image, top right is the SR result with dec_w=0.2, bottom left with dec_w=0.5, bottom right with dec_w=0.8.
[Screenshot: 2023-06-27 17-48-46]
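
For context, the paper describes dec_w as a coefficient that blends the encoder (fidelity) branch into the decoder (quality) branch; a rough sketch of that kind of weighted fusion, with a stand-in module (not the repo's actual CFW block), is below. With this form, very similar outputs across dec_w values would mean the fusion term is contributing little for that image.

import torch
import torch.nn as nn

class DummyCFW(nn.Module):
    # stand-in for the trainable fusion block; the real CFW module in the repo differs
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1)

    def forward(self, enc_feat, dec_feat):
        return self.conv(torch.cat([enc_feat, dec_feat], dim=1))

def fuse(dec_feat, enc_feat, cfw, dec_w):
    # dec_w = 0.0 -> keep the pure decoder/diffusion features (more generated detail)
    # dec_w = 1.0 -> lean fully on encoder features from the LR input (more fidelity)
    return dec_feat + dec_w * cfw(enc_feat, dec_feat)

cfw = DummyCFW(channels=64)
dec_feat = torch.randn(1, 64, 32, 32)
enc_feat = torch.randn(1, 64, 32, 32)
out = fuse(dec_feat, enc_feat, cfw, dec_w=0.5)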

Why does this error occur?

Traceback (most recent call last):
File "/content/StableSR/scripts/sr_val_ddpm_text_T_vqganfin_old.py", line 4, in
import PIL
ModuleNotFoundError: No module named 'PIL'

All the libraries are installed; I previously translated the Google Colab notebook into another language.
I get this error every time I start it up.
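
If the runtime was rebuilt, Pillow (the package that provides the PIL module) may simply be missing from the active environment; reinstalling it is the usual fix, followed by restarting the runtime so the package is picked up:

pip install Pillow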

[Question] training with higher resolution

Hello. I own an A100 and am about to train my own StableSR model for non-realistic images. Since the card has 80 GB of VRAM, can I use a resolution other than 512 for training, for example 1024 or higher? It seems that I MUST use SD 2.1-512 as the Stable Diffusion base model and can't use 768-v, hence this question. Does it make sense to raise the resolution in the config above 512, or will it only worsen the result?

How to prepare CFW_trainingdata?

Thanks for your great work! I'm trying to understand your training pipeline.

I would like to know how to prepare CFW_trainingdata in the second step, as mentioned in the README file. The data folder contains inputs, gts, latents and samples.

It seems the CFW_trainingdata comes from the diffusion model pretrained in the first stage. But I still don't know how to generate the data.

Greatly appreciate your response.
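
Not an official answer, but from the folder names it looks like each LR input is paired with its HR ground truth, the latent Z0 produced by the stage-1 fine-tuned diffusion model for that LR image, and the image decoded from that latent. A purely illustrative loader sketch of that layout (paths, file extensions and naming are assumptions):

import os
import numpy as np
from PIL import Image

root = 'CFW_trainingdata'   # hypothetical root folder
names = sorted(os.listdir(os.path.join(root, 'inputs')))

for name in names:
    stem = os.path.splitext(name)[0]
    lq = Image.open(os.path.join(root, 'inputs', name))             # LR/LQ input
    gt = Image.open(os.path.join(root, 'gts', name))                # HR ground truth
    latent = np.load(os.path.join(root, 'latents', stem + '.npy'))  # Z0 from the stage-1 model (format assumed)
    sample = Image.open(os.path.join(root, 'samples', name))        # image decoded from that latent
    break   # just illustrating the expected pairing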

batch size is 192 in training? CUDA OOM

Hi, I'm trying to train StableSR with the command you provided:
python main.py --train --base configs/stableSRNew/v2-finetune_text_T_512.yaml --gpus GPU_ID, --name NAME --scale_lr False

As you mentioned, the training batch size is 192, so I modified line 131 to "batch size: 192" and line 137 to "queue_size: 192", but then I get a "CUDA out of memory" error when running. If I keep the config file unmodified (batch size: 6, queue_size: 180), training runs fine.

So, is the batch size really 192, as mentioned in the paper? If so, how should I modify the config file? Thanks in advance.
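
Not confirmed by the authors, but in a typical multi-GPU Lightning setup a paper's 192 is usually an effective batch size (per-GPU batch × number of GPUs × gradient-accumulation steps), not a value to write directly into batch_size on a single GPU. For example:

# all three factors below are assumptions; only their product is the point
per_gpu_batch_size = 6          # the value in the released config
num_gpus = 8                    # hypothetical
accumulate_grad_batches = 4     # hypothetical
effective_batch_size = per_gpu_batch_size * num_gpus * accumulate_grad_batches
print(effective_batch_size)     # 192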

Tile strategy: CUDA out of memory.

I executed this tile command:
python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt CKPT_PATH --vqgan_ckpt VQGANCKPT_PATH --init-img INPUT_PATH --outdir OUT_DIR --skip_grid --ddpm_steps 200 --dec_w 0.5

but I still get CUDA out of memory.
My GPU has 15 GB of memory.

Is there any way to solve this problem?
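
One hedged thing to try on a 15 GB card, reusing flags that appear in other commands on this page (the tile values below are guesses, not recommended settings):

python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt CKPT_PATH --vqgan_ckpt VQGANCKPT_PATH --init-img INPUT_PATH --outdir OUT_DIR --skip_grid --ddpm_steps 200 --dec_w 0.5 --n_samples 1 --vqgantile_size 512 --vqgantile_stride 400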

Replication issue

Thank you for sharing the code. I tried to train the model from scratch following your training script and config; everything is the same except for the DIV8K dataset (I don't have DIV8K). At the time I tested it, the model had been trained for 12,000 steps (vs. your 16,500 steps).

The training script is:

python main.py --train --base configs/stableSRNew/v2-finetune_text_T_512.yaml --gpus 0,1,2,3,4,5,6,7 --name StableSR_Replicate --scale_lr False

The test script is:

python scripts/sr_val_ddpm_text_T_vqganfin_old.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt CKPT_PATH --vqgan_ckpt VQGANCKPT_PATH --init-img INPUT_PATH --outdir OUT_DIR --ddpm_steps 200 --dec_w 0.0 --colorfix_type adain

The input image is:
OST_120

The results from the model I trained:
OST_120

The results from your pretrained model:
OST_120 (1)

What causes the difference? Is it the number of training steps or the DIV8K dataset, or something else?

Stuck in epoch 0%

When I try to train SFT, the process gets stuck at epoch 0: 0% for a long time, although testing gives normal results.
The CLI I use:
python main.py --train --base configs/stableSRNew/v2-finetune_text_T_512.yaml --gpus 0, --name NAME --scale_lr False
image

Does anybody have ideas on how to solve this?

Cuda OOM

I keep getting out-of-memory errors no matter which test I run; I tried the tile test too.
I am currently using a 4 GB GPU.

python scripts/sr_val_ddpm_text_T_vqganfin_old.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt C:/Users/leolo/Desktop/models/stablesr.ckpt --vqgan_ckpt C:/Users/leolo/Desktop/models/vqgan_cfw.ckpt --init-img C:/Users/leolo/Pictures/frame/1.png --outdir C:/Users/leolo/Pictures/frame/out/ --skip_grid --ddpm_steps 200 --dec_w 0.5

I'm a bit confused about running it: am I supposed to follow the "train" and "resume" sections first, before "test"?

Sorry for the long error log.

C:\Users\leolo\.conda\envs\stablesr\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to C:\Users\leolo/.cache\torch\hub\checkpoints\vgg16-397923af.pth
100%|███████████████████████████████████████████████████████████████████████████████| 528M/528M [00:41<00:00, 13.3MB/s]
Downloading vgg_lpips model from https://heibox.uni-heidelberg.de/f/607503859c864bc1b30b/?dl=1 to taming/modules/autoencoder/lpips\vgg.pth
8.19kB [00:00, 22.1kB/s]
loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips\vgg.pth
>>>>>>>>>>>>>>>>>missing>>>>>>>>>>>>>>>>>>>
[]
>>>>>>>>>>>>>>>>>trainable_list>>>>>>>>>>>>>>>>>>>
['decoder.fusion_layer_2.encode_enc_1.norm1.weight', 'decoder.fusion_layer_2.encode_enc_1.norm1.bias', 'decoder.fusion_layer_2.encode_enc_1.conv1.weight', 'decoder.fusion_layer_2.encode_enc_1.conv1.bias', 'decoder.fusion_layer_2.encode_enc_1.norm2.weight', 'decoder.fusion_layer_2.encode_enc_1.norm2.bias', 'decoder.fusion_layer_2.encode_enc_1.conv2.weight', 'decoder.fusion_layer_2.encode_enc_1.conv2.bias', 'decoder.fusion_layer_2.encode_enc_1.conv_out.weight', 'decoder.fusion_layer_2.encode_enc_1.conv_out.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv1.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv1.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv2.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv2.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv3.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv3.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv4.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv4.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv5.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv5.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv1.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv1.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv2.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv2.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv3.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv3.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv4.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv4.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv5.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv5.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv1.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv1.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv2.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv2.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv3.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv3.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv4.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv4.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv5.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv5.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv1.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv1.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv2.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv2.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv3.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv3.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv4.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv4.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv5.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv5.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv1.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv1.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv2.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv2.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv3.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv3.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv4.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv4.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv5.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv5.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv1.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv1.bias', 
'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv2.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv2.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv3.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv3.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv4.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv4.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv5.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv5.bias', 'decoder.fusion_layer_2.encode_enc_3.norm1.weight', 'decoder.fusion_layer_2.encode_enc_3.norm1.bias', 'decoder.fusion_layer_2.encode_enc_3.conv1.weight', 'decoder.fusion_layer_2.encode_enc_3.conv1.bias', 'decoder.fusion_layer_2.encode_enc_3.norm2.weight', 'decoder.fusion_layer_2.encode_enc_3.norm2.bias', 'decoder.fusion_layer_2.encode_enc_3.conv2.weight', 'decoder.fusion_layer_2.encode_enc_3.conv2.bias', 'decoder.fusion_layer_1.encode_enc_1.norm1.weight', 'decoder.fusion_layer_1.encode_enc_1.norm1.bias', 'decoder.fusion_layer_1.encode_enc_1.conv1.weight', 'decoder.fusion_layer_1.encode_enc_1.conv1.bias', 'decoder.fusion_layer_1.encode_enc_1.norm2.weight', 'decoder.fusion_layer_1.encode_enc_1.norm2.bias', 'decoder.fusion_layer_1.encode_enc_1.conv2.weight', 'decoder.fusion_layer_1.encode_enc_1.conv2.bias', 'decoder.fusion_layer_1.encode_enc_1.conv_out.weight', 'decoder.fusion_layer_1.encode_enc_1.conv_out.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv1.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv1.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv2.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv2.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv3.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv3.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv4.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv4.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv5.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv5.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv1.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv1.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv2.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv2.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv3.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv3.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv4.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv4.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv5.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv5.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv1.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv1.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv2.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv2.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv3.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv3.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv4.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv4.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv5.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv5.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv1.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv1.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv2.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv2.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv3.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv3.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv4.weight', 
'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv4.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv5.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv5.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv1.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv1.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv2.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv2.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv3.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv3.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv4.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv4.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv5.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv5.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv1.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv1.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv2.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv2.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv3.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv3.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv4.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv4.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv5.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv5.bias', 'decoder.fusion_layer_1.encode_enc_3.norm1.weight', 'decoder.fusion_layer_1.encode_enc_3.norm1.bias', 'decoder.fusion_layer_1.encode_enc_3.conv1.weight', 'decoder.fusion_layer_1.encode_enc_3.conv1.bias', 'decoder.fusion_layer_1.encode_enc_3.norm2.weight', 'decoder.fusion_layer_1.encode_enc_3.norm2.bias', 'decoder.fusion_layer_1.encode_enc_3.conv2.weight', 'decoder.fusion_layer_1.encode_enc_3.conv2.bias', 'loss.discriminator.main.0.weight', 'loss.discriminator.main.0.bias', 'loss.discriminator.main.2.weight', 'loss.discriminator.main.3.weight', 'loss.discriminator.main.3.bias', 'loss.discriminator.main.5.weight', 'loss.discriminator.main.6.weight', 'loss.discriminator.main.6.bias', 'loss.discriminator.main.8.weight', 'loss.discriminator.main.9.weight', 'loss.discriminator.main.9.bias', 'loss.discriminator.main.11.weight', 'loss.discriminator.main.11.bias']
>>>>>>>>>>>>>>>>>Untrainable_list>>>>>>>>>>>>>>>>>>>
['encoder.conv_in.weight', 'encoder.conv_in.bias', 'encoder.down.0.block.0.norm1.weight', 'encoder.down.0.block.0.norm1.bias', 'encoder.down.0.block.0.conv1.weight', 'encoder.down.0.block.0.conv1.bias', 'encoder.down.0.block.0.norm2.weight', 'encoder.down.0.block.0.norm2.bias', 'encoder.down.0.block.0.conv2.weight', 'encoder.down.0.block.0.conv2.bias', 'encoder.down.0.block.1.norm1.weight', 'encoder.down.0.block.1.norm1.bias', 'encoder.down.0.block.1.conv1.weight', 'encoder.down.0.block.1.conv1.bias', 'encoder.down.0.block.1.norm2.weight', 'encoder.down.0.block.1.norm2.bias', 'encoder.down.0.block.1.conv2.weight', 'encoder.down.0.block.1.conv2.bias', 'encoder.down.0.downsample.conv.weight', 'encoder.down.0.downsample.conv.bias', 'encoder.down.1.block.0.norm1.weight', 'encoder.down.1.block.0.norm1.bias', 'encoder.down.1.block.0.conv1.weight', 'encoder.down.1.block.0.conv1.bias', 'encoder.down.1.block.0.norm2.weight', 'encoder.down.1.block.0.norm2.bias', 'encoder.down.1.block.0.conv2.weight', 'encoder.down.1.block.0.conv2.bias', 'encoder.down.1.block.0.nin_shortcut.weight', 'encoder.down.1.block.0.nin_shortcut.bias', 'encoder.down.1.block.1.norm1.weight', 'encoder.down.1.block.1.norm1.bias', 'encoder.down.1.block.1.conv1.weight', 'encoder.down.1.block.1.conv1.bias', 'encoder.down.1.block.1.norm2.weight', 'encoder.down.1.block.1.norm2.bias', 'encoder.down.1.block.1.conv2.weight', 'encoder.down.1.block.1.conv2.bias', 'encoder.down.1.downsample.conv.weight', 'encoder.down.1.downsample.conv.bias', 'encoder.down.2.block.0.norm1.weight', 'encoder.down.2.block.0.norm1.bias', 'encoder.down.2.block.0.conv1.weight', 'encoder.down.2.block.0.conv1.bias', 'encoder.down.2.block.0.norm2.weight', 'encoder.down.2.block.0.norm2.bias', 'encoder.down.2.block.0.conv2.weight', 'encoder.down.2.block.0.conv2.bias', 'encoder.down.2.block.0.nin_shortcut.weight', 'encoder.down.2.block.0.nin_shortcut.bias', 'encoder.down.2.block.1.norm1.weight', 'encoder.down.2.block.1.norm1.bias', 'encoder.down.2.block.1.conv1.weight', 'encoder.down.2.block.1.conv1.bias', 'encoder.down.2.block.1.norm2.weight', 'encoder.down.2.block.1.norm2.bias', 'encoder.down.2.block.1.conv2.weight', 'encoder.down.2.block.1.conv2.bias', 'encoder.down.2.downsample.conv.weight', 'encoder.down.2.downsample.conv.bias', 'encoder.down.3.block.0.norm1.weight', 'encoder.down.3.block.0.norm1.bias', 'encoder.down.3.block.0.conv1.weight', 'encoder.down.3.block.0.conv1.bias', 'encoder.down.3.block.0.norm2.weight', 'encoder.down.3.block.0.norm2.bias', 'encoder.down.3.block.0.conv2.weight', 'encoder.down.3.block.0.conv2.bias', 'encoder.down.3.block.1.norm1.weight', 'encoder.down.3.block.1.norm1.bias', 'encoder.down.3.block.1.conv1.weight', 'encoder.down.3.block.1.conv1.bias', 'encoder.down.3.block.1.norm2.weight', 'encoder.down.3.block.1.norm2.bias', 'encoder.down.3.block.1.conv2.weight', 'encoder.down.3.block.1.conv2.bias', 'encoder.mid.block_1.norm1.weight', 'encoder.mid.block_1.norm1.bias', 'encoder.mid.block_1.conv1.weight', 'encoder.mid.block_1.conv1.bias', 'encoder.mid.block_1.norm2.weight', 'encoder.mid.block_1.norm2.bias', 'encoder.mid.block_1.conv2.weight', 'encoder.mid.block_1.conv2.bias', 'encoder.mid.attn_1.norm.weight', 'encoder.mid.attn_1.norm.bias', 'encoder.mid.attn_1.q.weight', 'encoder.mid.attn_1.q.bias', 'encoder.mid.attn_1.k.weight', 'encoder.mid.attn_1.k.bias', 'encoder.mid.attn_1.v.weight', 'encoder.mid.attn_1.v.bias', 'encoder.mid.attn_1.proj_out.weight', 'encoder.mid.attn_1.proj_out.bias', 'encoder.mid.block_2.norm1.weight', 
'encoder.mid.block_2.norm1.bias', 'encoder.mid.block_2.conv1.weight', 'encoder.mid.block_2.conv1.bias', 'encoder.mid.block_2.norm2.weight', 'encoder.mid.block_2.norm2.bias', 'encoder.mid.block_2.conv2.weight', 'encoder.mid.block_2.conv2.bias', 'encoder.norm_out.weight', 'encoder.norm_out.bias', 'encoder.conv_out.weight', 'encoder.conv_out.bias', 'decoder.conv_in.weight', 'decoder.conv_in.bias', 'decoder.mid.block_1.norm1.weight', 'decoder.mid.block_1.norm1.bias', 'decoder.mid.block_1.conv1.weight', 'decoder.mid.block_1.conv1.bias', 'decoder.mid.block_1.norm2.weight', 'decoder.mid.block_1.norm2.bias', 'decoder.mid.block_1.conv2.weight', 'decoder.mid.block_1.conv2.bias', 'decoder.mid.attn_1.norm.weight', 'decoder.mid.attn_1.norm.bias', 'decoder.mid.attn_1.q.weight', 'decoder.mid.attn_1.q.bias', 'decoder.mid.attn_1.k.weight', 'decoder.mid.attn_1.k.bias', 'decoder.mid.attn_1.v.weight', 'decoder.mid.attn_1.v.bias', 'decoder.mid.attn_1.proj_out.weight', 'decoder.mid.attn_1.proj_out.bias', 'decoder.mid.block_2.norm1.weight', 'decoder.mid.block_2.norm1.bias', 'decoder.mid.block_2.conv1.weight', 'decoder.mid.block_2.conv1.bias', 'decoder.mid.block_2.norm2.weight', 'decoder.mid.block_2.norm2.bias', 'decoder.mid.block_2.conv2.weight', 'decoder.mid.block_2.conv2.bias', 'decoder.up.0.block.0.norm1.weight', 'decoder.up.0.block.0.norm1.bias', 'decoder.up.0.block.0.conv1.weight', 'decoder.up.0.block.0.conv1.bias', 'decoder.up.0.block.0.norm2.weight', 'decoder.up.0.block.0.norm2.bias', 'decoder.up.0.block.0.conv2.weight', 'decoder.up.0.block.0.conv2.bias', 'decoder.up.0.block.0.nin_shortcut.weight', 'decoder.up.0.block.0.nin_shortcut.bias', 'decoder.up.0.block.1.norm1.weight', 'decoder.up.0.block.1.norm1.bias', 'decoder.up.0.block.1.conv1.weight', 'decoder.up.0.block.1.conv1.bias', 'decoder.up.0.block.1.norm2.weight', 'decoder.up.0.block.1.norm2.bias', 'decoder.up.0.block.1.conv2.weight', 'decoder.up.0.block.1.conv2.bias', 'decoder.up.0.block.2.norm1.weight', 'decoder.up.0.block.2.norm1.bias', 'decoder.up.0.block.2.conv1.weight', 'decoder.up.0.block.2.conv1.bias', 'decoder.up.0.block.2.norm2.weight', 'decoder.up.0.block.2.norm2.bias', 'decoder.up.0.block.2.conv2.weight', 'decoder.up.0.block.2.conv2.bias', 'decoder.up.1.block.0.norm1.weight', 'decoder.up.1.block.0.norm1.bias', 'decoder.up.1.block.0.conv1.weight', 'decoder.up.1.block.0.conv1.bias', 'decoder.up.1.block.0.norm2.weight', 'decoder.up.1.block.0.norm2.bias', 'decoder.up.1.block.0.conv2.weight', 'decoder.up.1.block.0.conv2.bias', 'decoder.up.1.block.0.nin_shortcut.weight', 'decoder.up.1.block.0.nin_shortcut.bias', 'decoder.up.1.block.1.norm1.weight', 'decoder.up.1.block.1.norm1.bias', 'decoder.up.1.block.1.conv1.weight', 'decoder.up.1.block.1.conv1.bias', 'decoder.up.1.block.1.norm2.weight', 'decoder.up.1.block.1.norm2.bias', 'decoder.up.1.block.1.conv2.weight', 'decoder.up.1.block.1.conv2.bias', 'decoder.up.1.block.2.norm1.weight', 'decoder.up.1.block.2.norm1.bias', 'decoder.up.1.block.2.conv1.weight', 'decoder.up.1.block.2.conv1.bias', 'decoder.up.1.block.2.norm2.weight', 'decoder.up.1.block.2.norm2.bias', 'decoder.up.1.block.2.conv2.weight', 'decoder.up.1.block.2.conv2.bias', 'decoder.up.1.upsample.conv.weight', 'decoder.up.1.upsample.conv.bias', 'decoder.up.2.block.0.norm1.weight', 'decoder.up.2.block.0.norm1.bias', 'decoder.up.2.block.0.conv1.weight', 'decoder.up.2.block.0.conv1.bias', 'decoder.up.2.block.0.norm2.weight', 'decoder.up.2.block.0.norm2.bias', 'decoder.up.2.block.0.conv2.weight', 'decoder.up.2.block.0.conv2.bias', 
'decoder.up.2.block.1.norm1.weight', 'decoder.up.2.block.1.norm1.bias', 'decoder.up.2.block.1.conv1.weight', 'decoder.up.2.block.1.conv1.bias', 'decoder.up.2.block.1.norm2.weight', 'decoder.up.2.block.1.norm2.bias', 'decoder.up.2.block.1.conv2.weight', 'decoder.up.2.block.1.conv2.bias', 'decoder.up.2.block.2.norm1.weight', 'decoder.up.2.block.2.norm1.bias', 'decoder.up.2.block.2.conv1.weight', 'decoder.up.2.block.2.conv1.bias', 'decoder.up.2.block.2.norm2.weight', 'decoder.up.2.block.2.norm2.bias', 'decoder.up.2.block.2.conv2.weight', 'decoder.up.2.block.2.conv2.bias', 'decoder.up.2.upsample.conv.weight', 'decoder.up.2.upsample.conv.bias', 'decoder.up.3.block.0.norm1.weight', 'decoder.up.3.block.0.norm1.bias', 'decoder.up.3.block.0.conv1.weight', 'decoder.up.3.block.0.conv1.bias', 'decoder.up.3.block.0.norm2.weight', 'decoder.up.3.block.0.norm2.bias', 'decoder.up.3.block.0.conv2.weight', 'decoder.up.3.block.0.conv2.bias', 'decoder.up.3.block.1.norm1.weight', 'decoder.up.3.block.1.norm1.bias', 'decoder.up.3.block.1.conv1.weight', 'decoder.up.3.block.1.conv1.bias', 'decoder.up.3.block.1.norm2.weight', 'decoder.up.3.block.1.norm2.bias', 'decoder.up.3.block.1.conv2.weight', 'decoder.up.3.block.1.conv2.bias', 'decoder.up.3.block.2.norm1.weight', 'decoder.up.3.block.2.norm1.bias', 'decoder.up.3.block.2.conv1.weight', 'decoder.up.3.block.2.conv1.bias', 'decoder.up.3.block.2.norm2.weight', 'decoder.up.3.block.2.norm2.bias', 'decoder.up.3.block.2.conv2.weight', 'decoder.up.3.block.2.conv2.bias', 'decoder.up.3.upsample.conv.weight', 'decoder.up.3.upsample.conv.bias', 'decoder.norm_out.weight', 'decoder.norm_out.bias', 'decoder.conv_out.weight', 'decoder.conv_out.bias', 'loss.logvar', 'loss.perceptual_loss.net.slice1.0.weight', 'loss.perceptual_loss.net.slice1.0.bias', 'loss.perceptual_loss.net.slice1.2.weight', 'loss.perceptual_loss.net.slice1.2.bias', 'loss.perceptual_loss.net.slice2.5.weight', 'loss.perceptual_loss.net.slice2.5.bias', 'loss.perceptual_loss.net.slice2.7.weight', 'loss.perceptual_loss.net.slice2.7.bias', 'loss.perceptual_loss.net.slice3.10.weight', 'loss.perceptual_loss.net.slice3.10.bias', 'loss.perceptual_loss.net.slice3.12.weight', 'loss.perceptual_loss.net.slice3.12.bias', 'loss.perceptual_loss.net.slice3.14.weight', 'loss.perceptual_loss.net.slice3.14.bias', 'loss.perceptual_loss.net.slice4.17.weight', 'loss.perceptual_loss.net.slice4.17.bias', 'loss.perceptual_loss.net.slice4.19.weight', 'loss.perceptual_loss.net.slice4.19.bias', 'loss.perceptual_loss.net.slice4.21.weight', 'loss.perceptual_loss.net.slice4.21.bias', 'loss.perceptual_loss.net.slice5.24.weight', 'loss.perceptual_loss.net.slice5.24.bias', 'loss.perceptual_loss.net.slice5.26.weight', 'loss.perceptual_loss.net.slice5.26.bias', 'loss.perceptual_loss.net.slice5.28.weight', 'loss.perceptual_loss.net.slice5.28.bias', 'loss.perceptual_loss.lin0.model.1.weight', 'loss.perceptual_loss.lin1.model.1.weight', 'loss.perceptual_loss.lin2.model.1.weight', 'loss.perceptual_loss.lin3.model.1.weight', 'loss.perceptual_loss.lin4.model.1.weight', 'quant_conv.weight', 'quant_conv.bias', 'post_quant_conv.weight', 'post_quant_conv.bias']
Global seed set to 42
Loading model from C:/Users/leolo/Desktop/models/stablesr.ckpt
Global Step: 16500
LatentDiffusionSRTextWT: Running in eps-prediction mode
DiffusionWrapper has 918.93 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 64, 64) = 16384 dimensions.
making attention of type 'vanilla' with 512 in_channels
>>>>>>>>>>>>>>>>model>>>>>>>>>>>>>>>>>>>>
['diffusion_model.input_blocks.1.0.spade.param_free_norm.weight', 'diffusion_model.input_blocks.1.0.spade.param_free_norm.bias', 'diffusion_model.input_blocks.1.0.spade.mlp_shared.0.weight', 'diffusion_model.input_blocks.1.0.spade.mlp_shared.0.bias', 'diffusion_model.input_blocks.1.0.spade.mlp_gamma.weight', 'diffusion_model.input_blocks.1.0.spade.mlp_gamma.bias', 'diffusion_model.input_blocks.1.0.spade.mlp_beta.weight', 'diffusion_model.input_blocks.1.0.spade.mlp_beta.bias', 'diffusion_model.input_blocks.2.0.spade.param_free_norm.weight', 'diffusion_model.input_blocks.2.0.spade.param_free_norm.bias', 'diffusion_model.input_blocks.2.0.spade.mlp_shared.0.weight', 'diffusion_model.input_blocks.2.0.spade.mlp_shared.0.bias', 'diffusion_model.input_blocks.2.0.spade.mlp_gamma.weight', 'diffusion_model.input_blocks.2.0.spade.mlp_gamma.bias', 'diffusion_model.input_blocks.2.0.spade.mlp_beta.weight', 'diffusion_model.input_blocks.2.0.spade.mlp_beta.bias', 'diffusion_model.input_blocks.4.0.spade.param_free_norm.weight', 'diffusion_model.input_blocks.4.0.spade.param_free_norm.bias', 'diffusion_model.input_blocks.4.0.spade.mlp_shared.0.weight', 'diffusion_model.input_blocks.4.0.spade.mlp_shared.0.bias', 'diffusion_model.input_blocks.4.0.spade.mlp_gamma.weight', 'diffusion_model.input_blocks.4.0.spade.mlp_gamma.bias', 'diffusion_model.input_blocks.4.0.spade.mlp_beta.weight', 'diffusion_model.input_blocks.4.0.spade.mlp_beta.bias', 'diffusion_model.input_blocks.5.0.spade.param_free_norm.weight', 'diffusion_model.input_blocks.5.0.spade.param_free_norm.bias', 'diffusion_model.input_blocks.5.0.spade.mlp_shared.0.weight', 'diffusion_model.input_blocks.5.0.spade.mlp_shared.0.bias', 'diffusion_model.input_blocks.5.0.spade.mlp_gamma.weight', 'diffusion_model.input_blocks.5.0.spade.mlp_gamma.bias', 'diffusion_model.input_blocks.5.0.spade.mlp_beta.weight', 'diffusion_model.input_blocks.5.0.spade.mlp_beta.bias', 'diffusion_model.input_blocks.7.0.spade.param_free_norm.weight', 'diffusion_model.input_blocks.7.0.spade.param_free_norm.bias', 'diffusion_model.input_blocks.7.0.spade.mlp_shared.0.weight', 'diffusion_model.input_blocks.7.0.spade.mlp_shared.0.bias', 'diffusion_model.input_blocks.7.0.spade.mlp_gamma.weight', 'diffusion_model.input_blocks.7.0.spade.mlp_gamma.bias', 'diffusion_model.input_blocks.7.0.spade.mlp_beta.weight', 'diffusion_model.input_blocks.7.0.spade.mlp_beta.bias', 'diffusion_model.input_blocks.8.0.spade.param_free_norm.weight', 'diffusion_model.input_blocks.8.0.spade.param_free_norm.bias', 'diffusion_model.input_blocks.8.0.spade.mlp_shared.0.weight', 'diffusion_model.input_blocks.8.0.spade.mlp_shared.0.bias', 'diffusion_model.input_blocks.8.0.spade.mlp_gamma.weight', 'diffusion_model.input_blocks.8.0.spade.mlp_gamma.bias', 'diffusion_model.input_blocks.8.0.spade.mlp_beta.weight', 'diffusion_model.input_blocks.8.0.spade.mlp_beta.bias', 'diffusion_model.input_blocks.10.0.spade.param_free_norm.weight', 'diffusion_model.input_blocks.10.0.spade.param_free_norm.bias', 'diffusion_model.input_blocks.10.0.spade.mlp_shared.0.weight', 'diffusion_model.input_blocks.10.0.spade.mlp_shared.0.bias', 'diffusion_model.input_blocks.10.0.spade.mlp_gamma.weight', 'diffusion_model.input_blocks.10.0.spade.mlp_gamma.bias', 'diffusion_model.input_blocks.10.0.spade.mlp_beta.weight', 'diffusion_model.input_blocks.10.0.spade.mlp_beta.bias', 'diffusion_model.input_blocks.11.0.spade.param_free_norm.weight', 'diffusion_model.input_blocks.11.0.spade.param_free_norm.bias', 
'diffusion_model.input_blocks.11.0.spade.mlp_shared.0.weight', 'diffusion_model.input_blocks.11.0.spade.mlp_shared.0.bias', 'diffusion_model.input_blocks.11.0.spade.mlp_gamma.weight', 'diffusion_model.input_blocks.11.0.spade.mlp_gamma.bias', 'diffusion_model.input_blocks.11.0.spade.mlp_beta.weight', 'diffusion_model.input_blocks.11.0.spade.mlp_beta.bias', 'diffusion_model.middle_block.0.spade.param_free_norm.weight', 'diffusion_model.middle_block.0.spade.param_free_norm.bias', 'diffusion_model.middle_block.0.spade.mlp_shared.0.weight', 'diffusion_model.middle_block.0.spade.mlp_shared.0.bias', 'diffusion_model.middle_block.0.spade.mlp_gamma.weight', 'diffusion_model.middle_block.0.spade.mlp_gamma.bias', 'diffusion_model.middle_block.0.spade.mlp_beta.weight', 'diffusion_model.middle_block.0.spade.mlp_beta.bias', 'diffusion_model.middle_block.2.spade.param_free_norm.weight', 'diffusion_model.middle_block.2.spade.param_free_norm.bias', 'diffusion_model.middle_block.2.spade.mlp_shared.0.weight', 'diffusion_model.middle_block.2.spade.mlp_shared.0.bias', 'diffusion_model.middle_block.2.spade.mlp_gamma.weight', 'diffusion_model.middle_block.2.spade.mlp_gamma.bias', 'diffusion_model.middle_block.2.spade.mlp_beta.weight', 'diffusion_model.middle_block.2.spade.mlp_beta.bias', 'diffusion_model.output_blocks.0.0.spade.param_free_norm.weight', 'diffusion_model.output_blocks.0.0.spade.param_free_norm.bias', 'diffusion_model.output_blocks.0.0.spade.mlp_shared.0.weight', 'diffusion_model.output_blocks.0.0.spade.mlp_shared.0.bias', 'diffusion_model.output_blocks.0.0.spade.mlp_gamma.weight', 'diffusion_model.output_blocks.0.0.spade.mlp_gamma.bias', 'diffusion_model.output_blocks.0.0.spade.mlp_beta.weight', 'diffusion_model.output_blocks.0.0.spade.mlp_beta.bias', 'diffusion_model.output_blocks.1.0.spade.param_free_norm.weight', 'diffusion_model.output_blocks.1.0.spade.param_free_norm.bias', 'diffusion_model.output_blocks.1.0.spade.mlp_shared.0.weight', 'diffusion_model.output_blocks.1.0.spade.mlp_shared.0.bias', 'diffusion_model.output_blocks.1.0.spade.mlp_gamma.weight', 'diffusion_model.output_blocks.1.0.spade.mlp_gamma.bias', 'diffusion_model.output_blocks.1.0.spade.mlp_beta.weight', 'diffusion_model.output_blocks.1.0.spade.mlp_beta.bias', 'diffusion_model.output_blocks.2.0.spade.param_free_norm.weight', 'diffusion_model.output_blocks.2.0.spade.param_free_norm.bias', 'diffusion_model.output_blocks.2.0.spade.mlp_shared.0.weight', 'diffusion_model.output_blocks.2.0.spade.mlp_shared.0.bias', 'diffusion_model.output_blocks.2.0.spade.mlp_gamma.weight', 'diffusion_model.output_blocks.2.0.spade.mlp_gamma.bias', 'diffusion_model.output_blocks.2.0.spade.mlp_beta.weight', 'diffusion_model.output_blocks.2.0.spade.mlp_beta.bias', 'diffusion_model.output_blocks.3.0.spade.param_free_norm.weight', 'diffusion_model.output_blocks.3.0.spade.param_free_norm.bias', 'diffusion_model.output_blocks.3.0.spade.mlp_shared.0.weight', 'diffusion_model.output_blocks.3.0.spade.mlp_shared.0.bias', 'diffusion_model.output_blocks.3.0.spade.mlp_gamma.weight', 'diffusion_model.output_blocks.3.0.spade.mlp_gamma.bias', 'diffusion_model.output_blocks.3.0.spade.mlp_beta.weight', 'diffusion_model.output_blocks.3.0.spade.mlp_beta.bias', 'diffusion_model.output_blocks.4.0.spade.param_free_norm.weight', 'diffusion_model.output_blocks.4.0.spade.param_free_norm.bias', 'diffusion_model.output_blocks.4.0.spade.mlp_shared.0.weight', 'diffusion_model.output_blocks.4.0.spade.mlp_shared.0.bias', 'diffusion_model.output_blocks.4.0.spade.mlp_gamma.weight', 
'diffusion_model.output_blocks.4.0.spade.mlp_gamma.bias', 'diffusion_model.output_blocks.4.0.spade.mlp_beta.weight', 'diffusion_model.output_blocks.4.0.spade.mlp_beta.bias', 'diffusion_model.output_blocks.5.0.spade.param_free_norm.weight', 'diffusion_model.output_blocks.5.0.spade.param_free_norm.bias', 'diffusion_model.output_blocks.5.0.spade.mlp_shared.0.weight', 'diffusion_model.output_blocks.5.0.spade.mlp_shared.0.bias', 'diffusion_model.output_blocks.5.0.spade.mlp_gamma.weight', 'diffusion_model.output_blocks.5.0.spade.mlp_gamma.bias', 'diffusion_model.output_blocks.5.0.spade.mlp_beta.weight', 'diffusion_model.output_blocks.5.0.spade.mlp_beta.bias', 'diffusion_model.output_blocks.6.0.spade.param_free_norm.weight', 'diffusion_model.output_blocks.6.0.spade.param_free_norm.bias', 'diffusion_model.output_blocks.6.0.spade.mlp_shared.0.weight', 'diffusion_model.output_blocks.6.0.spade.mlp_shared.0.bias', 'diffusion_model.output_blocks.6.0.spade.mlp_gamma.weight', 'diffusion_model.output_blocks.6.0.spade.mlp_gamma.bias', 'diffusion_model.output_blocks.6.0.spade.mlp_beta.weight', 'diffusion_model.output_blocks.6.0.spade.mlp_beta.bias', 'diffusion_model.output_blocks.7.0.spade.param_free_norm.weight', 'diffusion_model.output_blocks.7.0.spade.param_free_norm.bias', 'diffusion_model.output_blocks.7.0.spade.mlp_shared.0.weight', 'diffusion_model.output_blocks.7.0.spade.mlp_shared.0.bias', 'diffusion_model.output_blocks.7.0.spade.mlp_gamma.weight', 'diffusion_model.output_blocks.7.0.spade.mlp_gamma.bias', 'diffusion_model.output_blocks.7.0.spade.mlp_beta.weight', 'diffusion_model.output_blocks.7.0.spade.mlp_beta.bias', 'diffusion_model.output_blocks.8.0.spade.param_free_norm.weight', 'diffusion_model.output_blocks.8.0.spade.param_free_norm.bias', 'diffusion_model.output_blocks.8.0.spade.mlp_shared.0.weight', 'diffusion_model.output_blocks.8.0.spade.mlp_shared.0.bias', 'diffusion_model.output_blocks.8.0.spade.mlp_gamma.weight', 'diffusion_model.output_blocks.8.0.spade.mlp_gamma.bias', 'diffusion_model.output_blocks.8.0.spade.mlp_beta.weight', 'diffusion_model.output_blocks.8.0.spade.mlp_beta.bias', 'diffusion_model.output_blocks.9.0.spade.param_free_norm.weight', 'diffusion_model.output_blocks.9.0.spade.param_free_norm.bias', 'diffusion_model.output_blocks.9.0.spade.mlp_shared.0.weight', 'diffusion_model.output_blocks.9.0.spade.mlp_shared.0.bias', 'diffusion_model.output_blocks.9.0.spade.mlp_gamma.weight', 'diffusion_model.output_blocks.9.0.spade.mlp_gamma.bias', 'diffusion_model.output_blocks.9.0.spade.mlp_beta.weight', 'diffusion_model.output_blocks.9.0.spade.mlp_beta.bias', 'diffusion_model.output_blocks.10.0.spade.param_free_norm.weight', 'diffusion_model.output_blocks.10.0.spade.param_free_norm.bias', 'diffusion_model.output_blocks.10.0.spade.mlp_shared.0.weight', 'diffusion_model.output_blocks.10.0.spade.mlp_shared.0.bias', 'diffusion_model.output_blocks.10.0.spade.mlp_gamma.weight', 'diffusion_model.output_blocks.10.0.spade.mlp_gamma.bias', 'diffusion_model.output_blocks.10.0.spade.mlp_beta.weight', 'diffusion_model.output_blocks.10.0.spade.mlp_beta.bias', 'diffusion_model.output_blocks.11.0.spade.param_free_norm.weight', 'diffusion_model.output_blocks.11.0.spade.param_free_norm.bias', 'diffusion_model.output_blocks.11.0.spade.mlp_shared.0.weight', 'diffusion_model.output_blocks.11.0.spade.mlp_shared.0.bias', 'diffusion_model.output_blocks.11.0.spade.mlp_gamma.weight', 'diffusion_model.output_blocks.11.0.spade.mlp_gamma.bias', 'diffusion_model.output_blocks.11.0.spade.mlp_beta.weight', 
'diffusion_model.output_blocks.11.0.spade.mlp_beta.bias']
>>>>>>>>>>>>>>>>>cond_stage_model>>>>>>>>>>>>>>>>>>>
[]
>>>>>>>>>>>>>>>>structcond_stage_model>>>>>>>>>>>>>>>>>>>>
['time_embed.0.weight', 'time_embed.0.bias', 'time_embed.2.weight', 'time_embed.2.bias', 'input_blocks.0.0.weight', 'input_blocks.0.0.bias', 'input_blocks.1.0.in_layers.0.weight', 'input_blocks.1.0.in_layers.0.bias', 'input_blocks.1.0.in_layers.2.weight', 'input_blocks.1.0.in_layers.2.bias', 'input_blocks.1.0.emb_layers.1.weight', 'input_blocks.1.0.emb_layers.1.bias', 'input_blocks.1.0.out_layers.0.weight', 'input_blocks.1.0.out_layers.0.bias', 'input_blocks.1.0.out_layers.3.weight', 'input_blocks.1.0.out_layers.3.bias', 'input_blocks.1.1.norm.weight', 'input_blocks.1.1.norm.bias', 'input_blocks.1.1.qkv.weight', 'input_blocks.1.1.qkv.bias', 'input_blocks.1.1.proj_out.weight', 'input_blocks.1.1.proj_out.bias', 'input_blocks.2.0.in_layers.0.weight', 'input_blocks.2.0.in_layers.0.bias', 'input_blocks.2.0.in_layers.2.weight', 'input_blocks.2.0.in_layers.2.bias', 'input_blocks.2.0.emb_layers.1.weight', 'input_blocks.2.0.emb_layers.1.bias', 'input_blocks.2.0.out_layers.0.weight', 'input_blocks.2.0.out_layers.0.bias', 'input_blocks.2.0.out_layers.3.weight', 'input_blocks.2.0.out_layers.3.bias', 'input_blocks.2.1.norm.weight', 'input_blocks.2.1.norm.bias', 'input_blocks.2.1.qkv.weight', 'input_blocks.2.1.qkv.bias', 'input_blocks.2.1.proj_out.weight', 'input_blocks.2.1.proj_out.bias', 'input_blocks.3.0.op.weight', 'input_blocks.3.0.op.bias', 'input_blocks.4.0.in_layers.0.weight', 'input_blocks.4.0.in_layers.0.bias', 'input_blocks.4.0.in_layers.2.weight', 'input_blocks.4.0.in_layers.2.bias', 'input_blocks.4.0.emb_layers.1.weight', 'input_blocks.4.0.emb_layers.1.bias', 'input_blocks.4.0.out_layers.0.weight', 'input_blocks.4.0.out_layers.0.bias', 'input_blocks.4.0.out_layers.3.weight', 'input_blocks.4.0.out_layers.3.bias', 'input_blocks.4.1.norm.weight', 'input_blocks.4.1.norm.bias', 'input_blocks.4.1.qkv.weight', 'input_blocks.4.1.qkv.bias', 'input_blocks.4.1.proj_out.weight', 'input_blocks.4.1.proj_out.bias', 'input_blocks.5.0.in_layers.0.weight', 'input_blocks.5.0.in_layers.0.bias', 'input_blocks.5.0.in_layers.2.weight', 'input_blocks.5.0.in_layers.2.bias', 'input_blocks.5.0.emb_layers.1.weight', 'input_blocks.5.0.emb_layers.1.bias', 'input_blocks.5.0.out_layers.0.weight', 'input_blocks.5.0.out_layers.0.bias', 'input_blocks.5.0.out_layers.3.weight', 'input_blocks.5.0.out_layers.3.bias', 'input_blocks.5.1.norm.weight', 'input_blocks.5.1.norm.bias', 'input_blocks.5.1.qkv.weight', 'input_blocks.5.1.qkv.bias', 'input_blocks.5.1.proj_out.weight', 'input_blocks.5.1.proj_out.bias', 'input_blocks.6.0.op.weight', 'input_blocks.6.0.op.bias', 'input_blocks.7.0.in_layers.0.weight', 'input_blocks.7.0.in_layers.0.bias', 'input_blocks.7.0.in_layers.2.weight', 'input_blocks.7.0.in_layers.2.bias', 'input_blocks.7.0.emb_layers.1.weight', 'input_blocks.7.0.emb_layers.1.bias', 'input_blocks.7.0.out_layers.0.weight', 'input_blocks.7.0.out_layers.0.bias', 'input_blocks.7.0.out_layers.3.weight', 'input_blocks.7.0.out_layers.3.bias', 'input_blocks.7.0.skip_connection.weight', 'input_blocks.7.0.skip_connection.bias', 'input_blocks.7.1.norm.weight', 'input_blocks.7.1.norm.bias', 'input_blocks.7.1.qkv.weight', 'input_blocks.7.1.qkv.bias', 'input_blocks.7.1.proj_out.weight', 'input_blocks.7.1.proj_out.bias', 'input_blocks.8.0.in_layers.0.weight', 'input_blocks.8.0.in_layers.0.bias', 'input_blocks.8.0.in_layers.2.weight', 'input_blocks.8.0.in_layers.2.bias', 'input_blocks.8.0.emb_layers.1.weight', 'input_blocks.8.0.emb_layers.1.bias', 'input_blocks.8.0.out_layers.0.weight', 'input_blocks.8.0.out_layers.0.bias', 
'input_blocks.8.0.out_layers.3.weight', 'input_blocks.8.0.out_layers.3.bias', 'input_blocks.8.1.norm.weight', 'input_blocks.8.1.norm.bias', 'input_blocks.8.1.qkv.weight', 'input_blocks.8.1.qkv.bias', 'input_blocks.8.1.proj_out.weight', 'input_blocks.8.1.proj_out.bias', 'input_blocks.9.0.op.weight', 'input_blocks.9.0.op.bias', 'input_blocks.10.0.in_layers.0.weight', 'input_blocks.10.0.in_layers.0.bias', 'input_blocks.10.0.in_layers.2.weight', 'input_blocks.10.0.in_layers.2.bias', 'input_blocks.10.0.emb_layers.1.weight', 'input_blocks.10.0.emb_layers.1.bias', 'input_blocks.10.0.out_layers.0.weight', 'input_blocks.10.0.out_layers.0.bias', 'input_blocks.10.0.out_layers.3.weight', 'input_blocks.10.0.out_layers.3.bias', 'input_blocks.11.0.in_layers.0.weight', 'input_blocks.11.0.in_layers.0.bias', 'input_blocks.11.0.in_layers.2.weight', 'input_blocks.11.0.in_layers.2.bias', 'input_blocks.11.0.emb_layers.1.weight', 'input_blocks.11.0.emb_layers.1.bias', 'input_blocks.11.0.out_layers.0.weight', 'input_blocks.11.0.out_layers.0.bias', 'input_blocks.11.0.out_layers.3.weight', 'input_blocks.11.0.out_layers.3.bias', 'middle_block.0.in_layers.0.weight', 'middle_block.0.in_layers.0.bias', 'middle_block.0.in_layers.2.weight', 'middle_block.0.in_layers.2.bias', 'middle_block.0.emb_layers.1.weight', 'middle_block.0.emb_layers.1.bias', 'middle_block.0.out_layers.0.weight', 'middle_block.0.out_layers.0.bias', 'middle_block.0.out_layers.3.weight', 'middle_block.0.out_layers.3.bias', 'middle_block.1.norm.weight', 'middle_block.1.norm.bias', 'middle_block.1.qkv.weight', 'middle_block.1.qkv.bias', 'middle_block.1.proj_out.weight', 'middle_block.1.proj_out.bias', 'middle_block.2.in_layers.0.weight', 'middle_block.2.in_layers.0.bias', 'middle_block.2.in_layers.2.weight', 'middle_block.2.in_layers.2.bias', 'middle_block.2.emb_layers.1.weight', 'middle_block.2.emb_layers.1.bias', 'middle_block.2.out_layers.0.weight', 'middle_block.2.out_layers.0.bias', 'middle_block.2.out_layers.3.weight', 'middle_block.2.out_layers.3.bias', 'fea_tran.0.in_layers.0.weight', 'fea_tran.0.in_layers.0.bias', 'fea_tran.0.in_layers.2.weight', 'fea_tran.0.in_layers.2.bias', 'fea_tran.0.emb_layers.1.weight', 'fea_tran.0.emb_layers.1.bias', 'fea_tran.0.out_layers.0.weight', 'fea_tran.0.out_layers.0.bias', 'fea_tran.0.out_layers.3.weight', 'fea_tran.0.out_layers.3.bias', 'fea_tran.1.in_layers.0.weight', 'fea_tran.1.in_layers.0.bias', 'fea_tran.1.in_layers.2.weight', 'fea_tran.1.in_layers.2.bias', 'fea_tran.1.emb_layers.1.weight', 'fea_tran.1.emb_layers.1.bias', 'fea_tran.1.out_layers.0.weight', 'fea_tran.1.out_layers.0.bias', 'fea_tran.1.out_layers.3.weight', 'fea_tran.1.out_layers.3.bias', 'fea_tran.2.in_layers.0.weight', 'fea_tran.2.in_layers.0.bias', 'fea_tran.2.in_layers.2.weight', 'fea_tran.2.in_layers.2.bias', 'fea_tran.2.emb_layers.1.weight', 'fea_tran.2.emb_layers.1.bias', 'fea_tran.2.out_layers.0.weight', 'fea_tran.2.out_layers.0.bias', 'fea_tran.2.out_layers.3.weight', 'fea_tran.2.out_layers.3.bias', 'fea_tran.2.skip_connection.weight', 'fea_tran.2.skip_connection.bias', 'fea_tran.3.in_layers.0.weight', 'fea_tran.3.in_layers.0.bias', 'fea_tran.3.in_layers.2.weight', 'fea_tran.3.in_layers.2.bias', 'fea_tran.3.emb_layers.1.weight', 'fea_tran.3.emb_layers.1.bias', 'fea_tran.3.out_layers.0.weight', 'fea_tran.3.out_layers.0.bias', 'fea_tran.3.out_layers.3.weight', 'fea_tran.3.out_layers.3.bias', 'fea_tran.3.skip_connection.weight', 'fea_tran.3.skip_connection.bias']
Traceback (most recent call last):
  File "C:\Users\leolo\StableSR\scripts\sr_val_ddpm_text_T_vqganfin_old.py", line 404, in <module>
    main()
  File "C:\Users\leolo\StableSR\scripts\sr_val_ddpm_text_T_vqganfin_old.py", line 277, in main
    model = load_model_from_config(config, f"{opt.ckpt}")
  File "C:\Users\leolo\StableSR\scripts\sr_val_ddpm_text_T_vqganfin_old.py", line 127, in load_model_from_config
    model.cuda()
  File "C:\Users\leolo\.conda\envs\stablesr\lib\site-packages\pytorch_lightning\core\mixins\device_dtype_mixin.py", line 127, in cuda
    return super().cuda(device=device)
  File "C:\Users\leolo\.conda\envs\stablesr\lib\site-packages\torch\nn\modules\module.py", line 689, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "C:\Users\leolo\.conda\envs\stablesr\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
    module._apply(fn)
  File "C:\Users\leolo\.conda\envs\stablesr\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
    module._apply(fn)
  File "C:\Users\leolo\.conda\envs\stablesr\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
    module._apply(fn)
  [Previous line repeated 3 more times]
  File "C:\Users\leolo\.conda\envs\stablesr\lib\site-packages\torch\nn\modules\module.py", line 602, in _apply
    param_applied = fn(param)
  File "C:\Users\leolo\.conda\envs\stablesr\lib\site-packages\torch\nn\modules\module.py", line 689, in <lambda>
    return self._apply(lambda t: t.cuda(device))
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 3.43 GiB already allocated; 0 bytes free; 3.54 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Linked Colab Demo doesn't work.

This project looks really promising, but the given demo doesn't work and makes Colab fully crash. I don't know if Colab is having problems right now, but after trying to run the install cell of your Colab demo, I can't even disconnect/reconnect after the crash; I can't even delete the runtime and must wait hours for a server-side reset of my account... :(

It seems that StableSR uses VAE (VAEGAN) instead of VQGAN.

Thanks for your great work! I noticed that StableSR uses ldm.models.autoencoder.AutoencoderKL instead of ldm.models.autoencoder.VQModel as its pre-trained autoencoder. Since a VQGAN has a codebook that quantizes continuous feature vectors into discrete ones, would it be better to revise the original paper to reduce this ambiguity?

Inability to fetch new data.

Thank you for your contribution. When I try to switch to other data, such as three-dimensional scientific data, for training, I can initially fetch the data correctly. However, it gets stuck in the middle of the first epoch, and upon debugging I found that the trainer's queue is not being refilled with new data. Could you please advise which parameter I can modify so that my training can proceed?
image
Fetching data with the original dataloader works completely normally.

Where do hyperparameters of RealESRGAN degradation come from?

Hi, and thank you for the awesome work!

Upon inspecting the provided configs, I've noticed there are differences in the degradation hyperparams in https://github.com/IceClear/StableSR/blob/main/configs/stableSRNew/v2-finetune_text_T_512.yaml compared with other implementations.

For example, the first degradation is written with the following config:

degradation:
  # the first degradation process
  resize_prob: [0.2, 0.7, 0.1]  # up, down, keep
  resize_range: [0.3, 1.5]
  gaussian_noise_prob: 0.5
  noise_range: [1, 15]
  poisson_scale_range: [0.05, 2.0]
  gray_noise_prob: 0.4
  jpeg_range: [60, 95]

while in https://github.com/xinntao/Real-ESRGAN/blob/master/options/finetune_realesrgan_x4plus.yml the config defines:

# the first degradation process
resize_prob: [0.2, 0.7, 0.1]  # up, down, keep
resize_range: [0.15, 1.5]
gaussian_noise_prob: 0.5
noise_range: [1, 30]
poisson_scale_range: [0.05, 3]
gray_noise_prob: 0.4
jpeg_range: [30, 95]

Are there any reasons for these discrepancies? I haven't found this explained anywhere in the paper. Also, when you measure the metrics in Table 1, do you create the LR images with the first or the second degradation settings?

Thanks in advance!

512 vs 768

I really like the 512 model and thought the 768 one would improve things. However, the 768 model introduces strange texture patterns, such as lines and crosshatches, on various surfaces when using the recommended settings (adding prompts and a CFG scale of 7). The 512 model does not seem to produce these imperfections, and adding prompts and increasing the CFG scale also improves 512 greatly, so it seems to be the best choice for now. Loving this addon nonetheless!

StableSR distorts Chinese characters in image

Good work, and thanks for sharing the source code. From the examples shown in the paper, StableSR works very well at restoring text in images, but when I tested it on my own image, I found that the model distorted the text.

The test script is scripts/sr_val_ddpm_text_T_vqganfin_old.py,

the LR image is
lr
and the restored image is

restored_hr

Do you have any suggestions for that? Thanks

question about using LDM or DM

Hello, I found that you split the larger image into several overlapping smaller patches and process each one individually. My question is: since the large image is already split, why still train and sample in latent space? Why not train and sample directly in pixel space with a plain DM?
In fact, I found that many diffusion-based SR methods don't split images into small patches for training, unlike earlier CNN- and GAN-based methods; they would rather put a larger image into a "tight" latent space to compress it. This confuses me a lot; I hope you can help.
Thanks in advance!

Diverse super-resolution results

Thanks for your great work! I ran scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas.py twice and found the two super-resolution results have slight differences. Is it normal? Looking forward to your reply.
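
Some run-to-run variation is expected: DDPM sampling starts from random noise and injects fresh noise at every step, so two runs only match if every random source is pinned down. A minimal reproducibility sketch (separate from whatever seeding the script itself does):

import torch
from pytorch_lightning import seed_everything

seed_everything(42)                        # fix Python/NumPy/torch RNG seeds
torch.backends.cudnn.deterministic = True  # trade some speed for reproducibility
torch.backends.cudnn.benchmark = False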

CFW training

Hello, I'm confused about the CFW training.
First, latent codes (4D tensors) of HR images are needed to train CFW, but your paper (Sec. 4.1, Implementation Details) says: "Then we adopt the fine-tuned diffusion model to generate the corresponding latent codes Z0 given the above LR images as conditions." So what exactly is needed to train CFW? I think the paper is right, so why do we need the HR-image latents rather than the LR-image latents?
Second, how does fusion_weight work during training? Should we select fusion_w randomly in the range [0, 1]? I cannot find this in the code.

Question for Finetuning

Dear author,

I tried to reproduce your work and currently want to validate the results generated by the fine-tuned SD model. I used a single A100 GPU for fine-tuning (currently at around epoch 130) and tested with the script "sr_val_ddpm_text_T_vqganfin_old.py", resetting the ckpt path and changing dec_w to 0.0. The test results show almost no difference from the input, but in the training log the validation results look pretty good. Do you have any idea what causes this?

Thanks a lot!

Implementation details about inserting SFT layers

Hi, I'm reading your code. As the README says, you trained the time-aware encoder with SFT first, and then trained CFW.

But I noticed that, in the code, the SFT-style fusion operation only appears in Decoder_Mix as fuse_layer, which runs during the CFW stage.
I did not find a "trainable SFT layer" in the UNet-related or time-aware-encoder-related code.

Could you please explain in more detail where the SFT layers are inserted in the UNet or the time-aware encoder?
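
I can't answer for the authors, but the parameter names in the long logs earlier on this page (spade.param_free_norm, spade.mlp_shared, spade.mlp_gamma, spade.mlp_beta inside diffusion_model.input_blocks/middle_block/output_blocks) suggest the SFT-style modulation is realized as SPADE layers attached to the UNet residual blocks. A generic SPADE-style modulation sketch, not the repo's exact module:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpadeLikeSFT(nn.Module):
    # generic SPADE-style spatial modulation; names mirror the log above,
    # but the real module in the repo may differ in details
    def __init__(self, feat_ch, cond_ch, hidden_ch=128):
        super().__init__()
        self.param_free_norm = nn.GroupNorm(32, feat_ch, affine=False)
        self.mlp_shared = nn.Sequential(nn.Conv2d(cond_ch, hidden_ch, 3, padding=1), nn.SiLU())
        self.mlp_gamma = nn.Conv2d(hidden_ch, feat_ch, 3, padding=1)
        self.mlp_beta = nn.Conv2d(hidden_ch, feat_ch, 3, padding=1)

    def forward(self, feat, cond):
        # cond: features from the time-aware encoder, resized to match feat
        cond = F.interpolate(cond, size=feat.shape[2:], mode='nearest')
        actv = self.mlp_shared(cond)
        return self.param_free_norm(feat) * (1 + self.mlp_gamma(actv)) + self.mlp_beta(actv)

x = torch.randn(1, 64, 32, 32)
c = torch.randn(1, 256, 64, 64)
out = SpadeLikeSFT(feat_ch=64, cond_ch=256)(x, c)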

Getting weird output images when using the pretrained model

Hi, thanks for your great work. Unlike others who get correct SR images with your provided pretrained models, I get very weird results when using them. The command is as follows:

python scripts/sr_val_ddpm_text_T_vqganfin_old.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt pretrained_models/stablesr_000117.ckpt --vqgan_ckpt pretrained_models/vqgan_cfw_00011.ckpt --init-img inputs/test_example --outdir outputs/user_upload --ddpm_steps 200 --dec_w 0.5 --colorfix_type adain

When using the input image 'inputs/test_example/OST_120.png', the result looks like this:
OST_120

The command-line output looks like this:
info1
info2
info3

I do not know where the problem is. I would appreciate it if anyone could help.

How to train a VQGAN model for a specialized task?

Hello, thanks for your detailed introduction; it couldn't be more helpful to me. I have understood how to train SFT and CFW, but the VQGAN model confuses me. I wonder whether that model affects a new task; it seems that the VQGAN model just provides... Thanks again.

About zooming in and adding more details

  1. Looking forward to the release of the code.
  2. May I ask whether this project can enlarge images while adding more detail, and what kinds of projects can be implemented with it?

Training Details

Hi,

Thank you for providing the training code. As I am trying to retrain the model, I wish to clarify a few details below:

  1. When downloading v2-1_512-ema-pruned.ckpt (from here) as instructed, there are some unexpected keys; I wonder what they are used for?
Unexpected Keys: ['model_ema.decay', 'model_ema.num_updates']
  2. In the implementation the SFT layers are replaced with SPADE layers; I wonder what the difference between them is?

  3. I notice that the model repeatedly checks hasattr(self, "split_input_params"), which as far as I have observed is always False; may I know what this argument is used for? When it is True, it seems to use torch.nn.Fold to encode rather than the VAE encoder.

  4. It seems the paper trains on DIV2K with HR resolution 512x512. However, the original data has 2K resolution; I wonder what is done (perhaps resizing) to fit this configuration?

  5. Do we train with aggregation sampling? It seems from Appendix B that it has no learnable parameters.

  6. May I know what the training losses are? In particular, do you have an image-based loss (on pixels rather than on latents)? If so, how do you handle the case mentioned in 5), with training image resolution > 512?

Thank you in advance for your time and your answers!
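
On the DIV2K resolution point (item 4 above), a common way to fit 2K images into 512x512 training samples is simply random cropping; the snippet below is a hypothetical preprocessing sketch, not the repository's dataloader.

import random
from PIL import Image

def random_crop_512(path, crop=512):
    # Hypothetical preprocessing: assumes the source image is at least crop x crop.
    img = Image.open(path).convert('RGB')
    w, h = img.size
    x = random.randint(0, w - crop)
    y = random.randint(0, h - crop)
    return img.crop((x, y, x + crop, y + crop))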

SFT dataset

Hello, could you tell me what kind of data I need for SFT training and where to put it?

Testing images of other sizes

I tested a 128 x 128 input image with this script:
python scripts/sr_val_ddpm_text_T_vqganfin_old.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt /data/work/StableSR-main/model/stablesr_000117.ckpt --vqgan_ckpt /data/work/StableSR-main/model/vqgan_cfw_00011.ckpt --init-img /data/work/StableSR/datasetoyj/plant/ --outdir /data/work/StableSR/datasetoyj/plant_out --ddpm_steps 200 --dec_w 0.5
There was no problem.

But I encountered an error when testing images of other sizes with this script:
python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt /data/work/StableSR-main/model/stablesr_000117.ckpt --vqgan_ckpt /data/work/StableSR-main/model/vqgan_cfw_00011.ckpt --init-img /data/work/StableSR/datasetoyj/human_all/ --outdir /data/work/StableSR/datasetoyj/human_all_out --ddpm_steps 200 --dec_w 0.5 --vqgantile_size 256 --vqgantile_stride 200
The error is:
...
/data/work/StableSR/ldm/models/diffusion/ddpm.py:2621 in p_mean_variance_canvas

  2618   # print(noise_preds[row][col].size())
  2619   # print(tile_weights.size())
  2620   # print(noise_pred.size())
> 2621   noise_pred[:, :, input_start_y:input_end_y, input_sta...
  2622   contributors[:, :, input_start_y:input_end_y, input_s...
  2623   # Average overlapping areas with more than 1 contributor
  2624   noise_pred /= contributors

RuntimeError: The size of tensor a (32) must match the size of tensor b (64) at non-singleton dimension 3
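
This kind of size mismatch in the tiled sampler usually points to input dimensions that do not divide evenly into the tile grid. As a hedged workaround (not an official fix), one can reflect-pad the LR image so both sides are multiples of a convenient granularity, e.g. 64 pixels, before running the script, and crop the output back afterwards:

import torch.nn.functional as F

def pad_to_multiple(img, multiple=64):
    # img: (B, C, H, W); reflect-pad so H and W are divisible by `multiple`.
    # Remember the original size so the upscaled output can be cropped back.
    h, w = img.shape[-2:]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    return F.pad(img, (0, pad_w, 0, pad_h), mode='reflect'), (h, w)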

Blurred Image

The result after running the second test script:
panda_2
The original image:
panda

I did not train the model on my own dataset; I just ran the second test script with the pretrained weights. Why does the image become blurred?

Sampling strategy DDPM vs DDIM

Hi, thanks for your excellent work!

I have a question about the sampling strategy. Is there any specific reason why you decided to use DDPM sampling instead of DDIM? DDIM should produce comparable results while requiring considerably fewer iterations. Did you try to use DDIM? If yes, what happened?

Thanks!

Claudio
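
For anyone who wants to experiment, the stock latent-diffusion DDIM sampler can in principle be tried as below; note that model and c are placeholders, and StableSR's structure conditioning may need extra plumbing, so treat this as a sketch rather than a supported path.

from ldm.models.diffusion.ddim import DDIMSampler

def ddim_sample(model, c, steps=50):
    # model: a loaded LatentDiffusion instance; c: its conditioning (placeholders).
    sampler = DDIMSampler(model)
    samples, _ = sampler.sample(S=steps, batch_size=1, shape=[4, 64, 64],
                                conditioning=c, eta=0.0, verbose=False)
    return samples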

about vqgantile_size and vqgantile_stride

Can vqgantile_size be any integer greater than or equal to 512?

To accelerate inference, I used a smaller vqgantile_size, but when I ran the following command:
python sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py --config configs/stableSRNew/v2-finetune_text_T_512.yaml --ckpt ckpt/stablesr_000117.ckpt --vqgan_ckpt ckpt/vqgan_cfw_00011.ckpt --init-img inputs/ --outdir output --ddpm_steps 60 --dec_w 0.5 --vqgantile_size 530 --vqgantile_stride 500 --upscale 4 --colorfix_type wavelet

I received the following error:
image

But with vqgantile_size=512, the script always runs normally.
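
As a hedged heuristic (not a documented requirement), non-standard tile sizes can be snapped to a multiple of 64 so they map cleanly onto the latent grid:

def snap_tile_size(size, multiple=64, minimum=512):
    # Hypothetical helper: round a requested vqgantile_size down to a safer value.
    return max(minimum, (size // multiple) * multiple)

print(snap_tile_size(530))  # -> 512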

Colab notebook not running; no error shown, just warnings

I am running the notebook on Colab as-is. As shown below, it prints a warning about xformers and then stops, as if there were a keyboard interrupt.

color correction>>>>>>>>>>>
Use adain color correction

Loading model from ./vqgan_cfw_00011.ckpt
Global Step: 18000
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 1.13.1 with CUDA None (you have 2.0.1+cu117)
Python 3.10.11 (you have 3.10.10)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
/usr/local/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be removed in 0.17. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 64, 64) = 16384 dimensions.
making attention of type 'vanilla' with 512 in_channels
/usr/local/lib/python3.10/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/usr/local/lib/python3.10/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing weights=VGG16_Weights.IMAGENET1K_V1. You can also use weights=VGG16_Weights.DEFAULT to get the most up-to-date weights.
warnings.warn(msg)
Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to /root/.cache/torch/hub/checkpoints/vgg16-397923af.pth
100% 528M/528M [00:02<00:00, 209MB/s]
Downloading vgg_lpips model from https://heibox.uni-heidelberg.de/f/607503859c864bc1b30b/?dl=1 to taming/modules/autoencoder/lpips/vgg.pth
8.19kB [00:00, 256kB/s]
loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips/vgg.pth

missing>>>>>>>>>>>>>>>>>>>
[]
trainable_list>>>>>>>>>>>>>>>>>>>
['decoder.fusion_layer_2.encode_enc_1.norm1.weight', 'decoder.fusion_layer_2.encode_enc_1.norm1.bias', 'decoder.fusion_layer_2.encode_enc_1.conv1.weight', 'decoder.fusion_layer_2.encode_enc_1.conv1.bias', 'decoder.fusion_layer_2.encode_enc_1.norm2.weight', 'decoder.fusion_layer_2.encode_enc_1.norm2.bias', 'decoder.fusion_layer_2.encode_enc_1.conv2.weight', 'decoder.fusion_layer_2.encode_enc_1.conv2.bias', 'decoder.fusion_layer_2.encode_enc_1.conv_out.weight', 'decoder.fusion_layer_2.encode_enc_1.conv_out.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv1.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv1.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv2.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv2.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv3.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv3.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv4.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv4.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv5.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb1.conv5.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv1.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv1.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv2.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv2.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv3.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv3.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv4.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv4.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv5.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb2.conv5.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv1.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv1.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv2.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv2.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv3.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv3.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv4.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv4.bias', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv5.weight', 'decoder.fusion_layer_2.encode_enc_2.0.rdb3.conv5.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv1.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv1.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv2.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv2.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv3.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv3.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv4.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv4.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv5.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb1.conv5.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv1.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv1.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv2.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv2.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv3.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv3.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv4.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv4.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv5.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb2.conv5.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv1.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv1.bias', 
'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv2.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv2.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv3.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv3.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv4.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv4.bias', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv5.weight', 'decoder.fusion_layer_2.encode_enc_2.1.rdb3.conv5.bias', 'decoder.fusion_layer_2.encode_enc_3.norm1.weight', 'decoder.fusion_layer_2.encode_enc_3.norm1.bias', 'decoder.fusion_layer_2.encode_enc_3.conv1.weight', 'decoder.fusion_layer_2.encode_enc_3.conv1.bias', 'decoder.fusion_layer_2.encode_enc_3.norm2.weight', 'decoder.fusion_layer_2.encode_enc_3.norm2.bias', 'decoder.fusion_layer_2.encode_enc_3.conv2.weight', 'decoder.fusion_layer_2.encode_enc_3.conv2.bias', 'decoder.fusion_layer_1.encode_enc_1.norm1.weight', 'decoder.fusion_layer_1.encode_enc_1.norm1.bias', 'decoder.fusion_layer_1.encode_enc_1.conv1.weight', 'decoder.fusion_layer_1.encode_enc_1.conv1.bias', 'decoder.fusion_layer_1.encode_enc_1.norm2.weight', 'decoder.fusion_layer_1.encode_enc_1.norm2.bias', 'decoder.fusion_layer_1.encode_enc_1.conv2.weight', 'decoder.fusion_layer_1.encode_enc_1.conv2.bias', 'decoder.fusion_layer_1.encode_enc_1.conv_out.weight', 'decoder.fusion_layer_1.encode_enc_1.conv_out.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv1.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv1.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv2.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv2.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv3.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv3.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv4.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv4.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv5.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb1.conv5.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv1.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv1.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv2.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv2.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv3.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv3.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv4.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv4.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv5.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb2.conv5.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv1.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv1.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv2.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv2.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv3.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv3.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv4.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv4.bias', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv5.weight', 'decoder.fusion_layer_1.encode_enc_2.0.rdb3.conv5.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv1.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv1.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv2.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv2.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv3.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv3.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv4.weight', 
'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv4.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv5.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb1.conv5.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv1.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv1.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv2.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv2.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv3.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv3.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv4.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv4.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv5.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb2.conv5.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv1.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv1.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv2.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv2.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv3.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv3.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv4.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv4.bias', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv5.weight', 'decoder.fusion_layer_1.encode_enc_2.1.rdb3.conv5.bias', 'decoder.fusion_layer_1.encode_enc_3.norm1.weight', 'decoder.fusion_layer_1.encode_enc_3.norm1.bias', 'decoder.fusion_layer_1.encode_enc_3.conv1.weight', 'decoder.fusion_layer_1.encode_enc_3.conv1.bias', 'decoder.fusion_layer_1.encode_enc_3.norm2.weight', 'decoder.fusion_layer_1.encode_enc_3.norm2.bias', 'decoder.fusion_layer_1.encode_enc_3.conv2.weight', 'decoder.fusion_layer_1.encode_enc_3.conv2.bias', 'loss.discriminator.main.0.weight', 'loss.discriminator.main.0.bias', 'loss.discriminator.main.2.weight', 'loss.discriminator.main.3.weight', 'loss.discriminator.main.3.bias', 'loss.discriminator.main.5.weight', 'loss.discriminator.main.6.weight', 'loss.discriminator.main.6.bias', 'loss.discriminator.main.8.weight', 'loss.discriminator.main.9.weight', 'loss.discriminator.main.9.bias', 'loss.discriminator.main.11.weight', 'loss.discriminator.main.11.bias']
Untrainable_list>>>>>>>>>>>>>>>>>>>
['encoder.conv_in.weight', 'encoder.conv_in.bias', 'encoder.down.0.block.0.norm1.weight', 'encoder.down.0.block.0.norm1.bias', 'encoder.down.0.block.0.conv1.weight', 'encoder.down.0.block.0.conv1.bias', 'encoder.down.0.block.0.norm2.weight', 'encoder.down.0.block.0.norm2.bias', 'encoder.down.0.block.0.conv2.weight', 'encoder.down.0.block.0.conv2.bias', 'encoder.down.0.block.1.norm1.weight', 'encoder.down.0.block.1.norm1.bias', 'encoder.down.0.block.1.conv1.weight', 'encoder.down.0.block.1.conv1.bias', 'encoder.down.0.block.1.norm2.weight', 'encoder.down.0.block.1.norm2.bias', 'encoder.down.0.block.1.conv2.weight', 'encoder.down.0.block.1.conv2.bias', 'encoder.down.0.downsample.conv.weight', 'encoder.down.0.downsample.conv.bias', 'encoder.down.1.block.0.norm1.weight', 'encoder.down.1.block.0.norm1.bias', 'encoder.down.1.block.0.conv1.weight', 'encoder.down.1.block.0.conv1.bias', 'encoder.down.1.block.0.norm2.weight', 'encoder.down.1.block.0.norm2.bias', 'encoder.down.1.block.0.conv2.weight', 'encoder.down.1.block.0.conv2.bias', 'encoder.down.1.block.0.nin_shortcut.weight', 'encoder.down.1.block.0.nin_shortcut.bias', 'encoder.down.1.block.1.norm1.weight', 'encoder.down.1.block.1.norm1.bias', 'encoder.down.1.block.1.conv1.weight', 'encoder.down.1.block.1.conv1.bias', 'encoder.down.1.block.1.norm2.weight', 'encoder.down.1.block.1.norm2.bias', 'encoder.down.1.block.1.conv2.weight', 'encoder.down.1.block.1.conv2.bias', 'encoder.down.1.downsample.conv.weight', 'encoder.down.1.downsample.conv.bias', 'encoder.down.2.block.0.norm1.weight', 'encoder.down.2.block.0.norm1.bias', 'encoder.down.2.block.0.conv1.weight', 'encoder.down.2.block.0.conv1.bias', 'encoder.down.2.block.0.norm2.weight', 'encoder.down.2.block.0.norm2.bias', 'encoder.down.2.block.0.conv2.weight', 'encoder.down.2.block.0.conv2.bias', 'encoder.down.2.block.0.nin_shortcut.weight', 'encoder.down.2.block.0.nin_shortcut.bias', 'encoder.down.2.block.1.norm1.weight', 'encoder.down.2.block.1.norm1.bias', 'encoder.down.2.block.1.conv1.weight', 'encoder.down.2.block.1.conv1.bias', 'encoder.down.2.block.1.norm2.weight', 'encoder.down.2.block.1.norm2.bias', 'encoder.down.2.block.1.conv2.weight', 'encoder.down.2.block.1.conv2.bias', 'encoder.down.2.downsample.conv.weight', 'encoder.down.2.downsample.conv.bias', 'encoder.down.3.block.0.norm1.weight', 'encoder.down.3.block.0.norm1.bias', 'encoder.down.3.block.0.conv1.weight', 'encoder.down.3.block.0.conv1.bias', 'encoder.down.3.block.0.norm2.weight', 'encoder.down.3.block.0.norm2.bias', 'encoder.down.3.block.0.conv2.weight', 'encoder.down.3.block.0.conv2.bias', 'encoder.down.3.block.1.norm1.weight', 'encoder.down.3.block.1.norm1.bias', 'encoder.down.3.block.1.conv1.weight', 'encoder.down.3.block.1.conv1.bias', 'encoder.down.3.block.1.norm2.weight', 'encoder.down.3.block.1.norm2.bias', 'encoder.down.3.block.1.conv2.weight', 'encoder.down.3.block.1.conv2.bias', 'encoder.mid.block_1.norm1.weight', 'encoder.mid.block_1.norm1.bias', 'encoder.mid.block_1.conv1.weight', 'encoder.mid.block_1.conv1.bias', 'encoder.mid.block_1.norm2.weight', 'encoder.mid.block_1.norm2.bias', 'encoder.mid.block_1.conv2.weight', 'encoder.mid.block_1.conv2.bias', 'encoder.mid.attn_1.norm.weight', 'encoder.mid.attn_1.norm.bias', 'encoder.mid.attn_1.q.weight', 'encoder.mid.attn_1.q.bias', 'encoder.mid.attn_1.k.weight', 'encoder.mid.attn_1.k.bias', 'encoder.mid.attn_1.v.weight', 'encoder.mid.attn_1.v.bias', 'encoder.mid.attn_1.proj_out.weight', 'encoder.mid.attn_1.proj_out.bias', 'encoder.mid.block_2.norm1.weight', 
'encoder.mid.block_2.norm1.bias', 'encoder.mid.block_2.conv1.weight', 'encoder.mid.block_2.conv1.bias', 'encoder.mid.block_2.norm2.weight', 'encoder.mid.block_2.norm2.bias', 'encoder.mid.block_2.conv2.weight', 'encoder.mid.block_2.conv2.bias', 'encoder.norm_out.weight', 'encoder.norm_out.bias', 'encoder.conv_out.weight', 'encoder.conv_out.bias', 'decoder.conv_in.weight', 'decoder.conv_in.bias', 'decoder.mid.block_1.norm1.weight', 'decoder.mid.block_1.norm1.bias', 'decoder.mid.block_1.conv1.weight', 'decoder.mid.block_1.conv1.bias', 'decoder.mid.block_1.norm2.weight', 'decoder.mid.block_1.norm2.bias', 'decoder.mid.block_1.conv2.weight', 'decoder.mid.block_1.conv2.bias', 'decoder.mid.attn_1.norm.weight', 'decoder.mid.attn_1.norm.bias', 'decoder.mid.attn_1.q.weight', 'decoder.mid.attn_1.q.bias', 'decoder.mid.attn_1.k.weight', 'decoder.mid.attn_1.k.bias', 'decoder.mid.attn_1.v.weight', 'decoder.mid.attn_1.v.bias', 'decoder.mid.attn_1.proj_out.weight', 'decoder.mid.attn_1.proj_out.bias', 'decoder.mid.block_2.norm1.weight', 'decoder.mid.block_2.norm1.bias', 'decoder.mid.block_2.conv1.weight', 'decoder.mid.block_2.conv1.bias', 'decoder.mid.block_2.norm2.weight', 'decoder.mid.block_2.norm2.bias', 'decoder.mid.block_2.conv2.weight', 'decoder.mid.block_2.conv2.bias', 'decoder.up.0.block.0.norm1.weight', 'decoder.up.0.block.0.norm1.bias', 'decoder.up.0.block.0.conv1.weight', 'decoder.up.0.block.0.conv1.bias', 'decoder.up.0.block.0.norm2.weight', 'decoder.up.0.block.0.norm2.bias', 'decoder.up.0.block.0.conv2.weight', 'decoder.up.0.block.0.conv2.bias', 'decoder.up.0.block.0.nin_shortcut.weight', 'decoder.up.0.block.0.nin_shortcut.bias', 'decoder.up.0.block.1.norm1.weight', 'decoder.up.0.block.1.norm1.bias', 'decoder.up.0.block.1.conv1.weight', 'decoder.up.0.block.1.conv1.bias', 'decoder.up.0.block.1.norm2.weight', 'decoder.up.0.block.1.norm2.bias', 'decoder.up.0.block.1.conv2.weight', 'decoder.up.0.block.1.conv2.bias', 'decoder.up.0.block.2.norm1.weight', 'decoder.up.0.block.2.norm1.bias', 'decoder.up.0.block.2.conv1.weight', 'decoder.up.0.block.2.conv1.bias', 'decoder.up.0.block.2.norm2.weight', 'decoder.up.0.block.2.norm2.bias', 'decoder.up.0.block.2.conv2.weight', 'decoder.up.0.block.2.conv2.bias', 'decoder.up.1.block.0.norm1.weight', 'decoder.up.1.block.0.norm1.bias', 'decoder.up.1.block.0.conv1.weight', 'decoder.up.1.block.0.conv1.bias', 'decoder.up.1.block.0.norm2.weight', 'decoder.up.1.block.0.norm2.bias', 'decoder.up.1.block.0.conv2.weight', 'decoder.up.1.block.0.conv2.bias', 'decoder.up.1.block.0.nin_shortcut.weight', 'decoder.up.1.block.0.nin_shortcut.bias', 'decoder.up.1.block.1.norm1.weight', 'decoder.up.1.block.1.norm1.bias', 'decoder.up.1.block.1.conv1.weight', 'decoder.up.1.block.1.conv1.bias', 'decoder.up.1.block.1.norm2.weight', 'decoder.up.1.block.1.norm2.bias', 'decoder.up.1.block.1.conv2.weight', 'decoder.up.1.block.1.conv2.bias', 'decoder.up.1.block.2.norm1.weight', 'decoder.up.1.block.2.norm1.bias', 'decoder.up.1.block.2.conv1.weight', 'decoder.up.1.block.2.conv1.bias', 'decoder.up.1.block.2.norm2.weight', 'decoder.up.1.block.2.norm2.bias', 'decoder.up.1.block.2.conv2.weight', 'decoder.up.1.block.2.conv2.bias', 'decoder.up.1.upsample.conv.weight', 'decoder.up.1.upsample.conv.bias', 'decoder.up.2.block.0.norm1.weight', 'decoder.up.2.block.0.norm1.bias', 'decoder.up.2.block.0.conv1.weight', 'decoder.up.2.block.0.conv1.bias', 'decoder.up.2.block.0.norm2.weight', 'decoder.up.2.block.0.norm2.bias', 'decoder.up.2.block.0.conv2.weight', 'decoder.up.2.block.0.conv2.bias', 
'decoder.up.2.block.1.norm1.weight', 'decoder.up.2.block.1.norm1.bias', 'decoder.up.2.block.1.conv1.weight', 'decoder.up.2.block.1.conv1.bias', 'decoder.up.2.block.1.norm2.weight', 'decoder.up.2.block.1.norm2.bias', 'decoder.up.2.block.1.conv2.weight', 'decoder.up.2.block.1.conv2.bias', 'decoder.up.2.block.2.norm1.weight', 'decoder.up.2.block.2.norm1.bias', 'decoder.up.2.block.2.conv1.weight', 'decoder.up.2.block.2.conv1.bias', 'decoder.up.2.block.2.norm2.weight', 'decoder.up.2.block.2.norm2.bias', 'decoder.up.2.block.2.conv2.weight', 'decoder.up.2.block.2.conv2.bias', 'decoder.up.2.upsample.conv.weight', 'decoder.up.2.upsample.conv.bias', 'decoder.up.3.block.0.norm1.weight', 'decoder.up.3.block.0.norm1.bias', 'decoder.up.3.block.0.conv1.weight', 'decoder.up.3.block.0.conv1.bias', 'decoder.up.3.block.0.norm2.weight', 'decoder.up.3.block.0.norm2.bias', 'decoder.up.3.block.0.conv2.weight', 'decoder.up.3.block.0.conv2.bias', 'decoder.up.3.block.1.norm1.weight', 'decoder.up.3.block.1.norm1.bias', 'decoder.up.3.block.1.conv1.weight', 'decoder.up.3.block.1.conv1.bias', 'decoder.up.3.block.1.norm2.weight', 'decoder.up.3.block.1.norm2.bias', 'decoder.up.3.block.1.conv2.weight', 'decoder.up.3.block.1.conv2.bias', 'decoder.up.3.block.2.norm1.weight', 'decoder.up.3.block.2.norm1.bias', 'decoder.up.3.block.2.conv1.weight', 'decoder.up.3.block.2.conv1.bias', 'decoder.up.3.block.2.norm2.weight', 'decoder.up.3.block.2.norm2.bias', 'decoder.up.3.block.2.conv2.weight', 'decoder.up.3.block.2.conv2.bias', 'decoder.up.3.upsample.conv.weight', 'decoder.up.3.upsample.conv.bias', 'decoder.norm_out.weight', 'decoder.norm_out.bias', 'decoder.conv_out.weight', 'decoder.conv_out.bias', 'loss.logvar', 'loss.perceptual_loss.net.slice1.0.weight', 'loss.perceptual_loss.net.slice1.0.bias', 'loss.perceptual_loss.net.slice1.2.weight', 'loss.perceptual_loss.net.slice1.2.bias', 'loss.perceptual_loss.net.slice2.5.weight', 'loss.perceptual_loss.net.slice2.5.bias', 'loss.perceptual_loss.net.slice2.7.weight', 'loss.perceptual_loss.net.slice2.7.bias', 'loss.perceptual_loss.net.slice3.10.weight', 'loss.perceptual_loss.net.slice3.10.bias', 'loss.perceptual_loss.net.slice3.12.weight', 'loss.perceptual_loss.net.slice3.12.bias', 'loss.perceptual_loss.net.slice3.14.weight', 'loss.perceptual_loss.net.slice3.14.bias', 'loss.perceptual_loss.net.slice4.17.weight', 'loss.perceptual_loss.net.slice4.17.bias', 'loss.perceptual_loss.net.slice4.19.weight', 'loss.perceptual_loss.net.slice4.19.bias', 'loss.perceptual_loss.net.slice4.21.weight', 'loss.perceptual_loss.net.slice4.21.bias', 'loss.perceptual_loss.net.slice5.24.weight', 'loss.perceptual_loss.net.slice5.24.bias', 'loss.perceptual_loss.net.slice5.26.weight', 'loss.perceptual_loss.net.slice5.26.bias', 'loss.perceptual_loss.net.slice5.28.weight', 'loss.perceptual_loss.net.slice5.28.bias', 'loss.perceptual_loss.lin0.model.1.weight', 'loss.perceptual_loss.lin1.model.1.weight', 'loss.perceptual_loss.lin2.model.1.weight', 'loss.perceptual_loss.lin3.model.1.weight', 'loss.perceptual_loss.lin4.model.1.weight', 'quant_conv.weight', 'quant_conv.bias', 'post_quant_conv.weight', 'post_quant_conv.bias']
Global seed set to 42
Loading model from ./stablesr_000117.ckpt
Global Step: 16500
LatentDiffusionSRTextWT: Running in eps-prediction mode
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 1024 and using 20 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 1024 and using 10 heads.
^C

Feature Mismatch for Replication

Hi, I have some questions about the network structure of CFW.

As shown in Figure 2 of your paper and in your code, my understanding is that you concatenate features from two layers of the encoder and decoder. The enc_fea (intermediate feature) sizes are:

# enc_fea[3]: torch.Size([1, 512, 16, 16])
# enc_fea[2]: torch.Size([1, 512, 32, 32])  reserved 1
# enc_fea[1]: torch.Size([1, 256, 64, 64])  reserved 0
# enc_fea[0]: torch.Size([1, 128, 128, 128]) 

You choose enc_fea[2] and enc_fea[1]. However, the dec_fea sizes are:

# h: torch.Size([1, 512, 64, 64])
# h: torch.Size([1, 512, 128, 128])
# h: torch.Size([1, 256, 256, 256])
# h: torch.Size([1, 128, 512, 512])

It seems that enc_feat and dec_fea cannot be concatenated directly. Thanks for your help in advance!
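
When replicating, if only the spatial sizes disagree, one hedged sanity check is to resample the encoder feature onto the decoder feature's grid before concatenating; this is a guess at the intended alignment, not a statement of the official implementation.

import torch
import torch.nn.functional as F

def align_and_concat(enc_feat, dec_feat):
    # Upsample the encoder feature to the decoder feature's spatial size,
    # then concatenate along the channel dimension.
    enc_feat = F.interpolate(enc_feat, size=dec_feat.shape[-2:], mode='nearest')
    return torch.cat([enc_feat, dec_feat], dim=1)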

OOM

Amazing project!!

I used a 1024 * 800 image and executed the following command:
python sr_val_ddpm_text_T_vqganfin_oldcanvas.py --ckpt ckpt/stablesr_000117.ckpt --vqgan_ckpt ckpt/vqgan_cfw_00011.ckpt --init-img inputs/test_example/ --outdir output --ddpm_steps 200 --dec_w 0.5

By default I hoped to obtain a 4K output, but I got:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 55.62 GiB (GPU 0; 79.19 GiB total capacity; 31.25 GiB already allocated; 14.34 GiB free; 39.41 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

xformers is installed correctly:
image

and this is my test image:
lowres
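
As the error message suggests, a hedged first step is to cap the allocator's split size (the 512 MB value below is only an example) or to fall back to the tiled script for large outputs; the environment variable must be set before the first CUDA allocation.

import os

# Example value; tune as needed. Set this before importing/initializing CUDA.
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:512'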

train dataset

Hi,
can you share the training dataset? The DIV8K dataset is hard to download.
Thanks

RuntimeError: Error(s) in loading state_dict for EncoderUNetModelWT

RuntimeError: Error(s) in loading state_dict for EncoderUNetModelWT: Missing key(s) in state_dict: "time_embed.0.weight", "time_embed.0.bias", "time_embed.2.weight", "time_embed.2.bias", "input_blocks.0.0.weight", "input_blocks.0.0.bias", "input_blocks.1.0.in_layers.0.weight", "input_blocks.1.0.in_layers.0.bias", "input_blocks.1.0.in_layers.2.weight", "input_blocks.1.0.in_layers.2.bias", "input_blocks.1.0.emb_layers.1.weight", "input_blocks.1.0.emb_layers.1.bias", "input_blocks.1.0.out_layers.0.weight", "input_blocks.1.0.out_layers.0.bias", "input_blocks.1.0.out_layers.3.weight", "input_blocks.1.0.out_layers.3.bias", "input_blocks.1.1.norm.weight", "input_blocks.1.1.norm.bias", "input_blocks.1.1.qkv.weight", "input_blocks.1.1.qkv.bias", "input_blocks.1.1.proj_out.weight", "input_blocks.1.1.proj_out.bias", "input_blocks.2.0.in_layers.0.weight", "input_blocks.2.0.in_layers.0.bias", "input_blocks.2.0.in_layers.2.weight", "input_blocks.2.0.in_layers.2.bias", "input_blocks.2.0.emb_layers.1.weight", "input_blocks.2.0.emb_layers.1.bias", "input_blocks.2.0.out_layers.0.weight", "input_blocks.2.0.out_layers.0.bias", "input_blocks.2.0.out_layers.3.weight", "input_blocks.2.0.out_layers.3.bias", "input_blocks.2.1.norm.weight", "input_blocks.2.1.norm.bias", "input_blocks.2.1.qkv.weight", "input_blocks.2.1.qkv.bias", "input_blocks.2.1.proj_out.weight", "input_blocks.2.1.proj_out.bias", "input_blocks.3.0.op.weight", "input_blocks.3.0.op.bias", "input_blocks.4.0.in_layers.0.weight", "input_blocks.4.0.in_layers.0.bias", "input_blocks.4.0.in_layers.2.weight", "input_blocks.4.0.in_layers.2.bias", "input_blocks.4.0.emb_layers.1.weight", "input_blocks.4.0.emb_layers.1.bias", "input_blocks.4.0.out_layers.0.weight", "input_blocks.4.0.out_layers.0.bias", "input_blocks.4.0.out_layers.3.weight", "input_blocks.4.0.out_layers.3.bias", "input_blocks.4.1.norm.weight", "input_blocks.4.1.norm.bias", "input_blocks.4.1.qkv.weight", "input_blocks.4.1.qkv.bias", "input_blocks.4.1.proj_out.weight", "input_blocks.4.1.proj_out.bias", "input_blocks.5.0.in_layers.0.weight", "input_blocks.5.0.in_layers.0.bias", "input_blocks.5.0.in_layers.2.weight", "input_blocks.5.0.in_layers.2.bias", "input_blocks.5.0.emb_layers.1.weight", "input_blocks.5.0.emb_layers.1.bias", "input_blocks.5.0.out_layers.0.weight", "input_blocks.5.0.out_layers.0.bias", "input_blocks.5.0.out_layers.3.weight", "input_blocks.5.0.out_layers.3.bias", "input_blocks.5.1.norm.weight", "input_blocks.5.1.norm.bias", "input_blocks.5.1.qkv.weight", "input_blocks.5.1.qkv.bias", "input_blocks.5.1.proj_out.weight", "input_blocks.5.1.proj_out.bias", "input_blocks.6.0.op.weight", "input_blocks.6.0.op.bias", "input_blocks.7.0.in_layers.0.weight", "input_blocks.7.0.in_layers.0.bias", "input_blocks.7.0.in_layers.2.weight", "input_blocks.7.0.in_layers.2.bias", "input_blocks.7.0.emb_layers.1.weight", "input_blocks.7.0.emb_layers.1.bias", "input_blocks.7.0.out_layers.0.weight", "input_blocks.7.0.out_layers.0.bias", "input_blocks.7.0.out_layers.3.weight", "input_blocks.7.0.out_layers.3.bias", "input_blocks.7.0.skip_connection.weight", "input_blocks.7.0.skip_connection.bias", "input_blocks.7.1.norm.weight", "input_blocks.7.1.norm.bias", "input_blocks.7.1.qkv.weight", "input_blocks.7.1.qkv.bias", "input_blocks.7.1.proj_out.weight", "input_blocks.7.1.proj_out.bias", "input_blocks.8.0.in_layers.0.weight", "input_blocks.8.0.in_layers.0.bias", "input_blocks.8.0.in_layers.2.weight", "input_blocks.8.0.in_layers.2.bias", "input_blocks.8.0.emb_layers.1.weight", 
"input_blocks.8.0.emb_layers.1.bias", "input_blocks.8.0.out_layers.0.weight", "input_blocks.8.0.out_layers.0.bias", "input_blocks.8.0.out_layers.3.weight", "input_blocks.8.0.out_layers.3.bias", "input_blocks.8.1.norm.weight", "input_blocks.8.1.norm.bias", "input_blocks.8.1.qkv.weight", "input_blocks.8.1.qkv.bias", "input_blocks.8.1.proj_out.weight", "input_blocks.8.1.proj_out.bias", "input_blocks.9.0.op.weight", "input_blocks.9.0.op.bias", "input_blocks.10.0.in_layers.0.weight", "input_blocks.10.0.in_layers.0.bias", "input_blocks.10.0.in_layers.2.weight", "input_blocks.10.0.in_layers.2.bias", "input_blocks.10.0.emb_layers.1.weight", "input_blocks.10.0.emb_layers.1.bias", "input_blocks.10.0.out_layers.0.weight", "input_blocks.10.0.out_layers.0.bias", "input_blocks.10.0.out_layers.3.weight", "input_blocks.10.0.out_layers.3.bias", "input_blocks.11.0.in_layers.0.weight", "input_blocks.11.0.in_layers.0.bias", "input_blocks.11.0.in_layers.2.weight", "input_blocks.11.0.in_layers.2.bias", "input_blocks.11.0.emb_layers.1.weight", "input_blocks.11.0.emb_layers.1.bias", "input_blocks.11.0.out_layers.0.weight", "input_blocks.11.0.out_layers.0.bias", "input_blocks.11.0.out_layers.3.weight", "input_blocks.11.0.out_layers.3.bias", "middle_block.0.in_layers.0.weight", "middle_block.0.in_layers.0.bias", "middle_block.0.in_layers.2.weight", "middle_block.0.in_layers.2.bias", "middle_block.0.emb_layers.1.weight", "middle_block.0.emb_layers.1.bias", "middle_block.0.out_layers.0.weight", "middle_block.0.out_layers.0.bias", "middle_block.0.out_layers.3.weight", "middle_block.0.out_layers.3.bias", "middle_block.1.norm.weight", "middle_block.1.norm.bias", "middle_block.1.qkv.weight", "middle_block.1.qkv.bias", "middle_block.1.proj_out.weight", "middle_block.1.proj_out.bias", "middle_block.2.in_layers.0.weight", "middle_block.2.in_layers.0.bias", "middle_block.2.in_layers.2.weight", "middle_block.2.in_layers.2.bias", "middle_block.2.emb_layers.1.weight", "middle_block.2.emb_layers.1.bias", "middle_block.2.out_layers.0.weight", "middle_block.2.out_layers.0.bias", "middle_block.2.out_layers.3.weight", "middle_block.2.out_layers.3.bias", "fea_tran.0.in_layers.0.weight", "fea_tran.0.in_layers.0.bias", "fea_tran.0.in_layers.2.weight", "fea_tran.0.in_layers.2.bias", "fea_tran.0.emb_layers.1.weight", "fea_tran.0.emb_layers.1.bias", "fea_tran.0.out_layers.0.weight", "fea_tran.0.out_layers.0.bias", "fea_tran.0.out_layers.3.weight", "fea_tran.0.out_layers.3.bias", "fea_tran.1.in_layers.0.weight", "fea_tran.1.in_layers.0.bias", "fea_tran.1.in_layers.2.weight", "fea_tran.1.in_layers.2.bias", "fea_tran.1.emb_layers.1.weight", "fea_tran.1.emb_layers.1.bias", "fea_tran.1.out_layers.0.weight", "fea_tran.1.out_layers.0.bias", "fea_tran.1.out_layers.3.weight", "fea_tran.1.out_layers.3.bias", "fea_tran.2.in_layers.0.weight", "fea_tran.2.in_layers.0.bias", "fea_tran.2.in_layers.2.weight", "fea_tran.2.in_layers.2.bias", "fea_tran.2.emb_layers.1.weight", "fea_tran.2.emb_layers.1.bias", "fea_tran.2.out_layers.0.weight", "fea_tran.2.out_layers.0.bias", "fea_tran.2.out_layers.3.weight", "fea_tran.2.out_layers.3.bias", "fea_tran.2.skip_connection.weight", "fea_tran.2.skip_connection.bias", "fea_tran.3.in_layers.0.weight", "fea_tran.3.in_layers.0.bias", "fea_tran.3.in_layers.2.weight", "fea_tran.3.in_layers.2.bias", "fea_tran.3.emb_layers.1.weight", "fea_tran.3.emb_layers.1.bias", "fea_tran.3.out_layers.0.weight", "fea_tran.3.out_layers.0.bias", "fea_tran.3.out_layers.3.weight", "fea_tran.3.out_layers.3.bias", 
"fea_tran.3.skip_connection.weight", "fea_tran.3.skip_connection.bias".
Time taken: 17.57s | Torch active/reserved: 2515/2600 MiB, Sys VRAM: 4125/12288 MiB (33.57%)
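
A hedged way to narrow this down is to check whether the checkpoint being loaded actually contains the time-aware encoder weights; in the StableSR configs these live under a structcond-prefixed module, so their absence usually means a plain SD checkpoint is being pointed at instead. A quick inspection sketch (the checkpoint path is an example):

import torch

ckpt = torch.load('stablesr_000117.ckpt', map_location='cpu')
sd = ckpt.get('state_dict', ckpt)
# If this prints an empty list, the checkpoint has no time-aware encoder weights.
print([k for k in sd if 'structcond' in k][:5])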

Question about Autoencoder loss function, and training pipeline confirmation

In the training_step function, self.loss is used to calculate the aeloss and discloss. However, I noticed that in the v2-finetune_text_T_512.yaml file the lossconfig is specified as torch.nn.Identity, and I am unable to locate where the loss function is actually defined in this case.

Moreover, I would appreciate some clarification regarding the training pipeline. As I understand it, the first step is to fine-tune Stable Diffusion with the time-aware encoder, with the parameters frozen except for the SFT layers. Once this fine-tuning is complete, the parameters of the diffusion part are frozen and the CFW module is trained.

I am uncertain whether my understanding is accurate and would appreciate your confirmation. Thank you.

finetune

Hi, I want to fine-tune on my own datasets starting from your model, but if I set the ckpt path to stablesr_000117-001.ckpt I get missing-key errors. Please support fine-tuning, thanks! @IceClear

Question about performance gap on test set

Hi! Thanks for the great work! However, we found a performance gap between our results and those reported in the paper.

Dataset: using DIV2K (DIV2K/DIV2K_valid_HR) and the degradation pipeline in your yaml file, we generate 3k LR-HR image pairs. The main code is given below.
Model: we run sr_val_ddpm_text_T_vqganfin_old.py with the yaml v2-finetune_text_T_512.yaml (w=1) and the pretrained models (stablesr_000117.ckpt and vqgan_cfw_00011.ckpt).

Evaluation: we use the repo to evaluate all metrics.
Our results: PSNR 20.21; SSIM 0.5163; LPIPS 0.3042; FID 26.13; CLIP-IQA 0.6209; MUSIQ 64.69
Results in the paper: PSNR 23.14; SSIM 0.5681; LPIPS 0.3077; FID 26.14; CLIP-IQA 0.6197; MUSIQ 64.31

Given the large gap in PSNR, could you please share your test set so we can rule out randomness introduced by dataset generation? Also, we would like to confirm: for different w there is only one model, i.e. we only need to set w=1 during training and simply change w at test time?

def get_lr_data(batch, resize_lq=True):

"""Degradation pipeline, modified from Real-ESRGAN:
https://github.com/xinntao/Real-ESRGAN
"""
jpeger = DiffJPEG(differentiable=False).cuda()  # simulate JPEG compression artifacts
usm_sharpener = USMSharp().cuda()  # do usm sharpening

im_gt = batch['gt'].cuda()
if use_usm:
    im_gt = usm_sharpener(im_gt)
im_gt = im_gt.to(memory_format=torch.contiguous_format).float()
kernel1 = batch['kernel1'].cuda()
kernel2 = batch['kernel2'].cuda()
sinc_kernel = batch['sinc_kernel'].cuda()

ori_h, ori_w = im_gt.size()[2:4]

# ----------------------- The first degradation process ----------------------- #
# blur
out = filter2D(im_gt, kernel1)
# random resize
updown_type = random.choices(
        ['up', 'down', 'keep'],
        config.degradation['resize_prob'],
        )[0]
if updown_type == 'up':
    scale = random.uniform(1, config.degradation['resize_range'][1])
elif updown_type == 'down':
    scale = random.uniform(config.degradation['resize_range'][0], 1)
else:
    scale = 1
mode = random.choice(['area', 'bilinear', 'bicubic'])
out = F.interpolate(out, scale_factor=scale, mode=mode)
# add noise
gray_noise_prob = config.degradation['gray_noise_prob']
if random.random() < config.degradation['gaussian_noise_prob']:
    out = random_add_gaussian_noise_pt(
        out,
        sigma_range=config.degradation['noise_range'],
        clip=True,
        rounds=False,
        gray_prob=gray_noise_prob,
        )
else:
    out = random_add_poisson_noise_pt(
        out,
        scale_range=config.degradation['poisson_scale_range'],
        gray_prob=gray_noise_prob,
        clip=True,
        rounds=False)
# JPEG compression
jpeg_p = out.new_zeros(out.size(0)).uniform_(*config.degradation['jpeg_range'])
out = torch.clamp(out, 0, 1)  # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts
out = jpeger(out, quality=jpeg_p)

# ----------------------- The second degradation process ----------------------- #
# blur
if random.random() < config.degradation['second_blur_prob']:
    out = filter2D(out, kernel2)
# random resize
updown_type = random.choices(
        ['up', 'down', 'keep'],
        config.degradation['resize_prob2'],
        )[0]
if updown_type == 'up':
    scale = random.uniform(1, config.degradation['resize_range2'][1])
elif updown_type == 'down':
    scale = random.uniform(config.degradation['resize_range2'][0], 1)
else:
    scale = 1
mode = random.choice(['area', 'bilinear', 'bicubic'])
out = F.interpolate(
        out,
        size=(int(ori_h / config.sf * scale),
                int(ori_w / config.sf * scale)),
        mode=mode,
        )
# add noise
gray_noise_prob = config.degradation['gray_noise_prob2']
if random.random() < config.degradation['gaussian_noise_prob2']:
    out = random_add_gaussian_noise_pt(
        out,
        sigma_range=config.degradation['noise_range2'],
        clip=True,
        rounds=False,
        gray_prob=gray_noise_prob,
        )
else:
    out = random_add_poisson_noise_pt(
        out,
        scale_range=config.degradation['poisson_scale_range2'],
        gray_prob=gray_noise_prob,
        clip=True,
        rounds=False,
        )

# JPEG compression + the final sinc filter
# We also need to resize images to desired sizes. We group [resize back + sinc filter] together
# as one operation.
# We consider two orders:
#   1. [resize back + sinc filter] + JPEG compression
#   2. JPEG compression + [resize back + sinc filter]
# Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines.
if random.random() < 0.5:
    # resize back + the final sinc filter
    mode = random.choice(['area', 'bilinear', 'bicubic'])
    out = F.interpolate(
            out,
            size=(ori_h // config.sf,
                    ori_w // config.sf),
            mode=mode,
            )
    out = filter2D(out, sinc_kernel)
    # JPEG compression
    jpeg_p = out.new_zeros(out.size(0)).uniform_(*config.degradation['jpeg_range2'])
    out = torch.clamp(out, 0, 1)
    out = jpeger(out, quality=jpeg_p)
else:
    # JPEG compression
    jpeg_p = out.new_zeros(out.size(0)).uniform_(*config.degradation['jpeg_range2'])
    out = torch.clamp(out, 0, 1)
    out = jpeger(out, quality=jpeg_p)
    # resize back + the final sinc filter
    mode = random.choice(['area', 'bilinear', 'bicubic'])
    out = F.interpolate(
            out,
            size=(ori_h // config.sf,
                    ori_w // config.sf),
            mode=mode,
            )
    out = filter2D(out, sinc_kernel)

# clamp and round
im_lq = torch.clamp(out, 0, 1.0)

# random crop
gt_size = config.degradation['gt_size']
im_gt, im_lq = paired_random_crop(im_gt, im_lq, gt_size, config.sf)
lq, gt = im_lq, im_gt

if resize_lq:
    lq = F.interpolate(
            lq,
            size=(gt.size(-2),
                    gt.size(-1)),
            mode='bicubic',
            )

if random.random() < config.degradation['no_degradation_prob'] or torch.isnan(lq).any():
        lq = gt

lq = torch.clamp(lq, 0, 1.0)

# convert tensor(RGB) to Image
lq = (lq.cpu().detach().numpy().transpose((0,2,3,1))*255).astype(np.uint8) # (N,H,W,C)
gt = (gt.cpu().detach().numpy().transpose((0,2,3,1))*255).astype(np.uint8) # (N,H,W,C)

return lq, gt

Additional comparisons to Tiled DDPM, ControlNet Tile, Loopback Scaler and DeepFloyd

Hello, thanks for the work! We see many classic SR methods in the paper. The comparison to Real-ESRGAN+ looks promising!

However, the paper claims that "our method using both synthetic and real-world benchmarks demonstrates its superiority over current state-of-the-art approaches". Just wondering, could we have comparisons to some real baselines and the more common methods that people actually use?

For example:

Tiled diffusion’s DDIM inversion:
https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111

ControlNet Tile's update from yesterday (it looks like they intend this SR-like model to compete with Midjourney v5/5.1 on image detail):
https://github.com/lllyasviel/ControlNet-v1-1-nightly#ControlNet-11-Tile

Loopback Scaler:
https://civitai.com/models/23188/loopback-scaler

DeepFloyd’s 256 stage model (IF-III-L):
https://github.com/deep-floyd/IF

Some of these methods are likely to use prompts, yet getting a prompt from a small image is trivial with BLIP, and all ControlNets have a 'guess mode' that accepts an empty string as the prompt. Loopback Scaler and Tiled Diffusion seem to suggest always using the same prompt string regardless of the image, so they effectively do not require prompts.

Most of these methods can easily be used by installing the latest version of automatic1111.

Test with multiple GPUs

Hi,

I am aware that the current sr_val_ddpm_text_T_vqganfin_oldcanvas.py may hit OOM issues for high-resolution images. However, I wonder whether it is possible to run it on multiple GPUs, or to run the tiling version in parallel across multiple GPUs.

In addition, I wonder what the expected memory requirement is for a 2048 x 1024 image without tiling. I am using a 24 GB 3090 and the memory is not sufficient.

Thank you in advance for your answer!
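
A hedged workaround until official multi-GPU support exists is manual data parallelism: split the input images into per-GPU folders and launch one process per GPU via CUDA_VISIBLE_DEVICES. The paths and GPU count below are examples.

import os
import shutil

src = 'inputs/test_example'   # example input folder
num_gpus = 2                  # example GPU count
names = sorted(os.listdir(src))
for gpu in range(num_gpus):
    sub = f'inputs/split_{gpu}'
    os.makedirs(sub, exist_ok=True)
    for name in names[gpu::num_gpus]:
        shutil.copy(os.path.join(src, name), sub)
    print(f'CUDA_VISIBLE_DEVICES={gpu} python scripts/sr_val_ddpm_text_T_vqganfin_oldcanvas_tile.py '
          f'--init-img {sub} --outdir outputs/split_{gpu} ...')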

How to use the LR images in training?

Hi, I'm trying to understand the pipeline of your great work, but I'm not sure how the training data is used. Is there any diffusion process (adding noise to training images) in your training?

From what I understood, the training process is:
first, construct LR images by degrading HR images (e.g. DIV8K);
then, in the first training stage, feed these LR images (without adding noise) into the VQGAN and U-Net to get output images, which are compared with the GT (HR) images to compute the loss;
and, in the second training stage, feed these LR images (without adding noise), together with latents, samples, and GT, to train CFW.

I did not find any diffusion process (adding noise to training images) anywhere in this pipeline. Is there a mistake in my understanding?
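
For context, standard latent-diffusion training does add noise: the HR image is encoded to a latent z0, a random timestep t is drawn, noise is mixed in to form zt, and the U-Net (conditioned on the LR image via the time-aware encoder) is trained to predict that noise. The sketch below is the generic eps-prediction objective, not this repository's exact code; model and alphas_cumprod are placeholders.

import torch
import torch.nn.functional as F

def diffusion_training_step(model, z0, lr_cond, alphas_cumprod):
    # z0: latent of the HR image; lr_cond: conditioning derived from the LR image.
    t = torch.randint(0, alphas_cumprod.shape[0], (z0.shape[0],), device=z0.device)
    noise = torch.randn_like(z0)
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    z_t = a_t.sqrt() * z0 + (1 - a_t).sqrt() * noise   # forward diffusion
    pred = model(z_t, t, lr_cond)                      # eps-prediction U-Net (placeholder)
    return F.mse_loss(pred, noise)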
