dome272 / vqgan-pytorch

Pytorch implementation of VQGAN (Taming Transformers for High-Resolution Image Synthesis) (https://arxiv.org/pdf/2012.09841.pdf)

License: MIT License


Introduction

Note:

Code Tutorial + Implementation Tutorial


VQGAN

Vector Quantized Generative Adversarial Networks (VQGAN) is a generative model for image modeling. It was introduced in Taming Transformers for High-Resolution Image Synthesis. The concept is built upon two stages. The first stage trains in an autoencoder-like fashion: images are encoded into a low-dimensional latent space, and the latents are vector-quantized using a learned codebook. The quantized latent vectors are then projected back to the original image space by a decoder. Both encoder and decoder are fully convolutional. The second stage learns a transformer over the latent space: over the course of training, it learns which codebook vectors occur together and which do not. The transformer can then be used autoregressively to generate previously unseen images from the data distribution.
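The vector-quantization step described above can be sketched as follows. This is an illustrative snippet with hypothetical names, not the repository's exact implementation: each encoder latent is snapped to its nearest codebook entry.

```python
import torch

def vector_quantize(z, codebook):
    """Map each latent vector in z to its nearest codebook entry.

    z: (N, D) latent vectors from the encoder (flattened spatial positions).
    codebook: (K, D) learnable embedding vectors.
    Returns the quantized vectors and their codebook indices.
    """
    # Euclidean distance between every latent and every codebook entry
    d = torch.cdist(z, codebook)   # (N, K)
    indices = d.argmin(dim=1)      # index of the nearest codebook entry
    z_q = codebook[indices]        # (N, D) quantized latents
    return z_q, indices

# Toy example: 5 latents of dimension 4, codebook of 8 entries
z = torch.randn(5, 4)
codebook = torch.randn(8, 4)
z_q, idx = vector_quantize(z, codebook)
print(z_q.shape, idx.shape)  # torch.Size([5, 4]) torch.Size([5])
```

In the full model the argmin is non-differentiable, so training uses a straight-through estimator to pass gradients from the decoder back to the encoder.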

Results for First Stage (Reconstruction):

Epoch 1:

Epoch 50:

Results for Second Stage (Generating new Images):

Original Left | Reconstruction Middle Left | Completion Middle Right | New Image Right

Epoch 1:

Epoch 100:

Note: let the model train for longer to get better results.


Train VQGAN on your own data:

Training First Stage

  1. (optional) Configure Hyperparameters in training_vqgan.py
  2. Set path to dataset in training_vqgan.py
  3. python training_vqgan.py

Training Second Stage

  1. (optional) Configure Hyperparameters in training_transformer.py
  2. Set path to dataset in training_transformer.py
  3. python training_transformer.py
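After the second stage is trained, images are generated by sampling codebook indices autoregressively and decoding them with the first-stage decoder. A minimal sketch of such a sampling loop follows; the `model` interface here is a hypothetical stand-in, not the repository's exact transformer API:

```python
import torch

@torch.no_grad()
def sample_tokens(model, sos_token, seq_len, temperature=1.0):
    """Autoregressively sample a sequence of codebook indices.

    model: callable mapping a (1, T) index tensor to (1, T, K) logits
           (hypothetical interface, assumed for illustration).
    sos_token: integer start-of-sequence index.
    seq_len: number of image tokens to generate.
    """
    seq = torch.tensor([[sos_token]])
    for _ in range(seq_len):
        logits = model(seq)[:, -1, :] / temperature   # logits for next token
        probs = torch.softmax(logits, dim=-1)
        nxt = torch.multinomial(probs, num_samples=1) # sample one index
        seq = torch.cat([seq, nxt], dim=1)
    return seq[:, 1:]  # drop the sos token

# Dummy "model" returning random logits over a 16-entry codebook,
# just to demonstrate the loop
dummy = lambda s: torch.randn(1, s.shape[1], 16)
tokens = sample_tokens(dummy, sos_token=0, seq_len=10)
print(tokens.shape)  # torch.Size([1, 10])
```

The sampled indices would then be looked up in the codebook and passed through the decoder to produce an image.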

Citation

@misc{esser2021taming,
      title={Taming Transformers for High-Resolution Image Synthesis}, 
      author={Patrick Esser and Robin Rombach and Björn Ommer},
      year={2021},
      eprint={2012.09841},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

People

Contributors

dome272

Issues

real image in save results looks strange

In train_vqgan.py, the line

real_fake_images = torch.cat((imgs[:4], decoded_images.add(1).mul(0.5)[:4]))

should be revised to

real_fake_images = torch.cat((imgs.add(1).mul(0.5)[:4], decoded_images.add(1).mul(0.5)[:4]))
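The proposed fix rescales the real images from the training range [-1, 1] back to [0, 1] before saving, the same way the reconstructions are rescaled. A minimal sketch of that rescaling:

```python
import torch

def to_unit_range(x):
    """Map a tensor from [-1, 1] (training range) to [0, 1] for saving."""
    return x.add(1).mul(0.5)

imgs = torch.tensor([-1.0, 0.0, 1.0])
print(to_unit_range(imgs))  # tensor([0.0000, 0.5000, 1.0000])
```

Without this, the real images are saved in [-1, 1] while the reconstructions are in [0, 1], which is why the real halves of the saved grids look strange.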

Discriminator loss mostly zero - it shouldn't be right?

Hi, my discriminator loss curve looks like this:
(screenshot of the loss curve)
I don't think that's right, because it means that most of the time the discriminator knows the generated image is fake, but the generator fails to produce better images.
Does anyone have a similar issue?

Some questions to produce good generation results.

Hi, thank you very much for sharing your clean implementation. I would like to ask you some questions:

  1. How much data do we need to produce good generation results?
  2. Does the data need to be of the same category (e.g. flowers)?
  3. Does the transformer require pre-training?

Looking forward to your reply; thanks in advance.

This is a great Repo

I just want to say this is a great repo! I spent two days setting up the environment for the original taming-transformers repo but failed due to incompatible packages. This one is so neat and clean, with very basic dependencies, that I ran it successfully on the first try. Thank you to the repo owner!

Best,
Dayang

Conditional image generation problem

Hello, your tutorial is great! I have a question: how do I add conditions in the form of pictures when training the second-stage transformer?

Black border

Hi! Thank you for sharing this project and for the video tutorial on writing it! I have a question: why do the generated images all have a black border? Can this be fixed? I tried adding "reflect" padding to all Conv2d layers, but that didn't fix it.

Environment Setup

Thanks for the clean and nice implementation of VQGAN.
Could you please provide a requirements.txt or environment.yaml file to set up the environment?
Thanks.

add label conditions problem

Hello, your tutorial is great! I have a question: when I want to add label conditions, such as gender or age (or both), how should the transformer be adjusted? Could you provide an example?


The issue with training results.

Hello, thank you very much for your code and videos!
I'm using this repository to train on the flowers dataset with a batch size of 32 for 200 epochs, but the reconstructed images still only show rough outlines without fine details.
Is something wrong somewhere?
(screenshot of the reconstructions)

How to distinguish the sos token (default = 0) from a quantized image token of zero?

Thanks for your video.
The transformer takes in the quantized image tokens generated by the VQGAN, whose codebook has indices 0 to n_embed-1, and the transformer's sos token is also set to zero by default. Could you tell me why we don't distinguish codebook vector 0 from the sos token when training the transformer?
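One common workaround for this collision (an assumption about how it could be resolved, not necessarily what this repository does) is to shift all codebook indices up by one, reserving index 0 exclusively for the sos token:

```python
import torch

SOS = 0  # reserved start-of-sequence token

def shift_indices(idx):
    """Shift codebook indices up by one so 0 is free for the sos token."""
    return idx + 1

def unshift_indices(idx):
    """Undo the shift before looking up codebook vectors in the decoder."""
    return idx - 1

idx = torch.tensor([0, 3, 7])  # raw codebook indices (0..K-1)
seq = torch.cat([torch.tensor([SOS]), shift_indices(idx)])
print(seq)                       # tensor([0, 1, 4, 8])
print(unshift_indices(seq[1:]))  # tensor([0, 3, 7])
```

With this scheme the transformer's vocabulary has K+1 entries, and codebook vector 0 can never be confused with the sos token.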
