mishalaskin / vqvae

vqvae's Introduction

Vector Quantized Variational Autoencoder

This is a PyTorch implementation of the vector quantized variational autoencoder (https://arxiv.org/abs/1711.00937).

You can find the authors' original TensorFlow implementation here, along with an example you can run in a Jupyter notebook.

Installing Dependencies

To install dependencies, create a conda or virtual environment with Python 3 and then run pip install -r requirements.txt.

Running the VQ VAE

To run the VQ-VAE, simply run python3 main.py. Make sure to include the -save flag if you want to save your model. You can also pass parameters on the command line; the default values are specified below:

parser.add_argument("--batch_size", type=int, default=32)
parser.add_argument("--n_updates", type=int, default=5000)
parser.add_argument("--n_hiddens", type=int, default=128)
parser.add_argument("--n_residual_hiddens", type=int, default=32)
parser.add_argument("--n_residual_layers", type=int, default=2)
parser.add_argument("--embedding_dim", type=int, default=64)
parser.add_argument("--n_embeddings", type=int, default=512)
parser.add_argument("--beta", type=float, default=.25)
parser.add_argument("--learning_rate", type=float, default=3e-4)
parser.add_argument("--log_interval", type=int, default=50)

Models

The VQ VAE has the following fundamental model components:

  1. An Encoder class, which defines the map x -> z_e
  2. A VectorQuantizer class, which transforms the encoder output into a discrete one-hot vector indexing the closest embedding vector, z_e -> z_q (a minimal sketch of this step appears after this list)
  3. A Decoder class, which defines the map z_q -> x_hat and reconstructs the original image
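
As a rough illustration of step 2, the nearest-neighbour lookup and the straight-through gradient trick can be sketched as follows. The names below are illustrative only; the actual quantizer.py builds a one-hot matrix and uses a matmul (see the issues below), but the computation is equivalent:

import torch

def quantize(z_e, codebook):
    # z_e: (B, H, W, D) encoder output; codebook: (K, D) embedding table
    flat = z_e.reshape(-1, codebook.shape[1])            # (B*H*W, D)
    # squared L2 distance from every encoder vector to every codebook vector
    d = (flat ** 2).sum(dim=1, keepdim=True) \
        - 2 * flat @ codebook.t() \
        + (codebook ** 2).sum(dim=1)
    idx = d.argmin(dim=1)                                # index of the closest embedding
    z_q = codebook[idx].view(z_e.shape)                  # z_e -> z_q
    # straight-through estimator: gradients flow from z_q back to z_e
    z_q = z_e + (z_q - z_e).detach()
    return z_q, idx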

The Encoder / Decoder classes are convolutional and transposed-convolutional stacks that include residual blocks in their architecture (see the ResNet paper). The residual models are defined by the ResidualLayer and ResidualStack classes.
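
For reference, a residual layer in this style simply adds a small convolutional block back onto its input. A minimal sketch follows; the kernel sizes and parameter names are assumptions, not necessarily the exact ones in residual.py:

import torch.nn as nn

class ResidualLayerSketch(nn.Module):
    # y = x + F(x), where F is a small ReLU-conv-ReLU-conv block
    # (assumes h_dim == in_dim so the residual addition shapes match)
    def __init__(self, in_dim, h_dim, res_h_dim):
        super().__init__()
        self.res_block = nn.Sequential(
            nn.ReLU(),
            nn.Conv2d(in_dim, res_h_dim, kernel_size=3, stride=1, padding=1, bias=False),
            nn.ReLU(),
            nn.Conv2d(res_h_dim, h_dim, kernel_size=1, stride=1, bias=False),
        )

    def forward(self, x):
        return x + self.res_block(x)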

These components are organized in the following folder structure:

models/
    - decoder.py -> Decoder
    - encoder.py -> Encoder
    - quantizer.py -> VectorQuantizer
    - residual.py -> ResidualLayer, ResidualStack
    - vqvae.py -> VQVAE
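
Conceptually, vqvae.py wires these pieces into an encode -> quantize -> decode pipeline. The following is only a schematic sketch under assumed signatures, not the repo's exact classes:

import torch.nn as nn

class VQVAESketch(nn.Module):
    def __init__(self, encoder, quantizer, decoder):
        super().__init__()
        self.encoder = encoder        # x -> z_e
        self.quantizer = quantizer    # z_e -> z_q, plus its loss term
        self.decoder = decoder        # z_q -> x_hat

    def forward(self, x):
        z_e = self.encoder(x)
        vq_loss, z_q = self.quantizer(z_e)   # assumed return values; see quantizer.py
        x_hat = self.decoder(z_q)
        return vq_loss, x_hat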

PixelCNN - Sampling from the VQ VAE latent space

To sample from the latent space, we fit a PixelCNN over the latent pixel values z_ij. The trick here is recognizing that the VQ VAE maps an image to a latent space that has the same structure as a 1-channel image. For example, if you run the default VQ VAE parameters, you'll map RGB images of shape (32,32,3) to a latent space with shape (8,8,1), which is equivalent to an 8x8 grayscale image. Therefore, you can use a PixelCNN to fit a distribution over the "pixel" values of the 8x8 1-channel latent space.
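
The 32 -> 8 spatial reduction corresponds to a downsampling factor of 4. As a toy check (not the encoder's actual layer stack), two stride-2 convolutions produce exactly this shape:

import torch
import torch.nn as nn

# a (3, 32, 32) RGB image passed through two stride-2 convolutions
downsample = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),   # 32x32 -> 16x16
    nn.Conv2d(64, 64, kernel_size=4, stride=2, padding=1),  # 16x16 -> 8x8
)
x = torch.randn(1, 3, 32, 32)
print(downsample(x).shape)  # torch.Size([1, 64, 8, 8]), i.e. an 8x8 latent grid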

To train the PixelCNN on latent representations, you first need to follow these steps:

  1. Train the VQ VAE on your dataset of choice
  2. Use the saved VQ VAE parameters to encode your dataset and save the discrete latent representations with the np.save API (in quantizer.py this is the min_encoding_indices variable); a rough sketch of this step appears after this list
  3. Specify the path to your saved latent dataset in the utils.load_latent_block function
  4. Run the PixelCNN script
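
A rough sketch of step 2 is below. The model attributes and the quantizer's return values are assumptions here; adapt them to the actual classes in models/:

import numpy as np
import torch

@torch.no_grad()
def save_latent_codes(model, data_loader, out_path="latent_codes.npy"):
    # hypothetical helper: encode a dataset with a trained VQ-VAE and save the
    # discrete code indices (min_encoding_indices in quantizer.py) for the PixelCNN
    all_codes = []
    for x, _ in data_loader:
        z_e = model.encoder(x)                   # x -> z_e (assumed attribute name)
        out = model.vector_quantization(z_e)     # assumed attribute name
        min_encoding_indices = out[-1]           # assumed position in the returned tuple
        all_codes.append(min_encoding_indices.cpu().numpy())
    np.save(out_path, np.concatenate(all_codes, axis=0))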

To run the PixelCNN, simply type

python pixelcnn/gated_pixelcnn.py

along with any parameters (see the argparse statements). The default dataset is LATENT_BLOCK, which will only work if you have trained your VQ VAE and saved the latent representations.


vqvae's Issues

Embedding loss backwards?

referencing this line:

loss = torch.mean((z_q.detach()-z)**2) + self.beta * \
torch.mean((z_q - z.detach()) ** 2)

From what I can read in the paper, the loss is:

log p ... + ||sg[z_e(x)]-e||^2 + beta * ||z_e(x)-sg[e]||^2

but in this code it seems to me to be backwards, i.e.

log p ... + ||z_e(x)-sg[e]||^2 + beta * ||sg[z_e(x)]-e||^2
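
For reference, sg[·] in the paper corresponds to .detach() in PyTorch, so the two terms read as follows (a standalone toy example using the same variable names as the snippet above):

import torch

# toy tensors standing in for z = z_e(x) and z_q = the selected codebook vectors e
z = torch.randn(4, 8, 8, 64, requires_grad=True)
z_q = torch.randn(4, 8, 8, 64, requires_grad=True)
beta = 0.25

codebook_loss = torch.mean((z_q - z.detach()) ** 2)     # ||sg[z_e(x)] - e||^2: gradient reaches only the embeddings
commitment_loss = torch.mean((z_q.detach() - z) ** 2)   # ||z_e(x) - sg[e]||^2: gradient reaches only the encoder output
loss = codebook_loss + beta * commitment_loss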

About the shape of variable 'min_encoding_indices'

Hi,
Having trained the VQVAE model, I managed to extract the encoded codes, i.e. min_encoding_indices. However, I found that its shape is (batch_size * image_size * image_size / 16, 1). That is, taking a batch of 10 images of 64*64 pixels as an example, the shape of this variable would be (10 * 16 * 16, 1), i.e. (2560, 1). This confuses me a little, because in the following code, when I need to use it to train the PixelCNN, after running
for batch_idx, (x, label) in enumerate(test_loader):
the shape of x would be (batch_size, 1). Besides, I don't quite understand why labels filled entirely with zeros are needed. Is there any modification needed to this code?
Thanks.
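
(For illustration, a minimal sketch of recovering a per-image code grid from the flat indices, assuming a batch of 10 images of 64*64 pixels, a downsampling factor of 4, batch-major ordering of the saved indices, and an illustrative file name:)

import numpy as np

codes = np.load("latent_codes.npy")              # shape (10 * 16 * 16, 1) = (2560, 1)
batch_size, grid = 10, 16
codes = codes.reshape(batch_size, grid, grid)    # one 16x16 "grayscale" code image per input
print(codes.shape)                               # (10, 16, 16)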

Why is your way of getting quantized vectors so roundabout?

Thanks for your awesome VQ-VAE implementation!

I read the code here, which is for getting quantized vectors:

min_encodings = torch.zeros(
    min_encoding_indices.shape[0], self.n_e).to(device)
min_encodings.scatter_(1, min_encoding_indices, 1)
# get quantized latent vectors
z_q = torch.matmul(min_encodings, self.embedding.weight).view(z.shape)

I am wondering why you don't just use the forward function of the nn.Embedding object self.embedding, like this:

min_encoding_indices = torch.argmin(d, dim=1)
z_q = self.embedding(min_encoding_indices)

(The two ways give the same results.)

If I change the code like this, will I run into any trouble (e.g. being unable to pass gradients)?

Thank you so much!

Request for a license file

Hi, could you please add a license file (MIT license, etc.) to this repository so that we know the terms if we partially reuse or reference the code? Thanks!

question about perplexity

I have some doubts about the calculation of perplexity

e_mean = torch.mean(min_encodings, dim=0)
perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10)))

Does dim=0 average each dimension over all samples, rather than averaging the (codebook) embeddings of the samples?

thanks!
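
For context, with min_encodings as the (N, n_e) one-hot matrix built earlier in quantizer.py, the dim=0 mean gives how often each codebook entry was selected across the batch, and perplexity is the exponentiated entropy of that usage distribution. A standalone toy example:

import torch

n_e = 4
# toy one-hot encodings for 8 flattened latent positions (N = 8, n_e = 4)
min_encoding_indices = torch.tensor([0, 0, 1, 2, 0, 1, 3, 0]).unsqueeze(1)
min_encodings = torch.zeros(8, n_e)
min_encodings.scatter_(1, min_encoding_indices, 1)

e_mean = torch.mean(min_encodings, dim=0)    # usage frequency of each codebook entry
perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10)))
print(e_mean)        # tensor([0.5000, 0.2500, 0.1250, 0.1250])
print(perplexity)    # roughly 3.4, the effective number of codes in use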
