examples's Issues

What is a "correctly formatted local directory"?

In the Stable Diffusion fine-tuning README, it is mentioned that the dataset lambdalabs/pokemon-blip-captions is on the Hugging Face Hub, but "could also be a correctly formatted local directory."

What is the correct format?

I've changed data.params.train.params.name in the config from lambdalabs/pokemon-blip-captions to /home/ozzah/finetuning_dataset, which contains images and captions, but I get an error.
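
For reference, a hedged sketch (an assumption, not an official answer) of one layout that datasets.load_dataset can resolve from a plain directory path, assuming the repo's hf_dataset helper simply forwards the configured name to load_dataset: image files plus a metadata.jsonl mapping each file to its caption.

# Hypothetical layout under /home/ozzah/finetuning_dataset (names are illustrative):
#   0001.png
#   0002.png
#   metadata.jsonl   # one JSON object per line: {"file_name": "0001.png", "text": "a caption"}
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="/home/ozzah/finetuning_dataset", split="train")
print(ds[0])  # expect {'image': <PIL.Image.Image ...>, 'text': 'a caption'}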

Trying to reproduce, got error

Everything is the same, but I got this error:
RuntimeError: Given groups=1, weight of size [128, 3, 3, 3], expected input[512, 1, 512, 3] to have 3 channels, but got 1 channels instead

size mismatch error when running fine-tuning command for stable-diffusion.

Thanks for your wonderful open-source work.

But when I ran the following command, I got a size mismatch error when loading the pre-trained weights.

python main.py     -t     --base configs/stable-diffusion/pokemon.yaml     --gpus 0     --scale_lr False     --num_nodes 1     --check_val_every_n_epoch 10     --finetune_from ./models/ldm/stable-diffusion-v1/sd-v1-4-full-ema.ckpt

It fails around line 670 of main.py:
model = instantiate_from_config(config.model)
Thank you for your help.

RuntimeError: Error(s) in loading state_dict for CLIPTextModel:
        size mismatch for text_model.embeddings.token_embedding.weight: copying a param with shape torch.Size([49408, 768]) from checkpoint, the shape in current model is torch.Size([49408, 512]).
        size mismatch for text_model.embeddings.position_embedding.weight: copying a param with shape torch.Size([77, 768]) from checkpoint, the shape in current model is torch.Size([77, 512]).
        size mismatch for text_model.encoder.layers.0.self_attn.k_proj.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([512, 512]).
        size mismatch for text_model.encoder.layers.0.self_attn.k_proj.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for text_model.encoder.layers.0.self_attn.v_proj.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([512, 512]).
        size mismatch for text_model.encoder.layers.0.self_attn.v_proj.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for text_model.encoder.layers.0.self_attn.q_proj.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([512, 512]).
        size mismatch for text_model.encoder.layers.0.self_attn.q_proj.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for text_model.encoder.layers.0.self_attn.out_proj.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is torch.Size([512, 512]).
        size mismatch for text_model.encoder.layers.0.self_attn.out_proj.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for text_model.encoder.layers.0.layer_norm1.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for text_model.encoder.layers.0.layer_norm1.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for text_model.encoder.layers.0.mlp.fc1.weight: copying a param with shape torch.Size([3072, 768]) from checkpoint, the shape in current model is torch.Size([2048, 512]).
        size mismatch for text_model.encoder.layers.0.mlp.fc1.bias: copying a param with shape torch.Size([3072]) from checkpoint, the shape in current model is torch.Size([2048]).
        size mismatch for text_model.encoder.layers.0.mlp.fc2.weight: copying a param with shape torch.Size([768, 3072]) from checkpoint, the shape in current model is torch.Size([512, 2048]).
        size mismatch for text_model.encoder.layers.0.mlp.fc2.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for text_model.encoder.layers.0.layer_norm2.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for text_model.encoder.layers.0.layer_norm2.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
        [... the same 768-vs-512 (and 3072-vs-2048) size mismatch is reported for every attention projection, layer norm, and MLP weight of text_model.encoder.layers.1 through 11 ...]
        size mismatch for text_model.final_layer_norm.weight: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for text_model.final_layer_norm.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([512]).
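
For context (a hedged observation, not a confirmed fix): Stable Diffusion v1 conditions on the CLIP ViT-L/14 text encoder, whose hidden size is 768, while 512 is the text width of the smaller ViT-B/32 variant, so the mismatch pattern above looks as if a different CLIP text model is being instantiated than the one the checkpoint was trained with. A quick way to compare the two widths with transformers:

from transformers import CLIPTextConfig

# hidden sizes of the two CLIP text encoders (only the config files are downloaded)
print(CLIPTextConfig.from_pretrained("openai/clip-vit-large-patch14").hidden_size)  # 768
print(CLIPTextConfig.from_pretrained("openai/clip-vit-base-patch32").hidden_size)   # 512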

Stopped during 64th epoch with no error message

I tried running the code on a Lambda Labs A100 instance and it stopped in the middle of the 64th epoch. There is no error message or anything, so maybe there was an issue with the instance rather than the code? Maybe a memory issue or something?

I made some small changes, so maybe that was the cause of the issue. I modified these settings in order to accommodate the different GPU size:
BATCH_SIZE = 2
N_GPUS = 1
ACCUMULATE_BATCHES = 4

killed when epoch 0 finished

@justinpinkney @stephenbalaban @jmhummel @eolecvk
Thanks for your great work, it's amazing.
When I ran the training code, I got this error:
Epoch 0: 100%|█| 209/209 [06:12<00:00, 1.78s/it, loss=0.0685, v_num=0, train/loss_simple_step=0.0223, train/loss_vlb_step=8.78e-5, train/loss_step=0.0223, global_step=208.0,
Summoning checkpoint.
Killed

What is wrong? Any help would be appreciated, thanks.

ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers. SD fine tuning

Hello, I'm following the SD fine-tuning tutorial. I ran it with the Pokemon dataset and all was well, so I formatted my own dataset, edited the .yaml, forked the repo, and am now having this issue when starting the first training epoch:

'ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.'

Full traceback:
Traceback (most recent call last):
File "/content/stable-diffusion/main.py", line 905, in
trainer.fit(model, data)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
self._run(model)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
self._dispatch()

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
self.accelerator.start_training(self)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
self.training_type_plugin.start_training(trainer)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
self._results = trainer.run_stage()

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
return self._run_train()

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/trainer/trainer.py", line 1045, in _run_train
self.fit_loop.run()

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/loops/base.py", line 111, in run
self.advance(*args, **kwargs)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/loops/fit_loop.py", line 200, in advance
epoch_output = self.epoch_loop.run(train_dataloader)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/loops/base.py", line 111, in run
self.advance(*args, **kwargs)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 130, in advance
batch_output = self.batch_loop.run(batch, self.iteration_count, self._dataloader_idx)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 101, in run
super().run(batch, batch_idx, dataloader_idx)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/loops/base.py", line 111, in run
self.advance(*args, **kwargs)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 148, in advance
result = self._run_optimization(batch_idx, split_batch, opt_idx, optimizer)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 202, in _run_optimization
self._optimizer_step(optimizer, opt_idx, batch_idx, closure)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 396, in _optimizer_step
model_ref.optimizer_step(
File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/core/lightning.py", line 1618, in optimizer_step
optimizer.step(closure=optimizer_closure)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/core/optimizer.py", line 209, in step
self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/core/optimizer.py", line 129, in __optimizer_step
trainer.accelerator.optimizer_step(optimizer, self._optimizer_idx, lambda_closure=closure, **kwargs)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/accelerators/accelerator.py", line 296, in optimizer_step
self.run_optimizer_step(optimizer, opt_idx, lambda_closure, **kwargs)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/accelerators/accelerator.py", line 303, in run_optimizer_step
self.training_type_plugin.optimizer_step(optimizer, lambda_closure=lambda_closure, **kwargs)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 226, in optimizer_step
optimizer.step(closure=lambda_closure, **kwargs)

File "/usr/local/lib/python3.9/dist-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
return wrapped(*args, **kwargs)

File "/usr/local/lib/python3.9/dist-packages/torch/optim/optimizer.py", line 113, in wrapper
return func(*args, **kwargs)

File "/usr/local/lib/python3.9/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)

File "/usr/local/lib/python3.9/dist-packages/torch/optim/adamw.py", line 119, in step
loss = closure()

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 236, in _training_step_and_backward_closure
result = self.training_step_and_backward(split_batch, batch_idx, opt_idx, optimizer, hiddens)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 537, in training_step_and_backward
result = self._training_step(split_batch, batch_idx, opt_idx, hiddens)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 307, in _training_step
training_step_output = self.trainer.accelerator.training_step(step_kwargs)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/accelerators/accelerator.py", line 193, in training_step
return self.training_type_plugin.training_step(*step_kwargs.values())

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/plugins/training_type/ddp.py", line 383, in training_step
return self.model(*args, **kwargs)

File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)

File "/usr/local/lib/python3.9/dist-packages/torch/nn/parallel/distributed.py", line 1008, in forward
output = self._run_ddp_forward(*inputs, **kwargs)

File "/usr/local/lib/python3.9/dist-packages/torch/nn/parallel/distributed.py", line 969, in _run_ddp_forward
return module_to_run(*inputs[0], **kwargs[0])

File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)

File "/usr/local/lib/python3.9/dist-packages/pytorch_lightning/overrides/base.py", line 82, in forward
output = self.module.training_step(*inputs, **kwargs)

File "/content/stable-diffusion/ldm/models/diffusion/ddpm.py", line 406, in training_step
loss, loss_dict = self.shared_step(batch)

File "/content/stable-diffusion/ldm/models/diffusion/ddpm.py", line 872, in shared_step
x, c = self.get_input(batch, self.first_stage_key)

File "/usr/local/lib/python3.9/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)

File "/content/stable-diffusion/ldm/models/diffusion/ddpm.py", line 742, in get_input
c = self.get_learned_conditioning(xc)

File "/content/stable-diffusion/ldm/models/diffusion/ddpm.py", line 619, in get_learned_conditioning
c = self.cond_stage_model.encode(c)

File "/content/stable-diffusion/ldm/modules/encoders/modules.py", line 280, in encode
return self(text)

File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)

File "/content/stable-diffusion/ldm/modules/encoders/modules.py", line 271, in forward
batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True,

File "/usr/local/lib/python3.9/dist-packages/transformers/tokenization_utils_base.py", line 2484, in call
encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)

File "/usr/local/lib/python3.9/dist-packages/transformers/tokenization_utils_base.py", line 2570, in _call_one
return self.batch_encode_plus(

File "/usr/local/lib/python3.9/dist-packages/transformers/tokenization_utils_base.py", line 2761, in batch_encode_plus
return self._batch_encode_plus(

File "/usr/local/lib/python3.9/dist-packages/transformers/tokenization_utils.py", line 733, in _batch_encode_plus
first_ids = get_input_ids(ids)

File "/usr/local/lib/python3.9/dist-packages/transformers/tokenization_utils.py", line 713, in get_input_ids
raise ValueError(

ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.

For full context, this is my dataset, formatted to be structured the same as the Pokemon dataset - https://huggingface.co/datasets/pimentooliver/fungi_futures

And here is my modified script -

!(python main.py \
    -t \
    --base /content/stable-diffusion/configs/stable-diffusion/rewrite_yaml.yaml \
    --gpus "$gpu_list" \
    --scale_lr False \
    --num_nodes 1 \
    --check_val_every_n_epoch 10 \
    --finetune_from "$ckpt_path" \
    data.params.batch_size="$BATCH_SIZE" \
    lightning.trainer.accumulate_grad_batches="$ACCUMULATE_BATCHES" \
    data.params.validation.params.n_gpus="$N_GPUS" \
)

Any advice much appreciated, thank you.
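
In case it helps narrow things down, here is a hedged sanity check (assuming the caption column is named "text", as in the pokemon dataset) that every caption the tokenizer will receive is actually a string:

from datasets import load_dataset

ds = load_dataset("pimentooliver/fungi_futures", split="train")
print(ds.column_names)  # expect something like ['image', 'text']
bad = [i for i, t in enumerate(ds["text"]) if not isinstance(t, str)]
print(len(bad), "non-string captions; first few indices:", bad[:10])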

freeze model

How to freeze different parts of the model?
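
A minimal PyTorch sketch (the generic mechanism, not a built-in option of this repo): freezing part of the model means disabling gradients on its parameters before the optimizer is created; the submodule names below are only examples.

def freeze(module):
    # stop gradient updates for every parameter in this submodule
    for p in module.parameters():
        p.requires_grad = False

# e.g., after the LatentDiffusion model has been instantiated (attribute names are illustrative):
# freeze(model.first_stage_model)   # the VAE
# freeze(model.cond_stage_model)    # the CLIP text encoder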

Invalid --gpus argument

Dear author,

I am running pokemon_finetune.ipynb with the following settings.

# 2xA6000:
BATCH_SIZE = 4
N_GPUS = 1
ACCUMULATE_BATCHES = 1

gpu_list = ",".join((str(x) for x in range(N_GPUS))) + ","
print(f"Using GPUs: {gpu_list}")

I ran the python main.py code block:

# Run training
!(python main.py \
    -t \
    --base configs/stable-diffusion/pokemon.yaml \
    --gpus "$gpu_list" \
    --scale_lr False \
    --num_nodes 1 \
    --check_val_every_n_epoch 10 \
    --finetune_from "$ckpt_path" \
    data.params.batch_size="$BATCH_SIZE" \
    lightning.trainer.accumulate_grad_batches="$ACCUMULATE_BATCHES" \
    data.params.validation.params.n_gpus="$NUM_GPUS" \
)

I got an error saying:

main.py: error: argument --gpus: invalid _gpus_allowed_type value: ''

Could you please let me know why?
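
A hedged guess at where to look: the --gpus value comes from IPython substituting $gpu_list out of the notebook's Python namespace, so an empty string usually means the cell defining it did not run in the same session (note also that the command passes $NUM_GPUS for n_gpus, while the variable defined above is N_GPUS). A quick check to run before the training cell:

N_GPUS = 1
gpu_list = ",".join(str(x) for x in range(N_GPUS)) + ","
assert gpu_list.strip(","), "gpu_list is empty; --gpus would receive ''"
print(f"--gpus will receive: {gpu_list!r}")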

What is your pokemon configuration?

Hi, thanks for your awesome work. I followed your instructions to finetune Stable Diffusion on the pokemon dataset. There were no problems in my training process, but my model is not as good as yours. It can generate images in pokemon style but seems to forget old information such as Donald Trump or Obama. I tried your model on HF; it did not forget Obama and can generate better images without weird details. I tried modifying the warm-up steps to 10000 and decreasing the LR a little bit. The results did get better but are still not as good as yours. So I wonder, what is your configuration? Thank you.

CUDA out of memory Issue

Hi, great work!

I just followed the instructions in pokemon_finetune.ipynb and tried to run it on Colab with one Tesla V100 and high RAM,

with the following settings:

BATCH_SIZE = 1
N_GPUS = 1
ACCUMULATE_BATCHES = 1

It did show the output below, so I think it started okay:

Epoch 0: 0% 0/833 [00:00<00:00, 5637.51it/s] Summoning checkpoint.
tcmalloc: large alloc 1258086400 bytes == 0x7fa9e73c0000 @
...

until the RuntimeError: CUDA out of memory occurred.

Since I am already using the best hardware Colab offers, I wonder whether there is a way/trick to make it runnable on Colab Pro+?

For example, a trick to save memory, or maybe using a smaller but similar model instead?

Many thanks !!

Cannot reproduce the result

I tried to reproduce the performance on the Pokemon dataset with a training config like:
python main.py \
    -t \
    --base configs/stable-diffusion/pokemon.yaml \
    --gpus 4,5,6,7 \
    --scale_lr False \
    --num_nodes 1 \
    --check_val_every_n_epoch 10 \
    --finetune_from ./weights/sd-v1-4-full-ema.ckpt \
    data.params.batch_size="1" \
    lightning.trainer.accumulate_grad_batches="2" \
    data.params.validation.params.n_gpus="4"

but the generated images are nothing but chaos after 200 epochs.
Here are some results:

"yoda" (generated image attached)

"A cute bunny rabit" (generated image attached)

Any suggestions?

What is the expected epoch loss?

I tried to reproduce the result, but failed.
Here is my config:
GPUs (V100): 2
batch size: 2
accumulate_batches: 2

After training for 500 epochs, the train epoch loss is about 0.01, but the generated images are bad.

Could you paste your training metrics, such as epoch loss? I think this would help others a lot. Thanks.

Too slow on 2xA100 SXM4

Hello,
I started training on 2xA100 SXM4 according to your tutorial. I am using the pokemon.yaml file. My dataset contains 1743 images and I am loading it via Hugging Face. The training has been going on for 13 hours and the first epoch isn't even over yet. There are neither images produced from the validation texts nor a saved checkpoint in the log folder. The README says your training takes 6 hours on 2xA6000; wouldn't you expect similar performance from the A100s?
(Two screenshots are attached: Ekran Resmi 2022-10-20 14 36 45 and Ekran Resmi 2022-10-20 14 36 36.)

Can't reproduce: generated images turn into noise

TRAIN: I followed the example and used a V100 to reproduce; I only changed the batch size from 4 to 1 in configs/stable-diffusion/pokemon.yaml.
python main.py -t --base configs/stable-diffusion/pokemon.yaml --gpus 1 --scale_lr False --num_nodes 1 --check_val_every_n_epoch 10 --finetune_from sd-v1-4-full-ema.ckpt

TEST: After training for about 300 epochs, I used scripts/txt2img.py to test:
1. First I tested with the original checkpoint sd-v1-4-full-ema.ckpt and got the result below:
python scripts/txt2img.py --prompt 'robotic cat with wings' --outdir '/outputs/generated_pokemon' --H 512 --W 512 --n_samples 4 --config '/configs/stable-diffusion/pokemon.yaml' --ckpt 'sd-v1-4-full-ema.ckpt'
(generated image attached)
2. Then I tested again with epoch=000002.ckpt, epoch=000004.ckpt, epoch=000007.ckpt, epoch=000009.ckpt, epoch=000012.ckpt, and so on; the results become more and more like noise, and eventually I only get all-black pictures.
python scripts/txt2img.py --prompt 'robotic cat with wings' --outdir '/outputs/generated_pokemon' --H 512 --W 512 --n_samples 4 --config '/configs/stable-diffusion/pokemon.yaml' --ckpt 'logs/2022-10-28T12-32-02_pokemon/checkpoints/epoch=000002.ckpt'
(Generated images are attached for epoch=000002, 000004, 000007, 000009, 000012, 000014, ..., 000048; they become progressively noisier.)
Has anyone else met the same issue, or could someone help solve this problem?

shape error

Hi, my environment is 3x 3090 Ti.

I have a shape error:

RuntimeError: Given groups=1, weight of size [128, 3, 3, 3], expected input[512, 1, 512, 3] to have 3 channels, but got 1 channels instead

and here is my command

3x3090ti:

BATCH_SIZE = 3
N_GPUS = 3
ACCUMULATE_BATCHES = 1

gpu_list = ",".join((str(x) for x in range(N_GPUS))) + ","
print(f"Using GPUs: {gpu_list}")

get_ipython().system('(python main.py -t --base configs/stable-diffusion/pokemon.yaml --gpus "$gpu_list" --scale_lr False --num_nodes 1 --check_val_every_n_epoch 10 --finetune_from /home/shlee/.cache/huggingface/hub/models--CompVis--stable-diffusion-v-1-4-original/snapshots/0834a76f88354683d3f7ef271cadd28f4757a8cc/sd-v1-4-full-ema.ckpt data.params.batch_size= "$BATCH_SIZE" )')

GPU requirements

Hello, what is the minimum GPU configuration needed to train the model?

Training on non-square images

Thanks for your great work!
I want to train on my own dataset; however, the images are not all the same size. The current method converts them all to 512x512 (it seems to use cropping). Will this affect training performance? Is there a proper way to train on these images?
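
For what it's worth, a hedged illustration (not necessarily what this repo does internally) of the common resize-then-center-crop approach, which keeps the central subject and avoids distortion when images are not square:

from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(512),      # scale the shorter side to 512
    transforms.CenterCrop(512),  # then take a square 512x512 crop from the middle
])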

wrong number of channels

When I run the code with my own data, I run into this error:

RuntimeError: Given groups=1, weight of size [128, 3, 3, 3], expected input[3, 1, 512, 512] to have 3 channels, but got 1 channels instead

First I thought that my images might be grayscale (not RGB) and thus have only one channel instead of three, but this doesn't seem to be the case. Any ideas on how to solve the issue?
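
A hedged check you could run over the raw files (folder and extension are placeholders): PIL reports each image's mode, and anything that is not RGB (e.g. L for grayscale, RGBA for alpha) will load with a different channel count.

from pathlib import Path
from PIL import Image

for path in Path("my_dataset").rglob("*.png"):  # adjust the folder/extension to your data
    with Image.open(path) as im:
        if im.mode != "RGB":
            print(path, im.mode)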

Fine-tuning docs could include more info on making datasets

I want to try making a better version of the pokemon dataset, but I'm not clear on how to combine the spreadsheet / images into a dataset. Perhaps you could provide some steps on how exactly the pokemon one was created, or at least link to a related tutorial or piece of software.
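
A rough sketch of one way to combine a spreadsheet and an image folder into a Hugging Face dataset (column and file names are assumptions, not how the pokemon dataset was actually built):

import pandas as pd
from datasets import Dataset, Image

df = pd.read_csv("captions.csv")       # columns: file_name (path to an image), text (caption)
ds = Dataset.from_pandas(df)
ds = ds.rename_column("file_name", "image").cast_column("image", Image())
ds.save_to_disk("my_dataset")          # or ds.push_to_hub("username/my-dataset")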

Error when trying to reproduce on Colab Pro Plus with A100 GPU

I am getting the following error when running the notebook on Colab Pro Plus with one A100 GPU:

  File "main.py", line 812, in <module>
    trainer = Trainer.from_argparse_args(trainer_opt, **trainer_kwargs)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/properties.py", line 421, in from_argparse_args
    return from_argparse_args(cls, args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/argparse.py", line 52, in from_argparse_args
    return cls(**trainer_kwargs)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/connectors/env_vars_connector.py", line 40, in insert_env_defaults
    return fn(self, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 446, in __init__
    terminate_on_nan,
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/connectors/training_trick_connector.py", line 50, in on_trainer_init
    self.configure_accumulated_gradients(accumulate_grad_batches)
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/connectors/training_trick_connector.py", line 66, in configure_accumulated_gradients
    raise TypeError("Gradient accumulation supports only int and dict types")
TypeError: Gradient accumulation supports only int and dict types

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 909, in <module>
    if trainer.global_rank == 0:
NameError: name 'trainer' is not defined
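
Hedged note: the NameError is only a downstream symptom; the primary failure is the TypeError, which PyTorch Lightning raises when accumulate_grad_batches reaches the Trainer as something other than an int (or dict). The first thing to rule out is the value coming from the notebook variable, e.g. an empty or quoted string. A minimal check before launching, following the notebook's variable names:

# make sure the override is a real integer before it is interpolated into the command line
ACCUMULATE_BATCHES = 1
assert isinstance(ACCUMULATE_BATCHES, int)
# then pass it as lightning.trainer.accumulate_grad_batches=1 (no extra quoting or spaces)

If the value is fine, the next suspect is a pytorch-lightning version on Colab that is newer than the one the repo was written against.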

Dataset download seems to fail; how can I download the data manually?

[util.py instantiate_from_config] config['target']: ldm.data.simple.hf_dataset
[simple.py hf_dataset] name: lambdalabs/pokemon-blip-captions
/usr/local/anaconda3/lib/python3.8/site-packages/huggingface_hub/utils/_deprecation.py:97: FutureWarning: Deprecated argument(s) used in 'dataset_info': token. Will not be supported from version '0.12'.
warnings.warn(message, FutureWarning)

Traceback (most recent call last):
File "main.py", line 846, in
data.prepare_data()
File "/usr/local/anaconda3/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py", line 428, in wrapped_fn
fn(*args, **kwargs)
File "/home/hello/lzs/stable_diffusion/sd_finetune/examples/stable-diffusion-finetuning/stable-diffusion/main.py", line 211, in prepare_data
instantiate_from_config(data_cfg)
File "/home/hello/lzs/stable_diffusion/sd_finetune/examples/stable-diffusion-finetuning/stable-diffusion/ldm/util.py", line 80, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/home/hello/lzs/stable_diffusion/sd_finetune/examples/stable-diffusion-finetuning/stable-diffusion/ldm/data/simple.py", line 126, in hf_dataset
ds = load_dataset(name, split=split)
File "/usr/local/anaconda3/lib/python3.8/site-packages/datasets/load.py", line 1723, in load_dataset
builder_instance = load_dataset_builder(
File "/usr/local/anaconda3/lib/python3.8/site-packages/datasets/load.py", line 1500, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/usr/local/anaconda3/lib/python3.8/site-packages/datasets/load.py", line 1247, in dataset_module_factory
raise e1 from None
File "/usr/local/anaconda3/lib/python3.8/site-packages/datasets/load.py", line 1228, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/usr/local/anaconda3/lib/python3.8/site-packages/datasets/load.py", line 819, in get_module
hfh_dataset_info = HfApi(config.HF_ENDPOINT).dataset_info(
File "/usr/local/anaconda3/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 94, in _inner_fn
return fn(*args, **kwargs)
File "/usr/local/anaconda3/lib/python3.8/site-packages/huggingface_hub/utils/_deprecation.py", line 98, in inner_f
return f(*args, **kwargs)
File "/usr/local/anaconda3/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1357, in dataset_info
r = requests.get(path, headers=headers, timeout=timeout, params=params)
File "/usr/local/anaconda3/lib/python3.8/site-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/anaconda3/lib/python3.8/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/anaconda3/lib/python3.8/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/anaconda3/lib/python3.8/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/usr/local/anaconda3/lib/python3.8/site-packages/requests/adapters.py", line 529, in send
raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=100.0)

Run the model

!(python scripts/txt2img.py
--prompt 'robotic cat with wings' \

I get this error every time; the dataset download seems to be failing. How can I download the data manually? If I let the script download it by itself it keeps throwing this error, but if I download it manually it should be able to read the already-downloaded data directly. Which folder does it download into?
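
The traceback above is just a read timeout while contacting huggingface.co, so pre-downloading the dataset once (from a machine or network where the Hub is reachable) and letting the cache do the rest is usually enough. By default the datasets library caches under ~/.cache/huggingface/datasets, and the location can be moved with the HF_DATASETS_CACHE environment variable. A hedged sketch:

import os
os.environ["HF_DATASETS_CACHE"] = "/data/hf_datasets"   # optional, hypothetical path; set before importing datasets

from datasets import load_dataset
# run this once; later calls (including the one inside main.py) will hit the cache
ds = load_dataset("lambdalabs/pokemon-blip-captions", split="train")
print(len(ds))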

Unable to select DDP

When I ran the main.py script for training according to the tutorial, the following error occurred
ValueError: You selected an invalid accelerator name: accelerator='ddp'. Available names are: cpu, cuda, hpu, ipu, mps, tpu.

How can I solve it?
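
Hedged note: this message comes from recent PyTorch Lightning releases, where accelerator= only accepts a device type (cpu, cuda, ...) and distributed backends are chosen via strategy=; the training script here was written against the older 1.x API in which "ddp" was a valid accelerator string. The simplest workaround is to install the pytorch-lightning version pinned by this repo's requirements, an older 1.4.x release, e.g. pip install "pytorch-lightning==1.4.2" (verify the exact pin in the repo's environment/requirements file), rather than the latest version. Alternatively, edit the trainer setup in main.py so the DDP choice is passed as a strategy on newer versions.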

Keep getting: NameError: name 'trainer' is not defined

Run training

!(python main.py \
-t \
--base configs/stable-diffusion/pokemon.yaml \
--gpus "$gpu_list" \
--scale_lr False \
--num_nodes 1 \
--check_val_every_n_epoch 10 \
--finetune_from "$ckpt_path" \
data.params.batch_size="$BATCH_SIZE" \
lightning.trainer.accumulate_grad_batches="$ACCUMULATE_BATCHES" \
data.params.validation.params.n_gpus="$NUM_GPUS" \
)

Global seed set to 23
Running on GPUs 0,1,
Traceback (most recent call last):
File "main.py", line 670, in
model = instantiate_from_config(config.model)
File "/home/stable-diffusion/ldm/util.py", line 79, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/home/stable-diffusion/ldm/util.py", line 87, in get_obj_from_str
return getattr(importlib.import_module(module, package=None), cls)
File "/opt/conda/lib/python3.8/importlib/init.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 1014, in _gcd_import
File "", line 991, in _find_and_load
File "", line 975, in _find_and_load_unlocked
File "", line 671, in _load_unlocked
File "", line 843, in exec_module
File "", line 219, in _call_with_frames_removed
File "/home/stable-diffusion/ldm/models/diffusion/ddpm.py", line 26, in
from ldm.models.autoencoder import VQModelInterface, IdentityFirstStage, AutoencoderKL
File "/home/stable-diffusion/ldm/models/autoencoder.py", line 6, in
from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer
ImportError: cannot import name 'VectorQuantizer2' from 'taming.modules.vqvae.quantize' (/opt/conda/lib/python3.8/site-packages/taming/modules/vqvae/quantize.py)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "main.py", line 935, in
if trainer.global_rank == 0:
NameError: name 'trainer' is not defined
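
Hedged note: the NameError at the end is only a side effect; the real failure is the ImportError above it. The copy of taming-transformers installed under site-packages evidently does not provide VectorQuantizer2, while the CompVis GitHub version does, so installing the package from source usually resolves it, e.g. pip install -e "git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers" (this is the source the upstream stable-diffusion environment files point at; double-check it against the pin used by this repo).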

Cannot open checkpoints

Running pokemon fine-tuning in a Jupyter notebook with a custom dataset. The checkpoint files will not open for some reason, so I can't access the checkpoints.

AttributeError: module 'keras.backend' has no attribute 'is_tensor'

When running on Colab Pro, the error below occurs:
AttributeError: module 'keras.backend' has no attribute 'is_tensor'
Any idea how to fix this?

Run training

!(python main.py \
-t \
--base configs/stable-diffusion/pokemon.yaml \
--gpus "$gpu_list" \
--scale_lr False \
--num_nodes 1 \
--check_val_every_n_epoch 10 \
--finetune_from "$ckpt_path" \
data.params.batch_size="$BATCH_SIZE" \
lightning.trainer.accumulate_grad_batches="$ACCUMULATE_BATCHES" \
data.params.validation.params.n_gpus="$NUM_GPUS" \
)
Thanks.

"if any Pokémon enthusiasts feel like writing some captions manually please get in touch!"

With regard to "But if any Pokémon enthusiasts feel like writing some captions manually please get in touch!", if there's a way to do this from a webpage on a phone I'll do this. I can't promise I'll do a lot, but if it's easy enough to get into then perhaps other people will join in.

If each image had multiple captions (either from augmented images put through BLIP or from actual people), then perhaps training with all of the encoded captions blended together within the model's attention, a la MixFeat blending features in a hidden state (rather than training on each caption independently), would create a more expressive model. I don't mean blending the conditioning output across the 77 tokens, since that would only blend tokens at the same position.

EOFError: Ran out of input

Hi,

Thanks for your great repo. Could you please help me figure out why I get this error?

I am on Windows and I fixed the other issues according to the guide at https://github.com/hlky/sd-enable-textual-inversion

Traceback (most recent call last):
File "", line 1, in
File "C:\Users\rusong.li\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Users\rusong.li\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

Traceback (most recent call last):
File "C:\Users\rusong.li\Desktop\finetune\main.py", line 906, in
trainer.fit(model, data)
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 553, in fit
self._run(model)
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 918, in _run
self._dispatch()
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 986, in _dispatch
self.accelerator.start_training(self)
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\accelerators\accelerator.py", line 92, in start_training
self.training_type_plugin.start_training(trainer)
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\plugins\training_type\training_type_plugin.py", line 161, in start_training
self._results = trainer.run_stage()
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 996, in run_stage
return self._run_train()
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1045, in _run_train
self.fit_loop.run()
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\loops\base.py", line 111, in run
self.advance(*args, **kwargs)
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 200, in advance
epoch_output = self.epoch_loop.run(train_dataloader)
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\loops\base.py", line 111, in run
self.advance(*args, **kwargs)
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\loops\epoch\training_epoch_loop.py", line 118, in advance
_, (batch, is_last) = next(dataloader_iter)
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\profiler\base.py", line 104, in profile_iterable
value = next(iterator)
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 625, in prefetch_iterator
last = next(it)
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 546, in next
return self.request_next_batch(self.loader_iters)
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 532, in loader_iters
self._loader_iters = self.create_loader_iters(self.loaders)
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 590, in create_loader_iters
return apply_to_collection(loaders, Iterable, iter, wrong_dtype=(Sequence, Mapping))
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\pytorch_lightning\utilities\apply_func.py", line 96, in apply_to_collection
return function(data, *args, **kwargs)
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\torch\utils\data\dataloader.py", line 435, in iter
return self._get_iterator()
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\torch\utils\data\dataloader.py", line 381, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\rusong.li\Desktop\finetune\venv\lib\site-packages\torch\utils\data\dataloader.py", line 1034, in init
w.start()
File "C:\Users\rusong.li\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "C:\Users\rusong.li\AppData\Local\Programs\Python\Python310\lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\rusong.li\AppData\Local\Programs\Python\Python310\lib\multiprocessing\context.py", line 336, in _Popen
return Popen(process_obj)
File "C:\Users\rusong.li\AppData\Local\Programs\Python\Python310\lib\multiprocessing\popen_spawn_win32.py", line 93, in init
reduction.dump(process_obj, to_child)
File "C:\Users\rusong.li\AppData\Local\Programs\Python\Python310\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'hf_dataset.<locals>.pre_process'. Did you mean: '_loader_iters'?
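
Hedged note: the EOFError in the spawned worker is a symptom; the real failure is the pickling error at the bottom. On Windows, DataLoader workers are started with "spawn", so every transform handed to a worker must be picklable, and a function defined inside another function (like the pre_process closure created inside hf_dataset) is not. Two workarounds: run the dataloaders with num_workers=0, or hoist the transform out to module level. A minimal sketch of the difference (the set_transform call assumes a Hugging Face datasets-style dataset):

from torch.utils.data import DataLoader

def pre_process(examples):        # top-level function: picklable, safe for spawn
    return examples

def make_loader(dataset):
    dataset.set_transform(pre_process)
    # num_workers=0 also sidesteps the spawn/pickle issue entirely
    return DataLoader(dataset, batch_size=1, num_workers=0)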

sample_gs_* vs samples_cfg_scale3.*

Hello everyone,
I am fine-tuning this repo on a dataset of 12,045 images. Unfortunately my GPU can only handle one image per batch, so I have batch_size=1 and accumulate_grad_batches=16.
At Epoch 14, the sample_gs_* images are actually quite good, but the sample_cfg_scale3.* images are quite bad.
Is this normal or am I overfitting? Do I need to train it for more epochs? The loss is around 0.19 at this point of the training.

Being afraid of overfitting, I decided to reduce the lr to 1e-5 and change accumulate_grad_batches=6.
However, even though I am at epoch 8, I can already see good sample_gs_* images but bad samples_cfg_scale3.* images.

Is this normal during training?

3090 and out of memory

RuntimeError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 23.68 GiB total capacity; 20.57 GiB already allocated; 94.44 MiB free; 20.74 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Can anyone help me?

By the way, batch size and num_workers are both set to 1!
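
Hedged note: full fine-tuning of the 512x512 model keeps the whole UNet, its gradients, and the optimizer state on the GPU, so a 24 GB card is very tight even at batch size 1. Before giving up, it is worth trying the allocator hint the error message itself suggests:

import os
# set before any CUDA allocation happens (or export it in the shell before launching main.py)
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

If that is not enough, reducing the training resolution in the config (e.g. 256 instead of 512) or freezing parts of the model are the usual next steps.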

Has anyone been able to train successfully?

I wonder if anyone has successfully fine-tuned the model. I am having difficulty training with this code, whether fine-tuning from the Stable Diffusion checkpoint or training from scratch.

I noticed that after around 2,000 steps the loss spiked from 0.1 to 1.0 and stayed there. Inference from the checkpoints shows that while the loss was around 0.1 the generated images were "ok": when fine-tuning, the model could still generate recognizable shapes from my custom dataset, even if not pretty; when training from scratch, I could at least see blobs of noise. However, once the loss got stuck at 1.0, all it can generate is black images.

I tried different random seed but didn't help.

Resume from checkpoint

Hello,

Thank you for giving this example of how to fine-tune Stable Diffusion!
Training seems to be working fine. However, when I try to resume from an intermediate checkpoint created during training, the reconstructed images come out as noisy blurs. Any idea what I am doing wrong when resuming from a checkpoint that was created along the way? I changed the .ckpt passed to finetune_from and also tried the resume_from flag, but neither worked. Thank you for your help.
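
Hedged note: in the upstream CompVis-style main.py, resuming is normally done with the --resume argument pointing either at a previous log directory or directly at a checkpoint, which restores the optimizer state as well, whereas --finetune_from only loads the weights into a fresh run. Exact flag behaviour may differ in this fork, so check python main.py --help, but a sketch would look like (the log path is a placeholder):

python main.py -t --base configs/stable-diffusion/pokemon.yaml --gpus 0, \
    --resume logs/<your-previous-run>/checkpoints/last.ckpt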

SD-Fine-Tune, "Caught TypeError in DataLoader"

Hi, I have tried to run the notebook code on Colab with your pokemon dataset, and it was generating the checkpoint.
I tried with my own dataset of only 5 images, to test things out before I jump in with my 100k+ images, but I get two TypeErrors. Everything else looks fine because your hf_dataset() function does the heavy lifting.

https://huggingface.co/datasets/treksis/test_pinkeyrepo


TypeError: Caught TypeError in DataLoader worker process 0


TypeError: img should be PIL Image. Got <class 'dict'>

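Hedged note: "img should be PIL Image. Got <class 'dict'>" means a torchvision transform was handed a whole example (a dict) instead of the image itself, which usually comes down to the dataset's columns not lining up with what hf_dataset() expects (check the default image/caption column names in ldm/data/simple.py). A quick way to inspect the dataset linked above:

from datasets import load_dataset

ds = load_dataset("treksis/test_pinkeyrepo", split="train")
print(ds.column_names)        # the loader expects an image column plus a caption column (names are an assumption)
print(type(ds[0]["image"]))   # assuming the column is named "image"; should be a PIL image, not a dict

# if a column has a different name, rename it to match, e.g. (hypothetical column name):
# ds = ds.rename_column("caption", "text")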

pokemon models

Hi, the pokemon model's state_dict keys are different from the original model's, and the pokemon model's output is just noise. Please help!

pokemon model: ema-only-epoch=000142.ckpt
original model: sd-clip-vit-l14-img-embed_ema_only.ckpt

import torch
pokemon_model = torch.load('ema-only-epoch=000142.ckpt', map_location="cpu")
ori_model = torch.load('sd-clip-vit-l14-img-embed_ema_only.ckpt', map_location="cpu")
len(pokemon_model['state_dict'].keys())   # 1145
len(ori_model['state_dict'].keys())       # 1394

promp "Cute Obama creature"
pokemon's result is like Gaussian Noise

looking forward to your reply
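
Hedged note: these two checkpoints are probably not directly comparable. The reference file, sd-clip-vit-l14-img-embed_ema_only.ckpt, appears to be the image-embedding-conditioned (image variations) model, while a text-conditioned fine-tune has a different conditioning stage, so the raw key counts are expected to differ without anything being wrong. Noise output usually means a checkpoint was loaded with a config that does not match it. A quick way to see which sub-modules each file actually contains before worrying about counts:

import torch

pokemon = torch.load("ema-only-epoch=000142.ckpt", map_location="cpu")["state_dict"]
original = torch.load("sd-clip-vit-l14-img-embed_ema_only.ckpt", map_location="cpu")["state_dict"]
# compare top-level module names rather than raw key counts
print(sorted({k.split(".")[0] for k in pokemon}))
print(sorted({k.split(".")[0] for k in original}))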
