
Official repo for VGen: a holistic video generation ecosystem built on diffusion models

Home Page: https://i2vgen-xl.github.io



VGen


VGen is an open-source video synthesis codebase developed by the Tongyi Lab of Alibaba Group, featuring state-of-the-art video generative models. This repository includes implementations of methods such as I2VGen-XL, ModelScopeT2V, HiGen, DreamVideo, TF-T2V, VideoLCM, and InstructVideo.

VGen can produce high-quality videos from input text, images, desired motion, desired subjects, and even provided feedback signals. It also offers a variety of commonly used video generation tools, such as visualization, sampling, training, inference, joint training on images and videos, acceleration, and more.


🔥News!!!

  • [2024.06] We release the code and models of InstructVideo. InstructVideo enables the LoRA fine-tuning and inference in VGen. Feel free to use LoRA fine-tuning for other tasks.
  • [2024.04] We release the models of DreamVideo and ModelScopeT2V V1.5!!! ModelScopeT2V V1.5 is further fine-tuned on ModelScopeT2V for 365k iterations with more data.
  • [2024.04] We release the code and models of TF-T2V!
  • [2024.04] We release the code and models of VideoLCM!
  • [2024.03] We release the training and inference code of DreamVideo!
  • [2024.03] We release the code and model of HiGen!!
  • [2024.01] The gradio demo of I2VGen-XL is now available on HuggingFace. Thanks to our colleagues @Wenmeng Zhou and @AK for the support; welcome to try it out.
  • [2024.01] We now support running the gradio app locally. Thanks to our colleague @Wenmeng Zhou for the support and @AK for the suggestion; welcome to have a try.
  • [2024.01] Thanks to @Chenxi for supporting the running of i2vgen-xl on Replicate. Feel free to give it a try.
  • [2024.01] The gradio demo of I2VGen-XL is now available on ModelScope; welcome to try it out.
  • [2023.12] We have open-sourced the code and models for DreamTalk, which can produce high-quality talking head videos across diverse speaking styles using diffusion models.
  • [2023.12] We release TF-T2V that can scale up existing video generation techniques using text-free videos, significantly enhancing the performance of both Modelscope-T2V and VideoComposer at the same time.
  • [2023.12] We updated the codebase to support higher versions of xformers (0.0.22) and torch 2.0+, and removed the dependency on flash_attn.
  • [2023.12] We release InstructVideo, which can accept human feedback signals to improve VLDMs.
  • [2023.12] We release DreamTalk, a diffusion-based expressive talking head generation method.
  • [2023.12] We release the high-efficiency video generation method VideoLCM.
  • [2023.12] We release the code and models of I2VGen-XL and ModelScope T2V.
  • [2023.12] We release the T2V method HiGen and the T2V customization method DreamVideo.
  • [2023.12] We write an introduction document for VGen and compare I2VGen-XL with SVD.
  • [2023.11] We release a high-quality I2VGen-XL model; please refer to the webpage.

TODO

  • Release the technical papers and webpage of I2VGen-XL
  • Release the code and pretrained models that can generate 1280x720 videos
  • Release the code and models of DreamTalk that can generate expressive talking head videos
  • Release the code and pretrained models of HumanDiff
  • Release models optimized specifically for the human body and faces
  • Release an updated version that can fully preserve identity and capture large, accurate motions simultaneously
  • Release other methods and the corresponding models

Preparation

The main features of VGen are as follows:

  • Expandability, allowing for easy management of your own experiments.
  • Completeness, encompassing all common components for video generation.
  • Excellent performance, featuring powerful pre-trained models in multiple tasks.

Installation

conda create -n vgen python=3.8
conda activate vgen
pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

You also need to make sure that ffmpeg is installed on your system. If it is not, you can install it with the following command:

sudo apt-get update && sudo apt-get install -y ffmpeg libsm6 libxext6
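As a quick, optional sanity check before proceeding, you can confirm that ffmpeg is on your PATH and that the GPU build of PyTorch is importable (the expected versions follow from the pip commands above):

ffmpeg -version | head -n 1
python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"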

Datasets

We have provided a demo dataset that includes images and videos, along with their file lists, in the data directory.

Please note that the demo images used here are for testing purposes and were not included in the training.

Clone the code

git clone https://github.com/ali-vilab/VGen.git
cd VGen

Getting Started with VGen

(1) Train your text-to-video model

Enabling distributed training is as simple as executing the following command.

python train_net.py --cfg configs/t2v_train.yaml

In the t2v_train.yaml configuration file, you can specify the data, adjust the video-to-image ratio using frame_lens, validate your ideas with different diffusion settings, and so on.

  • Before training, you can download any of our open-source models for initialization. Our codebase supports custom initialization and grad_scale settings, all of which are configured under the Pretrain item in the yaml file (a sketch follows this list).
  • During training, you can view the saved models and intermediate inference results in the workspace/experiments/t2v_train directory.
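For orientation, the Pretrain item looks roughly like the sketch below; the field values here are taken from the default configs/t2v_train.yaml (also quoted in an issue further down), and resume_checkpoint should point at whichever open-source checkpoint you downloaded:

Pretrain: {
    'type': pretrain_specific_strategies,
    'fix_weight': False,
    'grad_scale': 0.5,
    'resume_checkpoint': 'workspace/model_bk/model_scope_0267000.pth',
    'sd_keys_path': 'models/stable_diffusion_image_key_temporal_attention_x1.json',
}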

After the training is completed, you can perform inference on the model using the following command.

python inference.py --cfg configs/t2v_infer.yaml

Then you can find the videos you generated in the workspace/experiments/test_img_01 directory. For specific configurations such as data, models, seed, etc., please refer to the t2v_infer.yaml file.

If you want to directly load our previously open-sourced Modelscope T2V model, please refer to this link.

(2) Run the I2VGen-XL model

(i) Download model and test data:

!pip install modelscope
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('damo/I2VGen-XL', cache_dir='models/', revision='v1.0.0')

or you can also download it through HuggingFace (https://huggingface.co/damo-vilab/i2vgen-xl):

# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/damo-vilab/i2vgen-xl

(ii) Run the following command:

python inference.py --cfg configs/i2vgen_xl_infer.yaml

or you can run:

python inference.py --cfg configs/i2vgen_xl_infer.yaml  test_list_path data/test_list_for_i2vgen.txt test_model models/i2vgen_xl_00854500.pth

The test_list_path represents the input image path and its corresponding caption. Please refer to the demo file data/test_list_for_i2vgen.txt for the specific format and suggestions. test_model is the path for loading the model. In a few minutes, you can retrieve the high-definition video you wish to create from the workspace/experiments/test_list_for_i2vgen directory. At present, we find that the current model performs inadequately on anime images and images with a black background due to the lack of relevant training data. We are continuing to optimize it.
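For reference, each active line in the test list is an image path and its caption separated by |||, and lines starting with # are skipped; one entry from the demo file (also visible in the inference log quoted later in this page) looks like this:

data/test_images/img_0001.jpg|||A green frog floats on the surface of the water on green lotus leaves, with several pink lotus flowers, in a Chinese painting style.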

(iii) Run the gradio app locally:

python gradio_app.py

(iv) Run the model on ModelScope and HuggingFace:

Due to the compression of our video quality in GIF format, please click 'HERE' below to view the original video.

Input Image

Click HERE to view the generated video.

Input Image

Click HERE to view the generated video.

Input Image

Click HERE to view the generated video.

Input Image

Click HERE to view the generated video.

(v) Run the following command:

python inference.py --cfg configs/i2vgen_xl_train.yaml

In a few minutes, you can retrieve the high-definition video you wish to create from the workspace/experiments/test_img_01 directory. At present, we find that the current model performs inadequately on anime images and images with a black background due to the lack of relevant training data. We are continuing to optimize it.

(3) Run the HiGen model

(i) Download model:

!pip install modelscope
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/HiGen', cache_dir='models/')

Then you might need the following command to move the checkpoints to the "models/" directory:

mv ./models/iic/HiGen/* ./models/

(ii) Run the following command for text-to-video generation:

python inference.py --cfg configs/higen_infer.yaml

In a few minutes, you can retrieve the videos you wish to create from the workspace/experiments/text_list_for_t2v_share directory. Then you can execute the following command to perform super-resolution on the generated videos:

python inference.py --cfg configs/sr600_infer.yaml

Finally, you can retrieve the high-definition video from the workspace/experiments/text_list_for_t2v_share directory.

Due to the compression of our video quality in GIF format, please click 'HERE' below to view the original video.

Click HERE to view the generated video.

Click HERE to view the generated video.

(4) DreamVideo

Our DreamVideo uses ModelScopeT2V V1.5 as the base video diffusion model. ModelScopeT2V V1.5 is further fine-tuned on ModelScopeT2V for 365k iterations with more data.

Download ModelScopeT2V V1.5 and adapter weights of DreamVideo

!pip install modelscope
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/dreamvideo-t2v', cache_dir='models/')

Then you might need the following command to move the checkpoints to the "models/" directory:

mv ./models/iic/dreamvideo-t2v/* ./models/

Or you can download the checkpoint of ModelScopeT2V V1.5 and adapter weights of DreamVideo from this link.

Training

(i) Subject Learning

Step 1: learn a textual identity using Textual Inversion.

python train_net.py --cfg configs/dreamvideo/subjectLearning/dog2_subjectLearning_step1.yaml

Step 2: train an identity adapter by incorporating the learned textual identity.

python train_net.py --cfg configs/dreamvideo/subjectLearning/dog2_subjectLearning_step2.yaml

Tips:

  • Generally, step 1 takes 1500 to 3000 training steps, and step 2 takes 500 to 1000 training steps. For certain subjects (like cats, etc.), excessive training may generate unnatural videos, and using text embedding with fewer training steps or reducing the training steps of step 2 may help.
  • For some subjects (like dogs, etc.), setting use_mask_diffusion to True may achieve better results. Make sure to put the binary masks of the subject into the folder data/images/custom/YOUR_SUBJECT/masks; you can use SAM to obtain these masks (see the sketch after this list).
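If you prefer to generate those binary masks programmatically, here is a minimal sketch using the segment-anything package. The SAM checkpoint path, the one-mask-per-image naming, and the largest-region heuristic are assumptions for illustration; adjust them to whatever your DreamVideo setup expects.

# Sketch: create one binary subject mask per training image with SAM.
# Assumes: pip install segment-anything opencv-python, plus a downloaded SAM checkpoint.
import os, glob
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

subject_dir = "data/images/custom/YOUR_SUBJECT"      # your subject images
mask_dir = os.path.join(subject_dir, "masks")        # where the masks should live
os.makedirs(mask_dir, exist_ok=True)

sam = sam_model_registry["vit_h"](checkpoint="models/sam_vit_h_4b8939.pth")  # assumed checkpoint path
mask_generator = SamAutomaticMaskGenerator(sam)

for img_path in sorted(glob.glob(os.path.join(subject_dir, "*.jpg")) + glob.glob(os.path.join(subject_dir, "*.png"))):
    image = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
    masks = mask_generator.generate(image)           # list of dicts with 'segmentation', 'area', ...
    if not masks:
        continue
    largest = max(masks, key=lambda m: m["area"])    # crude heuristic: keep the largest region as the subject
    binary = largest["segmentation"].astype(np.uint8) * 255
    out_name = os.path.splitext(os.path.basename(img_path))[0] + ".png"
    cv2.imwrite(os.path.join(mask_dir, out_name), binary)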

(ii) Motion Learning

Train a motion adapter on the given videos.

python train_net.py --cfg configs/dreamvideo/motionLearning/carTurn_motionLearning.yaml

You can customize your own configuration files for subject/motion learning.

Tips:

  • Generally, motion learning takes 500 to 2000 training steps.
  • Try setting p_image_zero between 0 and 0.5 to adjust the effect of appearance guidance during training.
  • For single-video motion customization, try increasing the training steps or the learning rate to better align the motion pattern.

Inference

(i) Subject Customization

python inference.py --cfg configs/dreamvideo/infer/subject_dog2.yaml

(ii) Motion Customization

python inference.py --cfg configs/dreamvideo/infer/motion_carTurn.yaml

For inference with appearance guidance, make sure to add images of foreground objects (e.g., any image of a bear) to the folder data/images/motionReferenceImgs and modify your test file.

Tips:

  • Try setting appearance_guide_strength_cond and appearance_guide_strength_uncond between 0 and 1 to adjust the effect of appearance guidance during inference (a sketch of these knobs follows this list).
  • We do not use DDIM Inversion by default. However, for single-video motion customization, you can try setting inverse_noise_strength between 0 and 0.5 to better align with the training video. For multi-video motion customization, we recommend setting inverse_noise_strength to 0.
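Purely as an illustration of the knobs named in these tips, an override might look like the snippet below; the exact key names come from the tips above, but their nesting is an assumption, so treat configs/dreamvideo/infer/*.yaml as the authoritative reference:

# Illustrative values only; verify key placement against the shipped configs.
appearance_guide_strength_cond: 0.5
appearance_guide_strength_uncond: 0.5
inverse_noise_strength: 0.0    # try 0~0.5 only for single-video motion customization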

(iii) Joint Customization

python inference.py --cfg configs/dreamvideo/infer/joint_dog2_carTurn.yaml

Tips:

  • Try changing identity_adapter_index and motion_adapter_index for better results. Typically, increasing identity_adapter_index improves identity preservation, while increasing motion_adapter_index enhances motion alignment. Balance the two for optimal results.

Examples

We provide some examples for inference. Before you start, make sure you download the models.

(i) Subject Customization

python inference.py --cfg configs/dreamvideo/infer/examples/subject_dog2.yaml

python inference.py --cfg configs/dreamvideo/infer/examples/subject_wolf_plushie.yaml
Subject: dog, prompt: "a * eating pizza", seed: 2767
Subject: wolf plushie, prompt: "a * running in the forest", seed: 2339

(ii) Motion Customization

python inference.py --cfg configs/dreamvideo/infer/examples/motion_carTurn.yaml

python inference.py --cfg configs/dreamvideo/infer/examples/motion_playingGuitar.yaml
Motion: "a car running on the road", prompt: "a lion running on the road", seed: 8888
Motion: "a person is playing guitar", prompt: "a monkey is playing guitar on Mars", seed: 8888

(iii) Joint Customization

python inference.py --cfg configs/dreamvideo/infer/examples/joint_dog2_carTurn.yaml

python inference.py --cfg configs/dreamvideo/infer/examples/joint_dog2_playingGuitar.yaml

python inference.py --cfg configs/dreamvideo/infer/examples/joint_wolf_plushie_carTurn.yaml

python inference.py --cfg configs/dreamvideo/infer/examples/joint_wolf_plushie_playingGuitar.yaml
Motion "a car running on the road":
  dog: "a * running on the beach", seed: 8888
  wolf plushie: "a * running on the road", seed: 3677
Motion "a person is playing guitar":
  dog: "a * is playing guitar on the moon", seed: 8888
  wolf plushie: "a * is playing guitar", seed: 6071

(5) Run the TF-T2V (CVPR-2024) model

(i) Download model:

!pip install modelscope
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/tf-t2v', cache_dir='models/')

Then you might need the following command to move the checkpoints to the "models/" directory:

mv ./models/iic/tf-t2v/* ./models/

(ii) We provide a config file for generating 16-frame video with 448x256 resolution. The command is as follows:

python inference.py --cfg configs/tft2v_t2v_infer.yaml

(If you encounter environment problems when running it, we also provide TF-T2V's environment configuration, tft2v_environment.yaml, for your reference.)

In a few minutes, you can retrieve the videos you wish to create from the workspace/experiments/text_list_for_tft2v directory. Then you can execute the following command to perform super-resolution on the generated videos:

python inference.py --cfg configs/tft2v_16frames_sr600_infer.yaml

Finally, you can retrieve the high-definition video from the workspace/experiments/text_list_for_tft2v directory. (Note that the super-resolution model only supports 32-frame input and cannot take a 16-frame video directly, so we construct a pseudo 32-frame video by copying frames; see the sketch below.)
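The frame-copying workaround amounts to something like the following sketch (one simple way to do it, not necessarily the repository's exact implementation):

# Sketch: duplicate each of the 16 frames once so the super-resolution
# model receives the 32-frame input it expects.
import numpy as np

frames_16 = np.zeros((16, 256, 448, 3), dtype=np.uint8)  # stand-in for a decoded 448x256, 16-frame clip
frames_32 = np.repeat(frames_16, 2, axis=0)              # shape becomes (32, 256, 448, 3)
assert frames_32.shape[0] == 32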

Due to the compression of our video quality in GIF format, please click 'HERE' below to view the original video.

Click HERE to view the generated video.

Click HERE to view the generated video.

(iii) Additionally, you can run the following command for text-to-video generation (32 frames):

python inference.py --cfg configs/tft2v_t2v_32frames_infer.yaml

In a few minutes, you can retrieve the videos you wish to create from the workspace/experiments/text_list_for_tft2v_32frame directory. Then you can execute the following command to perform super-resolution on the generated videos:

python inference.py --cfg configs/tft2v_32frames_sr600_infer.yaml

Finally, you can retrieve the high-definition video from the workspace/experiments/text_list_for_tft2v_32frame directory. (It should be noted that the super-resolution model only supports 32-frame input, and 16-frame video cannot be used.)

Due to the compression of our video quality in GIF format, please click 'HERE' below to view the original video.

Click HERE to view the generated video.

Click HERE to view the generated video.

(iv) Run the following command for compositional video generation like videocomposer (32 frames):

python inference.py --cfg configs/tft2v_vcomposer_32frames_infer.yaml

In a few minutes, you can retrieve the videos you wish to create from the workspace/experiments/vid_list_vcomposer_32frame directory. Then you can execute the following command to perform super-resolution on the generated videos:

python inference.py --cfg configs/tft2v_vcomposer_32frames_sr600_infer.yaml

Finally, you can retrieve the high-definition video from the workspace/experiments/vid_list_vcomposer_32frame directory.

Due to the compression of our video quality in GIF format, please click 'HERE' below to view the original video.

Click HERE to view the generated video.

Click HERE to view the generated video.

(v) We also provide a config file for generating 16-frame video with 448x256 resolution under the compositional video synthesis setting. The command is as follows:

python inference.py --cfg configs/tft2v_vcomposer_infer.yaml

You can also generate a 16-frame video with 896x512 resolution within one model by running:

python inference.py --cfg configs/tft2v_vcomposer_896x512_infer.yaml

It should be noted that the super-resolution model only supports 32-frame input, and 16-frame video cannot be used.

(6) Run the VideoLCM model

(i) Download models as in TF-T2V (if you have already downloaded them in TF-T2V, skip this step):

!pip install modelscope
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/tf-t2v', cache_dir='models/')

Then you might need the following command to move the checkpoints to the "models/" directory:

mv ./models/iic/tf-t2v/* ./models/

(ii) Run the following command for text-to-video generation (16 frames with 448x256 resolution):

python inference.py --cfg configs/videolcm_t2v_infer.yaml

To generate high-resolution videos (1280x720 resolution), you can run the following command:

python inference.py --cfg configs/videolcm_t2v_16frames_sr600_infer.yaml

Due to the compression of our video quality in GIF format, please click 'HERE' below to view the original video.

Click HERE to view the generated video.

Click HERE to view the generated video.

(iii) Run the following command for compositional video generation (16 frames with 448x256 resolution):

python inference.py --cfg configs/videolcm_vcomposer_infer.yaml

(7) InstructVideo (CVPR 2024)

Feel free to reach out ([email protected]) if you have questions.

Dataset preparation and environment configuration

The training of InstructVideo requires video-text pairs to save computational cost during reward fine-tuning. In the paper, we utilize a small set of videos in WebVid to fine-tune our base model. The file list is shown under the folder:

data/instructvideo/webvid_simple_animals_2_selected_20_train_file_list/00000.txt

You should try filtering the videos from your WebVid dataset to compose the training data. An alternative is to use your own video-text pairs. (I tested InstructVideo on WebVid data and some proprietary data; both worked.)
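A minimal sketch of assembling that subset is shown below. It assumes each line of 00000.txt names a video file relative to your WebVid root, which you should verify against the actual file; the WebVid root and output paths are placeholders.

# Sketch: gather the WebVid clips named in the InstructVideo file list.
# Assumption: each line of 00000.txt is a video path relative to your WebVid root.
import os, shutil

file_list = "data/instructvideo/webvid_simple_animals_2_selected_20_train_file_list/00000.txt"
webvid_root = "/path/to/your/webvid"        # placeholder
output_dir = "data/instructvideo/videos"    # placeholder

os.makedirs(output_dir, exist_ok=True)
with open(file_list) as f:
    names = [line.strip() for line in f if line.strip()]

missing = []
for name in names:
    src = os.path.join(webvid_root, name)
    if os.path.exists(src):
        shutil.copy(src, os.path.join(output_dir, os.path.basename(name)))
    else:
        missing.append(name)
print(f"copied {len(names) - len(missing)} clips, {len(missing)} missing")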

Concerning the environment configuration, you should follow the instructions for VGen installation.

Pre-trained weights preparation

!pip install modelscope
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/InstructVideo', cache_dir='models/')

You need to move the checkpoints to the "models/" directory:

mv ./models/iic/InstructVideo/* ./models/

Note that models/model_scope_v1-4_0600000.pth is the pre-trained base model used in the paper. The fine-tuned model is placed under the folder models/instructvideo-finetuned.

You can access the provided files on the InstructVideo ModelScope page.

The inference of InstructVideo

You can leverage the provided fine-tuned checkpoints to generate videos by running the command:

bash configs/instructvideo/eval_generate_videos.sh

This command uses the yaml files under configs/instructvideo/eval, which contain caption file paths for generating videos of in-domain animals, new animals, and non-animals. Feel free to switch among them or replace them with your own captions. Although we fine-tuned using 20-step DDIM, you can still use 50-step DDIM generation.

The reward fine-tuning of InstructVideo

You can perform InstructVideo reward fine-tuning by running the command:

bash configs/instructvideo/train.sh

Since performing reward fine-tuning can lead to over-optimization, I strongly recommend checking the generation performance on some evaluation captions regularly (like the captions indicated in configs/instructvideo/eval).

(8) Other methods

In preparation!!

Customize your own approach

Our codebase supports essentially all of the commonly used components in video generation. You can manage your experiments flexibly by adding the corresponding registration classes, including ENGINE, MODEL, DATASETS, EMBEDDER, AUTO_ENCODER, VISUAL, DIFFUSION, and PRETRAIN, and remain compatible with all of our open-source algorithms according to your needs. An illustrative sketch of the registration pattern is shown below. If you have any questions, feel free to give us your feedback at any time.
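As an illustration of that registration pattern, a custom model might be plugged in roughly as follows; the import path and decorator name are assumptions for this sketch, so check the registry module shipped with the codebase for the actual names:

# Sketch of the registry pattern for adding your own component.
# The import path and decorator are assumed; adapt them to the real registry module.
import torch.nn as nn
from utils.registry_class import MODEL   # assumed location of the MODEL registry

@MODEL.register_class()
class MyVideoUNet(nn.Module):
    """Becomes selectable from the yaml config once registered under MODEL."""
    def __init__(self, dim=320, **kwargs):
        super().__init__()
        self.proj = nn.Conv3d(4, dim, kernel_size=1)

    def forward(self, x, t, **kwargs):
        return self.proj(x)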

BibTeX

If this repo is useful to you, please cite our corresponding technical papers.

@article{wang2023videocomposer,
  title={Videocomposer: Compositional Video Synthesis with Motion Controllability},
  author={Wang, Xiang and Yuan, Hangjie and Zhang, Shiwei and Chen, Dayou and Wang, Jiuniu and Zhang, Yingya and Shen, Yujun and Zhao, Deli and Zhou, Jingren},
  journal={NeurIPS},
  volume={36},
  year={2023}
}
@article{2023i2vgenxl,
  title={I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models},
  author={Zhang, Shiwei and Wang, Jiayu and Zhang, Yingya and Zhao, Kang and Yuan, Hangjie and Qing, Zhiwu and Wang, Xiang and Zhao, Deli and Zhou, Jingren},
  journal={arXiv preprint arXiv:2311.04145},
  year={2023}
}
@article{wang2023modelscope,
  title={Modelscope text-to-video technical report},
  author={Wang, Jiuniu and Yuan, Hangjie and Chen, Dayou and Zhang, Yingya and Wang, Xiang and Zhang, Shiwei},
  journal={arXiv preprint arXiv:2308.06571},
  year={2023}
}
@inproceedings{dreamvideo,
  title={DreamVideo: Composing Your Dream Videos with Customized Subject and Motion},
  author={Wei, Yujie and Zhang, Shiwei and Qing, Zhiwu and Yuan, Hangjie and Liu, Zhiheng and Liu, Yu and Zhang, Yingya and Zhou, Jingren and Shan, Hongming},
  booktitle={CVPR},
  year={2024}
}
@inproceedings{higen,
  title={Hierarchical Spatio-temporal Decoupling for Text-to-Video Generation},
  author={Qing, Zhiwu and Zhang, Shiwei and Wang, Jiayu and Wang, Xiang and Wei, Yujie and Zhang, Yingya and Gao, Changxin and Sang, Nong },
  booktitle={CVPR},
  year={2024}
}
@article{wang2023videolcm,
  title={VideoLCM: Video Latent Consistency Model},
  author={Wang, Xiang and Zhang, Shiwei and Zhang, Han and Liu, Yu and Zhang, Yingya and Gao, Changxin and Sang, Nong },
  journal={arXiv preprint arXiv:2312.09109},
  year={2023}
}
@article{ma2023dreamtalk,
  title={DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models},
  author={Ma, Yifeng and Zhang, Shiwei and Wang, Jiayu and Wang, Xiang and Zhang, Yingya and Deng, Zhidong},
  journal={arXiv preprint arXiv:2312.09767},
  year={2023}
}
@inproceedings{InstructVideo,
  title={InstructVideo: Instructing Video Diffusion Models with Human Feedback},
  author={Yuan, Hangjie and Zhang, Shiwei and Wang, Xiang and Wei, Yujie and Feng, Tao and Pan, Yining and Zhang, Yingya and Liu, Ziwei and Albanie, Samuel and Ni, Dong},
  booktitle={CVPR},
  year={2024}
}
@inproceedings{TFT2V,
  title={A Recipe for Scaling up Text-to-Video Generation with Text-free Videos},
  author={Wang, Xiang and Zhang, Shiwei and Yuan, Hangjie and Qing, Zhiwu and Gong, Biao and Zhang, Yingya and Shen, Yujun and Gao, Changxin and Sang, Nong},
  booktitle={CVPR},
  year={2024}
}

Acknowledgement

We would like to express our gratitude for the contributions of several previous works to the development of VGen. This includes, but is not limited to, Composer, ModelScopeT2V, Stable Diffusion, OpenCLIP, WebVid-10M, LAION-400M, Pidinet, and MiDaS. We are committed to building upon these foundations in a way that respects their original contributions.

Disclaimer

This open-source model is trained using the WebVid-10M and LAION-400M datasets and is intended for RESEARCH/NON-COMMERCIAL USE ONLY.

vgen's People

Contributors

chenxwh, dailingx, jacobyuan7, phi-line, qinzhi-0110, steven-swzhang, wangxiang1230, weilllllls


vgen's Issues

Issue with video flickering and suddenly turning gray

Hi, thank you very much for your amazing work. When I used test_list_for_i2vgen.txt to test the model, I found that it occasionally produced a video like this (it starts to flicker and finally turns into solid gray), which looks very abnormal. I want to confirm whether you have encountered this kind of problem. Can you help me?

img_0008_01_00_A_dog_in_a_suit_and_tie_faces_the_camera_17.mp4

Cannot install requirements

ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'

Edit: The confusion came from the way the installation steps were listed without specifying their order; you need to clone the GitHub project before installing the requirements.

Where is 'workspace/model_bk/model_scope_0267000.pth' in the configs/t2v_train.yaml?

Pretrain: { 'type': pretrain_specific_strategies, 'fix_weight': False, 'grad_scale': 0.5, 'resume_checkpoint': 'workspace/model_bk/model_scope_0267000.pth', 'sd_keys_path': 'models/stable_diffusion_image_key_temporal_attention_x1.json', }

The Pretrain term has a resume checkpoint. But I can't find the model weight anywhere. Is it this one?
https://modelscope.cn/models/damo/text-to-video-synthesis/files/text2video_pytorch_model.pth

cuda out of memory

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 880.00 MiB (GPU 0; 23.70 GiB total capacity; 20.08 GiB already allocated; 602.56 MiB free; 21.64 GiB reserved in total by PyTorch)

I am using a 3090 with 24 GB of memory, so how much memory does it need?

T2V weights

Hello, thank you for your work. Where can I find the model that can generate the following video results? The original Modelscope-T2V weights should not be able to produce such good results.

Video generation issue

Hi, hello.
I am running this code on a server with Ubuntu 20.01 and an A100. The code reports that it has finished running and generated the video, but I cannot find the generated video in the workspace.

How to choose between T2V and I2V?

Thank you for your excellent work! It appears that T2V and I2V are two different works: T2V originates from Modelscope-T2V, while I2V is from I2VGen-XL. We would like to know which one, T2V or I2V, is more suitable for our video fine-tuning training. If we only have a few tens of thousands of 2K text-video pairs, which model has stronger generalization capabilities? We aim to achieve better results in generating video content from textual descriptions. We look forward to your response and greatly appreciate it!

About the training dataset

Regarding the training dataset, would you mind me asking how you collected 35 million single-shot text-video pairs from the long videos in public datasets? My initial understanding was that the labels for long videos may not be suitable for short video clips. Many thanks.

problems about i2v demo

Thank you for your efforts on this great work. When I run the provided i2v demo, I get this video and I don't know why; could you help me?
Here are the log and result:
[2023-12-19 10:43:15,802] INFO: Going into it2v_fullid_img_text inference on 0 gpu
[2023-12-19 10:43:15,817] INFO: Loading ViT-H-14 model config.
[2023-12-19 10:43:26,889] INFO: Loading pretrained ViT-H-14 weights (models/open_clip_pytorch_model.bin).
[2023-12-19 10:44:24,635] INFO: Restored from models/v2-1_512-ema-pruned.ckpt
[2023-12-19 10:44:54,441] INFO: Load model from models/i2vgen_xl_00854500.pth with status
[2023-12-19 10:44:55,889] INFO: There are 3 videos. with 4 times
[2023-12-19 10:44:55,889] INFO: Skip ## To test our images, it is recommended to run one data point at a time (i.e., uncommenting only one line at a time), which should reproduce our results.
[2023-12-19 10:44:55,889] INFO: Skip ## To test our images, it is recommended to run one data point at a time (i.e., uncommenting only one line at a time), which should reproduce our results.
[2023-12-19 10:44:55,889] INFO: Skip ## To test our images, it is recommended to run one data point at a time (i.e., uncommenting only one line at a time), which should reproduce our results.
[2023-12-19 10:44:55,889] INFO: Skip ## To test our images, it is recommended to run one data point at a time (i.e., uncommenting only one line at a time), which should reproduce our results.
[2023-12-19 10:44:55,893] INFO: Skip #data/test_images/img_0001.jpg|||A green frog floats on the surface of the water on green lotus leaves, with several pink lotus flowers, in a Chinese painting style.
[2023-12-19 10:44:55,893] INFO: Skip #data/test_images/img_0001.jpg|||A green frog floats on the surface of the water on green lotus leaves, with several pink lotus flowers, in a Chinese painting style.
[2023-12-19 10:44:55,893] INFO: Skip #data/test_images/img_0001.jpg|||A green frog floats on the surface of the water on green lotus leaves, with several pink lotus flowers, in a Chinese painting style.
[2023-12-19 10:44:55,898] INFO: Skip #data/test_images/img_0001.jpg|||A green frog floats on the surface of the water on green lotus leaves, with several pink lotus flowers, in a Chinese painting style.
[2023-12-19 10:44:55,898] INFO: [8]/[3] Begin to sample data/test_images/img_0002.png|||A blonde girl in jeans ...
[2023-12-19 10:44:58,454] INFO: GPU Memory used 16.52 GB
[2023-12-19 10:49:32,112] INFO: Save video to dir workspace/experiments/test_list_for_i2vgen/img_0002_01_00_A_blonde_girl_in_jeans_08.mp4:
[2023-12-19 10:49:32,112] INFO: [9]/[3] Begin to sample data/test_images/img_0002.png|||A blonde girl in jeans ...
[2023-12-19 10:49:32,746] INFO: GPU Memory used 31.60 GB
[2023-12-19 10:53:58,297] INFO: Save video to dir workspace/experiments/test_list_for_i2vgen/img_0002_01_00_A_blonde_girl_in_jeans_09.mp4:
[2023-12-19 10:53:58,297] INFO: [10]/[3] Begin to sample data/test_images/img_0002.png|||A blonde girl in jeans ...
[2023-12-19 10:53:59,041] INFO: GPU Memory used 31.73 GB
[2023-12-19 10:58:26,552] INFO: Save video to dir workspace/experiments/test_list_for_i2vgen/img_0002_01_00_A_blonde_girl_in_jeans_10.mp4:
[2023-12-19 10:58:26,553] INFO: [11]/[3] Begin to sample data/test_images/img_0002.png|||A blonde girl in jeans ...
[2023-12-19 10:58:27,266] INFO: GPU Memory used 31.73 GB
[2023-12-19 11:03:03,906] INFO: Save video to dir workspace/experiments/test_list_for_i2vgen/img_0002_01_00_A_blonde_girl_in_jeans_11.mp4:
[2023-12-19 11:03:03,907] INFO: Congratulations! The inference is completed!


I2V architecture

Great work team.
I have a few questions:

  1. In the diagram, it should be VLDM instead of LDM, right?
  2. In the base stage, how does the LDM generate video from the input image? Generally an LDM uses a 2D U-Net, which is only capable of generating images, right? And if it is a VLDM that uses a 3D U-Net, then the input should be multiple frames of noise images, right?
  3. In the refinement stage, do we apply the diffusion and denoising process to each frame? Here we also use an LDM with 2D convolution operations, but for temporal coherence we need 3D convolutions, right?

I think I am missing something; can you please help me here?
Thanks a lot in advance.

error in inference process (no valid convolution algorithms available in CuDNN)

One error appear during the inference process, as follows:

-- Process 7 terminated with the following error:
Traceback (most recent call last):
  File "/root/miniconda3/envs/vgen/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/mmu-ocr/weijiawu/MovieDiffusion/i2vgen-xl/tools/inferences/inference_i2vgen_entrance.py", line 171, in worker
    y_visual, y_text, y_words = clip_encoder(image=image_tensor, text=captions)
  File "/root/miniconda3/envs/vgen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mmu-ocr/weijiawu/MovieDiffusion/i2vgen-xl/tools/modules/clip_embedder.py", line 185, in forward
    xi = self.model.encode_image(image.to(self.device)) if image is not None else None
  File "/root/miniconda3/envs/vgen/lib/python3.8/site-packages/open_clip/model.py", line 547, in encode_image
    return self.visual(image)
  File "/root/miniconda3/envs/vgen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/envs/vgen/lib/python3.8/site-packages/open_clip/model.py", line 394, in forward
    x = self.conv1(x)  # shape = [*, width, grid, grid]
  File "/root/miniconda3/envs/vgen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/envs/vgen/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/root/miniconda3/envs/vgen/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: no valid convolution algorithms available in CuDNN

Does anyone know how to fix it? thks~

Optimizing the memory for lower-end GPUs

Hi there,

Have you tested on lower-end gpus for the model? I tried the code and it doesn't seem to work on A6000, not to mention on A10, etc. Is there any plan to optimize the model for the lower-memory gpus?

how to reduce dynamicity?

My inputs consist of pictures of people.
The results contain melting people.
I think it's because the hyperparameters related to dynamicity are too high.
How can I reduce the dynamicity?

flash-attn install failed

RuntimeError:
      The detected CUDA version (12.2) mismatches the version that was used to compile
      PyTorch (11.3). Please make sure to use the same CUDA versions.

      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for flash-attn
  Running setup.py clean for flash-attn
  Building wheel for future (setup.py) ... done
  Created wheel for future: filename=future-0.18.3-py3-none-any.whl size=492024 sha256=1db8ebe22124761ed948511526efe5885139151d45616f670daa921e552c3afe
  Stored in directory: /home/ubuntu/.cache/pip/wheels/a0/0b/ee/e6994fadb42c1354dcccb139b0bf2795271bddfe6253ccdf11
Successfully built easydict fairscale future
Failed to build flash-attn
ERROR: Could not build wheels for flash-attn, which is required to install pyproject.toml-based projects

The torch version in the requirements.txt file may need to be adjusted.

Enhancement Suggestion: Integration of Real-time Feedback for Video Synthesis Customisation

Dear VGen Contributors,

I hope this message finds you well. I am reaching out to propose an enhancement to the VGen codebase that could significantly augment the user experience and functionality of the video synthesis process.

Issue Description:
Currently, the VGen framework offers an impressive array of capabilities for video generation, including the synthesis of videos from text, images, and various feedback signals. However, one area that could be further developed is the integration of real-time feedback into the video synthesis pipeline.

Proposed Enhancement:
I suggest the implementation of a feature that allows users to provide real-time feedback during the video synthesis process. This could enable users to make on-the-fly adjustments to the generated content, such as tweaking the subject's appearance, modifying the motion trajectory, or altering the video's narrative flow.

Potential Benefits:

  • Increased Customisation: Users would have greater control over the final output, ensuring that the generated videos align more closely with their creative vision.
  • Enhanced Interactivity: By allowing real-time adjustments, the tool becomes more interactive and engaging, which could be particularly beneficial for artists and content creators.
  • Iterative Improvement: Real-time feedback could facilitate a more iterative creative process, where users can refine their videos without starting from scratch.

Implementation Considerations:

  • A user interface that allows for the input of real-time feedback without overwhelming the user.
  • Efficient algorithms that can quickly incorporate feedback and update the video synthesis in near real-time.
  • Adequate testing to ensure that the feature is robust and user-friendly.

I believe this enhancement could be a valuable addition to the VGen project, fostering a more dynamic and user-centric approach to video generation. I would be delighted to discuss this further and contribute to the development of this feature.

Thank you for considering my suggestion. I look forward to your thoughts and feedback.

Best regards,
yihong1120

t2v inference

Hi, thanks for sharing the code and model.

I am trying to do some t2v inference with this codebase. I downloaded the t2v model text2video_pytorch_model.pth from modelscope and modified the yaml config. Then I run python inference.py --cfg configs/t2v_infer.yaml, but the results seem to be abnormal.

Is this model incompatible with the current codebase? If so, could you please give me a link to the right t2v model?

Thank you.

xformers install error

Running on Windows 11

Attempting requirements.txt install returns:

  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/fd/20/da92c5ee5d20cb34e35a630ecf42a6dcd22523d5cb5adb56a0ffe8d03cfa/xformers-0.0.13.tar.gz (292 kB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [8 lines of output]
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "C:\Users\Administrator\AppData\Local\Temp\pip-install-ogjjnmtc\xformers_a9585bbd2bdc42d49f9afc885802555d\setup.py", line 239, in <module>
          ext_modules=get_extensions(),
        File "C:\Users\Administrator\AppData\Local\Temp\pip-install-ogjjnmtc\xformers_a9585bbd2bdc42d49f9afc885802555d\setup.py", line 157, in get_extensions
          raise RuntimeError(
      RuntimeError: CUTLASS submodule not found. Did you forget to run `git submodule update --init --recursive` ?
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

Error persists even after running git submodule update --init --recursive in the directory.

When will the InstructVideo code be released?

Also, could you please tell me where the DRaFT code you used for comparison comes from? I did not find any released DRaFT code. Thank you!

The released I2VGen-XL model seems like a single stage model

Hi,

Thanks for sharing the code for I2VGen-XL model.

From a closer inspection of the unet_i2vgen.py file, it seems to me that the released I2VGen-XL model is a single-stage model that takes in both image and text at the same time, without performing the two-stage processing claimed in the paper. Is this the case?

Thanks!

question about watermark

Will having watermarks in the dataset affect the training results?
What is the way to remove the watermark from the videos?

Thanks in advance.

Running inference on a machine without a GPU

Are there steps for being able to run inference without having a GPU / CUDA functionality?

I was able to install the CPU versions of PyTorch, torchvision, and torchaudio but am getting errors with flash-attn==0.2.0

first frame of i2v

Hi, Congrats on this great work.

I noticed that the first frame from the I2V model is sometimes not consistent with the condition image.

So I am curious about the definition of the I2V task:

Is the model trained to generate the next N frames given the first image, or to generate the conditional first frame and the next N-1 frames?

Training cost.

Congrats on such awesome work. It's nice to see the tech report. May I ask about the estimated training cost of the whole framework?

(PACKED_PER_VAL=)

I followed the exact instructions in the github, and get this error when trying to run the inference script:


(vgen) ╭─arthur at aquarelle in ~/dev/ai/i2vgen-xl on main✘✘✘ 24-01-05 - 23:41:31
╰─(vgen) ⠠⠵ python inference.py --cfg configs/t2v_infer.yaml                   on main|…5
/home/arthur/.pyenv/versions/3.7.17/lib/python3.7/site-packages/torch/cuda/__init__.py:83: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 803: system has unsupported display driver / cuda driver combination (Triggered internally at  ../c10/cuda/CUDAFunctions.cpp:109.)
  return torch._C._cuda_getDeviceCount() > 0
Traceback (most recent call last):
  File "inference.py", line 14, in <module>
    from tools import *
  File "/home/arthur/dev/ai/i2vgen-xl/tools/__init__.py", line 3, in <module>
    from .modules import *
  File "/home/arthur/dev/ai/i2vgen-xl/tools/modules/__init__.py", line 5, in <module>
    from .unet import *
  File "/home/arthur/dev/ai/i2vgen-xl/tools/modules/unet/__init__.py", line 1, in <module>
    from .unet_i2vgen import *
  File "/home/arthur/dev/ai/i2vgen-xl/tools/modules/unet/unet_i2vgen.py", line 4, in <module>
    import xformers.ops
  File "/home/arthur/.pyenv/versions/3.7.17/lib/python3.7/site-packages/xformers/ops/__init__.py", line 8, in <module>
    from .fmha import (
  File "/home/arthur/.pyenv/versions/3.7.17/lib/python3.7/site-packages/xformers/ops/fmha/__init__.py", line 10, in <module>
    from . import attn_bias, cutlass, decoder, flash, small_k, triton, triton_splitk
  File "<fstring>", line 1
    (PACKED_PER_VAL=)
                   ^
SyntaxError: invalid syntax
(vgen) ╭─arthur at aquarelle in ~/dev/ai/i2vgen-xl on main✘✘✘ 24-01-05 - 23:41:36
╰─(vgen) ⠠⠵     

what am I doing wrong?

thanks.

ERROR: No matching distribution found for motion-vector-extractor==1.0.6

Hi, thanks for your nice work!
There is something wrong with the motion-vector-extractor installation.
My environment:
Linux version 5.4.0-162-generic Ubuntu 9.4.0-1ubuntu1~20.04.1
gcc version 8.5.0
cuda11.3-py3.8-pytorch1.12
pip 23.3.1

ERROR: Could not find a version that satisfies the requirement motion-vector-extractor (from versions: none) ERROR: No matching distribution found for motion-vector-extractor

New Text-to-Video from StabilityAI to compare.

I understand that this is a mostly academic project and not a product, but recently StabilityAI released a new image-to-video model. I'd love to see ModelScope's version to compare both projects! Hopefully the release will come soon to go along with StabilityAI's.

snapshot_download failed to finish the download

while running snapshot_download, I got the following message :
FileIntegrityError: File models/temp/tmpsutrl900/open_clip_pytorch_model.bin integrity check failed, the download may be incomplete, please try again.
