An Auto1111 extension implementing ModelScope text2video using only the Auto1111 webui dependencies and downloadable models (so no logins are required anywhere).
8 GB of VRAM should be enough to run on GPU with the low-VRAM VAE enabled at 256x256 (some optimizations are not working properly right now, but we are already getting reports of people rendering 192x192 videos with 4 GB of VRAM). A 24-frame 256x256 video definitely fits into the 11 GB of an NVIDIA GeForce RTX 2080 Ti. We will appreciate any help with this extension, especially pull requests.
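If you are unsure how much VRAM your card has, a quick way to check from the same Python environment the webui uses is via PyTorch (a minimal sketch; assumes a CUDA build of torch is installed):

```python
import torch

# Report the total VRAM of each visible CUDA device.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA device detected")
```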
There is a known issue with ffmpeg stitching: if ffmpeg fails with an error like 'tuple split failed', go to stable-diffusion-webui/outputs/img2img-images/text2video-modelscope and grab the frames from there until it's fixed. You can also stitch them into a video yourself, as sketched below.
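A subprocess call to ffmpeg like the one below should do the stitching (a minimal sketch; the frame filename pattern `%06d.png`, the frame rate, and the exact output subfolder are assumptions, so adjust them to what you actually see in your output folder):

```python
import subprocess
from pathlib import Path

# Assumed location of the dumped frames; point this at your actual run folder.
frames_dir = Path("stable-diffusion-webui/outputs/img2img-images/text2video-modelscope")

# Assumes frames are numbered PNGs like 000000.png, 000001.png, ...
subprocess.run([
    "ffmpeg",
    "-framerate", "24",                 # assumed output frame rate
    "-i", str(frames_dir / "%06d.png"),
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",              # widest player compatibility
    "out.mp4",
], check=True)
```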
Test examples:
Prompt: flowers turning into lava
out.mp4
Prompt: cinematic explosion by greg rutkowski
vid.mp4
Prompt: really attractive anime girl skating, by makoto shinkai, cinematic lighting
gosh.mp4
Download the following files from HuggingFace:
- VQGAN_autoencoder.pth
- configuration.json
- open_clip_pytorch_model.bin
- text2video_pytorch_model.pth
and put them in stable-diffusion-webui/models/ModelScope/t2v. Create those two folders if they are missing. (A scripted way to fetch the files is sketched below.)
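If you prefer to script the download, here is a plain-Python sketch. It assumes the four files live in the `damo-vilab/modelscope-damo-text-to-video-synthesis` model repo on HuggingFace and uses the standard `/resolve/main/` URL pattern; double-check the repo name before running:

```python
import urllib.request
from pathlib import Path

# Assumed HuggingFace model repo holding the four files; verify before running.
REPO = "damo-vilab/modelscope-damo-text-to-video-synthesis"
FILES = [
    "VQGAN_autoencoder.pth",
    "configuration.json",
    "open_clip_pytorch_model.bin",
    "text2video_pytorch_model.pth",
]

target = Path("stable-diffusion-webui/models/ModelScope/t2v")
target.mkdir(parents=True, exist_ok=True)  # creates both folders if missing

for name in FILES:
    url = f"https://huggingface.co/{REPO}/resolve/main/{name}"
    print(f"Downloading {name} ...")
    urllib.request.urlretrieve(url, str(target / name))
```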
HuggingFace space:
https://huggingface.co/spaces/damo-vilab/modelscope-text-to-video-synthesis
The model's PyTorch implementation from ModelScope:
https://github.com/modelscope/modelscope/tree/master/modelscope/models/multi_modal/video_synthesis
Google Colab from the devs:
https://colab.research.google.com/drive/1uW1ZqswkQ9Z9bp5Nbo5z59cAn7I0hE6R?usp=sharing