

License: Apache License 2.0


4diffusion's Introduction

4Diffusion: Multi-view Video Diffusion Model for 4D Generation

| Project Page | Paper |

Official code for 4Diffusion: Multi-view Video Diffusion Model for 4D Generation.

The paper presents 4Diffusion, a novel 4D generation pipeline that produces spatio-temporally consistent 4D content from a monocular video. We design a multi-view video diffusion model, 4DM, to capture multi-view spatio-temporal correlations for multi-view video generation.

Installation Requirements

The code is compatible with Python 3.10.0 and PyTorch 2.0.1. To create an Anaconda environment named 4diffusion with the required dependencies, run:

conda create -n 4diffusion python==3.10.0
conda activate 4diffusion

pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
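
As an optional sanity check, the short Python snippet below (a minimal sketch, not part of the repository) confirms that the expected PyTorch build was installed and that the CUDA 11.8 wheels can see a GPU:

# Optional sanity check: verify the installed versions and GPU visibility.
import torch
import torchvision

print(torch.__version__)          # expected: 2.0.1+cu118
print(torchvision.__version__)    # expected: 0.15.2+cu118
print(torch.cuda.is_available())  # should be True on a CUDA-capable machine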

4D Data

We curate animated 3D shapes from the vast 3D data corpus of Objaverse-1.0. The ids of the curated data are provided in dataset/uid.npy. We will also release the rendered multi-view videos (to be uploaded) for future work.
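
To inspect the curated ids, something like the following sketch should work (the exact array layout is an assumption; allow_pickle is only needed if the ids are stored as Python strings or objects):

import numpy as np

# Load the curated Objaverse-1.0 ids shipped with the repository.
uids = np.load("dataset/uid.npy", allow_pickle=True)
print(len(uids), "curated animated shapes")
print(uids[:5])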

Quickstart

Download pre-trained models

Please download the 4DM and ImageDream model checkpoints and put them under ./ckpts/.
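
A quick, optional check (a sketch; no particular checkpoint file names are assumed) to confirm the directory is in place before running the demos:

from pathlib import Path

# Make sure ./ckpts/ exists and list whatever was downloaded into it.
ckpt_dir = Path("ckpts")
assert ckpt_dir.is_dir(), "Create ./ckpts/ and place the 4DM and ImageDream checkpoints inside."
for ckpt in sorted(ckpt_dir.iterdir()):
    print(ckpt.name)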

Multi-view Video Generation

To generate multi-view videos, run:

bash threestudio/models/imagedream/scripts/demo.sh

Please configure image (the input monocular video path), text (the text prompt), and num_video_frames (the number of frames in the input monocular video) in demo.sh. The results can be found in threestudio/models/imagedream/4dm.

We use rembg to segment the foreground object for 4D generation.

# name denotes the folder's name under threestudio/models/imagedream/4dm
python threestudio/models/imagedream/scripts/remove_bg.py --name yoda
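
For reference, the sketch below shows what this step does for a single frame using the rembg library directly; remove_bg.py applies it to every frame in the folder, and the file names here are hypothetical:

from PIL import Image
from rembg import remove

# Segment the foreground object of one frame; the background becomes transparent.
frame = Image.open("threestudio/models/imagedream/4dm/yoda/0.png")  # hypothetical frame name
rgba = remove(frame)                                                # returns an RGBA PIL image
rgba.save("threestudio/models/imagedream/4dm/yoda/0_rgba.png")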

4D Generation

To generate 4D content from a monocular video, run:

# system.prompt_processor_multi_view.prompt: text prompt
# system.prompt_processor_multi_view.image_path: monocular video path
# data.multi_view.image_path: anchor video path (anchor loss in Sec. 3.3)
# system.prompt_processor_multi_view.image_num: number of frames for training, default: 8
# system.prompt_processor_multi_view.total_num: number of frames of the input monocular video
# data.multi_view.anchor_view_num: anchor view for the anchor loss. 0: 0° azimuth; 1: 90° azimuth; 2: 180° azimuth; 3: 270° azimuth
python launch.py --config ./configs/4diffusion.yaml --train \
                system.prompt_processor_multi_view.prompt='baby yoda in the style of Mormookiee' \
                system.prompt_processor_multi_view.image_path='./threestudio/models/imagedream/assets/yoda/0_rgba.png' \
                data.multi_view.image_path='./threestudio/models/imagedream/4dm/yoda' \
                system.prompt_processor_multi_view.image_num=8 \
                system.prompt_processor_multi_view.total_num=25 \
                data.multi_view.anchor_view_num=0

The results can be found in outputs/4diffusion.
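
For intuition on how image_num relates to total_num, the snippet below shows one plausible way to pick 8 training frames evenly from a 25-frame input video; the repository's actual frame-sampling strategy may differ:

import numpy as np

# Evenly spread image_num training frames over a total_num-frame video (illustrative only).
total_num, image_num = 25, 8
frame_ids = np.linspace(0, total_num - 1, image_num).round().astype(int)
print(frame_ids)  # [ 0  3  7 10 14 17 21 24]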

Citing

If you find 4Diffusion helpful, please consider citing:

@article{zhang20244diffusion,
  title={4Diffusion: Multi-view Video Diffusion Model for 4D Generation},
  author={Zhang, Haiyu and Chen, Xinyuan and Wang, Yaohui and Liu, Xihui and Wang, Yunhong and Qiao, Yu},
  journal={arXiv preprint arXiv:2405.20674},
  year={2024}
}

Credits

This code is built on the threestudio-project, 4D-fy, and ImageDream. Thanks to the maintainers for their contribution to the community!


4diffusion's Issues

FileNotFoundError

When trying to generate 4D content from a monocular video, I encountered this error:

  File "/userhome/4Diffusion-master/threestudio/models/prompt_processors/base.py", line 346, in configure
    self.load_text_embeddings()
  File "/userhome/4Diffusion-master/threestudio/models/prompt_processors/base.py", line 402, in load_text_embeddings
    self.text_embeddings = self.load_from_cache(self.prompt)[None, ...]
  File "/userhome/4Diffusion-master/threestudio/models/prompt_processors/base.py", line 420, in load_from_cache
    raise FileNotFoundError(
FileNotFoundError: Text embedding file .threestudio_cache/text_embeddings/2c4b12d35cd3ecaf7c28def82abcd4cf.pt for model stabilityai/stable-diffusion-2-1-base and prompt [baby yoda in the style of Mormookiee] not found.

Do you know why this error happened and how I can fix it?
