kxhit / zero123-hf

A diffusers implementation of Zero123. Zero-1-to-3: Zero-shot One Image to 3D Object (ICCV 2023)

Home Page: https://zero123.cs.columbia.edu/

License: MIT License

Python 100.00%
accelerator diffusers image-to-3d novel-view-synthesis single-view-reconstruction stable-diffusion zero-shot

zero123-hf's Introduction

Zero-1-to-3: Zero-shot One Image to 3D Object

A HuggingFace Diffusers implementation of Zero123.

This implementation has been merged into the Diffusers repo.

Updates

Usage

PyTorch 2.0 is recommended for faster training and inference.

conda env create -f environment.yml

or

conda create -n zero123-hf python=3.9
conda activate zero123-hf
pip install -r requirements.txt

Install xformers properly to enable memory-efficient attention.

conda install xformers -c xformers
# from source
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers

Run the diffusers pipeline demo:

python test_zero1to3.py

Run our gradio demo for novel view synthesis:

python gradio_new.py
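Under the hood, the novel view is specified as a relative camera offset. In the original Zero123 code, the (Δelevation, Δazimuth, Δradius) inputs from the demo UI are packed into a small conditioning vector before being fed to the UNet. A minimal sketch of that encoding (the function name is ours, and the exact tensor construction in gradio_new.py may differ):

```python
import math

def pose_embedding(d_elev_deg: float, d_azim_deg: float, d_radius: float):
    """Relative-pose conditioning used by Zero123:
    [delta_elevation (rad), sin(delta_azimuth), cos(delta_azimuth), delta_radius]."""
    return [
        math.radians(d_elev_deg),
        math.sin(math.radians(d_azim_deg)),
        math.cos(math.radians(d_azim_deg)),
        d_radius,
    ]

# e.g. orbit the camera 90 degrees around the object, same elevation and radius
emb = pose_embedding(0.0, 90.0, 0.0)
print(emb)  # [0.0, 1.0, ~0.0, 0.0]
```

Encoding the azimuth as sin/cos keeps the conditioning continuous across the 0°/360° wrap-around.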

Training

Download Zero123's Objaverse Renderings data:

wget https://tri-ml-public.s3.amazonaws.com/datasets/views_release.tar.gz
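The tarball unpacks into one folder per Objaverse object, with each rendered view stored as a paired image and camera-pose file. A minimal sketch of iterating that layout, assuming NNN.png/NNN.npy pairs as in the original Zero123 dataloader (the tiny fake tree below is only for illustration):

```python
import os
import tempfile

# Fake a tiny slice of the (assumed) views_release layout:
# one folder per object, each view stored as NNN.png plus NNN.npy (camera pose).
root = tempfile.mkdtemp()
obj = os.path.join(root, "objaverse_uid_0")  # real folder names are Objaverse UIDs
os.makedirs(obj)
for i in range(12):  # Zero123 renders a fixed number of views per object
    open(os.path.join(obj, f"{i:03d}.png"), "w").close()
    open(os.path.join(obj, f"{i:03d}.npy"), "w").close()

def list_view_pairs(obj_dir):
    """Return (image, pose) path pairs for every rendered view of one object."""
    stems = sorted(f[:-4] for f in os.listdir(obj_dir) if f.endswith(".png"))
    return [(os.path.join(obj_dir, s + ".png"),
             os.path.join(obj_dir, s + ".npy")) for s in stems]

pairs = list_view_pairs(obj)
print(len(pairs))  # 12
```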

Configure accelerate by running:

accelerate config
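`accelerate config` asks its questions interactively and writes the answers to ~/.cache/huggingface/accelerate/default_config.yaml. For reference, a hypothetical single-node multi-GPU config might look like this (values are illustrative, not the authors' setup):

```yaml
# Example answers for a single-node multi-GPU run (illustrative values)
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
num_machines: 1
num_processes: 8          # one per GPU
mixed_precision: "no"     # fp32, matching the training command in this README
```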

Launch training:

Following the original Zero123, fp32 precision, gradient checkpointing, and EMA are all turned on:

accelerate launch train_zero1to3.py \
--train_data_dir /data/zero123/views_release \
--pretrained_model_name_or_path lambdalabs/sd-image-variations-diffusers \
--train_batch_size 192 \
--dataloader_num_workers 16 \
--output_dir logs \
--use_ema \
--gradient_checkpointing \
--mixed_precision no
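--use_ema keeps an exponentially moving average of the UNet weights, which is what you typically export for inference. The core update is a one-liner; a toy sketch (the decay value is illustrative, and diffusers' EMAModel additionally warms the decay up over early steps):

```python
def ema_update(ema_params, params, decay=0.9999):
    """One EMA step: ema <- decay * ema + (1 - decay) * current weights."""
    return [decay * e + (1.0 - decay) * p for e, p in zip(ema_params, params)]

ema = [0.0]
for _ in range(10_000):
    ema = ema_update(ema, [1.0])  # pretend the trained weight sits at 1.0
print(ema[0])  # drifts toward 1.0 only slowly: ~0.63 after 10k steps
```

The high decay means the EMA weights lag the raw weights by thousands of steps, smoothing out optimization noise.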

bf16/fp16 mixed precision is also supported:

accelerate launch train_zero1to3.py \
--train_data_dir /data/zero123/views_release \
--pretrained_model_name_or_path lambdalabs/sd-image-variations-diffusers \
--train_batch_size 192 \
--dataloader_num_workers 16 \
--output_dir logs \
--use_ema \
--gradient_checkpointing \
--mixed_precision bf16
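Of the two reduced-precision options, bf16 is generally the safer: it keeps fp32's 8-bit exponent, whereas fp16's largest finite value is 65504, so large gradients can overflow to inf unless loss scaling is used. A quick check of the two limits (constants follow the IEEE 754 half and bfloat16 formats):

```python
FP16_MAX = 65504.0          # largest finite float16 value
BF16_MAX = 3.38953139e38    # largest finite bfloat16 value (fp32-like exponent range)

grad = 1.0e5  # a plausible un-scaled gradient magnitude
print(grad > FP16_MAX)  # True  -> would overflow to inf in fp16
print(grad > BF16_MAX)  # False -> representable in bf16
```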

For monitoring training progress, we recommend wandb for its simplicity and powerful features.

wandb login

Acknowledgement

This repository is based on the original Zero-1-to-3 and the popular HuggingFace diffusion framework diffusers.

Citation

If you find this work useful, please cite:

@misc{zero123-hf,
    Author = {Xin Kong},
    Year = {2023},
    Note = {https://github.com/kxhit/zero123-hf},
    Title = {Zero123-hf: a diffusers implementation of zero123}
}

@misc{liu2023zero1to3,
      title={Zero-1-to-3: Zero-shot One Image to 3D Object}, 
      author={Ruoshi Liu and Rundi Wu and Basile Van Hoorick and Pavel Tokmakov and Sergey Zakharov and Carl Vondrick},
      year={2023},
      eprint={2303.11328},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}


zero123-hf's Issues

Training and environment files

Hi author, thank you for your excellent work. I found environment.yml and the training code train_zero1to3.py mentioned in the README, but they are unavailable now. Will they be available later?

How to convert a trained checkpoint back to the zero123-xl.ckpt-like format?

After training for args.checkpointing_steps steps, two pytorch_model.bin files, an optimizer.bin, a scheduler.bin, and a random_states.pkl are saved in the log dir. How can these checkpoints be converted back into the zero123-xl.ckpt-like format to produce 3D shapes? How can the checkpoints be made usable by stable-dreamfusion or threestudio?
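(For orientation: the diffusers-to-LDM direction amounts to renaming tensors and packing all modules back into one state_dict under LDM's prefixes. The per-tensor renames are intricate and handled by the conversion scripts, but the top-level packing, sketched here with dummy one-entry state dicts, looks roughly like this; the prefix names follow LDM conventions and should be checked against the real script:)

```python
# Illustration only: a real conversion also renames tensors *inside* each module
# (see diffusers' conversion scripts); this shows just the top-level packing of
# separate diffusers modules into one LDM-style checkpoint dict.
def pack_ldm_checkpoint(unet_sd, vae_sd, cond_encoder_sd):
    state_dict = {}
    for k, v in unet_sd.items():
        state_dict["model.diffusion_model." + k] = v   # UNet
    for k, v in vae_sd.items():
        state_dict["first_stage_model." + k] = v       # VAE
    for k, v in cond_encoder_sd.items():
        state_dict["cond_stage_model." + k] = v        # conditioning encoder
    return {"state_dict": state_dict}

ckpt = pack_ldm_checkpoint({"conv_in.weight": 1}, {"encoder.w": 2}, {"proj.w": 3})
print(sorted(ckpt["state_dict"]))
```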

Windows 10: when I call gradio_new.py, nothing happens for a long time

Thank you for such a great job! But I had some issues using your code.
The model selected was kxic/zero123-105000, and when I used gradio_new.py to generate a new view, I found that the program did not work properly. But when I used test_zero1to3.py, it worked.

My environment was created exactly as in README.md (since I found that Windows couldn't use xformers, I didn't install it, and I turned it off everywhere in the code). Here is my experimental environment configuration:

  • windows10
  • cuda 11.3

Here is a screenshot of what I got using gradio_new.py:

[screenshot of the Gradio interface]

The displayed results in CMD are as follows

(zero123-hf-new) PS J:\Research_Program\zero123-hf> python .\gradio_new.py
sys.argv:
['.\\gradio_new.py']
Instantiating LatentDiffusion...
cc_projection\diffusion_pytorch_model.safetensors not found
Instantiating Carvekit HiInterface...
Instantiating StableDiffusionSafetyChecker...
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
Instantiating AutoFeatureExtractor...
D:\ProgramData\Miniconda3\envs\zero123-hf-new\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
D:\ProgramData\Miniconda3\envs\zero123-hf-new\lib\site-packages\gradio\routes.py:559: DeprecationWarning:
        on_event is deprecated, use lifespan event handlers instead.

        Read more about it in the
        [FastAPI docs for Lifespan Events](https://fastapi.tiangolo.com/advanced/events/).

  @app.on_event("startup")
D:\ProgramData\Miniconda3\envs\zero123-hf-new\lib\site-packages\fastapi\applications.py:4495: DeprecationWarning:
        on_event is deprecated, use lifespan event handlers instead.

        Read more about it in the
        [FastAPI docs for Lifespan Events](https://fastapi.tiangolo.com/advanced/events/).

  return self.router.on_event(event_type)
D:\ProgramData\Miniconda3\envs\zero123-hf-new\lib\site-packages\gradio\blocks.py:1381: DeprecationWarning: The `enable_queue` parameter has been deprecated. Please use the `.queue()` method instead.
  warnings.warn(
[the same gradio/fastapi on_event deprecation warnings repeat twice more]
Running on local URL:  http://127.0.0.1:8888
D:\ProgramData\Miniconda3\envs\zero123-hf-new\lib\site-packages\starlette\templating.py:178: DeprecationWarning: The `name` is not the first parameter anymore. The first parameter should be the `Request` instance.
Replace `TemplateResponse(name, {"request": request})` by `TemplateResponse(request, name)`.
  warnings.warn(
D:\ProgramData\Miniconda3\envs\zero123-hf-new\lib\site-packages\pydantic\main.py:1024: PydanticDeprecatedSince20: The `dict` method is deprecated; use `model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.6/migration/
  warnings.warn('The `dict` method is deprecated; use `model_dump` instead.', category=PydanticDeprecatedSince20)

Executing the process using test_zero1to3.py, it worked:

(zero123-hf-new) PS J:\Research_Program\zero123-hf> python .\test_zero1to3.py
vae\diffusion_pytorch_model.safetensors not found
D:\ProgramData\Miniconda3\envs\zero123-hf-new\lib\site-packages\gradio\routes.py:25: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
  import pkg_resources
Instantiating Carvekit HiInterface...
old input_im:
(256, 256)
Infer foreground mask (preprocess_image) took 3.782s.
new input_im: array[256, 256, 3] f32 n=196608 (0.8Mb) x∈[0.043, 1.000] μ=0.943 σ=0.154
100%|██████████| 50/50 [00:10<00:00,  4.85it/s]

Is DDPMScheduler or DDIMScheduler used?

Hi,

thanks for the great implementation.

I have a small question regarding the scheduler used in training. In the training script, the DDPMScheduler is used as here:
https://github.com/kxhit/zero123-hf/blob/master/train_zero1to3.py#L571

However, in the huggingface checkpoint, it says the DDIMScheduler is used: https://huggingface.co/kxic/zero123-xl/blob/main/scheduler/scheduler_config.json.

I wonder if you could confirm which scheduler is used in training and inference? Also, with the huggingface library, if I do DDPMScheduler.from_pretrained('kxic/zero123-xl', subfolder="scheduler"), will the DDIM scheduler stored in the checkpoint still be loaded?
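(For what it's worth, diffusers' from_pretrained on a scheduler is a classmethod: it instantiates the class you call it on using the saved config's hyperparameters, and the stored _class_name at most triggers a warning on mismatch. A toy analogy of that behavior, with made-up class names:)

```python
class SchedulerBase:
    def __init__(self, num_train_timesteps=1000, beta_schedule="linear"):
        self.num_train_timesteps = num_train_timesteps
        self.beta_schedule = beta_schedule

    @classmethod
    def from_config(cls, config):
        # like diffusers: build *cls*, not the class named in the config
        kwargs = {k: v for k, v in config.items() if not k.startswith("_")}
        return cls(**kwargs)

class DDPMLike(SchedulerBase): pass
class DDIMLike(SchedulerBase): pass

saved = {"_class_name": "DDIMScheduler", "num_train_timesteps": 1000,
         "beta_schedule": "scaled_linear"}
sched = DDPMLike.from_config(saved)
print(type(sched).__name__)  # DDPMLike: DDPM behavior, DDIM config's hyperparameters
```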

Modifying unet architecture

Hi! thank you for sharing the work.

I would appreciate if you could give some advice on how to modify zero123's unet architecture.

I want to add a custom layer to zero123's unet module. I think it would be convenient if I could use diffusers library.

I see that you load a unet model by using from_pretrained.
Suppose I did load a UNet model. How can I add a custom layer to this loaded UNet model? Do I have to write a function that does this, or is there an easier way, like using diffusers library functions?

Basically, I want to make a custom unet model based off of a loaded unet model using from_pretrained.

FYI, this author wrote this function to use one's own unet model.
https://github.com/facebookresearch/ViewDiff/blob/7df36e54728f760a119dac91a1fad01fab0a71aa/viewdiff/model/custom_unet_2d_condition.py#L809

Is that a typical way to achieve my goal?

Thanks!
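(One lightweight pattern that avoids copying the whole UNet class is to wrap the loaded model and splice the custom layer into its forward pass. A toy sketch with stand-in classes; with diffusers you would wrap the real UNet2DConditionModel loaded via from_pretrained the same way, forwarding its cross-attention arguments through the wrapper:)

```python
# Toy stand-ins for UNet2DConditionModel / from_pretrained, to show the pattern only.
class BaseUNet:
    def __call__(self, x):
        return x * 2  # pretend pretrained forward pass

class WrappedUNet:
    """Wrap a loaded model and insert a custom layer around its forward pass."""
    def __init__(self, base, custom_layer):
        self.base = base                # pretrained weights, untouched
        self.custom_layer = custom_layer  # new trainable layer

    def __call__(self, x):
        h = self.base(x)
        return self.custom_layer(h)

unet = WrappedUNet(BaseUNet(), custom_layer=lambda h: h + 1)
print(unet(3))  # 7
```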

Problems about fine-tuning from zero123-xl.ckpt

Noticing that the code for loading the scheduler and models uses args.pretrained_model_name_or_path to fetch the DDPM, CLIP, and so on from huggingface.co/models: how could zero123-hf fine-tune a model from a ckpt provided by Zero123, like 105000.ckpt, 165000.ckpt, or zero123-xl.ckpt?

Question regarding the gradient accumulation

Hi, thanks for your implementation.

I noticed you accumulated gradients of two models with two different context managers (here). Could you let me know if you verified your implementation with a gradient accumulation step different from 1? Apparently, this approach can be erroneous according to this and follow-up comments. I believe the newer version of hf's accelerate already allows the context manager to receive multiple models, as in here.
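(Whatever the context-manager arrangement, the invariant that gradient accumulation must preserve is that the weighted sum of micro-batch gradients equals the full-batch gradient. A quick numeric check of that invariant for a mean-squared-error loss:)

```python
def grad_mse(w, xs, ys):
    """d/dw of mean((w*x - y)^2) over a batch."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

xs = [1.0, 2.0, 3.0, 4.0]; ys = [2.0, 4.0, 6.0, 8.0]; w = 0.5
full = grad_mse(w, xs, ys)  # gradient over the whole batch at once

# accumulate over 2 micro-batches, weighting each by its share of the batch
acc = 0.0
for lo in range(0, 4, 2):
    acc += grad_mse(w, xs[lo:lo + 2], ys[lo:lo + 2]) * (2 / 4)

print(abs(full - acc) < 1e-12)  # True: accumulation matches the full-batch gradient
```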

convert_zero123_to_diffusers error

After running diffusers/scripts/convert_zero123_to_diffusers.py, I got a converted model directory. But when I want to use it in diffusers like this, it reports an error:
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained("./model/zero1to3", controlnet=controlnet, torch_dtype=torch.float16, local_files_only=True)

ValueError: Pipeline <class 'diffusers.pipelines.controlnet.pipeline_controlnet_img2img.StableDiffusionControlNetImg2ImgPipeline'> expected {'text_encoder', 'scheduler', 'feature_extractor', 'safety_checker', 'vae', 'controlnet', 'unet', 'tokenizer'}, but only {'scheduler', 'feature_extractor', 'controlnet', 'vae', 'unet'} were passed.
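(The error message itself shows the gap: the converted zero123 folder lacks the text-conditioning components, which is expected, since Zero123 is image-conditioned and ships no text_encoder or tokenizer. The missing set is just the difference the pipeline computes:)

```python
expected = {'text_encoder', 'scheduler', 'feature_extractor', 'safety_checker',
            'vae', 'controlnet', 'unet', 'tokenizer'}
passed = {'scheduler', 'feature_extractor', 'controlnet', 'vae', 'unet'}
missing = expected - passed
print(sorted(missing))  # ['safety_checker', 'text_encoder', 'tokenizer']
```

Loading the converted folder into a text-conditioned ControlNet pipeline therefore cannot work as-is; it needs a pipeline class built for Zero123's image conditioning.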

Any help would be greatly appreciated!

Timelines

Hey!
Thanks for taking up this initiative.
I wanted to know: when will the training script be released publicly?
