ZeroShape

[Project Page](https://zixuanh.com/projects/zeroshape.html) [Paper]

This repository currently includes the demo, training, and evaluation code, as well as the data, for ZeroShape.

Dependencies

If your GPU supports CUDA 10.2, please install the dependencies by running

conda env create --file requirements.yaml

If you need newer CUDA versions, please install the dependencies manually:

conda create -n zeroshape python=3 pytorch::pytorch=1.11 pytorch::torchvision=0.12 cudatoolkit=10.2  # change these versions to match your setup
conda install -c conda-forge tqdm pyyaml pip matplotlib trimesh tensorboard
pip install pyrender opencv-python pymcubes ninja timm

Demo

Please download the pretrained weights for shape reconstruction at this url and place them under the weights folder. We have prepared some images and masks under the examples folder. To reconstruct their shapes, please run:

python demo.py --yaml=options/shape.yaml --task=shape --datadir=examples --eval.vox_res=128 --ckpt=weights/shape.ckpt

The results will be saved under the examples/preds folder. Feel free to try the demo with your own images by putting them into the examples folder. If you do not have masks, consider using external tools such as Rembg.

If you want to estimate the visible surface (depth and intrinsics), please download the pretrained weights for visible surface estimation at this url and place them under the weights folder. Then run:

python demo.py --yaml=options/depth.yaml --task=depth --datadir=examples --ckpt=weights/depth.ckpt

The results will be saved under the examples/preds folder.

Data

Please download our curated training and evaluation data at the following links:

| Data | Link |
| --- | --- |
| Training Data | this url |
| OmniObject3D | this url |
| Ocrtoc | this url |
| Pix3D | this url |

After extracting the data, organize your data folder as follows:

data
├── train_data/
|   ├── objaverse_LVIS/
|   |   ├── images_processed/
|   |   ├── lists/
|   |   ├── ...
|   ├── ShapeNet55/
|   |   ├── images_processed/
|   |   ├── lists/
|   |   ├── ...
├── OmniObject3D/
|   ├── images_processed/
|   ├── lists/
|   ├── ...
├── Ocrtoc/
|   ├── images_processed/
|   ├── lists/
|   ├── ...
├── Pix3D/
|   ├── img_processed/
|   ├── lists/
|   ├── ...
├── ...

Note that you do not have to download all of the data. For example, if you only want to evaluate on one of the data sources, feel free to download and organize only that specific one.
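As a quick check that a downloaded source is organized as shown above, a small helper like the following can be used (this is a hypothetical convenience script, not part of the repo; note that Pix3D uses img_processed while the other sources use images_processed):

```python
from pathlib import Path

# Expected subfolders per evaluation data source, following the layout above.
# Pix3D is the odd one out with "img_processed" instead of "images_processed".
EXPECTED = {
    "OmniObject3D": ["images_processed", "lists"],
    "Ocrtoc": ["images_processed", "lists"],
    "Pix3D": ["img_processed", "lists"],
}

def check_source(data_root, source):
    """Return the expected subfolders that are missing under data_root/source."""
    root = Path(data_root) / source
    return [sub for sub in EXPECTED[source] if not (root / sub).is_dir()]
```

For instance, `check_source("data", "Pix3D")` returns an empty list when that source is in place, and the names of the missing subfolders otherwise.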

Training

The first step of training ZeroShape is to pretrain the depth and intrinsics estimator. If you have downloaded the weights already (see demo), you can skip this step and use our pretrained weights at weights/depth.ckpt. If you want to train everything from scratch yourself, please run

python train.py --yaml=options/depth.yaml --name=run-depth

The visualization and results will be saved at output/depth/run-depth. Once the training is finished, copy the weights from output/depth/run-depth/best.ckpt to weights/depth.ckpt.

To train the full reconstruction model, please run

python train.py --yaml=options/shape.yaml --name=run-shape

The visualization and results will be saved at output/shape/run-shape.

Evaluating

To evaluate the model on a specific test set (omniobj3d|ocrtoc|pix3d), please run

python evaluate.py --yaml=options/shape.yaml --name=run-shape --data.dataset_test=name_of_test_set --eval.vox_res=128 --eval.brute_force --eval.batch_size=1 --resume
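To sweep all three test sets in one go, a small loop over the set names works; the loop below is a convenience sketch (not part of the repo) that prints the command for each set, and dropping the echo runs them:

```shell
# Print the evaluation command for each test set; remove "echo" to execute.
for ds in omniobj3d ocrtoc pix3d; do
  echo python evaluate.py --yaml=options/shape.yaml --name=run-shape \
    --data.dataset_test="$ds" --eval.vox_res=128 --eval.brute_force \
    --eval.batch_size=1 --resume
done
```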

The evaluation results will be printed and saved at output/shape/run-shape. If you want to evaluate the checkpoint we provided instead, feel free to create an empty folder output/shape/run-shape and move weights/shape.ckpt to output/shape/run-shape/best.ckpt.
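Assuming the default paths, that amounts to the following (a sketch; adjust the paths if you changed the run name):

```shell
# Reuse the provided checkpoint as if it were a finished training run
mkdir -p output/shape/run-shape
# "|| true" keeps this a no-op if the weights have not been downloaded yet
cp weights/shape.ckpt output/shape/run-shape/best.ckpt 2>/dev/null || true
```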

References

If you find our work helpful, please consider citing our paper.

@article{huang2023zeroshape,
  title={ZeroShape: Regression-based Zero-shot Shape Reconstruction},
  author={Huang, Zixuan and Stojanov, Stefan and Thai, Anh and Jampani, Varun and Rehg, James M},
  journal={arXiv preprint arXiv:2312.14198},
  year={2023}
}
