Dual Octree Graph Networks

This repository contains the implementation of our paper Dual Octree Graph Networks. The experiments were conducted on Ubuntu 18.04 with 4 V100 GPUs (32GB memory each). The code is released under the MIT license.

Dual Octree Graph Networks for Learning Adaptive Volumetric Shape Representations
Peng-Shuai Wang, Yang Liu, and Xin Tong
ACM Transactions on Graphics (SIGGRAPH), 41(4), 2022


1. Installation

  1. Install Conda and create a Conda environment.

    conda create --name dualocnn python=3.7
    conda activate dualocnn
  2. Install PyTorch-1.9.1 with conda according to the official documentation.

    conda install pytorch==1.9.1 torchvision==0.10.1 cudatoolkit=10.2 -c pytorch
  3. Install ocnn-pytorch from O-CNN.

    git clone https://github.com/microsoft/O-CNN.git
    cd O-CNN/pytorch
    pip install -r requirements.txt
    python setup.py install --build_octree
  4. Clone this repository and install other requirements.

    git clone https://github.com/microsoft/DualOctreeGNN.git
    cd DualOctreeGNN
    pip install -r requirements.txt
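After step 4, a quick sanity check can verify that the key packages are importable before moving on to the experiments. This is a minimal sketch; the module names below are simply the packages installed in the steps above:

```python
import importlib.util

def missing_modules(modules=("torch", "torchvision", "ocnn")):
    """Return the subset of required modules that cannot be found."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

missing = missing_modules()
print("missing packages:", missing or "none")
```

If anything is reported missing, revisit the corresponding installation step before running the experiments.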

2. Shape Reconstruction with ShapeNet

2.1 Data Preparation

  1. Download ShapeNetCore.v1.zip (31G) from ShapeNet and place it into the folder data/ShapeNet.

  2. Convert the meshes in ShapeNetCore.v1 to signed distance fields (SDFs).

    python tools/shapenet.py --run convert_mesh_to_sdf

    Note that this process is relatively slow; it may take several days to convert all the meshes from ShapeNet. For simplicity, the script does not use Python multiprocessing. If speed matters, you can manually run multiple Python commands in parallel, each specifying the start and end indices of the meshes to process. An example is shown as follows:

    python tools/shapenet.py --run convert_mesh_to_sdf --start 10000 --end 20000

    ShapeNetCore.v1 contains about 57k meshes. After unzipping, the total size is about 100G, and the generated SDFs and repaired meshes take about 450G and 90G, respectively. Please make sure your hard disk has enough space.

  3. Sample points and ground-truth SDFs for the learning process.

    python tools/shapenet.py --run generate_dataset
  4. If you only want to run the pretrained network, the test point clouds (330M) can be downloaded manually from here. After downloading the zip file, unzip it into the folder data/ShapeNet/test.input.
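The manual parallelization suggested in step 2 can be sketched as follows. This illustrative snippet only prints the commands to launch; the --start/--end flags and the 57k mesh count are taken from the notes above:

```python
def chunk_ranges(total, n_workers):
    """Split [0, total) into n_workers contiguous (start, end) ranges."""
    step = (total + n_workers - 1) // n_workers  # ceiling division
    return [(i, min(i + step, total)) for i in range(0, total, step)]

# ShapeNetCore.v1 has about 57k meshes; split the work across 6 processes.
for start, end in chunk_ranges(57000, 6):
    print(f"python tools/shapenet.py --run convert_mesh_to_sdf "
          f"--start {start} --end {end}")
```

Each printed command can then be launched in its own terminal (or via subprocess) so the conversions run concurrently.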

2.2 Experiment

  1. Train: Run the following command to train the network on 4 GPUs. The training takes 17 hours on 4 V100 GPUs. The trained weights and training log can be downloaded here.

    python dualocnn.py  --config configs/shapenet.yaml SOLVER.gpu 0,1,2,3
  2. Test: Run the following command to generate the extracted meshes. It is also possible to specify other trained weights by replacing the parameter after SOLVER.ckpt.

    python dualocnn.py --config configs/shapenet_eval.yaml  \
           SOLVER.ckpt logs/shapenet/shapenet/checkpoints/00300.model.pth
  3. Evaluate: We use the code of ConvONet to compute the evaluation metrics. Follow the instructions here to reproduce our results in Table 1.

2.3 Generalization

  1. Test: Run the following command to test the trained network on 5 unseen categories of ShapeNet:

    python dualocnn.py --config configs/shapenet_unseen5.yaml  \
           SOLVER.ckpt logs/shapenet/shapenet/checkpoints/00300.model.pth
  2. Evaluate: Follow the instructions here to reproduce our results on the unseen dataset in Table 1.

3. Synthetic Scene Reconstruction

3.1 Data Preparation

Download and unzip the synthetic scene dataset (205G in total) and the data-splitting filelists provided by ConvONet via the following command. If needed, the ground-truth meshes can be downloaded from here (90G).

python tools/room.py --run generate_dataset

3.2 Experiment

  1. Train: Run the following command to train the network on 4 GPUs. The training takes 27 hours on 4 V100 GPUs. The trained weights and training log can be downloaded here.

    python dualocnn.py  --config configs/synthetic_room.yaml SOLVER.gpu 0,1,2,3
  2. Test: Run the following command to generate the extracted meshes.

    python dualocnn.py --config configs/synthetic_room_eval.yaml  \
        SOLVER.ckpt logs/room/room/checkpoints/00900.model.pth
  3. Evaluate: Follow the instructions here to reproduce our results in Table 5.

4. Unsupervised Surface Reconstruction with DFaust

4.1 Data Preparation

  1. Download the DFaust dataset, unzip the raw scans into the folder data/dfaust/scans, and unzip the ground-truth meshes into the folder data/dfaust/mesh_gt. Note that the ground-truth meshes are used only for computing the evaluation metrics and are NOT used in training.

  2. Run the following command to prepare the dataset.

    python tools/dfaust.py --run generate_dataset
  3. For convenience, we also provide the processed dataset for download.

    python tools/dfaust.py --run download_dataset

4.2 Experiment

  1. Train: Run the following command to train the network on 4 GPUs. The training takes 20 hours on 4 V100 GPUs. The trained weights and training log can be downloaded here.

    python dualocnn.py  --config configs/dfaust.yaml SOLVER.gpu 0,1,2,3
  2. Test: Run the following command to generate the meshes with the trained weights.

    python dualocnn.py --config configs/dfaust_eval.yaml  \
           SOLVER.ckpt logs/dfaust/dfaust/checkpoints/00600.model.pth
  3. Evaluate: To compute the evaluation metrics, we first need to rescale the meshes back to their original size, since the point clouds were scaled during the data-processing stage.

    python tools/dfaust.py  \
           --mesh_folder   logs/dfaust_eval/dfaust  \
           --output_folder logs/dfaust_eval/dfaust_rescale  \
           --run rescale_mesh

    Then our results in Table 6 can be reproduced in the file metrics.csv.

    python tools/compute_metrics.py  \
           --mesh_folder logs/dfaust_eval/dfaust_rescale  \
           --filelist data/dfaust/filelist/test.txt  \
           --ref_folder data/dfaust/mesh_gt  \
           --filename_out logs/dfaust_eval/dfaust_rescale/metrics.csv
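For reference, the core of a Chamfer-style point-set metric can be sketched in a few lines. The actual tools/compute_metrics.py may sample points and normalize differently, so treat this only as an illustration of the idea:

```python
def chamfer_distance(a, b):
    """Symmetric (squared) Chamfer distance between two point sets,
    given as iterables of (x, y, z) tuples. Brute force: O(len(a) * len(b))."""
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    d_ab = sum(min(sq_dist(p, q) for q in b) for p in a) / len(a)
    d_ba = sum(min(sq_dist(q, p) for p in a) for q in b) / len(b)
    return d_ab + d_ba

print(chamfer_distance([(0, 0, 0)], [(1, 0, 0)]))
```

In practice the points would be sampled from the predicted and reference meshes, and a KD-tree would replace the brute-force nearest-neighbor search.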

4.3 Generalization

In Figures 1 and 11 of our paper, we test the generalization ability of our network on several out-of-distribution point clouds. Please download the point clouds from here, and place the unzipped data in the folder data/shapes. Then run the following command to reproduce the results:

python dualocnn.py --config configs/shapes.yaml  \
       SOLVER.ckpt logs/dfaust/dfaust/checkpoints/00600.model.pth

5. Autoencoder with ShapeNet

5.1 Data Preparation

Follow the instructions here to prepare the dataset.

5.2 Experiment

  1. Train: Run the following command to train the network on 4 GPUs. The training takes 24 hours on 4 V100 GPUs. The trained weights and training log can be downloaded here.

    python dualocnn.py  --config configs/shapenet_ae.yaml SOLVER.gpu 0,1,2,3
  2. Test: Run the following command to generate the extracted meshes.

    python dualocnn.py --config configs/shapenet_ae_eval.yaml  \
           SOLVER.ckpt logs/shapenet/ae/checkpoints/00300.model.pth
  3. Evaluate: Run the following command to evaluate the predicted meshes. Then our results in Table 7 can be reproduced in the file metrics.4096.csv.

    python tools/compute_metrics.py  \
           --mesh_folder logs/shapenet_eval/ae  \
           --filelist data/ShapeNet/filelist/test_im.txt \
           --ref_folder data/ShapeNet/mesh  \
           --num_samples 4096 \
           --filename_out logs/shapenet_eval/ae/metrics.4096.csv


dualoctreegnn's Issues

Point cloud surface reconstruction of large scenes

Hi @wang-ps,

Thanks for sharing the fantastic work!

We have recently been working on point cloud surface reconstruction of large scenes. I want to ask if DualOctreeGNN can be used for surface reconstruction of point clouds for large scenes like this.


Thank you for your response in advance!

Scale during the evaluation

Hi,
I have run the code

python dualocnn.py --config configs/shapenet_eval.yaml SOLVER.ckpt logs/shapenet/shapenet/checkpoints/00300.model.pth

and get the reconstructed meshes. I find that the meshes are in [-0.4, 0.4]^3. Did you scale them into [-0.5, 0.5]^3 during the evaluation (Table 1)?
Also, can we input a point cloud without noise?
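If the predicted meshes indeed live in [-0.4, 0.4]^3 while the reference meshes are in [-0.5, 0.5]^3, a uniform rescale before evaluation could be sketched as follows. This is a hypothetical illustration operating on plain vertex tuples; in practice one would load and save the mesh with a mesh library such as trimesh:

```python
def rescale_vertices(vertices, src_half=0.4, dst_half=0.5):
    """Uniformly scale vertices from [-src_half, src_half]^3
    to [-dst_half, dst_half]^3 about the origin."""
    factor = dst_half / src_half
    return [tuple(c * factor for c in v) for v in vertices]

corners = [(0.4, -0.4, 0.4)]
print(rescale_vertices(corners))
```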

Scale of sdf?

Hi,

Thanks for your great work! I noticed that the paper says the meshes are processed to be watertight and scaled into the bounding box [-0.5, 0.5]^3. I am wondering what the scale of the computed SDF is. It seems that mesh2sdf.compute() receives a mesh in the bounding box [-1, 1]^3.
So does that mean the SDF is computed in this range, i.e., that the output (128, 128, 128) tensor holds SDF values at grid coordinates in [-1, 1]^3?
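Regarding the question above: assuming the (128, 128, 128) output samples the cube [-1, 1]^3 at the corners of a regular grid (this convention is an assumption, not confirmed by the source), grid indices would map to coordinates as sketched below; dividing by 2 would then bring a coordinate back to the [-0.5, 0.5]^3 mesh frame:

```python
def grid_to_coord(i, n=128, lo=-1.0, hi=1.0):
    """Map grid index i in [0, n-1] to a coordinate in [lo, hi],
    assuming samples lie at the corners of a regular grid."""
    return lo + (hi - lo) * i / (n - 1)

print(grid_to_coord(0), grid_to_coord(127))   # end points of the grid
print(grid_to_coord(64) / 2.0)                # back to the [-0.5, 0.5] frame
```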
