
Deformed Implicit Field: Modeling 3D Shapes with Learned Dense Correspondence

This is an official PyTorch implementation of the following paper:

Y. Deng, J. Yang, and X. Tong, Deformed Implicit Field: Modeling 3D Shapes with Learned Dense Correspondence, IEEE Computer Vision and Pattern Recognition (CVPR), 2021.

Paper | Slides

Abstract: We propose a novel Deformed Implicit Field (DIF) representation for modeling 3D shapes of a category and generating dense correspondences among shapes. With DIF, a 3D shape is represented by a template implicit field shared across the category, together with a 3D deformation field and a correction field dedicated to each shape instance. Shape correspondences can be easily established using their deformation fields. Our neural network, dubbed DIF-Net, jointly learns a shape latent space and these fields for 3D objects belonging to a category without using any correspondence or part label. The learned DIF-Net also provides reliable correspondence uncertainty measurements reflecting shape structure discrepancy. Experiments show that DIF-Net not only produces high-fidelity 3D shapes but also builds high-quality dense correspondences across different shapes. We also demonstrate several applications such as texture transfer and shape editing, where our method achieves compelling results that cannot be achieved by previous methods.

Features

● Surface reconstruction

We achieve results comparable to other implicit shape representations on the surface reconstruction task.

● Dense correspondence reasoning

Our model produces reasonable dense correspondences between shapes in a category.

● Awareness of structure discrepancy

Our model predicts correspondence uncertainty between shapes in a category, which reflects their structure discrepancies.

Installation

  1. Clone the repository and set up a conda environment with all dependencies as follows:
git clone https://github.com/microsoft/DIF-Net.git --recursive
cd DIF-Net
conda env create -f environment.yml
source activate dif
  2. Install torchmeta. Before installation, comment out line 3 in pytorch-meta/torchmeta/datasets/utils.py; otherwise the library cannot be imported correctly. Then run the following script:
cd pytorch-meta
python setup.py install

Generating shapes with pre-trained models

  1. Download the pre-trained models from this link and organize the directory structure as follows:
DIF-Net
│
└─── models
    │
    └─── car
    │   │
    |   └─── checkpoints
    |       |
    |       └─── *.pth
    │
    └─── plane
    │   │
    |   └─── checkpoints
    |       |
    |       └─── *.pth
    ...
  2. Run the following script to generate 3D shapes using a pre-trained model:
# generate 3D shapes of certain subjects in certain category
python generate.py --config=configs/generate/<category>.yml --subject_idx=0,1,2

The script should generate meshes with color-coded template coordinates (in ply format) in the ./recon subfolder. The color of a surface point records the 3D location of its corresponding point in the template space, which encodes dense correspondence information. We recommend using MeshLab to visualize the meshes.
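Assuming the vertex colors linearly encode template-space coordinates over a fixed range (the exact range used by generate.py is an assumption here, and the function name below is hypothetical), the colors can be decoded back into 3D template coordinates with a few lines of NumPy:

```python
import numpy as np

def colors_to_template_coords(rgb, lo=-1.0, hi=1.0):
    """Map 8-bit RGB vertex colors back to template-space coordinates.

    Assumes each channel linearly encodes one coordinate axis over
    [lo, hi]; the actual range used by generate.py may differ.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    return rgb / 255.0 * (hi - lo) + lo

# A vertex colored (127.5, 0, 255) maps to roughly (0, -1, 1).
coords = colors_to_template_coords([[127.5, 0.0, 255.0]])
```

Points that decode to nearly identical template coordinates on two different meshes are, under this encoding, in correspondence.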

Training a model from scratch

Data preparation

We provide our pre-processed evaluation data from ShapeNet-v2 as an example. The data can be downloaded from this link (four categories with 100 shapes each, 7 GB in total). The data contains surface points along with normals, and randomly sampled free-space points with their SDF values. The data should be organized in the following structure:

DIF-Net
│
└─── datasets
    │
    └─── car
    │   │
    |   └─── surface_pts_n_normal
    |   |   |
    |   |   └─── *.mat
    │   |
    |   └─── free_space_pts
    |       |
    |       └─── *.mat    
    |
    └─── plane
    │   │
    |   └─── surface_pts_n_normal
    |   |   |
    |   |   └─── *.mat
    │   |
    |   └─── free_space_pts
    |       |
    |       └─── *.mat    
    ...

To generate the whole training set, we follow mesh_to_sdf provided by marian42 to extract surface points and normals, and to calculate SDF values for ShapeNet meshes. Please follow the repository's instructions to install it.
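As a rough illustration of this data layout, the sketch below builds the two kinds of samples analytically for a unit sphere (where the SDF is simply |x| - 1 and every surface normal equals the surface point itself) and writes them as .mat files. The key names inside the .mat files are hypothetical, so check dataset.py for the keys the loader actually expects:

```python
import numpy as np
from scipy.io import savemat

rng = np.random.default_rng(0)

# Surface samples on a unit sphere: each point's normal is the point itself.
n_surf = 1000
v = rng.normal(size=(n_surf, 3))
surface_pts = v / np.linalg.norm(v, axis=1, keepdims=True)
normals = surface_pts.copy()

# Free-space samples with the analytic SDF of the unit sphere: |x| - 1.
n_free = 1000
free_pts = rng.uniform(-1.5, 1.5, size=(n_free, 3))
sdf = np.linalg.norm(free_pts, axis=1) - 1.0

# The keys 'p' and 'p_sdf' are hypothetical placeholders; the real
# training data may use different variable names inside the .mat files.
savemat('surface.mat', {'p': np.hstack([surface_pts, normals])})
savemat('free.mat', {'p_sdf': np.hstack([free_pts, sdf[:, None]])})
```

For real ShapeNet meshes, mesh_to_sdf replaces the analytic sphere above: its surface sampler yields the surface points and normals, and its near-surface sampler yields the free-space points and SDF values.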

Training networks

Run the following script to train a network from scratch using the pre-processed data:

# train dif-net of certain category
python train.py --config=configs/train/<category>.yml

By default, we train the network with a batch size of 256 for 60 epochs on 8 Tesla V100 GPUs, which takes around 4 hours. Please adjust the batch size according to your own configuration.

Evaluation

To evaluate the trained models, run the following script:

# evaluate dif-net of certain category
python evaluate.py --config=configs/eval/<category>.yml

The script will first embed the test shapes of the given category into the DIF-Net latent space, then calculate the Chamfer distance between the embedded shapes and the ground-truth point clouds. We use Pytorch3D for the Chamfer distance calculation. Please follow the repository's instructions to install it.
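For reference, the symmetric Chamfer distance computed in this step can be sketched in plain NumPy (a brute-force illustration; the evaluation script itself relies on Pytorch3D's batched GPU implementation):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric (squared) Chamfer distance between point sets a and b.

    For each point in a, find the squared distance to its nearest point
    in b, and vice versa; the result is the sum of the two mean values.
    """
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (|a|, |b|)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Identical point sets have zero Chamfer distance.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
assert chamfer_distance(pts, pts) == 0.0
```

The pairwise matrix makes this O(|a| * |b|) in memory, which is fine for sanity checks but motivates the KD-tree or GPU implementations used in practice.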

Contact

If you have any questions, please contact Yu Deng ([email protected]) and Jiaolong Yang ([email protected])

License

Copyright © Microsoft Corporation.

Licensed under the MIT license.

Citation

Please cite the following paper if this work helps your research:

@inproceedings{deng2021deformed,
    title={Deformed Implicit Field: Modeling 3D Shapes with Learned Dense Correspondence},
    author={Yu Deng and Jiaolong Yang and Xin Tong},
    booktitle={IEEE Computer Vision and Pattern Recognition},
    year={2021}
}

Acknowledgement

This implementation takes SIREN as a reference. We thank the authors for their excellent work.


dif-net's Issues

Error while trying to train

Hello,

I am getting an issue when running python train.py --config=configs/train/table.yml
The error is:

Traceback (most recent call last):
  File "/home/lkbr/anaconda3/envs/dif/lib/python3.6/site-packages/scipy/io/matlab/mio.py", line 39, in _open_file
    return open(file_like, mode), True
FileNotFoundError: [Errno 2] No such file or directory: 'datasets/table/surface_pts_n_normal/1011e1c9812b84d2a9ed7bb5b55809f8.mat'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 71, in <module>
    sdf_dataset = dataset.PointCloudMulti(root_dir=data_path, max_num_instances=meta_params['num_instances'],**meta_params)
  File "/home/lkbr/repos/DIF-Net/dataset.py", line 111, in __init__
    on_surface_points=on_surface_points,expand=expand,max_points=max_points) for idx, dir in enumerate(self.instance_dirs)]
  File "/home/lkbr/repos/DIF-Net/dataset.py", line 111, in <listcomp>
    on_surface_points=on_surface_points,expand=expand,max_points=max_points) for idx, dir in enumerate(self.instance_dirs)]
  File "/home/lkbr/repos/DIF-Net/dataset.py", line 23, in __init__
    point_cloud = loadmat(pointcloud_path)
  File "/home/lkbr/anaconda3/envs/dif/lib/python3.6/site-packages/scipy/io/matlab/mio.py", line 216, in loadmat
    with _open_file_context(file_name, appendmat) as f:
  File "/home/lkbr/anaconda3/envs/dif/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/home/lkbr/anaconda3/envs/dif/lib/python3.6/site-packages/scipy/io/matlab/mio.py", line 19, in _open_file_context
    f, opened = _open_file(file_like, appendmat, mode)
  File "/home/lkbr/anaconda3/envs/dif/lib/python3.6/site-packages/scipy/io/matlab/mio.py", line 45, in _open_file
    return open(file_like, mode), True
FileNotFoundError: [Errno 2] No such file or directory: 'datasets/table/surface_pts_n_normal/1011e1c9812b84d2a9ed7bb5b55809f8.mat'

It seems to fail on the very first .mat file. The error occurs when trying to train on any of the datasets you provided.
My system is Xubuntu with a GTX 1080 and Intel CPU. Let me know if you need any other system information.

question about torchmeta

In the torchmeta folder I didn't see any file like utils.py to comment out. Did something change in this folder?

Correspondences between reconstruction and template

Hi,

I'm interested in the dense correspondence between reconstruction and template.

It looks like mapping a coordinate from the template space / source shape space into a target shape space is a non-trivial problem. In your paper, you establish the correspondences using nearest neighbors in the template space.

Is there a more efficient strategy?

Thanks!

Jiancheng
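The nearest-neighbor lookup discussed here can be sketched with a KD-tree over template coordinates; this is an illustrative sketch (the function name and array shapes are assumptions), not the authors' code:

```python
import numpy as np
from scipy.spatial import cKDTree

def correspond(template_src, template_tgt):
    """For each source vertex, find the target vertex whose template-space
    coordinate is nearest (the nearest-neighbor strategy from the paper).

    template_src: (N, 3) template coords of source-shape surface points
    template_tgt: (M, 3) template coords of target-shape surface points
    Returns an index array mapping each source vertex to a target vertex.
    """
    tree = cKDTree(template_tgt)       # build once, query many times
    _, idx = tree.query(template_src)  # O(N log M) instead of O(N * M)
    return idx
```

Compared with brute-force pairwise distances, the KD-tree makes the lookup scale to full-resolution meshes, which is likely the practical answer to the efficiency question.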

texture transfer

Hi, thanks for the great work!

I'm doing a research project, and your texture transfer is quite interesting and might be applicable to my case. Do you have any plan to release the code for how you produced Figure 10?

datasets

Is the data on the Google Drive preprocessed?

Source Shapes for label transfer

Hello. Thank you for your interesting work.
I want to ask if I can get the IDs you used for the label transfer task. I tried to find them by comparing Fig. III and the data one by one, but I couldn't identify them clearly since there are too many similar objects. I'd appreciate your answer.

texture transfer detail

Hi, thanks for your great work! It's quite interesting, and the texture transfer results shown in the project are great! I am now trying to redo the texture transfer process.
I notice that you use 128 ** 3 samples for creating the mesh and calculating the corresponding template coords. I used the same samples to transfer texture, but the final result is quite blurred.
Could you please share more details about the texture transfer so I can reproduce Figure 10?

Some questions about preparation

Thanks for your outstanding paper! However, I encountered some problems during preparation and hope to get your help.

  1. It seems that the provided torchmeta requires torch <= 1.4.0, but the torch version in environment.yml is 1.5.0. Will this cause problems?
  2. I downloaded the training data, which contains 100 .mat files per category, but according to the /split/train/xxx.txt files, each category requires 4000 .mat files to train. You mentioned that you used mesh_to_sdf to generate the whole training set, but I don't understand how. Could you please explain?

shape editing

Excuse me, will the shape editing code be released?

Shape editing

Hi,
In the paper you wrote that you can also perform shape editing.
Is that code also available?

Thanks!

Training data preparation not working

Hi, I have an issue regarding data preparation.
As you mentioned in the README, I tried to prepare the ShapeNet-v2 dataset for training using mesh_to_sdf.
I used the get_surface_point_cloud and sample_sdf_near_surface functions (as in the screenshot below), but the result was quite different from your uploaded eval dataset.
[screenshot omitted]

Do I need to change the detailed arguments of either function to get the same results?

[screenshot omitted] This is the given data from the eval dataset, and
[screenshot omitted] this is our failing version.

Thanks in advance.

ShapeNet dataset

Hello. Thank you for your work.
I'd like to ask how you chose the data for training and evaluation. For example, ShapeNetCore.v2 contains about 6000 chair objects, but only 4100 objects were used for training and evaluating DIF-Net.
