This repository contains the source code for the paper *AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation*. The network synthesizes a mesh (point cloud + connectivity) from a low-resolution point cloud or from an image.
If you find this work useful in your research, please consider citing:
@inproceedings{groueix2018,
title={{AtlasNet: A Papier-M\^ach\'e Approach to Learning 3D Surface Generation}},
author={Groueix, Thibault and Fisher, Matthew and Kim, Vladimir G. and Russell, Bryan and Aubry, Mathieu},
booktitle={Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
year={2018}
}
The project page is available at http://imagine.enpc.fr/~groueixt/atlasnet/
## Download the repository
git clone [email protected]:ThibaultGROUEIX/AtlasNet.git
## Create python env with relevant packages
conda create --name pytorch-atlasnet --file auxiliary/spec-file.txt
source activate pytorch-atlasnet
pip install pandas visdom tqdm
conda install pytorch=0.1.12 cuda80 -c soumith #Update cuda80 to cuda90 if relevant
conda install torchvision
This implementation uses Pytorch. Please note that the Chamfer distance code does not work on all versions of Pytorch because of an issue with the batch norm layers. It has been tested on v0.1.12, v0.3 and the latest sources available at the time of writing.

Python/Pytorch | v0.1.12 | v0.2 | v0.3.1 | 0.4.0a0+ea02833 | 0.4.x latest
---|---|---|---|---|---
2.7 | ✔️ 👍 😃 | 🚫 👎 😞 | 🚫 👎 😞 | ✔️ 👍 😃 | 🚫 👎 😞
3.6 | ✔️ 👍 😃 | ? | ? | 🚫 👎 😞 | 🚫 👎 😞

Recommended: Python 2.7, Pytorch 0.1.12
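To compare your setup against the table above, a quick sanity check (a minimal sketch, not part of the repo):

```python
# Print the Python / Pytorch / CUDA configuration of the current environment.
import sys
import torch

print('python :', sys.version.split()[0])
print('pytorch:', getattr(torch, '__version__', 'unknown'))  # very old builds may not expose __version__
print('cuda   :', torch.cuda.is_available())
```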
If you need v0.4:
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch ; git reset --hard ea02833 #Go to this specific commit that works fine for the chamfer distance
# Then follow pytorch install instruction as usual
Developed in Python 2.7, so it might need a few adjustments for Python 3.6. Only "train_AE_AtlasNet.py" has been tested in Python 3.6.
Build the Chamfer distance CUDA extension (nndistance):
cd AtlasNet/nndistance/src
nvcc -c -o nnd_cuda.cu.o nnd_cuda.cu -x cu -Xcompiler -fPIC -arch=sm_52
cd ..
python build.py
python test.py
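Once the extension builds and test.py passes, the Chamfer distance can be called roughly as below. This is a minimal sketch following the conventions of the nndistance package (the `NNDModule` name and the returned pair of directed distances are assumptions based on nndistance/test.py; check that file for the exact API in your checkout):

```python
# Minimal sketch: symmetric Chamfer loss between two batches of point clouds.
import torch
from torch.autograd import Variable
from modules.nnd import NNDModule  # run from inside AtlasNet/nndistance

chamfer = NNDModule()
p1 = Variable(torch.rand(8, 1000, 3).cuda(), requires_grad=True)  # predicted points
p2 = Variable(torch.rand(8, 2500, 3).cuda())                      # target points
dist1, dist2 = chamfer(p1, p2)  # nearest-neighbour distances in both directions
loss = torch.mean(dist1) + torch.mean(dist2)
loss.backward()
print(loss.data[0])
```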
We used the ShapeNet dataset for the 3D models, and the rendered views from 3D-R2N2. When using the provided data, make sure to respect the ShapeNet license.
- The point clouds from ShapeNet, with normals, go in data/customShapeNet
- The corresponding normalized meshes (for the Metro distance) go in data/ShapeNetCorev2Normalized
- The rendered views go in data/ShapeNetRendering
The trained models and some corresponding results are also available online:
- The trained models go in trained_models/ (a quick path check is sketched below)
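A minimal sketch (not part of the repo) to check that the folders from the two lists above are in place:

```python
# Verify that the expected data and model folders exist relative to the repo root.
import os

for folder in ['data/customShapeNet',
               'data/ShapeNetCorev2Normalized',
               'data/ShapeNetRendering',
               'trained_models']:
    print(folder, '->', 'found' if os.path.isdir(folder) else 'MISSING')
```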
In case you need the results of ICP on PointSetGen output:
The demo requires 3 GB of GPU memory and takes about 5 seconds to run. Pass --cuda 0 to run without a GPU (about 9 seconds).
python inference/demo.py --cuda 1
This script takes a 137 × 137 image (from ShapeNet) as input, runs it through a trained ResNet encoder, decodes it through a trained AtlasNet with 25 learned parameterizations, and saves the output to output.ply.
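To sanity-check the result, the header of output.ply can be inspected; a minimal sketch (the helper below is illustrative and not part of the repo):

```python
# Count the vertices and faces declared in the header of the generated PLY file.
def ply_counts(path):
    counts = {}
    with open(path, 'rb') as f:
        for raw in f:
            line = raw.decode('ascii', 'ignore').strip()
            if line.startswith('element'):
                _, name, num = line.split()
                counts[name] = int(num)
            elif line == 'end_header':
                break
    return counts

print(ply_counts('output.ply'))  # e.g. {'vertex': ..., 'face': ...}
```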
- First, launch a visdom server:
python -m visdom.server -p 8888
- Launch the training. Check out all the options in ./training/train_AE_AtlasNet.py.
export CUDA_VISIBLE_DEVICES=0 #whichever you want
source activate pytorch-atlasnet
git pull
env=AE_AtlasNet
nb_primitives=25
python ./training/train_AE_AtlasNet.py --env $env --nb_primitives $nb_primitives |& tee ${env}.txt
- Monitor your training on http://localhost:8888/
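If you want to push your own curves to the same visdom server, the standard visdom Python client can be used; a minimal sketch (the curve and title are dummies):

```python
# Send a dummy curve to the visdom server started above (port 8888).
import numpy as np
import visdom

vis = visdom.Visdom(port=8888)
vis.line(Y=np.random.rand(10), opts=dict(title='dummy curve'))
```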
- Compute some results with your trained model:
python ./inference/run_AE_AtlasNet.py
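Before running the inference script, the downloaded weights can be inspected with plain torch.load; a minimal sketch (the filename below is hypothetical, use whichever .pth file you placed in trained_models/):

```python
# Load a checkpoint on the CPU and list its layers and tensor shapes.
import torch

state_dict = torch.load('trained_models/AE_AtlasNet.pth',          # hypothetical filename
                        map_location=lambda storage, loc: storage)  # keep tensors on CPU
for name, tensor in state_dict.items():
    print(name, tuple(tensor.size()))
```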
The trained models available online have the following performance, slightly better than the numbers reported in the paper. The metric reported is the Chamfer distance.

Autoencoder (25 learned parameterizations):

val_loss | 0.0014795344685297
---|---
watercraft | 0.00127737027906 |
monitor | 0.0016588120616 |
car | 0.00152693425022 |
couch | 0.00171516126198 |
cabinet | 0.00168296881168 |
lamp | 0.00232362473947 |
plane | 0.000833268054194 |
speaker | 0.0025417242402 |
table | 0.00149979386376 |
chair | 0.00156113364435 |
bench | 0.00120812499892 |
firearm | 0.000626943988977 |
cellphone | 0.0012117530635 |

Single-view reconstruction (25 learned parameterizations):

val_loss | 0.00400863720389
---|---
watercraft | 0.00336707355723 |
monitor | 0.00456469316226 |
car | 0.00306795421868 |
couch | 0.00404269965806 |
cabinet | 0.00355917039209 |
lamp | 0.0114094304694 |
plane | 0.00192791500002 |
speaker | 0.00780984506137 |
table | 0.00368373458016 |
chair | 0.00407004468516 |
bench | 0.0030023689528 |
firearm | 0.00192803189235 |
cellphone | 0.00293665724291 |
- Quantitatively evaluate the reconstructed meshes: Metro distance.
The surfaces of the generated 3D models are not oriented. As a consequence, some areas will appear dark if you directly visualize the results in Meshlab. You have to use a custom fragment shader in Meshlab that flips the normals when they are hit by a ray from the wrong side. An example is given for the Phong BRDF.
sudo mv /usr/share/meshlab/shaders/phong.frag /usr/share/meshlab/shaders/phong.frag.bak
sudo cp auxiliary/phong.frag /usr/share/meshlab/shaders/phong.frag #restart Meshlab
The code for the Chamfer loss was taken from Fei Xia's repo: PointGan. Many thanks to him!
This work was funded by Adobe Systems and Ecole Doctorale MSTIC.