This is a PyTorch implementation of the paper:

**Representing Point Clouds with Generative Conditional Invertible Flow Networks**
Michał Stypułkowski, Kacper Kania, Maciej Zamorski, Maciej Zięba, Tomasz Trzciński, Jan Chorowski
Preprint. Under review.
This paper focuses on a novel generative approach for 3D point clouds that makes use of invertible flow-based models. The main idea of the method is to treat a point cloud as a probability density in 3D space that is modeled using a cloud-specific neural network. To capture the similarity between point clouds we rely on parameter sharing among networks, with each cloud having only a small embedding vector that defines it. We use invertible flow networks to generate the individual point clouds and to regularize the embedding vectors. We evaluate the generative capabilities of the model both qualitatively and quantitatively.
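As an illustration of the core idea (not the paper's actual architecture), an invertible transform can map the points of a single cloud while its parameters are produced from that cloud's embedding vector; invertibility gives exact reconstruction and a tractable log-determinant for density evaluation. A minimal NumPy sketch, where the function names and the simple linear conditioning are illustrative assumptions:

```python
import numpy as np

def conditional_affine_forward(x, embedding, W_s, W_t):
    """Map points x (N, 3) through an invertible per-axis affine transform
    whose scale and shift are linear functions of the cloud embedding."""
    log_s = W_s @ embedding            # (3,) log-scales, one per coordinate
    t = W_t @ embedding                # (3,) shifts
    y = x * np.exp(log_s) + t
    log_det = np.sum(log_s)            # log|det J| per point, for density evaluation
    return y, log_det

def conditional_affine_inverse(y, embedding, W_s, W_t):
    """Exact inverse of the forward transform, given the same embedding."""
    log_s = W_s @ embedding
    t = W_t @ embedding
    return (y - t) * np.exp(-log_s)

rng = np.random.default_rng(0)
x = rng.normal(size=(2048, 3))                  # one point cloud
e = rng.normal(size=(8,))                       # its per-cloud embedding vector
W_s = rng.normal(size=(3, 8)) * 0.1             # illustrative conditioning weights
W_t = rng.normal(size=(3, 8)) * 0.1
y, log_det = conditional_affine_forward(x, e, W_s, W_t)
x_rec = conditional_affine_inverse(y, e, W_s, W_t)
assert np.allclose(x, x_rec)                    # invertibility holds exactly
```

In the actual model the transform is a deep flow network rather than a single affine map, but the same contract holds: every forward pass is exactly invertible and contributes a log-determinant term to the likelihood.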
## Requirements
Stored in `requirements.txt`.
## Training
Run the training process with:
```
python experiments/train/train_model.py --config configs/cif_train.yaml
```
## Pretrained models
You can also download pretrained models for:
## Evaluation
Run:
```
python experiments/test/EXPERIMENT_NAME.py --config configs/cif_eval.yaml
```
where `EXPERIMENT_NAME` can be one of the following:
- `train_reconstruction` to reconstruct the training set
- `test_reconstruction` to reconstruct the test set
- `sampling` to sample new objects
- `interpolation` to interpolate between shapes in latent space
- `common_rare` to find the most common and the most unique point clouds in the dataset
- `metrics_eval` to evaluate the performance of the model. You need to install `pytorch_structural_losses` from here. Please note that it calculates only Coverage and MMD. For full evaluation we used PointFlow's script.
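For orientation, Coverage and MMD can be sketched with a plain Chamfer-distance implementation. This is a simplified NumPy stand-in, not the `pytorch_structural_losses` code or the evaluation script used in the paper:

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3)."""
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M) pairwise sq. dists
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def coverage_and_mmd(samples, refs):
    """Coverage: fraction of reference clouds that are the nearest neighbor
    of at least one generated sample. MMD: mean distance from each
    reference cloud to its closest generated sample."""
    D = np.array([[chamfer(s, r) for r in refs] for s in samples])
    cov = len(set(D.argmin(axis=1))) / len(refs)
    mmd = D.min(axis=0).mean()
    return cov, mmd
```

Evaluating a set of samples against itself should give Coverage 1.0 and MMD 0.0, which is a quick sanity check for the implementation.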
## Rendering
- Install the Docker container of Mitsuba Renderer:
```
$ docker build -t mitsuba <path-to-downloaded-repository-of-mitsuba>
$ docker run -it -p 8000:8000 --name mitsuba mitsuba:latest
```
This will start a service for rendering purposes.
- Run:
```
$ python utils/visualize_points.py \
    <input-file> \
    <output-folder-or-file> \
    [--torch] \
    [--rotated] \
    [--batch] \
    [--port <port>]
```
where:
- `<input-file>` is a path to either a `*.npy` file or a file that can be depickled by `torch`. The file contains points as an `N x 3` matrix or a `B x N x 3` tensor, where `B` is the batch size and `N` is the number of points. If the file has 3 dimensions, you need to use `--batch` as well.
- `<output-folder-or-file>` is a directory where renders from a batch should be saved as `1.png, 2.png, ..., <B>.png`. If `--batch` is not used, it should be a file path instead, for example `~/output/img.png`.
- `--torch` is an optional flag indicating that the `<input-file>` should be depickled with `torch.load`.
- `--rotated` is an optional flag that rotates point clouds prior to rendering. It should be used in cases where the rendered shape is rotated.
- `--port <port>` is the port for the Mitsuba service if it was run on a port other than `8000`.
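For example, an input file in the expected `B x N x 3` layout can be prepared with NumPy (the file name and sizes here are hypothetical):

```python
import numpy as np
import os
import tempfile

# Hypothetical input for utils/visualize_points.py: a batch of 4 clouds
# with 2048 points each. A 3-dimensional array requires the --batch flag.
clouds = np.random.rand(4, 2048, 3).astype(np.float32)

path = os.path.join(tempfile.mkdtemp(), "clouds.npy")
np.save(path, clouds)

loaded = np.load(path)
print(loaded.shape)  # (4, 2048, 3)
```

Such a file would then be rendered with something like `python utils/visualize_points.py clouds.npy renders/ --batch`.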
## Citation
```
@article{stypulkowski2020cif,
  title={Representing Point Clouds with Generative Conditional Invertible Flow Networks},
  author={Stypu{\l}kowski, Micha{\l} and Kania, Kacper and Zamorski, Maciej and Zi{\k{e}}ba, Maciej and Trzci{\'n}ski, Tomasz and Chorowski, Jan},
  journal={arXiv},
  year={2020}
}
```