
License: MIT License


Neural Point Cloud Rendering via Multi-Plane Projection

Neural Point Cloud Rendering via Multi-Plane Projection (CVPR 2020)
Peng Dai*, Yinda Zhang*, Zhuwen Li*, Shuaicheng Liu, Bing Zeng.
Paper, Project_page, Video

Introduction


Our method is divided into two parts: multi-plane based voxelization (left) and multi-plane rendering (right). In the first part, the point cloud is re-projected into the camera coordinate system to form a frustum region, and voxelization plus aggregation operations generate a multi-plane 3D representation, which is concatenated with the normalized view direction and sent to the rendering network. In the second part, the concatenated input is fed into a 3D neural rendering network that predicts a 4-channel output per plane (i.e. RGB + blend weight), and the final output is generated by blending all planes. Training is supervised by a perceptual loss, and both the network parameters and the point cloud features are optimized according to the gradient.
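The final blending step described above can be sketched as follows. This is a minimal NumPy illustration, not the released code: the softmax normalization of the blend weights and the array shapes are assumptions.

```python
import numpy as np

def blend_planes(planes):
    """Blend per-plane network outputs into one image.

    planes: (P, H, W, 4) array -- RGB plus one raw blend weight per plane,
    as predicted by the rendering network (shapes are an assumption here).
    """
    rgb = planes[..., :3]       # (P, H, W, 3)
    weights = planes[..., 3:]   # (P, H, W, 1)
    # Normalize weights across the plane axis; a softmax keeps them
    # positive and summing to one per pixel.
    w = np.exp(weights - weights.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)
    return (rgb * w).sum(axis=0)  # (H, W, 3)

# Toy example: 32 planes, 4x4 image.
out = blend_planes(np.random.rand(32, 4, 4, 4))
```

Because the weights sum to one per pixel, the result is a convex combination of the per-plane colors.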

Environments

Tensorflow 1.10.0
Python 3.6
OpenCV

Data downloads

Download the datasets (i.e. ScanNet and Matterport3D) into the corresponding 'data/...' folders, including the RGB-D images and camera parameters.

Download 'imagenet-vgg-verydeep-19.mat' into 'VGG_Model/'.

Preprocessing

Before training, several preprocessing steps are required. The pre-processed results will be stored in 'pre_processing_results/[Matterport3D or ScanNet]/[scene_name]/'.

cd pre_processing/

Point clouds generation

Generate point cloud files ('point_clouds.ply') from the registered RGB-D images by running

python generate_pointclouds_[ScanNet or Matterport].py

Before that, you need to specify which scene is used in 'generate_pointclouds_[ScanNet or Matterport].py' (e.g. set "scene = 'scene0010_00'" for ScanNet).
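The core of this step is back-projecting each registered RGB-D frame into world space using the camera intrinsics and pose. A minimal sketch (the function name, argument layout, and the assumption of metric depth are illustrative, not taken from the script):

```python
import numpy as np

def backproject(depth, rgb, K, cam_to_world):
    """Back-project one registered RGB-D frame into world-space points.

    depth: (H, W) metric depth, rgb: (H, W, 3) colors,
    K: 3x3 intrinsics, cam_to_world: 4x4 camera pose.
    Returns an (N, 6) array of XYZ + RGB for valid-depth pixels.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth > 0
    z = depth[valid]
    # Invert the pinhole projection for each valid pixel.
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)  # (N, 4)
    pts_world = (cam_to_world @ pts_cam.T).T[:, :3]
    return np.concatenate([pts_world, rgb[valid]], axis=1)
```

Running this over every registered frame and concatenating the results yields the merged point cloud that gets written to 'point_clouds.ply'.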

Point clouds simplification

Based on the generated point cloud files, point cloud simplification is performed by running

python pointclouds_simplification.py

Also, you need to specify in 'pointclouds_simplification.py' which dataset and scene the 'point_clouds.ply' file was generated from (e.g. set "scene = 'ScanNet/scene0010_00'"). The simplified point cloud will be saved in 'point_clouds_simplified.ply'.
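A common way to simplify a dense point cloud is voxel-grid downsampling: keep one averaged point per occupied voxel. The sketch below assumes that strategy; the actual script may use a different simplification criterion.

```python
import numpy as np

def simplify_pointcloud(points, voxel_size=0.01):
    """Voxel-grid downsampling: one averaged point per occupied voxel.

    points: (N, C) array whose first 3 columns are XYZ; any extra
    columns (e.g. colors, descriptors) are averaged per voxel too.
    """
    keys = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    # Map every point to the index of its voxel.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, points.shape[1]))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]
```

Larger `voxel_size` values give stronger simplification at the cost of geometric detail.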

Voxelization and Aggregation

To save training time, we voxelize and aggregate the point clouds in advance by running

python voxelization_aggregation_[ScanNet or Matterport].py

This pre-computes the voxelization and aggregation information for each camera and saves it in 'reproject_results_32/' and 'weight_32/' respectively (32 planes by default). Also, you need to specify the scene in 'voxelization_aggregation_[ScanNet or Matterport].py' (e.g. set "scene = 'scene0010_00'" for ScanNet).
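Conceptually, this step sorts each point's camera-space depth into one of the 32 frustum planes. A minimal sketch, assuming uniform plane spacing between near and far bounds (the actual spacing scheme and bounds in the script may differ):

```python
import numpy as np

def assign_planes(depths_cam, num_planes=32, near=0.5, far=10.0):
    """Assign per-point camera-space depths to one of `num_planes` planes.

    Planes are spaced uniformly between `near` and `far` (an assumption);
    returns a plane index per point, clipped to the valid range.
    """
    t = (depths_cam - near) / (far - near)        # normalize to [0, 1]
    idx = np.floor(t * num_planes).astype(np.int64)
    return np.clip(idx, 0, num_planes - 1)
```

Precomputing these indices (and the per-voxel aggregation weights) once per camera avoids repeating the work at every training iteration.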

Train

To train the model, run python npcr_ScanNet.py for ScanNet or python npcr_Matterport3D.py for Matterport3D.

You need to set 'is_training=True' and provide the paths of the training-related files (i.e. RGB images, camera parameters, the simplified point cloud file, and the pre-processed aggregation and voxelization information) by specifying the scene name in 'npcr_[ScanNet or Matterport3D].py' (e.g. set "scene = 'scene0010_00'" for ScanNet).

The trained model (i.e. checkpoint files) and optimized point descriptors (i.e. 'descriptor.mat') will be saved in '[ScanNet or Matterport3D]npcr[scene_name]/'.

Test

To test the model, again run python npcr_ScanNet.py for ScanNet or python npcr_Matterport3D.py for Matterport3D.

You need to set 'is_training=False' and provide the paths of the test-related files (i.e. checkpoint files, optimized point descriptors, camera parameters, the simplified point cloud file, and the pre-processed aggregation and voxelization information) by specifying the scene name in 'npcr_[ScanNet or Matterport3D].py' (e.g. set "scene = 'scene0010_00'" for ScanNet).

The test results will be saved in '[ScanNet or Matterport3D]npcr[scene_name]/Test_Result/'.

If you need the point cloud files and pretrained models, please email me ([email protected]) and show your licenses for ScanNet and Matterport3D.

Citation

If you use our code or method in your work, please cite the following:

@InProceedings{Dai_2019_neuralpointcloudrendering,
author = {Dai, Peng and Zhang, Yinda and Li, Zhuwen and Liu, Shuaicheng and Zeng, Bing},
title = {Neural Point Cloud Rendering via Multi-plane Projection},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}

