erlerphilipp / points2surf

Learning Implicit Surfaces from Point Clouds (ECCV 2020)

Home Page: https://www.cg.tuwien.ac.at/research/publications/2020/erler-2020-p2s/

License: MIT License

Python 91.93% Shell 8.07%
deep-learning point-cloud surface-reconstruction

points2surf's Introduction

Points2Surf: Learning Implicit Surfaces from Point Clouds (ECCV 2020 Spotlight)

This is our implementation of Points2Surf, a network that estimates a signed distance function from point clouds. This SDF is turned into a mesh with Marching Cubes. For more details, please watch the short video and long video.
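
As an illustration of this pipeline (a minimal sketch, not the repository's code), the following extracts a mesh from an SDF grid with scikit-image's Marching Cubes; the analytic sphere SDF and the grid resolution are stand-ins for the network's predictions:

import numpy as np
from skimage import measure  # scikit-image

# Toy SDF grid: a sphere of radius 0.5 in the (-1..+1) cube.
res = 64
coords = np.linspace(-1.0, 1.0, res)
x, y, z = np.meshgrid(coords, coords, coords, indexing='ij')
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5  # negative inside, positive outside

# Extract the zero level set as a triangle mesh.
verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)

# Map vertex positions from grid indices back to the (-1..+1) cube.
verts = verts / (res - 1) * 2.0 - 1.0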

Points2Surf reconstructs objects from arbitrary point clouds more accurately than DeepSDF, AtlasNet and Screened Poisson Surface Reconstruction.

The architecture is similar to PCPNet. In contrast to other ML-based surface reconstruction methods, e.g. DeepSDF and AtlasNet, Points2Surf is patch-based and therefore independent of shape classes. The strongly improved generalization leads to much better results, even better than Screened Poisson Surface Reconstruction in most cases.

This code was mostly written by Philipp Erler and Paul Guerrero. This work was published at ECCV 2020.

Prerequisites

  • Python >= 3.7
  • PyTorch >= 1.6
  • CUDA and CuDNN if using GPU
  • BlenSor 1.0.18 RC 10 for dataset generation

Quick Start

To get a minimal working example for training and reconstruction, follow these steps. We recommend using Anaconda to manage the Python environment. Otherwise, you can install the required packages with pip as listed in requirements.txt.

# clone this repo
# a minimal dataset is included (2 shapes for training, 1 for evaluation)
git clone https://github.com/ErlerPhilipp/points2surf.git

# go into the cloned dir
cd points2surf

# create a conda environment with the required packages
conda env create --file p2s.yml

# activate the new conda environment
conda activate p2s

# train and evaluate the vanilla model with default settings
python full_run.py

Reconstruct Surfaces from our Point Clouds

Reconstruct meshes from our point clouds to replicate our results:

# download the test datasets
python datasets/download_datasets_abc.py
python datasets/download_datasets_famous.py
python datasets/download_datasets_thingi10k.py
python datasets/download_datasets_real_world.py

# download the pre-trained models
python models/download_models_vanilla.py
python models/download_models_ablation.py
python models/download_models_max.py

# start the evaluation for each model
# Points2Surf main model, trained for 150 epochs
bash experiments/eval_p2s_vanilla.sh

# ablation models, trained for 50 epochs
bash experiments/eval_p2s_small_radius.sh
bash experiments/eval_p2s_medium_radius.sh
bash experiments/eval_p2s_large_radius.sh
bash experiments/eval_p2s_small_kNN.sh
bash experiments/eval_p2s_large_kNN.sh
bash experiments/eval_p2s_shared_transformer.sh
bash experiments/eval_p2s_no_qstn.sh
bash experiments/eval_p2s_uniform.sh
bash experiments/eval_p2s_vanilla_ablation.sh

# additional ablation models, trained for 50 epochs
bash experiments/eval_p2s_regression.sh
bash experiments/eval_p2s_shared_encoder.sh

# best model based on the ablation results, trained for 250 epochs
bash experiments/eval_p2s_max.sh

Each eval script reconstructs all test sets using the specified model. Running one evaluation takes around one day on a normal PC with e.g. a GTX 1070 and grid resolution = 256.

To get the best results, take the Max model. It's 15% smaller and produces 4% better results (mean Chamfer distance over all test sets) than the Vanilla model. It avoids the QSTN and uses uniform sub-sampling.

Training with our Dataset

To train the P2S models from the paper with our training set:

# download the ABC training and validation set
python datasets/download_datasets_abc_training.py

# start the training for each model
# Points2Surf main model, train for 150 epochs
bash experiments/train_p2s_vanilla.sh

# ablation models, train for 50 epochs
bash experiments/train_p2s_small_radius.sh
bash experiments/train_p2s_medium_radius.sh
bash experiments/train_p2s_large_radius.sh
bash experiments/train_p2s_small_kNN.sh
bash experiments/train_p2s_large_kNN.sh
bash experiments/train_p2s_shared_transformer.sh
bash experiments/train_p2s_no_qstn.sh
bash experiments/train_p2s_uniform.sh
bash experiments/train_p2s_vanilla_ablation.sh

# additional ablation models, train for 50 epochs
bash experiments/train_p2s_regression.sh
bash experiments/train_p2s_shared_encoder.sh

# best model based on the ablation results, train for 250 epochs
bash experiments/train_p2s_max.sh

With 4 RTX 2080 Ti GPUs, training to 150 epochs took around 5 days. Full convergence is at 200-250 epochs, but the Chamfer distance doesn't change much after that. The topological noise might be reduced, though.

Losses (absolute distance, sign logits and their combination) are logged with TensorBoard by default. Additionally, we log the accuracy, recall and F1 score for the sign prediction. You can start a TensorBoard server with:

bash start_tensorboard.sh
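
For reference, scalar logging with PyTorch's TensorBoard writer looks roughly like this (a sketch with placeholder values and made-up tag names, not the repository's logging code):

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir='logs/my_run')  # hypothetical log directory
for epoch in range(150):
    # placeholder values; the repo logs distance and sign losses plus
    # accuracy, recall and F1 of the sign prediction
    abs_dist_loss, sign_loss, sign_f1 = 0.1, 0.2, 0.9
    writer.add_scalar('loss/abs_distance', abs_dist_loss, epoch)
    writer.add_scalar('loss/sign_logits', sign_loss, epoch)
    writer.add_scalar('metrics/sign_f1', sign_f1, epoch)
writer.close()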

Make your own Datasets

The point clouds are stored as NumPy arrays of type np.float32 with the file extension .npy, where each row contains the 3 coordinates of a point. The point clouds need to be normalized to the (-1..+1) range.

A dataset is given by a text file containing the file name (without extension) of one point cloud per line. The file name is given relative to the location of the text file.
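
A minimal sketch of storing a point cloud in this format (the file name and the normalization by the largest extent are assumptions for illustration):

import numpy as np

# Hypothetical point cloud: N x 3 float32 coordinates.
pts = np.random.rand(2048, 3).astype(np.float32)

# Normalize to the (-1..+1) range: center, then scale by the largest extent.
pts -= (pts.min(axis=0) + pts.max(axis=0)) / 2.0
pts /= np.abs(pts).max()

np.save('my_shape.npy', pts)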

Dataset from Meshes for Training and Reconstruction

To make your own dataset from meshes, place your ground-truth meshes in ./datasets/(DATASET_NAME)/00_base_meshes/. Meshes must be of a type that Trimesh can load, e.g. .ply, .stl, .obj or .off. Because we need to compute signed distances for the training set, these input meshes must represent solids, i.e. be manifold and watertight. Triangulated CAD objects like those in the ABC-Dataset work in most cases. Next, create the text file ./datasets/(DATASET_NAME)/settings.ini with the following settings:

[general]
only_for_evaluation = 0
grid_resolution = 256
epsilon = 5
num_scans_per_mesh_min = 5
num_scans_per_mesh_max = 30
scanner_noise_sigma_min = 0.0
scanner_noise_sigma_max = 0.05

When you set only_for_evaluation = 1, the dataset preparation script skips most processing steps and produces only the text file for the test set.
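
Since the pipeline needs watertight solids (see above), it can save time to check the inputs with Trimesh before running the dataset generation. A minimal sketch, assuming a hypothetical dataset name and single-mesh input files:

import os
import trimesh

mesh_dir = 'datasets/my_dataset/00_base_meshes'
for name in sorted(os.listdir(mesh_dir)):
    mesh = trimesh.load(os.path.join(mesh_dir, name))
    # signed distances are only well-defined for watertight solids
    if not mesh.is_watertight:
        print(f'{name} is not watertight')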

For the point-cloud sampling, we used BlenSor 1.0.18 RC 10. Windows users need to fix a bug in the BlenSor code. Make sure that the blensor_bin variable in make_dataset.py points to your BlenSor binary, e.g. blensor_bin = "bin/Blensor-x64.AppImage".

You may need to change other paths or the number of worker processes and run:

python make_dataset.py

The ABC var-noise dataset with about 5k shapes takes around 8 hours using 15 worker processes on a Ryzen 7. Most computation time is required for the sampling and the GT signed distances.

Dataset from Point Clouds for Reconstruction

If you only want to reconstruct your own point clouds, place them in ./datasets/(DATASET_NAME)/00_base_pc/. The accepted file types are the same as for meshes.

You need to change some settings like the dataset name and the number of worker processes in make_pc_dataset.py and then run it with:

python make_pc_dataset.py

Manually Created Dataset for Reconstruction

In case you already have your point clouds as NumPy files, you can create a dataset manually. Put the *.npy files in the (DATASET_NAME)/04_pts/ directory. Then, list the names (without extensions, one per line) in a text file (DATASET_NAME)/testset.txt, as sketched below.
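
A minimal sketch of generating the test-set file from a directory of point clouds (the dataset path is a placeholder; whether the names need a 04_pts/ prefix may depend on the repository's conventions):

import os

dataset_dir = 'datasets/my_dataset'
names = sorted(f[:-len('.npy')]
               for f in os.listdir(os.path.join(dataset_dir, '04_pts'))
               if f.endswith('.npy'))

# one name per line, without extension
with open(os.path.join(dataset_dir, 'testset.txt'), 'w') as f:
    f.write('\n'.join(names) + '\n')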

Related Work

Screened Poisson Surface Reconstruction is the most important baseline for surface reconstruction. It fits a surface into a point cloud.

AtlasNet is one of the first data-driven methods for surface reconstruction. It learns to approximate objects with 'patches', deformed and subdivided rectangles.

DeepSDF is one of the first data-driven methods for surface reconstruction. It learns to approximate a signed distance function from points.

This concurrent work uses an approach similar to ours. It produces smooth surfaces but requires point normals.

Citation

If you use our work, please cite our paper:

@InProceedings{ErlerEtAl:Points2Surf:ECCV:2020,
  title     = {{Points2Surf}: Learning Implicit Surfaces from Point Clouds},
  author    = {Erler, Philipp and Guerrero, Paul and Ohrhallinger, Stefan and Mitra, Niloy J. and Wimmer, Michael},
  editor    = {Vedaldi, Andrea and Bischof, Horst and Brox, Thomas and Frahm, Jan-Michael},
  booktitle = {Computer Vision -- ECCV 2020},
  year      = {2020},
  publisher = {Springer International Publishing},
  address   = {Cham},
  pages     = {108--124},
  doi       = {10.1007/978-3-030-58558-7_7},
  isbn      = {978-3-030-58558-7},
  abstract  = {A key step in any scanning-based asset creation workflow is to convert unordered point clouds to a surface. Classical methods (e.g., Poisson reconstruction) start to degrade in the presence of noisy and partial scans. Hence, deep learning based methods have recently been proposed to produce complete surfaces, even from partial scans. However, such data-driven methods struggle to generalize to new shapes with large geometric and topological variations. We present Points2Surf, a novel patch-based learning framework that produces accurate surfaces directly from raw scans without normals. Learning a prior over a combination of detailed local patches and coarse global information improves generalization performance and reconstruction accuracy. Our extensive comparison on both synthetic and real data demonstrates a clear advantage of our method over state-of-the-art alternatives on previously unseen classes (on average, Points2Surf brings down reconstruction error by 30{\%} over SPR and by 270{\%}+ over deep learning based SotA methods) at the cost of longer computation times and a slight increase in small-scale topological noise in some cases. Our source code, pre-trained model, and dataset are available at: https://github.com/ErlerPhilipp/points2surf.},
}

points2surf's Issues

AttributeError: Can't pickle local object 'points_to_surf_train.<locals>.seed_train_worker'

When I try to run it, this error happens:

pytorch 1.10.2 py3.8_cuda11.3_cudnn8_0 pytorch

Traceback (most recent call last):
  File "D:/repo/points2surf/full_run.py", line 80, in <module>
    points_to_surf_train.points_to_surf_train(train_opt)
  File "D:\repo\points2surf\source\points_to_surf_train.py", line 428, in points_to_surf_train
    train_enum = enumerate(train_dataloader, 0)
  File "D:\Programs\anaconda3\envs\p2s\lib\site-packages\torch\utils\data\dataloader.py", line 354, in __iter__
    self._iterator = self._get_iterator()
  File "D:\Programs\anaconda3\envs\p2s\lib\site-packages\torch\utils\data\dataloader.py", line 305, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "D:\Programs\anaconda3\envs\p2s\lib\site-packages\torch\utils\data\dataloader.py", line 918, in __init__
    w.start()
  File "D:\Programs\anaconda3\envs\p2s\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "D:\Programs\anaconda3\envs\p2s\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\Programs\anaconda3\envs\p2s\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "D:\Programs\anaconda3\envs\p2s\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\Programs\anaconda3\envs\p2s\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'points_to_surf_train.<locals>.seed_train_worker'

Own dataset with point clouds as numpy arrays

I already have .npy files and they are normalized. I want to train the vanilla network on them. Is it enough if I put them into the 04_pts folder in the desired format, or will I need to prepare 05_query_pts as well? If so, how can I prepare the query points? Is there a useful script for that?

Data loss in Shapenet

Note: I accidentally posted this issue before typing the description; feel free to delete it completely.

How to reconstruct my own point cloud?

Thanks for your excellent work! I have a problem with your code. After using make_pc_dataset.py, I don't know how to reconstruct the point cloud into a mesh. Looking forward to your reply.

error in make_dataset.py: datasets/abc/00_base_meshes

I have downloaded the ABC dataset using python datasets/download_datasets_abc.py and I see a folder datasets/abc with four entries:

03_meshes
04_pts
settings.ini
testset.txt

When running make_dataset.py I am getting:

Traceback (most recent call last):
  File "make_dataset.py", line 813, in <module>
    make_dataset(dataset_name=d, blensor_bin=blensor_bin, base_dir=base_dir, num_processes=num_processes)
  File "make_dataset.py", line 724, in make_dataset
    clean_up_broken_inputs(base_dir=base_dir, dataset_dir=dataset_dir,
  File "make_dataset.py", line 534, in clean_up_broken_inputs
    final_output_files = [f for f in os.listdir(final_out_dir_abs)
FileNotFoundError: [Errno 2] No such file or directory: 'datasets/abc/00_base_meshes'

Patch point IndexError given #points < `opt.points_per_patch`

Hi, I bumped into an IndexError while training on my own point clouds:

Original Traceback (most recent call last):
  File "/workspace/envs/miniconda3/envs/points2poly/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/workspace/envs/miniconda3/envs/points2poly/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/workspace/envs/miniconda3/envs/points2poly/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/workspace/projects/points2poly/points2surf/source/data_loader.py", line 359, in __getitem__
    get_patch_points(shape=shape, query_point=imp_surf_query_point_ms)
  File "/workspace/projects/points2poly/points2surf/source/data_loader.py", line 343, in get_patch_points
    pts_patch_ms = shape.pts[patch_pts_ids, :]
IndexError: index 40 is out of bounds for axis 0 with size 40

Then I tried to identify a potential bug:
When the point cloud has fewer points (e.g., 40) than the specified number of points per local patch (e.g., 300 by default), KDTree.query returns self.n (40 in this case) to indicate missing neighbors. These missing neighbors are not properly handled later, regardless of the -1 padding, which results in the IndexError.

A quick fix would be to change this line from

patch_pts_pad_ids = patch_pts_ids == -1

to

patch_pts_pad_ids = np.logical_or(patch_pts_ids == -1, patch_pts_ids == len(shape.pts))

I didn't dig deeper on this but assume this would be a fix, hence this issue for your attention. What do you think?

My Own datasets

Hi, @ErlerPhilipp!
I want to process scene data using make_dataset.py. After setting enforce_solid=False, I still can't find any files in 02_meshes_cleaned. I get "ValueError: Dataset is empty! datasets/lounge/05_query_pts". So I would like to ask whether it's possible to process that data.

lounge.zip

Reconstruction Error: "The model contains no faces."

Hello, thank you for the great work.

I am trying to reconstruct my own 2048-point point cloud with the max model. However, when I try to visualize the resulting .ply file, I encounter the error: "The model contains no faces." What could be the reason for this?

Thanks!

Error while running the max model evaluation

Running this command:
bash experiments/eval_p2s_max.sh
I keep getting this error:
torch._C._cuda_init()
RuntimeError: CUDA driver initialization failed, you might not have a CUDA gpu.

Any help please?

ABC training set

Hi,

Thank you for making this great work available here.

Would it be possible to share your training data, the (scanned) 5k shapes from the ABC dataset?

Kind regards

Dataset from Point Clouds for Reconstruction

If I just want to reconstruct my own point clouds, it seems that I don't need BlenSor. How should I modify make_dataset.py? Can I just directly remove the code and parameters related to BlenSor?

Didn't get a good reconstruction result by directly using sdf.implicit_surface_to_mesh_directory

Hi,
for testing, I tried to reconstruct objects from the abc_train dataset. I only used the function sdf.implicit_surface_to_mesh_directory with the arguments
query_dist_dir='abc_train\05_query_dist',
query_pts_dir='abc_train\05_query_pts',
query_grid_resolution=128,
sigma=5,
certainty_threshold=13.
I opened the generated ply & off files with MeshLab and found all of them have many holes, which is quite different from the GT mesh.
Are there any problems with how I run the code?

How to invoke the pretrained model?

Thanks for the detailed code. Hi, I'm a novice in this field. The problem I'm facing now is how to call the pre-trained models (e.g. vanilla, ablation, max) to reconstruct our own point cloud data. I only used the command lines in README.md to download them, but I don't know how to invoke them. I currently use full_run.py to reconstruct our own point cloud data, but it is not effective.

Question about chamfer distance formulation

Hi,

I have noticed that your CD formulation in the main paper has the mean operator, but the implementation does not. The implementation seems to follow http://graphics.stanford.edu/courses/cs468-17-spring/LectureSlides/L14%20-%203d%20deep%20learning%20on%20point%20cloud%20representation%20(analysis).pdf, which directly sums the distances.

def _chamfer_distance_single_file(file_in, file_ref, samples_per_model, num_processes=1):
    # http://graphics.stanford.edu/courses/cs468-17-spring/LectureSlides/L14%20-%203d%20deep%20learning%20on%20point%20cloud%20representation%20(analysis).pdf
    import trimesh
    import trimesh.sample
    import sys
    import scipy.spatial as spatial

    def sample_mesh(mesh_file, num_samples):
        try:
            mesh = trimesh.load(mesh_file)
        except:
            return np.zeros((0, 3))
        samples, face_indices = trimesh.sample.sample_surface_even(mesh, num_samples)
        return samples

    new_mesh_samples = sample_mesh(file_in, samples_per_model)
    ref_mesh_samples = sample_mesh(file_ref, samples_per_model)

    if new_mesh_samples.shape[0] == 0 or ref_mesh_samples.shape[0] == 0:
        return file_in, file_ref, -1.0

    leaf_size = 100
    sys.setrecursionlimit(int(max(1000, round(new_mesh_samples.shape[0] / leaf_size))))
    kdtree_new_mesh_samples = spatial.cKDTree(new_mesh_samples, leaf_size)
    kdtree_ref_mesh_samples = spatial.cKDTree(ref_mesh_samples, leaf_size)

    ref_new_dist, corr_new_ids = kdtree_new_mesh_samples.query(ref_mesh_samples, 1, n_jobs=num_processes)
    new_ref_dist, corr_ref_ids = kdtree_ref_mesh_samples.query(new_mesh_samples, 1, n_jobs=num_processes)

    ref_new_dist_sum = np.sum(ref_new_dist)
    new_ref_dist_sum = np.sum(new_ref_dist)
    chamfer_dist = ref_new_dist_sum + new_ref_dist_sum

    return file_in, file_ref, chamfer_dist

Am I right?

Best

Missing keys and unexpected keys in state_dict

When running this command, bash experiments/eval_p2s_vanilla.sh, I get the following output:

Random Seed: 40938661
getting information for 100 shapes
models/p2s_vanilla_model_149.pth
Traceback (most recent call last):
  File "/home/origin/codes/points2surf-master/full_eval.py", line 81, in <module>
    full_eval(opt=points_to_surf_eval.parse_arguments())
  File "/home/origin/codes/points2surf-master/full_eval.py", line 46, in full_eval
    points_to_surf_eval.points_to_surf_eval(opt)
  File "/home/origin/codes/points2surf-master/source/points_to_surf_eval.py", line 338, in points_to_surf_eval
    p2s_model = make_regressor(train_opt=train_opt, pred_dim=pred_dim, model_filename=model_filename, device=device)
  File "/home/origin/codes/points2surf-master/source/points_to_surf_eval.py", line 172, in make_regressor
    p2s_model.load_state_dict(state)
  File "/home/origin/anaconda3/envs/p2s/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1497, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for DataParallel:
	Missing key(s) in state_dict: "module.feat_global.stn1.conv1.weight", "module.feat_global.stn1.conv1.bias", "module.feat_global.stn1.conv2.weight", "module.feat_global.stn1.conv2.bias", "module.feat_global.stn1.conv3.weight", "module.feat_global.stn1.conv3.bias", "module.feat_global.stn1.fc1.weight", "module.feat_global.stn1.fc1.bias", "module.feat_global.stn1.fc2.weight", "module.feat_global.stn1.fc2.bias", "module.feat_global.stn1.fc3.weight", "module.feat_global.stn1.fc3.bias", "module.feat_global.stn1.bn1.weight", "module.feat_global.stn1.bn1.bias", "module.feat_global.stn1.bn1.running_mean", "module.feat_global.stn1.bn1.running_var", "module.feat_global.stn1.bn2.weight", "module.feat_global.stn1.bn2.bias", "module.feat_global.stn1.bn2.running_mean", "module.feat_global.stn1.bn2.running_var", "module.feat_global.stn1.bn3.weight", "module.feat_global.stn1.bn3.bias", "module.feat_global.stn1.bn3.running_mean", "module.feat_global.stn1.bn3.running_var", "module.feat_global.stn1.bn4.weight", "module.feat_global.stn1.bn4.bias", "module.feat_global.stn1.bn4.running_mean", "module.feat_global.stn1.bn4.running_var", "module.feat_global.stn1.bn5.weight", "module.feat_global.stn1.bn5.bias", "module.feat_global.stn1.bn5.running_mean", "module.feat_global.stn1.bn5.running_var". 
	Unexpected key(s) in state_dict: "module.point_stn.conv1.weight", "module.point_stn.conv1.bias", "module.point_stn.conv2.weight", "module.point_stn.conv2.bias", "module.point_stn.conv3.weight", "module.point_stn.conv3.bias", "module.point_stn.fc1.weight", "module.point_stn.fc1.bias", "module.point_stn.fc2.weight", "module.point_stn.fc2.bias", "module.point_stn.fc3.weight", "module.point_stn.fc3.bias", "module.point_stn.bn1.weight", "module.point_stn.bn1.bias", "module.point_stn.bn1.running_mean", "module.point_stn.bn1.running_var", "module.point_stn.bn1.num_batches_tracked", "module.point_stn.bn2.weight", "module.point_stn.bn2.bias", "module.point_stn.bn2.running_mean", "module.point_stn.bn2.running_var", "module.point_stn.bn2.num_batches_tracked", "module.point_stn.bn3.weight", "module.point_stn.bn3.bias", "module.point_stn.bn3.running_mean", "module.point_stn.bn3.running_var", "module.point_stn.bn3.num_batches_tracked", "module.point_stn.bn4.weight", "module.point_stn.bn4.bias", "module.point_stn.bn4.running_mean", "module.point_stn.bn4.running_var", "module.point_stn.bn4.num_batches_tracked", "module.point_stn.bn5.weight", "module.point_stn.bn5.bias", "module.point_stn.bn5.running_mean", "module.point_stn.bn5.running_var", "module.point_stn.bn5.num_batches_tracked". 

training

Hi, @ErlerPhilipp!
I want to build my own point cloud dataset. I wonder if I can train with point clouds?

How can I get a colored reconstruction from my own point clouds?

Thanks for your excellent work!
I have my own colored point-cloud data, and I would like to reconstruct a mesh from it with color.
Currently, it looks like train_p2s_max.sh just outputs a gray mesh.
Do I need to modify make_pc_dataset.py or some other code?

Cannot get a correct surface representation using Marching Cubes

Hi,
I tried to save query_pts and query_dist from my own network after training, and then used the function sdf.implicit_surface_to_mesh_directory from points2surf to reconstruct shapes, but found that the generated mesh has a lot of vertices and faces (like in the attached picture).
I checked the query_dist file and found the distances are much smaller than in the ABC_train dataset. My average query_dist is 0.00165, while the ABC_train dataset's is -0.08. I don't know whether too-small query_dist values cause the reconstruction to fail.
How can I fix it?

Sampling with Blensor

Hi @ErlerPhilipp,

In the paper you stated

As a pre-processing step, we center all meshes at the origin and scale them uniformly to fit within the unit cube, ..., For each scan, we place the scanner at a random location on a sphere centered at the origin, ..., The scanner is oriented to point at a location with small random offset from the origin, ..., and rotated randomly around the view direction.

I wonder how this corresponds to the code

scanner.location = Vector([0.0, 0.0, 0.0])

obj_object.location = Vector(obj_locations[i])
obj_object.rotation_quaternion = Quaternion(obj_rotations[i])
do_scan(scanner, evd_file)

which to me seems like the scanner stays at the center while the mesh moves and rotates around it.

Any hint? Thanks in advance!

Problems while training on Shapenet

Thanks for the detailed code, @ErlerPhilipp. I'm trying to train the vanilla network on ShapeNet. I have encountered a lot of problems. Could you comment on them? Also, please let me know if you have further hints or points I should be careful about.

I put the meshes into 00_base_meshes and ran make_dataset.py. The following problems occur:

  1. The names of some meshes change, which then throws errors during training. For instance, out of 6778 samples I have 6678 in 04_pts and 6767 in 05_query_pts. Some samples have different filenames, ignoring file extensions. Where did they come from?
    I could only keep the common files in trainset.txt at the cost of throwing a lot of samples away, but I want to know why this is happening. Maybe something to do with BlenSor?

  2. I see that this work requires meshes to be watertight, but trimesh.fill_holes() can't fill my samples, so I am using https://github.com/hjwdzh/Manifold. Do you think the output of that repo works with this implementation? Can you also comment on why your work assumes watertight meshes?

  3. I waited for make_dataset.py for 10 hours, then realized it was no longer utilizing the CPU. I checked 05_query_pts; it had the same number of samples as 00_meshes, so I manually produced the txt files. Any idea why this is happening?
    For 6K samples, I get 118255 files in 04_pcd. Is this number too big? Should I change any parameters like grid_resolution or num_scans_per_mesh_max?

Dataset variant naming

Hi, thanks for releasing the code and the dataset!

In the paper you mentioned you created a dataset with medium noise (with the suffix 'med' to indicate this variant). But the downloaded datasets have no such variant. I wonder if it corresponds to no suffix for ABC, and 'original' for Famous and Thingi10k?

query points on point clouds

Dear Philipp,

Thanks for providing the code for this super interesting paper.
I have read all of the paper and still one question remains for me: when using pre-trained models just for inference (mesh reconstruction) from point clouds, how are the query points defined from the point clouds?

In the paper, it is mentioned that query points are defined by randomly sampling 1000 points on the surface and offsetting them in the normal direction. However, when using just point clouds for reconstruction once the model is trained, there is no surface to sample query points from; I wonder how they are defined in this case.

Thank you in advance.

@bearprin yes, you're right. The mean is essentially a normalization by the number of points in the subsamples. Looks like I forgot it in the code. Also note that the shapes are assumed to be normalized to unit cube size to get comparable results.

If the number of points in the subsamples is always the same, the error will be off by a constant factor (10000) here. Because of the default optimization in trimesh's surface sampling, which rejects samples that are too close, the correct CD might be ~5% lower than stated in the paper. However, this should apply to all methods very similarly. I'm sorry for this inaccuracy. A PR would be very welcome.

Originally posted by @ErlerPhilipp in #20 (comment)
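
Based on this reply, the Chamfer distance with the mean normalization would look roughly like this (a sketch, not the repository code):

import numpy as np
import scipy.spatial as spatial

def chamfer_distance_mean(samples_a, samples_b):
    # symmetric Chamfer distance, normalized by the number of samples
    dist_ab, _ = spatial.cKDTree(samples_b).query(samples_a, 1)
    dist_ba, _ = spatial.cKDTree(samples_a).query(samples_b, 1)
    return np.mean(dist_ab) + np.mean(dist_ba)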

How to get datasets with var, med and max noise?

Hi,
I tried to download the ABC and Famous datasets and got the original, free and extra-noise versions. It seems there's only one noisy dataset. Now I want to get datasets (or test datasets) with different levels of noise. Could you please provide them? Thanks! Looking forward to your reply.

Empty folder 04_pts_vis

Thanks for the great code! (Still learning how it works xD) I just want to test your model on my point cloud (.npy file), so I ran

python make_pc_dataset.py

after placing my npy file in 04_pts and adding its name to testset.txt. Then the folder 04_pts_vis was created, but it was empty.

Am I doing this right? Thanks!
