
3d_multiview_reg's People

Contributors

tolgabirdal, zgojcic


3d_multiview_reg's Issues

Threshold for losses different than original FCGF

Hi, you state that you keep the FCGF parameters the same as in the original paper. In your code, the positive and negative thresholds for contrastive_hardest_negative_loss are set as follows:

# For FCGF loss we keep the parameters the same as in the original paper

self.pos_thresh = 1.4
self.neg_thresh = 0.1

However, in the original FCGF code the default threshold parameters are the opposite: https://github.com/chrischoy/FCGF/blob/f0863a9ba4a29f677c0ddd2248cf4e56b9318add/config.py#L34

trainer_arg.add_argument('--neg_thresh', type=float, default=1.4)
trainer_arg.add_argument('--pos_thresh', type=float, default=0.1)
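
For context, here is a minimal sketch of how I understand these two thresholds to enter a hardest-negative contrastive loss (my own simplification, not the exact code of either repo):

import torch
import torch.nn.functional as F

def contrastive_loss_sketch(d_pos, d_neg, pos_thresh=0.1, neg_thresh=1.4):
    # Positives are pulled below pos_thresh; hardest negatives are pushed
    # beyond neg_thresh (the values shown are the FCGF-paper defaults).
    loss_pos = F.relu(d_pos - pos_thresh).pow(2).mean()
    loss_neg = F.relu(neg_thresh - d_neg).pow(2).mean()
    return loss_pos + loss_neg

# With the thresholds swapped (pos_thresh=1.4, neg_thresh=0.1), both hinge
# terms are almost always zero for typical descriptor distances, so the swap
# would not be a harmless relabeling.
print(contrastive_loss_sketch(torch.rand(100), torch.rand(100) + 1.0))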

Did you use your parameters to train the FCGF descriptor? If it is a design choice, could you elaborate?

Thank you!

Questions for Datasets

Hi,
When I run the "bash scripts/download_3DMatch_train.sh preprocessed" and "bash scripts/download_3DMatch_train.sh raw"

I get this error.

Connecting to share.phys.ethz.ch (share.phys.ethz.ch)|129.132.80.27|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2023-03-20 16:21:34 ERROR 403: Forbidden.

unzip: cannot find or open 3dmatch_raw.zip, 3dmatch_raw.zip.zip or 3dmatch_raw.zip.ZIP
rm: cannot remove '3dmatch_raw.zip': No such file or directory
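
A minimal way to check whether the failure is server-side (a sketch; the base host is taken from the log above, the exact download path is omitted):

import urllib.request, urllib.error

# A 403 here means the server itself refuses the request, so the follow-up
# unzip/rm failures are just consequences of the missing download.
try:
    with urllib.request.urlopen("https://share.phys.ethz.ch/", timeout=10) as resp:
        print(resp.status)
except urllib.error.HTTPError as e:
    print("HTTP error:", e.code)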

What is the reason for this?

Best Regards

Inputs must be sparse tensors when processing raw eval data

Hi!
Thanks for sharing your very promising work!

I'm trying to follow your instructions to download and process (feature extraction) your evaluation data.
However, when calling extract_data.py as in your example, I get the following output:

2020-07-13 15:09:05 824982bf9848 root[13462] INFO Starting feature extraction
2020-07-13 15:09:05 824982bf9848 root[13462] INFO ['./data/eval_data/3d_match/raw_data/kitchen', './data/eval_data/3d_match/raw_data/sun3d-home_at-home_at_scan1_2013_jan_1', './data/eval_data/3d_match/raw_data/sun3d-home_md-home_md_scan9_2012_sep_30', './data/eval_data/3d_match/raw_data/sun3d-hotel_uc-scan3', './data/eval_data/3d_match/raw_data/sun3d-hotel_umd-maryland_hotel1', './data/eval_data/3d_match/raw_data/sun3d-hotel_umd-maryland_hotel3', './data/eval_data/3d_match/raw_data/sun3d-mit_76_studyroom-76-1studyroom2', './data/eval_data/3d_match/raw_data/sun3d-mit_lab_hj-lab_hj_tea_nov_2_2012_scan1_erika']
2020-07-13 15:09:05 824982bf9848 root[13462] INFO 0 / 60: kitchen_000
/content/drive/My Drive/Dev/DL/colab_data/3D_multiview_reg/scripts/utils.py:118: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  coords = torch.tensor(coords, dtype=torch.int32)

Traceback (most recent call last):
  File "./scripts/extract_data.py", line 363, in <module>
    logger.info('Feature extraction completed')
  File "./scripts/extract_data.py", line 88, in extract_features_batch
    
  File "/content/drive/My Drive/Dev/DL/colab_data/3D_multiview_reg/scripts/utils.py", line 125, in extract_features
    return return_coords, model(stensor).F
  File "/usr/local/envs/lmpr/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/My Drive/Dev/DL/colab_data/3D_multiview_reg/lib/descriptor/fcgf.py", line 257, in forward
    out = ME.cat((out_s4_tr, out_s4))
  File "/usr/local/envs/lmpr/lib/python3.6/site-packages/MinkowskiEngine-0.4.3-py3.6-linux-x86_64.egg/MinkowskiEngine/MinkowskiOps.py", line 67, in cat
    assert isinstance(s, SparseTensor), "Inputs must be sparse tensors."
AssertionError: Inputs must be sparse tensors.

Do you have an idea why I'm seeing this or am I missing something?
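
In case it helps with debugging: the assertion fires when ME.cat receives something other than SparseTensor instances, which often points to a MinkowskiEngine version mismatch between the repo and the installed package. A minimal sketch of constructing valid SparseTensor inputs under ME 0.4.x (the version in the traceback; newer ME releases renamed the arguments to features=/coordinates=):

import torch
import MinkowskiEngine as ME

# Quantized integer coordinates (one column is the batch index; the column
# order depends on the ME version) plus per-point features.
coords = torch.IntTensor([[0, 0, 0, 0], [1, 0, 0, 0]])
feats = torch.rand(2, 3)
stensor = ME.SparseTensor(feats, coords=coords)  # ME 0.4.x keyword
print(type(stensor))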

Key Error

Hi,
When I use my own dataset with extract_data.py,

python ./scripts/extract_data.py \
--source_path ./data/train_data/ \
--target_path ./data/train_data/ \
--dataset 3d_match \
--model ./pretrained/fcgf/model_best.pth \
--voxel_size 0.025 \
--n_correspondences 20000 \
--inlier_threshold 0.05 \
--extract_features \
--extract_correspondences \
--extract_precomputed_training_data \
--with_cuda

it creates "correspond and features" but afterwards I get this error.

Traceback (most recent call last):
  File "./scripts/extract_data.py", line 373, in <module>
    args.target_path, args.inlier_threshold, args.voxel_size)
  File "./scripts/extract_data.py", line 254, in extract_precomputed_training_data
    xs = data['xs']
  File "/usr/local/lib/python3.6/dist-packages/numpy/lib/npyio.py", line 266, in __getitem__
    raise KeyError("%s is not a file in the archive" % key)
KeyError: 'xs is not a file in the archive'
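
One way to narrow this down is to list what the generated archive actually contains before indexing into it (the path below is a placeholder):

import numpy as np

# 'xs' should appear here if extract_precomputed_training_data wrote the
# archive successfully.
data = np.load("./data/train_data/some_archive.npz")
print(data.files)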

What is the reason for this?

Best Regards

color information in the descriptor

Currently, only the coordinate information is used during training to generate the descriptor. Did you also try to include the color information? I think it should be helpful in some difficult cases.
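
A hypothetical sketch of what injecting color could look like on the input side (assuming FCGF-style constant all-ones input features; this is not code from this repo):

import torch

n_pts = 1000
colors = torch.rand(n_pts, 3)             # per-point RGB in [0, 1] (dummy data)
ones = torch.ones(n_pts, 1)               # FCGF-style occupancy feature
feats = torch.cat([ones, colors], dim=1)  # the network's in_channels becomes 4
print(feats.shape)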

Processing of Scannet dataset

Hi, I would like to have further information about the Scannet dataset you used for the multiview experiments.

  1. Do you have a list of frames used for the evaluation?
  2. Are the settings for pairwise matching the same as those used by pairwise_demo.py? Also, would it be possible to provide the pairwise transformations used?

Thanks!

Question on new release of the multiview 3D point cloud registration part

Hello, thanks for the current release. I have finished the demo part and got a perfect pairwise registration result. However, the current release only supports training the pairwise registration network. Would it be possible to publish a new release that covers the multiview 3D point cloud registration part as well, not only the pairwise part?

IndexError

When I run the 3DMatch evaluation,

python ./scripts/benchmark_pairwise_registration.py \
--source ./data/eval_data/ \
--dataset 3d_match \
--method RegBlock \
--model ./pretrained/RegBlock/model_best.pt \
--only_gt_overlap

I get the following error:

  File "./scripts/benchmark_pairwise_registration.py", line 421, in <module>
    evaluate_registration_performance(eval_data,
  File "./scripts/benchmark_pairwise_registration.py", line 313, in evaluate_registration_performance
    ext_traj_est, ext_traj_gt = extract_corresponding_trajectors(est_pairs, gt_pairs, est_traj, gt_traj)
  File "/home/san/Desktop/3D_multiview_reg-master/lib/utils.py", line 514, in extract_corresponding_trajectors
    est_pairs = est_pairs[:,0:2]
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
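
For reference, the error reproduces whenever est_pairs is loaded as a 1-D array, e.g. when the estimated-trajectory file holds only a single pair (the row layout below is my assumption):

import numpy as np

est_pairs = np.array([0, 1, 60])      # a single (src, tgt, n_fragments) row, loaded as 1-D
est_pairs = est_pairs.reshape(-1, 3)  # hypothetical guard: force a 2-D shape
print(est_pairs[:, 0:2])              # now slices without the IndexError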

What is the problem?

Best Regards

Docker Guide

Thank you for sharing this wonderful work.

Is there any way to create a Dockerfile for the environment, since the repos that are used keep changing their supported versions?

Problem with dataset.

Hello author, because of network restrictions I can't use your script files to download the dataset, so I downloaded it from the 3DMatch website (http://3dmatch.cs.princeton.edu/). However, I noticed that extract_data.py needs .ply files, and I can only get .png files. Does that mean I should first generate .ply files (the raw point clouds) and then use extract_data.py to extract attributes such as features?
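
One possible way to generate such .ply files from the RGB-D frames is sketched below (a hypothetical example using Open3D; file names and camera intrinsics are placeholders):

import open3d as o3d

# Fuse one RGB-D frame into a point cloud and save it as .ply.
color = o3d.io.read_image("frame-000000.color.png")
depth = o3d.io.read_image("frame-000000.depth.png")
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(color, depth)
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
o3d.io.write_point_cloud("cloud_bin_0.ply", pcd)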

Questions for building environment for project

Hello, thanks for the current release. However, when running 'export CXX=g++-7; python setup.py install' I get the following error: python: can't open file 'setup.py': [Errno 2] No such file or directory. How can I solve this problem?

shape mismatch

When I run ./scripts/extract_data.py, I get the following error:
IndexError: shape mismatch: indexing arrays could not be broadcast together with shapes (201306,3) (201306,)
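
A minimal, repo-independent reproduction of the same numpy error (illustrative only):

import numpy as np

a = np.arange(12).reshape(4, 3)
rows = np.zeros((4, 3), dtype=int)  # stands in for the (201306, 3) index array
cols = np.zeros(4, dtype=int)       # stands in for the (201306,) index array
a[rows, cols]  # IndexError: shape mismatch: indexing arrays could not be broadcast together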

Result of the pointcloud after registration

Hello author, I have trained the model and obtained model_best.pt, and I am trying to evaluate it. However, I can't find any resulting point cloud file (e.g. a .ply) after registration. How can I reproduce the results shown in the paper, such as the globally aligned point clouds?
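
One way to inspect the globally aligned result is to apply the estimated poses to the fragments and merge them (a hypothetical sketch using Open3D; paths and poses are placeholders, not output produced by this repo):

import numpy as np
import open3d as o3d

fragment_paths = ["cloud_bin_0.ply", "cloud_bin_1.ply"]
estimated_poses = [np.eye(4), np.eye(4)]  # replace with the model's 4x4 transformations
merged = o3d.geometry.PointCloud()
for path, T in zip(fragment_paths, estimated_poses):
    pcd = o3d.io.read_point_cloud(path)
    pcd.transform(T)  # move the fragment into the common frame
    merged += pcd
o3d.io.write_point_cloud("aligned.ply", merged)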

TypeError: __init__() got an unexpected keyword argument 'has_bias'

Today I faced the following TypeError when running 'python3 ./scripts/pairwise_demo.py ./configs/pairwise_registration/demo/config.yaml --source_pc ./data/demo/pairwise/raw_data/cloud_bin_0.ply --target_pc ./data/demo/pairwise/raw_data/cloud_bin_1.ply --model pairwise_reg.pt --verbose --visualize':
"""
File "/home/wyu/WORKS/PycharmProjects/pythonProject/3D_multiview_reg/lib/descriptor/fcgf.py", line 125, in init
dimension=D)
TypeError: init() got an unexpected keyword argument 'has_bias'
"""
Please help me, thanks.
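
For anyone hitting this: as far as I know, MinkowskiEngine renamed the has_bias argument to bias in its 0.5 release, so this error usually means the installed ME version is newer than the one the repo targets. An illustrative sketch (the version details are my assumption):

import MinkowskiEngine as ME

# ME >= 0.5 spelling; ME 0.4.x used has_bias=False instead of bias=False.
conv = ME.MinkowskiConvolution(
    in_channels=3, out_channels=64, kernel_size=3, stride=1,
    bias=False, dimension=3)
print(conv)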

Multiview registration

Hi,

thanks for sharing your code! Am I correct in assuming that the current state of the repo supports only pairwise registration, not multiview registration?

It would also be nice if you could provide some demo code to test registration performance with point clouds from custom images.

Thanks for any information
Alexander

Point cloud generation

Hey,

Thank you for your great work! Could you tell us a bit more about the ScanNet evaluation?

  1. Did you generate each point cloud from a single RGBD frame?
  2. In the evaluation of ScanNet, how did you decide which frames to choose?

Best,
Zhengdi

How well does it generalize?

I'm currently trying out your method with scans of a foot.
Unfortunately it's not really working yet, even though I have played around with different voxel sizes and different scans.

In terms of dimensions and surface characteristics it's quite different from the office datasets used for training, so I'm not completely surprised that it doesn't work out of the box.

Do you think this is to be expected? Would training be the only way to solve my problem here, or are there maybe other things that I haven't considered yet? Would you expect an improvement from the multiview part? I'm currently considering 3-4 scans of a foot from different perspectives, so the overlap is not huge here...

Please let me know if you need some scans or if this question is not appropriate here!
