
wuziyi616 / multi_part_assembly

61 stars, 11 forks, 276 KB

Code accompanying the NeurIPS 2022 dataset paper "Breaking Bad": a shape assembly benchmark implemented in PyTorch.

Home Page: https://breaking-bad-dataset.github.io/

License: MIT License

Python 83.72% C++ 4.33% Cuda 9.60% C 0.57% Shell 1.78%
computer-vision machine-learning point-cloud pytorch pytorch-lightning shapeassembly


multi_part_assembly's People

Contributors

ruiyuan-zhang, wuziyi616


multi_part_assembly's Issues

Install problem

Hello, when I install this project following install.md (https://github.com/Wuziyi616/multi_part_assembly/blob/master/docs/install.md), I always hit a problem at the "Custom Ops" step, as below:

Obtaining file:///home/xb/ProjectToy/multi_part_assembly-master/multi_part_assembly/utils/chamfer
Preparing metadata (setup.py) ... done
DEPRECATION: pytorch-lightning 1.6.2 has a non-standard dependency specifier torch>=1.8.*. pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of pytorch-lightning or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at pypa/pip#12063
Installing collected packages: chamfer-ext
Running setup.py develop for chamfer-ext
error: subprocess-exited-with-error

× python setup.py develop did not run successfully.
│ exit code: 1
╰─> [39 lines of output]
    running develop
    running egg_info
    creating chamfer_ext.egg-info
    writing chamfer_ext.egg-info/PKG-INFO
    writing dependency_links to chamfer_ext.egg-info/dependency_links.txt
    writing top-level names to chamfer_ext.egg-info/top_level.txt
    writing manifest file 'chamfer_ext.egg-info/SOURCES.txt'
    reading manifest file 'chamfer_ext.egg-info/SOURCES.txt'
    writing manifest file 'chamfer_ext.egg-info/SOURCES.txt'
    running build_ext
    /home/xb/anaconda3/envs/assembly/lib/python3.8/site-packages/setuptools/command/develop.py:40: EasyInstallDeprecationWarning: easy_install command is deprecated.
    !!

            ********************************************************************************
            Please avoid running ``setup.py`` and ``easy_install``.
            Instead, use pypa/build, pypa/installer or other
            standards-based tools.

            See https://github.com/pypa/setuptools/issues/917 for details.
            ********************************************************************************

    !!
      easy_install.initialize_options(self)
    /home/xb/anaconda3/envs/assembly/lib/python3.8/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
    !!

            ********************************************************************************
            Please avoid running ``setup.py`` directly.
            Instead, use pypa/build, pypa/installer or other
            standards-based tools.

            See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
            ********************************************************************************

    !!
      self.initialize_options()
    /home/xb/anaconda3/envs/assembly/lib/python3.8/site-packages/torch/utils/cpp_extension.py:381: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
      warnings.warn(msg.format('we could not find ninja.'))
    error: [Errno 2] No such file or directory: ':/usr/local/cuda-11.4/bin/nvcc'
    [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.


Could you please give me some guidance on how to solve it?
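A note while waiting for an answer (this diagnosis is an assumption, based only on the leading colon in the reported path): torch.utils.cpp_extension locates the compiler by joining CUDA_HOME with bin/nvcc, so a CUDA_HOME assembled PATH-style (e.g. export CUDA_HOME=$CUDA_HOME:/usr/local/cuda-11.4 with CUDA_HOME initially unset) produces exactly ':/usr/local/cuda-11.4/bin/nvcc'. A minimal check:

    import os

    # torch.utils.cpp_extension resolves nvcc roughly as
    # os.path.join(CUDA_HOME, 'bin', 'nvcc'), so a stray leading ':' in
    # CUDA_HOME reproduces the broken path from the log above.
    cuda_home = os.environ.get('CUDA_HOME', '')
    print(repr(os.path.join(cuda_home, 'bin', 'nvcc')))
    # If this prints ':/usr/local/cuda-11.4/bin/nvcc', reset the variable
    # (export CUDA_HOME=/usr/local/cuda-11.4) and reinstall the custom ops.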

Problem with scripts/vis.py

Hi,
There is a problem with this script: the program crashes with the error 'rotation must be a tensor', raised by the _check_valid function in multi_part_assembly/utils/rotation.py.
In this case self._rot is a map object instead of a tensor, so the assert blocks execution.
What should I do to fix the problem?
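For reference, a minimal standalone reproduction and workaround (a sketch only; the lambda and quaternion list are illustrative, not the repo's code):

    import torch

    # Python 3's map() returns a lazy iterator, not a tensor, which is exactly
    # what the isinstance(..., torch.Tensor) check in _check_valid rejects.
    quats = [torch.tensor([1., 0., 0., 0.]), torch.tensor([0., 1., 0., 0.])]
    rot = map(lambda q: q / q.norm(), quats)  # a map object -> the assert fails
    rot = torch.stack(list(rot), dim=0)       # materialize into a (2, 4) tensor
    assert isinstance(rot, torch.Tensor)      # now passes

Wherever vis.py builds self._rot from a map(...), wrapping the result in torch.stack(list(...)) before it reaches the rotation class should avoid the assert.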

The 'info file' for splitting train/val list in trivial training

Hi, thanks for the great work and dataset.

I tried to use this code to start my training run, but it raised an error related to the default
_C.data_fn = 'everyday.train.txt':
'No such file or directory: ./everyday/everyday.train.txt'

I checked split_data.py and found that it needs the '--info_file' parameter as an argument.
However, there are no details about the format or data structure of the 'info_file' contents, either in this repo or at https://github.com/Breaking-Bad-Dataset/Breaking-Bad-Dataset.github.io

It would be appreciated if you could provide an example of, or instructions for, generating the info_file.

Thanks a lot!
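In the meantime, a hedged sketch of generating a split file (it assumes, possibly incorrectly, that the txt files simply list relative object paths, one per line, under the data root; the 80/20 ratio and layout are illustrative):

    import os
    import random

    data_root = './everyday'  # assumed layout: <root>/<category>/<object>/...
    entries = sorted(
        os.path.join(cat, obj)
        for cat in sorted(os.listdir(data_root))
        for obj in sorted(os.listdir(os.path.join(data_root, cat)))
    )
    random.seed(0)
    random.shuffle(entries)
    n_train = int(0.8 * len(entries))
    with open('everyday.train.txt', 'w') as f:
        f.write('\n'.join(entries[:n_train]) + '\n')
    with open('everyday.val.txt', 'w') as f:
        f.write('\n'.join(entries[n_train:]) + '\n')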

Including a new loss in the computation graph

Hi, first thanks for the nice work and for releasing it open source.

I have a question which is related to PyTorch Lightning but also to your code, so I wanted to ask here in case you could point me to some part of the code or other resources.
I am adding one (or more) loss functions to change the way the model learns the assembly task; I believe there is margin for improvement in the loss function calculation.

My question is: how can I add a custom loss and include it in the computation graph?

I added the loss function in base_model.py, added it to the loss_dict, and added its weight in the configuration file (calling our loss custom_loss, I added _C.loss = CN(); _C.loss.custom_loss_w = 1. in the config file).
Now when I run the training it goes through with no errors, but I see my loss keeping almost the same value without changes. One explanation could be that I need to train longer, but I suspect that the optimization is not including my new loss.

So I debugged the code a bit and printed out the losses, and got:

loss: trans_loss, value: tensor([[0.1740, 0.0799]], device='cuda:0', grad_fn=<StackBackward0>)
loss: rot_pt_cd_loss, value: tensor([[0.0029, 0.0040]], device='cuda:0', grad_fn=<StackBackward0>)
loss: transform_pt_cd_loss, value: tensor([[0.0024, 0.0012]], device='cuda:0', grad_fn=<StackBackward0>)
loss: custom_loss, value: tensor([[0.0262, 0.0298]], device='cuda:0')
loss: rot_loss, value: tensor([[0.4134, 0.3462]], device='cuda:0', grad_fn=<StackBackward0>)
loss: rot_pt_l2_loss, value: tensor([[0.0294, 0.1065]], device='cuda:0', grad_fn=<StackBackward0>)

As you can see, custom_loss is missing the grad_fn. From what I have read, this could mean the function is not connected to the computation graph, so the optimizer does not take it into account; that would explain why it never decreases.
I then switched my custom loss (which was an exponential) for a pre-defined one from PyTorch (torch.nn.L1Loss, for example) to check whether the problem was in the definition of the loss. (I read A Gentle Introduction to torch.autograd yesterday, but it did not resolve my doubts; should I add requires_grad somewhere? I did not see it in your code for the other loss functions.) The result is the same: still no grad_fn. So, where in the code should I look to include the loss in the computation graph?
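For reference, a minimal standalone demonstration of the symptom (not the repo's code): any computation that leaves autograd, e.g. under torch.no_grad() or after .detach(), produces a loss with grad_fn=None, which the optimizer then silently ignores.

    import torch

    pred = torch.randn(8, 3, requires_grad=True)

    with torch.no_grad():              # detached: the graph is not recorded
        bad_loss = (pred ** 2).mean()
    print(bad_loss.grad_fn)            # None -> gradients never flow

    good_loss = (pred ** 2).mean()     # identical math, inside autograd
    print(good_loss.grad_fn)           # <MeanBackward0 ...>

A loss printed without grad_fn, as in the dump above, is usually built from inputs that were detached somewhere upstream (detach(), .item(), numpy round-trips, or no_grad blocks).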

Thanks a lot for your time in advance.

Context for why I am trying to add a loss:
I got the training to work and was looking at the results, and it seems to me that the fragments stay in the same place where they were put at the beginning (at the origin). Even after a considerable number of epochs, both pieces are still there. I want to add a loss which forces them to be moved apart. I am starting with a simple idea about the minimum distance between two points: assuming we have the correct transformation for the assembly of two pieces, the minimum distance between any point on piece A and any point on piece B would be zero (any point on the border between the two pieces). Of course, a minimum distance of zero does not necessarily mean the assembly is correct, but it's a place to start; I would already be happy to see the two pieces being moved apart. More will follow later (maybe using the center of mass or more sophisticated methods), but my guess is that the estimation of the transformation needs to be guided by a meaningful loss on the assembly itself, meaning the two pieces should complement and not overlap each other. A sketch of this idea follows.
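A differentiable sketch of that minimum-distance idea (function and variable names are mine, not the repo's):

    import torch

    def min_dist_loss(pts_a, pts_b):
        # pts_a: (N, 3), pts_b: (M, 3) transformed part point clouds.
        # torch.cdist and min() both carry gradients, so the loss stays in
        # the graph as long as the inputs are the (non-detached) predictions.
        d = torch.cdist(pts_a.unsqueeze(0), pts_b.unsqueeze(0)).squeeze(0)  # (N, M)
        return d.min()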

Experimental settings

I have some questions about the experimental settings:

  1. In subsection 6.1, you say that models were trained separately on fractured objects with at most 20 fractured pieces for each category. I find that some categories contain very few models, such as Cookie (4 * 100 models) and Statue (4 * 100 models), while some categories have many more, such as Vase (106 * 100) and Bottle (73 * 100). Did you use any methods to avoid overfitting when training on the categories containing few models?
  2. Were the experimental results in the top block of Table 5 trained and tested on the artifact subset? And were the results in the bottom block of Table 5 obtained by fine-tuning the models trained on the everyday subset? Since one model was trained per category (20 models in total), which one did you use for fine-tuning?

Pretrained models

Hi, thank you for your amazing work :). Could you release the pretrained models needed to reproduce the results reported in the paper?

Some clarification questions

Hi @Wuziyi616,

Thanks for sharing a clean version of the BB benchmark code; it looks quite nice and well organized. I have some questions, though, which I would like to clarify.

  1. If I understand correctly, this config file for training tries to replicate the NSM work on the everyday subset of the Breaking Bad dataset.
  2. The NSM work also uses an SDF module, at least as described in their paper (it is not present anywhere in their available code). As I understand it, you are not making use of it here either.
  3. When the NSM work extracts the transformation-matrix error, they also consider the point-to-point distance error. Are you considering this somewhere in your code? I was trying to figure out whether and where you might be applying it, but I couldn't find anything, or it is not that clear (see the sketch after this list for what I mean).
  4. Can you elaborate a bit more on the geometric loss? It is not clear what you mean by saying that we do not need to match the GT for loss computation (maybe this is also related to the previous question). Also, the code creates a loss dictionary, but following the code I am not quite sure how it is structured and how the final loss is computed.
  5. It is also not clear how you define semantic versus geometric assembly, so it would be nice if you could elaborate on this as well.
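Regarding question 3, a hedged sketch of what the point-to-point distance error generally means (names and signature are illustrative, not the repo's API): apply the predicted and ground-truth transforms to the same part points and average the per-point Euclidean distance.

    import torch

    def point_to_point_error(pts, R_pred, t_pred, R_gt, t_gt):
        # pts: (N, 3); R_*: (3, 3) rotation matrices; t_*: (3,) translations.
        p_pred = pts @ R_pred.T + t_pred
        p_gt = pts @ R_gt.T + t_gt
        return (p_pred - p_gt).norm(dim=-1).mean()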

For now I am puzzled by the above, but as I get deeper into the code I will probably come up with more questions.

Thanks.

Volume constrained data

Hi,

I have a question about the new volume-constrained data. For the volume-constrained data, did you simply delete the samples that have small pieces from the dataset, or re-simulate the data? And does it use the same data split as the original data?

A question on duplicated data in BB dataset

I noticed that some point clouds are extremely similar, or even completely identical, when scanning the dataset. My question is: why does that happen?

For example, I try to get all samples with 2 parts in the Bowl class:

    BB_data = GeometryPartDataset(
        category='Bowl',
        num_points=10000,
        min_num_part=2,
        max_num_part=2,
        data_dir='dir',
        data_fn='data_split/everyday.train.txt',
        data_keys=('part_ids', ),
    )
but the 0-th sample is almost the same as the 1-st sample:

[two screenshots omitted: the first shows the 0-th sample, the second shows the 1-st sample]

That actually happens quite frequently; I estimate that almost 2/3 of the data in this case is duplicated. I am wondering whether it is possible to get rid of the duplicated data.

PS: I removed the recentering and random rotation augmentation in the __getitem__ function:

            # pc, gt_trans = self._recenter_pc(pc)
            # pc, gt_quat = self._rotate_pc(pc)
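As a rough way to quantify the duplication (a hedged sketch; indexing the dataset this way and the 'pc' key are assumptions about the returned dict), fingerprint each un-augmented point cloud and group identical hashes:

    import hashlib
    import numpy as np

    def pc_fingerprint(pc, decimals=3):
        # Quantize to absorb float noise, then hash the raw bytes.
        q = np.round(np.asarray(pc, dtype=np.float64), decimals)
        return hashlib.md5(q.tobytes()).hexdigest()

    groups = {}
    for idx in range(len(BB_data)):               # BB_data from the snippet above
        key = pc_fingerprint(BB_data[idx]['pc'])  # 'pc' key is an assumption
        groups.setdefault(key, []).append(idx)
    dups = {k: v for k, v in groups.items() if len(v) > 1}
    print(f'{len(dups)} duplicated groups out of {len(groups)}')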

Pretrained models to reproduce paper's results

Thank you for your great work and dataset!

I tried to reproduce the Global/DGL/LSTM results on the everyday subset (2-20 fragments) with the same experiment settings, but I got quite different results on some metrics. I got almost the same results on RMSE (Trans.) and PA (Part Accuracy), but could not match the others (RMSE (Rot.), CD).

Do you have any plans to share pretrained models that can reproduce the results in the paper (Breaking Bad)?
If you could, I would greatly appreciate it.

Thanks.

Unbroken models

Hi, I want to know how to obtain the mesh files of the unbroken models. If we directly merge all the fractures together, there will be a lot of interior faces left on the fracture surfaces.
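One possible route (a sketch, not the dataset's tooling: it assumes trimesh with a boolean backend such as Blender or OpenSCAD installed, and a hypothetical fragment_paths list) is a boolean union, which dissolves the interior fracture faces that plain concatenation keeps:

    import trimesh

    fragment_paths = ['frag_0.obj', 'frag_1.obj']      # hypothetical file names
    frags = [trimesh.load(p) for p in fragment_paths]  # assumes single meshes
    whole = trimesh.boolean.union(frags)               # requires a boolean engine
    whole.export('unbroken.obj')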
