
How to run inference · second.pytorch (closed)

traveller59 commented on June 23, 2024
How do I run inference?


Comments (14)

traveller59 commented on June 23, 2024

inference.py is currently only used by the KITTI viewer; you can check the inference steps in viewer.py.
There is no way to run inference on a single example from the command line for now: either use evaluate in train.py to predict the entire test set, or use viewer.py to run inference from the GUI.
Inference steps (a sketch follows the list):

  1. Read the point cloud [N, 4] and the calib matrices from file (needed to remove points outside the camera frustum and to generate the 2D bbox / camera-frame 3D box).
  2. Use remove_outside_points to drop those points.
  3. Generate anchors (and anchor masks, if you remove empty anchors), then use prep_pointcloud to get the example dict (at predict time it only needs to generate voxels, num_points_per_voxel and coordinates).
  4. Pass the example dict to net.call() and get the results.
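
A minimal sketch of those four steps for a single KITTI frame. It assumes net, voxel_generator and target_assigner were already built from the config and a checkpoint, the way viewer.py does; load_calib is a hypothetical calib-file parser, and the exact signatures (generate, generate_anchors, example_convert_to_torch) may differ between versions of this repo:

    import numpy as np
    import torch
    from second.core import box_np_ops
    from second.pytorch.train import example_convert_to_torch

    # 1. read point cloud [N, 4] and calib matrices (rect, Trv2c, P2 as 4x4)
    points = np.fromfile("velodyne/000007.bin", dtype=np.float32).reshape(-1, 4)
    rect, Trv2c, P2 = load_calib("calib/000007.txt")  # hypothetical helper
    image_shape = np.array([375, 1242])  # H, W of the matching image_2 file

    # 2. remove points outside the camera frustum
    points = box_np_ops.remove_outside_points(points, rect, Trv2c, P2, image_shape)

    # 3. voxelize and generate anchors; at predict time only voxels,
    # num_points_per_voxel and coordinates are needed
    voxels, coords, num_points = voxel_generator.generate(points, max_voxels=20000)
    coords = np.pad(coords, ((0, 0), (1, 0)), mode="constant")  # batch index 0

    grid_size = voxel_generator.grid_size
    # feature map = voxel grid downsampled by the middle extractor
    # (factor 8 here; depends on your config)
    feature_map_size = [1, *(grid_size[:2] // 8)[::-1]]
    anchors = target_assigner.generate_anchors(feature_map_size)["anchors"]

    example = {
        "voxels": voxels,
        "num_points": num_points,
        "coordinates": coords,
        "anchors": anchors.reshape((1, -1, 7)),
        "rect": rect[np.newaxis, ...],
        "Trv2c": Trv2c[np.newaxis, ...],
        "P2": P2[np.newaxis, ...],
        "image_idx": np.array([7]),
        # add "anchors_mask" here if your config removes empty anchors
    }

    # 4. run the network; predict mode returns one dict per batch element
    example = example_convert_to_torch(example, torch.float32)
    with torch.no_grad():
        predictions = net(example)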


bigsheep2012 commented on June 23, 2024

Thanks a lot.
I just created a new .py file following your suggestions that runs inference on a single example.
I guess I can just remove the code related to 'infos' if I do not care about the bbox in the camera's coordinate frame, right?


traveller59 commented on June 23, 2024

Yes, but you still need to create the dict that net.call expects, and you need to add some code in VoxelNet.predict to return LiDAR boxes, around second/pytorch/models/voxelnet.py line 924:

if self.lidar_only:
    predictions_dict = {
        "box3d_lidar": final_box_preds,
        "scores": final_scores,
        "label_preds": label_preds,
        "image_idx": img_idx,
    }
else:
    # camera-coordinate path: keep the original code here, e.g.
    final_box_preds_camera = box_torch_ops.box_lidar_to_camera(
        final_box_preds, rect, Trv2c)
    # ... build predictions_dict with box3d_camera as before ...
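
Once predict returns that dict, pulling the LiDAR-frame results back to numpy could look like this (a sketch; names follow the snippet above, and the box layout [x, y, z, w, l, h, yaw] should be double-checked against box_np_ops):

    # net(example) in predict mode returns a list with one dict per example
    pred = predictions[0]
    boxes = pred["box3d_lidar"].detach().cpu().numpy()    # [M, 7]
    scores = pred["scores"].detach().cpu().numpy()        # [M]
    labels = pred["label_preds"].detach().cpu().numpy()   # [M]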


bigsheep2012 commented on June 23, 2024

Thank you. Closing the issue.


kxhit commented on June 23, 2024

Hi @bigsheep2012!
I'm trying to use the pretrained model to predict results on my own point cloud. Have you figured it out? Would you mind sharing your code? Thanks a lot!


bigsheep2018 commented on June 23, 2024

Hello @kxhit,

I am not using a pretrained model, since @traveller59 said there might be some issues using a pretrained model with Facebook Research's SparseConvNet.

I trained the model from scratch for about one day on a 1080 Ti. I am afraid I may not be able to share the code because it was done at a company.

My approach was to first extract the voxel generator, target_assigner and voxelnet parts to build a small version without any data augmentation, and then add the other pieces.

Make modifications in box_np_ops.* carefully, as there are many small changes needed if you are using your own data.


kxhit commented on June 23, 2024

@bigsheep2012 @bigsheep2018 @traveller59 Thanks for your reply!
I found the runtime is much higher than the time stated in the paper and on the KITTI benchmark (0.05 s).
My test info is:

[14:03:35] input preparation time: 0.2631642818450928
[14:03:36] detection time: 0.291057825088501

I want to know which measurement the 0.05 s specifically refers to, and why it takes me so much longer. Am I doing something wrong?
Waiting for a reply! Thanks!


traveller59 commented on June 23, 2024

@kxhit some code needs just-in-time (JIT) compilation at runtime, so the first run can take extra time. You can see the input preparation time is very long because point_to_voxel is a numba.jit function; the following runs should take much less time.
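
A quick way to see the warm-up cost outside the repo: any numba.jit function is compiled on its first call, so timing the same call twice shows the difference (a trivial stand-in, not the actual point_to_voxel):

    import time
    import numba
    import numpy as np

    @numba.jit(nopython=True)
    def column_sum(points):
        # stand-in for a jitted kernel like point_to_voxel
        total = 0.0
        for i in range(points.shape[0]):
            total += points[i, 0]
        return total

    pts = np.random.rand(100000, 4).astype(np.float32)
    t0 = time.time(); column_sum(pts); print("first run :", time.time() - t0)
    t0 = time.time(); column_sum(pts); print("second run:", time.time() - t0)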


bigsheep2018 commented on June 23, 2024

Hi @kxhit,

Firstly, I do not think @traveller59's KITTI submission was produced with the shared GitHub version, since GPU sparse convolution is not implemented in the GitHub code, so the time cost may differ.

Secondly, if you are using the author's original code (and the reduced point clouds mentioned in the README) without any modification, you should get a faster result. In my case, on my desktop with a 1060 6G, the detection time is 100-120 ms.


kxhit commented on June 23, 2024

As you said, the following runs take much less time. Thanks! Testing on a TITAN Xp 12G.
So this open-source code can't achieve the 0.05 s performance right now, right?

[14:37:15] input preparation time: 0.013611555099487305
[14:37:15] detection time: 0.08942961692810059


bigsheep2018 commented on June 23, 2024

@kxhit
Yes, I think that is the case. You can refer to his paper, which is cited on the KITTI website. The GPU sparse convolution implemented by the author is not shared; this GitHub code uses the default sparse convolution from Facebook Research's SparseConvNet.


traveller59 commented on June 23, 2024

@kxhit @bigsheep2012 Currently I can't reproduce the speed on Ubuntu 18.04 with PyTorch 1.0 and the newest SparseConvNet. The forward time (not including input preparation time) for point cloud 107 is 0.069 s in the current environment, but I could get 0.049 s on 16.04 with a 1080 Ti; you can check the deprecated KittiViewer picture in README.md.
I can't even build SparseConvNet correctly after lots of tries; now I am using a wheel package built in a 16.04 Docker container. I have no idea about this speed problem for now.
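
For anyone comparing numbers: CUDA kernels launch asynchronously, so the forward pass should be timed with explicit synchronization. A sketch, reusing net and example from the steps above:

    import time
    import torch

    torch.cuda.synchronize()
    t0 = time.time()
    with torch.no_grad():
        predictions = net(example)  # forward only, no input preparation
    torch.cuda.synchronize()
    print("forward time: %.3f s" % (time.time() - t0))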


chandrakantkhandelwal commented on June 23, 2024

@bigsheep2012 I am also trying to predict on LiDAR data only (from a custom LiDAR); I do not have image or calib input. Could you please tell me which files I need to modify to remove the image and calib parameters?


xieqi1996 commented on June 23, 2024

I have the same problem (LiDAR-only data, no image or calib input). Have you solved it?

