Comments (15)

johleh commented on July 21, 2024

An approach that works with VoxelNet and should work with SECOND:

As far as I understand, the model is trained on the roughly 80 × 70.4 × 4 m volume in front of the car, and inference produces labels only for objects visible in image space.

To get inference results for all points, you could use your 360° data four times: rotate the data by 0°/90°/180°/270°, then use the pre-processing step that reduces these point clouds to the part visible in the image (or just cut everything that is not within -45° to 45° of the front-view center). Run inference on all four reduced point clouds and rotate the predictions back accordingly. Together, these predictions should cover all objects within the 360° point cloud.

Maybe two parts (the original and the 180°-rotated one) are enough.
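The four-rotation scheme above can be sketched as follows. This is a minimal sketch, not the project's code: it assumes a hypothetical `detect_front` wrapper around the front-view detector that returns boxes as (M, 7) arrays of x, y, z, w, l, h, yaw; adapt the slicing to your actual box format.

```python
import numpy as np

def rotate_z(points, angle):
    """Rotate the x/y columns of an (N, >=2) array around the z axis."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    out = points.copy()
    out[:, :2] = points[:, :2] @ rot.T
    return out

def front_wedge(points):
    """Keep only points within +/-45 degrees of the +x (front) axis."""
    angles = np.arctan2(points[:, 1], points[:, 0])
    return points[np.abs(angles) <= np.pi / 4]

def detect_360(points, detect_front):
    """Run a front-view-only detector four times to cover 360 degrees."""
    all_boxes = []
    for k in range(4):
        angle = k * np.pi / 2
        rotated = rotate_z(points, -angle)    # bring this sector to the front
        boxes = detect_front(front_wedge(rotated))
        if len(boxes):
            boxes = boxes.copy()
            boxes[:, :2] = rotate_z(boxes, angle)[:, :2]  # rotate centers back
            boxes[:, 6] += angle                          # and the headings
            all_boxes.append(boxes)
    return np.concatenate(all_boxes) if all_boxes else np.zeros((0, 7))
```

For the two-part variant, only k in {0, 2} would be used and the wedge widened to the front half-plane.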

from second.pytorch.

Oofs commented on July 21, 2024

@johleh Thanks for your advice. That seems a reasonable approach and I will try it.
I found that simply changing the following ranges in the .config file to the desired values makes the net detect a bigger area: point_cloud_range, anchor_ranges, post_center_limit_range. However, the result is not quite stable.
[screenshot: car_detect]

traveller59 commented on July 21, 2024

@Oofs You are right. Note that if you use anchor_generator_stride (not recommended), you need to change the offsets when you change the detection range.
In addition, the pretrained model encodes the absolute locations of voxels, so if you want to detect objects outside the camera range in KITTI, you may need to train a new model.
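The offset/range coupling can be illustrated with a small sketch. This assumes the common convention (as in the VoxelNet-style car config) that anchor centers are laid out as offset + i × stride, half a stride inside the range; the exact relation in your config may differ, so treat the numbers below as an example, not as the project's canonical values.

```python
def anchor_offset(range_min: float, stride: float) -> float:
    """First anchor-center coordinate: half a stride inside the range."""
    return range_min + stride / 2.0

# Example: a range starting at (0, -40) with a 0.4 m anchor stride gives
# offsets (0.2, -39.8); after enlarging the range to start at (-80, -69.12),
# the offsets must move with it.
print(anchor_offset(0.0, 0.4), anchor_offset(-40.0, 0.4))
print(anchor_offset(-80.0, 0.4), anchor_offset(-69.12, 0.4))
```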

bigsheep2012 commented on July 21, 2024

@traveller59 May I ask why the range for the z axis is [-3, 1] ([0, -40, -3, 70.4, 40, 1] in car.config)? According to the official KITTI documentation, the z axis of the lidar coordinate frame points up towards the sky, so why is -3 needed? It seems to evaluate points below the ground.

traveller59 commented on July 21, 2024

@bigsheep2012 The distribution of car bottom-center locations lies in [-3, 1]; you can plot the distribution yourself.
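A quick way to check that claim, as a sketch: it assumes ground-truth boxes are already available as an (N, 7) array in lidar coordinates with column 2 holding the bottom-center z (SECOND's box convention); loading the labels is left out.

```python
import numpy as np

def bottom_center_z_histogram(boxes, bins=20, z_range=(-3.0, 1.0)):
    """Histogram the bottom-center heights of ground-truth boxes.

    `boxes` is an (N, 7) array: x, y, z, w, l, h, yaw in lidar coordinates,
    with z at the bottom center of the box.
    """
    counts, edges = np.histogram(boxes[:, 2], bins=bins, range=z_range)
    return counts, edges
```

With the sensor mounted about 1.73 m above the road, car bottoms at road level sit near z ≈ -1.73, which is why the z range dips well below zero even though the axis points up.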

kwea123 commented on July 21, 2024

The Velodyne sensor is mounted 1.73 m above the ground (on top of the car).
For detecting points behind the car, maybe you can multiply the x coordinates by -1, run the detection, then transform the boxes back.
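That mirror trick could look like this (a minimal sketch; `detect_front` is a hypothetical wrapper around the front-view detector returning (M, 7) boxes of x, y, z, w, l, h, yaw). Note that mirroring x flips chirality, so the heading maps to π − yaw rather than yaw + π:

```python
import numpy as np

def detect_rear_by_mirror(points, detect_front):
    """Mirror rear points to the front, detect, then mirror the boxes back."""
    mirrored = points.copy()
    mirrored[:, 0] *= -1                   # rear half now faces the detector
    boxes = detect_front(mirrored)
    if len(boxes):
        boxes = boxes.copy()
        boxes[:, 0] *= -1                  # mirror box centers back
        boxes[:, 6] = np.pi - boxes[:, 6]  # a mirror reverses the heading sense
    return boxes
```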

cedricxie commented on July 21, 2024

Hi @Oofs, it is great to see that you also seem to have tried some detections in ROS.

I ran SECOND as a ROS node on one of the KITTI sequences; the result is at youtube link and the code is at repository link. I feel the performance is not as good as expected, and I might have done something wrong. Could you please take a look and see if you have any suggestions for improvement? Thank you very much.

Yuesong

kwea123 commented on July 21, 2024

@Oofs Hi, how does the network perform on 16-beam lidar data? Thank you.

chowkamlee81 commented on July 21, 2024

I have changed point_cloud_range to [-80, -69.12, -3, 80, 69.12, 1], but I am still not able to detect objects for x < 0. Kindly suggest a solution if anybody has found one.

chowkamlee81 commented on July 21, 2024

@kwea123 @traveller59 @cedricxie Kindly help us with this issue.

chowkamlee81 commented on July 21, 2024

@Oofs, how did you get detection results for x < 0? Kindly suggest what parameters I need to change.

So far I have used point_cloud_range: [-80, -69.12, -3, 80, 69.12, 1] and post_center_limit_range: [-80, -69.12, -5, 80, 69.12, 5].

Kindly help.

kwea123 commented on July 21, 2024

Don't modify the config; leave the range at [0, -40, -3, 70.4, 40, 1] (or whatever it is) as it is.
Basically, just do two detections, one for x > 0. To detect on x < 0, "flip" those points to the front by multiplying x by -1 (or do x *= -1 and y *= -1 to rotate them to the front), then run detection on these points; finally, rotate the boxes back (you may have to write this code yourself).
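The rotation variant (x *= -1 and y *= -1, i.e. a 180° turn about z, which unlike a pure mirror preserves chirality) could be sketched as follows, again assuming a hypothetical `detect_front` that returns (M, 7) boxes of x, y, z, w, l, h, yaw:

```python
import numpy as np

def detect_both_halves(points, detect_front):
    """Cover 360 degrees with two passes of an unmodified front-view detector."""
    front = detect_front(points)           # pass 1: the x > 0 half as-is

    rotated = points.copy()
    rotated[:, :2] *= -1                   # pass 2: rotate cloud 180 deg about z
    rear = detect_front(rotated)
    if len(rear):
        rear = rear.copy()
        rear[:, :2] *= -1                  # rotate box centers back
        rear[:, 6] += np.pi                # and the headings
    return np.concatenate([front, rear], axis=0)
```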

chowkamlee81 commented on July 21, 2024

OK, thanks, let me try the suggested steps.

dingfuzhou commented on July 21, 2024

> Ok thanks let me try with the suggested step..

Hi, have you managed to correctly detect the objects behind the camera? Would you please provide some suggestions?
