Comments (15)
An approach that works with VoxelNet and should work with SECOND:
As far as I understand, the model is trained on the 80x70x4 m area in front of the car, and inference produces labels only for objects visible in image space.
To get predictions for all points, you could use your 360° data four times: rotate the data by 0°/90°/180°/270°.
Then use the pre-processing step that reduces these point clouds to the part that is visible in the image
(or just cut everything that is not within -45° to 45° of the front-view center).
Run inference on all 4 reduced point clouds and rotate the predictions back accordingly.
Together these predictions should cover all objects in the 360° point cloud.
Maybe 2 parts (original and 180° rotated) are enough.
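The rotate/detect/rotate-back idea can be sketched with numpy as follows. Here `run_inference` is a hypothetical stand-in for SECOND's predict call, and the (M, 7) box layout [x, y, z, w, l, h, yaw] is an assumption, not the repo's exact API:

```python
import numpy as np

def rotate_z(points, angle):
    """Rotate the x/y coordinates of an (N, 4) point cloud about the z axis."""
    c, s = np.cos(angle), np.sin(angle)
    out = points.copy()
    out[:, :2] = points[:, :2] @ np.array([[c, -s], [s, c]]).T
    return out

def detect_360(points, run_inference):
    """Cover the full sweep: rotate, detect in the front wedge, rotate back.

    `run_inference` is a hypothetical callable: point cloud in,
    (M, 7) boxes [x, y, z, w, l, h, yaw] out (an assumed layout).
    """
    all_boxes = []
    for angle in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):
        rotated = rotate_z(points, angle)
        # keep only the front wedge the model was trained on (-45° .. 45°)
        keep = np.abs(np.arctan2(rotated[:, 1], rotated[:, 0])) <= np.pi / 4
        boxes = run_inference(rotated[keep]).copy()
        # map predictions back into the original frame
        c, s = np.cos(-angle), np.sin(-angle)
        boxes[:, :2] = boxes[:, :2] @ np.array([[c, -s], [s, c]]).T
        boxes[:, 6] -= angle
        all_boxes.append(boxes)
    return np.concatenate(all_boxes, axis=0)
```

If you only run the 0° and 180° passes as suggested above, widen the wedge to ±90° so the two halves still cover the full sweep.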
from second.pytorch.
@johleh thanks for your advice. That seems like a reasonable approach, and I will try it.
I found that simply changing the following ranges in the .config file to the desired values makes the net detect a bigger area:
- point_cloud_range
- anchor_ranges
- post_center_limit_range

but the result is not quite stable.
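For reference, a sketch of how those fields might look in a SECOND-style .config widened to a symmetric x range. The values are illustrative only; the field names follow second.pytorch's car config, but check your own config version and keep its original z values:

```
voxel_generator {
  # widened from [0, -40, -3, 70.4, 40, 1] to also cover x < 0
  point_cloud_range: [-70.4, -40, -3, 70.4, 40, 1]
}
# ...
anchor_ranges: [-70.4, -40, -1.78, 70.4, 40, -1.78]  # keep your config's z values
# ...
post_center_limit_range: [-70.4, -40, -5, 70.4, 40, 5]
```

Note that with a fixed voxel size, doubling the x extent roughly doubles the number of voxels, so memory use and runtime grow accordingly.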
from second.pytorch.
@Oofs you are right. Note that if you use anchor_generator_stride (not recommended), you need to change offset when you change the detection range.
In addition, the pretrained model encodes the absolute locations of voxels, so if you want to detect objects outside the camera range in KITTI, you may need to train a new model.
from second.pytorch.
@traveller59 May I ask why the range is [-3, 1] for the z axis ([0, -40, -3, 70.4, 40, 1] in car.config)? According to the official KITTI documentation, the z axis of the lidar coordinate frame points upward toward the sky. So why is -3 needed? It seems to include points below the ground.
from second.pytorch.
@bigsheep2012 The distribution of car bottom-center locations lies in [-3, 1]; you can plot the distribution yourself.
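A quick way to sanity-check this, sketched under the assumption that the ground-truth boxes have already been transformed into the lidar frame as center-parameterized (N, 7) arrays [x, y, z, w, l, h, yaw] (the actual box origin convention in second.pytorch may differ):

```python
import numpy as np

def bottom_z_range(boxes):
    """z extent of the box bottom faces, assuming z is the box *center* height."""
    bottom_z = boxes[:, 2] - boxes[:, 5] / 2.0  # center z minus half the height
    return float(bottom_z.min()), float(bottom_z.max())

# e.g. a car whose center sits 1.0 m below the sensor, 1.5 m tall:
boxes = np.array([[10.0, 2.0, -1.0, 1.6, 3.9, 1.5, 0.0]])
lo, hi = bottom_z_range(boxes)  # bottom at -1.75 m, well inside [-3, 1]
```

Since the sensor sits about 1.73 m above the road, car bottoms cluster around z = -1.73 m in the lidar frame; the [-3, 1] range simply leaves margin for slopes, curbs, and annotation noise.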
from second.pytorch.
The velodyne sensor is located 1.73 m above the ground (on top of the car).
For detecting points behind the car, you could try multiplying the x coordinates by -1, running the detection, and then transforming the boxes back.
from second.pytorch.
Hi @Oofs, it is great to see that you have also tried running detections in ROS.
I ran SECOND as a ROS node on one of the KITTI sequences; the result is at youtube link and the code is at repository link. The performance is not as good as I expected, and I might have done something wrong. Could you please take a look and see if you have any suggestions for improvement? Thank you very much.
Yuesong
from second.pytorch.
@Oofs Hi, how does the network perform on 16-line lidar data? Thank you.
from second.pytorch.
I have changed point_cloud_range to [-80, -69.12, -3, 80, 69.12, 1], but I am still not able to detect objects at x < 0. Kindly suggest a solution if anybody has found one.
from second.pytorch.
@kwea123 @traveller59 @cedricxie kindly help us with this issue.
from second.pytorch.
@Oofs, how did you get detection results for x < 0? Kindly suggest which parameters I need to change.
So far I have used point_cloud_range: [-80, -69.12, -3, 80, 69.12, 1] and post_center_limit_range: [-80, -69.12, -5, 80, 69.12, 5].
Kindly help.
from second.pytorch.
Don't modify the config; leave the range at [0, -40, -3, 70.4, 40, 1] (or whatever it is) as it is.
Basically, just do two detections: one for x > 0. To do detection on x < 0, "flip" the points to the front by multiplying x by -1 (or do x *= -1 and y *= -1 to rotate them to the front), run detection on those points, and finally rotate the boxes back (you may have to write this code yourself).
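A minimal numpy sketch of the flip-detect-flip-back step. `run_inference` is a hypothetical stand-in for SECOND's predict call, and the (M, 7) box layout [x, y, z, w, l, h, yaw] is an assumption:

```python
import numpy as np

def detect_rear(points, run_inference):
    """Detect objects at x < 0 with a front-only model.

    `run_inference` is a hypothetical callable returning boxes as an
    (M, 7) array [x, y, z, w, l, h, yaw] (assumed layout). Negating both
    x and y is a 180° rotation, so the cloud stays a rigid transform of
    the original rather than a mirror image.
    """
    flipped = points.copy()
    flipped[:, 0] *= -1
    flipped[:, 1] *= -1
    boxes = run_inference(flipped).copy()
    boxes[:, 0] *= -1          # rotate box centers back to the rear
    boxes[:, 1] *= -1
    boxes[:, 6] += np.pi       # undo the 180° yaw offset
    return boxes
```

Running the model once on the original points and once through `detect_rear`, then concatenating the two box sets, covers both halves of the sweep.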
from second.pytorch.
Ok thanks, let me try the suggested steps.
from second.pytorch.
Hi, have you managed to correctly detect the objects behind the camera? Could you provide some suggestions, please?
from second.pytorch.