
epvelasco / lidar-camera-fusion

187 stars · 3 watchers · 37 forks · 22.43 MB

The code, implemented in ROS, projects a point cloud obtained by a Velodyne VLP16 3D lidar sensor onto an image from an RGB camera.

License: GNU General Public License v3.0

Languages: C++ 94.90%, CMake 5.10%
Topics: armadillo-installation, armadillo-library, camera, interpolation, lidar, lidar-camera-calibration, lidar-camera-fusion, lidar-point-cloud, realsense-camera, velodyne-vlp
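For context on what the node does internally: projecting a lidar point onto the image is a standard pinhole-camera operation. Each point is first transformed into the camera frame with the extrinsic rotation and translation (the rlc/tlc parameters that recur in the issues below) and then mapped to pixel coordinates with the intrinsic matrix. A minimal, self-contained sketch with placeholder calibration values — this is not the repo's actual code:

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
  // Camera intrinsics K (fx, fy, cx, cy) -- placeholder values, not the repo's.
  Eigen::Matrix3d K;
  K << 900.0,   0.0, 640.0,
         0.0, 900.0, 360.0,
         0.0,   0.0,   1.0;

  // Extrinsics: rotation rlc and translation tlc, lidar frame -> camera frame.
  Eigen::Matrix3d rlc = Eigen::Matrix3d::Identity();
  Eigen::Vector3d tlc(0.0, 0.1, 0.0);

  // One lidar point in the lidar frame.
  Eigen::Vector3d p_lidar(5.0, 0.2, -0.3);

  // Transform into the camera frame, then project with the pinhole model.
  Eigen::Vector3d p_cam = rlc * p_lidar + tlc;
  if (p_cam.z() > 0.0) {                 // keep only points in front of the camera
    Eigen::Vector3d uvw = K * p_cam;
    double u = uvw.x() / uvw.z();
    double v = uvw.y() / uvw.z();
    std::cout << "pixel: (" << u << ", " << v << ")\n";
  }
  return 0;
}
```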


lidar-camera-fusion's Issues

ZED vertical mount, projection is wrong

Hello!

I've been experimenting with a ZED camera and an Ouster lidar. The sensors are positioned so that the lidar faces forward while the camera faces to its right; the camera is also mounted vertically. I've calculated the transformation matrix and accounted for this setup in the launch file configuration, but the projection of lidar points onto the ZED image is quite poor. My first suspicion was that the lidar and the ZED are initialized in different coordinate systems, but trying variations of those frames did not make much difference. I also wondered whether the intermediate step of using RangeImage and its LASER_FRAME plays a role here, but I am not sure.

Please note that when I mounted both sensors facing forward, I could easily adjust the parameters to get the output I wanted, but this new setup breaks constantly.

Does anyone have suggestions for this kind of setup?
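A note on setups like this: rather than folding everything into one hand-tuned matrix, it can help to compose the calibrated lidar-to-camera extrinsic with the extra mounting rotation explicitly, so each factor can be checked on its own. A hedged Eigen sketch — the rotation axis and sign below are assumptions to verify against the actual frames:

```cpp
#include <cmath>
#include <Eigen/Dense>

// Compose the calibrated lidar->camera extrinsic with an extra mounting
// rotation, e.g. a camera rolled 90 degrees about its optical (z) axis.
// The axis and sign are assumptions; check them against your frames.
Eigen::Matrix4d composeExtrinsic(const Eigen::Matrix3d& R_calib,
                                 const Eigen::Vector3d& t_calib) {
  Eigen::Matrix3d R_mount =
      Eigen::AngleAxisd(M_PI / 2.0, Eigen::Vector3d::UnitZ()).toRotationMatrix();

  Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
  T.topLeftCorner<3, 3>()  = R_mount * R_calib;   // mounting rotation applied after calibration
  T.topRightCorner<3, 1>() = R_mount * t_calib;
  return T;
}
```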

camera lidar matrix

Hello, I use the same calibration repo. May I ask which rlc and tlc you used, the average or the final one? I'm using an Ouster 128-line lidar.
Thank you!

Error in pose estimation when using VLP-16

Hi, you recommended FLOAM for pose estimation. I used FLOAM on my own dataset with the scan line count set to 16, but the pose estimate is wrong: the error grows along the z-axis, as shown below. Could you give me any advice?
(attached screenshot 84c5497a-f093-4ee4-a2a2-53adcad21d24.jpeg failed to embed)

type of extrinsic parameters

Hi, you mentioned that you used lidar_camera_calibration (https://github.com/ankitdhall/lidar_camera_calibration) to generate the extrinsic parameters used in this package. I see that it "transforms all the points in the LiDAR frame to the (monocular) camera frame". What if I used a different lidar-camera calibration package whose extrinsic parameters "transform the camera frame (parent) into the lidar frame (child)"? Am I still able to use those extrinsic parameters? When I try to, I don't see the fusion (pcOnImage and points2); with your default parameters from this package, I do. I would appreciate any input, thank you.
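For reference, converting between the two conventions is a rigid-transform inversion: if (R, t) maps camera-frame points into the lidar frame, then (R^T, -R^T t) maps lidar-frame points into the camera frame, which is the direction this package's rlc/tlc appear to expect. A minimal Eigen sketch:

```cpp
#include <Eigen/Dense>

// Invert a rigid transform: if (R, t) maps camera-frame points to the lidar
// frame, then (R^T, -R^T t) maps lidar-frame points to the camera frame.
void invertExtrinsic(const Eigen::Matrix3d& R, const Eigen::Vector3d& t,
                     Eigen::Matrix3d& R_inv, Eigen::Vector3d& t_inv) {
  R_inv = R.transpose();
  t_inv = -R.transpose() * t;
}
```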

No points in interpolated pointcloud

Thank you for the great contribution. I was trying to test this on a Velodyne Puck LITE (16 channels) with a FLIR Blackfly S camera (Spinnaker driver). The output matrix obtained after running the suggested calibration package is as follows:

--------------------------------------------------------------------
After 1000 iterations
--------------------------------------------------------------------
Average translation is:
0.00709106
   0.30344
  0.106444
Average rotation is:
   0.99984  0.0134891  0.0117732
-0.0128457   0.998506 -0.0531141
 -0.012472  0.0529544   0.998519
Average transformation is: 
   0.99984  0.0134891  0.0117732 0.00709106
-0.0128457   0.998506 -0.0531141    0.30344
 -0.012472  0.0529544   0.998519   0.106444
         0          0          0          1
Final rotation is:
 0.0125693  -0.999819 -0.0142852
-0.0531243  0.0135985  -0.998495
  0.998509  0.0133093 -0.0529438
Final ypr is:
  1.80313
 -1.62541
-0.246282
Average RMSE is: 0.0250422
RMSE on average transformation is: 0.0615788

With this in mind, the params file was written as follows:

camera_matrix: [2.30402682e+03, 0.00000000e+00, 1.02375433e+03,
0.00000000e+00, 2.30541892e+03, 7.42403621e+02, 
0.00000000e+00, 0.00000000e+00, 1.00000000e+00,]

# Final
rlc: [ 0.0125693,  -0.999819, -0.0142852,
-0.0531243,  0.0135985,  -0.998495,
  0.998509,  0.0133093, -0.0529438]

# Average  
# rlc: [0.99984 , 0.0134891,  0.0117732,
# -0.0128457,   0.998506, -0.0531141,
#  -0.012472,  0.0529544 ,  0.998519]

tlc: [ 0.00709106, 0.30344, 0.10644]

When using the final rotation matrix, the output has only a couple of points (which are rotated) in the top-left corner, and it does not appear to be the correct transformation.
(screenshot attached: Screenshot from 2022-10-10 17-00-39)

When using the average rlc values, the resulting point cloud has no visible points at all.

Is there some additional rotation that needs to be applied to the rotation matrix provided by the calibration? Is there something wrong with my extrinsics? Any help would be appreciated.
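One repo-independent sanity check is to verify that the matrix pasted into rlc is a valid rotation (orthonormal, determinant +1), and then to project a known lidar point by hand with the posted intrinsics; if the computed pixel lands outside the image or the depth is negative, the extrinsic convention is the likely culprit. A sketch using the values quoted above:

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
  // "Final rotation" and translation as posted in this issue.
  Eigen::Matrix3d rlc;
  rlc <<  0.0125693, -0.999819,  -0.0142852,
         -0.0531243,  0.0135985, -0.998495,
          0.998509,   0.0133093, -0.0529438;
  Eigen::Vector3d tlc(0.00709106, 0.30344, 0.10644);

  // A rotation matrix must satisfy R * R^T = I and det(R) = +1.
  std::cout << "||R R^T - I|| = "
            << (rlc * rlc.transpose() - Eigen::Matrix3d::Identity()).norm()
            << ", det = " << rlc.determinant() << "\n";

  // Project a point 5 m straight ahead of the lidar with the posted intrinsics.
  Eigen::Matrix3d K;
  K << 2304.02682, 0.0, 1023.75433,
       0.0, 2305.41892, 742.403621,
       0.0, 0.0, 1.0;
  Eigen::Vector3d p_cam = rlc * Eigen::Vector3d(5.0, 0.0, 0.0) + tlc;
  Eigen::Vector3d uvw = K * p_cam;
  std::cout << "pixel: (" << uvw.x() / uvw.z() << ", "
            << uvw.y() / uvw.z() << "), depth " << p_cam.z() << "\n";
  return 0;
}
```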

About Livox lidar instead of velodyne

First of all, thank you very much for your outstanding contribution, which introduced me to this excellent lidar-camera fusion project. The lidar we use is solid-state rather than mechanical like the Velodyne. Is there anything that needs attention when replacing the Velodyne with a solid-state lidar? After changing the lidar and image topics, I can currently display the original image and point cloud, but not the fused image and point cloud. I hope you can comment on this part. Thank you so much.

Fusing an Ouster instead of a Velodyne

Assuming I did the calibration, would it be okay to use the point cloud of an Ouster instead of a Velodyne?

What would I need besides changing the topic and frame name?
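In general, the minimal change for a different spinning lidar is remapping the input cloud topic and updating the frame id in the launch file. A hedged sketch — the package, node, and topic names below are assumptions, so check them against the repo's actual launch file:

```xml
<!-- Sketch only: node and topic names are assumptions; verify against the repo. -->
<launch>
  <node pkg="lidar_camera_fusion" type="lidar_camera_node"
        name="lidar_camera_fusion" output="screen">
    <!-- Point the node at the Ouster cloud instead of the Velodyne one. -->
    <remap from="/velodyne_points" to="/os_cloud_node/points"/>
  </node>
</launch>
```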

Using Ouster instead of Velodyne

Hey, my colleague and I are trying to use the code you provided, but with a different lidar. Both lidars use the ROS PointCloud2 datatype.

We changed the input ROS topics and the launch files, and everything starts without problems.

When RViz starts, we can see the image from the lidar, and we can configure the inputs so that we also see the image from the camera, but we can't see any fusion.

Can you help us?

Fusion not happening on my data

I have tried to fuse lidar and camera data using two rosbags: one downloaded from the internet, and one I recorded myself with my lidar (RS-LiDAR-16) and camera (ZED, data from the left eye only).

The downloaded data works pretty well, but with the data I recorded nothing is published (the published topic is blank). Why is this happening? Is there anything to take care of when recording my own data?

Some error occurs when launching lidar_camera_node

I get the same error when using the rosbag you provided and with my own data.
My PCL version is 1.8.1.
I also tried modifying cfg_params.yaml.
(two screenshots of the error output attached, dated 2024-03-31)
Could you please help me solve this problem? Thank you very much!

Interpolation node dying when using an Ouster OS0-32U lidar instead of a Velodyne

Greetings,
I have been following this repo for quite a while as part of my project. I am using an Ouster OS0-32U lidar, which has 32 laser scans (instead of 16 on the Velodyne Puck). I have changed every subscribed and published topic to match my dataset. Below is my setup:

  • Ouster OS0-32U lidar
  • Intel RealSense D435 stereo camera

I have also followed the previous issues here and here, but they don't solve our problem. After everything, the interpolation node dies for some unknown reason. I have an idea of what the issue might be (not sure though), but I don't know how and where to make the changes: the Velodyne lidar has 4 channels, whereas the Ouster lidar has 9 channels according to its datasheet. Below is my output; the node exits with a pid error.
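Since the Velodyne and Ouster drivers publish PointCloud2 messages with different field layouts (the 4 vs. 9 channels mentioned above), a useful first step is to print the fields of the incoming cloud and compare them with what the interpolation node reads (x, y, z, intensity, ring, ...). A minimal ROS C++ probe — the topic name is an assumption:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

// Print the field layout of the first incoming cloud, then exit, so it can be
// compared with the fields the fusion/interpolation node expects.
void cloudCb(const sensor_msgs::PointCloud2ConstPtr& msg) {
  for (const auto& f : msg->fields)
    ROS_INFO("field '%s' offset=%u datatype=%d", f.name.c_str(), f.offset,
             static_cast<int>(f.datatype));
  ros::shutdown();  // one message is enough
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "cloud_fields_probe");
  ros::NodeHandle nh;
  // Topic name is an assumption; substitute your Ouster cloud topic.
  ros::Subscriber sub = nh.subscribe("/os_cloud_node/points", 1, cloudCb);
  ros::spin();
  return 0;
}
```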

Question about parameters and sensor displacement

First of all, thanks for the great work.
Three questions came up while analyzing your code; please help me figure them out.

  1. What is the model of your camera?

    • Did you use the Intel RealSense D435?
  2. What is the meaning of these parameters in cfg_params.yaml?

    • y_interpolation
    • max_ang_FOV
    • min_ang_FOV
    • Could you give an example of how to calculate these three parameters? (A sketch follows this issue.)
  3. Can you describe the relative placement of your lidar and camera when you did the calibration?

    • For example, is the lidar placed 6 cm higher than the camera along the Y-axis (camera frame)?
    • Are both the lidar and the camera pointing forward?
    • I need this information to compute my own transformation matrix.

Thanks in advance.
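Regarding question 2 above: the angular-FOV parameters can at least be sanity-checked against the camera intrinsics, since a pinhole camera's horizontal field of view is 2·atan(W / (2·fx)). A hedged sketch — whether min_ang_FOV/max_ang_FOV actually bound the lidar azimuth range this way is an assumption to confirm against the node's source:

```cpp
#include <cmath>
#include <iostream>

int main() {
  // Placeholder intrinsics: image width and focal length in pixels.
  const double width_px = 1280.0;
  const double fx = 900.0;

  // Horizontal field of view of a pinhole camera: 2 * atan(W / (2 fx)).
  const double h_fov = 2.0 * std::atan(width_px / (2.0 * fx));
  std::cout << "horizontal FOV: " << h_fov * 180.0 / M_PI << " deg\n";

  // If min/max_ang_FOV bound the lidar azimuth range that overlaps the image
  // (an assumption -- confirm against the node's source), they could be set
  // symmetrically around the camera's forward direction:
  std::cout << "min_ang_FOV ~ " << -h_fov / 2.0 * 180.0 / M_PI
            << " deg, max_ang_FOV ~ " << h_fov / 2.0 * 180.0 / M_PI << " deg\n";
  return 0;
}
```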
