
Comments (16)

EPVelasco commented on May 22, 2024

Hi, you can now use the point cloud interpolation node for the Velodyne VLP-16 LiDAR.

EPVelasco commented on May 22, 2024

Hi, thank you for using my code.
Answer 1.
The camera I used was an Intel RealSense D435i (we only used the RGB image), together with a Velodyne VLP-16.

Answer 2.
The cfg settings are as follows:

  • y_interpolation is the interpolation factor between the laser layers; e.g., with y_interpolation=10.0 and a 16-layer LiDAR, the output point cloud has up to 160 lines.

  • max_ang_FOV and min_ang_FOV are the maximum and minimum angles of the camera field of view with respect to the LiDAR, as shown in the image (a rough sketch of such an angular crop follows this list).
    [image: camera FOV with respect to the LiDAR]
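
A minimal sketch (not the repository's exact code) of what such an angular crop could look like, assuming the azimuth is measured so that straight ahead of the camera is about pi/2 rad, which matches the default limits of 0.5 and 2.7:

```cpp
// Hypothetical illustration: keep only the points whose azimuth falls inside
// [min_ang_FOV, max_ang_FOV]. The way the angle is measured here is an assumption.
#include <cmath>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
cropToCameraFov(const pcl::PointCloud<pcl::PointXYZ>& cloud,
                float min_ang_FOV, float max_ang_FOV)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr out(new pcl::PointCloud<pcl::PointXYZ>);
  for (const auto& p : cloud.points) {
    // Azimuth in the LiDAR frame (x forward, y left), shifted so that the
    // forward direction maps to pi/2, the left side to pi, the right side to 0.
    const float azimuth = std::atan2(p.x, -p.y);
    if (azimuth >= min_ang_FOV && azimuth <= max_ang_FOV)
      out->points.push_back(p);
  }
  out->width    = static_cast<std::uint32_t>(out->points.size());
  out->height   = 1;
  out->is_dense = false;
  return out;
}
```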

Answer 3.
This is shown in the image of our robot (BLUE).
[image: BLUE]

We calculated the transformation matrix with the Lidar-Camera calibration method. To calibrate the sensors, their fields of view must overlap.

LoyalLumber commented on May 22, 2024

Many thanks for the detailed and friendly answer! It helps me understand your contribution.
I have 3 questions (2 follow-ups and 1 new one).

1) Parameters: I also use a D435 (RGB only) and a VLP-16. It seems I only need to adopt all of your configuration, including 'y_interpolation', 'max_ang_FOV', 'min_ang_FOV', 'x_resolution', 'ang_Y_resolution', 'minlen', and 'maxlen', except for the transformation matrix. Am I right?

2) Transformation: What was the initial rotation (x, y, z) when using the Lidar-Camera calibration? I use an initial rotation of [1.57, -1.57, 0] radians for the LiDAR with respect to the camera, since my sensors only look forward (my configuration has no additional tilting like your camera). The initial rotation of your configuration could be [1.57, -1.57, -something], since your camera looks slightly downward.

3) y-interpolation: Impressive approach! It could be very helpful for a sparse laser like the VLP-16 when the parameter is set properly. What kind of algorithm do you use for the interpolation? Is there a paper or a baseline? I guess this parameter could be beneficial or harmful depending on the running environment of a mobile robot.

Thanks a lot for your help.

EPVelasco commented on May 22, 2024
  1. Parameters: yes, you just need to modify the transformation matrix.

  2. Transformation: My final rotation is [0.0113, 0.0210, 0.2350] radians. In the code there is a rotation that aligns the camera and LiDAR axes; this internal rotation is [-1.57, 1.57, 0]. Because I already apply this axis rotation, I suggest you use an initial rotation of [0, 0, 0] (a rough sketch of composing these rotations follows this list).

  3. The interpolation is done with a 2D linear interpolation algorithm after converting the point cloud into a range image. We are working on a paper where we explain how the interpolation is done. If you want to cite the repository, use:
    Velasco, E. (2022). Lidar and camera fusion. url: github.com/EPVelasco/lidar-camera-fusion
    I would appreciate it if you starred the repository. We use this as a measure of the repository's impact in the scientific community.
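
A minimal sketch of how the internal axis rotation and the calibrated rotation could be composed with Eigen. The roll-pitch-yaw order and the composition order are assumptions; the repository may apply the angles differently:

```cpp
// Hypothetical sketch (not the repository's code).
#include <Eigen/Geometry>

// Roll about X, then pitch about Y, then yaw about Z (assumed convention).
Eigen::Matrix3f eulerToRotation(float rx, float ry, float rz)
{
  return (Eigen::AngleAxisf(rz, Eigen::Vector3f::UnitZ()) *
          Eigen::AngleAxisf(ry, Eigen::Vector3f::UnitY()) *
          Eigen::AngleAxisf(rx, Eigen::Vector3f::UnitX())).toRotationMatrix();
}

// Internal rotation that aligns the LiDAR axes with the camera axes.
const Eigen::Matrix3f R_axes  = eulerToRotation(-1.57f, 1.57f, 0.0f);
// Small residual rotation found by the extrinsic calibration.
const Eigen::Matrix3f R_calib = eulerToRotation(0.0113f, 0.0210f, 0.2350f);
// Full LiDAR-to-camera rotation (composition order is also an assumption).
const Eigen::Matrix3f R_lidar_to_cam = R_calib * R_axes;
```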

EPVelasco commented on May 22, 2024

You need to calibrate your camera to get the intrinsic parameters, and write them in camera_matrix. If you don't know these parameters, you can use the ROS camera_calibration tool.
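
For reference, camera_matrix encodes the standard pinhole intrinsics; a minimal sketch of the layout and the projection it implies (the exact layout in cfg_params.yaml may differ):

```cpp
//       | fx  0  cx |
//   K = |  0 fy  cy |
//       |  0  0   1 |
#include <Eigen/Core>

// Project a 3D point given in the camera frame (Z forward) to pixel coordinates.
Eigen::Vector2f projectToPixel(const Eigen::Matrix3f& K, const Eigen::Vector3f& p_cam)
{
  const float fx = K(0, 0), fy = K(1, 1);
  const float cx = K(0, 2), cy = K(1, 2);
  const float u = fx * p_cam.x() / p_cam.z() + cx;
  const float v = fy * p_cam.y() / p_cam.z() + cy;
  return Eigen::Vector2f(u, v);
}
```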

LoyalLumber commented on May 22, 2024

Thanks again for your detailed comments.
Absolutely. I starred your repo. Also, I will cite your paper for my study and future papers.

1) Transformation: I understood what you did. However, is there any specific reason you set the internal rotation to [-1.57, 1.57, 0]? I guess the convention follows the initial rotation [1.57, -1.57, 0] from the LiDAR to the camera axes, as described in Lidar-Camera. That repo sets initial_rot_x (and y, z) to [1.57, -1.57, 0]. The rotation refers to mapping the original LiDAR axes (x, y, z) to the camera axes (z, x, y). Did I misunderstand?

2) y-interpolation: Can I turn off this function (the LiDAR interpolation)? I set the value to '1.0' and it seems to work, but I don't know whether I properly turned off the function.
*By the way, when I set the value to 1.0, it outputs a very sparse LiDAR range image (PointXYZRGB).

Thanks a lot for your help.

EPVelasco commented on May 22, 2024

Thanks for adding the repository to your citations.
Can you upload an image of the result when you set y_interpolation to 1.0? I would like to see what the problem is. Indeed, if you set it to 1.0, the program should not do any interpolation.

Concerning the rotations: the initial rotation I used for the calibration was [1.57, -1.57, 0]. This initial rotation is necessary because the camera and LiDAR axes are not aligned. Here is an image of the camera and LiDAR axes.
[image: camera and LiDAR axes]
The camera axes are generally depicted like this: the camera moves back and forth along the Z axis, and the image plane lies on the X and Y axes.
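
A minimal sketch of the axis remapping this implies, using the usual LiDAR frame (x forward, y left, z up) and camera optical frame (x right, y down, z forward); the repository's internal rotation may implement it differently:

```cpp
// Illustrative sketch (not the repository's code): what the axis-aligning
// rotation effectively has to achieve, ignoring the small calibration correction.
#include <Eigen/Core>

Eigen::Vector3f lidarAxesToCameraAxes(const Eigen::Vector3f& p_lidar)
{
  Eigen::Vector3f p_cam;
  p_cam.x() = -p_lidar.y();  // camera right   = -(LiDAR left)
  p_cam.y() = -p_lidar.z();  // camera down    = -(LiDAR up)
  p_cam.z() =  p_lidar.x();  // camera forward =   LiDAR forward
  return p_cam;
}
```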

LoyalLumber commented on May 22, 2024

Long time no see, bro.
Sorry that I haven't closed this issue yet. I've been busy with something else, and I have just resumed this work.

I got three questions to complete the calibration.

  1. Did you do an additional camera calibration following this package? Since you use the D435i for this work, I guess the intrinsic matrix is fixed according to the resolution you set. Am I wrong?

  2. What resolution of the D435i did you use for this work? Is it HD (720p)?

  3. How did you define "max_ang_FOV (2.7 rad)" and "min_ang_FOV (0.5 rad)" in your launch file? Following your previous answer, I guess these should be 2.1764 and 0.9651, respectively, because the D435 has a 69.4-degree horizontal FOV (a quick check of these numbers follows this list).
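
A quick check of those numbers (plain arithmetic, not taken from the repository): the limits are the camera's half-FOV on either side of pi/2.

```
half FOV     = 69.4 deg / 2 = 34.7 deg ≈ 0.6056 rad
min_ang_FOV ≈ pi/2 - 0.6056 ≈ 0.9651 rad
max_ang_FOV ≈ pi/2 + 0.6056 ≈ 2.1764 rad
```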

Below is the problem I got. (BTW, I also use a VLP-16 and a D435i, and my sensor displacement is similar to yours, but without tilting the camera.)

Now I am getting a wrong calibration result.
Please check my result (the "/pcOnImage_image" topic image) at the following link. This is the calibrated image without interpolation. In the image, the red bounding box is a building detected by the point cloud, and it should be shifted up and to the right.

I will re-calculate the transformation matrix. I also suspect the camera's intrinsic matrix; that's why I asked the second question.

Many Thanks for this work. It helps me a lot.

EPVelasco commented on May 22, 2024

The intrinsic matrix is on line 1 of the cfg_params.yaml file. This matrix was obtained with the ROS camera_calibration tool. Video tutorial link.

My realsense camera resolution is 1280x720.

I suggest you set the max_ang_FOV and min_ang_FOV limits to 2.7 and 0.5, respectively. The program removes all points outside that range to optimize the code. In addition, it removes the points that do not project onto the 1280x720 image. If a very narrow FOV range is set, point cloud information may be lost.
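
A minimal sketch (not the repository's exact code) of the second filter described here, dropping points whose projection falls outside the 1280x720 image:

```cpp
// Illustrative check: after projecting a point to pixel coordinates (u, v),
// keep it only if it lands on the image.
bool insideImage(float u, float v, int width = 1280, int height = 720)
{
  return u >= 0.0f && u < static_cast<float>(width) &&
         v >= 0.0f && v < static_cast<float>(height);
}
```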

Based on the image in the link, it could be an error in the intrinsic or extrinsic calibration parameters. I suggest you calibrate your camera with the ROS Camera Calibration tool. If you still have this problem, you could move your calibration matrix a few millimeters on the axes that don't match.

LoyalLumber commented on May 22, 2024

Many thanks for the help.
As you mentioned, the calibrations need to be done.

  • Can you explain the translation matrix in the cfg file? What do the values stand for? Are they [x, y, z] from the LiDAR coordinate frame to the camera coordinate frame? In the source code, the transformation matrix is used to calculate the pixel coordinates (px, py) for the pcOnImage topic. I'm confused now.
    Thanks.

EPVelasco commented on May 22, 2024

The coordinates in the cfg file are the displacement distances along the [x, y, z] axes, starting from the camera axes already oriented to the LiDAR axes, as shown in the image. A rough sketch of how such a transform could be assembled follows the image.
[image: camera axes oriented to the LiDAR axes]
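
A minimal sketch (names and composition order are assumptions, not the repository's exact code) of how the cfg translation and the axis-aligning rotation could be combined into a single LiDAR-to-camera transform before the pixel projection:

```cpp
#include <Eigen/Geometry>

// Assemble rotation (axis alignment + calibration) and the cfg displacement
// into one homogeneous transform.
Eigen::Affine3f makeLidarToCamera(const Eigen::Matrix3f& R, const Eigen::Vector3f& t_cfg)
{
  Eigen::Affine3f T = Eigen::Affine3f::Identity();
  T.linear()      = R;      // rotation part
  T.translation() = t_cfg;  // displacement from the cfg file
  return T;
}

// The resulting camera-frame point can then be projected to (px, py) with the
// intrinsic matrix, as in the earlier pinhole sketch.
Eigen::Vector3f toCameraFrame(const Eigen::Affine3f& T, const Eigen::Vector3f& p_lidar)
{
  return T * p_lidar;
}
```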

I have uploaded to the README a preprint of an article where we apply the code of this repository to the depth estimation of objects: Preprint. In this article there is a section about the LiDAR and camera fusion.

svinin commented on May 22, 2024

> I have uploaded to the README a preprint of an article where we apply the code of this repository to the depth estimation of objects: Preprint. In this article there is a section about the LiDAR and camera fusion.

Thank you for your work. Very interesting!

I have read the article and reviewed the code, and I couldn't find where this is implemented in the code:
"the RGB-D camera depth image is used for small distances of 0.3 - 3.0 m and the LiDAR projected depth image is used for distances greater than 3.0 m"

I also wanted to ask: based on that quote from the article, does it mean that the LiDAR point cloud doesn't rely on the image when the interpolated (denser) point cloud is built? Because, as I understand it, the camera cannot see further than 3 m. I just want to understand how this fusion works in practice.

Thank you!

EPVelasco commented on May 22, 2024

Hi gxnse, the code in this repository is used to fuse the interpolated point cloud of a LiDAR sensor with the RGB channel of a camera. In this case the VLP16 LiDAR is used together with the RealSense 435 camera, which is an RGB-D camera. As seen in the article, the LiDAR and camera fusion is used to estimate the camera-to-object distance. When the object is within the range of 0.3 - 3 meters, the depth data from the RGB-D camera is used; beyond 3 meters, the LiDAR data is used.
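
A minimal sketch of what this distance-based selection could look like; as noted in the question above, it comes from the article rather than this repository's code, so the names and structure here are only illustrative:

```cpp
// Hypothetical fusion rule: prefer the RGB-D depth inside its working range,
// otherwise fall back to the projected LiDAR depth.
float fusedDepth(float depth_from_rgbd, float depth_from_lidar)
{
  const float kMin = 0.3f, kMax = 3.0f;   // RGB-D working range from the article
  if (depth_from_rgbd >= kMin && depth_from_rgbd <= kMax)
    return depth_from_rgbd;               // close range: trust the RGB-D camera
  return depth_from_lidar;                // beyond 3 m: use the LiDAR depth
}
```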

To answer your second question: we do not use the image data to interpolate the point cloud; in fact, you can interpolate and obtain a LiDAR with more channels than the original (I am preparing a repository to do that). The interpolation is done by converting the point cloud to a range image and applying a bilinear interpolation with the Armadillo library, as sketched after the image.

[image: interpolated point cloud]
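
A minimal sketch of this range-image upsampling with Armadillo's 2D linear interpolation; the repository's actual implementation may differ in the details, and the names here are assumptions:

```cpp
#include <armadillo>

// Upsample a range image along the vertical (ring) direction by y_interpolation.
arma::mat interpolateRangeImage(const arma::mat& range_image, double y_interpolation)
{
  // Original pixel coordinates: X spans the columns, Y spans the rows (rings).
  arma::vec X = arma::regspace(1, range_image.n_cols);
  arma::vec Y = arma::regspace(1, range_image.n_rows);

  // Keep the horizontal resolution, refine the vertical one by the factor.
  arma::vec XI = X;
  arma::vec YI = arma::regspace(Y.min(), 1.0 / y_interpolation, Y.max());

  arma::mat interpolated;
  arma::interp2(X, Y, range_image, XI, YI, interpolated, "linear");
  return interpolated;
}
```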

I will soon have a repository ready showing the point cloud interpolation. I still have to improve the filtering of the interpolation noise trails.

svinin commented on May 22, 2024

Thank you! I will be using your interpolation approach in my Master's degree project for curb detection for an autonomous vehicle, and I will cite you. :)

wangxianggang1997 commented on May 22, 2024

> gxnse, the code in this repository is used to fuse the interpolated point cloud of a LiDAR sensor with the RGB channel of a camera. In this case the VLP16 LiDAR and the RealSense 435 camera, which is an RGB-D camera, are used. As seen in the article, the LiDAR and camera fusion is used to estimate the camera-to-object distance. When the object is within the range of 0.3 - 3 meters, the depth data from the RGB-D camera is used; beyond 3 meters, the LiDAR data is used.
>
> To answer your second question: we do not use the image data to interpolate the point cloud; in fact, you can interpolate and obtain a LiDAR with more channels than the original (I am preparing a repository to do that). The interpolation is done by converting the point cloud to a range image and applying a bilinear interpolation with the Armadillo library.
>
> [image]
>
> I will soon have a repository ready showing the point cloud interpolation. I still need to improve the filtering of the interpolation noise trails.

I would like to ask: the interpolation code does not seem to interpolate the laser intensity. Is it possible to do this, and how did you do it? As shown in the second picture, the laser intensity seems to be displayed after the interpolation.

EPVelasco commented on May 22, 2024

Hello, the code does not interpolate the intensity channel; the image shows the point cloud colored by the position of each point along the z-axis.
Yes, you could interpolate the intensity channel with the Armadillo library, but when the point cloud is converted to a range image with pcl::RangeImage::createFromPointCloud, the intensity value is not included.
You would have to create a matrix with the same dimensions as the uninterpolated range image that contains the intensity data, as in the sketch below.
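
A minimal sketch of that suggestion, interpolating an intensity matrix of the same size alongside the range image so that each interpolated range value has a matching interpolated intensity (names are assumptions, not the repository's code):

```cpp
#include <armadillo>

void interpolateRangeAndIntensity(const arma::mat& range_image,
                                  const arma::mat& intensity_image,  // same size as range_image
                                  double y_interpolation,
                                  arma::mat& range_out,
                                  arma::mat& intensity_out)
{
  arma::vec X  = arma::regspace(1, range_image.n_cols);
  arma::vec Y  = arma::regspace(1, range_image.n_rows);
  arma::vec XI = X;
  arma::vec YI = arma::regspace(Y.min(), 1.0 / y_interpolation, Y.max());

  // Interpolate both matrices on the same refined grid.
  arma::interp2(X, Y, range_image,     XI, YI, range_out,     "linear");
  arma::interp2(X, Y, intensity_image, XI, YI, intensity_out, "linear");
}
```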
