acfr / cam_lidar_calibration

(ITSC 2021) Optimising the selection of samples for robust lidar camera calibration. This package estimates the calibration parameters from camera to lidar frame.

License: Apache License 2.0

CMake 3.18% Python 4.55% Dockerfile 1.14% Shell 0.95% C++ 90.17%

cam_lidar_calibration's People

Contributors: darrenjkt, jclinton830, jwebb-acfr, stewart-worrall, surabhi96

cam_lidar_calibration's Issues

Maximum value of the feature extraction bounds

Hi, I'm curious about the maximum bounds stated in params.yaml. I noticed that even if I change z_min and z_max to -10 and 10, the range in rqt_reconfigure won't go below -5 or above 5. Since my camera is highly tilted, I'd like to raise z_max above 5. Do you have any idea how to do this? Thanks!

What I set in params.yaml
image

The rqt_reconfigure GUI
image

The feature extraction visualization
image

Calibrating LiDAR and camera using files

Hello!

I am a beginner in ROS and have a question I'd like help with.
From the README tutorial, we know how to calibrate our own sensors by playing back a rosbag.
If I have many image and point cloud files (.jpg and .pcd), can I use those files to calibrate the LiDAR and camera with this tool?

Thanks for reading patiently!

I'll keep working on this and will update if there is any progress. Thank you!

ImportError: No module named pandas

When I run
roslaunch cam_lidar_calibration assess_results.launch csv:="$(rospack find cam_lidar_calibration)/data/vlp/calibration_quickstart.csv" visualise:=true, the following error message is displayed.

error code

`
... logging to /home/ecb/.ros/log/32102fc4-1219-11ed-8179-d8bbc1224b6b/roslaunch-ecb-GP76-20675.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://ecb-GP76:37649/

SUMMARY

PARAMETERS

  • /D: [-0.0540096, -0.0...
  • /K: [1176.93, 0.0, 96...
  • /assess/csv: /home/ecb/catkin_...
  • /assess/visualise: True
  • /assess/visualise_pose_num: 16
  • /camera_info: /gmsl/A0/camera_info
  • /camera_topic: /gmsl/A0/image_color
  • /chessboard/board_dimension/height: 850
  • /chessboard/board_dimension/width: 610
  • /chessboard/pattern_size/height: 7
  • /chessboard/pattern_size/width: 5
  • /chessboard/square_length: 95
  • /chessboard/translation_error/x: 2
  • /chessboard/translation_error/y: 0
  • /distortion_model: fisheye
  • /feature_extraction/x_max: 10.0
  • /feature_extraction/x_min: -10.0
  • /feature_extraction/y_max: 8.0
  • /feature_extraction/y_min: -8.0
  • /feature_extraction/z_max: 5.0
  • /feature_extraction/z_min: -5.0
  • /height: 1208
  • /lidar_topic: /velodyne/front/p...
  • /rosdistro: melodic
  • /rosversion: 1.14.13
  • /use_sim_time: False
  • /visualise_results/csv: /home/ecb/catkin_...
  • /visualise_results/degree: False
  • /visualise_results/rot_binwidth_deg: 0.5
  • /visualise_results/trans_binwidth: 0.01
  • /width: 1920

NODES
/
assess (cam_lidar_calibration/assess_node)
visualise_results (cam_lidar_calibration/visualise_results.py)

ROS_MASTER_URI=http://localhost:11311

process[visualise_results-1]: started with pid [20777]
process[assess-2]: started with pid [20778]
[ INFO] [1659414159.112580773]: Importing samples from: /home/ecb/catkin_ws/src/cam_lidar_calibration/data/vlp/poses.csv
[ INFO] [1659414159.114178533]: 40 samples imported
Traceback (most recent call last):
File "/home/ecb/catkin_ws/src/cam_lidar_calibration/scripts/visualise_results.py", line 13, in
import pandas
ImportError: No module named pandas
[visualise_results-1] process has died [pid 20777, exit code 1, cmd /home/ecb/catkin_ws/src/cam_lidar_calibration/scripts/visualise_results.py __name:=visualise_results __log:=/home/ecb/.ros/log/32102fc4-1219-11ed-8179-d8bbc1224b6b/visualise_results-1.log].
log file: /home/ecb/.ros/log/32102fc4-1219-11ed-8179-d8bbc1224b6b/visualise_results-1*.log
^C[assess-2] killing on exit
shutting down processing monitor...
... shutting down processing monitor complete
done
`

Please let me know how to solve the problem.
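The usual cause is that pandas is not installed for the Python interpreter that runs the node (ROS Melodic launches scripts with Python 2, Noetic with Python 3). A minimal, package-agnostic sketch of the diagnosis; the function name is illustrative, not part of this repo:

```python
import importlib.util
import sys

def pandas_available():
    """Return True if pandas is importable by this interpreter."""
    return importlib.util.find_spec("pandas") is not None

if __name__ == "__main__":
    if pandas_available():
        print("pandas is importable by", sys.executable)
    else:
        # Install it for the same interpreter ROS uses, e.g.:
        #   python2 -m pip install --user pandas   (Melodic)
        #   python3 -m pip install --user pandas   (Noetic)
        print("pandas is missing for", sys.executable)
```

The key point is matching the interpreter: installing pandas for Python 3 does not help a Melodic node executed with Python 2.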

Error at ga_obj.solve() with my own data

Hello, when I tried calibration with my own data, I met the problem shown below. It seems to go wrong at ga_obj.solve(), but I don't know why. Can you explain it? Thank you!
image
image

RViz shuts down when I capture a sample for calibration

Every time I capture the first sample to start the calibration, RViz shuts down and I have to force quit it. On the terminal the capture seems to be successful and I get this:
--- Sample 1 ---
Measured board has: dimensions = 610x850 mm; area = 0.51850 m^2
Distance = 1.88 m
Board angles = 87.15,86.83,87.63,86.36 degrees
Board area = 0.52148 m^2 (+0.00298 m^2)
Board avg height = 857.69mm (+7.69mm)
Board avg width = 608.00mm (-2.00mm)
Board dim = 602.17,855.47,859.90,613.83 mm
Board dim error = 27.03
This is the error I get when I force quit RViz:

[rviz-2] process has died [pid 8354, exit code -9, cmd /opt/ros/melodic/lib/rviz/rviz -d /home//catkin_ws/src/cam_lidar_calibration/rviz/cam_lidar_calibration.rviz __name:=rviz __log:=/home//.ros/log/9cd9ae78-55c8-11ed-880c-d04cc1051f2c/rviz-2.log].
log file: /home/***/.ros/log/9cd9ae78-55c8-11ed-880c-d04cc1051f2c/rviz-2.log

Any help please?

Reprojection image seems strange

Hi, I applied your code to my own data and, strangely, I got this result. I worked with your noetic branch, an Azure Kinect camera and an Ouster lidar.

My reprojection result:
image

Since I'm new at this, I am wondering whether this kind of result is normal. I tried several times, and most attempts produce a similar result; the only difference is that the yellow and green crosses sometimes do not lie on the centre of the board. Has anyone else experienced this strange result?

Shift Errors

Hello,

I have been trying to calibrate a VLP-32 LIDAR with a camera using different tools. I came across your paper and have got the best results so far, but there is one issue that has been noticeable with every calibration technique.

The calibration is at its best in the middle of the image; however, the shift keeps increasing towards the corners until it is very significant.

I have already taken a lot of samples at the corners and went through all the recommended steps, but I am not sure what I might be missing. I have attached two images with the best results I have achieved so far, taking into consideration that I commented out the distortion part of the code for the point cloud, so I am calibrating distorted images with a distorted point cloud. There is a significant shift in the Z direction, as shown.

https://ibb.co/K00Kb6N
https://ibb.co/Y8XD170

Another question: what is the procedure regarding distortion? Is it undistorted camera + undistorted LIDAR, or distorted + distorted? At which point is the distortion taken into consideration? I have tried undistorted camera + undistorted LIDAR, but the results I got had a relatively large error.

Thanks in advance.

Lidar centre vs pointcloud projection

Hi, thanks for a great tool.

I'm wondering why the lidar centre looks to be projected spot on, while the resulting pointcloud is way off target.
I'm using a wide-angle lens (74 degree FOV) that should have relatively low distortion.

Even if the result is expected to be worse at the image edges, this much reprojection error for the pointcloud is very unexpected. Is the lidar centre projected differently?

Mean reprojection error across  31 samples
- Error (pix) =  9.391 pix, stdev =  5.864
- Error (mm)  = 32.677 mm , stdev = 18.313

Screenshot from 2021-12-14 11-55-49

Thank you,

Several approaches tried, could not get results

Thanks for your great work.

I've tried several approaches: ROS Melodic in a local environment, and also Docker. Finally I tried to set up my own environment as this blog describes. The blog says there are things to be modified; I made the same changes but could not get a result.

https://programming.vip/docs/cam-for-joint-calibration-of-lidar-and-camera_-lidar_-calibration.html

Still the same result: no data shown in RViz (I modified the frame id as well). These are the changes I have made and my progress so far.

I am using an Ouster OS-1 32 channel.

1. Ring match problem
I found that the ring field is not the same type as Velodyne's, so I modified the data type for the Ouster.

Modified point.h in the official ouster_ros package to uint16_t

image

The ring mismatch warning is no longer visible.

2. Topic name configuration
I used the usb_cam package to publish the camera data on ROS.
After that I ran image_proc with the following command:

ROS_NAMESPACE=usb_cam rosrun image_proc image_proc

image

The rqt graph shows the proper data flow for the extraction.

3. Data not shown in RViz

Two windows appear, and I expected to adjust the thresholds to show only the chessboard, but I could not make any further progress.

Can anyone help with this? Thanks in advance.

Own data

Hi, if the chessboard is rigid enough, can we calibrate the lidar and camera without a backing board? Then we could set both offsets to 0 mm.

A large-FOV calibration

The fisheye's FOV is 180 degrees, and we plan to replace it with a 197-degree camera later. I wonder if this method would still work with such a large FOV? Your prompt reply would be very much appreciated.

Docker container issue

Hi, first, thank you for your great work. I have followed your guide to use the package in a Docker container. But when I quit and restart the container using "docker start" and "docker run", it just freezes with no response. Can you point out what the possible problem is? Thank you in advance!
image

RANSAC PCL error

image
image

This is the error that occurred; it seems to be an issue with the threshold setting. I found that the value in this project is set to 0.004. Should I change the value? Thanks!
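For intuition on how that threshold interacts with sensor noise: a RANSAC plane fit only keeps points within the distance threshold of the candidate plane (here 0.004, i.e. 4 mm), so if the lidar's range noise on the board exceeds it, most board points are rejected and the fit fails. A self-contained sketch of the technique (not the package's PCL code; all names are illustrative):

```python
import numpy as np

def ransac_plane(points, threshold=0.004, iters=200, seed=0):
    """Fit a plane n.x + d = 0 to 3-D points with RANSAC.

    threshold is the maximum point-to-plane distance (same units as
    the points) for a point to count as an inlier.
    Returns (normal, d, inlier_mask).
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(iters):
        idx = rng.choice(len(points), 3, replace=False)
        p0, p1, p2 = points[idx]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n.dot(p0)
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (n, d)
    return best[0], best[1], best_inliers
```

If the board points carry, say, 1 cm of noise, a 4 mm threshold leaves too few inliers, which matches the "Not enough inliers" behaviour; raising the threshold (at the cost of a looser plane) is the usual first experiment.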

Error in feature extraction from ouster lidar

I am using this code with a USB cam and an Ouster OS1 lidar on ROS Noetic, with a prerecorded rosbag for the data.
I have made some changes in cam_lidar_calibration.rviz:
Class: rviz/Image Enabled: true Image Topic: /cv_camera/image_raw

and Global Options: Background Color: 48; 48; 48 Default Light: true Fixed Frame: os_sensor Frame Rate: 30, so that the code works with my OS1 lidar and camera. When running the launch file, the image loads without any issues, but the point cloud from os_sensor does not load, so the rqt GUI does not work either. I have added screenshots below.
image

image

Can someone please tell me what changes I should make for this to work properly?
Thanks in advance

catkin build failed: missing header file

Hello, first of all thank you for the ROS package. When I try to build the package I get this error:

/home/user/catkin_ws2/src/cam_lidar_calibration/src/cam_lidar_panel.h:14:10: fatal error: cam_lidar_calibration/RunOptimiseAction.h: No such file or directory

Could you update the missing header file? Thank you in advance.

Question about the paper

Hi,

thanks for sharing such wonderful work. I am curious about the improvement the data selection stage brings over the baseline algorithm. I am also curious about the accuracy and performance comparison with the baseline, since I did not see such a comparison in your paper.

Thanks

Failed to find "ring"

Thanks for your work, I am very excited to try it!

I was trying to start the launch file to do a live calibration, but when I started it, I received an error message related to the lidar: the algorithm was not able to find the "ring" field, even though it is available from the lidar driver (if I open RViz, I can select the "ring" view).

Do you have an idea why it is not able to fetch the information correctly?

Thank you!

Error while calibrating OS1-64

Now I am able to extract the checkerboard, but I am getting an error due to which I am not able to calibrate. I have attached the error screenshots. Kindly reply as soon as possible.
Screenshot from 2021-09-30 04-37-01
Screenshot from 2021-09-30 04-44-20

Error while trying to build the workspace: opencv/cv.hpp

Hi everyone,
when I try building my workspace I get this error:
fatal error: opencv/cv.hpp: No such file or directory
10 | #include <opencv/cv.hpp>
| ^~~~~~~~~~~~~~~

I'm using Ubuntu 20.04 and ROS Noetic.
Any idea how I can solve this problem?

The OpenCV version is 4.5.

Error when running the quickstart using Noetic

Interested in reopening issue #48 (comment), as the solution didn't resolve my error.

  • I am using ros-noetic with OpenCV 4.5.5 (due to other project dependencies)
  • Changing `#include <opencv/cv.hpp>` to `#include <opencv2/opencv.hpp>` in optimiser.h didn't resolve the problem, since it caused the following error when executing `catkin build`:
    image

I was able to successfully build when I replaced `#include <opencv/cv.hpp>` with:

#include <opencv2/core/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/calib3d.hpp>

But then, I still got a runtime error when executing the quick start
roslaunch cam_lidar_calibration run_optimiser.launch import_samples:=true

image

*** logs ***
rosout-1-stdout.log
rosout.log
roslaunch-netanel-Precision-5560-26735.log
master.log

NOTE
I don't know if this is correct, but I used the calib_v2 branch, and there
cv::reduce(...) under optimiser.cpp here
was not commented out (all I had to do was replace CV_REDUCE_SUM with cv::REDUCE_SUM for compatibility), so I wasn't sure whether there was an additional change I missed.
It wasn't clear to me from @YHuniv's instructions.

Originally posted by @natik-bw in #48 (comment)

Problem about `distance_from_origin`

2024-04-16_13-42
when i capture a sample, it shows:

Measured board has: dimensions = 576.02389 x 989.93934 mm; area = 0.56602m^2
Distance =  0.00 m
Board angles     = 96.97,89.20,90.47,98.25 degrees
Board area       = 0.57054 m^2 (+0.00452 m^2)
Board avg height = 989.94mm (+9.07mm)
Board avg width  = 576.02mm (-1.03mm)
Board dim        = 567.52,953.30,1026.57,584.53 mm
Board dim error  =   90.28
Avg Lidar Normal =  -0.15,  -0.99,   0.04 
Avg Lidar Centre = 545.69, 2175.47, -74.88 
Std Dev Lidar Normal =   0.00,   0.00,   0.00 
Std Dev Lidar Centre =   3.00,   1.57,   1.53 

the Distance is always 0.

When I looked at the code, it seems to be a bug:

          // Board dimension errors
          double w0_diff    = abs( avg_w0 - board_width_ );
          double w1_diff    = abs( avg_w1 - board_width_ );
          double h0_diff    = abs( avg_h0 - board_height_ );
          double h1_diff    = abs( avg_h1 - board_height_ );
          double be_dim_err = w0_diff + w1_diff + h0_diff + h1_diff;

          double distance = sqrt( pow( sample.lidar_centre.x / 1000 - 0, 2 ) + pow( sample.lidar_centre.y / 1000 - 0, 2 ) +
                                  pow( sample.lidar_centre.z / 1000 - 0, 2 ) );

          ROS_INFO_STREAM( "On Line: " << __LINE__ << " Distance: " << distance << " m" );
          ROS_INFO_STREAM( "On Line: " << __LINE__ << " Lidar Centre: " << sample.lidar_centre.x << ", " << sample.lidar_centre.y << ", " << sample.lidar_centre.z );

          sample.camera_centre = cv::Point3d( sum_camera_centre.x / samples_.size(), sum_camera_centre.y / samples_.size(),
                                              sum_camera_centre.z / samples_.size() );

          sample.camera_normal = cv::Point3d( sum_camera_normal.x / samples_.size(), sum_camera_normal.y / samples_.size(),
                                              sum_camera_normal.z / samples_.size() );

          sample.camera_corners = { avg_camera_c0, avg_camera_c1, avg_camera_c2, avg_camera_c3 };

          sample.lidar_centre = cv::Point3d( sum_lidar_centre.x / samples_.size(), sum_lidar_centre.y / samples_.size(),
                                             sum_lidar_centre.z / samples_.size() );
          ROS_INFO_STREAM( "On Line: " << __LINE__ << " Distance: " << distance << " m" );
          ROS_INFO_STREAM( "On Line: " << __LINE__ << " Lidar Centre: " << sum_lidar_centre.x / samples_.size() << ", " << sum_lidar_centre.y / samples_.size() << ", " << sum_lidar_centre.z / samples_.size() );

          sample.lidar_normal = cv::Point3d( sum_lidar_normal.x / samples_.size(), sum_lidar_normal.y / samples_.size(), sum_lidar_normal.z / samples_.size() );

The variable distance is calculated from sample.lidar_centre, but sample.lidar_centre is only assigned further down, which causes distance to always be zero.
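A minimal sketch of the fix, written in Python for brevity (the names are illustrative, not the package's actual API): compute the distance only after the averaged centre has been assigned, converting the millimetre centre to metres.

```python
import math

def board_distance_m(lidar_centre_mm):
    """Distance from the lidar origin to the board centre, in metres.

    lidar_centre_mm: (x, y, z) of the *already averaged* board centre
    in millimetres. This must run after the centre is assigned, not
    before; a stale zero-initialised centre yields a distance of 0,
    which is exactly the symptom in this issue.
    """
    x, y, z = (c / 1000.0 for c in lidar_centre_mm)
    return math.sqrt(x * x + y * y + z * z)

# Using the averaged centre from the log above:
d = board_distance_m((545.69, 2175.47, -74.88))  # ≈ 2.24 m, non-zero
```

In the C++ above, the equivalent fix is to move the distance computation below the lines that assign sample.lidar_centre from the averaged sums.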

tf static_transform_broadcaster does not fit calibration result

Hi,
first of all thanks for your awesome repository, the calibration process is very intuitive and provides good feedback and good results. I have tested your method among others and it produced the best results.

I have then used the results of the calibration in a static_transform_publisher to complete the tf_tree and look up the transformation in my project. I then noticed the results were pretty bad and went back to investigate. If I take the obtained transformation from the calibration back to matlab and apply it to project the pointclouds captured with the calibrated sensors onto the image, everything is fine.

If I then take a static_transform_publisher with the same parameters, rectify the image and overlay the lidar points in RViz, everything is again quite off (not off as if source and target frame were confused, but a few degrees in rotation and/or a few hundred mm in translation). The source and target frames are correct, which I verified by switching them, which was worse. I have also inverted the transformation and switched the source and target frames, which gave me the same result.

I wrote my static_transform_publisher as:

What am I doing wrong? Thanks again for your awesome work and your help!

What is the sign of "translation_error"?

Hi Darren,

Thanks for this code. It works well for me after just a few modifications.

I'm confused by the translation_error in the config file.
It seems to be the offset from the chessboard centre to the backing-board centre, but what is the positive direction of the X and Y axes?
I mean, how should I determine whether "x" is "10mm" or "-10mm" as demonstrated in 2.2-3, since the chessboard is symmetric? I'm afraid the detection may be ambiguous if the chessboard is upside-down when the coordinates are detected in the image.

Undefined references to pcl::PCLBase<pcl::PointXYZIR> and pcl::KdTreeFLANN<pcl::PointXYZIR, flann::L2_Simple<float> > while linking

I am using ROS Noetic to build the calibration code and I am facing the following linker errors:

undefined reference to `pcl::PCLBase<pcl::PointXYZIR>::deinitCompute()'
undefined reference to `pcl::PCLBase<pcl::PointXYZIR>::initCompute()'
undefined reference to `pcl::PCLBase<pcl::PointXYZIR>::setIndices(boost::shared_ptr<std::vector<int, std::allocator<int> > const> const&)'
undefined reference to `pcl::KdTreeFLANN<pcl::PointXYZIR, flann::L2_Simple<float> >::KdTreeFLANN(bool)'
undefined reference to `pcl::PCLBase<pcl::PointXYZIR>::setInputCloud(boost::shared_ptr<pcl::PointCloud<pcl::PointXYZIR> const> const&)'
/catkin_ws/devel/.private/cam_lidar_calibration/lib/libcam_lidar_calibration.so: undefined reference to `pcl::KdTreeFLANN<pcl::PointXYZIR, flann::L2_Simple<float> >::setSortedResults(bool)'
catkin_ws/devel/.private/cam_lidar_calibration/lib/libcam_lidar_calibration.so: undefined reference to `pcl::PCLBase<pcl::PointXYZIR>::setIndices(unsigned long, unsigned long, unsigned long, unsigned long)'
catkin_ws/devel/.private/cam_lidar_calibration/lib/libcam_lidar_calibration.so: undefined reference to `pcl::PCLBase<pcl::PointXYZIR>::setIndices(boost::shared_ptr<pcl::PointIndices const> const&)'
catkin_ws/devel/.private/cam_lidar_calibration/lib/libcam_lidar_calibration.so: undefined reference to `pcl::PCLBase<pcl::PointXYZIR>::PCLBase()'
catkin_ws/devel/.private/cam_lidar_calibration/lib/libcam_lidar_calibration.so: undefined reference to `pcl::PCLBase<pcl::PointXYZIR>::setIndices(boost::shared_ptr<std::vector<int, std::allocator<int> > > const&)'
collect2: error: ld returned 1 exit status
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/feature_extraction_node.dir/build.make:303: catkin_ws/devel/.private/cam_lidar_calibration/lib/cam_lidar_calibration/feature_extraction_node] Error 1
make[1]: *** [CMakeFiles/Makefile2:3731: CMakeFiles/feature_extraction_node.dir/all] Error 2
make: *** [Makefile:141: all] Error 2

capture error

When we press the "Capture sample" button, we get the following errors.
We have already set the checkerboard dimensions to the right values, but it still didn't work. Do you know why?

[ INFO] [1677116162.028044308]: Ready for capture

[ INFO] [1677116179.599672785]: Capturing sample
[ INFO] [1677116179.675028523]: Processing sample
[ INFO] [1677116179.701179301]: Chessboard found
[ INFO] [1677116179.702963483]: Publishing chessboard image
[pcl::SampleConsensusModelPlane::optimizeModelCoefficients] Not enough inliers found to optimize model coefficients (0)! Returning the same coefficients.
[pcl::SampleConsensusModelLine::optimizeModelCoefficients] Not enough inliers found to support a model (0)! Returning the same coefficients.
[pcl::SampleConsensusModelLine::optimizeModelCoefficients] Not enough inliers found to support a model (0)! Returning the same coefficients.
[pcl::SampleConsensusModelLine::optimizeModelCoefficients] Not enough inliers found to support a model (0)! Returning the same coefficients.
[pcl::SampleConsensusModelLine::optimizeModelCoefficients] Not enough inliers found to support a model (0)! Returning the same coefficients.

--- Sample 1 ---
Measured board has: dimensions = 880x1200 mm; area = 1.05600 m^2
Distance =  0.00 m
Board angles     =   nan,  nan,  nan,  nan degrees
Board area       = 0.00000 m^2 (-1.05600 m^2)
Board avg height =   0.00mm (-1200.00mm)
Board avg width  =   0.00mm (-880.00mm)
Board dim        =   0.00,  0.00,  0.00,  0.00 mm
Board dim error  = 4160.00

[ERROR] [1677116183.045184085]: Plane fitting error, LiDAR board dimensions incorrect; discarding sample - try capturing again

The chessboard is filtered successfully, but RANSAC fails when I click capture sample

One screenshot is before the filter and the other is after the filter, with the board placed at 45°.
Screenshot from 2023-08-10 11-07-50
Screenshot from 2023-08-10 12-18-38

But RANSAC errors still occur. Why?

[ INFO] [1691641310.418340591]: Ready for capture
 
[ INFO] [1691641333.268849203]: Capturing sample
[ INFO] [1691641333.414355339]: Processing sample
[ INFO] [1691641333.422643699]: Chessboard found
[ INFO] [1691641333.423457583]: Publishing chessboard image
[ WARN] [1691641333.851872087]: Still working with last sample!
[pcl::SampleConsensusModel::getSamples] Can not select 0 unique points out of 1!
[pcl::RandomSampleConsensus::computeModel] No samples could be selected!
[pcl::SACSegmentation::segment] Error segmenting the model! No solution found.
[pcl::SampleConsensusModel::getSamples] Can not select 0 unique points out of 1!
[pcl::RandomSampleConsensus::computeModel] No samples could be selected!
[pcl::SACSegmentation::segment] Error segmenting the model! No solution found.
[ERROR] [1691641334.313363673]: RANSAC unsuccessful, discarding sample - Need more lidar points on board

The following is my yaml information

# Topics
camera_topic: "/bonson/camera_image"
camera_info: "/bonson/camera_info"
lidar_topic: "/bonson/point_cloud"

# Dynamic rqt_reconfigure default bounds
feature_extraction:
  x_min: -10.0
  x_max: 10.0
  y_min: -10.0
  y_max: 10.0
  z_min: -10.0
  z_max: 10.0

# Properties of chessboard calibration target
chessboard:
  pattern_size:
    height: 11
    width: 8
  square_length: 35
  board_dimension:
    width: 500
    height: 380
  translation_error:
    x: 2
    y: 0

distortion_model: "fisheye"
width: 1920
height: 1080
D: [-0.958013458, -28.3233961, 0.0736144326, -0.0146417112, 505.177751]
K: [1012.71567, 0.0, 316.010119, 0.0, 1047.60412, 178.325968, 0.0, 0.0, 1.0]

Can't filter point cloud with rqt

Hi there,

I really appreciate this repo. But when I tried to calibrate with my own data, I couldn't filter the point cloud by moving the sliders in the rqt window; the point cloud doesn't change at all.

rqt_1
rqt_2

If I take the risk of capturing a sample with the full point cloud, the RViz window freezes, but no error shows in the command line.

error_when_capture

Would you please give me some guidance on how to use rqt to filter the point cloud? My lidar is a Velodyne VLP-16; my camera is a Teledyne DALSA.

Thank you so much. I appreciate any comments and advice.

Issue with calibration with my own data

Hi,
I read your paper "Optimising the selection of samples for robust lidar camera calibration" and found it very interesting. I decided to test the available source code. Calibration with your data worked well. However when I tried calibration with my own data using a robosense RS-LiDAR-16 and an ids UI-5280xCP-M mono camera, I faced an issue. In fact, when it comes to feature extraction, The following error appears :
Connection to node=/feature_extraction failed: param client failed (please find a screenshot of the problem below).

I'm trying to understand the cause of this error, but I can't seem to find it.
Note 1: I can't find "libfeature_extraction.so" in the lib folder.
Note 2: I didn't use a recorded bag file; I used live data instead.
Can you please enlighten me on how to solve this problem?

I would appreciate any help,
Regards,
Capture d’écran de 2022-03-16 10-38-58

A question about the chessboard size

Can the calibration board be just the length and width of the chessboard itself, with no blank border (57 cm x 76 cm, for example)?
Thanks!

Lidar point projections are wrong

Hello,

thanks for your really great work. We tried software from other teams and it was all pretty bad. I tried to implement my own, but in the end I had some bugs I couldn't fix. Then we found your software; again, thanks for your work. Really easy to use.

Currently I am trying to calibrate an Ouster 128 with an RGB camera (I modified your code so that it works with the 16-bit ring values).
Our RGB camera has a lens with a plumb_bob distortion model, and from the images in this repo I assume you used a non-fisheye lens as well (off the top of my head, images from fisheye lenses look different).
I then calibrated the camera with 28 samples with, in my opinion, high enough variance (different positions, different xyz angles, different distances; with the 128 layers of the Ouster, enough 'rings' hit the board even when it is placed further away).

Then I ran the "assess" script and got the following result:
ProjectionError

My camera has a non-fisheye lens. The images I used were distorted. I thought it was not necessary to undistort, because your software gets the camera matrix anyway. After I got the result shown in the attached image, I rectified the images and calibrated again, but got almost the same result (that was the 2nd calibration; I kept the distortion vector as it was).
I looked at #7 and followed your advice:
"If you publish a rectified image, the distortion matrix in the camera_infos should also be all zeros. If the D matrix is all zeros then you can leave that line of code. Otherwise, simply feed in a zero matrix in place of the distcoeff.".
But if I enter all zeros in the D vector and calibrate again (3rd attempt), I get projections that are completely wrong (I didn't take an image of that).

Do you have any explanation for the circular projection that can be seen in the attached image (and how to get rid of it)?

Thanks in advance

Ouster lidar OS1-64

Can this package be used for calibration of an Ouster OS2-64 and an IP camera? If it is possible, kindly guide me through the procedure. I am a research intern at NUST Pakistan. Kindly help me achieve this task.

Question about camera undistortion in non-fisheye cases

Hi, thank you for this great package! I have gotten good initial results.

I was wondering why the image is only explicitly undistorted when the distortion model is fisheye? Given, say, OpenCV's default distortion model, no undistortion would be applied to an incoming image, which can quite significantly alter results, especially when using low-cost cameras.
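To illustrate why skipping undistortion matters for the default plumb_bob model, here is a self-contained sketch of how the 5-parameter distortion (k1, k2, p1, p2, k3) maps a normalized camera point to pixels; the intrinsics below are hypothetical, chosen only to show the size of the shift:

```python
def distort_plumb_bob(xy, D, K):
    """Apply the plumb_bob model (k1, k2, p1, p2, k3) to normalized
    image coordinates xy, then map to pixel coordinates with K."""
    x, y = xy
    k1, k2, p1, p2, k3 = D
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    fx, cx = K[0][0], K[0][2]
    fy, cy = K[1][1], K[1][2]
    return fx * x_d + cx, fy * y_d + cy

# Hypothetical intrinsics: even mild radial distortion (k1 = -0.1)
# shifts an off-axis point by over ten pixels, so projecting lidar
# points onto a raw (distorted) image without this step misaligns
# the overlay noticeably.
K = [[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]]
D = [-0.1, 0.01, 0.0, 0.0, 0.0]
u, v = distort_plumb_bob((0.5, 0.3), D, K)
```

The principal point (normalized (0, 0)) is unaffected, which is why errors are small in the image centre and grow towards the edges, consistent with the shift issues reported above.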

Recording my own data

Hi,
I want to know how you recorded the bag file of your lidar stream and camera stream at the same time. Basically, what are the steps to make your own dataset for calibration?

issue with rosbag play

Everything works fine until I run rosbag play -l --pause mybag.bag, where I get the following error. Could you please take a look and reply if you know a way around it?

issue

openCV issue with new cv version

catkin_ws/src/cam_lidar_calibration/include/cam_lidar_calibration/optimiser.h:10:10: fatal error: opencv/cv.hpp: No such file or directory
10 | #include <opencv/cv.hpp>
| ^~~~~~~~~~~~~~~
compilation terminated.

Solution:
In optimiser.h, change line 10:

// #include <opencv/cv.hpp>
#include <opencv2/opencv.hpp>

and in optimiser.cpp change line 427 to:
cv::reduce(trans_diff, summed_diff, 1, cv::REDUCE_SUM, CV_64F);

Error when running the quick start

Hi! I am facing an error when running the calibration process using the quick-start data provided. A snapshot of the error is shown below. Look forward for your reply! Thanks in advance! (Additional information: ROS Noetic, Ubuntu 20.04)
Quick Start Error

Transforming results to the lidar-to-camera frame

Hi, this package's extrinsic parameter results are for the camera-to-lidar frame, per the documentation. How do I obtain the extrinsic parameters for the lidar-to-camera frame? Is it: get the rotation matrix from the roll, pitch, yaw, combine it with the translation vector into a transformation matrix, and invert it? I'd appreciate any input!
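That is the standard approach. A sketch with numpy, under the assumption that roll/pitch/yaw are fixed-axis XYZ angles (the usual ROS convention; verify against this package's output): build the 4x4 camera-to-lidar matrix, then use the closed-form rigid inverse [Rᵀ, -Rᵀt] rather than a general matrix inverse.

```python
import numpy as np

def rpy_to_matrix(roll, pitch, yaw):
    """Rotation from fixed-axis XYZ (roll, pitch, yaw) angles, radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def make_transform(roll, pitch, yaw, t):
    """4x4 homogeneous transform from RPY angles and translation t."""
    T = np.eye(4)
    T[:3, :3] = rpy_to_matrix(roll, pitch, yaw)
    T[:3, 3] = t
    return T

def invert_rigid(T):
    """Invert a rigid transform: inv([R t]) = [R.T  -R.T @ t]."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Hypothetical camera-to-lidar result; its inverse is lidar-to-camera.
T_cam_to_lidar = make_transform(0.01, -0.02, 1.57, [0.1, -0.05, 0.2])
T_lidar_to_cam = invert_rigid(T_cam_to_lidar)
```

The rotation order (here Rz·Ry·Rx) and units (radians vs degrees) must match whatever convention the calibration output uses, or the inverse will be wrong in exactly the "a few degrees off" way described elsewhere in these issues.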

Cannot capture image using a manually crafted rosbag

Hi!
Thanks for your great work!
I used your algorithm to calibrate a lidar (VLP-16) and pinhole cameras before, and the results were satisfactory.
Now I am using a Robosense RS-Bpearl lidar and an external fisheye camera. The camera is well calibrated but cannot be used under ROS, which means we can't get the image topic and camera info topic using a driver like usb_cam.
We figured out how to make our devices meet the requirements of this algorithm; here is what we did:
1. Use external algorithms to publish camera_info topics (with the correct parameters).
2. Record different chessboard poses (camera_info and lidar_points recorded as a rosbag, images recorded as .jpg files).
3. Rename each image with the respective pose timestamp; external tooling then converts the images to a rosbag using those timestamps.
4. Merge the rosbags together, so we have one rosbag including the image topic, camera_info topic and lidar points topic.
rqt_bag visualized:
image

When we use this rosbag, we meet these problems:
image
As you can see, the chessboard is isolated properly, but the dimensions are estimated wrong; I don't know if this is caused by the lidar's internal distance calibration.
And every time I try to capture, the pcd can be saved but the image can't. After the pcd is saved, RViz no longer responds.
That's weird :(
Any suggestions would be appreciated!

Does it create a problem with ROS distributions?

First I cloned the git link that was given.
I have ROS Noetic, and I installed the Python dependencies just by changing the name of the ROS distribution from melodic to noetic.
When I ran catkin_make it threw a lot of errors in the code regarding unsigned int and some functions not being defined.

Calibration of a lidar which has no 'ring' value

Hi,
I am using a lidar which has no 'ring' value, and its scan pattern is different from Velodyne's.
I saw in your source code that the 'ring' is used to get the top left/right and bottom left/right of the board. So I want to ask: if there is no "ring" value in my point cloud data, can I still use the tool to calibrate? How?
Thanks.

How can I use the calibration .csv data?

I got the calibration result data in a csv file.
I want to use it, but I don't know how.
Is there an equation with which I can get the transform matrix from camera to lidar?
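One way to use the numbers, sketched with numpy (the matrix values below are hypothetical; the rotation block would be built from the csv's roll/pitch/yaw and the last column from its translation): stack rotation and translation into a 4x4 homogeneous matrix, then multiply homogeneous points by it.

```python
import numpy as np

def transform_point(T, p):
    """Apply a 4x4 homogeneous transform to a 3-D point."""
    ph = np.append(np.asarray(p, dtype=float), 1.0)
    return (T @ ph)[:3]

# Hypothetical camera-to-lidar transform: a 90-degree rotation about Z
# plus a small translation. Replace with values derived from your csv.
T = np.array([
    [0.0, -1.0, 0.0, 0.1],
    [1.0,  0.0, 0.0, 0.0],
    [0.0,  0.0, 1.0, 0.2],
    [0.0,  0.0, 0.0, 1.0],
])
p_out = transform_point(T, [1.0, 2.0, 3.0])
```

Points then move between the camera and lidar frames with a single matrix multiply, and the inverse direction is the matrix inverse of T.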

frame_id error Causing RViz Freeze During Background Capture

Hello, I'm trying out the offline mode with a saved bag. I followed the instructions in the README and updated the config file as directed. But when I hit the "capture background" button, RViz freezes and shuts down. I also get this warning:

[INFO] [1708884414.136332641]: Capturing background pointcloud
[WARN] [1708884414.137188861]: Invalid argument passed to canTransform argument source_frame in tf2 frame_ids cannot be empty

I checked the LiDAR topic, and the frame_id isn't empty.
Screenshot from 2024-02-25 18-38-33
