mfxox / ILCC
Intensity-based_Lidar_Camera_Calibration
License: BSD 2-Clause "Simplified" License
"These data are acquired with the chessboard file which contains 68 patterns and the length of one grid is 7.5cm if it is printed by A0 size."
I'm very confused by this statement; the numbers don't add up. A0 is 841 mm × 1189 mm, so how could 68 patterns with a 7.5 cm grid cover an A0-size sheet? I think the grid pattern is 400 mm × 600 mm, which fits on A1-size paper. Am I right?
from ILCC import pcd_corners_est
pcd_corners_est.detect_pcd_corners()
When the code above is run, the program stalls at
seg_co was segmented into 2045
twice_clustered_seg num=2045
/home/user/github/ILCC/DATA/output/pcd_seg/0001/0001block41.txt
passed
pca: [0.63413158 0.36268445 0.00318398]
1
marker is found!
It just sits there and never returns. Has anyone seen this behavior? How do I fix it?
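To find where the process is stuck, one generic option (a stdlib-only debugging sketch, not part of ILCC) is to let faulthandler dump every thread's stack after a timeout:

```python
# Stdlib-only sketch: dump all thread stacks if the program is still
# running after a timeout. The helper name and the sleep stand-in are
# illustrative, not ILCC code.
import faulthandler
import time

def watchdog_demo(log_path, timeout_s=0.5):
    """Write every thread's stack trace to log_path if we are still
    running after timeout_s seconds, then return the dump as text."""
    with open(log_path, "w") as f:
        faulthandler.dump_traceback_later(timeout_s, file=f)
        time.sleep(timeout_s * 2)  # stand-in for the stalled call
        faulthandler.cancel_dump_traceback_later()
    with open(log_path) as f:
        return f.read()
```

Calling `faulthandler.dump_traceback_later(60)` just before `pcd_corners_est.detect_pcd_corners()` would print the stalled call's stack to stderr after a minute, which should show whether it is spinning in segmentation or blocked elsewhere.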
Hello! Are you using a panoramic camera directly? Or is it a panoramic picture stitched from several monocular cameras?
Hi, I found that the current method only supports the VLP-32; the VLP-16 is not supported. I'm wondering when the VLP-16 will be supported?
Hi,
I'm wondering if anyone has succeeded in getting extrinsics between a 360 camera like the insta360 series and a Velodyne or Ouster Lidar. I've tried with the insta360 one R and an Ouster OS1-32, but my checkerboards are either poorly segmented in the point clouds, or when the segments look ok, the back projection of corner points from point cloud to the image is completely wrong. I'd appreciate any help provided.
Checkerboards are (6x8) with 7.5cm length of side for each square in the pattern. The bottom left corner is a black square so my config.yaml file has 'start_pattern_corner': 0. I've attached pictures of the output below.
Thanks
<< Description >>
After running step "6. Non-linear optimization for final extrinsic parameters",
the output images 0001_cal_backproj.png to 0004_cal_backproj.png are obviously wrong.
"img_corners_est" detects the corners successfully,
but the corners from the point cloud data don't match the corners from the image data.
Do I need to change config.yaml from the default values to work with the "ILCC_sample_perspective_data" data?
If so, please let me know the settings.
<< Details >>
I installed the most recent version of ILCC
and ran it with the sample data "ILCC_sample_perspective_data".
About config.yaml:
I changed the parameters below.
'base_dir' : the data directory
'poses_num' : 20 → 4 (there are only 4 poses in the sample data)
After the command "LM_opt.cal_ext_paras()",
the contents of "YYYYMMDD_HHMMSS_calir_result.txt" are below.
4.715408122537658286e+00
2.337674434525445211e+00
1.106741906490246730e-01
9.065658045971332069e-01
2.508680940249127245e-01
-2.121439663123081965e-01
<< Verification >>
I tried only the code from "After the aforementioned process, the utility module can be imported for visualizing various results."
with "ILCC_sample_perspective_data",
and the output images 0001_cal_backproj.png to 0004_cal_backproj.png look fine.
After the command "LM_opt.cal_ext_paras()",
the contents of "YYYYMMDD_HHMMSS_calir_result.txt" are below.
1.338479622733038943e-01
-1.524339677023719419e+00
1.441150134721521425e+00
-6.890869881762318530e-03
-1.935656845054284925e-01
-3.367990535562562227e-01
<< My Environment >>
OS : Ubuntu16.04 64bit
ILCC : most recent version (as of 2019/1/10)
OpenCV : 3.3.1-dev
MATLAB : 9.5.0.944444(R2018b)
PCL : 1.7
VTK : 8.1.1
python : 2.7.15
I get an index error when passing a CSV file for corner detection. The CSV has 5000+ rows of VLP-16 lidar data.
Is there an easy way to convert a bag to csv, or a pcd to csv, in the suggested format?
The usual velodyne_points-topic-to-csv output does not have columns like laser_id | azimuth | distance | timestamp;
it has only x, y, z, intensity and ring.
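The missing geometric columns can be derived from x, y, z. Below is a per-row conversion sketch; the output column order is an assumption (check it against the demo CSV and the intensity_col_ind setting), and per-point timestamps cannot be recovered from x/y/z alone, so they must come from the message header or packet data:

```python
# Hypothetical per-row converter from the velodyne_points CSV layout
# (x, y, z, intensity, ring) to an ILCC-style row. Column order and
# units are assumptions to verify against the demo final.csv.
import math

def xyz_row_to_ilcc(x, y, z, intensity, ring, stamp):
    """laser_id is taken as the ring index; azimuth is measured from
    the x axis in degrees [0, 360); distance is the 3D range in metres."""
    azimuth = math.degrees(math.atan2(y, x)) % 360.0
    distance = math.sqrt(x * x + y * y + z * z)
    # stamp must be supplied externally (e.g. the message header time).
    return [ring, azimuth, distance, stamp, x, y, z, intensity]
```

Applying this to every row of the exported CSV yields the laser_id/azimuth/distance columns; whether ILCC also needs per-laser timestamps from the raw packets is worth checking in the demo data.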
I use the "config.yaml" with params
'base_dir': "DATA" #The path of directory including img,pcd,out folders.
'image_res': (2736,5472) #image resolution for panoramic image only
'backend': 'opencv' #backend of detecting corners from the image, "matlab" or "opencv"
'output_img_with_dectected_corners': True #output the image with detected corners
'back_proj_corners': True #back-project the detected corners of the point cloud to the images
'camera_type': 'panoramic' #camera model: 'panoramic' or 'perspective'
'instrinsic_para': (166.87739756, 0., 311.29512622, 0., 334.91696616, 781.89612824, 0., 0.,1.)
intrinsic parameters are necessary for perspective cameras (1,2,3,4,5,6,7,8,9)→ [1,2,3, 4,5,6, 7,8,9]
Parameters for segmenting the point cloud
'file_name_digits': 4 #filename format
'LiDAR_type': 'vlp16_puck' # available choices 'vlp16_puck', 'hdl32', 'hdl64'
'laser_beams_num': 16 #The number of the laser beams of the Velodyne LiDAR
'jdc_thre_ratio': 8 #The ratio of the adaptive threshold for clustering scanlines
'agglomerative_cluster_th_ratio': 8 #The ratio of the adaptive threshold for combining the scanline clusters
Parameters for detecting chessboard from segmentation result
'pattern_size': (8,8) #number of vertical and horizontal patterns
'grid_length': 0.035 #the length of one grid of the pattern [m]
'chessboard_detect_planar_PCA_ratio': 0.05 # The threshold of the planarity check of the potential chessboard with PCA
'marker_range_limit': 2 #The farthest range of the chessboard for filtering irrelevant segments. Segments whose centroid is farther from the Velodyne than this threshold will not be considered as a potential chessboard.
Parameters for 3D corner detection from detected chessboard point cloud segment
'start_pattern_corner': 0 #The color of the pattern at the bottom left of the chessboard: 0 for black, 1 for white
'intensity_col_ind': 6 # The index of intensity column (counting starts from 0). !!! Wrong number will cause the failure of corner detection in pcd files.
Settings for multi-processing
'multi_proc': False #Use multiple processing
'proc_num': 2 #The number of cores to use for multiple processing
'poses_num': 7 #The number of image-laserscan pairs
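The flattened 'instrinsic_para' tuple above maps to a 3×3 camera matrix row by row, as the "(1,2,3,4,5,6,7,8,9) → [1,2,3, 4,5,6, 7,8,9]" comment suggests. A minimal sketch of that reshape (the helper name is mine, not ILCC's):

```python
# Sketch: unflatten the 9-value 'instrinsic_para' tuple from config.yaml
# into the usual row-major camera matrix K = [[fx, 0, cx],
#                                            [ 0, fy, cy],
#                                            [ 0,  0,  1]].
def to_camera_matrix(instrinsic_para):
    """Return the 3x3 camera matrix as nested lists (row-major)."""
    return [list(instrinsic_para[i * 3:(i + 1) * 3]) for i in range(3)]
```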
And when I use the script:
from ILCC import utility
utility.vis_back_proj(ind=1, img_style="orig", pcd_style="dis", hide_occlussion_by_marker=False)
utility.vis_back_proj(ind=1, img_style="orig", pcd_style="dis", hide_occlussion_by_marker=True)
utility.vis_back_proj(ind=1, img_style="edge", pcd_style="intens", hide_occlussion_by_marker=True)
What is the reason for this?
Hi, thanks for your contribution!
From your paper, I think it is essential that the chessboard doesn't have a white border, am I right?
I ask because OpenCV's chessboard detection works best with a white border around the grid.
I have also tried your ILCC_sample_perspective_data,
but I cannot extract the image corners with OpenCV. Is that expected?
When I was using my own lidar, I found that the data format in the final.csv file was different from that in the demo. How should I get the correct data?
The model of my lidar is the Velodyne VLP-16
Hi,
thank you for the amazing work you provided!
I have the following question: is it really important that the target is rotated by 45 degrees (as in the demo data you provided), and why?
I am trying to use this calibration on my own data. In my data, the checkerboard target is parallel to the ground (not rotated by 45 degrees), and in 80-90% of cases I get a wrong checkerboard estimation like this:
How could I mitigate this problem?
Error at line 18 in config.py
params = yaml.load(default_params_yaml)
Error:
local/lib/python2.7/site-packages/ILCC-0.2.dev0-py2.7.egg/ILCC/config.py:18: YAMLLoadWarning:
*** Calling yaml.load() without Loader=... is deprecated.
*** The default Loader is unsafe.
*** Please read https://msg.pyyaml.org/load for full details.
params = yaml.load(default_params_yaml)
https://github.com/yaml/pyyaml/wiki/PyYAML-yaml.load(input)-Deprecation
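The warning is easy to silence. A sketch of the one-line change (assuming config.py only needs plain data, so PyYAML's safe loader suffices; the function name here is mine):

```python
# Sketch of the fix for the YAMLLoadWarning at config.py line 18:
# use the safe loader explicitly instead of the deprecated bare yaml.load().
import yaml

def load_params(default_params_yaml):
    """Parse the default parameter YAML with an explicit safe loader."""
    # was: params = yaml.load(default_params_yaml)
    return yaml.safe_load(default_params_yaml)
```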
I use a VLP-16 and a Flea3 monocular camera to do the calibration. Corner detection for the camera and the VLP-16 finishes without any problems.
But after running the command:
from ILCC import LM_opt
LM_opt.cal_ext_paras()
the calibration result is output as follows:
I tried increasing jdc_thre_ratio and agglomerative_cluster_th_ratio in config.yaml, but it did not work.
Could you please check the problem and answer? Thanks a lot! If you need any files or more information, please ask.
Hi, so can ILCC be used only with panoramic images? What about a single normal camera?
Hi all,
I am trying to run ILCC for calibration between a panoramic camera and an HDL-32 LiDAR. I am still in the compilation phase and tried to run one of the available examples.
I installed ILCC and opened it as a PyCharm project. The following requirements in requirements file were installed properly:
PyYAML>=3.12
numpy>=1.12.1
sklearn>=0.0
scipy>=0.19.0
transforms3d>=0.3
matplotlib>=2.0.0
Cython
boost
pcl
Note: the installed versions are PCL 1.12 and Boost 1.74, and I am using Python 3 on Ubuntu 22.04.1 LTS.
I couldn't install python-pcl because its setup.py is set up for an older version of PCL. I manually replaced "pcl_version == '-1.9'" with "pcl_version == '-1.12'" in setup.py, but this doesn't help, and the command python3 setup.py install leads to the errors you can see in the attached snapshot problem1.png.
However, I managed to run img_corners_est.detect_img_corners() and it works fine, but I couldn't run pcd_corners_est.detect_pcd_corners(), which leads to the error:
File "/home/samer/Desktop/ILCC/ILCC/pcd_corners_est.py", line 751, in seg_pcd
import pcl
ModuleNotFoundError: No module named 'pcl'
More details are in the attached snapshot problem2.png. Have you faced such a problem before, or managed to install and compile ILCC recently?
Thanks in advance
Samer
The detected_corners images are not being saved correctly.
I think the error is in img_corners_est.py at lines 61-62:
ret, corners = cv2.findChessboardCorners(img, size,
flags=cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_NORMALIZE_IMAGE + cv2.CALIB_CB_FAST_CHECK)
corners_reshaped=corners.reshape((size[1],size[0],2))
Because corners is of NoneType, the reshape call cannot work.
My thoughts may not be correct, but please reply.
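When cv2.findChessboardCorners finds no board, ret is False and corners is None, so the unconditional reshape raises on NoneType. A minimal guard sketch (hypothetical wrapper; the cv2 call itself is as quoted in the issue, and only the reshape logic is reproduced here):

```python
# Hypothetical None-guard around the reshape from img_corners_est.py.
# The detection call is assumed to have already run:
#   ret, corners = cv2.findChessboardCorners(img, size, flags=...)
def reshape_corners(ret, corners, size):
    """Reshape detected corners into (rows, cols, 2) only when detection
    succeeded; return None otherwise so the caller can skip the frame."""
    if not ret or corners is None:
        # Common causes: wrong pattern_size, blur, low contrast,
        # or no white border around the board.
        return None
    return corners.reshape((size[1], size[0], 2))
```

Skipping (or logging) frames where this returns None avoids the crash and makes it obvious which images failed detection.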
Does this repository need the MATLAB software?
I have not installed MATLAB on my desktop right now, but I hope to run ILCC on my Linux system.
Thank you for your great work.
For chessboard detection from the point cloud, does the chessboard need to be mounted on a stand, or can a person hold it? Would that affect the detection of the chessboard?
hi,
First of all, I want to commend this amazing work. Secondly, it is of great help to my project. I would now like to evaluate the extrinsic parameters obtained from my own 64-beam lidar data, but I have not found where the reprojection error is calculated in the code. Could you point it out or provide the code?
Best wishes to you.
thank you.
@mfxox
Hi, mfxox
I have another problem using the VLP-32C:
the vertical angular resolution of this sensor is non-uniform,
so we cannot use a single vertical angular resolution to verify the segments.
Could you please tell me whether there is a solution to this problem,
and how I can modify the code?
Thanks, again!
Hi, Thanks for providing a tool for camera-lidar sensor calibration.
I would like to ask: is this camera-lidar calibration tool applicable to a monocular camera and a VLP-32C Velodyne 3D LiDAR?
Thanks.
Could you please clarify how you calculate the re-projection error for perspective images? In the LM_opt.py script's cost_func(), why didn't you use the same calculation for both panoramic and perspective images?
Also, there is a function in the script that isn't used at all (cost_func_min()); what is it for?
I am using perspective images and I need to know whether my extrinsic calibration is accurate, which is why I want to find the re-projection error.
Thanks.
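For a perspective camera, a re-projection error can be computed directly from the pinhole model, independently of LM_opt.py. A minimal sketch (the helper name and argument layout are mine, not the library's): transform each lidar corner into the camera frame with the extrinsics, project with the intrinsics, and take the RMS pixel distance to the detected image corners.

```python
# Hypothetical RMS re-projection error for a pinhole camera:
# p_cam = R @ p_lidar + t, then u = fx*X/Z + cx, v = fy*Y/Z + cy.
import math

def reproj_error_px(pts_lidar, pts_px, R, t, fx, fy, cx, cy):
    """pts_lidar: list of (X, Y, Z) in the lidar frame;
    pts_px: matching list of detected (u, v) image corners;
    R (3x3 nested lists), t (length-3): lidar-to-camera extrinsics."""
    err2 = 0.0
    for (X, Y, Z), (u_obs, v_obs) in zip(pts_lidar, pts_px):
        # Rigid transform into the camera frame.
        Xc = R[0][0] * X + R[0][1] * Y + R[0][2] * Z + t[0]
        Yc = R[1][0] * X + R[1][1] * Y + R[1][2] * Z + t[1]
        Zc = R[2][0] * X + R[2][1] * Y + R[2][2] * Z + t[2]
        # Pinhole projection (no lens distortion in this sketch).
        u = fx * Xc / Zc + cx
        v = fy * Yc / Zc + cy
        err2 += (u - u_obs) ** 2 + (v - v_obs) ** 2
    return math.sqrt(err2 / len(pts_px))
```

Feeding it the optimized extrinsics and the corner pairs from both detectors gives a single pixel-level accuracy figure; distortion would need to be added if the images are not undistorted first.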
Hello, can you explain the cost function in your paper in more detail?
I'm using the code to calibrate the extrinsic parameters between a Velodyne 64S3 and a monocular camera. But the samples work with a panoramic camera; would it also work well with a monocular camera?
Hi, when I download the example and try to test ILCC_sample_perspective_data,
I can finish the steps "from ILCC import img_corners_est" and "from ILCC import pcd_corners_est", but when I run "LM_opt.cal_ext_paras()", it shows
double free or corruption (out)
Aborted (core dumped)
and I find the error is located at:
transformation = pyopengv.absolute_pose_upnp(img_bearing_vectors,pcd_bearing_vectors)[0]
I've installed the opengv package and can import it without error.
Can you tell me how to fix this error? Thank you!
I deleted the bf and crf layers and the loss began to decrease. However, the IoU is really poor (0.03 for car and 0.0001 for cyclist) and the output in TensorBoard is really bad.
I use C++ OpenCV to extract the corners of the image, and with board size (7,5) the corners can't be correctly projected onto the image. What is going on? (I now use a VLP-16.)
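One frequent cause of this kind of failure is that OpenCV's pattern size counts *inner* corners, not squares: a board with 8×6 squares has (7,5) inner corners. A trivial sketch of that relationship (the helper name is mine):

```python
# OpenCV's findChessboardCorners expects the number of INNER corners,
# which is one less than the number of squares along each side.
def inner_corner_size(squares_x, squares_y):
    """Map a board's square counts to the (cols, rows) pattern size
    that cv2.findChessboardCorners expects."""
    return (squares_x - 1, squares_y - 1)
```

So (7,5) is correct for an 8×6-square board; if the board actually has 7×5 squares, the detector should be given (6,4) instead, and a mismatch here makes detection or projection fail.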