Comments (42)
I can finally filter normally! I gave up on online calibration and used offline calibration instead, and now I can filter the point cloud normally.
from cam_lidar_calibration.
Hi @juliangaal, thanks for the reminder. I have tried out this branch and it works for me now! I can use the rqt filter! I really appreciate it. The board dimension error is still high, though, and most samples don't get accepted :) I will consider collecting new data.
@hoangduyloc
Rotate the chessboard so it is in the shape of a diamond (at an angle of 45° with respect to the ground) and mount it on a stand.
@Haishanliu have you set up the topic names in the config files as suggested in the README?
for example only
camera_topic: "/gmsl/A0/image_color"
camera_info: "/gmsl/A0/camera_info"
lidar_topic: "/velodyne/front/points"
Hi @jclinton830 ,
Thank you for your response. Yes, I have changed the params.yaml and camera_info.yaml according to my dataset.
distortion_model: "plumb_bob"
width: 4112
height: 2176
D: [-0.17330185428381, 0.11095826068889, -0.00052133492862747, -0.00027085005462575, -0.017576435343066]
K: [2372.4550420066, 0, 2063.399169011, 0, 2372.4550420066, 1132.822895508, 0, 0, 1]
# Topics
camera_topic: "/Bsw_ImgFrntRawConvert"
camera_info: "/Bsw_ImgCamInfo"
lidar_topic: "/Bsw_CloudCentrRawConvert"

# Dynamic rqt_reconfigure default bounds
feature_extraction:
  x_min: -10.0
  x_max: 10.0
  y_min: -8.0
  y_max: 8.0
  z_min: -5.0
  z_max: 5.0

# Properties of chessboard calibration target
chessboard:
  pattern_size:
    height: 9
    width: 7
  square_length: 100
  board_dimension:
    width: 1016
    height: 1220
  translation_error:
    x: 0
    y: 0
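The chessboard values above can be cross-checked quickly. A minimal sketch, assuming all lengths are in millimetres and that pattern_size counts internal corners (so the checkered area spans one more square than corners per axis):

```python
# Sanity check on the chessboard config above. Assumptions: lengths are in
# millimetres and pattern_size counts internal corners, so the checkered
# area spans (corners + 1) squares along each axis.
pattern_height, pattern_width = 9, 7        # internal corners
square_length = 100                         # mm
board_width, board_height = 1016, 1220      # outer board size, mm

checkered_w = (pattern_width + 1) * square_length    # mm
checkered_h = (pattern_height + 1) * square_length   # mm

assert checkered_w <= board_width, "pattern wider than the board"
assert checkered_h <= board_height, "pattern taller than the board"
print(checkered_w, checkered_h)  # 800 1000
```

If the asserts fire, either the units are mixed (mm vs m) or pattern_size counts squares rather than internal corners.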
I am currently trying your v2 package as well. The dynamic rqt window still doesn't apply to my data: I can't see the point cloud change after I apply the filter. Am I supposed to wait longer? BTW, I made a few changes to get the package to compile with my ROS Noetic version.
In optimiser.h:
- change #include <opencv2/cv.hpp> to #include <opencv/imgproc.hpp>
- add #include <opencv2/calib3d.hpp>
- add #include <opencv2/core/core_c.h>
In feature_extractor.h:
- add #include <opencv2/imgcodecs.hpp>
Thank you
I think I encountered the same error, but fixed it in this PR. Could you try those changes? Have a look at the diff, especially the FeatureExtractor::serviceCB function.
Happy to hear! Your data looks pretty dense, so that doesn't seem to be the issue. I did get pretty good results while evaluating the package, even when holding the checkerboard in my hands, but as far from my body as possible, for easier filtering. In the long term, I will be mounting the board to a tripod, though.
Are you mounting the board to anything? The images suggest you are holding it in your hands
My changes are now in the "noetic" branch of this repo, see #38
I think I encountered the same error, but fixed it in this PR. Could you try those changes? Have a look at the diff, especially the FeatureExtractor::serviceCB function.
I have already changed these four files, but I still can't use rqt to filter the chessboard.
When I play my bag, the camera image shows up but no point cloud. According to the tutorial, the point cloud should show up automatically. If I add my lidar topic in Displays, the point cloud shows up, but rqt still has no effect.
Also, when I try to capture samples, there's a warning in the terminal; see the attached picture.
Hi all, thanks for your comments @Haishanliu @jclinton830 @juliangaal @hankge,
I tried this source code and the whole process seems right, but at "CAPTURE SAMPLE" I get a RANSAC error.
Before filtering the lidar zone:
After filtering the lidar zone:
Have you met this problem, or do you know how to fix it? Thank you so much.
same problem
@Haishanliu Have you solved this problem? I have already applied https://github.com/acfr/cam_lidar_calibration/pull/38, but I still can't filter.
@hoangduyloc Rotate the chessboard so it is in the shape of a diamond (at an angle of 45° with respect to the ground) and mount it on a stand.
Thank you, it worked when I moved the board closer to the LiDAR and changed the rotation angle (4° → 60°).
@gzchenjiajun make sure you have the correct camera intrinsic parameters as well.
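A cheap way to spot obviously wrong intrinsics, reusing the K matrix posted earlier in this thread: the principal point (cx, cy) should land near the image centre. This is only a heuristic sketch, not a full validation:

```python
# Heuristic check on the intrinsics posted above: the principal point
# (cx, cy) taken from the flattened K matrix should sit near the image
# centre for a typical non-shifted lens.
width, height = 4112, 2176
K = [2372.4550420066, 0, 2063.399169011,
     0, 2372.4550420066, 1132.822895508,
     0, 0, 1]
cx, cy = K[2], K[5]

print(abs(cx - width / 2) / width < 0.05)    # True: cx is ~0.2% off centre
print(abs(cy - height / 2) / height < 0.05)  # True: cy is ~2% off centre
```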
Do you mean that the camera parameters must be correct before the filter operation can crop out the calibration board? If the camera parameters are wrong, will the filter operation not work correctly? @hoangduyloc
I am actually running the filter without a calibration board right now; I want to test it first. But I have a question: for a 3D point-cloud cropping operation, why do I need the camera's intrinsic parameters? @hoangduyloc
I am actually running the filter without a calibration board right now; I want to test it first. But I have a question: for a 3D point-cloud cropping operation, why do I need the camera's intrinsic parameters? @hoangduyloc
Yes, I think you do, because this code finds the projection matrix from the LiDAR to the camera, so it needs corresponding points between the two sensors; if your camera parameters are incorrect, it can affect the RANSAC matching.
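To see where the intrinsics enter: a lidar point X is projected into the image as p = K(RX + t), so a wrong K shifts every lidar-camera correspondence that RANSAC tries to match. A minimal numpy sketch; R and t are placeholder values, and K reuses the intrinsics posted earlier in this thread:

```python
import numpy as np

# Project a lidar point into the image with intrinsics K and an extrinsic
# rotation R / translation t. R and t here are placeholders (identity /
# zero); K is the intrinsic matrix posted earlier in the thread.
K = np.array([[2372.4550420066, 0.0, 2063.399169011],
              [0.0, 2372.4550420066, 1132.822895508],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                  # placeholder lidar->camera rotation
t = np.array([0.0, 0.0, 0.0])  # placeholder translation

X_lidar = np.array([1.0, 0.5, 5.0])  # a point 5 m in front of the sensor
p = K @ (R @ X_lidar + t)            # homogeneous pixel coordinates
u, v = p[0] / p[2], p[1] / p[2]      # pixel position after dehomogenising
print(round(u, 1), round(v, 1))
```

An error in K moves (u, v) directly, which is why bad intrinsics can make every board correspondence look like an outlier to RANSAC.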
I am actually running the filter without a calibration board right now; I want to test it first. But I have a question: for a 3D point-cloud cropping operation, why do I need the camera's intrinsic parameters? @hoangduyloc
Yes, I think you do, because this code finds the projection matrix from the LiDAR to the camera, so it needs corresponding points between the two sensors; if your camera parameters are incorrect, it can affect the RANSAC matching.
OK, I will test tomorrow. I will fill in the correct camera parameters and then do the min/max filter operation. @hoangduyloc
I would like to ask: right now I only adjust the min/max values of feature_extraction, and the point cloud still can't be filtered normally. Are we talking about the same problem? @hoangduyloc
I don't think it is caused by that. The camera intrinsics shouldn't affect my ability to extract the calibration board's point cloud for filtering. @hoangduyloc
I have already changed these four files, but I still can't use rqt to filter the chessboard.
When I play my bag, the camera image shows up but no point cloud. According to the tutorial, the point cloud should show up automatically. If I add my lidar topic in Displays, the point cloud shows up, but rqt still has no effect.
Also, when I try to capture samples, there's a warning in the terminal; see the attached picture.
Same problem here; how did you solve it? @hankge
@gzchenjiajun make sure you have the three topics:
- camera
- camera info
- lidar point cloud.
Could you check their header times? Are they close to each other? We use ApproximateTimeSynchronizer to pair samples from the camera topics and the lidar. Keep in mind that some lidars have an internal clock and generate the point cloud timestamps from it; you will need to sync that clock with your computer.
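The matching criterion behind ApproximateTimeSynchronizer can be sketched in plain Python (a toy model, not the real implementation): the callback only fires when the header stamps of all subscribed topics fall within a slop window.

```python
def stamps_match(stamps, slop=0.1):
    """Toy model of the ApproximateTimeSynchronizer criterion: all header
    stamps (in seconds) must lie within one slop window of each other."""
    return max(stamps) - min(stamps) <= slop

# Camera at t=10.00 s, camera_info at t=10.02 s, lidar at t=10.05 s: matches.
print(stamps_match([10.00, 10.02, 10.05]))  # True
# A lidar running on its own internal clock, 3 s off: never matches,
# so no samples are ever produced.
print(stamps_match([10.00, 10.02, 13.05]))  # False
```

This is why an unsynced lidar clock makes the tool appear to do nothing: every candidate triple is rejected.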
![Pasted Graphic](https://private-user-images.githubusercontent.com/13104274/257979076-23eca400-201e-4458-a60b-fce6f51bf5c7.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTEwNTMyOTIsIm5iZiI6MTcxMTA1Mjk5MiwicGF0aCI6Ii8xMzEwNDI3NC8yNTc5NzkwNzYtMjNlY2E0MDAtMjAxZS00NDU4LWE2MGItZmNlNmY1MWJmNWM3LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDAzMjElMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwMzIxVDIwMjk1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTIxOWU3YWU5M2FlOTYxMTJjNTYwMmVkMGM0MWYxYWU3NDIxM2Q0YjE5YjU3OTc4YTNlM2FhYjc2YmZkNjA2YWUmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.5YLzWXx99mtsN-NLdcky-cyO2d0n-62_SpUNY8atEFs)
Hello, this is my yaml configuration. I modified camera_topic and lidar_topic, which display normally in rviz, but I did not modify camera_info (because I don't know what to change it to). Roughly as follows:
I did the rqt filtering, but the point cloud remained unchanged.
@chinitaberrio
What is the header time? How do I view or configure it?
The modified rqt values show up normally in the terminal.
@gzchenjiajun camera_info is a topic that your system needs to publish (http://wiki.ros.org/Sensors/Cameras); you can find more information here: http://wiki.ros.org/image_pipeline/CameraInfo
It seems your point cloud has an empty frame_id; you need to change that in your lidar driver.
@chinitaberrio
camera_info_pub = rospy.Publisher('/camera_info', CameraInfo, queue_size=10)
OK, camera_info is published, and the yaml is also configured.
I'll keep trying with the point cloud question.
“Seems your point cloud has an empty frame_id, you need to change that in your lidar driver.”
After my analysis, it should not be related to an empty frame_id, because the point cloud is being received normally. Could the problem be somewhere else, such as the settings in the Displays panel?
@chinitaberrio
from std_msgs.msg import Header
header = Header()
header.stamp = rospy.Time.now()
I see what you're saying: this part needs time synchronization.
I'll change it and test with ApproximateTimeSynchronizer again.
Failed to find match for field 'intensity'.
Failed to find match for field 'ring'.
most likely you need to change the point type.
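For reference, the XYZIR layout can be written down as a numpy structured dtype; its offsets and itemsize are exactly what must appear in the PointCloud2 fields array and point_step. This is a sketch of one consistent packed layout, not necessarily the exact padding your driver uses:

```python
import numpy as np

# One consistent XYZIR layout: four float32 fields plus a uint16 ring.
# With explicit offsets and itemsize, numpy reports the values that must
# match the PointCloud2 `fields` array and `point_step`.
point_dtype = np.dtype({
    'names':    ['x', 'y', 'z', 'intensity', 'ring'],
    'formats':  [np.float32, np.float32, np.float32, np.float32, np.uint16],
    'offsets':  [0, 4, 8, 12, 16],
    'itemsize': 18,  # point_step when the fields are tightly packed
})

pt = np.zeros(1, dtype=point_dtype)
pt['x'], pt['intensity'], pt['ring'] = 1.5, 80.0, 10
print(point_dtype.itemsize)  # bytes per point
print(pt.tobytes()[16:18])   # ring stored as a genuine 2-byte uint16
```

Serialising with `pt.tobytes()` then guarantees the bytes agree with the declared field datatypes.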
Yes, I had not added the intensity and ring fields before; now that I have added them, I still cannot filter.
My Displays panel looks like this. Which option should I point at my point cloud topic (/point_cloud)? Should I add a new PointCloud2 display, or change the topic on the chessboard entry?
@chinitaberrio
thks!!!!
The topic that publishes the filtered point cloud is /feature_extraction/experimental_region
I am now publishing the point cloud to /bonson/point_cloud with the fields x, y, z, intensity, ring. However, when cam_lidar_calibration.rviz starts, the min/max filter in rqt_reconfigure still does not work.
/feature_extraction/experimental_region also has no data (no matter how I adjust the min/max values in rqt). The params are configured correctly. If I view /bonson/point_cloud directly, I can see normal point cloud data.
![Pasted Graphic 3](https://private-user-images.githubusercontent.com/13104274/258036779-0117c1c0-a822-44f2-a8c0-e2b50e9136a8.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTEwNTMyOTIsIm5iZiI6MTcxMTA1Mjk5MiwicGF0aCI6Ii8xMzEwNDI3NC8yNTgwMzY3NzktMDExN2MxYzAtYTgyMi00NGYyLWE4YzAtZTJiNTBlOTEzNmE4LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDAzMjElMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwMzIxVDIwMjk1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTJiY2ZjZDkwOTIxMzFhNDg0NmU4OWVmYjNiNzc5YjE1ZWEyOWRiMTIzNDUxNDFiNTA3ZDU1OWJjZjUxY2FmZjMmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.TlfxbNsT6tnxv4LiB_KgpF6uirKzPAYFvd6jrTuojcY)
@chinitaberrio thks!
This is the terminal output; it looks normal.
/feature_extraction/experimental_region has no data.
Provide a screenshot with the following information:
Terminal 1
rostopic echo /bonson/point_cloud/header
Terminal 2
rostopic echo /bonson/camera_image/header
Terminal 3
rostopic echo /bonson/camera_info/header
Change the frame in rviz to the lidar frame
I set the same your_frame_id for all three nodes
rostopic echo /bonson/point_cloud/header
![Pasted Graphic 9](https://private-user-images.githubusercontent.com/13104274/258042813-4c65e54d-7b5b-4d73-9d3b-a72a38e0b260.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTEwNTMyOTIsIm5iZiI6MTcxMTA1Mjk5MiwicGF0aCI6Ii8xMzEwNDI3NC8yNTgwNDI4MTMtNGM2NWU1NGQtN2I1Yi00ZDczLTlkM2ItYTcyYTM4ZTBiMjYwLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDAzMjElMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwMzIxVDIwMjk1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTlkZTNhNDhlZGRjMzQwMWI3ZDNhNGFmNGFkZGRhMWViZWQ3YmIxYmQwNTI3N2JhNDlkMDA1NmRmMTJmMDA2NzAmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.karQ6MQvyHGOniII-5RIpame4fwZVUGbyLv-Knpd2tg)
rostopic echo /bonson/camera_image/header
![Pasted Graphic 10](https://private-user-images.githubusercontent.com/13104274/258042868-24c60328-6998-4680-bc02-2277e7bd6121.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTEwNTMyOTIsIm5iZiI6MTcxMTA1Mjk5MiwicGF0aCI6Ii8xMzEwNDI3NC8yNTgwNDI4NjgtMjRjNjAzMjgtNjk5OC00NjgwLWJjMDItMjI3N2U3YmQ2MTIxLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDAzMjElMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwMzIxVDIwMjk1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWNiNzI0MjFmY2M4MGQyODc4ZjA1NmM1YmI0YmVkZGI2ZDI4ZjA2NjNmMDFjZGNhNmUyOGU3OTc2Yjc4NTA2ZWEmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.IrTbweBR08CigU_1m1riwEjDgikN1Jinu624oH3n1eo)
rostopic echo /bonson/camera_info/header
![Pasted Graphic 11](https://private-user-images.githubusercontent.com/13104274/258042903-0917a097-864f-4f0e-a36d-72e05dd9b59d.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTEwNTMyOTIsIm5iZiI6MTcxMTA1Mjk5MiwicGF0aCI6Ii8xMzEwNDI3NC8yNTgwNDI5MDMtMDkxN2EwOTctODY0Zi00ZjBlLWEzNmQtNzJlMDVkZDliNTlkLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDAzMjElMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwMzIxVDIwMjk1MlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWI1MzVmMDhmMTQ3MTIzNzIzZjdmOWE4NThkYmZhNTYxMzM1NjE2ZjJiZWRiZmIxYmRlZjhmYTc2Zjk2Mjg2NDkmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.saRfDxZDGQdbykyXUVZVXUEI2PQ5Fpx3Eo28RfgWLZU)
I've tried setting it to a different frame_id and the problem is the same
I set the Fixed Frame in rviz to map or "your_frame_id".
I got a new computer with Ubuntu 18 and Melodic installed, followed the documented process again, and still cannot filter normally. Can you help me figure out how to solve this?
I wonder if there is something wrong with how the data is published, since my frame_id has been modified. Or is there a problem with how the XYZIR data is published?
Here is my publishing code:
import rospy
import numpy as np
from sensor_msgs.msg import PointCloud2, PointField
from std_msgs.msg import Header
from services.bonson_depth_track_detection_baseline.service import get_irnp_frame


def numpy_to_pointcloud2(point_cloud_data):
    # Build the PointCloud2 message
    msg = PointCloud2()
    msg.header = Header()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "base_link"
    msg.height = 1  # the point cloud is a 1-D array, so height is 1
    msg.width = point_cloud_data.shape[0]  # number of points in the cloud
    # Declare the point fields
    msg.fields.append(PointField(name="x", offset=0, datatype=PointField.FLOAT32, count=1))
    msg.fields.append(PointField(name="y", offset=4, datatype=PointField.FLOAT32, count=1))
    msg.fields.append(PointField(name="z", offset=8, datatype=PointField.FLOAT32, count=1))
    msg.fields.append(PointField(name="intensity", offset=12, datatype=PointField.FLOAT32, count=1))
    msg.fields.append(PointField(name="ring", offset=16, datatype=PointField.UINT16, count=1))
    msg.is_bigendian = False
    msg.point_step = 20  # bytes per point (5 columns x 4 bytes)
    msg.row_step = point_cloud_data.shape[0] * msg.point_step
    msg.is_dense = True
    data = np.zeros((len(point_cloud_data), 5), dtype=np.float32)
    data[:, 0:3] = point_cloud_data
    data[:, 3] = 80  # constant intensity
    data[:, 4] = 10  # default ring value of 10
    msg.data = data.tobytes()
    return msg


def publish_pointcloud():
    # Create the point cloud publisher
    pub = rospy.Publisher('/bonson/point_cloud', PointCloud2, queue_size=10)
    rate = rospy.Rate(20)  # publish rate in Hz
    while not rospy.is_shutdown():
        # Fetch the latest point cloud data
        ir_list = get_irnp_frame()
        if ir_list is not None:
            close_ir = ir_list[-1]
            if close_ir is not None:
                # Convert the numpy point cloud to a PointCloud2 message
                pointcloud_msg = numpy_to_pointcloud2(close_ir)
                pub.publish(pointcloud_msg)
        rate.sleep()


if __name__ == '__main__':
    try:
        rospy.init_node('pointcloud_publisher')
        publish_pointcloud()
        rospy.spin()
    except rospy.ROSInterruptException:
        pass
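One mismatch worth checking offline in a publisher like the one above: the fields array declares ring as UINT16 at offset 16, while the buffer packs five float32 columns, so a consumer that honours the declared field reads back a different value than was written. A plain-numpy illustration:

```python
import numpy as np

# The publisher above packs every column as float32 (point_step = 20)
# but declares `ring` as UINT16 at offset 16. A consumer honouring the
# declared field therefore reads the low two bytes of float32(10.0):
point = np.array([[1.0, 2.0, 3.0, 80.0, 10.0]], dtype=np.float32)
buf = point.tobytes()

ring_as_declared = int(np.frombuffer(buf[16:18], dtype=np.uint16)[0])
print(ring_as_declared)  # 0, not the 10 that was written
```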
@chinitaberrio thks!!
I assume you ran these at different times, so it doesn't look like the topics are out of sync.
I set the same your_frame_id for all three nodes rostopic echo /bonson/point_cloud/header
rostopic echo /bonson/camera_image/header
rostopic echo /bonson/camera_info/header
I've tried setting it to a different frame_id and the problem is the same
The Fixed Frame in rviz I set to map or "your_frame_id"
As you're visualising the original point cloud, might the /feature_extraction/experimental_region topic be hidden behind it?
At this moment I can't look into your code as I'm pretty busy here, and I won't be available for the rest of August.
There's a video about how to use the tool here: https://www.youtube.com/watch?v=WmzEnjmffQU
ok, I will solve this problem myself, and I will report back here. @chinitaberrio
"I assume you did this in different times, so it doesn't look like the topics are out of sync"
Yes, they are time-synchronized; I just took the screenshots at different times.
"As you're visualising the original point cloud, might the /feature_extraction/experimental_region topic be hidden behind it?"
There are no point clouds on /feature_extraction/experimental_region.
I think I encountered the same error, but fixed it in this PR. Could you try those changes? Have a look at the diff, especially the function FeatureExtractor::serviceCB
I have already changed these four files, but still can't use rqt to filter chessboard. When I play my bag, the camera image will show up but no point cloud image. Through the tutorial, the point cloud will show up automatically. If I add my lidar topic in display, the point cloud will show up, but the rqt is useless. Also when I try to capture samples, there's a warning in terminal. See picture attached.
Have you solved this problem? I also encountered the same problem.