
Comments (42)

gzchenjiajun avatar gzchenjiajun commented on July 2, 2024 2

I can finally filter normally! I gave up on online calibration and used offline calibration instead, and now I can filter the point cloud normally!

from cam_lidar_calibration.

Haishanliu avatar Haishanliu commented on July 2, 2024 1

Hi @juliangaal Thanks for this reminder. I have tried out this branch. It works for me now! I can use the rqt filter now! I really appreciate that. Although the board dimension error is high and most samples can't get accepted :) I will consider collecting new data.


chinitaberrio avatar chinitaberrio commented on July 2, 2024 1

@hoangduyloc
Rotate the chessboard so it is in the shape of a diamond (at an angle of 45° with respect to the ground) and mount it on a stand.


jclinton830 avatar jclinton830 commented on July 2, 2024

@Haishanliu have you set up the topic names in the config files as suggested in the readme?

For example:

camera_topic: "/gmsl/A0/image_color"
camera_info: "/gmsl/A0/camera_info"
lidar_topic: "/velodyne/front/points"


Haishanliu avatar Haishanliu commented on July 2, 2024

Hi @jclinton830 ,

Thank you for your response. Yes, I have changed the params.yaml and camera_info.yaml according to my dataset.

distortion_model: "plumb_bob"
width: 4112
height: 2176
D: [-0.17330185428381, 0.11095826068889, -0.00052133492862747, -0.00027085005462575, -0.017576435343066]
K: [2372.4550420066, 0, 2063.399169011, 0, 2372.4550420066, 1132.822895508, 0, 0, 1]

# Topics
camera_topic: "/Bsw_ImgFrntRawConvert"
camera_info: "/Bsw_ImgCamInfo"
lidar_topic: "/Bsw_CloudCentrRawConvert"

# Dynamic rqt_reconfigure default bounds
feature_extraction:
  x_min: -10.0
  x_max: 10.0
  y_min: -8.0
  y_max: 8.0
  z_min: -5.0
  z_max: 5.0

# Properties of chessboard calibration target
chessboard:
  pattern_size:
    height: 9
    width: 7
  square_length: 100
  board_dimension:
    width: 1016
    height: 1220
  translation_error:
    x: 0
    y: 0

I am currently trying your v2 package as well. The dynamic rqt window still doesn't apply to my data: I can't see the point cloud change after I apply the filter. Am I supposed to wait longer? By the way, I made a few changes to get the package to compile with my ROS Noetic version.

In the optimiser.h

  • Change #include <opencv2/cv.hpp> to #include <opencv2/imgproc.hpp>
  • Add #include <opencv2/calib3d.hpp>
  • Add #include <opencv2/core/core_c.h>

In the feature_extractor.h

  • Add #include <opencv2/imgcodecs.hpp>

Thank you


juliangaal avatar juliangaal commented on July 2, 2024

I think I encountered the same error, but fixed it in this PR. You could try with those changes? Have a look at the diff, especially the FeatureExtractor::serviceCB function


juliangaal avatar juliangaal commented on July 2, 2024

Happy to hear! Your data looks pretty dense, so that doesn't seem to be the issue. I did get pretty good results while evaluating the package, even when holding the checkerboard in my hands, but as far from my body as possible, for easier filtering. In the long term, I will be mounting the board to a tripod, though.

Are you mounting the board to anything? The images suggest you are holding it in your hands

My changes are now in the "noetic" branch of this repo, see #38


HaogeZhou avatar HaogeZhou commented on July 2, 2024

I think I encountered the same error, but fixed it in this PR. You could try with those changes? Have a look at the diff, especially the FeatureExtractor::serviceCB function

I have already changed these four files, but still can't use rqt to filter the chessboard.
When I play my bag, the camera image shows up but no point cloud. According to the tutorial, the point cloud should show up automatically. If I add my lidar topic in the Displays panel, the point cloud shows up, but the rqt filter has no effect.
Also, when I try to capture samples, there's a warning in the terminal. See the picture attached.
image


hoangduyloc avatar hoangduyloc commented on July 2, 2024

Hi all, thanks for your replies @Haishanliu @jclinton830 @juliangaal @hankge,

I tried this source code and the whole process seems right, but at "CAPTURE SAMPLE" I hit a RANSAC error.
image

image
before filtering lidar zone
image
After filtering lidar zone

Do you guys meet this problem, or do you know how to figure it out? Thank you so much


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

same problem


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

@Haishanliu Have you solved this problem? I have already applied https://github.com/acfr/cam_lidar_calibration/pull/38, but I still can't filter.


hoangduyloc avatar hoangduyloc commented on July 2, 2024

@hoangduyloc Rotate the chessboard so it is in the shape of a diamond (at an angle of 45° with respect to the ground) and mount it on a stand.

Thank you, it worked when I moved the board closer to the LIDAR and rotated it further, to about 60 degrees.
@gzchenjiajun make sure you have the correct camera intrinsic parameters as well.


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

Do you mean the camera parameters must be correct before the filter operation can cut out the calibration board? And if the camera parameters are wrong, the filter operation won't work correctly? @hoangduyloc


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

I am actually running the filter without a calibration board right now, just to test it first. But I have a question: since I am doing a 3D point cloud cropping operation, why do I need the camera's intrinsic parameters? @hoangduyloc


hoangduyloc avatar hoangduyloc commented on July 2, 2024

I am actually running the filter without a calibration board right now, just to test it first. But I have a question: since I am doing a 3D point cloud cropping operation, why do I need the camera's intrinsic parameters? @hoangduyloc

Yeah, I think you do, because this code finds the projection matrix from the lidar to the camera, so it has to check the corresponding points between them. If your camera parameters are not correct, that can affect the RANSAC matching.
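The reasoning above can be made concrete with a small pinhole-projection sketch. The numbers below are placeholders for illustration, not values from this package; the point is that a wrong K shifts every projected lidar point, so the lidar-camera correspondences fed to RANSAC no longer line up:

```python
import numpy as np

def project_lidar_to_image(points_lidar, K, R, t):
    """Project Nx3 lidar points into pixel coordinates with a pinhole model."""
    cam = points_lidar @ R.T + t   # lidar frame -> camera frame (extrinsics)
    uv = cam @ K.T                 # apply camera intrinsics
    return uv[:, :2] / uv[:, 2:3]  # perspective divide

# Placeholder intrinsics and identity extrinsics, for illustration only
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
px = project_lidar_to_image(np.array([[0.0, 0.0, 2.0]]), K, R, t)
# a point on the optical axis projects to the principal point (320, 240)
```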


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

I am actually running the filter without a calibration board right now, just to test it first. But I have a question: since I am doing a 3D point cloud cropping operation, why do I need the camera's intrinsic parameters? @hoangduyloc

Yeah, I think you do, because this code finds the projection matrix from the lidar to the camera, so it has to check the corresponding points between them. If your camera parameters are not correct, that can affect the RANSAC matching.

OK, I will test tomorrow. I will fill in the correct camera parameters and then run the min/max filter operation. @hoangduyloc


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

I would like to ask: right now, when I only adjust the min/max values of feature_extraction, the point cloud still cannot be filtered normally. Are we talking about the same problem? @hoangduyloc


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

I don't think it is caused by that. The camera intrinsics should not affect my ability to extract the calibration board's point cloud for frame extraction. @hoangduyloc


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

I have already changed these four files, but still can't use rqt to filter the chessboard.
When I play my bag, the camera image shows up but no point cloud. According to the tutorial, the point cloud should show up automatically. If I add my lidar topic in the Displays panel, the point cloud shows up, but the rqt filter has no effect.
Also, when I try to capture samples, there's a warning in the terminal. See the picture attached.

Same problem here, how do you solve it? @hankge


chinitaberrio avatar chinitaberrio commented on July 2, 2024

@gzchenjiajun make sure you have the three topics:

  • camera
  • camera info
  • lidar point cloud

Could you check their header times? Are they close to each other? We use ApproximateTimeSynchronizer to pair the samples from the camera topics and the lidar. Keep in mind that some lidars have an internal clock and generate the point cloud based on it; you will need to sync that clock with your computer.
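As a loose illustration, assuming stamps expressed in seconds and a fixed tolerance (the real message_filters ApproximateTimeSynchronizer uses an adaptive algorithm, so this is only a sketch of the idea):

```python
# Sketch: two messages can only be paired when their header stamps are close.
# A lidar running on its own internal clock can be offset by whole seconds,
# in which case no pair is ever formed and no samples are captured.
def stamps_in_sync(camera_stamp, lidar_stamp, slop=0.1):
    """Both stamps in seconds since epoch; slop is an assumed tolerance."""
    return abs(camera_stamp - lidar_stamp) <= slop

print(stamps_in_sync(100.00, 100.05))  # True: within tolerance
print(stamps_in_sync(100.00, 103.00))  # False: clocks likely out of sync
```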


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

@chinitaberrio

Pasted Graphic

Hello, this is my yaml configuration. I modified camera_topic and lidar_topic, which display normally in rviz, but I did not modify camera_info (because I don't know what to change it to). Roughly as follows:

Screenshot from 2023-08-03 09-48-08

I did the rqt filtering, but the point cloud remained unchanged.

Screenshot from 2023-08-03 09-47-00


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

@chinitaberrio
What is the header time? How do I view or configure it?


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

The modified rqt values show up correctly in the terminal
Pasted Graphic 1


chinitaberrio avatar chinitaberrio commented on July 2, 2024

@gzchenjiajun camera_info is a topic that your system needs to publish (http://wiki.ros.org/Sensors/Cameras); you can find more information here: http://wiki.ros.org/image_pipeline/CameraInfo
It seems your point cloud has an empty frame_id; you need to change that in your lidar driver.
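A quick sanity check on the intrinsics can catch configuration mistakes before they reach the calibration pipeline. This is a hedged sketch (the function name is mine), based on the CameraInfo layout: K is a flattened row-major 3x3 matrix, and the plumb_bob model carries five distortion coefficients in D:

```python
# Sketch: validate intrinsics destined for a sensor_msgs/CameraInfo message.
def check_camera_info(width, height, K, D):
    assert len(K) == 9, "K is a flattened row-major 3x3 matrix"
    fx, cx, fy, cy = K[0], K[2], K[4], K[5]
    assert K[1] == 0 and K[3] == 0 and K[6] == 0 and K[7] == 0 and K[8] == 1
    assert len(D) == 5, "plumb_bob expects k1, k2, t1, t2, k3"
    assert 0 < cx < width and 0 < cy < height, "principal point inside image"
    return fx, fy, cx, cy
```

For example, the intrinsics quoted earlier in this thread (4112x2176, fx = fy ≈ 2372.46) pass this check.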


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

@chinitaberrio
camera_info_pub = rospy.Publisher('/camera_info', CameraInfo, queue_size=10)

OK, camera_info is published and the yaml is also configured.
I'll keep trying with the point cloud question.


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

"Seems your point cloud has an empty frame_id, you need to change that in your lidar driver."
After my analysis, it should not be related to an empty frame_id, because the point cloud is received normally. Could the problem be somewhere else, such as the settings in the Displays panel?
@chinitaberrio


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

from std_msgs.msg import Header
header = Header()
header.stamp = rospy.Time.now()

I see what you're saying: this part needs time synchronization.
I'll make the change and test with ApproximateTimeSynchronizer again.


chinitaberrio avatar chinitaberrio commented on July 2, 2024

Failed to find match for field 'intensity'.
Failed to find match for field 'ring'.

Most likely you need to change the point type.
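If the publisher's byte layout does not match the package's expected point type (x, y, z, intensity as float32 plus a ring index), PCL conversion fails with exactly these warnings. Here is a sketch of a consistent layout using a NumPy structured dtype (the names are mine; the uint16 ring matches common Velodyne-style drivers):

```python
import numpy as np

# Structured dtype whose byte offsets line up with the PointField declarations
POINT_XYZIR = np.dtype([
    ("x", np.float32), ("y", np.float32), ("z", np.float32),
    ("intensity", np.float32), ("ring", np.uint16),
])

def pack_points(xyz, intensity, ring):
    """Pack parallel arrays into bytes suitable for PointCloud2.data."""
    pts = np.zeros(len(xyz), dtype=POINT_XYZIR)
    pts["x"], pts["y"], pts["z"] = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    pts["intensity"] = intensity
    pts["ring"] = ring
    return pts.tobytes()  # point_step is POINT_XYZIR.itemsize (18 bytes)
```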


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

Yes, I hadn't added the intensity and ring fields before; now that I have added them, I still cannot filter.
My Displays panel looks like this. Which option should I change to my point cloud topic (/point_cloud)? Should I add a PointCloud2 display, or change the topic of the chessboard one?
image
@chinitaberrio


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

This is my code
image

Screenshot from 2023-08-03 14-00-20

Thanks!!!!


chinitaberrio avatar chinitaberrio commented on July 2, 2024

The topic that publishes the filtered point cloud is /feature_extraction/experimental_region


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

I am now publishing the point cloud to /bonson/point_cloud with the fields x, y, z, intensity, ring. However, after starting cam_lidar_calibration.rviz with rqt_reconfigure, the min/max filter still does not work normally.
/feature_extraction/experimental_region also has no data, no matter how I adjust the min/max values in rqt. The params are configured correctly, and if I look at /bonson/point_cloud directly I can see normal point cloud data.

Pasted Graphic 3

@chinitaberrio thks!


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

Screenshot from 2023-08-03 15-37-19

This is the terminal output. It looks normal.


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

/feature_extraction/experimental_region has no data
Screenshot from 2023-08-03 15-41-42

/bonson/point_cloud has data
Screenshot from 2023-08-03 15-43-48

params.yaml info
Pasted Graphic 4


chinitaberrio avatar chinitaberrio commented on July 2, 2024

Provide a screenshot with the following information:

Terminal 1
rostopic echo /bonson/point_cloud/header

Terminal 2
rostopic echo /bonson/camera_image/header

Terminal 3
rostopic echo /bonson/camera_info/header

Change the frame in rviz to the lidar frame


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

I set the same your_frame_id for all three nodes
rostopic echo /bonson/point_cloud/header

Pasted Graphic 9

rostopic echo /bonson/camera_image/header

Pasted Graphic 10

rostopic echo /bonson/camera_info/header

Pasted Graphic 11

I've tried setting it to a different frame_id and the problem is the same

I set the Fixed Frame in rviz to map or "your_frame_id"

@chinitaberrio


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

I got a new computer running Ubuntu 18.04 with ROS Melodic. I went through the documented process again and still cannot filter normally. Can you help me work out how to solve this?

I wonder if there is something wrong with how the data is published, because my frame_id has been modified. Or is there a problem with how the XYZIR data is published?
Here is my publishing code:

import rospy
import numpy as np
from sensor_msgs.msg import PointCloud2, PointField
from std_msgs.msg import Header
from services.bonson_depth_track_detection_baseline.service import get_irnp_frame


def numpy_to_pointcloud2(point_cloud_data):
    # Build the PointCloud2 message
    msg = PointCloud2()
    msg.header = Header()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "base_link"
    msg.height = 1  # unordered point cloud, so height is 1
    msg.width = point_cloud_data.shape[0]  # number of points in the cloud

    # Declare the point fields: x, y, z, intensity as float32; ring as uint16
    msg.fields.append(PointField(name="x", offset=0, datatype=PointField.FLOAT32, count=1))
    msg.fields.append(PointField(name="y", offset=4, datatype=PointField.FLOAT32, count=1))
    msg.fields.append(PointField(name="z", offset=8, datatype=PointField.FLOAT32, count=1))
    msg.fields.append(PointField(name="intensity", offset=12, datatype=PointField.FLOAT32, count=1))
    msg.fields.append(PointField(name="ring", offset=16, datatype=PointField.UINT16, count=1))

    msg.is_bigendian = False
    msg.point_step = 18  # four float32 fields (16 bytes) plus one uint16 field (2 bytes)
    msg.row_step = msg.point_step * msg.width
    msg.is_dense = True

    # Pack the data with a structured dtype so the byte layout matches the
    # declared fields, including the 2-byte uint16 ring
    point_dtype = np.dtype([("x", np.float32), ("y", np.float32), ("z", np.float32),
                            ("intensity", np.float32), ("ring", np.uint16)])
    data = np.zeros(len(point_cloud_data), dtype=point_dtype)
    data["x"] = point_cloud_data[:, 0]
    data["y"] = point_cloud_data[:, 1]
    data["z"] = point_cloud_data[:, 2]
    data["intensity"] = 80
    data["ring"] = 10  # default ring value
    msg.data = data.tobytes()

    return msg


def publish_pointcloud():
    # Create the point cloud publisher
    pub = rospy.Publisher('/bonson/point_cloud', PointCloud2, queue_size=10)
    # Publishing rate
    rate = rospy.Rate(20)

    while not rospy.is_shutdown():
        # Fetch the point cloud data
        ir_list = get_irnp_frame()
        if ir_list is not None:
            close_ir = ir_list[-1]

            if close_ir is not None:
                # Convert the numpy point cloud into a PointCloud2 message
                pointcloud_msg = numpy_to_pointcloud2(close_ir)

                # Publish the point cloud message
                pub.publish(pointcloud_msg)

        rate.sleep()


if __name__ == '__main__':
    try:
        # Initialise the ROS node
        rospy.init_node('pointcloud_publisher')

        publish_pointcloud()  # publish the point cloud data

        rospy.spin()  # keep the node running
    except rospy.ROSInterruptException:
        pass



@chinitaberrio Thanks!!


chinitaberrio avatar chinitaberrio commented on July 2, 2024

I assume you ran these at different times, so it doesn't look like the topics are out of sync.

I set the same your_frame_id for all three nodes rostopic echo /bonson/point_cloud/header

Pasted Graphic 9
rostopic echo /bonson/camera_image/header
Pasted Graphic 10
rostopic echo /bonson/camera_info/header
Pasted Graphic 11
I've tried setting it to a different frame_id and the problem is the same

The Fixed Frame in rviz I set to map or "your_frame_id"

@chinitaberrio

As you're visualising the original point cloud, might the /feature_extraction/experimental_region topic be hidden behind it?
At this moment I can't look into your code as I'm pretty busy here, and I won't be available for the rest of August.

There's a video about how to use the tool here: https://www.youtube.com/watch?v=WmzEnjmffQU


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

ok, I will solve this problem myself, and I will report back here. @chinitaberrio


gzchenjiajun avatar gzchenjiajun commented on July 2, 2024

"I assume you did this in different times, so it doesn't look like the topics are out of sync"
Yes, their time is synchronized, I just took a screenshot at a different time

"As you're visualising the original point cloud, might the /feature_extraction/experimental_region topic be hidden behind it?"
/feature_extraction/experimental_region
There are no point clouds


jjustzxd avatar jjustzxd commented on July 2, 2024

I think I encountered the same error, but fixed it in this PR. You could try with those changes? Have a look at the diff, especially the FeatureExtractor::serviceCB function

I have already changed these four files, but still can't use rqt to filter chessboard. When I play my bag, the camera image will show up but no point cloud image. Through the tutorial, the point cloud will show up automatically. If I add my lidar topic in display, the point cloud will show up, but the rqt is useless. Also when I try to capture samples, there's a warning in terminal. See picture attached. image

Have you solved this problem? I also encountered the same problem.

