kit-isas / lili-om

529 stars · 14 watchers · 157 forks · 88.17 MB

LiLi-OM is a tightly-coupled, keyframe-based LiDAR-inertial odometry and mapping system for both solid-state and conventional LiDARs.

License: GNU General Public License v3.0

Languages: C++ 98.83%, CMake 0.66%, Python 0.52%
Topics: lidar-inertial-odometry, lidar-mapping, slam, 3d-mapping, livox-lidar, livox-horizon, velodyne-slam, velodyne, sensor-fusion, ros

lili-om's People

Contributors: kamibukuro5656


lili-om's Issues

save and restart / restarting SLAM at another position in scanned map (in another session)

Hello, great work!
I would like to make lili-om restart-persistent, meaning it should save all the information it needs to relocalize in an already scanned area in a later session (starting from a different position in that area). Could you list a few short points on what you think the best approach would be? :-)
Maybe saving/loading all the data stored in the variables involved in the backend fusion would be the way to go?

pcl::PointCloud<PointXYZI>::Ptr pose_cloud_frame; //position of keyframe
pcl::PointCloud<PointPoseInfo>::Ptr pose_info_cloud_frame; //pose of keyframe
pcl::PointCloud<PointXYZI>::Ptr pose_each_frame; //position of each frame
pcl::PointCloud<PointPoseInfo>::Ptr pose_info_each_frame; //pose of each frame
vector<int> keyframe_idx;
vector<int> keyframe_id_in_frame;
vector<vector<double>> abs_poses;
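
For reference, a minimal sketch of how that state could be persisted (my suggestion, not existing lili-om code; it assumes this runs as a member function so the variables above are in scope, and that PointPoseInfo is registered via POINT_CLOUD_REGISTER_POINT_STRUCT so PCL can serialize it):

    #include <fstream>
    #include <string>
    #include <pcl/io/pcd_io.h>

    // Sketch: dump the backend state needed for relocalization to a directory.
    void saveSessionState(const std::string& dir) {
        // Keyframe positions and full poses as binary PCD files.
        pcl::io::savePCDFileBinary(dir + "/pose_cloud_frame.pcd", *pose_cloud_frame);
        pcl::io::savePCDFileBinary(dir + "/pose_info_cloud_frame.pcd", *pose_info_cloud_frame);

        // Flat pose vectors as plain text, one pose per line.
        std::ofstream out(dir + "/abs_poses.txt");
        for (const auto& pose : abs_poses) {        // vector<vector<double>>
            for (double v : pose) out << v << ' ';
            out << '\n';
        }
    }

Loading would mirror this with pcl::io::loadPCDFile, after which the kd-trees and the pose graph would have to be rebuilt from the restored keyframes.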

I will do the coding if it's not something you want to have on your own TODO list ;-)
Thanks,
Alexander

loop closure performance

Hello,
I have evaluated different SLAM packages with the Livox Horizon LiDAR (cartographer, hdl_graph_slam, livox_mapping, etc.), and 'lili-om' seems to be the most robust SLAM for such a LiDAR - congratulations!

I tried your code (using 'run_fr_iosb_internal_imu.launch') with a larger recording (Livox Horizon with internal IMU). It works, except that the loop closure doesn't happen at the end of the recording.

[Two screenshots of the mapping result omitted]

I have uploaded the bag file here (recorded using 'livox_lidar_msg.launch' from 'livox_ros_driver'):
https://drive.google.com/file/d/1NheqSfHnh1E00umY0no6I3AqigL7WDFK/view?usp=sharing

Maybe there is something I can try to tune the loop closures? :-)

PS: I plan to add cm-precise RTK-GPS factors (already available as LiDAR-time-synchronized NavSatFix messages in the above recording) to the global graph pose optimization, to make 'lili-om' even more robust ;-)
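
For reference, a sketch of what such a factor could look like in GTSAM, which the backend already uses for the global graph (function and parameter names here are my own; the NavSatFix positions would first have to be converted to the local map frame in meters):

    #include <gtsam/navigation/GPSFactor.h>
    #include <gtsam/nonlinear/NonlinearFactorGraph.h>

    // Sketch: attach an RTK position prior to the pose variable `key`.
    void addGpsFactor(gtsam::NonlinearFactorGraph& graph, gtsam::Key key,
                      const gtsam::Point3& xyz_local, double sigma_m) {
        auto noise = gtsam::noiseModel::Isotropic::Sigma(3, sigma_m); // ~0.02 m for RTK
        graph.add(gtsam::GPSFactor(key, xyz_local, noise));
    }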

Thanks,
Alexander

Ground truth start and end positions in provided datasets

Hi, thanks for opensourcing this great work. I have two questions about the experiments.

First, in your paper you use "END-TO-END POSITION ERROR IN METERS" to characterize the performance of different LIOs on the FR-IOSB and KA-URBAN datasets. I want to run other LIOs on your datasets, so would it be possible to provide the ground-truth start and end positions you used when calculating the end-to-end position errors? Or at least the ground-truth relative translation between the start and end positions of each trajectory (assuming an identity 6-DoF pose at the start)?

Another question is about the selection of the end-to-end locations. Your paper mentions that the KA-Urban data set was collected "with end-to-end locations registered from satellite images". I wonder:

  1. since LiLi-OM already has centimeter-level precision, how do you register even more precise positions from the satellite images (Google Maps, I presume) to use as ground truth?
  2. what about the end-to-end locations in the FR-IOSB data set?

Thank you.

which metis version are you using

Hello, after successfully compiling lili_om I wanted to launch it with the example bag, but the terminal gives me:

devel/lib/lili_om/BackendFusion: error while loading shared libraries: libmetis.so: cannot open shared object file: No such file or directory
[ INFO] [1605101464.753388799]: ----> Preprocessing Started.
[ INFO] [1605101464.769086095]: ----> Lidar Odometry Started.
[BackendFusion-7] process has died [pid 19832, exit code 127, cmd /home/yusheng/Documents/3rd_party/livox/ws_livox/devel/lib/lili_om/BackendFusion __name:=BackendFusion __log:=/home/yusheng/.ros/log/2564d1c6-2422-11eb-9512-94b86da529ef/BackendFusion-7.log].
log file: /home/yusheng/.ros/log/2564d1c6-2422-11eb-9512-94b86da529ef/BackendFusion-7*.log
[ WARN] [1605101494.615353143]: Waiting for IMU data ...

This shows that I have not installed METIS yet, so I would like to know: which version of METIS are you using?

Issue of trajectory drifting at U-Turn

Hi, kailaili and other users,

I ran into trajectory drift in U-turn situations. The example below illustrates what happens: the white dotted line is the /trajectory topic from BackendFusion, and the green dotted line is the odometry /path topic from LidarOdometry. After the 180-degree U-turn on the road, the white trajectory starts to drift away from the actual path (meanwhile, the LiDAR point cloud's orientation shown in rviz also shifts away from the trajectory), while the odometry path still roughly stays on track.
I traced the BackendFusion code a bit: the /trajectory topic comes from the variable "pose_each_frame", which is updated in the functions buildLocalPoseGraph() and optimizeLocalGraph().

I have collected quite a few datasets using the Horizon, and this drift happens in most U-turn cases. I would appreciate any help on analyzing the cause and possible solutions.

[Screenshot of the drifting trajectory omitted]

data set download very very slow

Hi author,
Thanks for your hard work and for sharing!
When I download the data set, the speed is very slow and the connection drops from time to time, although my internet otherwise works normally. Could you please share the data set, for example the smallest bag ("KA_Urban_Campus_2.bag | 2020-10-29 12:30 | 580M"), on Google Drive so I can run the demo? Thanks for your help!
Best

Saved pcd map is less dense than the global map in Rviz

Hi,
I would like to build a dense 3D map with lili-om, but the saved point cloud has fewer points than the global map shown in Rviz.
[Screenshot comparing the two point clouds omitted]

Also, could you tell me how to get a denser map? When I set mapping_ds to less than 0.1 I get the warning "[pcl::VoxelGrid::applyFilter] Leaf size is too small for the input dataset. Integer indices would overflow.", but the map is still built OK. Is it possible to have the algorithm process the downsampled point cloud internally but build the output map from the full-resolution scans?

Avoid cutting pointcloud

I am using lili-om-rot with an Ouster OS0 LiDAR (64 lines, 512 points per line). When I run run.launch, this is what I get:
[Screenshot of the preprocessed point cloud omitted]
As you can see, the point cloud is reduced to just a few lines after preprocessing. I was wondering whether this is normal behavior, because SLAM actually seems to run correctly on these few lines. Is there any parameter or part of the code I can change so that the map is built from the full point cloud (64 lines)? This is my config file:

common:
  frame_id: "lili_om_rot"
  data_set: "iosb"
  
IMU:
  acc_n: 2000
  gyr_n: 0.0173
  acc_w: 2
  gyr_w: 0.00025

preprocessing:
  lidar_topic: "/os_cloud_node/points"
  ds_rate: 2
  line_num: 64
  cut_pointcloud: true

lidar_odometry:
  max_num_iter: 12
  scan_match_cnt: 1
  if_to_deskew: false

backend_fusion:
  max_num_iter: 15
  loop_closure_on: true
  local_map_width: 50
  lc_search_radius: 15
  lc_map_width: 25
  lc_icp_thres: 0.2
  slide_window_width: 3
  imu_topic: "/imu/data"
  lidar_const: 7.5
  mapping_interval: 10
  lc_time_thres: 60.0
  surf_dist_thres: 0.12
  kd_max_radius: 1.0
  save_pcd: false

#extrinsic imu to lidar
  ql2b_w: 1.0
  ql2b_x: 0
  ql2b_y: 0
  ql2b_z: 0.0

  tl2b_x: 0.0
  tl2b_y: 0
  tl2b_z: 0.3

Thank you.

Loop detection

I am curious about your loop detection module, which is not mentioned in your paper. How does the loop-detection module work? Which paper or website can I refer to?

publish odometry

Hi all,
When odometry is published, the published pose's timestamp is that of the last keyframe in the current window, but the pose itself is that of the first keyframe in the current window.

    void publishOdometry() {
        if (pose_info_cloud_frame->points.size() >= slide_window_width) {
            odom_mapping.header.stamp = ros::Time().fromSec(time_new_odom);
            odom_mapping.pose.pose.orientation.w = pose_info_cloud_frame->points[pose_info_cloud_frame->points.size()-slide_window_width].qw;
            odom_mapping.pose.pose.orientation.x = pose_info_cloud_frame->points[pose_info_cloud_frame->points.size()-slide_window_width].qx;
            odom_mapping.pose.pose.orientation.y = pose_info_cloud_frame->points[pose_info_cloud_frame->points.size()-slide_window_width].qy;
            odom_mapping.pose.pose.orientation.z = pose_info_cloud_frame->points[pose_info_cloud_frame->points.size()-slide_window_width].qz;
            odom_mapping.pose.pose.position.x = pose_info_cloud_frame->points[pose_info_cloud_frame->points.size()-slide_window_width].x;
            odom_mapping.pose.pose.position.y = pose_info_cloud_frame->points[pose_info_cloud_frame->points.size()-slide_window_width].y;
            odom_mapping.pose.pose.position.z = pose_info_cloud_frame->points[pose_info_cloud_frame->points.size()-slide_window_width].z;
            pub_odom.publish(odom_mapping);
        }
    }

It may be a typo; should it be odom_mapping.pose.pose.orientation.w = pose_info_cloud_frame->points[pose_info_cloud_frame->points.size() - 1].qw; (and likewise for the other fields) instead? Thanks for your help!

Why is the accelerometer noise value set quite large in IMU preIntegration?

Thank you for your excellent work!
I have a question about the accelerometer noise value in the configuration file (config_fr_iosb.yaml, IMU/acc_n: 2000).
Can you explain this? Is it meant to reduce the drift of the z-axis? The value does not appear to come from calibration data.
[Screenshot of the config file omitted]

Looking forward to your answer.

find corresponding features

Hi @kailaili, thanks for your excellent work !

  • Q1. When finding corresponding features:

        void findCorrespondingCornerFeatures(int idx, Eigen::Quaterniond q, Eigen::Vector3d t) {
            int idVec = idx - keyframe_idx[keyframe_idx.size()-slide_window_width] + 1;
            vec_edge_res_cnt[idVec] = 0;
            for (int i = 0; i < edge_lasts_ds[idx]->points.size(); ++i) {
                pt_in_local = edge_lasts_ds[idx]->points[i];

    why isn't the for loop:

        for (int i = 0; i < edge_lasts_ds[idx + 1]->points.size(); ++i) {
            pt_in_local = edge_lasts_ds[idx + 1]->points[i];

I have the same doubt about:

    void findCorrespondingSurfFeatures(int idx, Eigen::Quaterniond q, Eigen::Vector3d t) {
        int idVec = idx - keyframe_idx[keyframe_idx.size()-slide_window_width] + 1;
        vec_surf_res_cnt[idVec] = 0;
        for (int i = 0; i < surf_lasts_ds[idx]->points.size(); ++i) {
            pt_in_local = surf_lasts_ds[idx]->points[i];

Thanks for your help and time!
Best

Robustness at higher speed

Hello, I have tried LiLi-OM on some datasets and it seems very robust, though for my application we will use it on vehicles that may move at up to 70 km/h.

Do you know how fast a vehicle can go before the mapping becomes unreliable? So far we don't have any high-velocity datasets to test this ourselves.

Thanks!

parameter tuning

Hi, I am having trouble getting the desired result from your code and dataset. Which parameters should I tune in run_fr_iosb_internal_imu.launch and config_fr_iosb_internal_imu.yaml when playing FR_IOSB_Short.bag? The end-to-end error in your paper is 0.25 m for lili-om, but my result is about 2 m. Looking forward to your reply!

about the scanID in Preprocessing

Hi @kailaili, is the scanID here

    scanID = int((angle + 92.0/3.0) * 3.0 / 4.0);

important? I guess you use this formula to compute the ring number of a laser scan, but my Velodyne VLP-32C actually has unevenly spaced scan lines; there is a mapping between ring number and vertical angle in the manual at https://icave2.cse.buffalo.edu/resources/sensor-modeling/VLP32CManual.pdf, page 57.
[Table of the VLP-32C ring/vertical-angle mapping omitted]

I'm trying to use this mapping for the vertical-angle-to-scanID conversion.
Please let me know if I misunderstand the code, thank you!
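
A table-based mapping along these lines could replace the linear formula (my sketch, not lili-om code; kElevations must be filled with the 32 per-ring elevation angles from page 57 of the manual, which are omitted here):

    #include <array>
    #include <cmath>

    // Sketch: nearest-neighbor lookup from a measured vertical angle (degrees)
    // to the ring index, for a lidar with unevenly spaced lines (VLP-32C).
    static const std::array<double, 32> kElevations = {
        /* fill in the 32 elevation angles from the VLP-32C manual */
    };

    int ringFromAngle(double angle_deg) {
        int best = 0;
        double best_err = std::abs(angle_deg - kElevations[0]);
        for (int i = 1; i < static_cast<int>(kElevations.size()); ++i) {
            const double err = std::abs(angle_deg - kElevations[i]);
            if (err < best_err) { best_err = err; best = i; }
        }
        return best;
    }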

question about the residual

Hi, @kailaili ,thanks for your great work!

Q1: when computing the corresponding residuals for the edge and surf features,
LidarEdgeFactor:

    template <typename T>
    bool operator()(const T *t, const T *q, T *residual) const
    {

        Eigen::Matrix<T, 3, 1> cp{T(curr_point.x()), T(curr_point.y()), T(curr_point.z())};
        Eigen::Matrix<T, 3, 1> lpa{T(last_point_a.x()), T(last_point_a.y()), T(last_point_a.z())};
        Eigen::Matrix<T, 3, 1> lpb{T(last_point_b.x()), T(last_point_b.y()), T(last_point_b.z())};

        Eigen::Quaternion<T> q_last_curr{q[0], q[1], q[2], q[3]};

        Eigen::Matrix<T, 3, 1> t_last_curr{t[0], t[1], t[2]};

        Eigen::Quaternion<T> q_l_b{T(qlb.w()), T(qlb.x()), T(qlb.y()), T(qlb.z())};
        Eigen::Matrix<T, 3, 1> t_l_b{T(tlb.x()), T(tlb.y()), T(tlb.z())};

        Eigen::Matrix<T, 3, 1> lp;
        lp = q_last_curr * cp + t_last_curr;

        Eigen::Matrix<T, 3, 1> nu = (lp - lpa).cross(lp - lpb);
        Eigen::Matrix<T, 3, 1> de = lpa - lpb;

        residual[0] = nu.norm() / de.norm();
        residual[0] *= T(s);

        return true;
    }

LidarPlaneNormFactor:

    template <typename T>
    bool operator()(const T *t, const T *q, T *residual) const
    {
        Eigen::Quaternion<T> q_w_curr{q[0], q[1], q[2], q[3]};
        Eigen::Matrix<T, 3, 1> t_w_curr{t[0], t[1], t[2]};
        Eigen::Matrix<T, 3, 1> cp{T(curr_point.x()), T(curr_point.y()), T(curr_point.z())};
        Eigen::Matrix<T, 3, 1> point_w;
        Eigen::Quaternion<T> q_l_b{T(qlb.w()), T(qlb.x()), T(qlb.y()), T(qlb.z())};
        Eigen::Matrix<T, 3, 1> t_l_b{T(tlb.x()), T(tlb.y()), T(tlb.z())};

        point_w = q_l_b.inverse() * (cp - t_l_b);
        point_w = q_w_curr * point_w + t_w_curr;

        Eigen::Matrix<T, 3, 1> norm(T(plane_unit_norm.x()), T(plane_unit_norm.y()), T(plane_unit_norm.z()));
        residual[0] = T(score) * (norm.dot(point_w) + T(negative_OA_dot_norm));
        return true;
    }

In LidarEdgeFactor you use lp = q_last_curr * cp + t_last_curr;, where cp is in the LiDAR frame, while in LidarPlaneNormFactor you use point_w = q_w_curr * point_w + t_w_curr;, where point_w has already been mapped into the IMU/body frame.

I'm confused: why is there this difference?
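
For reference, transcribed into math (my reading of the code above), the two residuals are

    r_{edge}  = s \cdot \frac{\lVert (q\,p + t - p_a) \times (q\,p + t - p_b) \rVert}{\lVert p_a - p_b \rVert}
    r_{plane} = s \cdot \bigl( n^\top \bigl( q_w \, q_{lb}^{-1} (p - t_{lb}) + t_w \bigr) + d \bigr)

where p is the raw feature point, (q, t) the pose block being optimized, (q_{lb}, t_{lb}) the LiDAR-to-body extrinsics, and (n, d) the plane parameters. The edge factor transforms p directly with the optimized pose, while the plane factor first maps it into the body/IMU frame via the extrinsics - exactly the asymmetry Q1 asks about.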

Q2: Is the weight correct in the function findCorrespondingSurfFeatures (BackendFusion.cpp)?

                // if one eigenvalue is significantly larger than the other two
                if (planeValid)
                {
                    float pd = norm.x() * pt_in_map.x + norm.y() * pt_in_map.y + norm.z() * pt_in_map.z + normInverse;
                    float weight = 1 - 0.9 * fabs(pd) / sqrt(sqrt(pt_in_map.x * pt_in_map.x + pt_in_map.y * pt_in_map.y + pt_in_map.z * pt_in_map.z));

                    if (weight > 0.3)
                    {
                        PointType normal;
                        normal.x = weight * norm.x();
                        normal.y = weight * norm.y();
                        normal.z = weight * norm.z();
                        normal.intensity = weight * normInverse;

                        vec_surf_cur_pts[idVec]->push_back(pt_in_local);
                        vec_surf_normal[idVec]->push_back(normal);

                        ++vec_surf_res_cnt[idVec];
                        vec_surf_scores[idVec].push_back(lidar_const * weight);
                    }
                }

float weight = 1 - 0.9 * fabs(pd) / sqrt(sqrt(pt_in_map.x * pt_in_map.x + pt_in_map.y * pt_in_map.y + pt_in_map.z * pt_in_map.z));

Maybe the weight should be computed from pt_in_local, not pt_in_map - right?
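
Written out (note that sqrt(sqrt(·)) of the squared norm is the square root of the range), the code computes

    w = 1 - 0.9 \cdot \frac{\lvert n^\top p_{map} + d \rvert}{\sqrt{\lVert p_{map} \rVert}}

so both the point-to-plane distance and the range normalization are taken from pt_in_map; the question is whether the range term should come from pt_in_local instead.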

build error

File "/usr/lib/python3.7/signal.py", line 47, in signal
TypeError: signal handler must be signal.SIG_IGN, signal.SIG_DFL, or a callable object

Not enough feature points from the map

Hi,

Thanks for sharing this great package.

When I tried to reproduce your results using a Livox Horizon and an Xsens IMU, I got this warning message:

"Not enough feature points from the map"

May I know what the problem is?

Thanks

Inconsistent performance from different start timestamp using the same dataset

Hi kailaili,
Thanks for your good work!

I compiled and ran this project with the given dataset, for example FR_IOSB_Tree_64.bag.
It works well when I play the rosbag from the beginning, but it drifts quite a lot when I play the rosbag from a later timestamp, such as from 30 s in.

I don't know what causes this; my guess is that it has to start from a static state. Looking forward to your reply.
Thank you!

[Screenshot of the drifting result omitted]

Compatible with IntelRealsense L515?

Hi,

I have been trying to adapt the package to work with the RealSense L515; however, I am getting the following warnings:

Failed to find match for field 'intensity'.
Failed to find match for field 'normal_x'.
Failed to find match for field 'normal_y'.
Failed to find match for field 'normal_z'.
Failed to find match for field 'curvature'

The point cloud published by realsense has the fields "x", "y", "z" and "rgb". Is there a way to obtain 'intensity', 'normal_x', 'normal_y', 'normal_z' and 'curvature'? I have looked into "FormatConvert.cpp", where the expected point cloud is built, but I don't know whether the same can be done for the L515's point cloud. Are all those fields important?
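
For what it's worth, the normal_* and curvature fields can be estimated from an XYZ-only cloud with PCL (a sketch of one way to do it, not lili-om code; intensity has no direct RGB equivalent, so a constant or a grayscale value would have to be substituted):

    #include <pcl/point_types.h>
    #include <pcl/common/io.h>
    #include <pcl/features/normal_estimation.h>
    #include <pcl/search/kdtree.h>

    // Estimate per-point normals and curvature, then merge them with the
    // XYZ data into a PointNormal cloud (normal_x/_y/_z + curvature fields).
    pcl::PointCloud<pcl::PointNormal>::Ptr
    addNormals(const pcl::PointCloud<pcl::PointXYZ>::Ptr& in) {
        pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
        pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
        ne.setInputCloud(in);
        ne.setSearchMethod(tree);
        ne.setKSearch(10);                       // neighbors used per normal
        pcl::PointCloud<pcl::Normal> normals;
        ne.compute(normals);

        pcl::PointCloud<pcl::PointNormal>::Ptr out(new pcl::PointCloud<pcl::PointNormal>);
        pcl::concatenateFields(*in, normals, *out);
        return out;
    }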

Thanks!

backend error on 18.04, is there need for tuning..?

Hello, I can't get it to run with my own data; I get this error:

F0506 14:06:24.649806 32614 problem_impl.cc:84] Check failed: !RegionsAlias( existing_block, existing_block_size, new_block, new_block_size) Aliasing detected between existing parameter block at memory location 0x7ffdc1903b90 and has size 3 with new parameter block that has memory address 0x7ffdc1903ba0 and would have size 4.
*** Check failure stack trace: ***
    @     0x7f4848a7d0cd  google::LogMessage::Fail()
    @     0x7f4848a7ef33  google::LogMessage::SendToLog()
    @     0x7f4848a7cc28  google::LogMessage::Flush()
    @     0x7f4848a7f999  google::LogMessageFatal::~LogMessageFatal()
    @     0x5569ba59b77c  (unknown)
    @     0x5569ba59deff  (unknown)
    @     0x5569ba59f645  (unknown)
    @     0x5569ba50fe67  (unknown)
    @     0x5569ba54b42c  (unknown)
    @     0x5569ba4f9013  (unknown)
    @     0x7f4846d80bf7  __libc_start_main
    @     0x5569ba4f999a  (unknown)
[BackendFusion-5] process has died [pid 32614, exit code -6, cmd /home/antonis/lili_om_ws/devel/lib/lili_om_rot/BackendFusion __name:=BackendFusion __log:=/home/antonis/.ros/log/c362c978-ae51-11eb-805d-e86a640b1f6f/BackendFusion-5.log].
log file: /home/antonis/.ros/log/c362c978-ae51-11eb-805d-e86a640b1f6f/BackendFusion-5*.log
^C[LidarOdometry-4] killing on exit
[Preprocessing-3] killing on exit
[lili_om_rot-2] killing on exit
[rviz-1] killing on exit
shutting down processing monitor...
... shutting down processing monitor complete
done

This is similar to the problem mentioned in #9, but I am on Ubuntu 18.04 and running my own data instead of the demo. I think I need some further tuning, but I don't know what the relevant parameters are - can you help me with that? I am using a VLP-16, and the IMU comes from a RealSense T265.

When it starts, it loads the first point cloud, but then this repeats until the error occurs.
[Screenshot omitted]

Drift Problem with Marginalization Factor

Hello.

Thank you for your help with the topic on built-in IMUs.

I have been using the built-in IMU and LILI-OM to map the data that I have collected.
However, there were situations where the trajectory drifted rapidly and the data could not be mapped properly.
The following picture shows a SLAM run on the 2020-openload data published by Livox, using the built-in IMU and LILI-OM; the trajectory drifted in the opposite direction partway through.
[Screenshot of the drifting map omitted]

I suspected that one of the residual terms caused this, so I commented out their registrations to the Ceres solver one by one to isolate the problem.
Commenting out the MarginalizationFactor gave the good result below. (I changed "true" to "false" in line 833 of BackendFusion.cpp.)

[Screenshot of the corrected map omitted]

I would like to know if you have any advice about this result.

Info: using with RealSense L515

Hello to all,
First of all, congratulations on your interesting work. I have read the papers and documentation and would like to try your technique for indoor mapping using the RealSense L515 LiDAR.

What do you think about it? Do you foresee complications in using this hardware?

Running package with Ouster lidar

Hey, I'm really interested in trying out your package and playing around with it, but I only have access to an OS1-128. Is it possible to make this package work with it?

errors with own datasets

Hello, I tried to compare your tightly-coupled system with my loosely-coupled algorithm, but the back-end optimization process of your algorithm always crashes after a while, sometimes after only a couple of seconds, with this output:
[Screenshot of the crash output omitted]
I chose several scenarios for the comparison with your algorithm:
The first is a common school scene, where the map output crashed whenever my robot started moving, which looks like a wrong coordinate setup (I encountered the same situation when testing LIO-SAM). However, I use exactly the same coordinate definition in my second scenario, which seems fine at first.

[Figure 1: everything seems fine at first in the school scene]

[Figure 2: whenever the robot starts moving, the back-end fusion crashes]

The second is a railway scene, where the back-end optimization now runs longer, but it still keeps crashing from time to time.
[Screenshot of the railway scene omitted]

Hardware setup:
MTi-680G (forward-left-up), Livox Horizon (forward-left-up).

I only used the MTi-680G IMU for preprocessing and backend fusion.

Do you have any suggestions?

Evaluation

Hi @kailaili, it's nice to see that the lili-om paper shows an evaluation against LINS on the UrbanNav dataset. I'm confused about how to set the IMU parameters in the LINS config files, because their units seem different from the usual ones (m/s^2, rad/s). Do you have any suggestions? Thanks a lot!

[Question] Is it possible to run on the internal IMU only?

Hello.
Thank you for publishing your excellent work.

Now, I have a question about IMU.
I'm trying to find out whether it is possible to run LiLi-OM using only the internal IMU of the Horizon.
Have you ever tried running LiLi-OM with only the internal IMU?

I calculated the orientation using a Madgwick filter and fed it in as input, as shown below.
[Screenshot omitted]

This attempt worked well in the case of "FR-IOSB-Long.bag".
[Screenshot of the successful run omitted]

However, it did not work with "2020_open_road.bag", which is available from the official Livox Loam repository
(https://terra-1-g.djicdn.com/65c028cd298f4669a7f0e40e50ba1131/demo/2020_open_road.bag).
[Screenshot of the failed run omitted]

Can these problems be solved by parameter tuning? Or are the built-in IMU and the Madgwick filter simply not accurate enough?

I would appreciate any advice you can give me.

Hello, I wonder why the pose of the first keyframe inside the sliding window is published here and not the last

    void publishOdometry()
    {
        if (pose_info_cloud_frame->points.size() >= slide_window_width) {
            odom_mapping.header.stamp = ros::Time().fromSec(time_new_odom);
            odom_mapping.pose.pose.orientation.w = pose_info_cloud_frame->points[pose_info_cloud_frame->points.size()-slide_window_width].qw;
            odom_mapping.pose.pose.orientation.x = pose_info_cloud_frame->points[pose_info_cloud_frame->points.size()-slide_window_width].qx;
            odom_mapping.pose.pose.orientation.y = pose_info_cloud_frame->points[pose_info_cloud_frame->points.size()-slide_window_width].qy;
            odom_mapping.pose.pose.orientation.z = pose_info_cloud_frame->points[pose_info_cloud_frame->points.size()-slide_window_width].qz;
            odom_mapping.pose.pose.position.x = pose_info_cloud_frame->points[pose_info_cloud_frame->points.size()-slide_window_width].x;
            odom_mapping.pose.pose.position.y = pose_info_cloud_frame->points[pose_info_cloud_frame->points.size()-slide_window_width].y;
            odom_mapping.pose.pose.position.z = pose_info_cloud_frame->points[pose_info_cloud_frame->points.size()-slide_window_width].z;
            pub_odom.publish(odom_mapping);
        }
    }

Backend fusion dying

I have tried the given datasets with the Velodyne version of lili-om and it works.

When I run my own dataset, the following occurs. Does anyone know what it means? For reference, the dataset I use is the park dataset from LIO-SAM.

BackendFusion: /usr/include/boost/smart_ptr/shared_ptr.hpp:734: typename boost::detail::sp_member_access::type boost::shared_ptr::operator->() const [with T = const sensor_msgs::Imu_<std::allocator >; typename boost::detail::sp_member_access::type = const sensor_msgs::Imu_<std::allocator >*]: Assertion `px != 0' failed.

For reference, my yaml file is as follows:

common:
  frame_id: "lili_om_rot"
  data_set: "Park"

IMU:
  acc_n: 3.9939570888238808e-03
  gyr_n: 1.5636343949698187e-03
  acc_w: 6.4356659353532566e-05
  gyr_w: 3.5640318696367613e-05

preprocessing:
  lidar_topic: "/points_raw"
  ds_rate: 4
  line_num: 16

lidar_odometry:
  max_num_iter: 12
  scan_match_cnt: 1
  if_to_deskew: false

backend_fusion:
  max_num_iter: 15
  loop_closure_on: true
  local_map_width: 50
  lc_search_radius: 15
  lc_map_width: 25
  lc_icp_thres: 0.2
  slide_window_width: 3
  imu_topic: "/imu_correct"
  lidar_const: 7.5
  mapping_interval: 2
  lc_time_thres: 60.0
  surf_dist_thres: 0.12
  kd_max_radius: 1.0
  save_pcd: false

  #extrinsic imu to lidar
  ql2b_w: 0
  ql2b_x: 1
  ql2b_y: 0
  ql2b_z: 0

  tl2b_x: 0.0
  tl2b_y: 0
  tl2b_z: 0.0

Thanks everyone!

about the imu parameters

Hi, thank you for sharing your great work!
I tested the given dataset and it runs very well. Then I also ran my own data, from a Velodyne HDL-32 LiDAR with a high-precision IMU. At first I forgot to set the IMU parameters and extrinsics, but it still ran very well. Then I tried to set the intrinsics and extrinsics, and I noticed that you set acc_n to a very large value, much larger than any I know of. Why is that? Does it mean you give the IMU acceleration a very small weight and essentially use only the rotation to aid the LiDAR SLAM?
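
As general background (standard IMU-preintegration reasoning, not something stated in the lili-om paper): residuals enter the optimization whitened by the inverse noise covariance,

    r_{weighted} = \Sigma^{-1/2} \, r, \qquad \Sigma_{a} = \sigma_{a}^{2} I,

so scaling acc_n by a factor k scales the accelerometer residual weight by 1/k^2. A value like acc_n: 2000 therefore down-weights the accelerometer terms drastically, which matches the "mostly use the rotation" reading of the configuration.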

LiLi-OM-ROT BackendFusion node died where isam->update()

First, thanks for your excellent work and well-written paper!
I tested LiLi-OM-ROT on FR_IOSB_Tree_64.bag with loop closure on, and found that BackendFusion dies in saveKeyFramesAndFactors() at isam->update(). Strangely, there is no error output in the terminal; I had to use cout to localize the problem.
So, is this related to the GTSAM version? I use gtsam-4.0.0-alpha2.

build error

Ubuntu 18.04, GTSAM 4.0.0, Ceres 2.0.0, Eigen 3.3.4.
Is the Eigen version not matching? Thanks!
[Screenshot of the build log omitted]

/usr/local/include/ceres/jet.h:998:8: error: 'ScalarBinaryOpTraits' is not a class template
struct ScalarBinaryOpTraits<ceres::Jet<T, N>, T, BinaryOp>

LiLi-OM-ROT BackendFusion node died randomly using own data

Hello! I am testing LiLi-OM-ROT with my own data. Sometimes the BackendFusion node dies in the function saveKeyFramesAndFactors(); the specific line where it dies is:
if(imu_buf[i]->header.stamp.toSec() > timeodom_cur)
By the way, this error happens randomly. Do you have any suggestions? Thank you very much!

PS: the error output in the terminal is as follows:

BackendFusion: /usr/include/boost/smart_ptr/shared_ptr.hpp:734: typename boost::detail::sp_member_access::type boost::shared_ptr::operator->() const [with T = const sensor_msgs::Imu_<std::allocator >; typename boost::detail::sp_member_access::type = const sensor_msgs::Imu_<std::allocator >*]: Assertion `px != 0' failed
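
For reference, the Assertion `px != 0' failed message means an empty boost::shared_ptr was dereferenced, i.e. imu_buf[i] holds no message at that moment. A defensive check of the following form (my sketch, not a root-cause fix) can at least localize the problem:

    // Sketch: skip empty IMU buffer entries instead of asserting inside shared_ptr.
    for (size_t i = 0; i < imu_buf.size(); ++i) {
        if (!imu_buf[i]) {
            ROS_WARN("imu_buf[%zu] is empty, skipping", i);
            continue;
        }
        if (imu_buf[i]->header.stamp.toSec() > timeodom_cur)
            break;
    }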

Long build times for BackendFusion target

I want to try out some minor changes in the BackendFusion.cpp file, but after every change the incremental build takes more than a minute on my i5-9300HF CPU. I am wondering whether you also experience these long build times, or whether you set some CMake flags during development to avoid them.

I also tried the Ninja generator instead of Make, with minor success: it reduced the compile time from about 1 min 30 s to around 1 min 15 s.

backend error

Hi all
Sometimes when I run rosbag play FR_IOSB_Tree_64.bag --clock -r 1 --pause, everything is OK: the terminal outputs nothing and the map shown in rviz is fine. But sometimes the rviz result is abnormal and an error occurs in the backend; it seems related to rotation? Has anyone else encountered this problem?

jxl@dell:~$ roslaunch lili_om_rot run_fr_iosb.launch 
... logging to /home/jxl/.ros/log/5b5130ec-2803-11eb-a3af-8cec4bc262f3/roslaunch-dell-1060.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://localhost:36597/

SUMMARY
========

PARAMETERS
 * /IMU/acc_n: 2000
 * /IMU/acc_w: 2
 * /IMU/gyr_n: 0.0173
 * /IMU/gyr_w: 0.00025
 * /backend_fusion/imu_topic: /applanix/imu
 * /backend_fusion/kd_max_radius: 1.0
 * /backend_fusion/lc_icp_thres: 0.2
 * /backend_fusion/lc_map_width: 25
 * /backend_fusion/lc_search_radius: 15
 * /backend_fusion/lc_time_thres: 60.0
 * /backend_fusion/lidar_const: 7.5
 * /backend_fusion/local_map_width: 50
 * /backend_fusion/loop_closure_on: True
 * /backend_fusion/mapping_interval: 2
 * /backend_fusion/max_num_iter: 15
 * /backend_fusion/ql2b_w: 0.7071
 * /backend_fusion/ql2b_x: 0
 * /backend_fusion/ql2b_y: 0
 * /backend_fusion/ql2b_z: 0.7071
 * /backend_fusion/save_pcd: True
 * /backend_fusion/slide_window_width: 3
 * /backend_fusion/surf_dist_thres: 0.12
 * /backend_fusion/tl2b_x: -0.18
 * /backend_fusion/tl2b_y: 0
 * /backend_fusion/tl2b_z: -0.095
 * /common/data_set: iosb
 * /common/frame_id: lili_om_rot
 * /lidar_odometry/if_to_deskew: False
 * /lidar_odometry/max_num_iter: 12
 * /lidar_odometry/scan_match_cnt: 1
 * /preprocessing/ds_rate: 4
 * /preprocessing/lidar_topic: /velodyne_pcl_gen...
 * /preprocessing/line_num: 64
 * /rosdistro: kinetic
 * /rosversion: 1.12.14
 * /use_sim_time: False

NODES
  /
    BackendFusion (lili_om_rot/BackendFusion)
    LidarOdometry (lili_om_rot/LidarOdometry)
    Preprocessing (lili_om_rot/Preprocessing)
    lili_om_rot (tf/static_transform_publisher)
    rviz (rviz/rviz)

ROS_MASTER_URI=http://localhost:11311

process[rviz-1]: started with pid [1081]
process[lili_om_rot-2]: started with pid [1082]
process[Preprocessing-3]: started with pid [1083]
process[LidarOdometry-4]: started with pid [1095]
process[BackendFusion-5]: started with pid [1096]
[ INFO] [1605531281.625058651]: ----> Back End Started.
[ INFO] [1605531281.631377798]: ----> Lidar Odometry Started.
[ INFO] [1605531281.641533446]: ----> Preprocessing Started.
F1116 20:54:55.148710  1096 problem_impl.cc:82] Check failed: !RegionsAlias(existing_block, existing_block_size, new_block, new_block_size) Aliasing detected between existing parameter block at memory location 0x7fff5dc4a950 and has size 3 with new parameter block that has memory address 0x7fff5dc4a960 and would have size 4.
*** Check failure stack trace: ***
    @     0x7f52d8ee243d  google::LogMessage::Fail()
    @     0x7f52d8ee4253  google::LogMessage::SendToLog()
    @     0x7f52d8ee1fcb  google::LogMessage::Flush()
    @     0x7f52d8ee4c3e  google::LogMessageFatal::~LogMessageFatal()
    @           0x530f0c  ceres::internal::(anonymous namespace)::CheckForNoAliasing()
    @           0x533d8e  ceres::internal::ProblemImpl::InternalAddParameterBlock()
    @           0x53438c  ceres::internal::ProblemImpl::AddResidualBlock()
    @           0x534e0c  ceres::internal::ProblemImpl::AddResidualBlock()
    @           0x4b1ee0  BackendFusion::optimizeLocalGraph()
    @           0x4f1b4b  BackendFusion::saveKeyFramesAndFactors()
    @           0x495186  main
    @     0x7f52d6eb2830  __libc_start_main
    @           0x495d59  _start
[BackendFusion-5] process has died [pid 1096, exit code -6, cmd /home/jxl/lili_om_ws/devel/lib/lili_om_rot/BackendFusion __name:=BackendFusion __log:=/home/jxl/.ros/log/5b5130ec-2803-11eb-a3af-8cec4bc262f3/BackendFusion-5.log].
log file: /home/jxl/.ros/log/5b5130ec-2803-11eb-a3af-8cec4bc262f3/BackendFusion-5*.log
^C[LidarOdometry-4] killing on exit
[Preprocessing-3] killing on exit
[lili_om_rot-2] killing on exit
[rviz-1] killing on exit
shutting down processing monitor...
... shutting down processing monitor complete
done
jxl@dell:~$ 

This is the rviz snapshot:
[Screenshot omitted]

I set set(CMAKE_BUILD_TYPE "Release") in the CMakeLists. My OS is Ubuntu 16.04, with Eigen 3.3.7, Ceres 2.0.0 and GTSAM 4.0.2 installed, configured as follows. I did not modify the config file or launch file. When playing the bag I do publish --clock, but since the launch file has <param name="/use_sim_time" value="false" />, I left that parameter false.

# find_package(Eigen3 REQUIRED) #default
list(APPEND CMAKE_PREFIX_PATH "/home/jxl/third_softwares/eigen_3_3_7/INSTALL_DIR/share/eigen3/cmake")
#set(CMAKE_PREFIX_PATH "/home/jxl/third_softwares/eigen_3_3_7/INSTALL_DIR/share/eigen3/cmake")
find_package(Eigen3 3.3.7 REQUIRED)

# find_package(Ceres REQUIRED) #default
find_package(Ceres REQUIRED PATHS "/home/jxl/third_softwares/ceres-solver-2.0.0/INSTALL/")

find_package(GTSAM REQUIRED)

build local graph optimization

Hi all,
First of all, I would like to express my high respect and thanks for the authors' hard work!
I will try to describe my problem carefully and clearly; please be patient.

  • I did not modify the sliding-window size, so the default value is still 3; there are 3 keyframes in the current window.
  • If the first laser scan received by the backend has index 1, then the first window slide is performed when the 3rd scan is received, and the second when the 4th scan is received.
  • We use the IMU measurements between the first keyframe in the current window and the last keyframe to integrate pose_each_frame[] and pose_info_each_frame[], and then use these two variables as initial values to build the local optimization graph:
    I found that the keyframe from which the integration starts, Ps[], Vs[], Rs[], should be Ps[Ps.size() - slide_window_width - 1]; Ps[Ps.size() - slide_window_width] is already the first keyframe in the current window. I think this is a typo - do you agree?
    } else if (pose_cloud_frame->points.size() > slide_window_width) {
        int ii = imu_idx_in_kf[imu_idx_in_kf.size() - slide_window_width - 1];
        double dx = 0, dy = 0, dz = 0, rx = 0, ry = 0, rz = 0;
        Eigen::Vector3d Ptmp = Ps[Ps.size() - slide_window_width];
        Eigen::Vector3d Vtmp = Vs[Ps.size() - slide_window_width];
        Eigen::Matrix3d Rtmp = Rs[Ps.size() - slide_window_width];
        Eigen::Vector3d Batmp = Eigen::Vector3d::Zero();
        Eigen::Vector3d Bgtmp = Eigen::Vector3d::Zero();
  • I print the Ps[] values below; thanks for your help and time!
[ INFO] [1605269721.485954907]: ----> Preprocessing Started.
[ INFO] [1605269721.502283278]: ----> Back End Started.
[ INFO] [1605269721.528559733]: ----> Lidar Odometry Started.
laser_index = 1 start
laser_index = 1 done!


laser_index = 2 start
laser_index = 2 done!


laser_index = 3 start
laser_index = 3 done!


laser_index = 4 start
curr Ps[].size = 5
Ps[0] = 0.0000, 0.0000, 0.0000
Ps[1] = 0.0125, -0.0138, 0.0002
Ps[2] = 0.0174, -0.0197, 0.0008
Ps[3] = 0.0165, -0.0414, -0.0004
Ps[4] = 0.0241, -0.0444, 0.0008
laser_index = 4 done!


laser_index = 5 start
curr Ps[].size = 6
Ps[0] = 0.0000, 0.0000, 0.0000
Ps[1] = 0.0125, -0.0138, 0.0002
Ps[2] = 0.0174, -0.0197, 0.0008
Ps[3] = 0.0152, -0.0412, 0.0003
Ps[4] = 0.0211, -0.0393, 0.0010
Ps[5] = 0.0251, -0.0362, -0.0002
laser_index = 5 done!
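
To make the indexing concrete (my worked example, assuming slide_window_width = 3): in the log above, at laser_index = 5 we have Ps.size() = 6, so the window holds keyframes 3, 4 and 5. Then Ps[Ps.size() - slide_window_width] = Ps[3] is the first keyframe inside the window, while Ps[Ps.size() - slide_window_width - 1] = Ps[2] is the keyframe just before the window, which is where the integration would have to start for the window's first keyframe to be constrained. That is the off-by-one the bullet above points at.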

How to use it with my own data?

Sorry for the newbie question, but I can't get my head around how to use it with my own data. Could somebody clarify this? How can I make lili_om subscribe to my data?
