
Multi-Modal Multi-Lidar-Inertial Odometry and Mapping Implementation

License: MIT License



MMLOAM : Robust Multi-Modal Multi-LiDAR-Inertial Odometry and Mapping for Indoor Environments

This is the repository for the code implementation, results, and dataset of the paper "Robust Multi-Modal Multi-LiDAR-Inertial Odometry and Mapping for Indoor Environments". The current version of the paper can be accessed here.

Introduction

We propose a tightly-coupled multi-modal multi-LiDAR-inertial SLAM system for surveying and mapping tasks that takes advantage of both solid-state and spinning LiDARs, together with their built-in inertial measurement units (IMUs). First, a spatial-temporal calibration module aligns the timestamps and calibrates the extrinsic parameters between sensors. Then, we extract two groups of feature points, edge points and plane points, from the LiDAR data. Next, an undistortion module uses pre-integrated IMU data to correct the LiDAR point clouds. Finally, the undistorted point clouds are merged into one point cloud and processed with a sliding-window-based optimization module.

System Pipeline

- (Left) Data collection platform: the hardware used for data acquisition. The sensors used in this work are the Livox Horizon LiDAR, with its built-in IMU, and the Velodyne VLP-16 LiDAR. (Right) The pipeline of the proposed multi-modal LiDAR-inertial odometry and mapping framework. The system starts with a preprocessing module that takes the sensor input and performs IMU pre-integration, calibration, and undistortion. The scan registration module extracts features and sends them to a tightly-coupled sliding-window odometry. Finally, a pose graph is built to maintain global consistency.
- Mapping result of the proposed system in a hall environment. Thanks to the high resolution of the solid-state LiDAR with its non-repetitive scan pattern, the mapping result shows clear detail of object surfaces.

Updates

  • 2023.09.30 : Add docker support
  • 2023.04.30 : Upload multi-modal lidar-inertial odom
  • 2023.03.02 : Init repo

Run with rosbags

Build MM-LOAM Docker image

- cd ~/catkin_ws/src
- git clone https://github.com/TIERS/multi-modal-loam.git 
- cd multi-modal-loam
- docker build --progress=plain . -t mm-loam 
- git submodule update --init --recursive

Note: We recommend using a Docker container to run this code because there are unresolved dependencies on some Ubuntu systems. Docker containers provide a consistent and isolated environment, ensuring that the code runs smoothly across different systems.

Play rosbag

roscore
rosbag play office_2022-04-21-19-14-23.bag --clock

Run launch file in Docker container

docker run --rm -it --network=host mm-loam /bin/bash -c "roslaunch mm_loam mm_lio_full.launch"

Visualization

rviz -d mm-loam/config/backlio_viz.rviz

Loop Closure

TODO

Dataset

Indoor environments:

  1. Hall (2.73 GB) link
  2. Corridor (1.82 GB) link
  3. Office (0.89 GB) link

Outdoor environments:

  1. Street (27.7 GB) Uploading
  2. Forest (44 GB) Uploading

More datasets can be found in our previous work:

  1. tiers-lidars-dataset
  2. tiers-lidars-dataset-enhanced

More results

  1. Hardware and mapping results in a long corridor environment. Our proposed method shows robust performance in long corridors and survives narrow spaces where 180° U-turns occur.
  2. Qualitative comparison of map details in the office room dataset sequence. The color of the points represents the reflectivity provided by the raw sensor data. The point size is 0.01, and transparency is set to 0.05. The middle two columns show zoomed-in views of the wall (top) and TV (bottom).
  3. Mapping results with the proposed method outdoors: urban street (left) and forest environment (right).

Acknowledgements

This repo borrows code from the following great repositories and includes the relevant copyrights in the code to give proper credit. We are grateful for the excellent work done by the creators of the original repositories, which has made this work possible, and we encourage users to read the material in those original repositories.

  • LIO-LIVOX : A Robust LiDAR-Inertial Odometry for Livox LiDAR
  • LEGO-LOAM : A lightweight and ground optimized lidar odometry and mapping
  • LIO-MAPPING : A Tightly Coupled 3D Lidar and Inertial Odometry and Mapping Approach


multi-modal-loam's Issues

[pcl::PCDWriter::writeASCII] Could not open file for writing!

Thanks for publishing this code!
When I try to run this program on the rosbags, I receive the error

terminate called after throwing an instance of 'pcl::IOException'
what(): : [pcl::PCDWriter::writeASCII] Could not open file for writing!

I fixed this by changing the file location in the lidars_extrinsic_cali.h file on lines 377 and 616 to a location on my computer. It was initially trying to save to /home/qing/icp_ICP.pcd.
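That workaround can be sketched as a shell one-liner. The snippet below demonstrates the substitution on a stand-in file; in a real checkout you would run the `sed` line on `lidars_extrinsic_cali.h` itself, and `/tmp/icp_ICP.pcd` is just an arbitrary writable path, not a path the project requires:

```shell
# Demonstration of the fix described above, on a stand-in for the header
# (in the real repository, run the sed line on lidars_extrinsic_cali.h).
mkdir -p /tmp/mm_loam_demo
cat > /tmp/mm_loam_demo/lidars_extrinsic_cali.h <<'EOF'
pcl::io::savePCDFileASCII("/home/qing/icp_ICP.pcd", *cloud);
EOF
# Redirect the hard-coded PCD output path to a writable location:
sed -i 's|/home/qing/icp_ICP.pcd|/tmp/icp_ICP.pcd|g' /tmp/mm_loam_demo/lidars_extrinsic_cali.h
cat /tmp/mm_loam_demo/lidars_extrinsic_cali.h
```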

The extrinsic params used in the project

Hi, dear author, @Qingq-li
Nice work, and thanks for sharing it!
I wonder what the extrinsic parameters are between the VLP-16 and the Livox LiDAR (in the dataset you provided)?
Can you give me some help?
Thanks!
(Screenshot: 2023-10-18 15-14-41)
The identity matrix is used in the picture, which is clearly wrong.

The sliding window size is always 1; setting it to 2 or larger breaks the map

Thanks for your amazing work! When I run the code, I found that the sliding window size is always 1. If I change it to 2 or larger, the map is wrong, and I get "Error in evaluating the ResidualBlock.
There are two possible reasons. Either the CostFunction did not evaluate and fill all residual and jacobians that were requested or there was a non-finite value (nan/infinite) generated during the cost or jacobian computation.
Residual Block size: 2 parameter blocks x 15 residuals
For each parameter block, the value of the parameters are printed in the first column and the value of the jacobian under the corresponding residual. If a ParameterBlock was held constant then the corresponding jacobian is printed as 'Not Computed'. If an entry of the Jacobian/residual array was requested but was not written to by user code, it is indicated by 'Uninitialized'. This is an error. Residuals or Jacobian values evaluating to Inf or NaN is also an error."
Could you give me some advice? Thank you very much!

About timestamp synchronization

Hello, this is great work. Thank you for your contributions to the community!

After reading your paper, I would like to ask how the timestamp soft synchronization described in the paper is implemented.

I am integrating an Avia and a Mid-360. When I use PTPd to perform soft synchronization between the two network ports, there seems to be a delay of about 3 seconds, and the /livox/lidar topic published after merging the point clouds does not seem to be accepted by the algorithm. Is this because the time is not synchronized properly?

I look forward to your reply.

build issue on Ubuntu 20.04

Hi, thanks for sharing this code. I had several issues when building it.

  1. My laptop runs Ubuntu 20.04/ROS Noetic with OpenCV 4.2 and PCL 1.10 (the distribution defaults), so I changed two things to build the project:
  • changed #include<opencv/cv.h> to #include<opencv2/opencv.hpp>
  • changed set(CMAKE_CXX_FLAGS "-std=c++11") to set(CMAKE_CXX_FLAGS "-std=c++14")
  2. The other issue is the project name: the new one is mm_loam, but the code seems to use union_om. After changing it, the project builds fine.
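The two source edits above can be applied mechanically. The snippet below is a hedged sketch that demonstrates the substitutions on stand-in files; in a real checkout you would run the `sed` lines on the affected source files and `CMakeLists.txt` (the file names here are placeholders, not paths from the repository):

```shell
# Demonstration of the two build fixes above, on stand-in files.
mkdir -p /tmp/mm_loam_build_demo
printf '#include<opencv/cv.h>\n' > /tmp/mm_loam_build_demo/demo.cpp
printf 'set(CMAKE_CXX_FLAGS "-std=c++11")\n' > /tmp/mm_loam_build_demo/CMakeLists.txt
# OpenCV 4 dropped the 1.x C header; switch to the umbrella header:
sed -i 's|#include<opencv/cv.h>|#include<opencv2/opencv.hpp>|' /tmp/mm_loam_build_demo/demo.cpp
# Bump the language standard for the PCL 1.10 headers:
sed -i 's|-std=c++11|-std=c++14|' /tmp/mm_loam_build_demo/CMakeLists.txt
cat /tmp/mm_loam_build_demo/demo.cpp /tmp/mm_loam_build_demo/CMakeLists.txt
```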
