PRBonn / MapMOS

Building Volumetric Beliefs for Dynamic Environments Exploiting Map-Based Moving Object Segmentation (RAL 2023)

Home Page: https://www.ipb.uni-bonn.de/pdfs/mersch2023ral.pdf

License: MIT License


Introduction

Our approach identifies moving objects in the current scan (blue points) and the local map (black points) of the environment and maintains a volumetric belief map representing the dynamic environment.
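To give an intuition for the belief map, here is a minimal, hypothetical sketch of a per-voxel log-odds belief update in Python. This is not the authors' exact formulation (see the paper for the actual model); the class and parameter names are illustrative only.

import numpy as np

class VolumetricBelief:
    """Toy per-voxel moving/static belief in log-odds form (illustrative only)."""

    def __init__(self, voxel_size=0.5):
        self.voxel_size = voxel_size
        self.log_odds = {}  # voxel index -> accumulated log-odds of "moving"

    def update(self, points, moving_evidence):
        # points: (N, 3) array of scan points registered to the map frame
        # moving_evidence: (N,) per-point log-odds evidence from the segmentation network
        voxels = np.floor(points / self.voxel_size).astype(int)
        for voxel, evidence in zip(map(tuple, voxels), moving_evidence):
            self.log_odds[voxel] = self.log_odds.get(voxel, 0.0) + evidence

    def is_moving(self, point):
        # A point is labeled moving if its voxel's accumulated belief is positive
        voxel = tuple(np.floor(point / self.voxel_size).astype(int))
        return self.log_odds.get(voxel, 0.0) > 0.0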

Qualitative results are shown in the demo video (mapmos.mp4).

Our predictions for the KITTI Tracking sequence 19 with true positives (green), false positives (red), and false negatives (blue).

Installation

First, make sure MinkowskiEngine is installed on your system; see the MinkowskiEngine documentation for more details.
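A minimal sketch, assuming PyTorch with a matching CUDA toolkit is already installed (the exact steps depend on your system; follow the MinkowskiEngine documentation if this fails):

pip install ninja                # build helper, assumed here
pip install -U MinkowskiEngine   # compiles against your local PyTorch/CUDA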

Next, clone our repository

git clone git@github.com:PRBonn/MapMOS && cd MapMOS

and install with

make install

or

make install-all

if you want to install the project with all optional dependencies (needed for the visualizer). In case you want to edit the Python code, install in editable mode:

make editable

How to Use It

Just type

mapmos_pipeline --help

to see how to run MapMOS.

This prints an overview of the pipeline's command-line options.

Check the Downloads section for a pre-trained model. Like KISS-ICP, our pipeline runs on a variety of point cloud data formats such as bin, pcd, ply, xyz, rosbags, and more. To visualize the predictions, just type

mapmos_pipeline --visualize /path/to/weights.ckpt /path/to/data
Want to evaluate with ground truth labels?

Because these labels come in all shapes, you need to specify a dataloader. This is currently available for SemanticKITTI and NuScenes, as well as for our post-processed KITTI Tracking sequence 19 and Apollo sequences (see Downloads).
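For example, a hypothetical invocation modeled on the KISS-ICP interface (the --dataloader and --sequence option names are assumptions; run mapmos_pipeline --help for the actual names):

mapmos_pipeline --dataloader kitti --sequence 08 /path/to/weights.ckpt /path/to/data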

Want to reproduce the results from the paper? You need to pass the corresponding config file, which makes sure that the de-skewing option and the maximum range are set properly. To compare the different map fusion strategies from our paper, just pass the `--paper` flag to the `mapmos_pipeline`.
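A sketch of such a run, assuming the config is passed via a --config option as in KISS-ICP (both the option name and the config file path are assumptions):

mapmos_pipeline --config config/kitti.yaml --paper /path/to/weights.ckpt /path/to/data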

Training

To train our approach, you first need to cache your data. To see how to do that, just cd into the MapMOS repository and type

python3 scripts/precache.py --help
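A hypothetical example (the positional arguments here are assumptions; --help shows the actual signature):

python3 scripts/precache.py /path/to/semantickitti /path/to/cache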

After this, you can run the training script. Again, --help shows you how:

python3 scripts/train.py --help
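For instance, again with hypothetical arguments (consult --help for the real ones):

python3 scripts/train.py /path/to/semantickitti /path/to/cache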
Want to verify the cached data?

You can inspect the cached training samples with scripts/cache_to_ply.py; run python3 scripts/cache_to_ply.py --help to see how.

Want to change the logging directory?

The training log and checkpoints are saved to the current working directory by default. To change this, set the LOGS environment variable before running the training script:

export LOGS=/your/path/to/logs

Downloads

You can download the post-processed and labeled Apollo dataset and KITTI Tracking sequence 19 from our website.

The weights of our pre-trained model can be downloaded as well.

Publication

If you use our code in your academic work, please cite the corresponding paper:

@article{mersch2023ral,
  author = {B. Mersch and T. Guadagnino and X. Chen and I. Vizzo and J. Behley and C. Stachniss},
  title = {{Building Volumetric Beliefs for Dynamic Environments Exploiting Map-Based Moving Object Segmentation}},
  journal = {IEEE Robotics and Automation Letters (RA-L)},
  volume = {8},
  number = {8},
  pages = {5180--5187},
  year = {2023},
  issn = {2377-3766},
  doi = {10.1109/LRA.2023.3292583},
  codeurl = {https://github.com/PRBonn/MapMOS},
}

Acknowledgments

This implementation is heavily inspired by KISS-ICP.

License

This project is free software made available under the MIT License. For details see the LICENSE file.

Contributors

benemer, mehermvr

Issues

Problems with pydantic?

May I ask which pydantic version is required? I ran into several problems with pydantic:

File "/home/spacex/miniconda3/envs/LiDAR-MOS/lib/python3.7/site-packages/pydantic/main.py", line 719, in __setattr__
    if self.__pydantic_private__ is None or name not in self.__private_attributes__:
  File "/home/spacex/miniconda3/envs/LiDAR-MOS/lib/python3.7/site-packages/pydantic/main.py", line 699, in __getattr__
    pydantic_extra = object.__getattribute__(self, '__pydantic_extra__')
AttributeError: __pydantic_extra__

About the generalization experiments

Hi, thanks for your excellent work!
May I ask whether you did any preprocessing when testing on the Apollo or KITTI Tracking datasets? I checked Apollo's LiDAR data and found that its intensity values are > 1, which seems inconsistent with SemanticKITTI's training data.

About performance

Hi, sorry to bother you again.
I would like to understand why MapMOS performs well on the SemanticKITTI validation set (86.1% IoU) but drops significantly on the test set (66.0% IoU), as reported in the paper. From my understanding, other methods do not exhibit such a large discrepancy. Could this difference be attributed to KISS-ICP?

nuScenes Moving Object Segmentation Data

Hi authors, thanks for your impressive work! Could you please provide the labeled nuScenes validation data, or explain how you labeled the dataset? This would help me follow up on your work. Thanks and best regards.

What is the meaning of "mapmos_pipeline --visualize /path/to/weights.ckpt /path/to/data"?

Hello author, I think this is an excellent open source project, but I ran into a few problems while following your instructions (I'm new to Python and not very familiar with it). As the title says, can you give an example of how this script works and what the last two parameters mean? I downloaded the pre-trained model as a compressed file containing a .ckpt file, but which file should I use as the first parameter after decompression? Does it refer to data.pkl, or to mapmos.ckpt? And what should the second parameter "/path/to/data" point to?
In fact, I get an error:
File "/home/seu_wx/.conda/envs/torch190/bin/mapmos_pipeline", line 5, in
from mapmos.cli import app
File "", line 983, in _find_and_load
File "", line 967, in _find_and_load_unlocked
File "", line 677, in _load_unlocked
File "", line 724, in exec_module
File "", line 859, in get_code
File "", line 916, in get_data
FileNotFoundError: [Errno 2] No such file or directory: '/home/seu_wx/star_work/test/MapMOS/src/mapmos/init.py'

About validation on nuScenes

Hi, benemer!
I would like to know how to use the nuScenes dataset for validation, since I'm not familiar with it. How should I set the "sequence" parameter?

class NuScenesDataset:
    def __init__(self, data_dir: Path, sequence: int, *_, **__):

Looking forward to your response :)

How to use the labeled Apollo dataset for MOS?

Hello dear author, I recently planned to use the Apollo dataset you released specifically for MOS. I downloaded it directly, but I get an error when using it with MotionSeg3D. How do I use the dataset correctly?

Question regarding Generalization benchmark

Thanks for the fantastic and exhaustive work.
In Table II, were the results obtained with a model trained on the train split (00-07, 09, 10) or on the trainval split (00-10), as is often used for benchmark submissions?

Best,

Jules
