peterfws / structure-plp-slam

[ICRA'23] The official Implementation of "Structure PLP-SLAM: Efficient Sparse Mapping and Localization using Point, Line and Plane for Monocular, RGB-D and Stereo Cameras"

License: GNU General Public License v3.0

slam augmented-reality localization mapping robotics visual-odometry

structure-plp-slam's Introduction

Structure PLP-SLAM: Efficient Sparse Mapping and Localization using Point, Line and Plane for Monocular, RGB-D and Stereo Cameras

License Issue

  • Notice that this work is based on the original OpenVSLAM (now renamed stella-vslam), which is a derivative work of ORB-SLAM2 without any modification to the core algorithms. It was declared to be in conflict with ORB-SLAM2 (see: stella-cv/stella_vslam#249).

  • Out of courtesy to ORB-SLAM2, this project is licensed under the GNU General Public License v3.0. For commercial use of this project, please contact the department Augmented Vision (https://www.dfki.de/en/web/research/research-departments/augmented-vision) at DFKI (German Research Center for Artificial Intelligence), Germany.

  • If you have any technical questions regarding the implementation, please open an issue.

Related Papers:

[1] F. Shu, et al. "Structure PLP-SLAM: Efficient Sparse Mapping and Localization using Point, Line and Plane for Monocular, RGB-D and Stereo Cameras". IEEE International Conference on Robotics and Automation (ICRA), 2023. (https://arxiv.org/abs/2207.06058; arXiv v3 updated with supplementary materials)

[2] F. Shu, et al. "Visual SLAM with Graph-Cut Optimized Multi-Plane Reconstruction." International Symposium on Mixed and Augmented Reality (ISMAR, poster). IEEE, 2021. (https://arxiv.org/abs/2108.04281)

[3] Y. Xie, et al. "PlaneRecNet: Multi-Task Learning with Cross-Task Consistency for Piece-Wise Plane Detection and Reconstruction from a Single RGB Image." British Machine Vision Conference (BMVC), 2021. (https://arxiv.org/abs/2110.11219)

Video

Example Video

The System Workflow:

Qualitative Illustration:

Monocular:

  • fr3_structure_texture_far (dataset TUM RGB-D)

  • living_room_traj0 and living_room_traj2 (dataset ICL-NUIM)

  • MH_04_difficult (dataset EuRoC MAV)

  • Sequence_00 (data_odometry_gray, dataset KITTI)

RGB-D:

  • fr2_pioneer_slam (dataset TUM RGB-D)

  • office_room_traj0 (dataset ICL-NUIM)

Stereo:

  • MH_04_difficult (dataset EuRoC MAV)

  • V1_03_difficult (dataset EuRoC MAV)

Some Remarks

  • Point-Line SLAM is general: it can be run on all kinds of image sequences.
  • For running Planar SLAM, segmentation needs to be done beforehand (see details below).
  • ORB vocabulary is already attached to this repository, see: ./orb_vocab/

Build with PangolinViewer (Default)

Documentation

The structure-plp-slam code is based on a relatively old version of OpenVSLAM (from early 2021).

You should be able to find everything you need in this documentation:

https://stella-cv.readthedocs.io/en/0.3.9/example.html 

Note that this documentation is version 0.3.9, which corresponds to this version of the code. Do not use the latest documentation for the revised stella_vslam.

Dependencies:

  • For utilizing line segments (LSD + LBD): we developed the code with OpenCV 3.4.6, in which the LSD implementation had been removed, so we restored it ourselves. Hence, if you use a similar OpenCV 3.x version, you may need to restore the LSD code yourself.

    However, later versions of OpenCV restored LSD (e.g. OpenCV 3.4.16 should work).

  • The other dependencies (g2o, Eigen3, Pangolin, DBoW2, Ubuntu 18.04) are in general similar to those of ORB-SLAM2.

  • We integrated the Graph-Cut RANSAC C++ implementation (BSD license) into our project. See https://github.com/danini/graph-cut-ransac.

  • This project does not support ROS or Docker; at least, we have not tested them so far.

  • An additional plane detector:

    This work takes PlaneRecNet [3] as the instance planar segmentation CNN (only the instance segmentation is used; the predicted depth is not used so far). Example segmentation images for the different datasets are provided via a download link; see the section Run Point-Plane SLAM below.

    You can also segment images yourself; for the code, see: https://github.com/EryiXie/PlaneRecNet

Build using CMake:

mkdir build && cd build

cmake \
    -DBUILD_WITH_MARCH_NATIVE=ON \
    -DUSE_PANGOLIN_VIEWER=ON \
    -DUSE_SOCKET_PUBLISHER=OFF \
    -DUSE_STACK_TRACE_LOGGER=ON \
    -DBOW_FRAMEWORK=DBoW2 \
    -DBUILD_TESTS=OFF \
    ..

make -j4

(or, to highlight and filter gcc compiler messages:)

make -j4 2>&1 | grep --color -iP "\^|warning:|error:|"
make -j4 2>&1 | grep --color -iP "\^|error:|"

Command options to run the example code on standard dataset, e.g. TUM-RGBD:

$ ./build/run_tum_rgbd_slam
Allowed options:
  -h, --help             produce help message
  -v, --vocab arg        vocabulary file path
  -d, --data-dir arg     directory path which contains dataset
  -c, --config arg       config file path
  --frame-skip arg (=1)  interval of frame skip
  --no-sleep             not wait for next frame in real time
  --auto-term            automatically terminate the viewer
  --debug                debug mode
  --eval-log             store trajectory and tracking times for evaluation
  -p, --map-db arg       store a map database at this path after SLAM

Known Issues

  • If you get a crash right after starting SLAM (e.g. a segmentation fault), try de-activating BUILD_WITH_MARCH_NATIVE (in ccmake .). This is usually caused by an incompatible version of g2o.

    You can find my versions of g2o and DBoW2 at: https://1drv.ms/u/s!Atj7rBR0X5zagZwcFs1oIqXeV5r4Cw?e=pbnNES

  • Visualization issue on recent Ubuntu 20 or 22: this codebase was developed on Ubuntu 18, and the viewer camera may not follow the tracking path. See: #8

Standard SLAM with Standard Datasets

(1) TUM-RGBD dataset (monocular/RGB-D):

./build/run_tum_rgbd_slam \
-v ./orb_vocab/orb_vocab.dbow2 \
-d /data/TUM_RGBD/rgbd_dataset_freiburg3_long_office_household \
-c ./example/tum_rgbd/TUM_RGBD_mono_3.yaml

(2) KITTI dataset (monocular/stereo):

./build/run_kitti_slam \
-v ./orb_vocab/orb_vocab.dbow2 \
-d /data/KITTI/odometry/data_odometry_gray/dataset/sequences/00/ \
-c ./example/kitti/KITTI_mono_00-02.yaml

(3) EuRoC MAV dataset (monocular/stereo)

./build/run_euroc_slam \
-v ./orb_vocab/orb_vocab.dbow2 \
-d /data/EuRoC_MAV/MH_01_easy/mav0 \
-c ./example/euroc/EuRoC_mono.yaml

Run Point-Line SLAM

(1) TUM RGB-D (monocular/RGB-D)

./build/run_tum_rgbd_slam_with_line \
-v ./orb_vocab/orb_vocab.dbow2 \
-d /data/TUM_RGBD/rgbd_dataset_freiburg3_long_office_household \
-c ./example/tum_rgbd/TUM_RGBD_mono_3.yaml

(2) ICL-NUIM (monocular/RGB-D)

./build/run_tum_rgbd_slam_with_line \
-v ./orb_vocab/orb_vocab.dbow2 \
-d /data/ICL_NUIM/traj3_frei_png \
-c ./example/icl_nuim/mono.yaml

(3) EuRoc MAV (monocular/stereo)

./build/run_euroc_slam_with_line \
-v ./orb_vocab/orb_vocab.dbow2 \
-d /data/EuRoC_MAV/MH_04_difficult/mav0 \
-c ./example/euroc/EuRoC_mono.yaml

(4) KITTI (monocular/stereo)

./build/run_kitti_slam_with_line \
-v ./orb_vocab/orb_vocab.dbow2 \
-d /data/KITTI/odometry/data_odometry_gray/dataset/sequences/00/ \
-c ./example/kitti/KITTI_mono_00-02.yaml

Run Re-localization (Map-based Image Localization) using Pre-built Map

First, pre-build a map using (monocular or RGB-D) SLAM:

./build/run_tum_rgbd_slam_with_line \
-v ./orb_vocab/orb_vocab.dbow2 \
-d /data/TUM_RGBD/rgbd_dataset_freiburg3_long_office_household \
-c ./example/tum_rgbd/TUM_RGBD_rgbd_3.yaml \
--map-db freiburg3_long_office_household.msg

Second, run the (monocular) image localization mode; note that you must give the path to the RGB image folder:

./build/run_image_localization_point_line \
-v ./orb_vocab/orb_vocab.dbow2 \
-i /data/TUM_RGBD/rgbd_dataset_freiburg3_long_office_household/rgb \
-c ./example/tum_rgbd/TUM_RGBD_mono_3.yaml \
--map-db freiburg3_long_office_household.msg

Run Point-Plane SLAM (+ Line if Activated, see: planar_mapping_parameters.yaml)

  • We provide instance planar segmentation masks and *.txt files, which can be downloaded here (OneDrive share):

    https://1drv.ms/u/s!Atj7rBR0X5zagZwcFs1oIqXeV5r4Cw?e=pbnNES

  • For the TUM RGB-D dataset, besides the folder containing the RGB images, you need to provide a folder containing the segmentation masks and a mask.txt file:

    ./data/TUM_RGBD/rgbd_dataset_freiburg3_long_office_household/
    |
    |____./rgb/
    |____./depth/
    .
    |____./rgb.txt
    .
    .
    |____./mask/              % given by our download link
    |____./mask.txt           % given by our download link
    
  • For the ICL-NUIM dataset, we organize it the same way as the TUM RGB-D dataset:

    ./data/ICL_NUIM/living_room_traj0_frei_png/
    |
    |____./rgb/
    |____./depth/
    .
    .
    |____./mask/              % given by our download link
    .
    |____./rgb.txt            % given by our download link
    |____./depth.txt          % given by our download link
    |____./mask.txt           % given by our download link
    |____./associations.txt   % given by our download link
    |____./groundtruth.txt    % given by our download link
    
  • For the EuRoC MAV dataset, we provide the necessary segmentation masks in the download link; save them under the cam0 folder, e.g.:

    /data/EuRoC_MAV/V1_02_medium/mav0/cam0/seg/   % given by our download link
    
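The layouts above can be prepared with ordinary shell commands; here is a minimal sketch for the TUM RGB-D case, using /tmp as a stand-in dataset root (the real rgb/, depth/ and mask/ contents come from the dataset and the download link):

```shell
# Sketch of the expected TUM RGB-D layout; replace /tmp/... with your
# actual dataset root. mask/ and mask.txt come from the download link.
root=/tmp/rgbd_dataset_freiburg3_long_office_household
mkdir -p "$root/rgb" "$root/depth" "$root/mask"
touch "$root/rgb.txt" "$root/mask.txt"
ls "$root"
```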

Run SLAM with Piece-wise Planar Reconstruction

  • Mapping parameters can be adjusted, see planar_mapping_parameters.yaml.

(1) TUM RGB-D (monocular/RGB-D)

./build/run_slam_planeSeg \
-v ./orb_vocab/orb_vocab.dbow2 \
-d /data/TUM_RGBD/rgbd_dataset_freiburg3_structure_texture_far \
-c ./example/tum_rgbd/TUM_RGBD_mono_3.yaml

(2) ICL-NUIM (monocular/RGB-D)

./build/run_slam_planeSeg \
-v ./orb_vocab/orb_vocab.dbow2 \
-d /data/ICL_NUIM/living_room_traj0_frei_png \
-c ./example/icl_nuim/mono.yaml

(3) EuRoc MAV (monocular/stereo):

Only the V1 and V2 image sequences are supported, because the segmentation CNN fails on the factory sequences MH_01-05, as mentioned in the paper [1].

./build/run_euroc_slam_planeSeg \
-v ./orb_vocab/orb_vocab.dbow2 \
-d /data/EuRoC_MAV/V1_01_easy/mav0 \
-c ./example/euroc/EuRoC_stereo.yaml

Activate Depth-based Dense Reconstruction

It can be activated in planar_mapping_parameters.yaml by setting Threshold.draw_dense_pointcloud: true.

This is a toy demo (RGB-D only).
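The flag can also be flipped from the shell; a minimal sketch (the key name is taken from the README, the exact YAML layout is an assumption, and a /tmp copy with a one-line fallback file is used here so the sketch is self-contained):

```shell
# Work on a copy; fall back to a one-line stand-in file if the real
# config is not present (key name as in the README, layout assumed).
cp planar_mapping_parameters.yaml /tmp/planar_params.yaml 2>/dev/null || \
  printf 'Threshold.draw_dense_pointcloud: false\n' > /tmp/planar_params.yaml
# Flip the dense-reconstruction flag from false to true
sed -i 's/draw_dense_pointcloud: *false/draw_dense_pointcloud: true/' /tmp/planar_params.yaml
grep 'draw_dense_pointcloud' /tmp/planar_params.yaml
```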

Evaluation with EVO tool (https://github.com/MichaelGrupp/evo)

evo_ape tum /data/TUM_RGBD/rgbd_dataset_freiburg3_structure_texture_far/groundtruth.txt ./keyframe_trajectory.txt -p --plot_mode=xy -a --verbose -s

Important flags:

--align or -a = SE(3) Umeyama alignment (rotation, translation)
--align --correct_scale or -as = Sim(3) Umeyama alignment (rotation, translation, scale)
--correct_scale or -s = scale alignment
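evo's tum mode expects trajectory files in the TUM format: one pose per line, as timestamp tx ty tz qx qy qz qw. A minimal sketch that writes a one-pose stand-in file and checks the field count (the values are illustrative only):

```shell
# One pose in TUM trajectory format: timestamp tx ty tz qx qy qz qw
printf '1305031102.175 0.0 0.0 0.0 0.0 0.0 0.0 1.0\n' > /tmp/keyframe_trajectory.txt
# Sanity-check: every line should have exactly 8 whitespace-separated fields
awk 'NF != 8 { exit 1 }' /tmp/keyframe_trajectory.txt && echo "format ok"
```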

Debug with GDB

gdb ./build/run_slam_planeSeg

run -v ./orb_vocab/orb_vocab.dbow2 -d /data/TUM_RGBD/rgbd_dataset_freiburg3_structure_texture_far -c ./example/tum_rgbd/TUM_RGBD_rgbd_3.yaml

structure-plp-slam's People

Contributors

peterfws

structure-plp-slam's Issues

‘runtime_error’ is not a member of ‘std’

Hi,

An error occurred during compilation:
Structure-PLP-SLAM/src/pangolin_viewer/color_scheme.cc:45:24: error: ‘runtime_error’ is not a member of ‘std’

std::runtime_error is defined in the header <stdexcept>, so adding #include <stdexcept> to src/pangolin_viewer/color_scheme.h should solve the issue.

Run is aborted. Why?

./build/run_euroc_slam_with_line -v ./orb_vocab/orb_vocab.dbow2 -d /home/gq/guoqian/data/V1_02_medium/mav0/ -c ./example/euroc/EuRoC_mono.yaml
[2022-08-24 14:19:29.138] [I] config file loaded: ./example/euroc/EuRoC_mono.yaml


Structure PLP-SLAM:
Copyright (C) 2022, Department Augmented Vision, DFKI, Germany.
All rights reserved.

This is free software,
and you are welcome to redistribute it under certain conditions.
See the LICENSE file.

Camera Configuration:

  • name: EuRoC monocular
  • setup: Monocular
  • fps: 20
  • cols: 752
  • rows: 480
  • color: Gray
  • model: Perspective
    • fx: 458.654
    • fy: 457.296
    • cx: 367.215
    • cy: 248.375
    • k1: -0.283408
    • k2: 0.0739591
    • p1: 0.00019359
    • p2: 1.76187e-05
    • k3: 0
    • min x: -135.812
    • max x: 895.507
    • min y: -92.8751
    • max y: 565.556

ORB Configuration:

  • number of keypoints: 1000
  • scale factor: 1.2
  • number of levels: 8
  • initial fast threshold: 20
  • minimum fast threshold: 7

Other Configuration:
PangolinViewer:

  • camera_line_width: 3
  • camera_size: 0.08
  • graph_line_width: 1
  • keyframe_line_width: 1
  • keyframe_size: 0.07
  • point_size: 2
  • viewpoint_f: 400
  • viewpoint_x: 0
  • viewpoint_y: -0.65
  • viewpoint_z: -1.9

[2022-08-24 14:19:29.141] [I] loading ORB vocabulary: ./orb_vocab/orb_vocab.dbow2
[2022-08-24 14:19:29.553] [I] startup SLAM system
Gtk-Message: 14:19:29.712: Failed to load module "canberra-gtk-module"
[2022-08-24 14:19:34.352] [I] initialization succeeded with H
double free or corruption (out)
*** Aborted at 1661321974 (unix time) try "date -d @1661321974" if you are using GNU date ***
PC: @ 0x7f9cc3ec800b gsignal
*** SIGABRT (@0x3e80001f570) received by PID 128368 (TID 0x7f9ca8b50700) from PID 128368; stack trace: ***
@ 0x7f9cc595e631 (unknown)
@ 0x7f9cc550e420 (unknown)
@ 0x7f9cc3ec800b gsignal
@ 0x7f9cc3ea7859 abort
@ 0x7f9cc3f1226e (unknown)
@ 0x7f9cc3f1a2fc (unknown)
@ 0x7f9cc3f1bfa0 (unknown)
@ 0x7f9cc3ddc6a8 g2o::OptimizableGraph::~OptimizableGraph()
@ 0x7f9cc57edc7f PLPSLAM::optimize::global_bundle_adjuster::optimize()
@ 0x7f9cc57737f6 PLPSLAM::module::initializer::create_map_for_monocular()
@ 0x7f9cc5774f9b PLPSLAM::module::initializer::initialize()
@ 0x7f9cc563edd0 PLPSLAM::tracking_module::initialize()
@ 0x7f9cc5641245 PLPSLAM::tracking_module::track()
@ 0x7f9cc5641c28 PLPSLAM::tracking_module::track_monocular_image()
@ 0x7f9cc5626581 PLPSLAM::system::feed_monocular_frame()
@ 0x5638a377a985 _ZZ13mono_trackingRKSt10shared_ptrIN7PLPSLAM6configEERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESC_jbbbSC_bENKUlvE_clEv
@ 0x7f9cc4168de4 (unknown)
@ 0x7f9cc5502609 start_thread
@ 0x7f9cc3fa4133 clone
Aborted (core dumped)

Puzzled at using partial_occlusion for line projection

Hi,

When projecting 3D lines to the 2D image, to find support from the image observation, the code (i) first projects the two end points of the line to the image, and then (ii) checks whether the reprojected end points are visible in image space. If one of the end points is invisible, the code checks the visibility of the line's mid-point; if that is visible, partial_occlusion is set to true (see the code starting at this line).

However, later on the code does not use partial_occlusion; instead, the two end points are still used to find the 2D line support (see the code here).

I am puzzled: what is the need for partial_occlusion?

How to turn off global BA and loop closure detection

Hi, I would like to evaluate your amazing work in visual-odometry mode by turning off global bundle adjustment and loop closure detection. The BA, loop-closure, and related modules are tightly coupled in the pipeline, so I wonder how this can be achieved so that only the tracking module remains.
Thank you!

I can't compile the code

I got some code syntax errors. May I ask about your test platform: Ubuntu 18.04 or Ubuntu 20.04? And do I need to satisfy the requirements of stella_vslam?

could you guys make a colab notebook

It would be great if you could make a Colab notebook for testing monocular scene videos (.mp4), where a user can upload an .mp4 video as input and get a map as output. I hope you will look into this; it would be great to test this on Google Colab (using the T4 GPU).

cmake unsuccessful

Hello author, I am trying to reproduce your code. When I compile it after installing the dependencies, the following error appears:

CMake Error in src/PLPSLAM/CMakeLists.txt:
Target "PLPSLAM" INTERFACE_INCLUDE_DIRECTORIES property contains path:
"/home/xxx/project/Structure-PLP-SLAM/src/PLPSLAM/"
which is prefixed in the source directory.

CMake Error in src/PLPSLAM/CMakeLists.txt:
Target "PLPSLAM" INTERFACE_INCLUDE_DIRECTORIES property contains path:
"/home/xxx/project/Structure-PLP-SLAM/src/PLPSLAM//usr/local/include/DBoW2"
which is prefixed in the source directory.

In addition, do I need to compile PLPSLAM first under the src folder? How should these problems be solved?

cmake error

This is my cmake error; if you know something about it, please reply. Thank you very much.
CMake Error in src/PLPSLAM/CMakeLists.txt:
Target "PLPSLAM" INTERFACE_INCLUDE_DIRECTORIES property contains path:

"/home/xt/Structure-PLP-SLAM-main/src/PLPSLAM/"

which is prefixed in the source directory.

CMake Error in src/PLPSLAM/CMakeLists.txt:
Target "PLPSLAM" INTERFACE_INCLUDE_DIRECTORIES property contains path:

"/home/xt/Structure-PLP-SLAM-main/src/PLPSLAM//usr/local/include/DBoW2"

which is prefixed in the source directory.

-- Generating done
CMake Generate step failed. Build files cannot be regenerated correctly.

initialization error

Hi,

I tried my own data with this repo, but it can't get past the initialization phase. The error log is as follows:

[2023-06-25 18:12:51.414] [I] initialization succeeded with F
double free or corruption (out)
*** Aborted at 1687687971 (unix time) try "date -d @1687687971" if you are using GNU date ***
PC: @     0x7fc9dd81c00b gsignal
*** SIGABRT (@0x3e800172112) received by PID 1515794 (TID 0x7fc904455700) from PID 1515794; stack trace: ***
    @     0x7fc9df4dc631 (unknown)
    @     0x7fc9df052420 (unknown)
    @     0x7fc9dd81c00b gsignal
    @     0x7fc9dd7fb859 abort
    @     0x7fc9dd86626e (unknown)
    @     0x7fc9dd86e2fc (unknown)
    @     0x7fc9dd86ffa0 (unknown)
    @     0x7fc9dd7360a8 g2o::OptimizableGraph::~OptimizableGraph()
    @     0x7fc9df35577f PLPSLAM::optimize::global_bundle_adjuster::optimize()
    @     0x7fc9df2d0416 PLPSLAM::module::initializer::create_map_for_monocular()
    @     0x7fc9df2d1b9b PLPSLAM::module::initializer::initialize()
    @     0x7fc9df188070 PLPSLAM::tracking_module::initialize()
    @     0x7fc9df18a5ed PLPSLAM::tracking_module::track()
    @     0x7fc9df18affb PLPSLAM::tracking_module::track_monocular_image()
    @     0x7fc9df16a351 PLPSLAM::system::feed_monocular_frame()
    @     0x555890af133f _ZZ13mono_trackingRKSt10shared_ptrIN7PLPSLAM6configEERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESC_SC_SC_jbbbSC_ENKUlvE_clEv
    @     0x7fc9ddabede4 (unknown)
    @     0x7fc9df046609 start_thread
    @     0x7fc9dd8f8133 clone
Aborted

Every time it outputs initialization succeeded, it fails immediately with this error.
Also, I already used the g2o version provided in your OneDrive link.
Any idea what causes the double free and the g2o::OptimizableGraph::~OptimizableGraph() error?

THANKS

about the jacobian calculation

I am wondering why oplusImpl is not implemented in the reproj_edge_line3d class. Where is the Jacobian matrix computed?

pangolin error

Hello, I ran this project and found that Pangolin only visualizes the current keyframe; the points, keyframe trajectory, and line landmarks are not visualized.

Any answers are greatly appreciated.

g2o version

Thanks for your great work! But there are some problems when compiling.
I have de-activated BUILD_WITH_MARCH_NATIVE in cmake, but when I run make -j4, the following error occurs:
/PLPSLAM/optimize/g2o/se3/shot_vertex.h:58:42: error: ‘number_t’ does not name a type; did you mean ‘timer_t’?
along with some other g2o errors, for example:
error: ‘Matrix2’ in namespace ‘g2o’ does not name a type
It seems to be a g2o version problem. Could you please tell us which g2o version you used? Thanks a lot!

Jacobian of line residuals

Hi,

Can I ask about the technical details of the line residual optimization in g2o? The appendix of the paper derives the Jacobian of the line-feature residuals with respect to the pose, but I didn't find the relevant code in the project. If the code exists, can you tell me where it is?

Thank you!

Cannot find frame_trajectory.txt

./build/run_tum_rgbd_slam_with_line -v ./orb_vocab/orb_vocab.dbow2 -d ../datasets/TUM/rgbd_dataset_freiburg3_large_cabinet/ -c ./example/tum_rgbd/TUM_RGBD_rgbd_3.yaml --eval-log
or
./build/run_tum_rgbd_slam_with_line -v ./orb_vocab/orb_vocab.dbow2 -d ../datasets/TUM/rgbd_dataset_freiburg3_large_cabinet/ -c ./example/tum_rgbd/TUM_RGBD_rgbd_3.yaml --eval-log frame_trajectory.txt
Hello, I've tried adding both "--eval-log" and "--eval-log frame_trajectory.txt" to the command, but I couldn't find the corresponding trajectory file after running. Can you please help me? Thank you.

EuRoC example error

When I run the EuRoC MH_01 data as ./run_euroc_slam -v ../Structure-PLP-SLAM/orb_vocab/orb_vocab.dbow2 -d /home/robot/dataset/MH_01/mav0 -c /home/robot/lib/PLPSlam/Structure-PLP-SLAM/example/euroc/EuRoC_mono.yaml, it shows an error:
[2023-04-03 17:15:01.146] [I] config file loaded: /home/robot/lib/PLPSlam/Structure-PLP-SLAM/example/euroc/EuRoC_mono.yaml


Structure PLP-SLAM:
Copyright (C) 2022, Department Augmented Vision, DFKI, Germany.
All rights reserved.

This is free software,
and you are welcome to redistribute it under certain conditions.
See the LICENSE file.

Camera Configuration:

  • name: EuRoC monocular
  • setup: Monocular
  • fps: 20
  • cols: 752
  • rows: 480
  • color: Gray
  • model: Perspective
    • fx: 458.654
    • fy: 457.296
    • cx: 367.215
    • cy: 248.375
    • k1: -0.283408
    • k2: 0.0739591
    • p1: 0.00019359
    • p2: 1.76187e-05
    • k3: 0
    • min x: -135.812
    • max x: 895.507
    • min y: -92.8751
    • max y: 565.556

ORB Configuration:

  • number of keypoints: 1000
  • scale factor: 1.2
  • number of levels: 8
  • initial fast threshold: 20
  • minimum fast threshold: 7

Other Configuration:
PangolinViewer:

  • camera_line_width: 3
  • camera_size: 0.08
  • graph_line_width: 1
  • keyframe_line_width: 1
  • keyframe_size: 0.07
  • point_size: 2
  • viewpoint_f: 400
  • viewpoint_x: 0
  • viewpoint_y: -0.65
  • viewpoint_z: -1.9

[2023-04-03 17:15:01.170] [I] loading ORB vocabulary: ../Structure-PLP-SLAM/orb_vocab/orb_vocab.dbow2
[2023-04-03 17:15:01.438] [I] startup SLAM system
[2023-04-03 17:15:01.438] [I] start mapping module
[2023-04-03 17:15:01.438] [I] start global optimization module
terminate called after throwing an instance of 'YAML::BadFile'
what(): bad file

How is your SLAM's ATE?

I ran your SLAM code on Ubuntu 18.04 (in point-and-line mode) and found that the translation component of the ATE is not good.

trajectory issue

Hello, thanks for the code. I tried to evaluate the trajectory and ATE, but no trajectory file (e.g. keyframe_trajectory.txt) was generated at the end of the run. Where is the trajectory file generated, and how is keyframe_trajectory.txt produced?
