prbonn / semantic_suma

SuMa++: Efficient LiDAR-based Semantic SLAM (Chen et al IROS 2019)

License: MIT License

CMake 0.91% C++ 78.93% C 5.31% GLSL 14.43% Dockerfile 0.43%
rangenet-lib suma lidar slam semantic 3d-lidar suma-plus-plus

semantic_suma's Introduction

SuMa++: Efficient LiDAR-based Semantic SLAM

This repository contains the implementation of SuMa++, which generates semantic maps only using three-dimensional laser range scans.

Developed by Xieyuanli Chen and Jens Behley.

SuMa++ is built upon SuMa and RangeNet++. For more details, we refer to the original project websites SuMa and RangeNet++.

An example of using SuMa++: [demo image: ptcl]

Table of Contents

  1. Introduction
  2. Publication
  3. Docker
  4. Dependencies
  5. Build
  6. How to run
  7. More Related Work
  8. Frequently Asked Questions
  9. License

Publication

If you use our implementation in your academic work, please cite the corresponding paper:

@inproceedings{chen2019iros, 
		author = {X. Chen and A. Milioto and E. Palazzolo and P. Giguère and J. Behley and C. Stachniss},
		title  = {{SuMa++: Efficient LiDAR-based Semantic SLAM}},
		booktitle = {Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
		pages = {4530--4537},
		year = {2019},
		url = {https://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/chen2019iros.pdf}
}

Docker

Thanks to the efforts of Hyunggi Chang, we now have a dockerized version of semantic_suma that takes care of the proper setup.

You can build the Docker image with the provided Dockerfile, i.e.,

docker build -t semantic_suma:latest .

and run the container by

docker run -it -e "DISPLAY=$DISPLAY" -e "QT_X11_NO_MITSHM=1" -e "XAUTHORITY=$XAUTH" -v "/tmp/.X11-unix:/tmp/.X11-unix:rw" -v /kitti:/data --runtime=nvidia --net=host --ipc=host --privileged semantic_suma:latest

Note that the host directory /kitti is mounted at /data inside the container; it should point to the KITTI files or another directory containing scans in the KITTI format. Follow the How to run instructions below to execute the binary.

The GUI is displayed via X11, so prior to starting the Docker container, run xhost +local:docker on the host to give the container access to the X server.
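Putting these pieces together, a minimal launch sequence on the host could look like the following sketch (the /kitti path is only an example and should point to your own KITTI-format data; XAUTH is assumed to be set in your shell):

xhost +local:docker                       # allow the container to access the X server
docker build -t semantic_suma:latest .    # build the image from the provided Dockerfile
docker run -it \
  -e "DISPLAY=$DISPLAY" -e "QT_X11_NO_MITSHM=1" -e "XAUTHORITY=$XAUTH" \
  -v "/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  -v /kitti:/data \
  --runtime=nvidia --net=host --ipc=host --privileged \
  semantic_suma:latest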

Dependencies

  • catkin
  • Qt5 >= 5.2.1
  • OpenGL >= 4.0
  • libEigen >= 3.2
  • gtsam >= 4.0 (tested with 4.0.0-alpha2)

On Ubuntu 16.04, installing all dependencies should be accomplished by:

sudo apt-get install build-essential cmake libgtest-dev libeigen3-dev libboost-all-dev qtbase5-dev libglew-dev libqt5libqgtk2 catkin

Additionally, make sure you have catkin-tools and the fetch verb installed:

sudo apt install python-pip
sudo pip install catkin_tools catkin_tools_fetch empy

Build

rangenet_lib

To use SuMa++, you first need to build rangenet_lib with the TensorRT and C++ interface. For more details about building and using rangenet_lib, see the rangenet_lib repository.

SuMa++

Clone the repository in the src directory of the same catkin workspace where you built the rangenet_lib:

git clone https://github.com/PRBonn/semantic_suma.git

Download the additional dependencies (or clone glow into your catkin workspace src yourself):

catkin deps fetch

For the first setup of your workspace containing this project, you need:

catkin build --save-config -i --cmake-args -DCMAKE_BUILD_TYPE=Release -DOPENGL_VERSION=430 -DENABLE_NVIDIA_EXT=YES

Here you have to set OPENGL_VERSION to the OpenGL core profile version supported by your system, which you can query as follows:

$ glxinfo | grep "version"
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL core profile version string: 4.3.0 NVIDIA 367.44
OpenGL core profile shading language version string: 4.30 NVIDIA [...]
OpenGL version string: 4.5.0 NVIDIA 367.44
OpenGL shading language version string: 4.50 NVIDIA

Here the line OpenGL core profile version string: 4.3.0 NVIDIA 367.44 is the important one, and therefore you should use -DOPENGL_VERSION=430. If you are unsure, you can also leave it at the default version 330, which should be supported by all OpenGL-capable devices.
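If you only want the relevant line, a quick filter (assuming glxinfo is available, e.g. from the mesa-utils package) is:

glxinfo | grep "OpenGL core profile version string"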

If you have an NVIDIA device, like a GeForce or Quadro graphics card, you should also activate the NVIDIA extensions using -DENABLE_NVIDIA_EXT=YES to get info about the current GPU memory usage of the program.

After these setup steps, you can simply build with catkin build, since the configuration has been saved to your current catkin profile (which is why --save-config was needed).

Now the project root directory (e.g. ~/catkin_ws/src/semantic_suma) should contain a bin directory containing the visualizer.
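For reference, a minimal end-to-end sketch of the whole setup (assuming a workspace at ~/catkin_ws in which rangenet_lib has already been built following its own instructions; adjust paths to your setup) might look like:

cd ~/catkin_ws/src
git clone https://github.com/PRBonn/semantic_suma.git
cd ~/catkin_ws
catkin deps fetch        # pulls glow and other dependencies into src/
catkin build --save-config -i --cmake-args -DCMAKE_BUILD_TYPE=Release -DOPENGL_VERSION=430 -DENABLE_NVIDIA_EXT=YES
# later rebuilds only need:
catkin build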

How to run

Important Notice

  • Before running SuMa++, you need to first build the rangenet_lib and download the pretrained model.
  • You need to specify the model path in the configuration file in the config/ folder.
  • On first use, rangenet_lib will take several minutes to build a .trt model for SuMa++.
  • SuMa++ currently only works with the KITTI dataset, since the semantic segmentation model may not generalize well to other environments.
  • To use SuMa++ with your own dataset, you may finetune or retrain the semantic segmentation network.

All binaries are copied to the bin directory of the source folder of the project. Thus (see the example sequence after this list),

  1. run visualizer in the bin directory by ./visualizer,
  2. open a Velodyne directory from the KITTI Visual Odometry Benchmark and select a ".bin" file,
  3. start the processing of the scans via the "play button" in the GUI.
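For example, a typical run looks like the sketch below (the workspace path ~/catkin_ws is an assumption; adjust it to your own setup, and the optional config-file argument matches what users report in the issues further down):

cd ~/catkin_ws/src/semantic_suma/bin
./visualizer                 # or: ./visualizer ../config/default.xml
# then open a KITTI velodyne directory, select a .bin scan, and press the play button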

More Related Work

Our RSS 2020 paper, OverlapNet - Loop Closing for 3D LiDAR-based SLAM, has its own code repository.

OverlapNet is a modified Siamese Network that predicts the overlap and relative yaw angle of a pair of range images generated by 3D LiDAR scans, which can be used for place recognition and loop closing.

Our IROS 2020 paper, Learning an Overlap-based Observation Model for 3D LiDAR Localization, also has its own code repository.

It uses the OverlapNet to train an observation model for Monte Carlo Localization and achieves global localization with 3D LiDAR scans.

Frequently Asked Questions

License

Copyright 2019, Xieyuanli Chen, Jens Behley, Cyrill Stachniss, Photogrammetry and Robotics Lab, University of Bonn.

This project is free software made available under the MIT License. For details see the LICENSE file.

semantic_suma's People

Contributors

changh95, chen-xieyuanli, jbehley, tano297


semantic_suma's Issues

Different semantic results of suma++

Hello,

Thanks for your work! After building SuMa++, I obtain the result shown below using the pretrained model provided on the webpage; however, the semantic result is quite different from the picture you provided. I am wondering why this happens (is it due to the pretrained model?). I also tried a model I trained from scratch myself; it works well with RangeNet++'s infer.py, yet the result here is still strange. The results of rangenet_lib are also different from the picture you provided.
Looking forward to your reply. :) Thanks!

[screenshot]

The visualizer crashes

The visualizer crashes giving different output each time I run it (without making any changes).

Segmentation fault

OpenGL Context Version 4.5 core profile
GLEW initialized.
OpenGL context version: 4.5
OpenGL vendor string  : NVIDIA Corporation
OpenGL renderer string: GeForce GTX 1080 Ti/PCIe/SSE2
Segmentation fault (core dumped)

rv::XmlError

OpenGL Context Version 4.5 core profile
GLEW initialized.
OpenGL context version: 4.5
OpenGL vendor string  : NVIDIA Corporation
OpenGL renderer string: GeForce GTX 1080 Ti/PCIe/SSE2
terminate called after throwing an instance of 'rv::XmlError'
  what():  Error while parsing in line 1
Aborted (core dumped)

Sometimes I also reach

Extracting surfel maps partially.
Performing frame-to-model matching.

But it also crashes.


  • rangenet_lib works fine with me.

  • As for the build I used:

catkin build --save-config -i --cmake-args -DCMAKE_BUILD_TYPE=Release -DOPENGL_VERSION=450 -DENABLE_NVIDIA_EXT=YES
  • And here is the output of glxinfo | grep "version"
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL core profile version string: 4.5.0 NVIDIA 390.25
OpenGL core profile shading language version string: 4.50 NVIDIA
OpenGL version string: 4.6.0 NVIDIA 390.25
OpenGL shading language version string: 4.60 NVIDIA
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 390.25
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
    GL_EXT_shader_implicit_conversions, GL_EXT_shader_integer_mix, 
  • I just had to change a line in the include_directories in the CMakeLists.txt from /usr/include/eigen3 to /usr/local/include/eigen3 because I got the following error:
error: static assertion failed: Error: GTSAM was built against a different version of Eigen

which made the build work.


Issue to be deleted (not in the right repo, sorry)

Can not detect loop closure sometimes!

My experiments with SuMa++ on KITTI always fail to detect loop closures at some places. I have tried many loop closure parameters, but I think my parameters have problems. Could you tell me how to change the parameters to make it work?

question about data type

Hi, I'd like to try my own LiDAR point cloud with this algorithm.
What data format is needed for testing?

Code segment for ICP improvement?

I would like to build on top of the ICP algorithm by using an EM algorithm to softly assign the points. I was curious where exactly the ICP algorithm is implemented.

I've dug through the code and it seems the ICP algorithm is implemented in src/shaders/Frame2Model_jacobians.geom. However, since that is a GLSL shader, I was wondering whether I should modify the algorithm there directly or somewhere else.

A problem with "glDrawArrays(GL_POINTS, 0, 1)" in the process function

Hello,

I have a problem with the Preprocessing::process function. For avgVertexmap_, filterVertexmap_ and the normal-map generation, it just calls glDrawArrays(GL_POINTS, 0, 1), which draws a single vertex. Even though the quad.geom shader generates 4 vertices from it, that is still only 4 vertices in total. So how does the fragment shader, for example gen_normalmap.frag, generate a normal vector for all vertices?
Please help me, thanks a lot!

glDisable(GL_DEPTH_TEST);

glow::GlTextureRectangle erode_semantic_map(width_, height_, TextureFormat::RGBA_FLOAT);

semanticbuffer_.attach(FramebufferAttachment::COLOR0, frame.normal_map);
semanticbuffer_.attach(FramebufferAttachment::COLOR1, erode_semantic_map);
semanticbuffer_.bind();

glActiveTexture(GL_TEXTURE0);
if (filterVertexmap_)
  temp_vertices_.bind();
else
  frame.vertex_map.bind();

glActiveTexture(GL_TEXTURE1);
frame.semantic_map.bind();

sampler_.bind(0);
sampler_.bind(1);

vao_no_points_.bind();
normal_program_.bind();

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // reset depth/normalmap
glDrawArrays(GL_POINTS, 0, 1);

normal_program_.release();
vao_no_points_.release();
semanticbuffer_.release();

glActiveTexture(GL_TEXTURE0);
if (filterVertexmap_)
  temp_vertices_.release();
else
  frame.vertex_map.release();

glActiveTexture(GL_TEXTURE1);
frame.semantic_map.release();

sampler_.release(0);
sampler_.release(1);

`SyntaxError: invalid syntax`.

Hello! I ran into an error and need your help. Thank you!
When I run catkin deps fetch, it responds with
SyntaxError: invalid syntax.
What's wrong with it?
I'm using pip version 8.1.1 and installed catkin-tools version 0.8.2 on Ubuntu 16.04. Is the version wrong?

The error is like this:
Traceback (most recent call last):
File "/usr/local/bin/catkin", line 9, in <module>
load_entry_point('catkin-tools==0.7.0', 'console_scripts', 'catkin')()
...
File "/usr/local/lib/python2.7/dist-packages/catkin_tools/commands/catkin.py", line 200
file=sys.stderr)
^
SyntaxError: invalid syntax

Failed to run visualizer...

HI~

After building the project and running ./visualizer, I got this error:

OpenGL Context Version 3.3 core profile
GLEW initialized.
OpenGL context version: 3.3
OpenGL vendor string  : VMware, Inc.
OpenGL renderer string: llvmpipe (LLVM 6.0, 256 bits)
terminate called after throwing an instance of 'glow::GlShaderError'
  what():  shader/gen_surfels.geom: 0:1(10): error: GLSL 4.00 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.40, 1.50, 3.30, 1.00 ES, and 3.00 ES

Aborted (core dumped)

Here is the output of glxinfo | grep "version"; I have set -DOPENGL_VERSION=330 to compile...

server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
    Max core profile version: 3.3
    Max compat profile version: 3.0
    Max GLES1 profile version: 1.1
    Max GLES[23] profile version: 3.0
OpenGL core profile version string: 3.3 (Core Profile) Mesa 18.0.5
OpenGL core profile shading language version string: 3.30
OpenGL version string: 3.0 Mesa 18.0.5
OpenGL shading language version string: 1.30
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 18.0.5
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00

Do you know how to get past this?

Thanks ~

Segfault when executing

Hi Chen,
When I try to use SuMa++, I run into some trouble. Can you help me solve it?
My GPU is an RTX 2070 SUPER, and my environment is:
Driver Version: 440.100, CUDA Version: 10.1, cudnn: 7.5.0, TensorRT-5.1.2.2
I configured my environment according to the README, and it compiles normally.
My system is Ubuntu 18.04; the dependency libqt5libqgtk2 is replaced by qt5-style-plugins.

OpenGL Context Version 4.6 core profile
GLEW initialized.
OpenGL context version: 4.6
OpenGL vendor string : NVIDIA Corporation
OpenGL renderer string: GeForce RTX 2070 SUPER/PCIe/SSE2
Extracting surfel maps partially.
Performing frame-to-model matching.
Segmentation fault (core dumped)

Sometimes it is:

OpenGL Context Version 4.6 core profile
GLEW initialized.
OpenGL context version: 4.6
OpenGL vendor string : NVIDIA Corporation
OpenGL renderer string: GeForce RTX 2070 SUPER/PCIe/SSE2
Segmentation fault (core dumped)

I used the gdb tool to locate the segmentation fault as follows:
GNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/.
Find the GDB manual and other documentation resources online at:
http://www.gnu.org/software/gdb/documentation/.
For help, type "help".
Type "apropos word" to search for commands related to "word".
(gdb) file visualizer
Reading symbols from visualizer...done.
(gdb) run
Starting program: /home/darren/ros_projects/rangenet_ws/src/semantic_suma/bin/visualizer
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffbfe89700 (LWP 14197)]
[New Thread 0x7fffb59a7700 (LWP 14198)]
[New Thread 0x7fffb51a6700 (LWP 14199)]
[New Thread 0x7fffaef60700 (LWP 14200)]
OpenGL Context Version 4.6 core profile
GLEW initialized.
OpenGL context version: 4.6
OpenGL vendor string : NVIDIA Corporation
OpenGL renderer string: GeForce RTX 2070 SUPER/PCIe/SSE2

Thread 1 "visualizer" received signal SIGSEGV, Segmentation fault.
0x00005555555b3a25 in _mm256_store_ps (__A=..., __P=<optimized out>) at /usr/lib/gcc/x86_64-linux-gnu/7/include/avxintrin.h:880
880   *(__m256 *)__P = __A;
(gdb) bt
#0 0x00005555555b3a25 in _mm256_store_ps (__A=..., __P=) at /usr/lib/gcc/x86_64-linux-gnu/7/include/avxintrin.h:880
#1 0x00005555555b3a25 in Eigen::internal::pstore<float, float __vector(8)>(float
, float __vector(8) const&) (from=..., to=)
at /usr/local/include/eigen3/Eigen/src/Core/arch/AVX/PacketMath.h:251
#2 0x00005555555b3a25 in Eigen::internal::pstoret<float, float __vector(8), 32>(float
, float __vector(8) const&) (from=..., to=)
at /usr/local/include/eigen3/Eigen/src/Core/GenericPacketMath.h:474
#3 0x00005555555b3a25 in Eigen::internal::assign_op<float, float>::assignPacket<32, float __vector(8)>(float*, float __vector(8) const&) const (this=, b=..., a=) at /usr/local/include/eigen3/Eigen/src/Core/functors/AssignmentFunctors.h:28
#4 0x00005555555b3a25 in Eigen::internal::generic_dense_assignment_kernel<Eigen::internal::evaluator<Eigen::Matrix<float, 4, 4, 0, 4, 4> >, Eigen::internal::evaluator<Eigen::Matrix<float, 4, 4, 0, 4, 4> >, Eigen::internal::assign_op<float, float>, 0>::assignPacket<32, 32, float __vector(8)>(long, long) (this=, this=, col=, row=) at /usr/local/include/eigen3/Eigen/src/Core/AssignEvaluator.h:652
#5 0x00005555555b3a25 in Eigen::internal::generic_dense_assignment_kernel<Eigen::internal::evaluator<Eigen::Matrix<float, 4, 4, 0, 4, 4> >, Eigen::internal::evaluator<Eigen::Matrix<float, 4, 4, 0, 4, 4> >, Eigen::internal::assign_op<float, float>, 0>::assignPacketByOuterInner<32, 32, float __vector(8)>(long, long) (inner=0, outer=0, this=) at /usr/local/include/eigen3/Eigen/src/Core/AssignEvaluator.h:666
#6 0x00005555555b3a25 in Eigen::internal::copy_using_evaluator_innervec_CompleteUnrolling<Eigen::internal::generic_dense_assignment_kernel<Eigen::internal::evaluator<Eigen::Matrix<float, 4, 4, 0, 4, 4> >, Eigen::internal::evaluator<Eigen::Matrix<float, 4, 4, 0, 4, 4> >, Eigen::internal::assign_op<float, float>, 0>, 0, 16>::run(Eigen::internal::generic_dense_assignment_kernel<Eigen::internal::evaluator<Eigen::Matrix<float, 4, 4, 0, 4, 4> >, Eigen::internal::evaluator<Eigen::Matrix<float, 4, 4, 0, 4, 4> >, Eigen::internal::assign_op<float, float>, 0>&) (kernel=...) at /usr/local/include/eigen3/Eigen/src/Core/AssignEvaluator.h:274
#7 0x00005555555b3a25 in Eigen::internal::dense_assignment_loop<Eigen::internal::generic_dense_assignment_kernel<Eigen::internal::evaluator<Eigen::Matrix<float, 4, 4, 0, 4, 4> >, Eigen::internal::evaluator<Eigen::Matrix<float, 4, 4, 0, 4, 4> >, Eigen::internal::assign_op<float, float>, 0>, 3, 2>::run(Eigen::internal::generic_dense_assignment_kernel<Eigen::internal::evaluator<Eigen::Matrix<float, 4, 4, 0, 4, 4> >, Eigen::internal::evaluator<Eigen::Matrix<float, 4, 4, 0, 4, 4> >, Eigen::internal::assign_op<float, float>, 0>&) (kernel=...) at /usr/local/include/eigen3/Eigen/src/Core/AssignEvaluator.h:434
#8 0x00005555555b3a25 in Eigen::internal::call_dense_assignment_loop<Eigen::Matrix<float, 4, 4, 0, 4, 4>, Eigen::Matrix<float, 4, 4, 0, 4, 4>, Eigen::internal::assign_op<float, float> >(Eigen::Matrix<float, 4, 4, 0, 4, 4>&, Eigen::Matrix<float, 4, 4, 0, 4, 4> const&, Eigen::internal::assign_op<float, float> const&) (func=..., src=..., dst=...) at /usr/local/include/eigen3/Eigen/src/Core/AssignEvaluator.h:741
#9 0x00005555555b3a25 in Eigen::internal::Assignment<Eigen::Matrix<float, 4, 4, 0, 4, 4>, Eigen::Matrix<float, 4, 4, 0, 4, 4>, Eigen::internal::assign_op<float, float>, Eigen::internal::Dense2Dense, void>::run(Eigen::Matrix<float, 4, 4, 0, 4, 4>&, Eigen::Matrix<float, 4, 4, 0, 4, 4> const&, Eigen::internal::assign_op<float, float> const&) (func=..., src=..., dst=...) at /usr/local/include/eigen3/Eigen/src/Core/AssignEvaluator.h:879
#10 0x00005555555b3a25 in Eigen::internal::call_assignment_no_alias<Eigen::Matrix<float, 4, 4, 0, 4, 4>, Eigen::Matrix<float, 4, 4, 0, 4, 4>, Eigen::internal::assign_op<float, float> >(Eigen::Matrix<float, 4, 4, 0, 4, 4>&, Eigen::Matrix<float, 4, 4, 0, 4, 4> const&, Eigen::internal::assign_op<float, float> const&) (func=..., src=..., dst=...) at /usr/local/include/eigen3/Eigen/src/Core/AssignEvaluator.h:836
#11 0x00005555555b3a25 in Eigen::internal::call_assignment<Eigen::Matrix<float, 4, 4, 0, 4, 4>, Eigen::Matrix<float, 4, 4, 0, 4, 4>, Eigen::internal::assign_op<float, float> >(Eigen::Matrix<float, 4, 4, 0, 4, 4>&, Eigen::Matrix<float, 4, 4, 0, 4, 4> const&, Eigen::internal::assign_op<float, float> const&, Eigen::internal::enable_if<!Eigen::internal::evaluator_assume_aliasing<Eigen::Matrix<float, 4, 4, 0, 4, 4>, Eigen::internal::evaluator_traits<Eigen::Matrix<float, 4, 4, 0, 4, 4> >::Shape>::value, void*>::type) (func=..., src=..., dst=...) at /usr/local/include/eigen3/Eigen/src/Core/AssignEvaluator.h:804
#12 0x00005555555b3a25 in Eigen::internal::call_assignment<Eigen::Matrix<float, 4, 4, 0, 4, 4>, Eigen::Matrix<float, 4, 4, 0, 4, 4> >(Eigen::Matrix<float, 4, 4, 0, 4, 4>&, Eigen::Matrix<float, 4, 4, 0, 4, 4> const&) (src=..., dst=...) at /usr/local/include/eigen3/Eigen/src/Core/AssignEvaluator.h:782
#13 0x00005555555b3a25 in Eigen::PlainObjectBase<Eigen::Matrix<float, 4, 4, 0, 4, 4> >::_set<Eigen::Matrix<float, 4, 4, 0, 4, 4> >(Eigen::DenseBase<Eigen::Matrix<float, 4, 4, 0, 4, 4> > const&) (other=..., this=0x555555f883b0) at /usr/local/include/eigen3/Eigen/src/Core/PlainObjectBase.h:714
---Type to continue, or q to quit---
#14 0x00005555555b3a25 in Eigen::Matrix<float, 4, 4, 0, 4, 4>::operator=(Eigen::Matrix<float, 4, 4, 0, 4, 4> const&) (other=..., this=0x555555f883b0)
at /usr/local/include/eigen3/Eigen/src/Core/Matrix.h:208
#15 0x00005555555b3a25 in ViewportWidget::ViewportWidget(QWidget*, QGLWidget const*, QFlagsQt::WindowType) (this=0x555555f877d0, parent=, shareWidget=, f=...) at /home/darren/ros_projects/rangenet_ws/src/semantic_suma/src/visualizer/ViewportWidget.cpp:46
#16 0x00005555555d4b72 in Ui_MainWindow::setupUi(QMainWindow*) (this=this@entry=0x7fffffffd2c8, MainWindow=MainWindow@entry=0x7fffffffd280)
at /home/darren/ros_projects/rangenet_ws/build/semantic_suma/ui_visualizer.h:939
#17 0x00005555555c7811 in VisualizerWindow::VisualizerWindow(int, char**) (this=0x7fffffffd280, argc=, argv=)
at /home/darren/ros_projects/rangenet_ws/src/semantic_suma/src/visualizer/VisualizerWindow.cpp:35
#18 0x000055555557cb02 in main(int, char**) (argc=, argv=0x7fffffffdbf8)
at /home/darren/ros_projects/rangenet_ws/src/semantic_suma/src/visualizer/visualizer.cpp:19
What should I do? Is it an environmental configuration problem?

The error : 'std::runtime_error'

Hello. When I ran ./visualizer and selected the data file, the running window disappeared. But I can run rangenet_lib correctly. The following is the output from the command window.
/********************************************/
OpenGL Context Version 4.5 core profile
GLEW initialized.
OpenGL context version: 4.5
OpenGL vendor string : NVIDIA Corporation
OpenGL renderer string: GeForce RTX 2060/PCIe/SSE2
Extracting surfel maps partially.
Performing frame-to-model matching.
terminate called after throwing an instance of 'std::runtime_error'
what(): Can't open cfg.yaml from ~/test/darknet53/arch_cfg.yaml
/**********************************************/
Hope someone can help me solve this problem! Thanks.

visualizer segmentation fault

I used the command ./visualizer ~/catkin_suma/src/semantic_suma-master/config/default.xml to run the visualizer, but I encountered a segmentation fault.

OpenGL Context Version 4.6 core profile
GLEW initialized.
OpenGL context version: 4.6
OpenGL vendor string  : NVIDIA Corporation
OpenGL renderer string: GeForce GTX 1080/PCIe/SSE2
Extracting surfel maps partially.
Performing frame-to-model matching.
[1]    21658 segmentation fault (core dumped)  ./visualizer
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffc42d1700 (LWP 21782)]
[New Thread 0x7fffb9d9e700 (LWP 21783)]
[New Thread 0x7fffb959d700 (LWP 21784)]
[New Thread 0x7fffb379a700 (LWP 21785)]
OpenGL Context Version 4.6 core profile
GLEW initialized.
OpenGL context version: 4.6
OpenGL vendor string  : NVIDIA Corporation
OpenGL renderer string: GeForce GTX 1080/PCIe/SSE2

Thread 1 "visualizer" received signal SIGSEGV, Segmentation fault.
0x00005555555b2e39 in _mm256_store_ps (__A=..., __P=<optimized out>)
    at /usr/lib/gcc/x86_64-linux-gnu/7/include/avxintrin.h:879
879	  *(__m256 *)__P = __A;
(gdb) 

I tried rangenet_lib; it seems to work fine.

================================================================================
scan: ~/catkin_suma/src/rangenet_lib-master/example/000000.bin
path: ~/catkin_suma/src/semantic_suma-master/darknet53/
verbose: 0
================================================================================
Setting verbosity to: false
Trying to open model
Trying to deserialize previously stored: ~/catkin_suma/src/semantic_suma-master/darknet53//model.trt
Could not deserialize TensorRT engine. 
Generating from sratch... This may take a while...
Trying to generate trt engine from : ~/catkin_suma/src/semantic_suma-master/darknet53//model.onnx
Platform DOESN'T HAVE fp16 support.
No DLA selected.
----------------------------------------------------------------
Input filename:   ~/catkin_suma/src/semantic_suma-master/darknet53//model.onnx
ONNX IR version:  0.0.4
Opset version:    9
Producer name:    pytorch
Producer version: 1.1
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
 ----- Parsing of ONNX model ~/catkin_suma/src/semantic_suma-master/darknet53//model.onnx is Done ---- 
Success picking up ONNX model
Failure creating engine from ONNX model
Current trial size is 8589934592
Cuda error in file src/implicit_gemm.cu at line 648: out of memory
Success creating engine from ONNX model
Final size is 4294967296
Success creating engine from ONNX model
Trying to serialize engine and save to : ~/catkin_suma/src/semantic_suma-master/darknet53//model.trt for next run
Binding: 0, type: 0
[Dim 5][Dim 64][Dim 2048]
Binding: 1, type: 0
[Dim 20][Dim 64][Dim 2048]
Successfully create binding buffer
================================================================================
Predicting image: ~/catkin_suma/src/rangenet_lib-master/example/000000.bin
================================================================================
Example finished! 

How to install opengl >= 4.0?

Ubuntu 16.04's default OpenGL version is 3.3.
How can I update it?
Also, after I finished installing the NVIDIA driver,
I can't get the OpenGL info:
root@ubuntu:~# glxinfo | grep "version"
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL version string: 1.4 (2.1 Mesa 10.5.4)

The question about IMLS experiment result ?

I noticed that you compared your results with IMLS-SLAM. I'm really interested in that part; how can I reimplement the implicit surface function? Can you help me?

calib.txt for saving poses

Sorry, I was looking for the "calib.txt" and figured out that it is provided in the KITTI odometry dataset (but not in the raw data).

Not the same as video demo's

Hi guys ,

Thanks for your contributions.
Recently I configured the environment (NVIDIA RTX 2080, CUDA 10.1, etc.) for SuMa++ and obtained results from the KITTI dataset as in your paper.
[screenshots: suma_1, suma_2]

KITTI odometry velodyne:
[screenshot: dataset]

Here I attach my results (two images); I wonder why the result is not the same?

Thanks.

How to run KITTI odometry dataset

I downloaded the KITTI odometry dataset; there are many .bin files in the velodyne folder, but I can only run one .bin at a time.
How can I select many .bin files at once so that they play like a video instead of single images?

catkin build error related to rangenet_lib

Dear authors,

I have encountered a problem while using the "catkin build ......." command to build SuMa++; the error information is shown below.

I tried OpenCV 3.3.1, 3.4.1 and 4.1.0, but the same error occurred.

Could you please give me a hint about how to fix this error?

[semantic_suma:make] /home/zxkj/PRBonn/devel/.private/rangenet_lib/lib/librangenet_lib.so: undefined reference to `cv::String::deallocate()'
[semantic_suma:make] collect2: error: ld returned 1 exit status
[semantic_suma:make] CMakeFiles/visualizer.dir/build.make:531: recipe for target '/home/zxkj/PRBonn/src/semantic_suma/bin/visualizer' failed
[semantic_suma:make] make[2]: *** [/home/zxkj/PRBonn/src/semantic_suma/bin/visualizer] Error 1
[semantic_suma:make] CMakeFiles/Makefile2:299: recipe for target 'CMakeFiles/visualizer.dir/all' failed
[semantic_suma:make] make[1]: *** [CMakeFiles/visualizer.dir/all] Error 2
[semantic_suma:make] Makefile:138: recipe for target 'all' failed
[semantic_suma:make] make: *** [all] Error 2
Failed <<< semantic_suma [ 1 minute and 3.0 seconds ]
[build] Summary: 4 of 5 packages succeeded.
[build] Ignored: None.
[build] Warnings: 3 packages succeeded with warnings.
[build] Abandoned: None.
[build] Failed: 1 packages failed.
[build] Runtime: 1 minute and 24.2 seconds total.
[build] Note: Workspace packages have changed, please re-source setup files to use them.

Questions about parts of the code

I read your paper and code, and I have some questions about it. Could you give me some help? Thank you!
First, in kittireader.cpp around lines 27-28, the code label_map_ = net->getLabelMap(); color_map_ = net->getColorMap(); retrieves the label map and the color map. I looked through the rangenet_lib files and could not find the definitions of these two functions, and I don't know what the two maps represent. I also debugged the code and printed the label map in the terminal; it seems to contain fixed values like 0, 10, 11, 15, 18, 20, 30, 31, 32, 40, 44, 48, 49, 50, 51, ...
Second, at the end of the same file there is the line if (labels_prob[i] <= color_mask[i*20+j]). Why is the probability compared with the color mask, and what do the loop lines labels[i] = label_map_[j]; labels_prob[i] = color_mask[i*20+j]; do? Could you give me some advice? Thank you!
Also, is the variable remission the same as the intensity in the point cloud, or is there a difference?

./visualizer: segmentation fault

When I run ./visualizer, an error appears: Segmentation fault (core dumped).
I have a default.xml in the ../config folder, and I changed the model_path in default.xml to point to my model files.
I do not know where the error comes from; could someone help me? Thanks!

How to find rangenet_lib?

I'm sorry, I have already installed rangenet_lib. The package was installed successfully, but when I install SuMa++ on Ubuntu, "catkin deps fetch" tells me that rangenet_lib can't be found:
[rangenet_lib] : [NOT FOUND]

I built rangenet_lib in another workspace, but I have already sourced it and moved the build files into the system library directory, and it still doesn't work. Is there some operation like "sudo make install" that I should do?
I'm sorry to bother you about this. Thank you very much for your help.

SuMa++ Save poses

Hello!
Can the poses from SuMa++ be saved directly to a txt file? I see that the "Loop Closure" part of the GUI has a "Save poses" button, but nothing happens when I click it. Am I doing something wrong? Looking forward to your reply, thank you!

the effect of semantic segmentation

Hi, when I run semantic_suma on my computer, the semantic segmentation result is not the same as in your demo. This is the result I get:
[screenshot]
compared with yours:
[screenshot]

I used darknet53.

Failed to initialize GLEW on AGX

I am trying to run suma++ on NVIDIA AGX development toolkit. I am getting the following error when I try to run the visualizer:

QEGLPlatformContext: Failed to create context: 3009
OpenGL Context Version 2.0 compatibility profile
Missing GL version
terminate called after throwing an instance of 'std::runtime_error'
what(): Failed to initialize GLEW.
Aborted (core dumped)

GLEW is not initialized, could you provide a solution? I am new to using OpenGL.

Debug OpenGL

Is there any method to debug the code for OpenGL, like printing in OpenGL ?

Evaluation Metric

Hello,

I want to ask about something related to the quantitative results in the paper (http://www.ipb.uni-bonn.de/wp-content/papercite-data/pdf/chen2019iros.pdf).

How are the numbers in [TABLE I: Results on KITTI Road dataset] calculated? I can see there is a devkit available at KITTI website (http://www.cvlibs.net/datasets/kitti/eval_odometry.php). However, this devkit generates a file for each kitti sequence containing error values for subsequences of different lengths. So, is the overall error for a sequence just the simple average of these values in the error file?

The build step

Recently I have been trying to run a test with SuMa, but it's a little bit hard for me to build.

Is there any video or detailed tutorial about the build steps?

Can't visualize on my own dataset

Hi, thank you for sharing the source code, but I have a problem.
I've converted my dataset into .bin files, but when I run it, the visualization doesn't show anything, as in the screenshot below. What's the matter?
[screenshot]

the visualizer has no PointCloud!

When I run ./visualizer in the terminal and then select a ".bin" file, I get this:

$ ./visualizer 
OpenGL Context Version 4.5 core profile
GLEW initialized.
OpenGL context version: 4.5
OpenGL vendor string  : NVIDIA Corporation
OpenGL renderer string: GeForce GTX 1050/PCIe/SSE2
Extracting surfel maps partially.
Performing frame-to-model matching.
Setting verbosity to: false
Trying to open model
Trying to deserialize previously stored: /media/raymond/17354422509/suma++_ws/src/semantic_suma/config/darknet53/model.trt
Successfully found TensorRT engine file /media/raymond/17354422509/suma++_ws/src/semantic_suma/config/darknet53/model.trt
Successfully created inference runtime
No DLA selected.
Successfully allocated 426755792 for model.
Successfully read 426755792 to modelmem.
Created engine!
Successfully deserialized Engine from trt file
Binding: 0, type: 0
[Dim 5][Dim 64][Dim 2048]
Binding: 1, type: 0
[Dim 20][Dim 64][Dim 2048]
Successfully create binding buffer
calibration filename: /media/raymond/17354422509/suma++_ws/src/semantic_suma/bin/calib.txt...loaded.
ground truth filename: /media/raymond/17354422509/suma++_ws/src/poses/bin.txt
1101 poses read.
Performing frame-to-model matching. 

The visualizer shows no point cloud! How can I fix this?

ERROR: could not create engine from ONNX. Aborted (core dumped)

When I run ./visualizer in the terminal and then select a ".bin" file, it throws this error:

$ ./visualizer 
OpenGL Context Version 4.5 core profile
GLEW initialized.
OpenGL context version: 4.5
OpenGL vendor string  : NVIDIA Corporation
OpenGL renderer string: GeForce GTX 1050/PCIe/SSE2
Extracting surfel maps partially.
Performing frame-to-model matching.
Setting verbosity to: false
Trying to open model
Trying to deserialize previously stored: /media/raymond/17354422509/suma++_ws/src/semantic_suma/config/darknet53/model.trt
Successfully found TensorRT engine file /media/raymond/17354422509/suma++_ws/src/semantic_suma/config/darknet53/model.trt
Successfully created inference runtime
No DLA selected.
Successfully allocated 426755792 for model.
Successfully read 426755792 to modelmem.
Could not deserialize TensorRT engine. 
Generating from sratch... This may take a while...
Trying to generate trt engine from : /media/raymond/17354422509/suma++_ws/src/semantic_suma/config/darknet53/model.onnx
Platform DOESN'T HAVE fp16 support.
No DLA selected.
----------------------------------------------------------------
Input filename:   /media/raymond/17354422509/suma++_ws/src/semantic_suma/config/darknet53/model.onnx
ONNX IR version:  0.0.4
Opset version:    9
Producer name:    pytorch
Producer version: 1.1
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
----- Parsing of ONNX model /media/raymond/17354422509/suma++_ws/src/semantic_suma/config/darknet53/model.onnx is Done ---- 
Success picking up ONNX model
Failure creating engine from ONNX model
Current trial size is 8589934592
Failure creating engine from ONNX model
Current trial size is 4294967296
Failure creating engine from ONNX model
Current trial size is 2147483648
Failure creating engine from ONNX model
Current trial size is 1073741824
Failure creating engine from ONNX model
Current trial size is 536870912
Failure creating engine from ONNX model
Current trial size is 268435456
Failure creating engine from ONNX model
Current trial size is 134217728
Failure creating engine from ONNX model
Current trial size is 67108864
Failure creating engine from ONNX model
Current trial size is 33554432
Failure creating engine from ONNX model
Current trial size is 16777216
Failure creating engine from ONNX model
Current trial size is 8388608
Failure creating engine from ONNX model
Current trial size is 4194304
Failure creating engine from ONNX model
Current trial size is 2097152
Failure creating engine from ONNX model
Current trial size is 1048576
terminate called after throwing an instance of 'std::runtime_error'
 what():  ERROR: could not create engine from ONNX.
Aborted (core dumped)

Waiting for your reply! thanks!

Confusion Matrix

Is there any method to get the confusion matrix of that pre-trained model?

Does loop use OverLapNet?

Thanks for your great work. PRBonn/OverlapNet points out that OverlapNet is used in SuMa++, but I don't find OverlapNet being used in SurfelMapping::checkLoopClosure(). It seems to detect loop closures based on the residual.

Understanding semantic label fusion

Hi,

Thanks for open-sourcing this great piece of work!

After reading your paper and code, I have some questions regarding your method for fusing the semantic labels into the surfel map. From what I see, you are generating new surfels using the most likely class and probability from Rangenet. According to my understanding, every surfel you generate seems to have a fixed class from the beginning with the probability dropping if it is merged with measurements from a different class, but the class not changing. I would like to verify if this understanding is correct?

Also, do you have any current use for the probability attached to each surfel? I might have overlooked it, but it does not seem to be used in the code for deciding the surfel stability or maintaining the surfel map.

Hope to hear from you!

build error

Hi, @Chen-Xieyuanli
Thanks for your code~
I tried it on my computer two months ago and it worked normally. Now I am trying to run it on a Xavier, but some errors occur. I wonder whether this is related to the Xavier I used, since it is different from a common computer. Can this project be used on a Xavier?

[screenshot]

Thanks very much.

CMake Error when compiling...

Hi,
When I compile the project, I receive the following errors:

......
[semantic_suma:cmake] -- Using OpenGL version 450.                                                                                                
[semantic_suma:cmake] -- Configuring done                                                                                                         
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:175 (add_library):                                 
[semantic_suma:cmake]   Target "suma" links to target "Boost::serialization" but the target was not                                               
[semantic_suma:cmake]   found.  Perhaps a find_package() call is missing for an IMPORTED target, or                                               
[semantic_suma:cmake]   an ALIAS target is missing?                                                                                               
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:175 (add_library):                                 
[semantic_suma:cmake]   Target "suma" links to target "Boost::thread" but the target was not found.                                               
[semantic_suma:cmake]   Perhaps a find_package() call is missing for an IMPORTED target, or an                                                    
[semantic_suma:cmake]   ALIAS target is missing?                                                                                                  
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:175 (add_library):                                 
[semantic_suma:cmake]   Target "suma" links to target "Boost::date_time" but the target was not                                                   
[semantic_suma:cmake]   found.  Perhaps a find_package() call is missing for an IMPORTED target, or                                               
[semantic_suma:cmake]   an ALIAS target is missing?                                                                                               
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:175 (add_library):                                 
[semantic_suma:cmake]   Target "suma" links to target "Boost::regex" but the target was not found.                                                
[semantic_suma:cmake]   Perhaps a find_package() call is missing for an IMPORTED target, or an                                                    
[semantic_suma:cmake]   ALIAS target is missing?                                                                                                  
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:175 (add_library):                                 
[semantic_suma:cmake]   Target "suma" links to target "Boost::timer" but the target was not found.                                                
[semantic_suma:cmake]   Perhaps a find_package() call is missing for an IMPORTED target, or an                                                    
[semantic_suma:cmake]   ALIAS target is missing?                                                                                                  
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:175 (add_library):                                 
[semantic_suma:cmake]   Target "suma" links to target "Boost::chrono" but the target was not found.                                               
[semantic_suma:cmake]   Perhaps a find_package() call is missing for an IMPORTED target, or an                                                    
[semantic_suma:cmake]   ALIAS target is missing?                                                                                                  
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:156 (add_library):                                 
[semantic_suma:cmake]   Target "robovision" links to target "Boost::serialization" but the target                                                 
[semantic_suma:cmake]   was not found.  Perhaps a find_package() call is missing for an IMPORTED                                                  
[semantic_suma:cmake]   target, or an ALIAS target is missing?                                                                                    
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:156 (add_library):                                 
[semantic_suma:cmake]   Target "robovision" links to target "Boost::thread" but the target was not                                                
[semantic_suma:cmake]   found.  Perhaps a find_package() call is missing for an IMPORTED target, or                                               
[semantic_suma:cmake]   an ALIAS target is missing?                                                                                               
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:156 (add_library):                                 
[semantic_suma:cmake]   Target "robovision" links to target "Boost::date_time" but the target was                                                 
[semantic_suma:cmake]   not found.  Perhaps a find_package() call is missing for an IMPORTED                                                      
[semantic_suma:cmake]   target, or an ALIAS target is missing?                                                                                    
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:156 (add_library):                                 
[semantic_suma:cmake]   Target "robovision" links to target "Boost::regex" but the target was not                                                 
[semantic_suma:cmake]   found.  Perhaps a find_package() call is missing for an IMPORTED target, or                                               
[semantic_suma:cmake]   an ALIAS target is missing?                                                                                               
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:156 (add_library):                                 
[semantic_suma:cmake]   Target "robovision" links to target "Boost::timer" but the target was not                                                 
[semantic_suma:cmake]   found.  Perhaps a find_package() call is missing for an IMPORTED target, or                                               
[semantic_suma:cmake]   an ALIAS target is missing?                                                                                               
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:156 (add_library):                                 
[semantic_suma:cmake]   Target "robovision" links to target "Boost::chrono" but the target was not                                                
[semantic_suma:cmake]   found.  Perhaps a find_package() call is missing for an IMPORTED target, or                                               
[semantic_suma:cmake]   an ALIAS target is missing?                                                                                               
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:186 (add_executable):                              
[semantic_suma:cmake]   Target "visualizer" links to target "Boost::serialization" but the target                                                 
[semantic_suma:cmake]   was not found.  Perhaps a find_package() call is missing for an IMPORTED                                                  
[semantic_suma:cmake]   target, or an ALIAS target is missing?                                                                                    
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:186 (add_executable):                              
[semantic_suma:cmake]   Target "visualizer" links to target "Boost::thread" but the target was not                                                
[semantic_suma:cmake]   found.  Perhaps a find_package() call is missing for an IMPORTED target, or                                               
[semantic_suma:cmake]   an ALIAS target is missing?                                                                                               
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:186 (add_executable):                              
[semantic_suma:cmake]   Target "visualizer" links to target "Boost::date_time" but the target was                                                 
[semantic_suma:cmake]   not found.  Perhaps a find_package() call is missing for an IMPORTED                                                      
[semantic_suma:cmake]   target, or an ALIAS target is missing?                                                                                    
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:186 (add_executable):                              
[semantic_suma:cmake]   Target "visualizer" links to target "Boost::regex" but the target was not                                                 
[semantic_suma:cmake]   found.  Perhaps a find_package() call is missing for an IMPORTED target, or                                               
[semantic_suma:cmake]   an ALIAS target is missing?                                                                                               
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:186 (add_executable):                              
[semantic_suma:cmake]   Target "visualizer" links to target "Boost::timer" but the target was not                                                 
[semantic_suma:cmake]   found.  Perhaps a find_package() call is missing for an IMPORTED target, or                                               
[semantic_suma:cmake]   an ALIAS target is missing?                                                                                               
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] CMake Error at /home/dlr/Project/suma++/src/semantic_suma/CMakeLists.txt:186 (add_executable):                              
[semantic_suma:cmake]   Target "visualizer" links to target "Boost::chrono" but the target was not                                                
[semantic_suma:cmake]   found.  Perhaps a find_package() call is missing for an IMPORTED target, or                                               
[semantic_suma:cmake]   an ALIAS target is missing?                                                                                               
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake]                                                                                                                             
[semantic_suma:cmake] -- Generating done                                                                                                          
[semantic_suma:cmake] -- Build files have been written to: /home/dlr/Project/suma++/build/semantic_suma                                           
Failed    <<< semantic_suma                [ 6.7 seconds ]                                                                                       
[build] Summary: 3 of 4 packages succeeded.                                                                                                      
[build]   Ignored:   None.                                                                                                                       
[build]   Warnings:  2 packages succeeded with warnings.                                                                                         
[build]   Abandoned: None.                                                                                                                       
[build]   Failed:    1 packages failed.                                                                                                          
[build] Runtime: 1 minute and 17.7 seconds total.               

What's more, here is my environment:

Ubuntu 5.4.0-6ubuntu1~16.04.12
cmake 3.8.1
CUDA 10.0.130  cudnn 7.5.1
Geforce GTX 1070

Boost version: 1.58.0

GLX version: 1.4
OpenGL core profile version string: 4.5.0 NVIDIA 440.44
OpenGL core profile shading language version string: 4.50 NVIDIA
OpenGL version string: 4.6.0 NVIDIA 440.44
OpenGL shading language version string: 4.60 NVIDIA
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 440.44
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
    GL_EXT_shader_group_vote, GL_EXT_shader_implicit_conversions, 

$ dpkg -l | grep TensorRT
ii  libnvinfer-dev                                              5.1.5-1+cuda10.0                                      amd64        TensorRT development libraries and headers
ii  libnvinfer-samples                                          5.1.5-1+cuda10.0                                      all          TensorRT samples and documentation
ii  libnvinfer5                                                 5.1.5-1+cuda10.0                                      amd64        TensorRT runtime libraries
ii  python3-libnvinfer                                          5.1.5-1+cuda10.0                                      amd64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                                      5.1.5-1+cuda10.0                                      amd64        Python 3 development package for TensorRT
ii  tensorrt                                                    5.1.5.0-1+cuda10.0                                    amd64        Meta package of TensorRT

How can I solve it?

Thanks a lot~

Geo-referenced Map

Hi :) Thanks for the nice work.

As far as I can see, you are only using the point cloud to estimate the odometry. I would like to extend this estimate by using GPS (as available in KITTI). From that I hope to get a map that is georeferenced, so that I can compare it to other localization algorithms online. Or is there another way to georeference the surfel map that is delivered as the result of the SLAM?

Another question: can a residual also be added to the optimization to use landmarks?

Thanks again for the great work! Looking forward to your answers!
Best regards,
Sven

OPEN_GL and NVIDIA

Hi! I'm really interested in your SuMa and SuMa++, and I'm trying to run them on my computer.
The build was successful. Thank you for your posts.
But there is a problem with the visualizer.
I think this problem is caused by my computer (which has no dedicated graphics card).
I want to ask whether my guess is right.

$ glxinfo | grep "version"
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
    Max core profile version: 4.6
    Max compat profile version: 3.0
    Max GLES1 profile version: 1.1
    Max GLES[23] profile version: 3.2
OpenGL core profile version string: 4.6 (Core Profile) Mesa 20.0.8
OpenGL core profile shading language version string: 4.60
OpenGL version string: 3.0 Mesa 20.0.8
OpenGL shading language version string: 1.30
OpenGL ES profile version string: OpenGL ES 3.2 Mesa 20.0.8
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
    GL_EXT_shader_implicit_conversions, GL_EXT_shader_integer_mix,

Here is the result of version check and

~/catkin_ws/src/SuMa/bin$ ./visualizer 
OpenGL Context Version 4.6 core profile
GLEW initialized.
OpenGL context version: 4.6
OpenGL vendor string  : Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) HD Graphics 530 (SKL GT2)
Segmentation fault (core dumped)

This is the error message.
I didn't try SuMa++; this problem is related to SuMa, which is uploaded at github.com/PRBonn/SuMa, but there was no issue link there, so I'm asking here.

Thank you.
