
This is a monocular dense mapping system corresponding to IROS 2018 "Quadtree-accelerated Real-time Monocular Dense Mapping"

License: GNU General Public License v3.0

CMake 26.45% C++ 27.53% Cuda 46.02%
robotics cv aerial-robotics

open_quadtree_mapping's Introduction

QuadtreeMapping

A Real-time Monocular Dense Mapping System

This is a monocular dense mapping system following the IROS 2018 paper Quadtree-accelerated Real-time Monocular Dense Mapping by Kaixuan Wang, Wenchao Ding, and Shaojie Shen.

Given a localized monocular camera, the system can generate dense depth maps in real-time on portable devices. The generated depth maps can be used to reconstruct the environment or for UAV autonomous flight. An example of real-time reconstruction is shown in the mapping example image.

The red line is the camera trajectory.

A video illustrates the pipeline and the performance of our system:

video

We would like to thank rpg_open_remode for their open-source work. The project inspired us, and its system architecture helped us build QuadtreeMapping.

Please note that, in this system, the depth value is defined as the Euclidean distance instead of the z value. For example, if a point is (x, y, z) in camera coordinates, its depth value is sqrt(x^2 + y^2 + z^2).
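As an illustration of this depth convention, here is a small Python sketch (illustrative only, not from the codebase) that computes the Euclidean depth and converts a conventional z-depth at a pixel into it using pinhole intrinsics:

```python
import math

def euclidean_depth(x, y, z):
    """Depth as defined by QuadtreeMapping: Euclidean distance to the point."""
    return math.sqrt(x * x + y * y + z * z)

def z_to_euclidean(u, v, z, fx, fy, cx, cy):
    """Convert a conventional z-depth at pixel (u, v) to Euclidean depth,
    given pinhole intrinsics (fx, fy, cx, cy)."""
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    return euclidean_depth(x, y, z)
```

At the principal point the two conventions agree; away from it, Euclidean depth is strictly larger than z.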

This branch uses OpenCV entirely on the CPU. If you need to speed up the image undistortion step using OpenCV with CUDA, please check out the master branch.

1.0 Prerequisites

  • Ubuntu and ROS

Both Ubuntu 16.04 with ROS Kinetic and Ubuntu 14.04 with ROS Indigo are OK.

  • CUDA

The system uses the GPU to parallelize most of the computation. You don't need a powerful GPU to run the code, but it must be an NVIDIA GPU that supports CUDA. We use CUDA 8.0 to run the system; CUDA 9.0 has not been tested yet.

  • OpenCV

The OpenCV that comes with ROS is ok.

2.0 Install

Since the GPU device varies from machine to machine, CMakeLists.txt needs to be changed accordingly.

cd ~/catkin_ws/src
git clone https://github.com/HKUST-Aerial-Robotics/open_quadtree_mapping.git
cd open_quadtree_mapping

Now open CMakeLists.txt.

First, change the compute capability in lines 11 and 12 according to your device. The default value is 61, which works for the NVIDIA TITAN Xp and similar GPUs. The compute capability of your device can be found on Wikipedia.

After changing CMakeLists.txt, you can compile QuadtreeMapping:

cd ~/catkin_ws
catkin_make

3.0 Parameters

Before running the system, please take a look at the parameters in the launch/example.launch.

  • cam_width, cam_height, cam_fx, cam_cx, cam_fy, cam_cy, cam_k1, cam_k2, cam_r1, cam_r2 are the camera intrinsic parameters. We use the pinhole model.
  • downsample_factor is used to resize the image. The estimated depth maps have a size of (cam_width × downsample_factor) x (cam_height × downsample_factor). This factor is useful if you want to run QuadtreeMapping on platforms with limited resources, for example a Jetson TX1.
  • semi2dense_ratio controls the output depth map frequency and must be an integer. If the input camera-pose pairs arrive at 30 Hz and you only need 10 Hz depth estimation, set semi2dense_ratio to 3. High-frequency camera-pose input works better than low-frequency input, even if you only want low-frequency depth estimation.
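As an illustrative sketch of how these parameters interact (the helper names are hypothetical, not part of the codebase; scaling the intrinsics with the image is our assumption of standard practice):

```python
def scaled_camera(width, height, fx, fy, cx, cy, downsample_factor):
    """Size and (assumed) intrinsics of the depth map after downsampling."""
    s = downsample_factor
    return (int(width * s), int(height * s), fx * s, fy * s, cx * s, cy * s)

def output_rate(input_hz, semi2dense_ratio):
    """A depth map is produced once every semi2dense_ratio image-pose pairs."""
    return input_hz / semi2dense_ratio
```

For example, a 752x480 camera with downsample_factor 0.5 yields 376x240 depth maps, and 30 Hz input with semi2dense_ratio 3 yields 10 Hz depth maps.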

4.0 Run QuadtreeMapping

The input of QuadtreeMapping is a synchronized Image (sensor_msgs::Image) and Pose (geometry_msgs::PoseStamped). Make sure the ROS messages have the correct types and the timestamps are the same. Images and poses at different frequencies are fine; for example, the system will filter 30 Hz images and 10 Hz poses into 10 Hz image-pose pairs as input.
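The pairing behavior described above can be sketched as follows (a simplified stand-in for the actual ROS synchronization; the function and tolerance are assumptions, not the node's code):

```python
def pair_by_timestamp(image_stamps, pose_stamps, tol=1e-3):
    """Keep only images that have a pose with (nearly) the same timestamp.
    E.g. 30 Hz images and 10 Hz poses yield 10 Hz image-pose pairs."""
    pairs = []
    poses = sorted(pose_stamps)
    for t_img in sorted(image_stamps):
        # find the nearest pose timestamp for this image
        t_pose = min(poses, key=lambda t: abs(t - t_img))
        if abs(t_pose - t_img) <= tol:
            pairs.append((t_img, t_pose))
    return pairs
```

Images without a matching pose are simply dropped, which is why the output rate follows the lower-frequency stream.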

4.1 Run the example

We provide an example of a hand-held camera navigating in a garden, together with a link to download the bag. The ego-motion is estimated using VINS-MONO.

To run the example, just

roslaunch open_quadtree_mapping example.launch

and play the bag in another terminal

rosbag play example.bag

The results are published as:

  • /open_quadtree_mapping/depth : estimated depth for each pixel; invalid depths are filled with zeros,
  • /open_quadtree_mapping/color_depth : color-coded depth maps for visualization; invalid depths are red,
  • /open_quadtree_mapping/debug : color-coded depth of pixels before depth interpolation,
  • /open_quadtree_mapping/reference : undistorted intensity image,
  • /open_quadtree_mapping/pointcloud : point cloud from the current undistorted intensity image and the extracted depth map.
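Since the published depth values are Euclidean distances rather than z values, back-projecting a depth map to a point cloud must scale a unit ray through each pixel instead of setting z directly. A minimal sketch in plain Python (illustrative only, not the node's implementation):

```python
import math

def depth_map_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (rows of Euclidean distances, 0 = invalid,
    as on /open_quadtree_mapping/depth) to 3-D points in the camera frame."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d <= 0.0:          # invalid pixels are filled with zeros
                continue
            # unit-normalize the ray through pixel (u, v), then scale by d
            rx = (u - cx) / fx
            ry = (v - cy) / fy
            norm = math.sqrt(rx * rx + ry * ry + 1.0)
            points.append((rx / norm * d, ry / norm * d, d / norm))
    return points
```

At the principal point the ray is (0, 0, 1), so the point's z equals the depth; elsewhere z is smaller than the depth.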

4.2 Run with other datasets or run live

To run with other data, you can modify the launch file according to your settings. To get good results, a few things should be noted:

  • Good ego-motion estimation is required. The ego-motion should be precise and in metric scale. We recommend using VINS-MONO to estimate the camera motion. Visual odometry systems like ORB-SLAM cannot be used directly unless the scale information is recovered.
  • Rotation is not good for the system: it reduces the number of frames QuadtreeMapping can use to estimate the depth map.
  • A good camera is required. A good choice is an industrial camera with a global shutter, set to a fixed exposure time. Also, images should have balanced contrast; too bright or too dark is not good.

5.0 Fuse into a global map

QuadtreeMapping publishes depth maps and the corresponding intensity images. You can fuse them using the tool you like. We use a modified OpenChisel for 3D reconstruction and a GPU-based TSDF to support autonomous flight.
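The modified OpenChisel is not included here, but the core of any TSDF fusion is a per-voxel weighted-average update. A minimal, generic sketch (our assumption of a standard TSDF update rule, not the authors' code):

```python
def tsdf_update(tsdf, weight, sdf_obs, trunc, w_obs=1.0, w_max=100.0):
    """Weighted-average update of one voxel's truncated signed distance.
    sdf_obs is the observed signed distance from the voxel to the surface."""
    # truncate the observation to [-trunc, trunc], then normalize to [-1, 1]
    d = max(-trunc, min(trunc, sdf_obs)) / trunc
    new_w = min(weight + w_obs, w_max)
    new_tsdf = (tsdf * weight + d * w_obs) / (weight + w_obs)
    return new_tsdf, new_w
```

Repeated observations average out depth noise, which is why fusing many per-frame depth maps yields a cleaner global model than any single map.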

6.0 Future update

  • The modified OpenChisel will be open-sourced soon.

open_quadtree_mapping's People

Contributors

alexanderkoumis, hlx1996, wang-kx


open_quadtree_mapping's Issues

Modified Open Chisel

Hello,

Thank you for your amazing work. I can run the demo bag without any issue. Now I want to try to create the global map. Can you share the modified open chisel?

Thanks,
Leon

a catkin_make error

ROS kinetic
CUDA 8.0
I have this error:

/home/zhw/catkin_ws/src/open_quadtree_mapping/src/check_cuda_device.cu:21:41: fatal error: quadmap/check_cuda_device.cuh: No such file or directory
compilation terminated.
/home/zhw/catkin_ws/src/open_quadtree_mapping/src/seed_matrix.cu:1:35: fatal error: quadmap/seed_matrix.cuh: No such file or directory
compilation terminated.
CMake Error at CMakeFiles/open_quadtree_mapping_node_generated_seed_matrix.cu.o.cmake:215 (message):
Error generating
/home/zhw/catkin_ws/build/open_quadtree_mapping/./open_quadtree_mapping_node_generated_seed_matrix.cu.o

CMake Error at CMakeFiles/open_quadtree_mapping_node_generated_check_cuda_device.cu.o.cmake:215 (message):
Error generating
/home/zhw/catkin_ws/build/open_quadtree_mapping/./open_quadtree_mapping_node_generated_check_cuda_device.cu.o

open_quadtree_mapping/CMakeFiles/open_quadtree_mapping_node.dir/build.make:62: recipe for target 'open_quadtree_mapping/open_quadtree_mapping_node_generated_check_cuda_device.cu.o' failed
make[2]: *** [open_quadtree_mapping/open_quadtree_mapping_node_generated_check_cuda_device.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs....
open_quadtree_mapping/CMakeFiles/open_quadtree_mapping_node.dir/build.make:68: recipe for target 'open_quadtree_mapping/open_quadtree_mapping_node_generated_seed_matrix.cu.o' failed
make[2]: *** [open_quadtree_mapping/open_quadtree_mapping_node_generated_seed_matrix.cu.o] Error 1
CMakeFiles/Makefile2:7540: recipe for target 'open_quadtree_mapping/CMakeFiles/open_quadtree_mapping_node.dir/all' failed
make[1]: *** [open_quadtree_mapping/CMakeFiles/open_quadtree_mapping_node.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....

How can I solve it? Thank you!

Something wrong happens when compiling the master-branch program.

Dear Kaixuan,
I've read your paper and the program; thank you for your excellent work!
Something goes wrong when I compile the master-branch program, as in this screenshot:
[screenshot 2020-05-27 17-45-54]
This happens under both CUDA 8.0 and CUDA 10.0, with Ubuntu 16.04 and ROS Kinetic.
There's no error when I try the no_cudaopencv branch; I can run the program with your dataset and view the depth image, but there seem to be delays between the raw image and the depth image, which is why I want to try the master-branch program.
Could you please give me some advice on solving this problem, like some adjustments to the code? Looking forward to your reply.
Best regards,
Zhang Bohua

problem understanding the age_table

Hi, I have a problem understanding the code in the device kernel function image_to_cost:
the age_table_devptr is used, but it seems it is never updated.
Also, what is the purpose of
const int my_frame_id = (float) this_age / 10.0 * (float) frame_id;
As I understand it, frame_id refers to one of the previous 5 frames used for cost computation.

Can't read in

I use VINS to publish the two topics, but the system is not starting.
I have modified the parameters.

Connection error

When I try to connect to VINS-Mono, I get this error:
[ERROR] [1545746670.569675978]: Client [/open_quadtree_mapping] wants topic /vins_estimator/camera_pose to have datatype/md5sum [geometry_msgs/PoseStamped/d3812c3cbc69362b77dc0b19b345f8f5], but our version has [nav_msgs/Odometry/cd5e73d190d741a2f92e81eda573aca7]. Dropping connection.
What is the problem here?

md5sum error

Running OpenChisel on a machine with ROS Indigo and trying to subscribe to the topic /Chisel/full_mesh from a machine with ROS Kinetic, I got an error message like:
wants topic /Chisel/full_mesh to have datatype/md5sum [visualization_msgs/Marker/4048c9de2a16f4ae8e0538085ebf1b97], but our version has [visualization_msgs/Marker/18326976df9d29249efc939e00342cde]. Dropping connection.
Is there any way to communicate between different distributions of ROS? Or can I use any other map fusion on Ubuntu 16.04 with ROS Kinetic?

Questions about running MH_01_easy.bag

Hello, I have combined vins_estimator with open_quadtree_mapping, and the data type problem has been solved. But when I play MH_01_easy.bag, I find that the depth map is black for a long time, and the point cloud barely exists (only a few points). What is going on?

Screenshot from 2019-05-31 09-57-01
Screenshot from 2019-05-31 09-59-25

Will it work with fisheye images?

First, thanks very much for sharing this wonderful code.
My images are collected with a fisheye camera which has a large field of view (around 150 degrees).
Will this work well with it, theoretically?

System didn't work on my bag (just several depth images over a long period)

Hi!
Thanks for your great work in advance. I've encountered a problem: the system does not seem to work well when I test it on my bag.
[screenshot: quad]
Here's a shot of the situation. According to some previous issues, I checked my bag, and the camera pose and image frequencies are accurate (vins_estimator/camera_pose at 18-20 Hz and mv_26801267/image_raw at 40 Hz).

I think the camera pose from vins_fusion is accurate enough, since the path is aligned to the ground truth. There is not much camera rotation in my bag, and the image resolution is 752 x 480, which should be enough (maybe?).
But I get only 2~3 depth images through the whole bag (shown above)! I don't know what's wrong. Any suggestions?

No_cuda branch also uses CUDA and gives an error

Running executable: /home/control/catkin_ws/devel/lib/open_quadtree_mapping/open_quadtree_mapping_node
Checking available CUDA-capable devices...
1 CUDA-capable GPU detected:
Device 0 - NVIDIA GeForce GT 710
Using GPU device 0: "NVIDIA GeForce GT 710" with compute capability 3.5
GPU device 0 has 1 Multi-Processors, SM 3.5 compute capabilities

read : width 752 height 480
has success set cuda remap.
inremap_2itial the seed (752 x 480) fx: 365.370941, fy: 365.075897, cx: 369.647461, cy: 240.951813.
 initial the publisher ! 



cuda prepare the image cost 9.254000 ms 
till add image cost 8.382000 ms 
terminate called after throwing an instance of 'quadmap::CudaException'
  what():  
[open_quadtree_mapping-2] process has died [pid 7242, exit code -6, cmd /home/control/catkin_ws/devel/lib/open_quadtree_mapping/open_quadtree_mapping_node ~image:=/mv_25001498/image_raw ~posestamped:=/vins_estimator/camera_pose __name:=open_quadtree_mapping __log:=/home/control/.ros/log/879c991a-c5a6-11ec-b961-04421a07c6b1/open_quadtree_mapping-2.log].
log file: /home/control/.ros/log/879c991a-c5a6-11ec-b961-04421a07c6b1/open_quadtree_mapping-2*.log
^C[rosout-1] killing on exit
[master] killing on exit
^Cshutting down processing monitor...
... shutting down processing monitor complete
done

Please help me to solve this
Thank You

More data for test, please

Could you please provide more bags for us to test? For example, fast camera motion, moving obstacles, and so on.
Thanks.

zj

CPU version

Dear Wang,

Thanks for sharing. Do you have a CPU version of the dense mapping in this work, or do you know of any related works?
Thanks very much.

Mapping

Hi,

I'm very new to this stuff, so sorry for maybe dumb questions. I don't know where the results for the depth map are saved. I started the example launch with the bag file and it worked, but I don't know how to continue. Where are the results saved, and how can I do a 3D reconstruction from them? Can I use OpenChisel for that, or do I need the modified one?

Parameter tuning towards better quality

Hi,
Thank you so much for sharing this awesome code.
I wonder if you have any suggestions on how to tune the parameters (both in the launch file and in stereo_parameter.cuh) to obtain higher-quality point clouds when running on a powerful machine.

Currently the output is kind of wavy and very dependent on the motion.

Thank you very much in advance.
Chang

example.bag download

Can the link for example.bag be accessed from mainland China? I tried, but failed.

How to get more depth estimates

When I used this program for depth estimation, only a small part of the pixels in my image had their depth successfully estimated. I want to know which thresholds can be adjusted to estimate depth for more pixels. I don't need high precision, just more pixels with depth estimates.

I want to test my own package, but the system doesn't seem to work.

Hello, I have a set of autonomous-driving datasets, and I got a series of poses with stereo DSO. Then I put the poses and raw images into the same data structure as yours, as shown below:
path: DsoPose_grayImg.bag
version: 2.0
duration: 2:32s (152s)
start: Jul 23 2018 11:31:59.97 (1532316719.97)
end: Jul 23 2018 11:34:32.58 (1532316872.58)
size: 4.1 GB
messages: 5141
compression: none [4833/4833 chunks]
types: geometry_msgs/PoseStamped [d3812c3cbc69362b77dc0b19b345f8f5]
sensor_msgs/Image [060021388200f6f0f447d0fcd9c64743]
topics: /mv_25001498/image_raw 4832 msgs : sensor_msgs/Image
/vins_estimator/camera_pose 309 msgs : geometry_msgs/PoseStamped

I changed fx, fy, cx, cy, k1, k2, ... in the example.launch file accordingly.

The results are as follows:
open_quadtree_mapping_node
Checking available CUDA-capable devices...
1 CUDA-capable GPU detected:
Device 0 - GeForce 940MX
Using GPU device 0: "GeForce 940MX" with compute capability 5.0
GPU device 0 has 3 Multi-Processors, SM 5.0 compute capabilities

read : width 1280 height 720
has success set cuda remap.
inremap_2itial the seed (1280 x 720) fx: 1286.568604, fy: 1286.568604, cx: 634.927002, cy: 404.273163.
initial the publisher !

cuda prepare the image cost 8.929000 ms
till add image cost 6.984000 ms
initialize keyframe cost 3.597000 ms
depthimage min max of the depth: 0 , 0
INFO: publishing depth map

cuda prepare the image cost 6.542000 ms
till add image cost 2.190000 ms
till all semidense cost 40.039000 ms
till all full dense cost 0.001000 ms
till all end cost 0.538000 ms

It seems that only two frames have output results. Is there a problem with my operation? Or is the algorithm currently unable to deal with autonomous-driving road scenes? Is the scene moving too fast to find a match?

CUDA Tesla

Hi again,

I want to install Open Quadtree on a system with an NVIDIA 8400 GS, but catkin gives an error for compute capability 11.

It only works from 30 upward. Can I do something? Or is it not supported?

Best regards
Sebastian

Message type

I found that the message on the topic /vins_estimator/path has nearly the same datatype as the one in the example bag except for the frame id, so I modified the example.launch file to subscribe to this topic, but I still got this datatype error:
[ INFO] [1546090057.543096310]: td 0.048238
[ERROR] [1546090057.621738910]: Client [/open_quadtree_mapping] wants topic /vins_estimator/camera_pose to have datatype/md5sum [geometry_msgs/PoseStamped/d3812c3cbc69362b77dc0b19b345f8f5], but our version has [nav_msgs/Odometry/cd5e73d190d741a2f92e81eda573aca7]. Dropping connection.
[ INFO] [1546090057.646040675]: td 0.048237

Sensor Shutter Type

I was wondering: would results from a sensor with a rolling shutter be totally unusable, or just less accurate?

Paper?

Hi, I am interested in your outstanding work. Can you share your paper to give a scientific introduction?

No rule to make target '/usr/lib/x86_64-linux-gnu/libvtkproj4-6.2.so.6.2.0'

[ 85%] Building CXX object open_quadtree_mapping/CMakeFiles/open_quadtree_mapping_node.dir/src/main_ros.cpp.o

make[2]: *** No rule to make target '/usr/lib/x86_64-linux-gnu/libvtkproj4-6.2.so.6.2.0', needed by '/home/tq/a2_demo/dense_vins/devel/lib/open_quadtree_mapping/open_quadtree_mapping_node'. Stop.
make[2]: *** Waiting for unfinished jobs....
/home/tq/a2_demo/dense_vins/src/open_quadtree_mapping/src/depthmap.cpp: In constructor ‘quadmap::Depthmap::Depthmap(std::size_t, std::size_t, float, float, float, float, cv::Mat, cv::Mat, int)’:
/home/tq/a2_demo/dense_vins/src/open_quadtree_mapping/src/depthmap.cpp:39:110: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘std::size_t {aka long unsigned int}’ [-Wformat=]
d (%d x %d) fx: %f, fy: %f, cx: %f, cy: %f.\n", width, height, fx, fy, cx, cy);
^
/home/tq/a2_demo/dense_vins/src/open_quadtree_mapping/src/depthmap.cpp:39:110: warning: format ‘%d’ expects argument of type ‘int’, but argument 3 has type ‘std::size_t {aka long unsigned int}’ [-Wformat=]
CMakeFiles/Makefile2:938: recipe for target 'open_quadtree_mapping/CMakeFiles/open_quadtree_mapping_node.dir/all' failed
make[1]: *** [open_quadtree_mapping/CMakeFiles/open_quadtree_mapping_node.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j4 -l4" failed

Does this demo have a special requirement for OpenCV? I use OpenCV 3.4. How should I solve this problem?

How to generate the dense mapping

image
Your work is so great!
Is the picture above the result from your paper?
What tool was used to generate it?
Looking forward to your reply!
Thanks!!
