
Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects

Home Page: http://visual.cs.ucl.ac.uk/pubs/cofusion/index.html

License: GNU General Public License v3.0

CMake 2.72% C++ 70.58% Cuda 11.36% C 0.01% GLSL 11.81% Python 2.56% Shell 0.95%
slam visual-slam fusion segmentation tracking icra reconstruction rgbd rgbd-slam

co-fusion's Introduction

Co-Fusion

This repository contains Co-Fusion, a dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different objects.

Crucially, we use a multiple-model fitting approach where each object can move independently from the background and still be effectively tracked, with its shape fused over time using only the information from pixels associated with that object label. Previous attempts to deal with dynamic scenes have typically considered moving regions as outliers that are of no interest to the robot, and consequently do not model their shape or track their motion over time. In contrast, we enable the robot to maintain 3D models for each of the segmented objects and to improve them over time through fusion. As a result, our system enables a robot to maintain a scene description at the object level, which has the potential to allow interactions with its working environment, even in the case of dynamic scenes.

To run Co-Fusion in real-time, you have to use our approach based on motion cues. If you prefer to use semantic cues for segmentation, please pre-process the segmentation in advance and feed the resulting segmentation masks into Co-Fusion.
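For example, a hypothetical invocation feeding precomputed masks (all paths are placeholders; the flags and the default Mask####.png naming scheme are documented under the command line parameters below):

# Sketch: run on a pre-recorded sequence with precomputed segmentation masks
./CoFusion -dir /path/to/sequence -maskdir /path/to/masks -run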

More information and the paper can be found on the project page.

A short video comparing ElasticFusion and Co-Fusion is linked from the project page.

Publication

Please cite this publication when using Co-Fusion (BibTeX can be found on the project webpage):

  • Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects, Martin Rünz and Lourdes Agapito, 2017 IEEE International Conference on Robotics and Automation (ICRA)

Building Co-Fusion

The script Scripts/install.sh shows step-by-step how Co-Fusion is built. A Python-based install script is also available; see Scripts/install.py.
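For reference, a minimal sketch of the usual flow (the repository URL is assumed from the project name; the -a flag of install.py also appears in the issues below):

# Sketch: clone and build using the provided scripts
git clone https://github.com/martinruenz/co-fusion.git
cd co-fusion
./Scripts/install.sh
# or, alternatively, the Python-based script:
python3 Scripts/install.py -a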

Dataset and evaluation tools

Tools

Synthetic sequences:

Real (Asus Xtion) sequences, in klg format:

Hardware

In order to run Co-Fusion smoothly, you need a fast GPU with enough memory to store multiple models simultaneously. We used an Nvidia Titan X for most experiments, but also successfully tested Co-Fusion on a laptop with an Nvidia GeForce™ GTX 960M. If your GPU memory is limited, the COFUSION_NUM_SURFELS CMake option can help reduce the memory footprint per model. While the tracking stage of Co-Fusion calls for a fast GPU, the performance of the motion-based segmentation depends on the CPU, so a fast processor helps as well.
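For example, a minimal sketch of a configure step with a reduced surfel budget, assuming COFUSION_NUM_SURFELS takes the per-model surfel count (the default of 9437184 shows up in the log output quoted in the issues below):

# Sketch (assumption: COFUSION_NUM_SURFELS is the per-model surfel count;
# 4718592 halves the 9437184 default reported in the logs)
cmake -DCOFUSION_NUM_SURFELS=4718592 ..
make -j8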

Reformatting code:

The code-formatting rules for this project are defined in .clang-format. Run:

clang-format -i -style=file Core/**/*.cpp Core/**/*.h Core/**/*.hpp GUI/**/*.cpp GUI/**/*.h GUI/**/*.hpp
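Note that the ** patterns only recurse in shells with recursive globbing enabled; in bash, for example, enable it first:

shopt -s globstar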

ElasticFusion

The overall architecture and terminal interface of Co-Fusion are based on ElasticFusion, and the ElasticFusion readme file contains further useful information.

New command line parameters (see source-file; a combined usage sketch follows the list)

  • -run: Run dataset immediately (otherwise start paused).
  • -static: Disable multi-model fusion.
  • -confO: Initial surfel confidence threshold for objects (default 0.01).
  • -confG: Initial surfel confidence threshold for scene (default 10.00).
  • -segMinNew: Min size of new object segments (relative to image size)
  • -segMaxNew: Max size of new object segments (relative to image size)
  • -offset: Offset between creating models
  • -keep: Keep all models (even bad, deactivated)
  • -dir: Processes a log-directory (Default: Color####.png + Depth####.exr [+ Mask####.png])
  • -depthdir: Separate depth directory (==dir if not provided)
  • -maskdir: Separate mask directory (==dir if not provided)
  • -exportdir: Export results to this directory, otherwise not exported
  • -basedir: Treat the above paths relative to this one (like depthdir = basedir + depthdir, default "")
  • -colorprefix: Specify prefix of color files (=="" or =="Color" if not provided)
  • -depthprefix: Specify prefix of depth files (=="" or =="Depth" if not provided)
  • -maskprefix: Specify prefix of mask files (=="" or =="Mask" if not provided)
  • -indexW: Number of digits of the indexes (==4 if not provided)
  • -nm: Ignore Mask####.png images once the provided frame number is reached.
  • -es: Export segmentation
  • -ev: Export viewport images
  • -el: Export label images
  • -em: Export models (point-cloud)
  • -en: Export normal images
  • -ep: Export poses after finishing run (just before quitting if '-q')
  • -or: Outlier rejection strength (default 3).
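Combining several of the flags above, a hypothetical invocation (all paths and values are placeholders) could look like this:

# Sketch: process a log directory, export segmentation and poses, start immediately
./CoFusion -dir /data/sequence -exportdir /data/out -segMinNew 0.01 -es -ep -run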

Acknowledgements

This work has been supported by the SecondHands project, funded by the EU Horizon 2020 Research and Innovation programme under grant agreement No 643950.

co-fusion's People

Contributors

ahundt, algomorph, christian-rauch, martinruenz


co-fusion's Issues

Co-fusion build exception

Hello everyone.

I am getting an error when trying to run:
python3 install.py -a

The OS is Ubuntu 16.04.4 Xenial.

The only thing I have adjusted while trying to resolve the issue is that in install.py and install.sh I replaced every "make -j8" with "make -j6", because my CPU has 6 cores.

The error is the following :
no match for ‘operator=’ (operand types are ‘std::unique_ptr<pangolin::VideoInterface>’ and ‘pangolin::VideoInterface*’)

You can find a detailed log of the build here.

Any ideas why this exception is occurring?

Reproduce Co-Fusion paper result on car4 and room4

Hi,

I managed to build and run the software on Ubuntu 20.04. However, when I evaluate the results for car4 and room4, I find that the camera ATE RMSE is 5 cm and 14 cm respectively, which is much higher than the error reported in the paper. I used scripts from https://github.com/martinruenz/dataset-tools to evaluate the results for the RGB-D image sequences of car4 and room4.

I have tried both the default command line arguments and the example command line arguments from dataset tools by adding
-segMinNew 0.008
-crfSmooth 1
-crfPos 0.51
But neither of them gives the same results as reported in the paper.

Do you have any idea what could be the reason? Could you share the parameters that I can use to reproduce the result? Thanks!

cudaSafeCall() Runtime API error : out of memory.

xxsong5@xxsong5:~/co-fusion/build/GUI$ ./CoFusion -l /media/xxsong5/SAMSUNG/data_real_intel/outintel.klg
Reading log file: /media/xxsong5/SAMSUNG/data_real_intel/outintel.klg which has 3861 frames.
Initialised MainController. Frame resolution is set to: 640x480
Exporting results to: /media/xxsong5/SAMSUNG/data_real_intel/outintel.klg-export//
Created model with max number of vertices: 9437184
Initialised Multi-Object Fusion. Each model can have up to 9437184 surfel (TEXTURE_DIMENSION: 3072x3072).
New label detected (40,184 72,376) - try relocating...
Found new model.
Created model with max number of vertices: 9437184
Lost a model.
Deactivating model... keeping data. Surfels: 12935 confidence threshold: 0.496591
New label detected (40,24 88,136) - try relocating...
Found new model.
/home/xxsong5/co-fusion/Core/Cuda/containers/device_memory.cpp(203) : cudaSafeCall() Runtime API error : out of memory.
/home/xxsong5/co-fusion/Core/GPUTexture.cpp(79) : cudaSafeCall() Runtime API error : out of memory.

build error on Co-Fusion part

Hi
I followed the install.sh file to build co-fusion. However, in the final part, when running make -j8, I met this error:
[screenshot of the build error]

May I ask your advice on that? I am using Ubuntu 14, and the CUDA version is 8.0. For the other packages, I just followed the install.sh file. Many thanks.

Cheers,
Yanhao

Build error: error: unused parameter ‘cinfo’ [-Werror=unused-parameter]

Hi martinruenz,

When I used install.sh to build the co-fusion code, some errors were reported; they are listed below.
How can I solve them? Thanks in advance!

================

In file included from /home/tau/Project/co-fusion/GUI/Tools/LogReader.h:29:0,
from /home/tau/Project/co-fusion/GUI/Tools/PangolinReader.h:18,
from /home/tau/Project/co-fusion/GUI/Tools/PangolinReader.cpp:16:
/home/tau/Project/co-fusion/GUI/Tools/JPEGLoader.h:29:35: error: unused parameter ‘cinfo’ [-Werror=unused-parameter]
static void jpegFail(j_common_ptr cinfo) { assert(false && "JPEG decoding error!"); }
^
/home/tau/Project/co-fusion/GUI/Tools/KlgLogReader.cpp:128:33: error: unused parameter ‘value’ [-Werror=unused-parameter]
void KlgLogReader::setAuto(bool value) {}
^
In file included from /home/tau/Project/co-fusion/GUI/Tools/LiveLogReader.cpp:19:0:
/home/tau/Project/co-fusion/GUI/Tools/LiveLogReader.h:47:24: error: unused parameter ‘frame’ [-Werror=unused-parameter]
void fastForward(int frame) {}

Support for other RGB-D Hardware?

Hi,

I have a Kinect 1, but have yet to try your code.

It seems that the Asus Xtion was used in your paper and is officially supported.

The best Intel RealSense, the D435 model, is going for 180 USD and seems to have much better specs than the Asus or Kinect.

It would be awesome if Co-Fusion supported the Intel RealSense.

Any suggestion on how the Intel RealSense (or other sensors) should be supported?
I may purchase one and submit a PR.

Thanks!

Install on ubuntu 20

Hi, I want to install it on Ubuntu 20, but I suspect this version may be too new for the code. Is there a suggested way to make it work?

Any suggestions would be very much appreciated!

Frame and intrinsics

Hi (sorry for this third post),

As I want to use your work with different cameras, I would like to know which sensor (RGB or depth?) the intrinsic parameters correspond to. Moreover, I would like to confirm that the RGB and depth frames should be provided already rectified and registered.

Thank you

car4-noise.klg is corrupted

The linked log file car4-noise.klg (http://visual.cs.ucl.ac.uk/pubs/cofusion/data/car4-noise.klg) seems to be corrupted.

When reading it via CoFusion, it reports:

Not a JPEG file: starts with 0x76 0x57

while the other klg logs work correctly.

For reference:

$ md5sum *.klg
93a3c45b13da0b80658df4a60b428c2d  car4-noise.klg
68f5c04bc0136a5436dd6224d857e719  place-items.klg
a12b2e3d93dd805f958fb1dc99b1da0d  room4-noise.klg
80a0577908dd29378b2897a8da1a37b2  teddy-handover.klg

Can someone verify that the log car4-noise.klg works? If so, can you provide the MD5 hash sum of the working log?

Question about GPU memory

Hi, I used this project on my laptop with an Nvidia GeForce GTX 1060 6 GB, but my computer often crashes with only about 70 MB of GPU memory left. From the README I found that "the COFUSION_NUM_SURFELS CMake option can help reduce the memory footprint per model", so how can I do that? Thanks!

strange behavior

I get the following output on my Ubuntu 16.04 system with a fresh install:
[screen recording: cofusion_car4-full]

any idea what could be wrong?

Extract only valid surfels from GPU to CPU

Hi,

I am using your solution as part of a ROS node and was wondering if there is a way to retrieve only the valid surfels directly from the GPU. model->downloadMap() extracts even the invalid surfels, so in order to publish a point cloud it would be faster to loop only over the valid ones. Or do you have a hint about what I could change in the source code to make this possible?

Thank you

Errors about find_package(gSLICr REQUIRED) in CmakeLists.txt

I followed install.sh and got an error while compiling co-fusion itself (the last several lines in install.sh):
cmake \
  -DBOOST_ROOT="${BOOST_ROOT}" \
  -DOpenCV_DIR="${OpenCV_DIR}" \
  -DPangolin_DIR="${Pangolin_DIR}" \
  ..

Error:
CMake Error at CMakeLists.txt:30 (find_package):
By not providing "FindgSLICr.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "gSLICr", but
CMake did not find one.

Could not find a package configuration file provided by "gSLICr" with any
of the following names:

gSLICrConfig.cmake
gslicr-config.cmake

Add the installation prefix of "gSLICr" to CMAKE_PREFIX_PATH or set
"gSLICr_DIR" to a directory containing one of the above files. If "gSLICr"
provides a separate development package or SDK, be sure it has been
installed.

The code is tested on Ubuntu 16.04. The gSLICr package is compiled successfully in the path co-fusion/deps/gSLICr. Is there any way to fix this?

cudaSafeCall() Runtime API error : invalid texture reference.

When I try to run ./CoFusion -l ./sliding-clock.klg, it opens the GUI but shows nothing. When I click the pause button, I get the following error:

co-fusion/Core/Cuda/cudafuncs.cu(646) : cudaSafeCall() Runtime API error : invalid texture reference.
co-fusion/Core/GPUTexture.cpp(79) : cudaSafeCall() Runtime API error : invalid texture reference.


I used the install.sh file to build co-fusion. I didn't get any errors while running install.sh; it completed successfully.

I'm using Ubuntu 16.04, CUDA 8.0, and a GeForce RTX 2070.

What does "Your GPU 'GeForce GT 740M' isn't in the ICP Step performance database" mean?

When I run ./CoFusion -l /to/my/teddy-handover.klg, I get:
Your GPU "GeForce GT 740M" isn't in the ICP Step performance database, please add it
Your GPU "GeForce GT 740M" isn't in the RGB Step performance database, please add it
Your GPU "GeForce GT 740M" isn't in the RGB Res performance database, please add it
Your GPU "GeForce GT 740M" isn't in the SO3 Step performance database, please add it
What does that mean?

CoFusion crashes on exit

Hi,

I managed to build CoFusion on Xenial with CUDA 9 (I have an Nvidia GeForce GTX 950M). Everything seems to work, but very often when I quit the program by closing the window or with Ctrl-C, my screen freezes and I have to force a shutdown with the power button... Has anyone observed the same behavior, or does anyone have an idea about what is going wrong?

Thank you.
