
openchisel's Introduction

OpenChisel

An open-source version of the Chisel chunked TSDF library. It contains two packages:

open_chisel

open_chisel is an implementation of a generic truncated signed distance field (TSDF) 3D mapping library, based on the Chisel mapping framework originally developed for Google's Project Tango. It is a complete rewrite of the original mapping system (which is proprietary). open_chisel is chunked and spatially hashed, inspired by the work of Nießner et al., making it more memory-efficient than fixed-grid mapping approaches and faster than octree-based approaches. A technical description of how it works can be found in our RSS 2015 paper.

This reference implementation does not include any pose estimation, so the pose of the sensor must be provided by an external source. The implementation also avoids any GPU computing, which makes it suitable for limited hardware platforms. It does not contain any system for rendering or displaying the resulting 3D reconstruction. It has been tested on Ubuntu 14.04 with ROS Hydro/Indigo.

API Usage

Check the chisel_ros package source for an example of how to use the API; in particular, the ChiselServer class exercises most of the open_chisel API.
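
For reference, here is a minimal sketch of driving the open_chisel API directly, with no ROS involved. The class and method names follow the library and the way ChiselServer uses it, but the header paths, constructor arguments, and parameter values shown here are assumptions; treat ChiselServer as the authoritative example.

    #include <memory>
    #include <Eigen/Core>
    #include <open_chisel/Chisel.h>
    #include <open_chisel/ProjectionIntegrator.h>
    #include <open_chisel/camera/PinholeCamera.h>
    #include <open_chisel/camera/DepthImage.h>
    #include <open_chisel/truncation/ConstantTruncator.h>
    #include <open_chisel/weighting/ConstantWeighter.h>

    int main()
    {
        // Map with 16x16x16-voxel chunks, 2.5 cm voxels, no color
        // (mirrors the example launch file later in this README).
        chisel::ChiselPtr chiselMap(new chisel::Chisel(Eigen::Vector3i(16, 16, 16), 0.025f, false));

        // Integrator: truncation, weighting, and carving settings.
        // See ChiselServer::SetupProjectionIntegrator for the full configuration.
        chisel::ProjectionIntegrator integrator;
        integrator.SetCentroids(chiselMap->GetChunkManager().GetCentroids());
        integrator.SetTruncator(chisel::TruncatorPtr(new chisel::ConstantTruncator(0.25f))); // assumed ctor argument
        integrator.SetWeighter(chisel::WeighterPtr(new chisel::ConstantWeighter(1.0f)));
        integrator.SetCarvingDist(0.05f);
        integrator.SetCarvingEnabled(true);

        // Pinhole camera model: fill in your sensor's intrinsics (values here are placeholders).
        chisel::Intrinsics intrinsics;
        intrinsics.SetFx(525.0f); intrinsics.SetFy(525.0f);
        intrinsics.SetCx(319.5f); intrinsics.SetCy(239.5f);
        chisel::PinholeCamera camera;
        camera.SetIntrinsics(intrinsics);
        camera.SetWidth(640);
        camera.SetHeight(480);
        camera.SetNearPlane(0.05f);
        camera.SetFarPlane(1.5f);

        // Depth image in meters, and the sensor pose from your own tracker
        // (remember: open_chisel does no pose estimation).
        std::shared_ptr<chisel::DepthImage<float> > depthImage(new chisel::DepthImage<float>(640, 480));
        chisel::Transform cameraPose = chisel::Transform::Identity();

        chiselMap->IntegrateDepthScan<float>(integrator, depthImage, cameraPose, camera);
        chiselMap->UpdateMeshes(); // re-triangulate the chunks that changed
        return 0;
    }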

Dependencies

Compilation note: For speed, it is essential to compile open_chisel with optimization. You will need to add the flag -DCMAKE_BUILD_TYPE=Release to your catkin_make command when building.

chisel_ros

chisel_ros is a wrapper around open_chisel that interfaces with ROS-based depth and color sensors. The main class chisel_ros provides is ChiselServer, which subscribes to depth images, color images, TF frames, and camera intrinsics.

Note: you will also need the message package, chisel_msgs, to build this.

Supported ROS image types:

Depth Images

  • 32-bit floating point mono, in meters (32FC1)
  • 16-bit unsigned integer mono, in millimeters (16UC1)

Color Images

  • BGR8
  • BGRA8
  • Mono8

Dependencies

  • Eigen
  • C++11
  • catkin (ros-hydro or ros-indigo or higher)
  • PCL 1.8 compiled with C++11 enabled
  • ROS OpenCV cv_bridge

A note on PCL

Unfortunately, PCL 1.7.x (the standard PCL included in current versions of ROS) doesn't work with C++11. This project makes use of C++11, so in order to use Chisel you will have to download PCL 1.8, compile it with C++11 enabled, and install it from source.

  1. Download PCL 1.8 from here: https://github.com/PointCloudLibrary/pcl
  2. Modify line 112 of CMakeLists.txt in PCL to say SET(CMAKE_CXX_FLAGS "-Wall -std=c++11 ...
  3. Build and install PCL 1.8
  4. Download pcl_ros from here: https://github.com/ros-perception/perception_pcl
  5. Change the dependency from PCL to PCL 1.8 in the find_package call of its CMakeLists.txt (e.g. find_package(PCL 1.8 REQUIRED ...))
  6. Compile pcl_ros.
  7. Rebuild Chisel

If PCL does not gain C++11 support by default soon, we may simply drop C++11 from OpenChisel and use Boost instead.

Launching chisel_ros Server

Once built, the chisel_ros server can be launched by using a launch file. There's an example launch file located at chisel_ros/launch/launch_kinect_local.launch. Modify the parameters as necessary to connect to your camera and TF frame.

<launch>
    <!-- Use a different machine name to connect to a different ROS server-->
    <machine name="local" address="localhost" default="true"/>
    <!-- The chisel server node-->
    <node name="Chisel" pkg="chisel_ros" type="ChiselNode" output="screen"> 
        <!-- Size of the TSDF chunks in number of voxels -->
        <param name="chunk_size_x" value="16"/>
        <param name="chunk_size_y" value="16"/>
        <param name="chunk_size_z" value="16"/>
        <!-- The distance away from the surface (in cm) we are willing to reconstruct -->
        <param name="truncation_scale" value="10.0"/>
        <!-- Whether to use voxel carving. If set to true, space near the sensor will be
             carved away, allowing for moving objects and other inconsistencies to disappear -->
        <param name="use_voxel_carving" value="true"/>
        <!-- When true, the mesh will get colorized by the color image.-->
        <param name="use_color" value="false"/>
        <!-- The distance from the surface (in meters) which will get carved away when
             inconsistencies are detected (see use_voxel_carving)-->
        <param name="carving_dist_m" value="0.05"/>
        <!-- The size of each TSDF voxel in meters-->
        <param name="voxel_resolution_m" value="0.025"/>
        <!-- The maximum distance (in meters) that will be constructed. Use lower values
             for close-up reconstructions and to save on memory. -->
        <param name="far_plane_dist" value="1.5"/>
        <!-- Name of the TF frame corresponding to the fixed (world) frame -->
        <param name="base_transform" value="/base_link"/>
        <!-- Name of the TF frame associated with the depth image. Z points forward, Y down, and X right -->
        <param name="depth_image_transform" value="/camera_depth_optical_frame"/>
        <!-- Name of the TF frame associated with the color image -->
        <param name="color_image_transform" value="/camera_rgb_optical_frame"/>
        <!-- Mode to use for reconstruction. There are two modes: DepthImage and PointCloud.
             Only use PointCloud if no depth image is available. It is *much* slower-->
        <param name="fusion_mode" value="DepthImage"/>
    
        <!-- Name of the depth image to use for reconstruction -->
        <remap from="/depth_image" to="/camera/depth/image"/>
        <!-- Name of the CameraInfo (intrinsic calibration) topic for the depth image. -->
        <remap from="/depth_camera_info" to="/camera/depth/camera_info"/>
        <!-- Name of the color image topic -->
        <remap from="/color_image" to="/camera/color/image"/>
        <!-- Name of the color camera's CameraInfo topic -->
        <remap from="/color_camera_info" to="/camera/color/camera_info"/>
        
        <!-- Name of a point cloud to use for reconstruction. Only use this if no depth image is available -->
        <remap from="/camera/depth_registered/points" to="/camera/depth/points"/>
    </node>
</launch>

Then, launch the server using roslaunch chisel_ros <your_launchfile>.launch. You should see output indicating that open_chisel is receiving depth images. Now you can visualize the results in RViz.

Type rosrun rviz rviz to open the RViz visualizer. Then add a Marker display subscribed to the topic /Chisel/full_mesh; this topic displays the mesh reconstructed by Chisel.

Services

chisel_ros provides several ROS services you can use to interface with the reconstruction in real-time. These are:

  • Reset -- Deletes all the TSDF data and starts the reconstruction from scratch.
  • TogglePaused -- Pauses/Unpauses reconstruction
  • SaveMesh -- Saves a PLY mesh file of the entire scene to the desired location
  • GetAllChunks -- Returns a list of all of the voxel data in the scene. Each chunk is stored as a separate entity with its data in a byte array.
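
Each service is advertised under the node's namespace (Chisel in the example launch file above), so, for example, rosservice call /Chisel/Reset should clear the map from the command line; check chisel_msgs for the exact service names and request fields.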


openchisel's Issues

setupFrustum bug

In open_chisel/src/camera/PinholeCamera.cpp, in void PinholeCamera::SetupFrustum(const Transform& view, Frustum* frustum) const, where Frustum::SetFromParams(...) is called: shouldn't the 4th parameter be intrinsics.GetFx()?
The code is:

void PinholeCamera::SetupFrustum(const Transform& view, Frustum* frustum) const
{
    assert(frustum != nullptr);
    frustum->SetFromParams(view, nearPlane, farPlane, intrinsics.GetFy(), intrinsics.GetFy(),
                           intrinsics.GetCx(), intrinsics.GetCy(), width, height);
}
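
Assuming SetFromParams expects its parameters in the order (view, near, far, fx, fy, cx, cy, width, height), the corrected call would presumably be:

    frustum->SetFromParams(view, nearPlane, farPlane, intrinsics.GetFx(), intrinsics.GetFy(),
                           intrinsics.GetCx(), intrinsics.GetCy(), width, height);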

launch dataset error

I can build the ChiselNode node, but when I launch the launch_freiburg_dataset file, I get an error. How can I solve this problem?
[Screenshot of the error output attached.]

Wrong results when computing Chunk- and Voxel-IDs

Hi,
the usage of floating-point meters for distances and resolutions causes numerical issues.
For example, we have a 1D map with a voxel resolution of 0.03m and a chunk size of 16.
The position -0.48 m should be assigned to chunk ID -1, but due to this issue the ChunkManager gives me chunk ID -2.
This is because the computed floating-point chunk ID is slightly smaller than -1.0, and thus std::floor yields -2.

When the corresponding voxel coordinates are then computed with the wrong chunk ID, they come out wrong as well.

To fix this, one could use this inefficient "hack":

        inline ChunkID GetIDAt(const Vec3& pos) const
        {
            float voxelResolution = std::round(this->voxelResolutionMeters*1000);

            float x = std::round(pos(0)*1000) / (voxelResolution * chunkSize(0));
            float y = std::round(pos(1)*1000) / (voxelResolution * chunkSize(1));
            float z = std::round(pos(2)*1000) / (voxelResolution * chunkSize(2));

            return ChunkID(static_cast<int>(std::floor(x)),
                           static_cast<int>(std::floor(y)),
                           static_cast<int>(std::floor(z)));
        }

or internally use millimeters to improve numerical stability.

These pictures (integrating chunks into a chunk map) visualize the problem: voxels near chunk borders don't get visited.

[Image: mesh without the bugfix.]

[Image: mesh with the bugfix.]

After fixing the computation of the IDs, the mesh doesn't have the holes anymore.

OpenChisel works with 14.04 LTS stock setup for me

Hi,
I can compile and run OpenChisel on two different computers with Ubuntu 14.04 LTS and the normal ros-indigo-* packages and the stock PCL 1.7.2.
To prevent a segmentation fault, I just need to set the CMake build type to "Release" or "RelWithDebInfo".
When I don't set any build type, or e.g. write "set(CMAKE_BUILD_TYPE Debug)", I get a segmentation fault.

Documentation for pointcloud format and API cycles

Thanks for building this nice open source library!

I have a few questions about the input data and the API call cycle that I couldn't find answers to in the README or the ROS implementation. I'm currently trying to use open_chisel on Project Tango without the ROS connection.

  • What should the point cloud look like? Should the points already be transformed by the pose transformation?
  • What is your typical usage of the API methods? Do you generate a mesh on every input depth frame?
  • When do you do garbage collection?

Thanks a lot!

Edit: Sorry, I've read the paper and took another look at the code:

  • Garbage collection is done for every chunk after updating.
  • Point cloud points get transformed by the camera transformation.

Camera extrinsics

I am curious about the camera extrinsics to be passed into the IntegrateDepthScan or IntegrateDepthScanColor methods. When I pass in my extrinsic transform I get good reconstructions from each camera, but they are not merged to the origin. It seems like they are reconstructed where the cameras are in space.

Maybe you can spot something I am doing wrong. I manually obtained the calibration from my Kinects and am currently not using the ros_server code.

Here's a photo and the camera parameters to be more specific:

[Photo of the reconstruction attached.]

And here are the camera extrinsics for each camera:

Camera 1 Translation
-0.14346 0.198077 -2.68291

Camera 1 Rotation:
0.595904 -0.273674 -0.754984
-0.800451 -0.278087 -0.530986
-0.0646346 0.920744 -0.384776

Camera 2 Translation
-0.252438 0.226531 -2.69458

Camera 2 Rotation:
0.861868 0.17409 0.476315
0.50523 -0.376048 -0.776744
0.043894 0.910099 -0.412059

Camera 3 Translation
-0.0606926 0.626859 -1.58876

Camera 3 Rotation:
-0.993451 0.0622725 0.0957956
0.104139 0.148552 0.983406
0.0470085 0.986942 -0.154064

Any help would be greatly appreciated! :)

Error in hash function - p3 = 8349279; should be p3 = 83492791; according to paper

// https://github.com/personalrobotics/OpenChisel/blob/master/open_chisel/include/open_chisel/ChunkManager.h
//
// Spatial hashing function from Matthias Teschner
// Optimized Spatial Hashing for Collision Detection of Deformable Objects
struct ChunkHasher
{
    // Three large primes are used for spatial hashing.
    static constexpr size_t p1 = 73856093;
    static constexpr size_t p2 = 19349663;
    static constexpr size_t p3 = 8349279;

    std::size_t operator()(const ChunkID& key) const
    {
        return (key(0) * p1 ^ key(1) * p2 ^ key(2) * p3);
    }
};
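
According to the issue title and the cited Teschner et al. paper, the third prime should instead read:

    static constexpr size_t p3 = 83492791;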

How to get triangle vertices of the OpenChisel mesh ?

Hello,

I would like to know how to get the triangle vertices of the mesh generated by OpenChisel. I am working on image-texturing the mesh generated by OpenChisel to get a visually better-quality mesh.

It would be great if you could support me with this query.

Thanks,
Janani
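
For what it's worth, here is a hedged sketch of walking the triangles of the generated mesh. GetChunkManager().GetAllMeshes() and the vertices member are how chisel_ros fills its marker topic; treat the exact types used here as assumptions.

    #include <open_chisel/Chisel.h>

    // Every consecutive group of three entries in mesh->vertices is one triangle.
    void ForEachTriangle(const chisel::ChiselPtr& chiselMap)
    {
        const chisel::MeshMap& meshMap = chiselMap->GetChunkManager().GetAllMeshes();
        for (const auto& meshPair : meshMap)
        {
            const chisel::MeshPtr& mesh = meshPair.second;
            for (size_t i = 0; i + 2 < mesh->vertices.size(); i += 3)
            {
                const chisel::Vec3& v0 = mesh->vertices[i];
                const chisel::Vec3& v1 = mesh->vertices[i + 1];
                const chisel::Vec3& v2 = mesh->vertices[i + 2];
                // Texture or export the triangle (v0, v1, v2) here.
            }
        }
    }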

Fixed Frame [map] does not exist when running with Freiburg RGBD dataset

When I launch launch_freiburg_dataset.launch with the rosbag "rgbd_dataset_freiburg3_long_office_household.bag", RViz reports the error:
no tf data. Fixed Frame [map] does not exist.

I didn't edit the launch file. It seems that no TF data can be found, or that no TF frame is set. Meanwhile, nothing is shown in RViz except the grid. How can I fix this? Thanks a lot.

vertex indices never used

I know this repository is no longer maintained and that I should open an issue in the voxblox repository, but I find that the mesh module of voxblox largely inherits from OpenChisel, so I am opening an issue in both repositories.

I find that the mesh module computes vertex indices for every chunk, but they are never used in the ROS node or when saving the mesh to disk. The mesh also contains many duplicated vertices, since every three vertices form one triangle. Is there any way to make use of the vertex indices (see the sketch below), or to update the mesh partially, since the mesh grows larger and larger as the map gets bigger?
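
As a workaround (a sketch under stated assumptions, not part of OpenChisel), one can collapse the duplicated vertices into an indexed triangle list after extraction, independently of the library's own unused indices; the quantization tolerance below is an arbitrary choice.

    #include <cmath>
    #include <cstdint>
    #include <map>
    #include <tuple>
    #include <vector>
    #include <Eigen/Core>

    // Build an indexed mesh from a flat triangle soup (three vertices per triangle),
    // merging vertices that land in the same quantization cell.
    void IndexMesh(const std::vector<Eigen::Vector3f>& soup,
                   std::vector<Eigen::Vector3f>* vertices,
                   std::vector<uint32_t>* indices)
    {
        std::map<std::tuple<int, int, int>, uint32_t> lookup;
        const float quantum = 1e-4f; // merge tolerance in meters (assumption)
        for (const Eigen::Vector3f& v : soup)
        {
            const std::tuple<int, int, int> key(static_cast<int>(std::round(v.x() / quantum)),
                                                static_cast<int>(std::round(v.y() / quantum)),
                                                static_cast<int>(std::round(v.z() / quantum)));
            auto it = lookup.find(key);
            if (it == lookup.end())
            {
                it = lookup.emplace(key, static_cast<uint32_t>(vertices->size())).first;
                vertices->push_back(v);
            }
            indices->push_back(it->second);
        }
    }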

Erroneous Mesh Generation with custom dataset

I am trying to run this code on input data from ARCore. I am struggling with what should be fed as the camera pose. Is it the camera pose in the physical world, or the camera extrinsics (world to camera)?

Usage without ROS

Hi @mklingen, thank you for opening up this nice work!
I have tested the code with ROS and it runs well. But I don't want to use ROS; it's too heavy and I'm not familiar with it. Can I use OpenChisel without it? I'm trying to use Chisel directly. The code compiled successfully, but it did not reconstruct anything; below is its output:

CHISEL: Done with scan without color 0 chunks 0meshes before collect.0 chunks 0meshes after collect.CHISEL: Integrating a scan

Is there any tip or demo for me? Thanks.

Add point cloud support

Currently, open_chisel only supports depth images, and can't deal with things like laser scanners. Raycasting and point clouds must be implemented to make this work.

Can not launch chisel_ros node: seems like eigen error

Sorry to disturb.
I'm trying to install OpenChisel on my computer and followed all the procedures in the README. Everything is OK until I launch the chisel_ros node, which fails with the errors below. It seems like something is wrong with Eigen. Have you encountered this error before? Which Eigen version are you using? Thanks a lot.

pj@pj:~/catkin_ws$ roslaunch chisel_ros pinhole.launch
... logging to /home/pj/.ros/log/0d5229a8-3455-11e9-a5b6-2c56dc4dddb6/roslaunch-pj-9512.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
WARNING: disk usage in log directory [/home/pj/.ros/log] is over 1GB.
It's recommended that you use the 'rosclean' command.

started roslaunch server http://pj:33275/

SUMMARY

PARAMETERS

  • /Chisel/Colorimage_topic: /hybrid_map/refer...
  • /Chisel/Depthimage_topic: /camera/depth/ima...
  • /Chisel/Odometry_topic: /vins_estimator/c...
  • /Chisel/camera_type: Pinhole
  • /Chisel/carving_dist_m: 0.05
  • /Chisel/chunk_size_x: 16
  • /Chisel/chunk_size_y: 16
  • /Chisel/chunk_size_z: 16
  • /Chisel/cx: 323.120483398
  • /Chisel/cy: 236.743209839
  • /Chisel/far_plane_dist: 3
  • /Chisel/fusion_mode: DepthImage
  • /Chisel/fx: 385.754486084
  • /Chisel/fy: 385.754486084
  • /Chisel/image_height: 480
  • /Chisel/image_width: 640
  • /Chisel/truncation_scale: 10
  • /Chisel/use_color: False
  • /Chisel/use_voxel_carving: True
  • /Chisel/voxel_resolution_m: 0.05
  • /rosdistro: lunar
  • /rosversion: 1.13.6

NODES
/
Chisel (chisel_ros/ChiselNode)

auto-starting new master
process[master]: started with pid [9522]
ROS_MASTER_URI=http://localhost:11311

setting /run_id to 0d5229a8-3455-11e9-a5b6-2c56dc4dddb6
process[rosout-1]: started with pid [9535]
started core service [/rosout]
process[Chisel-2]: started with pid [9542]
[ INFO] [1550587551.670122841]: Starting up chisel node.
[ INFO] [1550587551.685902827]: Mode depth image
[ INFO] [1550587551.685925961]: Subscribing.
ChiselNode: /usr/include/eigen3/Eigen/src/Core/DenseStorage.h:128: Eigen::internal::plain_array<T, Size, MatrixOrArrayOptions, 32>::plain_array() [with T = float; int Size = 16; int MatrixOrArrayOptions = 0]: Assertion `(reinterpret_cast<size_t>(eigen_unaligned_array_assert_workaround_gcc47(array)) & (31)) == 0 && "this assertion is explained here: " "http://eigen.tuxfamily.org/dox-devel/group__TopicUnalignedArrayAssert.html" " **** READ THIS WEB PAGE !!! ***"' failed.
[Chisel-2] process has died [pid 9542, exit code -6, cmd /home/pj/catkin_ws/devel/lib/chisel_ros/ChiselNode __name:=Chisel __log:=/home/pj/.ros/log/0d5229a8-3455-11e9-a5b6-2c56dc4dddb6/Chisel-2.log].
log file: /home/pj/.ros/log/0d5229a8-3455-11e9-a5b6-2c56dc4dddb6/Chisel-2.log

has 0 mesh

When I feed my images into OpenChisel, I find that I get 0 meshes.
[Screenshot of the output attached.]

huge memory consumption

Hi @mklingen
Thanks for sharing the code. I compiled and built the program without ROS and used OpenCV as the image wrapper. However, I found that memory consumption jumped to over 2.1 GB when running the GetChunkIDsIntersecting() function under chisel::integrateDepthScan() (I assume this is the main API for depth-image input, right?). Is this normal, or have I done something wrong? Thanks.

Convert to hydro/catkin

Currently OpenChisel assumes a fuerte/rosbuild system. This should be converted to hydro/catkin for future use.

build error

When I build OpenChisel, I get an error:

Could not find a package configuration file provided by "open_chisel" with
any of the following names:

open_chiselConfig.cmake
open_chisel-config.cmake

Too slow when testing on the Freiburg RGBD dataset

Hi,
I tested Chisel on my desktop (Intel Xeon E3-1231 v3 @ 3.40 GHz × 8), running launch_freiburg_dataset.launch with the default parameters. Every update frame gives a nice reconstruction result, but it is too slow (almost 3 seconds per frame). I have no idea whether there are important parameters that need to be set, or other significant things I should pay attention to. Thanks!

ChiselServer drops data when TSDF mapping is lagging behind framerate.

I've tested OpenChisel repeatedly with the sample launch file that uses the Freiburg dataset, and I'm noticing that the resulting meshes tend to vary, missing some piece or other and ending up with a different number of vertices each time.

Is this behaviour expected? It does the same with other datasets as well, like when interacting with rtabmap, and it gives the impression that it is only using a small portion of the data to fill in the TSDF (that is, only a fraction of the frames/point clouds it receives).

An example of the different meshes I obtain: freiburg_meshes.zip
