
obstacle_detector's Introduction

The obstacle_detector package

The obstacle_detector package provides utilities to detect and track obstacles from data provided by 2D laser scanners. Detected obstacles come in the form of line segments or circles. The package was designed for a robot equipped with two laser scanners, and therefore contains several additional utilities. The working principles of the method are described in an article provided in the resources folder.

The package requires the Armadillo C++ library for compilation and runtime.


Fig. 1. Visual example of obstacle detector output.


  1. The nodes and nodelets
     1.1 The scans_merger
     1.2 The obstacle_extractor
     1.3 The obstacle_tracker
     1.4 The obstacle_publisher
  2. The messages
  3. Launch files
  4. The displays

1. The nodes and nodelets

The package provides separate nodes/nodelets to perform separate tasks. In the general solution, the data is processed in the following manner:

two laser scans -> scans merger -> merged scan or pcl -> obstacle extractor -> obstacles -> obstacle tracker -> refined obstacles

For some scenarios, pure obstacle extraction directly from a laser scan (without tracking) might be sufficient.

The nodes are configurable via the ROS parameter server. All of the nodes provide a private params service, which makes the node fetch the latest parameters from the parameter server.

All of the nodes can be in either active or sleep mode, toggled by setting the appropriate variable on the parameter server and calling the params service. In sleep mode, any subscribers or publishers are shut down and the node does nothing until activated again.
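A minimal sketch of toggling a node between the two modes from Python is shown below. It assumes the node is named scans_merger (as in the launch files) and that the params service takes no arguments (e.g. std_srvs/Empty); check the package sources for the actual service type before relying on this.

#!/usr/bin/env python
# Illustrative sketch: put a node to sleep and wake it up again by flipping
# its ~active parameter and calling its private "params" service.
# The service type is assumed here to be std_srvs/Empty.
import rospy
from std_srvs.srv import Empty

def set_node_active(node_name, active):
    rospy.set_param(node_name + "/active", active)   # write the desired mode
    rospy.wait_for_service(node_name + "/params")
    update_params = rospy.ServiceProxy(node_name + "/params", Empty)
    update_params()                                   # ask the node to re-read its parameters

if __name__ == "__main__":
    rospy.init_node("mode_switcher")
    set_node_active("scans_merger", False)  # sleep mode
    rospy.sleep(5.0)
    set_node_active("scans_merger", True)   # back to active mode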

For ease of use it is recommended to use the appropriate Rviz panels provided with the package for these nodes. The Rviz panels communicate via the parameter server and service-client calls, therefore the names of the nodes must be left unchanged (cf. the launch files for examples).

1.1. The scans_merger node

This node converts two messages of type sensor_msgs/LaserScan from topics front_scan and rear_scan into a single laser scan of the same type, published under topic scan, and/or a point cloud of type sensor_msgs/PointCloud, published under topic pcl. The difference between the two is that the resulting laser scan divides the area into a finite number of circular sectors and puts one point (or actually one range value) into each sector occupied by measured points, whereas the resulting point cloud simply copies all of the points obtained from the sensors.

The input laser scans are first rectified to incorporate the motion of the scanner in time (see the laser_geometry package). Next, the two point clouds obtained in the previous step are synchronized and transformed into the target coordinate frame at the current point in time.
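The sector binning of the merged scan can be pictured with the following standalone Python sketch. It only illustrates the principle (one range value per circular sector, keeping the nearest point); it is not the node's code.

import math

def bin_points_into_sectors(points, ranges_num=1000, max_range=10.0):
    # Reduce a list of (x, y) points to one range per angular sector,
    # mimicking the idea behind the merged laser scan message.
    ranges = [float("inf")] * ranges_num
    sector_width = 2.0 * math.pi / ranges_num
    for x, y in points:
        r = math.hypot(x, y)
        if r > max_range:
            continue
        angle = math.atan2(y, x)                   # angle in [-pi, pi]
        idx = int((angle + math.pi) / sector_width) % ranges_num
        ranges[idx] = min(ranges[idx], r)          # keep the nearest point in the sector
    return ranges

# Two points falling into the same sector: only the closer one survives.
print(bin_points_into_sectors([(1.0, 0.001), (2.0, 0.001)])[500])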


Fig. 2. Visual example of `scans_merger` output.


The resulting messages contain geometric data described with respect to a specific coordinate frame (e.g. the robot). Assuming that the coordinate frames attached to the two laser scanners are called front_scanner and rear_scanner, both the transformation from the robot frame to the front_scanner frame and from the robot frame to the rear_scanner frame must be provided. The node also allows one to restrict the measured points to a rectangular region around the robot frame and to limit the resulting laser scan range. Points falling outside this region are discarded.
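At the level of a single point, the restriction can be read as the following check (an illustrative sketch written around the parameters listed below; the node performs this internally):

def point_accepted(x, y,
                   min_x_range=-10.0, max_x_range=10.0,
                   min_y_range=-10.0, max_y_range=10.0,
                   min_scanner_range=0.05, max_scanner_range=10.0):
    # A point is kept only if it lies inside the rectangular region around
    # the target frame and its range lies between the scanner limits.
    r = (x * x + y * y) ** 0.5
    in_rectangle = min_x_range <= x <= max_x_range and min_y_range <= y <= max_y_range
    in_range = min_scanner_range <= r <= max_scanner_range
    return in_rectangle and in_range

print(point_accepted(1.0, 2.0))    # True
print(point_accepted(12.0, 0.0))   # False - outside the rectangle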

Even if only one laser scanner is used, the node can be useful for simple data pre-processing, e.g. rectification, range restriction or recalculation of points into a different coordinate frame. The node uses the following set of local parameters:

  • ~active (bool, default: true) - active/sleep mode,
  • ~publish_scan (bool, default: false) - publish the merged laser scan message,
  • ~publish_pcl (bool, default: true) - publish the merged point cloud message,
  • ~ranges_num (int, default: 1000) - number of ranges (circular sectors) contained in the 360 deg laser scan message,
  • ~min_scanner_range (double, default: 0.05) - minimal allowable range value for produced laser scan message,
  • ~max_scanner_range (double, default: 10.0) - maximal allowable range value for produced laser scan message,
  • ~min_x_range (double, default: -10.0) - limit for point coordinates (points with coordinates outside these limits will be discarded),
  • ~max_x_range (double, default: 10.0) - as above,
  • ~min_y_range (double, default: -10.0) - as above,
  • ~max_y_range (double, default: 10.0) - as above,
  • ~fixed_frame_id (string, default: map) - name of the fixed coordinate frame used for scan rectification in time,
  • ~target_frame_id (string, default: robot) - name of the coordinate frame used as the origin for the produced laser scan or point cloud.

The package comes with an Rviz panel for this node.


Fig. 3. Rviz panel for the `scans_merger` node.


1.2. The obstacle_extractor node

This node converts messages of type sensor_msgs/LaserScan from topic scan or messages of type sensor_msgs/PointCloud from topic pcl into obstacles, which are published as messages of custom type obstacle_detector/Obstacles under topic raw_obstacles. The point cloud message must be ordered in an angular fashion, because the algorithm exploits the polar nature of laser scanners.


Fig. 4. Visual example of `obstacle_extractor` output.


The input points are first grouped into subsets and marked as visible or not (if a group is in front of its neighbouring groups, it is visible; otherwise it is assumed to be occluded). The algorithm extracts segments from each subset of points. Next, the segments are checked for possible merging with each other. Circular obstacles are then extracted from the segments and also merged if possible. The resulting set of obstacles can be transformed into a dedicated coordinate frame.
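The splitting step can be sketched as a generic iterative-end-point-fit style routine. The snippet below only illustrates the principle around the documented max_split_distance and min_group_points parameters; it is not the package's implementation (which also offers the Split And Merge variant via use_split_and_merge).

import math

def point_line_distance(p, a, b):
    # Distance from point p to the line through a and b (all (x, y) tuples).
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(bx - ax, by - ay)
    return num / den if den > 0.0 else math.hypot(px - ax, py - ay)

def split_group(points, max_split_distance=0.2, min_group_points=5):
    # Fit the chord between the first and last point of the group (the
    # "leading line"), split at the farthest point if it lies farther than
    # max_split_distance from that line, and recurse; drop small groups.
    if len(points) < min_group_points:
        return []
    a, b = points[0], points[-1]
    distances = [point_line_distance(p, a, b) for p in points]
    i_max = max(range(len(points)), key=lambda i: distances[i])
    if distances[i_max] > max_split_distance:
        return (split_group(points[:i_max + 1], max_split_distance, min_group_points) +
                split_group(points[i_max:], max_split_distance, min_group_points))
    return [(a, b)]  # one extracted segment, described by its end points

# An L-shaped set of points splits into two segments.
pts = [(x / 10.0, 0.0) for x in range(10)] + [(1.0, y / 10.0) for y in range(1, 10)]
print(split_group(pts))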

The node is configurable with the following set of local parameters:

  • ~active (bool, default: true) - active/sleep mode,
  • ~use_scan (bool, default: false) - use laser scan messages,
  • ~use_pcl (bool, default: true) - use point cloud messages (if both scan and pcl are chosen, scans will have priority),
  • ~use_split_and_merge (bool, default: true) - choose whether to use the Iterative End Point Fit (false) or Split And Merge (true) algorithm to detect segments,
  • ~circles_from_visibles (bool, default: true) - detect circular obstacles only from fully visible (not occluded) segments,
  • ~discard_converted_segments (bool, default: true) - do not publish segments, from which the circles were spawned,
  • ~transform_coordinates (bool, default: true) - transform the coordinates of obstacles to a frame described with frame_id parameter,
  • ~min_group_points (int, default: 5) - minimum number of points comprising a group to be further processed,
  • ~max_group_distance (double, default: 0.1) - if the distance between two points is greater than this value, start a new group,
  • ~distance_proportion (double, default: 0.00628) - enlarge the allowable distance between points proportionally to the range of the point (use the scan angle increment in radians),
  • ~max_split_distance (double, default: 0.2) - if a point in a group lies farther from the leading line than this value, split the group,
  • ~max_merge_separation (double, default: 0.2) - if distance between obstacles is smaller than this value, consider merging them,
  • ~max_merge_spread (double, default: 0.2) - merge two segments if all of their extreme points lie closer to the leading line than this value,
  • ~max_circle_radius (double, default: 0.6) - if a circle would have a greater radius than this value, skip it,
  • ~radius_enlargement (double, default: 0.25) - artificially enlarge the circle's radius by this value (see the sketch after this list),
  • ~frame_id (string, default: map) - name of the coordinate frame used as origin for produced obstacles (used only if transform_coordinates flag is set to true).
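The two circle-related parameters above can be read as the following post-processing rule (an illustrative sketch only, not the node's code):

def postprocess_circle(radius, max_circle_radius=0.6, radius_enlargement=0.25):
    # Discard circles that are too large; otherwise pad the radius with a
    # safety margin (compare the radius vs. true_radius fields of the
    # CircleObstacle message described in section 2).
    if radius > max_circle_radius:
        return None                      # skip this circle
    return radius + radius_enlargement   # enlarged radius

print(postprocess_circle(0.3))   # 0.55
print(postprocess_circle(0.8))   # None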

The package comes with an Rviz panel for this node.


Fig. 5. Rviz panel for the `obstacle_extractor` node.


1.3. The obstacle_tracker node

This node tracks and filters circular obstacles with the use of a Kalman filter. It subscribes to messages of custom type obstacle_detector/Obstacles from topic raw_obstacles and publishes messages of the same type under topic tracked_obstacles. The tracking algorithm is applied only to the circular obstacles, hence the segments list in the published message is simply a copy of the original segments. The tracked obstacles are supplemented with additional information on their velocity.
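As a rough illustration of the principle (not the node's actual filter, which is described in the article in the resources folder), a one-dimensional constant-velocity Kalman filter over a single tracked value could look as follows; the parameter names mirror the ones listed further below.

import numpy as np

class ConstantVelocityKF1D(object):
    # Illustrative 1-D constant-velocity Kalman filter: the state is
    # [value, rate]. Applying this idea to a circle's center coordinates is
    # what yields the velocity information in tracked_obstacles.
    def __init__(self, dt, process_variance=0.01, process_rate_variance=0.1,
                 measurement_variance=1.0):
        self.x = np.zeros(2)                              # [value, rate]
        self.P = np.eye(2)                                # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity model
        self.Q = np.diag([process_variance, process_rate_variance])
        self.H = np.array([[1.0, 0.0]])                   # only the value is measured
        self.R = np.array([[measurement_variance]])

    def predict(self):
        self.x = self.F.dot(self.x)
        self.P = self.F.dot(self.P).dot(self.F.T) + self.Q

    def correct(self, z):
        y = np.array([z]) - self.H.dot(self.x)            # innovation
        S = self.H.dot(self.P).dot(self.H.T) + self.R
        K = self.P.dot(self.H.T).dot(np.linalg.inv(S))    # Kalman gain
        self.x = self.x + K.dot(y)
        self.P = (np.eye(2) - K.dot(self.H)).dot(self.P)

# Feed noisy x-coordinates of a circle's center sampled at 100 Hz.
kf = ConstantVelocityKF1D(dt=0.01)
for z in [0.00, 0.01, 0.02, 0.03, 0.04]:
    kf.predict()
    kf.correct(z)
print(kf.x)  # filtered position and estimated velocity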


Fig. 6. Visual example of `obstacle_tracker` output.


The node works in a synchronous manner with a default rate of 100 Hz. If detected obstacles are published less often, the tracker will supersample them and smooth their position and radius. The following set of local parameters can be used to tune the node:

  • ~active (bool, default: true) - active/sleep mode,
  • ~copy_segments (bool, default: true) - copy detected segments into tracked obstacles message,
  • ~loop_rate (double, default: 100.0) - the main loop rate in Hz,
  • ~tracking_duration (double, default: 2.0) - the duration of obstacle tracking in the case of lack of incoming data (after this time since the last corresponding measurement, the tracked obstacle is removed from the list),
  • ~min_correspondence_cost (double, default: 0.3) - a threshold for the correspondence test,
  • ~std_correspondence_dev (double, default: 0.15) - (experimental) standard deviation of the position ellipse in the correspondence test,
  • ~process_variance (double, default: 0.01) - variance of the obstacles' position and radius (parameter of the Kalman filter),
  • ~process_rate_variance (double, default: 0.1) - variance of the rate of change of the obstacles' values (parameter of the Kalman filter),
  • ~measurement_variance (double, default: 1.0) - variance of the measured obstacles' values (parameter of the Kalman filter),
  • ~frame_id (string, default: map) - name of the coordinate frame in which the obstacles are described.

The package comes with an Rviz panel for this node.


Fig. 7. Rviz panel for the `obstacle_tracker` node.


1.4. The obstacle_publisher node

An auxiliary node which allows one to publish a set of virtual, circular obstacles in the form of messages of type obstacle_detector/Obstacles under topic obstacles. The node is mostly used for off-line tests. The following parameters are used to configure the node:

  • ~active (bool, default: true) - active/sleep mode,
  • ~reset (bool, default: false) - reset time for obstacles motion calculation (used by dedicated Rviz panel),
  • ~fusion_example (bool, default: false) - produce obstacles showing fusion,
  • ~fission_example (bool, default: false) - produce obstacles showing fission,
  • ~radius_margin (double, default: 0.25) - artificially enlarge the circle's radius by this value in meters,
  • ~loop_rate (double, default: 10.0) - the main loop rate in Hz,
  • ~frame_id (string, default: map) - name of the coordinate frame in which the obstacles are described.

The following parameters are used to provide the node with a set of obstacles (a Python sketch of setting them is given after the list):

  • ~x_vector (std::vector<double>, default: []) - the array of x-coordinates of the obstacles' center points,
  • ~y_vector (std::vector<double>, default: []) - the array of y-coordinates of the obstacles' center points,
  • ~r_vector (std::vector<double>, default: []) - the array of obstacles' radii,
  • ~vx_vector (std::vector<double>, default: []) - the array of velocities of the obstacles' center points in the x direction,
  • ~vy_vector (std::vector<double>, default: []) - the array of velocities of the obstacles' center points in the y direction.
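A sketch of providing these vectors from a Python script instead of a launch file is shown below; the node name obstacle_publisher is taken from the launch files and assumed to be left unchanged.

#!/usr/bin/env python
# Illustrative only: define two virtual circular obstacles for the
# obstacle_publisher node by writing its parameter vectors, mirroring the
# <rosparam> entries used in the launch files.
import rospy

if __name__ == "__main__":
    rospy.init_node("virtual_obstacles_setup")
    ns = "/obstacle_publisher/"
    rospy.set_param(ns + "x_vector",  [1.0, -2.0])   # x-coordinates of centers [m]
    rospy.set_param(ns + "y_vector",  [0.5,  1.5])   # y-coordinates of centers [m]
    rospy.set_param(ns + "r_vector",  [0.3,  0.5])   # radii [m]
    rospy.set_param(ns + "vx_vector", [0.0,  0.1])   # center velocities in x [m/s]
    rospy.set_param(ns + "vy_vector", [0.0,  0.0])   # center velocities in y [m/s]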

The package comes with an Rviz panel for this node.


Fig. 8. Rviz panel for the `obstacle_publisher` node.


2. The messages

The package provides three custom message types. All of their numerical values are expressed in SI units. A minimal example of reading these messages from Python is sketched after the list.

  • CircleObstacle
    • geometry_msgs/Point center - center of circular obstacle,
    • geometry_msgs/Vector3 velocity - linear velocity of circular obstacle,
    • float64 radius - radius of circular obstacle with added safety margin,
    • float64 true_radius - measured radius of obstacle without the safety margin.
  • SegmentObstacle
    • geometry_msgs/Point first_point - first point of the segment (in counter-clockwise direction),
    • geometry_msgs/Point last_point - end point of the segment.
  • Obstacles
    • Header header
    • obstacle_detector/SegmentObstacle[] segments
    • obstacle_detector/CircleObstacle[] circles
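For reference, a minimal subscriber that reads these messages could look as follows (assuming the Python module obstacle_detector.msg generated by the package and the raw_obstacles topic used above):

#!/usr/bin/env python
# Minimal sketch of consuming obstacle_detector/Obstacles messages;
# the field layout follows the message description above.
import rospy
from obstacle_detector.msg import Obstacles

def callback(msg):
    for circle in msg.circles:
        rospy.loginfo("circle at (%.2f, %.2f), radius %.2f (true radius %.2f)",
                      circle.center.x, circle.center.y,
                      circle.radius, circle.true_radius)
    for segment in msg.segments:
        rospy.loginfo("segment from (%.2f, %.2f) to (%.2f, %.2f)",
                      segment.first_point.x, segment.first_point.y,
                      segment.last_point.x, segment.last_point.y)

if __name__ == "__main__":
    rospy.init_node("obstacles_listener")
    rospy.Subscriber("raw_obstacles", Obstacles, callback)
    rospy.spin()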

3. The launch files

The provided launch files are good examples of how to use the obstacle_detector package. They give a full list of parameters used by each of the provided nodes.

  • demo.launch - Plays a rosbag with recorded scans and starts all of the nodes with Rviz configured with appropriate panels.
  • nodes_example.launch - Runs all of the nodes with their parameters set to default values.
  • nodelets_example.launch - Runs all of the nodelets with their parameters set to default values.

4. The displays

For better visual effects, an appropriate Rviz display for the Obstacles messages has been prepared. Via its properties, one can change the colors of the obstacles.

Author: Mateusz Przybyla

obstacle_detector's People

Contributors

milesial, tysik


obstacle_detector's Issues

Is /clock being published?

I just use the obstacle_extractor node with only one 2D laser scanner, but when I want to receive the data on the topic raw_obstacles, the following error occurs:

WARNING: no messages received and simulated time is active.
Is /clock being published?

What should I do?
Thanks a lot.

[FATAL] [1672077317.645195442]: [Obstacle Extractor]: Unexpected error

I got this error when trying to run the obstacle detector on a Raspberry Pi 4 with 4 GB of RAM.

I am unable to run the obstacle extractor node.

I have checked that there are values published by the scans merger on pcl.

I am not sure whether this is a computational problem of the Raspberry Pi or something else. I have tried to run the same code on my PC, and everything seems fine until I clone the code onto the Raspberry Pi.

I have also installed all dependencies, including Armadillo C++, OpenBLAS and LAPACK, on the Raspberry Pi; every node works fine except for the obstacle extractor.

Is anyone doing the same thing, i.e. trying to run the obstacle detector on a Raspberry Pi? Please share your opinion if available. Thanks very much.

Here is my launch file:

<launch>
    
    <!-- Activate rplidar (Publish laser scan data to topic /scan) -->
    <node name="rplidarNode"          pkg="rplidar_ros"  type="rplidarNode" output="screen">
        <param name="serial_port"         type="string" value="/dev/rplidar"/>  
        <param name="serial_baudrate"     type="int"    value="256000"/>
        <param name="frame_id"            type="string" value="front_scanner"/>
        <param name="inverted"            type="bool"   value="false"/>
        <param name="angle_compensate"    type="bool"   value="true"/>
        <remap from="scan" to="front_scan"/>
    </node>

    <!-- Scan Merger -->
    <node name="scans_merger" pkg="obstacle_detector" type="scans_merger_node">
        <param name="active"            value="true"/>
        <param name="publish_scan"      value="true"/>
        <param name="publish_pcl"       value="true"/>

        <param name="ranges_num"        value="1000"/>

        <param name="min_scanner_range" value="0.05"/>
        <param name="max_scanner_range" value="10.0"/>

        <param name="min_x_range"       value="-10.0"/>
        <param name="max_x_range"       value="10.0"/>
        <param name="min_y_range"       value="-10.0"/>
        <param name="max_y_range"       value="10.0"/>

        <param name="fixed_frame_id"   value="map"/>
        <param name="target_frame_id"   value="robot"/>
    </node>

    <!-- Obstacle Extraction -->
    <node name="obstacle_extractor" pkg="obstacle_detector" type="obstacle_extractor_node">
        <param name="active"               value="true"/>
        <param name="use_scan"             value="true"/>
        <param name="use_pcl"              value="true"/>

        <param name="use_split_and_merge"    value="true"/>
        <param name="circles_from_visibles"  value="true"/>
        <param name="discard_converted_segments" value="true"/>
        <param name="transform_coordinates"  value="true"/>

        <param name="min_group_points"     value="5"/>

        <param name="max_group_distance"   value="0.1"/>
        <param name="distance_proportion"  value="0.00628"/>
        <param name="max_split_distance"   value="0.2"/>
        <param name="max_merge_separation" value="0.2"/>
        <param name="max_merge_spread"     value="0.2"/>
        <param name="max_circle_radius"    value="0.07"/>
        <param name="min_circle_radius"    value="0.05"/>
        <param name="max_circle_radius2"    value="0.09"/>
        <param name="min_circle_radius2"    value="0.08"/>
        <param name="radius_enlargement"   value="0.3"/>

        <param name="min_x_limit"    value="-3.0"/>
        <param name="min_y_limit"    value="-3.0"/>
        <param name="max_x_limit"    value="3.0"/>
        <param name="max_y_limit"    value="3.0"/>

        <param name="frame_id"             value="front_scanner"/>
    </node>

    <node name="obstacle_tracker" pkg="obstacle_detector" type="obstacle_tracker_node">
        <param name="active"                  value="true"/>

        <param name="loop_rate"               value="20"/>
        <param name="tracking_duration"       value="2.0"/>
        <param name="min_correspondence_cost" value="0.6"/>
        <param name="std_correspondence_dev"  value="0.15"/>
        <param name="process_variance"        value="0.1"/>  
        <param name="process_rate_variance"   value="0.1"/>  
        <param name="measurement_variance"    value="1.0"/> 

        <param name="frame_id"                value="front_scanner"/>

    </node>

    <!-- Obstacle Publisher -->
    <node name="obstacle_publisher" pkg="obstacle_detector" type="obstacle_publisher_node">
        <param name="active"           value="true"/>
        <param name="reset"            value="false"/>

        <param name="fusion_example"   value="false"/>
        <param name="fission_example"  value="false"/>

        <param name="loop_rate"        value="10.0"/>
        <param name="radius_margin"    value="0.25"/>

        <rosparam param="x_vector">[-3.0, -2.5, -2.5, -1.0, -1.0, -0.5, 2.5, 0.2, 2.0, 4.5, 4.0, 1.5]</rosparam>
        <rosparam param="y_vector">[1.5, 0.0, -2.5, 3.0, 1.0, -4.0, -3.0, -0.9, 0.0, 0.0, 2.0, 2.0]</rosparam>
        <rosparam param="r_vector">[0.5, 0.5, 1.5, 0.5, 0.7, 0.5, 1.5, 0.7, 0.7, 1.0, 0.5, 1.0]</rosparam>
        <rosparam param="vx_vector">[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]</rosparam>
        <rosparam param="vy_vector">[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]</rosparam>

        <param name="frame_id"         value="map"/>
    </node>

    <!-- Static transform publisher -->
    <node pkg="tf" type="static_transform_publisher" name="front_scanner_to_base" args="0 0 0 0 0 0 robot front_scanner 100" />
    <node pkg="tf" type="static_transform_publisher" name="base_to_map" args="0 0 0 0 0 0 map robot 100" />
</launch>

Offset in detected obstacle position

@tysik first of all, I wanted to thank you for making this package, and mention that I am using it in a project that I am working on called MAVs. Your package is documented to some extent here for our project.

We are running into a couple things that we are having trouble explaining and I am wondering if maybe you know what may be causing them or how to explain them.

First of all, here are some results that we are getting when we run demoA

[screenshot of results]

Questions:

  1. There is a dark red circle that is offset from the green circle (filled with light red). Can you please explain what these different circles are and why they are offset? I thought that it might be radius_enlargement, but it is offset, so I am not sure.

  2. The light blue points are the point cloud that gets fed into the algorithm. For the top right obstacle, it can be seen that the predicted obstacle does not encompass all of the blue points. Is this expected? I would think that it would predict that an obstacle would be there.

  3. It looks like the predicted obstacles are always towards the LiDAR; is this because the obstacle_extractor node fits a line to the visible data and then a circle over this line?

Thank you very much!

Catkin_make failed

I'm new to ROS so my issue is not a big deal, but if I try to run the "catkin_make" command I always get the same error: "The specified base path "/home/felix/repofolder/obstacle_detector" contains a package but "catkin_make" must be invoked in the root of workspace"
How can I build and run the obstacle detector?

How to get Bounding Box from detected obstacles?

Hi

I would like to use this package to get the bounding box (axis-aligned bounding box) of the detected obstacles (circles and segments). Would that be possible? For each of the segment and circle obstacles, a bounding box would be fitted. To fit a box, could the min and max coordinates of the obstacles be used? Or is there a better way to do it?

Calculate the obstacle velocity

Hi, tysik! Thanks for providing this obstacle_detector package! It's awesome!

I have a problem with the calculation of velocity: I can't find the place where you calculate it.
The extractor node initializes the velocity to 0 and publishes it on the topic raw_obstacles.
The tracking node subscribes to the topic raw_obstacles and uses a Kalman filter to update the value.
But the custom type TrackedObstacle initializes the velocity from the obstacle's velocity, which should be 0. Can you tell me how you calculate the velocity?
Thank you very much!!

YanHao,Chiu

Can width of obstacle be limited?

Is there an existing parameter I can modify to set a limit (both low and high limits) on the width of the obstacle to be published?

Code crashes on Windows - iterator problem

Hi, tysik! Thanks for providing this obstacle_detector package!

I am using your code on Windows with Visual Studio. I found there are a few cases in ObstacleExtractor where you set "i = --temp_itr". However, if temp_itr is the beginning of the container, this line may crash.

Could you please take a look?

Thanks

Calibration of two Lasers

Thanks for your package. It works with one laser (SICK S300).
I have two lasers whose exact relative position I don't know. Do you have any idea about the calibration of the two lasers?

how to use this package

Hi,

I have a robot with a 2D Hokuyo LiDAR publishing a /scan message. I would like to use the obstacle_extractor to detect circle and segment obstacles. I have written a launch file as below. However, I did not see a topic with obstacle_detector/Obstacles messages show up in "rostopic list". How can I run this node? Please note that I do not need obstacle_tracker or scans_merger. Also, how can I display the obstacles in Rviz?

<launch>
  <node name="obstacle_extractor" pkg="obstacle_detector" type="obstacle_extractor_node">
    <param name="active"               value="true"/>
    <param name="use_scan"             value="true"/>
    <param name="use_pcl"              value="false"/>
    <param name="use_split_and_merge"    value="true"/>
    <param name="circles_from_visibles"  value="true"/>
    <param name="discard_converted_segments" value="true"/>
    <param name="transform_coordinates"  value="true"/>
    <param name="min_group_points"     value="5"/>
    <param name="max_group_distance"   value="0.1"/>
    <param name="distance_proportion"  value="0.00628"/>
    <param name="max_split_distance"   value="0.2"/>
    <param name="max_merge_separation" value="0.2"/>
    <param name="max_merge_spread"     value="0.2"/>
    <param name="max_circle_radius"    value="0.6"/>
    <param name="radius_enlargement"   value="0.3"/>
    <param name="frame_id"             value="map"/>
  </node>
</launch>

Enhancement request: 3D tracking capability

@tysik,

While searching for ROS packages that could track noisy object detections I came across your package. I really like the well-organized, modular architecture of this package, where each node/nodelet fulfils a well-defined task.

I have a suggestion to extend the obstacle_tracker node to track in full 3D space. While it is not required for this package, I believe the defined in/out topics make a good case for using this particular node with other detection nodes (that operate on radar/point clouds). I think that enhancement would not affect the currently intended functionality, since 2D tracking can be thought of as a special case of 3D tracking where z is always 0.

Please let me know your thoughts. If you think it's worthwhile but would welcome a PR rather than spending your valuable time on it, let me know. I would be happy to chip in.

Looking forward to a positive response.

Problem with Obstacles.h

Hi,

We have read through one of your previous replies to a problem that seems very similar to ours regarding the missing Obstacles.h. We tried your suggestion and commented out the obstacle_extractor and tracker for the first build, but we still were not able to see the file. We did a grep for it in the entire workspace and no such file existed. Is there somewhere else that we should be looking for it? Or are there any other suggestions you might have to create this Obstacles.h?

Thanks

detection lines and circles

Hi:

I am using obstacle_detector.cpp as a standalone C++ class in my application. Currently it works very well for detecting traffic cones as circles. Now I also want to detect walls as long straight lines. Could you please let me know how I can detect circular obstacles and walls at the same time? It seems you already have that in the code, but I could not figure out how to get the circles and lines separately.

Thanks,

Hangs after changing frame ids

When changing the frame id from the panels, Rviz hangs. It seems that the service updateParams does not respond. Maybe a try/catch should be used instead of if (!waitForTf).

About the project package

Hi, tysik! Thanks for providing this obstacle_detector package!

I want to use this package in my project, and I have a question:
can this package track obstacles in real time?

Thanks!

Why is the Armadillo C++ library needed?

I saw that Armadillo C++ was used in "figure_fitting.h" to solve a linear regression. Is this the only place the library is needed?

Is point_set.is_visible correct?

I am using your code in one of my projects and in general it works very well. But I have a question I hope you can help with.

In ObstacleExtractor.cpp function "groupPoints", you use the following line:

point_set.is_visible = (abs(sin_d) > sin_dp || range < prev_range);

But I think it should be

point_set.is_visible = (abs(sin_d) > sin_dp || range > prev_range);

Could you please confirm?

Thank you very much.

Question regarding plugin

Hello!
I'm currently having a problem with not being able to open the panels in Rviz after launch, so for now I'm just changing the values of the nodes in the launch file.
I'm very new to ROS and to using this repo, so is there something special I need to do to get the panels to open? Or does someone have any information that can point me in the right direction?
Friendly regards
Marcus

Issues moving the LiDAR

@tysik, I hope that all is well with you!

I am still running into issues when I try to move the vehicle. It seems to work OK when the sensor is in one place, but when you go to move it, the output on the Rviz screen bounces all over the place and is obviously incorrect.

Do you think that a static transform like this one
https://github.com/tysik/obstacle_detector/blob/master/launch/demo.launch#L8
is appropriate even when the vehicle is moving?

Or is there another way to do this?

Thanks!

About the lack of Obstacles.h

Hi tysik!
Thanks for your package of obstacle detector.
I've found a problem in the file obstacle_extractor.h, line 43: "#include <obstacle_detector/Obstacles.h>". But I can't find the file "Obstacles.h" in the corresponding directory. Could you tell me where it is? Thanks!

Circles extraction

Hi, tysik! Thanks for providing this obstacle_detector package!

I am planning to incorporate this package into Autoware and am having some problems extracting circles. In Rviz, I can see that the segments have been extracted and displayed; however, some segments are not converted to circles even though their radius should be smaller than the threshold. What do you think could cause this issue?

apply question

Hello, my robot is equipped with a 2D lidar and a depth camera. I want to use this package on my TurtleBot, but I do not know how to use it. Please help me. Thanks.

Subscribe to circles-center-x

Hello,
Do you know why I can't subscribe to circles?
I am trying to get the last message but it says the index is out of range.

# Fixed-up version of the snippet: imports added, indentation repaired, the
# shared result kept at module level, and the circles list checked before
# indexing (an empty list would raise the reported "out of index" error).
import rospy
from geometry_msgs.msg import PoseWithCovariance
from obstacle_detector.msg import Obstacles

result = PoseWithCovariance()

def callback(msg):
    if msg.circles:                       # guard against an empty list
        result.pose.position.x = msg.circles[-1].center.x

def main():
    rospy.init_node("republisher")
    pub = rospy.Publisher("target_pos_lidar", PoseWithCovariance, queue_size=1000)
    rospy.Subscriber("raw_obstacles", Obstacles, callback)
    r = rospy.Rate(10)
    while not rospy.is_shutdown():
        pub.publish(result)
        r.sleep()

if __name__ == "__main__":
    main()

Does this node support ROS Noetic?

We are planning on using this node for a group project for a self-driving RC car, but we are unsure which distributions support this node. Does it work with all distributions? If not, which ones are known to work? We are still relatively new to ROS, so if there is an easy way to know which nodes work with which versions, I would love to know.

Thank you in advance,
Jesper Jensen

demo.launch not working

Hello, I tried to run "$ roslaunch obstacle_detector demo.launch" and it returned the following error:

[screenshot of the error]

Circles extraction

Hi,
I've been using the obstacle_detector package for a while now, but I still don't know how to fix this problem.

When the lidar is near some walls, I can see in Rviz, through the Obstacle Extractor, some circles even though they shouldn't appear. I have tried to modify some parameters of the Obstacle Extractor but still can't get rid of these circles.
Is there a solution for this?
Example here: https://prnt.sc/qy0tut
Of course this is just one example, but this happens several times as the lidar moves.
Thank you
