This is a ROS package in which I have implemented laser object extraction using Python.
To build it, simply add this package to your catkin workspace and run $ catkin_make
The aim of this example is to locate and track the legs of the racks installed in a warehouse while the robot moves through it. The data are provided in a recorded rosbag and consist only of LaserScan and Image sensor readings.
The solution to this task is partially inspired by Przybyła, M. (2017, July). Detection and tracking of 2D geometric obstacles from LRF data. In 2017 11th International Workshop on Robot Motion and Control (RoMoCo) (pp. 135-141). IEEE.
In that work, the author presents a LaserScan segmentation approach based on the distances between adjacent points. The scan is traversed angle by angle, and each point's angle and range are used to decide which LaserScan segment it belongs to. These segments are called PointSets and are shown below:
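The adjacent-point segmentation idea can be sketched as follows. This is a minimal illustration, not the package's actual code; the gap threshold `max_gap` and the function name are assumptions for the example.

```python
import math

def segment_scan(ranges, angle_min, angle_increment, max_gap=0.1):
    """Group consecutive laser points into PointSets.

    A new segment starts whenever the Euclidean distance between two
    adjacent points exceeds max_gap (hypothetical threshold, in meters).
    Invalid readings (inf/nan) also break the current segment.
    """
    segments = []
    current = []
    prev = None
    for i, r in enumerate(ranges):
        if math.isinf(r) or math.isnan(r):
            if current:
                segments.append(current)
            current, prev = [], None
            continue
        # Convert the polar reading (angle, range) to Cartesian coordinates.
        angle = angle_min + i * angle_increment
        point = (r * math.cos(angle), r * math.sin(angle))
        if prev is not None:
            gap = math.hypot(point[0] - prev[0], point[1] - prev[1])
            if gap > max_gap:
                # Large jump between neighbors: close the current PointSet.
                segments.append(current)
                current = []
        current.append(point)
        prev = point
    if current:
        segments.append(current)
    return segments
```

For example, a scan with three readings at ~1 m followed by two at ~3 m would split into two PointSets at the range jump.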
Afterward, every segment is checked against multiple criteria, and those that satisfy all of them can be treated as rack-leg candidates, as shown below:
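A candidate check of this kind might look like the sketch below. The specific criteria (minimum point count and a narrow spatial extent matching a thin leg) and their thresholds are illustrative assumptions, not the exact rules used in this package.

```python
import math

def is_leg_candidate(segment, min_points=3, max_width=0.15):
    """Hypothetical criteria check for a PointSet (list of (x, y) points).

    A segment qualifies as a rack-leg candidate only if it has enough
    points to be reliable and its end-to-end extent is narrow enough
    to match a thin leg profile (max_width in meters, assumed value).
    """
    if len(segment) < min_points:
        return False
    (x0, y0), (x1, y1) = segment[0], segment[-1]
    width = math.hypot(x1 - x0, y1 - y0)
    return width <= max_width
```

Segments that pass every criterion are kept as candidates; the rest are discarded.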
Please add a corresponding rosbag to the /rviz folder, or change the path to point to yours.
To run the example, simply run
roslaunch laser_object_extractor rack_legs_extraction_launch.launch
The launch file will play the rosbag, start the extraction node, and open an RViz window.
It can be clearly seen from both the screenshots and the video recording that, although the rack legs are mostly identified correctly, some candidates are not the rack legs the robot is looking for. There is therefore room for improvement.
Although only LaserScan readings were used here, these rack locations could not have been found from RGB camera readings alone. Even if rack legs can be clearly identified in the image through segmentation or detection, a monocular camera cannot, by its nature, provide a reliable measure of their distance and location on its own.
Therefore, sensor fusion would have to be used to improve detection accuracy. The main limitation of this implementation is that it does not use the camera data for rack detection. Given the proper transformation between the two sensors (unfortunately, this information was not provided) and the camera's parameters (focal length, skew, distortion, etc.), each laser point could be mapped to a corresponding pixel. The image could then be divided into rack-leg and background regions and combined with the computed candidates, which could greatly reduce the number of false positives.
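The laser-to-pixel mapping described above can be sketched with a standard pinhole camera model. The extrinsic transform `T_cam_laser` and the intrinsic matrix `K` are assumed inputs; as noted, this calibration was not provided with the rosbag, so the values here are purely illustrative.

```python
import numpy as np

def project_laser_to_image(points_laser, T_cam_laser, K):
    """Project 3D laser points (N x 3, laser frame) onto the image plane.

    T_cam_laser: 4x4 homogeneous transform from laser frame to camera frame
                 (assumed known from extrinsic calibration).
    K:           3x3 camera intrinsic matrix (focal lengths, skew, principal
                 point); lens distortion is ignored in this sketch.
    Returns an M x 2 array of (u, v) pixel coordinates for the points that
    lie in front of the camera.
    """
    # Append a homogeneous coordinate and move points into the camera frame.
    pts = np.hstack([points_laser, np.ones((len(points_laser), 1))])
    cam = (T_cam_laser @ pts.T)[:3]
    in_front = cam[2] > 0  # discard points behind the image plane
    uv = K @ cam[:, in_front]
    uv = uv[:2] / uv[2]    # perspective divide
    return uv.T
```

With an identity extrinsic transform, a point two meters straight ahead projects onto the principal point, as expected from the pinhole model.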
Finally, it can be seen that at some time instances not all rack legs are identified due to occlusion, but they are detected again once the robot moves to another location. This is a limitation of the laser sensor.
Rack Leg extraction from LaserScan https://youtu.be/YnFh-PRTJPY