ivlabs / person_following_bot

Algorithm for user tracking and following (turtle bot control)

License: MIT License

robotics deep-learning convolutional-neural-networks depth-image machine-learning ros hacktoberfest person-following-robot


Person Following Robot


This repository implements an algorithm to follow a person autonomously. The algorithm has been successfully tested on a TurtleBot2. A hand gesture for stopping/starting has also been added for easier target acquisition, and attempts have been made to reduce the effect of occlusion.

If you use this repository, please cite our paper and star the repo. You can also request the paper on ResearchGate or contact the authors (we typically reply within a day).

In case you find any problem with the code or repo, let us know by raising an issue.

Table of contents

Working Demo

YouTube Link

Results

Abstract

Helper robots are widely used in various situations, for example at airports and railway stations. This paper presents a pipeline to multiplex the tracking and detection of a person in dynamic environments using a stereo camera in real time. Recent developments in object detection using ConvNets have led to robust person detection. These deep convolutional neural networks generally fail to run at high frame rates on devices with less computing power. Trackers are also used to retain the identity of the target person as well as impose fewer constraints on hardware. A concept of multiplexed detection and tracking is used, which makes the pipeline several times faster. TurtleBot2 is used for prototyping the robot and tuning the motion controller. Robot Operating System (ROS) is used to set up communication between the various nodes of the pipeline. The results were comparable to current state-of-the-art person followers and can be readily used in day-to-day life.

Project Pipeline

The complete pipeline for the person-following robot is summarized in the flow chart included in the repository.
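To make the multiplexing idea concrete, here is a minimal sketch (not the repository's actual code; `detect` and `track` are stand-in callables for a YOLOv3 detector and a lightweight tracker): run the expensive detector only every few frames and the cheap tracker in between.

```python
def multiplex(frames, detect, track, period=10):
    """Yield one bounding box per frame, re-running the detector
    every `period` frames and tracking in between."""
    box = None
    for i, frame in enumerate(frames):
        if box is None or i % period == 0:
            box = detect(frame)        # slow but accurate (e.g. YOLOv3)
        else:
            box = track(frame, box)    # fast, keeps target identity
        yield box
```

With `period=10` the detector runs on 2 of every 20 frames, which is where the speed-up comes from.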

Instructions

Setting up remote server for faster processing:

Host Machine (machine running roscore)

```shell
export ROS_MASTER_URI=http://192.168.0.113:11311  # replace with host machine IP
export ROS_IP=192.168.0.123                       # replace with host machine IP
export ROS_HOSTNAME=192.168.0.123                 # replace with host machine IP
```

Remote Machine

```shell
export ROS_MASTER_URI=http://192.168.0.113:11311  # replace with host machine IP
export ROS_IP=192.168.0.123                       # replace with remote machine IP
export ROS_HOSTNAME=192.168.0.123                 # replace with remote machine IP
```

Run the tracking code

  • Install all requirements using `pip3 install -r requirements.txt`. Hardware requirements can be found here.
  • Download the YOLOv3 weights file from here and place it inside the cloned repo.
  • Run `python3 follow.py` to see the output.

Dependencies

Hardware

It is recommended to use an NVIDIA GPU for faster processing.

  • TurtleBot 2 platform
  • Intel Realsense Depth Camera
  • PC / onboard computer (e.g., NVIDIA Jetson)

Software

  • Python 3.6+
  • OpenCV-python: link
  • PyTorch: link
  • pyrealsense2: link
  • Robot Operating System (ROS): Link

Citations

If you find our research useful and want to use it, please cite our paper:

```bibtex
@InProceedings{10.1007/978-981-15-3639-7_98,
  author    = "Agrawal, Khush and Lal, Rohit",
  editor    = "Kalamkar, Vilas R. and Monkova, Katarina",
  title     = "Person Following Mobile Robot Using Multiplexed Detection and Tracking",
  booktitle = "Advances in Mechanical Engineering",
  year      = "2021",
  publisher = "Springer Singapore",
  address   = "Singapore",
  pages     = "815--822",
  isbn      = "978-981-15-3639-7"
}
```

Contributors

akshaykulkarni07, khush3, rajghugare19, take2rohit


Issues

How to use follow.py?

Hi, I've built the software environment on a Jetson TX2 as described. How do I run follow.py next? I have tried `rosrun follow.py`, but it didn't work. Could you give me a usage guide?
Thanks.

Usage of Gaussian Filter

In your paper, you've mentioned this: "However, the depth contains a lot of noise and hence just selecting a single pixel to estimate depth can lead to a highly noisy estimate of depth. To overcome this problem, we considered a region of size 50×50 pixels in the center of the tracking window (100×100 pixels)."

However, in the code, if tracking is successful, depth is still extracted at the centroid:

```python
calc_x, calc_z = (x + w/2), depth_frame.get_distance(x + w//2, y + h//2)
```

Moreover, I do not see how the get_smoother function is being used:

```python
curr_dist = get_smoother(depth_frame, i)
distance_list.append(curr_dist)
if 1 < curr_dist < 8:
    person_in_range.append(i)
    # yolo_box.remove(i)
    # print('distance : {:4.2f}'.format(curr_dist))
```

`person_in_range.append(i)`: what is this line doing? After appending YOLO's bounding box, it doesn't seem to be used anywhere.

Am I missing something? It would help if you could clarify these discrepancies. Thanks!
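For readers hitting the same question, the region-based estimate described in the paper can be sketched as follows (a hypothetical `region_depth` helper over a depth array, not the repo's `get_smoother`; using a median is this sketch's choice for robustness):

```python
import numpy as np

def region_depth(depth, cx, cy, half=25):
    """Robust depth estimate: median over a (2*half)x(2*half) patch
    centred at pixel (cx, cy). `depth` is an HxW array of metres."""
    h, w = depth.shape
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    x0, x1 = max(cx - half, 0), min(cx + half, w)
    patch = depth[y0:y1, x0:x1]
    valid = patch[patch > 0]   # RealSense reports 0 for invalid pixels
    return float(np.median(valid)) if valid.size else 0.0
```

Averaging over the 50×50 region (half=25) rather than reading one centroid pixel suppresses the per-pixel noise the paper describes.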

The code seems to freeze after choosing ROI and pressing ENTER

The follow.py code seems to freeze after we choose the ROI and press Enter; another issue is that the frames lag a lot. No error messages are displayed.
The hardware configuration used is:

  • Intel RealSense D455 depth camera
  • NVIDIA Jetson Nano Developer Kit

Add license

Hi,

I would be pleased to use parts of your project, but please attach a license file to make it formal.

Best,
Adam

Camera placement and controller working

Great job on the package! Skimming the repo, I could not find documentation on running this on a custom robot. Where does it assume the camera is mounted, and does it depend on the transform from /odom to the camera's frame? In the code, I didn't understand how these are calculated:

```python
p_error_l = z - 1.5
p_error_a = x - 640
```

Where do these constants, 1.5 and 640, come from? Do they encode some previously assumed position of the camera?

Thanks!
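One plausible reading (an assumption, not confirmed by the repo's docs): 1.5 is the desired following distance in metres and 640 is the horizontal centre of a 1280-px-wide image, so both lines are proportional errors. A minimal sketch of such a controller, with illustrative gains:

```python
def control(z, x, kp_lin=0.4, kp_ang=0.002,
            target_dist=1.5, img_center=640):
    """Proportional controller sketch: linear velocity from the depth
    error, angular velocity from the horizontal pixel offset.
    z is the person's distance in metres, x their pixel column."""
    lin = kp_lin * (z - target_dist)   # move forward if the person is far
    ang = -kp_ang * (x - img_center)   # turn toward the person
    return lin, ang
```

If this reading is right, neither constant encodes the camera pose; they are the distance set-point and the image centre, so the camera is implicitly assumed to face forward along the robot's heading.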
