
DRL-VO: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles

Implementation code for our paper "DRL-VO: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles", published in IEEE Transactions on Robotics (TRO), 2023. This repository contains the code for training and testing the DRL-VO control policy in a 3D human-robot interaction Gazebo simulation. Video demos can be found at multimedia demonstrations. Below are two GIFs showing our DRL-VO control policy navigating in simulation and in the real world.

  • Simulation: simulation_demo
  • Real world: hardware_demo

Introduction:

Our DRL-VO control policy is a novel learning-based control policy with strong generalizability to new environments that enables a mobile robot to navigate autonomously through spaces filled with both static obstacles and dense crowds of pedestrians. The policy uses a unique combination of input data to generate the desired steering angle and forward velocity: a short history of lidar data, kinematic data about nearby pedestrians, and a sub-goal point. It is trained in a reinforcement learning setting using a reward function that contains a novel term based on velocity obstacles, which guides the robot to actively avoid pedestrians while moving toward the goal. The DRL-VO control policy was tested in a series of 3D simulated experiments with up to 55 pedestrians and an extensive series of hardware experiments using a Turtlebot 2 robot with a 2D Hokuyo lidar and a ZED stereo camera. In addition, our DRL-VO control policy ranked 1st in the simulated competition and 3rd in the final physical competition of the ICRA 2022 BARN Challenge, which was run in highly constrained static environments using a Jackal robot. The deployment code for the ICRA 2022 BARN Challenge can be found at "nav-competition-icra2022-drl-vo".
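
The velocity-obstacle idea behind the reward term can be pictured with a simple geometric test. The sketch below is illustrative only (not the authors' implementation; all names are hypothetical): it checks whether a candidate robot velocity, taken relative to a pedestrian, points into the collision cone defined by the combined radii of robot and pedestrian. Velocities inside that cone are on a collision course and would be penalized by such a reward term.

import numpy as np

def inside_velocity_obstacle(p_rel, v_rel, r_sum):
    # p_rel: pedestrian position relative to the robot, shape (2,)
    # v_rel: robot velocity relative to the pedestrian, shape (2,)
    # r_sum: robot radius + pedestrian radius
    dist = np.linalg.norm(p_rel)
    if dist <= r_sum:                      # already overlapping
        return True
    speed = np.linalg.norm(v_rel)
    if speed < 1e-6:                       # no relative motion, no collision course
        return False
    half_angle = np.arcsin(r_sum / dist)   # half-angle of the collision cone
    cos_a = np.dot(p_rel, v_rel) / (dist * speed)
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
    return angle < half_angle              # relative velocity aims into the cone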

Requirements:

  • Ubuntu 20.04
  • ROS-Noetic
  • Python 3.8.5
  • PyTorch 1.7.1
  • TensorBoard 2.4.1
  • Gym 0.18.0
  • Stable-Baselines3 1.1.0

Installation:

We provide two ways to install our DRL-VO navigation packages and their dependencies on Ubuntu 20.04:

  1. install the packages directly on your PC;
  2. use a pre-built Singularity container (no environment configuration needed).

1) Independent installation on PC:

  1. install ROS Noetic by following the ROS installation documentation.
  2. install required learning-based packages:
pip install torch==1.7.1+cu110 -f https://download.pytorch.org/whl/torch_stable.html
pip install gym==0.18.0 pandas==1.2.1
pip install stable-baselines3==1.1.0
pip install tensorboard psutil cloudpickle
  3. install the DRL-VO ROS navigation packages:
cd ~
mkdir catkin_ws
cd catkin_ws
mkdir src
cd src
git clone https://github.com/TempleRAIL/robot_gazebo.git
git clone https://github.com/TempleRAIL/pedsim_ros_with_gazebo.git
git clone https://github.com/TempleRAIL/drl_vo_nav.git
wget https://raw.githubusercontent.com/zzuxzt/turtlebot2_noetic_packages/master/turtlebot2_noetic_install.sh 
sudo sh turtlebot2_noetic_install.sh 
cd ..
catkin_make
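
After catkin_make finishes, source the workspace overlay (standard for any catkin workspace) so ROS can find the newly built packages, and optionally confirm that the Python dependencies import at the expected versions:

source ~/catkin_ws/devel/setup.bash
python3 -c "import torch, gym, stable_baselines3; print(torch.__version__, gym.__version__, stable_baselines3.__version__)"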

2) Using the Singularity container (all required packages are pre-installed):

  1. install the Singularity software:
cd ~
wget https://github.com/sylabs/singularity/releases/download/v3.9.7/singularity-ce_3.9.7-bionic_amd64.deb
sudo apt install ./singularity-ce_3.9.7-bionic_amd64.deb
  2. download the pre-built "drl_vo_container.sif" image to your home directory.
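
As an optional sanity check, you can inspect the downloaded image with a standard Singularity command (nothing here is specific to this repository):

cd ~
singularity inspect drl_vo_container.sif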

Usage:

Running on PC:

  • train:
roslaunch drl_vo_nav drl_vo_nav_train.launch
  • inference (navigation):
roslaunch drl_vo_nav drl_vo_nav.launch
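
TensorBoard is listed in the requirements, so training progress can presumably be monitored with it; the exact log directory depends on the training configuration, so the path below is a placeholder you will need to replace:

tensorboard --logdir <path_to_drl_vo_training_logs>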

You can then use the "2D Nav Goal" button in RViz to set a goal for the robot, as shown below: sending_goal_demo
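
If you prefer to send a goal from the command line instead of RViz, and assuming the navigation stack exposes the standard move_base goal topic (an assumption; check the launch files for the actual topic name), something like the following should work:

rostopic pub -1 /move_base_simple/goal geometry_msgs/PoseStamped '{header: {frame_id: "map"}, pose: {position: {x: 5.0, y: 2.0, z: 0.0}, orientation: {w: 1.0}}}'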

Running in the Singularity container:

  • train:
cd ~
singularity shell --nv drl_vo_container.sif
source /etc/.bashrc
roslaunch drl_vo_nav drl_vo_nav_train.launch
  • inference (navigation):
cd ~
singularity shell --nv drl_vo_container.sif
source /etc/.bashrc
roslaunch drl_vo_nav drl_vo_nav.launch

You can then use the "2D Nav Goal" button in RViz to set a goal for the robot, as shown below: sending_goal_demo

Citation

@article{xie2023drl,
  title={DRL-VO: Learning to Navigate Through Crowded Dynamic Scenes Using Velocity Obstacles},
  author={Xie, Zhanteng and Dames, Philip},
  journal={arXiv preprint arXiv:2301.06512},
  year={2023}
}
