
experiment video

https://youtu.be/RVfYF8jYBsQ

navigation_among_pedestrians

We propose a model-based deep reinforcement learning algorithm for navigation and collision-free motion planning among crowds. The baselines include EGO (https://ieeexplore.ieee.org/abstract/document/9197148), LSTM_EGO (https://ieeexplore.ieee.org/abstract/document/9981743), RGL (https://ieeexplore.ieee.org/abstract/document/9340705), SARL (https://ieeexplore.ieee.org/abstract/document/8794134), CADRL (https://ieeexplore.ieee.org/abstract/document/7989037), LSTM_RL (https://ieeexplore.ieee.org/abstract/document/8593871), and ORCA (https://link.springer.com/chapter/10.1007/978-3-642-19457-3_1). We use the open-source project https://github.com/ChanganVR/RelationalGraphLearning to implement RGL, SARL, CADRL, and LSTM_RL. Because the EGO implementation is not publicly available, we implemented it based on our own understanding of the paper.

prerequisites

folder introduction

crowd_nav_lidar_scan_ego

  • tensorflow-gpu
  • enter the directory C_library and compile the Cython extension with python setup.py build_ext --inplace. This Cython module is used to simulate the LiDAR scan. If any error occurs, compile the library within the base Anaconda environment
  • revise the configuration in train.py
  • in particular, select whether to use imitation learning in train.py via parser.add_argument('--if_orca', default=True, action='store_true'), and specify your GPU with gpu_index = 0 (see the sketch after this list). Without imitation learning, the training results are usually very poor
  • train your model: python train.py
  • test: python test.py, loading your own network weights, e.g. model_weight_file = os.path.join(args.output_dir, 'weight_episode_12000.h5')
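
The snippet below gathers these options into one minimal sketch. Only the quoted --if_orca argument, gpu_index, and the checkpoint filename come from the repository; the other argument names and the GPU-selection line are illustrative assumptions about train.py/test.py.

```python
# Minimal sketch of the options above; names not quoted in the README are assumptions.
import os
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--output_dir', type=str, default='data/output')  # assumed argument name
parser.add_argument('--if_orca', default=True, action='store_true')   # imitation learning via ORCA demonstrations
args = parser.parse_args()

gpu_index = 0  # which GPU to train on
os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_index)  # one common way to pin the GPU (assumption)

# in test.py: load a specific checkpoint by episode number
model_weight_file = os.path.join(args.output_dir, 'weight_episode_12000.h5')
```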

CADRL_LSTMRL_SARL_RGL

  • the code is based on the open-source implementation from https://github.com/ChanganVR/RelationalGraphLearning
  • pytorch (CPU only; GPU is not supported)
  • enter the folder RelationalGraphLearning and install the project: pip install -e .
  • select whether to use imitation learning in crowd_nav/train.py: parser.add_argument('--il_random', default=False, action='store_true')
  • revise the output (log) directory in crowd_nav/train.py: parser.add_argument('--output_dir', type=str, default='data/cadrl')
  • choose the config type in crowd_nav/train.py: parser.add_argument('--config', type=str, default='configs/icra_benchmark/cadrl.py'). cadrl.py, lstm_rl.py, sarl.py, and rgl.py are configurations with a fixed number of humans; cadrl_real.py, lstm_rl_real.py, and rgl_real.py are configurations with a variable number of humans. Note that sarl does not support a variable number of humans
  • set the number of initial imitation-learning episodes in crowd_nav/configs/icra_benchmark/config.py (or config_real.py): imitation_learning.il_episodes = 2000
  • train your model in the crowd_nav directory: python train.py --policy cadrl; the policy can be cadrl, lstm_rl, sarl, or rgl
  • test the model with python test.py --policy rgl; change the model directory via parser.add_argument('-m', '--model_dir', type=str, default='/data/rgl') and select the network weights with model_weights = os.path.join(args.model_dir, 'rl_model_4.pth') (see the sketch after this list)
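
For convenience, the sketch below collects the crowd_nav options quoted above in one place; everything except those quoted arguments, defaults, and file names is an illustrative assumption.

```python
# Minimal sketch of the crowd_nav train/test options; only the quoted arguments
# and defaults come from the repository.
import os
import argparse

parser = argparse.ArgumentParser()
# imitation learning on/off
parser.add_argument('--il_random', default=False, action='store_true')
# where checkpoints and logs are written
parser.add_argument('--output_dir', type=str, default='data/cadrl')
# fixed-human-number configs: cadrl.py, lstm_rl.py, sarl.py, rgl.py
# variable-human-number configs: cadrl_real.py, lstm_rl_real.py, rgl_real.py
parser.add_argument('--config', type=str, default='configs/icra_benchmark/cadrl.py')
# used by test.py to locate a trained model
parser.add_argument('-m', '--model_dir', type=str, default='/data/rgl')
args = parser.parse_args()

model_weights = os.path.join(args.model_dir, 'rl_model_4.pth')
```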

MRLCF

  • the code is based on the open-source implementation from https://github.com/danijar/dreamerv2
  • tensorflow-gpu
  • train your model: python train.py --logdir ./logdir/online/1 --configs online. Another config, 'quadruped_motion_capture', is used for a quadruped robot. If you want to use your own robot, revise configs.yaml (see the sketch after this list)
  • visualize training: tensorboard --logdir ./logdir
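
Below is a minimal sketch of how a named config such as online or quadruped_motion_capture could be selected from configs.yaml; the file layout (a defaults section plus named override sections) follows the dreamerv2 convention and is an assumption about this fork.

```python
# Sketch only: assumes configs.yaml has a 'defaults' section plus named
# override sections such as 'online' and 'quadruped_motion_capture'.
import argparse
import yaml  # pip install pyyaml

parser = argparse.ArgumentParser()
parser.add_argument('--logdir', type=str, default='./logdir/online/1')
parser.add_argument('--configs', type=str, default='online')
args = parser.parse_args()

with open('configs.yaml') as f:
    all_configs = yaml.safe_load(f)

config = dict(all_configs.get('defaults', {}))
config.update(all_configs.get(args.configs, {}))  # named config overrides the defaults
print(config)
```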

ORCA

  • run the ORCA policy: python ORCA_policy.py
  • change the number of humans in crowd_sim.py: self.human_num = 5 (a minimal ORCA sketch follows this list)
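
Below is a minimal sketch of the ORCA idea using the Python-RVO2 bindings (https://github.com/sybrenstuvel/Python-RVO2); ORCA_policy.py may wrap the simulator differently, so the parameter values here are illustrative assumptions.

```python
# Sketch of one ORCA step with Python-RVO2; parameter values are assumptions.
import rvo2

human_num = 5
# PyRVOSimulator(time_step, neighbor_dist, max_neighbors, time_horizon,
#                time_horizon_obst, radius, max_speed)
sim = rvo2.PyRVOSimulator(0.25, 10.0, human_num, 5.0, 5.0, 0.3, 1.0)

robot = sim.addAgent((0.0, -4.0))
humans = [sim.addAgent((float(i) - 2.0, 4.0)) for i in range(human_num)]

# each agent prefers to move toward its goal at unit speed
sim.setAgentPrefVelocity(robot, (0.0, 1.0))
for h in humans:
    sim.setAgentPrefVelocity(h, (0.0, -1.0))

sim.doStep()  # ORCA computes collision-free velocities for all agents
print(sim.getAgentPosition(robot), sim.getAgentVelocity(robot))
```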

RNN_RL

  • the code is based on the open-source implementation from https://github.com/AntoineTheb/RNN-RL
  • pytorch-gpu
  • enter the directory C_library and compile the Cython extension with python setup.py build_ext --inplace. This Cython module is used to simulate the LiDAR scan. If any error occurs, compile the library within the base Anaconda environment
  • configure the environment in main.py: parser.add_argument("--complex_env", default=False, action="store_true"). False means a simple environment with a fixed number of humans; True means a complex environment with variable numbers of humans and static obstacles. Humans are circles and static obstacles are rectangles
  • train your model with python main.py, setting parser.add_argument("--load_model", type=str, default="") and parser.add_argument("--test", default=False, action="store_true")
  • test your model with python main.py, setting parser.add_argument("--load_model", type=str, default="/models/step_60000") and parser.add_argument("--test", default=True, action="store_true") (see the sketch after this list)
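
The sketch below restates the main.py options quoted above; only the quoted argument names and defaults come from the repository, the rest is illustrative.

```python
# Minimal sketch of the main.py flags; only the quoted names/defaults are from the repo.
import argparse

parser = argparse.ArgumentParser()
# False: simple environment, fixed human number; True: complex environment,
# variable numbers of humans and static obstacles
parser.add_argument("--complex_env", default=False, action="store_true")
# empty string trains from scratch; a checkpoint path (e.g. "/models/step_60000") is loaded for testing
parser.add_argument("--load_model", type=str, default="")
# False for training, True for testing
parser.add_argument("--test", default=False, action="store_true")
args = parser.parse_args()
```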

RNN_RL_RAL_Image

  • the code is based on the open-source implementation from https://github.com/AntoineTheb/RNN-RL
  • pytorch-gpu
  • the RNN_RL baseline uses LiDAR scans as the observation, whereas the RNN_RL_RAL_Image baseline uses bird's-eye-view occupancy maps, the same observation as our approach (see the sketch after this list)
  • train your model: python main.py
  • it only supports training in simple environments with a fixed number of humans
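
As an illustration of what a bird's-eye-view occupancy map is, the sketch below rasterizes circular agents into a square grid; the grid size, resolution, and helper name are assumptions, not the exact preprocessing used in this repository.

```python
# Illustrative rasterization of circular agents into a bird's-eye-view grid.
import numpy as np

def occupancy_map(agent_positions, radius=0.3, size=8.0, resolution=0.1):
    """Mark cells whose centers fall inside any agent's circle (grid centered at the robot)."""
    cells = int(size / resolution)
    grid = np.zeros((cells, cells), dtype=np.float32)
    ys, xs = np.meshgrid(np.arange(cells), np.arange(cells), indexing='ij')
    cx = (xs + 0.5) * resolution - size / 2  # cell-center x coordinates
    cy = (ys + 0.5) * resolution - size / 2  # cell-center y coordinates
    for px, py in agent_positions:
        grid[(cx - px) ** 2 + (cy - py) ** 2 <= radius ** 2] = 1.0
    return grid

bev = occupancy_map([(0.0, 1.0), (-1.5, 2.0), (2.0, -0.5)])
print(bev.shape, bev.sum())
```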

unitree_legged_sdk

  • this is a ROS package; compile it with catkin_make
  • motion_capture.launch: use this if you have a motion capture system to localize the robot
  • control_via_keyboard: control the robot with the keyboard
  • speed_calibration.launch: the commanded speed does not match the real speed, e.g. a commanded forward speed of 0.3 m/s may correspond to a real speed of about 0.2 m/s
  • unitree_planning: receives the command velocities and controls the real robot (see the sketch after this list)
  • run the launch file as root (sudo su) to get the required memory permissions
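
A minimal rospy sketch of the "receive command velocities" side of unitree_planning is shown below; the /cmd_vel topic name and node name are assumptions for illustration, not the actual node in this package.

```python
# Sketch only: topic and node names are assumptions, not the real unitree_planning node.
import rospy
from geometry_msgs.msg import Twist

def on_cmd_vel(msg):
    # here the real node would forward the command to the robot's low-level controller
    rospy.loginfo("vx=%.2f, vy=%.2f, yaw_rate=%.2f",
                  msg.linear.x, msg.linear.y, msg.angular.z)

if __name__ == '__main__':
    rospy.init_node('unitree_planning_sketch')
    rospy.Subscriber('/cmd_vel', Twist, on_cmd_vel)
    rospy.spin()
```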

A_LOAM

  • the code is based on the open-source implementation from https://github.com/HKUST-Aerial-Robotics/A-LOAM
  • it is a ROS package; compile it with catkin_make
  • aloam_velodyne_VLP16.launch: 3D SLAM to localize the robot
  • human_detection.launch: detects humans within a specified area and localizes the robot
  • gmapping.launch: creates a 2D grid map
  • quadruped_dwa.launch: runs move_base

Real experiments

  • the robot moves within a 4 m x 4 m area
  • first, compile your Velodyne ROS package, connect the Velodyne LiDAR sensor, and use the command curl http://192.168.1.201/cgi/short_dist --data "enable=on" to shorten the LiDAR's default minimum range. The default minimum range is 0.5 m and the shortened range is 0.1 m; note that shortening the range degrades accuracy and stability
  • second, compile the A_LOAM ROS package and run roslaunch aloam_velodyne human_detection.launch. Wait about 5 seconds before letting pedestrians move into the area
  • third, compile the Unitree ROS package and run roslaunch unitree_legged_sdk unitree_planning.launch
  • fourth, run the DRL-based motion planner
  • source all ROS packages in your ~/.bashrc, e.g. source XXX/devel/setup.bash
  • please study our source code for more details
  • feel free to contact us ([email protected]) if you have any problems
