
Long-Term Planning with Deep Reinforcement Learning on Autonomous Drones

Home Page: https://sites.google.com/view/longtermplanrl

License: MIT License

Language: Python (100%)
Topics: reinforcement-learning, deep-learning, path-planning, drones, autonomous, autonomous-drones, drl, ppo

Introduction


In this paper, we study a long-term planning scenario based on drone racing competitions held in real life. We conducted this experiment on the framework created for the "Game of Drones: Drone Racing Competition" at NeurIPS 2019. The racing environment was built with Microsoft's AirSim Drone Racing Lab. A reinforcement learning agent, a simulated quadrotor in our case, trained with the Proximal Policy Optimization (PPO) algorithm, was able to successfully compete against another simulated quadrotor running a classical path planning algorithm. Agent observations consist of IMU sensor data, the drone's own GPS coordinates obtained from the simulation, and the opponent drone's GPS information. Using the opponent's GPS information during training helps deal with complex state spaces; serving as expert guidance, it allows for an efficient and stable training process. All experiments performed in this paper can be found and reproduced with the code in our GitHub repository.
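The observation layout described above (IMU readings, own GPS position, opponent GPS position) can be sketched as a flat feature vector. This is a minimal illustration, not the repository's actual code; the field names and ordering are hypothetical.

```python
def build_observation(imu, own_gps, opponent_gps):
    """Concatenate sensor readings into a flat observation vector.

    imu: (ax, ay, az, gx, gy, gz) -- hypothetical accelerometer + gyro triplets
    own_gps / opponent_gps: (x, y, z) positions read from the simulator
    """
    # The opponent's position relative to our own drone acts as the
    # "expert guidance" signal: it encodes where the classical planner is heading.
    rel = [o - s for o, s in zip(opponent_gps, own_gps)]
    return list(imu) + list(own_gps) + rel

obs = build_observation(
    imu=(0.0, 0.0, -9.8, 0.0, 0.0, 0.0),
    own_gps=(10.0, 5.0, -2.0),
    opponent_gps=(12.0, 5.0, -2.0),
)
# 6 IMU values + 3 own-position values + 3 relative-position values = 12 features
```

Feeding the relative (rather than absolute) opponent position keeps the feature meaningful anywhere on the track, which is one common way such guidance signals are encoded.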

[Long-Term Planning with Deep Reinforcement Learning on Autonomous Drones - Ugurkan Ates, July 2020]

https://sites.google.com/view/longtermplanrl

Note: If you use this repository in your research, please cite the pre-print.

@Article{ates2020longterm,
    title={Long-Term Planning with Deep Reinforcement Learning on Autonomous Drones},
    author={Ugurkan Ates},
    year={2020},
    eprint={2007.05694},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

Microsoft AirSim NeurIPS 2019 Autonomous Drone Challenge implemented with deep RL.

You need to download the environment binaries from https://github.com/microsoft/AirSim-NeurIPS2019-Drone-Racing. All code was run and tested on Ubuntu 18.04 LTS with PyTorch and a CUDA-capable GPU.

How to use in a Google Cloud Environment

  • Connect with SSH (a new terminal appears).

  • Then click the settings icon in the top-right corner to open a new connection to the instance (two terminals are needed).

  • On the first terminal, start the simulation:

  • bash run_docker_image.sh "" training OR bash run_docker_image.sh "" training headless. Both behave the same at this point.

  • On the second terminal, run the code:

    cd /home/<username>/NeurIRS2019DroneChallengeRL/src/source

    python metest.py (training code)

    python meplay.py (new code added for playtesting)

  • Models are saved during training whenever they beat the currently saved best score. You can also download the .dat (model) files to a local PC to see how the agent behaves. You can also inspect the situation from the debug logs and save pictures.
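The save-on-improvement behaviour described above can be sketched as the following generic checkpointing pattern. This is an illustration, not the repository's exact code: the repo writes PyTorch .dat files (via torch.save), while this sketch uses pickle so it stays self-contained; the function name and state layout are hypothetical.

```python
import pickle

def maybe_checkpoint(model_state, score, best_score, path="best_model.dat"):
    """Persist the model only when the new episode score beats the best so far."""
    if best_score is None or score > best_score:
        with open(path, "wb") as f:
            pickle.dump(model_state, f)  # the repo uses torch.save for .dat files
        return score                     # new best score
    return best_score                    # keep the previous best, file untouched

best = None
best = maybe_checkpoint({"weights": [0.1, 0.2]}, score=42.0, best_score=best)
best = maybe_checkpoint({"weights": [0.3, 0.4]}, score=10.0, best_score=best)
```

Because only improving scores trigger a write, the .dat file on disk always holds the best-performing model seen so far, which is what makes it safe to copy to a local PC mid-training.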

Images taken from the competition's website.


