
This project is forked from bhctsntrk/openaipong-dqn.


Solving Atari Pong Game w/ Duel Double DQN in Pytorch

License: MIT License



DQN Algorithm for Solving Atari Pong

Python 3 · PyTorch · Gym · Open In Colab

(Animated GIF of the trained agent playing Pong.)

📜 About

This project implements the Dueling Double DQN algorithm with PyTorch to solve the OpenAI Gym Atari Pong environment. The agent learns to play in just 900 episodes; training from scratch takes about 7 hours in Google Colab. The 900th-episode checkpoint is included if you want to test. For testing, set SAVE_MODELS = False, LOAD_MODEL_FROM_FILE = True, LOAD_FILE_EPISODE = 900, and MODEL_PATH = "./models/pong-cnn-".
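The test-mode settings above can be written as a parameter block like the following (the names match those described here; where exactly they sit in pong.py is up to the reader to check):

```python
# Parameters for running the bundled 900th-episode checkpoint in test mode.
SAVE_MODELS = False            # don't overwrite the shipped checkpoint
LOAD_MODEL_FROM_FILE = True    # load weights instead of training from scratch
LOAD_FILE_EPISODE = 900        # episode number of the checkpoint to load
MODEL_PATH = "./models/pong-cnn-"  # checkpoint filename prefix
```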

📈 Results

These graphs show the training results for 900 episodes.

⚙ Usage

First create a Python 3 environment, then install these libraries with pip:

  • torch
  • gym[atari]
  • opencv-python
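For example, the three dependencies above can be installed in one command (exact versions are left to pip; note that newer Gym releases moved the Atari extra to `gym[atari,accept-rom-license]`):

```shell
pip install torch "gym[atari]" opencv-python
```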

Then check the parameters at the very beginning of the code; you will find their descriptions as comment lines. After editing the parameters, just run python3 pong.py.

🔀 Using w/ Different Environment

You can use this implementation with a different environment, but there are some parameters you have to change in the Agent class definition. The code uses a CNN because the Pong environment returns images as states, so there is an image-preprocessing function that crops the frame to get rid of the score table. The crop dimensions can be found in the Agent class definition; use cv2.imshow inside the preprocess function to inspect and adjust the crop.

You can also turn off the grayscaling step in the preprocess function, but then you have to modify the CNN, because the number of input image channels will change.
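Concretely, only the first convolutional layer's in_channels needs to track that change. The layer shape below is a generic DQN-style example, not necessarily the repo's exact architecture:

```python
import torch.nn as nn

# With 4 stacked grayscale frames the input has 4 channels;
# with grayscaling disabled, 4 stacked RGB frames would need in_channels=12.
conv1 = nn.Conv2d(in_channels=4, out_channels=32, kernel_size=8, stride=4)
```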

📙 Using in Colab

There are two ways to do this:

  • Just click the Open In Colab badge above.
  • Open a new Colab (GPU) environment. To save your models, mount your Drive in Colab and copy the Pong code into a single cell. Lastly, change the save path to point at your Drive, like MODEL_SAVE_PATH = "/content/drive/My Drive/pong-models/pong-cnn-", and run.
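The Drive-mounting step in the second option can be sketched as follows; `google.colab` only exists inside a Colab runtime, so the import is guarded here for illustration:

```python
MODEL_SAVE_PATH = "/content/drive/My Drive/pong-models/pong-cnn-"

try:
    from google.colab import drive  # available only inside Colab
    drive.mount("/content/drive")   # prompts for Drive authorization
except ImportError:
    pass  # running outside Colab; the path above is just the target layout
```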

๐Ÿ—’๏ธ References

Mnih et al., Playing Atari with Deep Reinforcement Learning, arXiv:1312.5602, 2013.

Contributors

bhctsntrk, rootofarch
