Optimizing Value-Based Reinforcement Learning using DQN

In this project, we compare the performance of deep Q-network (DQN) architectures for value-based reinforcement learning, namely DQN, Double DQN, and Dueling DQN, in the ViZDoom game environment.

Introduction

Reinforcement learning is a process in which an agent interacts with an environment to learn how to make decisions. Value-based reinforcement learning estimates the value of actions in a given state in order to maximize rewards. The deep Q-network algorithm combines deep learning and reinforcement learning to approximate the Q-value function, i.e. the expected cumulative reward for taking a specific action in a given state and following the policy thereafter.
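
For reference, the Q-value function and the bootstrapped target that DQN regresses toward can be written out as below. This is the standard formulation rather than notation taken from this project.

    % Expected discounted return from taking action a in state s and
    % following policy \pi thereafter; \gamma is the discount factor.
    Q^{\pi}(s, a) = \mathbb{E}\Big[ \sum_{t=0}^{\infty} \gamma^{t} r_{t} \;\Big|\; s_{0} = s,\ a_{0} = a,\ \pi \Big]

    % One-step TD target for the DQN loss, computed with a
    % periodically synced target network Q_{\theta^{-}}.
    y = r + \gamma \max_{a'} Q_{\theta^{-}}(s', a')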

Architecture

The DQN architecture uses a single neural network to approximate the Q-value function. It takes the current state of the environment as input and outputs the estimated Q-values for all possible actions in that state. The agent selects the action with the highest Q-value to maximize its rewards.
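
As an illustration, a minimal network of this shape might look like the sketch below. It is written in PyTorch with assumed layer sizes and input shapes; the project's actual implementation lives in the models directory and may differ.

    import torch
    import torch.nn as nn

    class DQN(nn.Module):
        """Maps a stack of game frames to one Q-value per action."""

        def __init__(self, in_channels: int, num_actions: int):
            super().__init__()
            # Convolutional feature extractor over the input frames.
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Flatten(),
            )
            # Lazy layer infers its input size on the first forward pass.
            self.head = nn.LazyLinear(num_actions)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(x))

    # Greedy action selection: pick the action with the highest Q-value.
    net = DQN(in_channels=1, num_actions=3)
    state = torch.randn(1, 1, 64, 64)  # dummy batch of one grayscale frame
    action = net(state).argmax(dim=1).item()

The Double DQN variant evaluated in the Results section keeps this architecture but computes the learning target by letting the online network select the next action while the target network evaluates it, which reduces overestimation of Q-values.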

The Dueling DQN architecture is an extension of DQN that separates the estimation of the state value and action advantage functions. On top of a shared feature extractor, it uses two parallel streams: one estimates the state value function, which measures the value of being in a particular state regardless of the action taken, and the other estimates the action advantage function, which measures how much better a particular action is than the alternatives in that state. The Q-value function is then recombined as the state value plus the action advantage, with the mean advantage subtracted to keep the decomposition identifiable.
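
A sketch of that recombination is below, again in PyTorch with assumed shapes; note the mean-advantage subtraction described above.

    import torch
    import torch.nn as nn

    class DuelingDQN(nn.Module):
        """Shared features feed separate value and advantage streams."""

        def __init__(self, in_channels: int, num_actions: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Flatten(),
            )
            self.value = nn.LazyLinear(1)                # V(s): scalar state value
            self.advantage = nn.LazyLinear(num_actions)  # A(s, a): per-action advantage

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            h = self.features(x)
            v, a = self.value(h), self.advantage(h)
            # Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')
            return v + a - a.mean(dim=1, keepdim=True)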

Results

To compare the performance of the architectures, we train DQN, Double DQN, and Dueling DQN agents on the ViZDoom game environment. We monitor the learning process and performance of the agents using the mean reward per episode on both training and testing game episodes.

Optimal Hyperparameters

Hyperparameter       Value
Learning Rate        0.007
Batch Size           128
Replay Memory Size   10000
Discount Factor      0.88
Frame Repeat         12
Epsilon Decay        0.99
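
Training with these values might look like the command below. The --batch-size, --lr, --discount-factor, and --memory-size flags appear in the Usage section; --frame-repeat and --epsilon-decay are assumed flag names for the remaining two hyperparameters and may differ in main.py.

    python main.py --batch-size=128 --lr=0.007 --discount-factor=0.88 --memory-size=10000 --frame-repeat=12 --epsilon-decay=0.99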

DQN

[Plot: Train + Test Reward Scores]  [Plot: Validation Reward Scores]

Double DQN

[Plot: Train + Test Reward Scores]  [Plot: Validation Reward Scores]

Dueling DQN

[Plot: Train + Test Reward Scores (Baseline)]  [Plot: Train + Test Reward Scores (Optimized)]  [Plot: Validation Reward Scores]

Conclusion

The results show that the Dueling DQN architecture outperforms the DQN architecture in both learning speed and final performance. The Dueling DQN agent achieves a higher score in the game environment and converges to a good policy faster than the DQN agent. This is primarily because separating the state value and action advantage streams lets the agent learn how valuable a state is without having to estimate the effect of every action in it, which improves the stability and sample efficiency of learning.

Project Structure

File/Directory   Description
agents           DQN, Double DQN, and Dueling DQN agent implementations
checkpoints      Saved model files
config           W&B sweep configuration
out              Execution log files
images           Concept diagrams/images
models           DQN and Dueling DQN model implementations
notebooks        Relevant Jupyter notebooks
plots            Plots of train and test reward scores
scenarios        ViZDoom game scenario configuration files
scripts          Slurm scripts
main.py          Entry point for training and testing the agent
sweep.py         W&B agent entry point

Usage

To run the code, follow these steps:

  1. Clone the repository

    git clone git@github.com:utsavoza/doom.git
  2. Setup and activate the virtual environment

    python3 -m venv .
    source ./bin/activate
  3. Install the required dependencies

    pip install -r requirements.txt
  4. Configure and train the DQN agent with a different set of hyperparameters

    python main.py --batch-size=64 --lr=0.00025 --discount-factor=0.99 --num-epochs=50 --memory-size=10000
  5. See the trained DQN agent in action

    python main.py --load-model=True --checkpoints='duel' --skip-training=True

Authors

  • Rithviik Srinivasan (rs8385)
  • Utsav Oza (ugo1)

License

The project is licensed under the MIT License. See LICENCE.
