In this assignment, I implemented an Othello agent using a Deep Q-Network (DQN) combined with Prioritized Experience Replay.
This file contains the main logic for the Othello game. It uses the Pygame library for graphics and user interaction. The board is represented as a 2D array, and players make moves by selecting valid positions on it. The game loop controls the flow of the game and updates the board state after each move.
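Setting Pygame aside, the turn-taking logic described above can be sketched generically. This is an illustrative sketch, not the code in this file; the callbacks `legal_moves`, `apply_move`, and `choose` are placeholders for the repository's real functions:

```python
def play(initial_state, legal_moves, apply_move, choose, players=(1, 2)):
    """Generic Othello-style turn loop: alternate players, skip a player
    with no legal moves, and stop when neither side can move."""
    state, turn = initial_state, 0
    while True:
        player = players[turn % 2]
        moves = legal_moves(state, player)
        if moves:
            # The current player picks one of their legal moves.
            state = apply_move(state, choose(state, moves, player), player)
        elif not legal_moves(state, players[(turn + 1) % 2]):
            return state  # neither player can move: game over
        turn += 1
```

In Othello, a player with no legal moves must pass rather than end the game, which is why the loop only terminates when both sides are stuck.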
This file implements the Deep Q-Network (DQN) algorithm for playing Othello. It uses a neural network model to approximate the Q-values of different game states. The DQN agent learns from experience by replaying past transitions and updating its Q-value estimates toward the observed rewards. At play time, the agent chooses the action with the highest Q-value for the current state.
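At the core of the DQN update is regression toward the one-step bootstrapped target r + γ·max<sub>a'</sub> Q(s', a'). A minimal NumPy sketch of that target computation (illustrative only; the function name and parameters are mine, not this repository's):

```python
import numpy as np

def td_targets(q_values, next_q_values, actions, rewards, dones, gamma=0.99):
    """Compute DQN regression targets: r + gamma * max_a' Q(s', a').
    The bootstrap term is zeroed on terminal transitions (dones == 1)."""
    targets = q_values.copy()
    best_next = next_q_values.max(axis=1)       # max_a' Q(s', a') per sample
    targets[np.arange(len(actions)), actions] = (
        rewards + gamma * best_next * (1.0 - dones)
    )
    return targets
```

Only the entry for the action actually taken is overwritten, so the squared-error loss against `targets` leaves the other actions' Q-values untouched.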
This file contains the implementation of the Minimax algorithm for playing Othello. The Minimax algorithm is a classic approach for finding the best move in a game with perfect information. It explores the game tree and evaluates the utility of different moves to determine the optimal move.
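A generic sketch of the Minimax recursion (not this repository's exact implementation; `children` and `evaluate` are placeholder callbacks for move generation and the utility function):

```python
def minimax(state, depth, maximizing, children, evaluate):
    """Return the minimax value of `state`, searching `depth` plies.
    `children(state)` lists successor states; `evaluate(state)` scores
    a leaf from the maximizing player's point of view."""
    kids = children(state)
    if depth == 0 or not kids:          # depth limit or terminal state
        return evaluate(state)
    values = (minimax(k, depth - 1, not maximizing, children, evaluate)
              for k in kids)
    return max(values) if maximizing else min(values)
```

Practical Othello implementations usually add alpha-beta pruning on top of this recursion to cut the number of nodes explored.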
This file defines the Othello class, which represents the game environment. It provides methods for initializing the game, making moves, and checking for game-over conditions. The Othello class is used by both the DQN and Minimax agents to interact with the game.
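To make the environment's rules concrete, here is a self-contained sketch of legal-move generation on a standard 8x8 Othello board. The function names are illustrative and not necessarily those of the Othello class:

```python
EMPTY, BLACK, WHITE = 0, 1, 2
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
        (0, 1), (1, -1), (1, 0), (1, 1)]

def initial_board():
    """Standard opening position: four discs in the center."""
    b = [[EMPTY] * 8 for _ in range(8)]
    b[3][3], b[4][4] = WHITE, WHITE
    b[3][4], b[4][3] = BLACK, BLACK
    return b

def flips(board, row, col, player):
    """Opponent discs captured by placing `player` at (row, col)."""
    if board[row][col] != EMPTY:
        return []
    opponent = BLACK if player == WHITE else WHITE
    captured = []
    for dr, dc in DIRS:
        line, r, c = [], row + dr, col + dc
        # Walk over a run of opponent discs in this direction...
        while 0 <= r < 8 and 0 <= c < 8 and board[r][c] == opponent:
            line.append((r, c))
            r, c = r + dr, c + dc
        # ...which counts only if it ends on one of our own discs.
        if line and 0 <= r < 8 and 0 <= c < 8 and board[r][c] == player:
            captured.extend(line)
    return captured

def legal_moves(board, player):
    """All squares where `player` would flip at least one disc."""
    return [(r, c) for r in range(8) for c in range(8)
            if flips(board, r, c, player)]
```

A move is legal exactly when it brackets at least one run of opponent discs, which is why `legal_moves` is defined directly in terms of `flips`.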
This file implements a binary sum tree data structure used by the DQN agent for prioritized experience replay. The sum tree allows experiences to be sampled efficiently in proportion to their priorities.
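A minimal array-based sum tree along these lines (a sketch, not necessarily the exact code in this file):

```python
class SumTree:
    """Binary sum tree: leaves hold priorities, each internal node stores
    the sum of its children, so the root is the total priority mass and
    both updates and sampling take O(log n)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity - 1)  # internal nodes then leaves
        self.data = [None] * capacity           # stored transitions
        self.write = 0                          # next leaf slot (circular)

    def add(self, priority, item):
        idx = self.write + self.capacity - 1    # leaf index in the tree array
        self.data[self.write] = item
        self.update(idx, priority)
        self.write = (self.write + 1) % self.capacity

    def update(self, idx, priority):
        change = priority - self.tree[idx]
        self.tree[idx] = priority
        while idx != 0:                         # propagate the change to the root
            idx = (idx - 1) // 2
            self.tree[idx] += change

    def total(self):
        return self.tree[0]

    def get(self, s):
        """Find the leaf whose cumulative-priority interval contains s,
        for s drawn uniformly from [0, total())."""
        idx = 0
        while idx < self.capacity - 1:          # descend until a leaf
            left = 2 * idx + 1
            if s <= self.tree[left]:
                idx = left
            else:
                s -= self.tree[left]
                idx = left + 1
        return self.tree[idx], self.data[idx - self.capacity + 1]
```

Sampling a uniform `s` in `[0, total())` and calling `get(s)` then returns each stored experience with probability proportional to its priority, which is exactly what prioritized replay needs.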
To play the game or test the algorithms, run Game.py. You can modify the parameters in that file to adjust game settings such as board size and AI difficulty.
- Reversi_visualize: thanks to Hieu for allowing us to use his interface.
- The “Let’s make a DQN” series.