The goal of this project is to explore the use of reinforcement learning techniques to build a chess engine for gameplay on a non-traditional hexagonal board.
To install the dependencies, create a conda environment from the environment.yml file.
conda env create -f environment.yml
If changes are made, the environment can be exported using the provided script.
bash export-environment.sh
Hexagonal chess, more specifically the variant invented by Władysław Gliński, is played on a non-traditional board of 91 hexagonal cells rather than 64 squares.
All the familiar pieces are present, and their legal moves are heavily inspired by standard chess. For a description of the rules, see Wikipedia.
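For orientation, one common way to describe such a board is with axial hexagonal coordinates. The snippet below is an illustration only (it is not the representation used in this repository) and simply enumerates the 91 cells of a hexagon with side length 6:

```python
# Illustration only: enumerate the 91 cells of a hexagonal board with side
# length 6 (radius N = 5) using axial coordinates (q, r).
N = 5  # number of rings around the central cell

cells = [
    (q, r)
    for q in range(-N, N + 1)
    for r in range(-N, N + 1)
    if abs(q + r) <= N  # the third cube coordinate s = -q - r must also fit
]

assert len(cells) == 3 * N * (N + 1) + 1 == 91
```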
To play the game, run
python play.py
from the main directory.
Some of the more intricate rules of the game are still missing, namely:
- Check and checkmate detection; the game currently ends only when the king is captured.
- En passant captures for pawns.
- Pawn promotion.
- Restart feature in the GUI.
We implemented and trained the following models in this project (a minimal actor-critic sketch follows the list).
- Deep Q-Learning
- Simple Actor-Critic
- Advanced Actor-Critic
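As a rough sketch of what an actor-critic model for this setting can look like, the snippet below defines a shared trunk with a policy head and a value head in PyTorch. It is a hypothetical illustration: the input encoding, action space, and layer sizes are placeholder assumptions, and the architectures actually used in the project are described in the slides.

```python
# Hypothetical actor-critic sketch; the project's real models may differ.
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, n_inputs: int, n_actions: int, hidden: int = 256):
        super().__init__()
        # Shared trunk over a flattened board encoding.
        self.trunk = nn.Sequential(
            nn.Linear(n_inputs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.policy_head = nn.Linear(hidden, n_actions)  # logits over candidate moves
        self.value_head = nn.Linear(hidden, 1)           # scalar state-value estimate

    def forward(self, x: torch.Tensor):
        h = self.trunk(x)
        return self.policy_head(h), self.value_head(h)

# Placeholder sizes: 91 cells x 12 piece planes, moves encoded as (from, to) pairs.
net = ActorCritic(n_inputs=91 * 12, n_actions=91 * 91)
logits, value = net(torch.zeros(1, 91 * 12))
```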
More details can be found in the slides included in the repository.
These are some useful references we used during the development of this project.