Deep Pepper is an MCTS-based algorithm for the parallel training of a chess engine. Adapted from existing deep-learning game engines such as Giraffe and AlphaZero, it is a clean-room implementation that leverages Stockfish for the opening book and endgame evaluation, and learns a policy entirely through self-play.
We use the following technologies to train the model and to interface with the Stockfish chess engine:
- python-chess - For handling the chess environment and gameplay.
- PyTorch - For training and inference.
- Stockfish - For the value function and endgame evaluation.
- TensorBoard - For visualizing training progress.
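As a quick illustration of the first item, here is a minimal sketch of how python-chess manages the board state and legal-move generation that a self-play engine builds on (the moves played are arbitrary, chosen only for the example):

```python
import chess

# Create a fresh board and play a short opening sequence.
board = chess.Board()
board.push_san("e4")
board.push_san("e5")

# python-chess exposes the legal-move generator that a self-play
# loop would sample from.
moves = [board.san(m) for m in board.legal_moves]
print(board.fen())
```

The `Board` object tracks turn order, castling rights, and game-over conditions, which is why it is a convenient environment wrapper for training.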
- Run `pip install -r requirements.txt` to install the necessary dependencies.
- Run `python launch_script.py` to start training the chess engine.
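To give a rough sense of what a self-play episode looks like, here is a hypothetical sketch using python-chess; it plays random legal moves rather than Deep Pepper's actual MCTS policy, and the function name and parameters are illustrative only:

```python
import random
import chess

def self_play_episode(max_plies=200, seed=0):
    """Play one game with random moves, collecting (position, move)
    pairs that a training loop could learn from. The real engine
    would select moves with its policy network and MCTS instead."""
    rng = random.Random(seed)
    board = chess.Board()
    trajectory = []
    while not board.is_game_over() and len(trajectory) < max_plies:
        move = rng.choice(list(board.legal_moves))
        trajectory.append((board.fen(), move.uci()))
        board.push(move)
    return trajectory, board.result(claim_draw=True)

traj, result = self_play_episode()
```

The returned result string ("1-0", "0-1", "1/2-1/2", or "*" for an unfinished game) is the kind of terminal signal a value target can be derived from.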