
Become a sponsor to Marco (@crybot)

Florence/Pisa, Italy

Introduction

Hi Everyone!
I'm a Master's student in Computer Science with a strong passion for AI. I've been building chess engines since I was a kid, and my focus has gradually shifted towards machine-learning-based systems. I'm the author of Napoleon, a classical chess engine that has been around for quite a while now. It is completely open source and feature-comparable with most other engines available today. Unfortunately, my raw chess understanding has never caught up with my engineering skills: Napoleon lacks the positional play a strong engine needs to win against more strategy-aware competitors.

I've always wanted to implement some kind of ML-based evaluation function to compensate for my limited knowledge of the game, but the hardware required to train and deploy a competitive neural network was a difficult obstacle for quite some time. I eventually managed to buy a discrete GPU, which allowed me to kickstart NapoleonZero!

NapoleonZero uses the same fast and reliable backbone as Napoleon, but completely replaces its linear evaluation function with a neural network trained on millions of previously evaluated positions.
The project is still in its infancy, but any contribution is welcome and, naturally, it will remain open source forever.
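To give a rough idea of the kind of input such a network consumes, here is a minimal sketch of one common board encoding: twelve one-hot 8x8 planes, one per piece type and color. This is an illustration only, not NapoleonZero's actual encoding (the piece ordering and plane layout are assumptions):

```python
import numpy as np

# Assumed plane order for illustration: 6 white piece types, then 6 black.
PIECES = "PNBRQKpnbrqk"

def encode_fen_board(fen: str) -> np.ndarray:
    """Encode the board part of a FEN string as 12 one-hot 8x8 planes."""
    planes = np.zeros((12, 8, 8), dtype=np.float32)
    board = fen.split()[0]
    for rank, row in enumerate(board.split("/")):
        file = 0
        for ch in row:
            if ch.isdigit():
                file += int(ch)  # a digit encodes a run of empty squares
            else:
                planes[PIECES.index(ch), rank, file] = 1.0
                file += 1
    return planes

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
x = encode_fen_board(start)
print(x.shape)       # (12, 8, 8)
print(int(x.sum()))  # 32: one active cell per piece on the board
```

A network then maps a tensor like this (plus side-to-move and castling features in practice) to a scalar evaluation, replacing the hand-tuned linear terms.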

You can track all running and past experiments on my Weights & Biases dashboard!

Some of the things I'm working on:

  • Hybrid Convolutional/Vision Transformer (ViT) architecture. As far as I know, ViTs have not been employed in computer chess yet, but they are quickly and reliably replacing state-of-the-art convolutional backbones in computer vision. I believe their attention mechanism is even more relevant in chess than in vision. Pairing convolutions with self-attention is also being actively explored in the recent literature.
  • Self-attention approximations and replacements. Self-attention is the main component of Transformers and also the one that requires the most computational resources, since it scales quadratically with the sequence length. Approximate attention operations should allow the network to run much faster at inference time; there are many promising approaches in the current literature.
  • Large datasets of evaluated positions. As of now, I'm working with only a tiny fraction of the positions available in the Lichess open database. My current hardware does not allow me to scale to more data, but one of the goals is to provide a variety of large open datasets for everyone to use.
  • Completely Dockerized environment. This lets anyone contribute to the project easily and reduces installation complexity.
  • Reinforcement learning (RL) framework. The current architecture only works in a supervised learning setting, but I believe an RL setting would benefit the final network and engine strength. This obviously imposes substantial hardware requirements that are far from being satisfied at the moment.
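The quadratic cost mentioned above is visible directly in plain scaled dot-product attention: the score matrix has one entry per pair of input tokens, so for n tokens it is n x n. A minimal NumPy sketch (illustration only, not NapoleonZero code; the single-head, unbatched layout is a simplification):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Plain scaled dot-product self-attention over a sequence x of shape (n, d)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                  # (n, n): the quadratic part
    scores -= scores.max(axis=-1, keepdims=True)   # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over each row
    return weights @ v

rng = np.random.default_rng(0)
n, d = 64, 16  # e.g. the 64 squares of a chess board as tokens
x = rng.standard_normal((n, d))
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # (64, 16)
```

For an 8x8 board n is only 64, so the score matrix stays small; the quadratic blow-up matters mostly for throughput, since this matrix is rebuilt per layer, per head, and per position searched, which is what approximate attention schemes try to cut down.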

Some of the problems I'm currently facing:

  • Very large datasets won't fit in memory. My current hardware only allows about 60M positions to be loaded in RAM. While prefetching data from disk would be an acceptable way to work with much larger datasets, my system is CPU-bottlenecked and would not be fast enough to keep up with the GPU during training.
  • Processing datasets is too computationally intensive. There are plenty of chess positions available through open databases, but the amount of computation required to evaluate all of them with enough precision is currently beyond the capability of my system. Vast amounts of quality data are paramount to a good machine learning model, so this aspect is of the utmost importance.
  • Storing, versioning and publishing datasets requires hosting capacity that I currently don't have.
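One standard workaround for the RAM limit described above is to leave the dataset on disk and memory-map it, so the OS pages in only the batches actually read. A sketch with NumPy's memmap support (the file name and per-position record layout are hypothetical, and the 1000-row file here is just a stand-in for a multi-GB one):

```python
import os
import tempfile
import numpy as np

# Hypothetical layout: each position is 12*8*8 float32 features.
N_POSITIONS, FEATS = 1000, 12 * 8 * 8
path = os.path.join(tempfile.mkdtemp(), "positions.npy")

# Build a small on-disk dataset (in practice this file would be huge).
data = np.lib.format.open_memmap(
    path, mode="w+", dtype=np.float32, shape=(N_POSITIONS, FEATS)
)
data[:] = np.random.default_rng(0).standard_normal((N_POSITIONS, FEATS))
data.flush()

# Training side: map the file read-only and pull one mini-batch at a time;
# only the touched pages are brought into RAM by the OS.
ds = np.load(path, mmap_mode="r")
batch = np.asarray(ds[0:256])  # copy just this batch into memory
print(batch.shape)             # (256, 768)
```

This removes the hard RAM ceiling, though, as noted above, it shifts the pressure onto disk and CPU throughput, which is exactly where my current system bottlenecks.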

Funds coming from contributions will be used to:

  • Pay for electricity (which is getting more expensive by the day in Europe)
  • Invest in new hardware (mainly a new platform: CPU and memory)
  • Cover dataset versioning fees (e.g. GitHub with LFS)
  • Pay cloud VM expenses (e.g. for generating larger and better datasets)
  • Pay my university fees

Thanks to everyone!

1 sponsor has funded crybot’s work.


Reaching at least 5 sponsors will allow me to be much more consistent with NN experiments and perhaps invest in new hardware to improve the overall quality of the project. Upon reaching this goal I will launch an official Napoleon website to publish news and articles and to acknowledge every contributor!

@frankplus

Featured work

  1. crybot/Napoleon: chess engine, redesigned and rewritten in C++ (C++, 11 stars)
  2. NapoleonZero/datasets: datasets along with tools to build and process them (Shell, 2 stars)
  3. NapoleonZero/training: implementation of the training procedure for NapoleonZero (Python, 1 star)
