Play SET® using image recognition. I've written this project up as a blog post.
Use image recognition algorithms that

- Pick out individual cards from an image.
- For each card, count the number of shapes, and detect the colour, shape and shading.

Once every card has been classified, finding SETs within the image is the easy bit (see the sketch below).
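For context, the rule being checked is simple: three cards form a SET if, for each of the four features, the three values are either all the same or all different. A minimal sketch in Python (the project's real code is Java; the function names here are illustrative):

```python
from itertools import combinations

# A card is a tuple of four feature values (number, colour, shading, shape),
# each encoded as 0, 1, or 2.
def is_set(cards):
    """A triple is a SET if every feature is all-same or all-different."""
    return all(len({card[f] for card in cards}) in (1, 3) for f in range(4))

def find_sets(cards):
    """Brute-force check of all triples of cards on the table."""
    return [triple for triple in combinations(cards, 3) if is_set(triple)]
```

With 12 cards on the table there are only C(12, 3) = 220 triples, so brute force is plenty fast.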
I started by using basic image processing and machine learning techniques, before going on to use Deep Learning. The basic approach served as a baseline, and helped quantify how much better Deep Learning can do.
I took photos of lots of SET cards under various lighting conditions. The cards were on black backgrounds to make things easier. An obvious extension would be to try different backgrounds at some point.
There are two training datasets.
The first dataset is found in `data/train` and consists of 36 images of 9 cards each. Every image contains cards of the same colour, laid out in the same way. The program `CreateTrainingSetV1` processes the raw training images and creates a training set with one card per image; see `data/train_out`.
The small size of the first dataset meant that colour detection in particular was lacking, so I created a new, larger second dataset with more varied lighting conditions.
The `train-v2` dataset is a collection of SET card images. The `raw-archive` directory contains the original camera images. Each image is a photo of a board of 27 cards, arranged in a 3 by 9 grid. There are three boards: one contains all the SET cards with one shape, one all the cards with two shapes, and one all the cards with three shapes. The `raw-archive` images are photos of the same boards taken at different times, under varying lighting conditions, from different angles, and with different cameras.
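To give a flavour of how cards can be picked out of such a board photo, here is a hypothetical sketch using OpenCV in Python (the project itself uses the BoofCV Java library, and this helper is illustrative, not the project's code): threshold against the black background, keep card-sized four-corner contours, and warp each one to a canonical rectangle.

```python
import cv2
import numpy as np

def extract_cards(image_bgr, card_size=(180, 280), min_area=5000):
    """Find card-shaped quadrilaterals and warp each to a fixed-size image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Cards are light on a black background, so Otsu thresholding separates them.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cards = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue  # too small to be a card
        # Approximate the contour by a polygon; a card should give 4 corners.
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) != 4:
            continue
        src = approx.reshape(4, 2).astype(np.float32)
        w, h = card_size
        dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
        # Note: corner ordering is not handled here; real code would sort the
        # corners so the warped card is not rotated or mirrored.
        M = cv2.getPerspectiveTransform(src, dst)
        cards.append(cv2.warpPerspective(image_bgr, M, card_size))
    return cards
```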
The process for creating the training data from a new batch of photos is as follows:

- Copy photos from the camera to a new `raw-archive/batch-nnnnn` directory.
- Visually inspect the images in Preview and make sure they are all oriented correctly. (Open the Inspector and check the Orientation: it should be 1.) Rotate any that are not the right way up.
- Run

  ```bash
  mkdir -p data/train-v2/raw-new; cp data/train-v2/raw-archive/batch-nnnnn/* data/train-v2/raw-new
  ```

- Run the following if files have an uppercase `.JPG` extension:

  ```bash
  for file in data/train-v2/raw-new/*.JPG; do mv "$file" "data/train-v2/raw-new/$(basename "$file" .JPG).jpg"; done
  ```

- Run `CheckRawTrainingImagesV2`. This will check that the images all have the correct orientation and that the individual cards can be detected.
- If there are problematic images, copy them to `raw-problem`. These will not be used, but keep them, since future versions of the code may be able to handle them.
- Run `SortRawTrainingImagesV2`. This will programmatically detect the number of shapes on each card so that the training boards can be sorted into those whose cards have 1, 2, or 3 shapes. (Note that 3 is called 0.)
- Open each directory in `raw-sorted` and visually check that each board is in the correct directory. Move any that are not.
- Run `CreateTrainingSetV2`. This will take each board in `raw-sorted`, extract labelled individual cards into `raw-labelled`, and then open a window showing each set of cards. Visually inspect these to check that they are correct. Move any images that are not.
- Run

  ```bash
  mkdir -p data/train-v2/labelled/
  rsync -a data/train-v2/raw-labelled/ data/train-v2/labelled/
  rm -rf data/train-v2/raw-{new,sorted,labelled}
  ```

- You can view all of the labelled images by running `ViewLabelledImagesV2`.
The test data is in `data/20170106_205743.jpg`, as well as `data/ad-hoc` and `data/webcam`.
The raw data is preprocessed to get it into shape for training. The preprocessing was described above; the output is one card per image in the `labelled` directories.
Training comprises two parts: extracting features from the images, and creating a model from the features. Both steps are carried out by the `FeatureFinder` classes, which use hand-crafted feature extractors, followed by k-nearest neighbours to do prediction. (Note that model creation is not needed for kNN, since all the training data is used as the model.) Furthermore, `FindCardNumberFeatures` does not even need a model, since the image processing alone can accurately count the number of shapes on a card.
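To illustrate why counting needs no model: the symbols are darker than the white card face, so thresholding and counting the large connected contours is enough. A rough sketch in Python with OpenCV (illustrative only; `FindCardNumberFeatures` itself is Java):

```python
import cv2

def count_shapes(card_bgr, min_area=500):
    """Count the symbols on a single card image by counting large contours."""
    gray = cv2.cvtColor(card_bgr, cv2.COLOR_BGR2GRAY)
    # Symbols are darker than the card face, so use an inverted Otsu threshold.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Ignore specks of noise; each remaining contour is one shape (1, 2, or 3).
    return sum(1 for c in contours if cv2.contourArea(c) > min_area)
```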
Training is carried out by running `CreateTrainingDataV1`.
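Since kNN keeps all of the training data as its model, "training" here mostly amounts to extracting and saving feature vectors. The prediction side can then be sketched in a few lines of Python (hypothetical names, assuming the feature vectors have already been extracted):

```python
import numpy as np

def knn_predict(query, train_features, train_labels, k=5):
    """Classify a feature vector by majority vote among its k nearest neighbours."""
    dists = np.linalg.norm(train_features - query, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                         # indices of k closest
    votes = train_labels[nearest]
    # Majority vote among the k nearest training examples.
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]
```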
Prediction (or inference) is the last step of the process, and uses the `FeatureFinder` classes to recognize the cards in new or test images. Prediction is carried out by the classes in `com.tom_e_white.set_game.predict`, including `PlaySet`, which takes an image (or a series of images from a webcam) and highlights the SETs in it.
`PredictCardFeaturesOnTestData` calculates the accuracy of predicting each feature (number, colour, shading, shape) for each card in a test set. Here's a sample run:
```
FindCardNumberFeatures
Correct: 15
Total: 15
Accuracy: 100 percent
------------------------------------------
FindCardColourFeatures
Correct: 10
Total: 15
Accuracy: 66 percent
------------------------------------------
FindCardShadingFeatures
Incorrect, predicted 0 but was 1 for card 3
Correct: 14
Total: 15
Accuracy: 93 percent
------------------------------------------
FindCardShapeFeatures
Correct: 15
Total: 15
Accuracy: 100 percent
------------------------------------------
```
Notice that all but colour are predicted with high accuracy. I was initially very surprised that detecting colour was so hard, but it turns out that the human eye is very adept at colour detection: it is not just a question of measuring RGB values in an image. Even controlling for lighting using HSB doesn't help much. The main problem was that the training data in the first dataset was from a fairly restricted range of lighting conditions, so it couldn't generalize well to the test data. This is why I gathered the second dataset.
After training on the second dataset, the accuracy for colour prediction improves, but it's still not as good as for the other features.
```
FindCardColourFeatures
Correct: 12
Total: 15
Accuracy: 80 percent
```
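To make the difficulty concrete: a natural hand-crafted colour feature is the mean hue of the symbol pixels, but hue boundaries fitted under one lighting condition shift under another, and red hues wrap around the ends of the hue range. A sketch of such a feature in Python with OpenCV (illustrative, not the project's actual extractor):

```python
import cv2

def mean_symbol_hue(card_bgr):
    """Mean hue (0-180 in OpenCV) over the coloured symbol pixels of a card."""
    hsv = cv2.cvtColor(card_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    # Keep only saturated pixels: the white card face has low saturation.
    mask = s > 60
    # Caveat: red hues wrap around 0/180, so a plain mean is itself
    # error-prone for red, which is part of why colour is hard.
    return float(h[mask].mean()) if mask.any() else 0.0
```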
I trained a convolutional neural network on the second dataset (see `train-cnn.py`), and got 100% accuracy on the test images. When the model is deployed in `PlaySet` it can play a decent game of SET: see the animated GIF at the top of the page.
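For a rough idea of the approach, a small convnet in the spirit of Chollet's book can be sketched in Keras as follows (the layer sizes and input shape are illustrative guesses, not necessarily what `train-cnn.py` uses):

```python
from tensorflow import keras
from tensorflow.keras import layers

# A small convnet for classifying one card attribute (e.g. colour: 3 classes).
model = keras.Sequential([
    layers.Input(shape=(150, 150, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(3, activation="softmax"),  # one output per colour class
])
model.compile(optimizer="rmsprop",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

One model per feature (number, colour, shading, shape) keeps each classification problem to three classes.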
- The Joy of SET by McMahon et al.
- BoofCV Java Computer Vision library
- Deep Learning with Python by François Chollet
- Vehicle Color Recognition using Convolutional Neural Network
- YOLO9000: Better, Faster, Stronger