Digital games that combine physical and psychological fitness training with gaming, called exergames, emerged in the 1980s. Exergames promise improvements in the player's physical state (caloric expenditure, coordination, and heart rate), psychosocial state (social interaction, mood, and motivation), and cognitive state (spatial awareness and attention).
One such exergame is Plunder Planet, a dynamically adaptive exergame developed by Martin-Niedecken and Götz. The player navigates a flying pirate ship through a desert filled with obstacles and fends off giant sandworms by activating a shield. The player is awarded points for collecting crystals and loses points after each collision with an obstacle or a sandworm. Currently, the game difficulty has to be set manually by a second person observing the player. The goal of this thesis was to create a model that predicts the player's in-game performance, which makes it possible to automatically adjust the difficulty to the player's physical and emotional state. This allows for a fast entry into a so-called Dual Flow state, in which the player is neither over- nor under-challenged and thus benefits from a better fitness experience.
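To make the Dual Flow idea concrete, below is a minimal sketch of how such a predictor could drive difficulty adjustment. The function name and thresholds are purely illustrative assumptions, not part of the actual game code:

def adjust_difficulty(difficulty, crash_probability):
    # Nudge the difficulty towards the Dual Flow zone: lower it when a
    # crash is likely (player over-challenged), raise it when a crash is
    # very unlikely (player under-challenged), otherwise leave it as is.
    if crash_probability > 0.7:
        return max(difficulty - 1, 0)
    if crash_probability < 0.3:
        return difficulty + 1
    return difficulty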
Based on log files of users playing the game, we created a machine learning model that predicts the user's in-game performance, namely whether or not the user is going to crash into the next obstacle. The modeling process consisted of analyzing and validating the log files, as well as extracting, pre-processing, and selecting features. Different metrics were used to evaluate the performance of our models.
We used both classical machine learning classifiers such as SVM, k-Nearest Neighbors, Random Forest, and Naive Bayes models, and Recurrent Neural Networks with Long Short-Term Memory (LSTM) units.
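As an illustration of the classical part of this setup, the following scikit-learn sketch compares the named classifiers with cross-validated ROC AUC. The random matrix X stands in for the real feature matrix and the labels y for the crash outcomes; this is an assumption-laden sketch, not the project's actual training code:

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

# One row of features per obstacle; y is 1 if the user crashed into it.
# Random data stands in for the real feature matrix here.
rng = np.random.RandomState(0)
X = rng.rand(200, 10)
y = rng.randint(0, 2, 200)

classifiers = {
    "SVM": SVC(),
    "k-Nearest Neighbors": KNeighborsClassifier(),
    "Random Forest": RandomForestClassifier(),
    "Naive Bayes": GaussianNB(),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean ROC AUC = {scores.mean():.3f}")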
Major Contributions:
- Developing a predictor of in-game performance in the game Plunder Planet.
- Using ML to improve the user's experience of an exergame.
It is recommended to install 4Ps inside a virtual environment.
Set up a virtual environment:
virtualenv --python=python3 <venv-name>
source <venv-name>/bin/activate
Install 4Ps:
git clone https://github.com/bastianmorath/4Ps-Plunder-Planet/
cd 4Ps-Plunder-Planet
pip install -r requirements.txt
brew install graphviz   # macOS; on other systems, install graphviz via your package manager
We need to manually add the log files to the project. They will be refactored the first time the project is run and saved into Logs/text_logs_refactored/.
mkdir Logs
After putting the unzipped folder 'text_logs_original' into the Logs folder, call:
python 4Ps/main.py
Optionally, delete the original logs afterwards:
rm -r Logs/text_logs_original
Note: The very first time, the program runs for quite a long time, since the log files get refactored and the feature matrix must be computed. The feature matrix is then stored in a pickle file, so subsequent runs should be much faster.
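The caching idea follows the standard pickle pattern; here is a minimal sketch of it, assuming an illustrative cache path and a compute function supplied by the caller (neither is the project's actual code):

import os
import pickle

CACHE_PATH = "feature_matrix.pickle"  # illustrative path

def load_or_compute_feature_matrix(compute_fn):
    # Load the feature matrix from the cache if it exists,
    # otherwise compute it once and store it for later runs.
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH, "rb") as f:
            return pickle.load(f)
    matrix = compute_fn()  # expensive first-run computation
    with open(CACHE_PATH, "wb") as f:
        pickle.dump(matrix, f)
    return matrix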
Most of the figures used in the report can be generated by calling
python 4Ps/main.py -r
The full set of command-line options can be displayed by calling
python 4Ps/main.py -h
usage: main.py [-h] [-r] [-p clf_name] [-t clf_name]
[-w hw_window crash_window gc_window] [-g] [-m n_epochs] [-k]
[-f] [-l] [-u] [-a] [-s] [-n] [-d]
optional arguments:
-h, --help show this help message and exit
-r, --generate_plots_for_report
Generates plots that are used in the Bachelor Thesis
report and stores them in the folder /Plots/Report
-p clf_name, --performance_without_tuning clf_name
Outputs detailed scores of the given classifier
without doing hyperparameter tuning. Set
clf_name='all' if you want to test all classifiers
(file is saved in Evaluation/Performance/clf_performance_without_hp_tuning_{window_sizes}.txt)
-t clf_name, --performance_with_tuning clf_name
Optimizes the given classifier with RandomizedSearchCV
and outputs detailed scores. Set clf_name='all' if you
want to test all classifiers (file is saved in Evaluation/Performance/clf_performance_with_hp_tuning_{window_sizes}.txt)
-w hw_window crash_window gc_window, --test_windows hw_window crash_window gc_window
Trains and tests all classifiers with the given window
sizes. Provide the windows in seconds. Stores roc_auc score under
/Evaluation/Performance/Windows/
-g, --leave_one_group_out
Plot performance when leaving out a logfile vs leaving
out a whole user in cross-validation under
Plots/Performance/LeaveOneGroupOut
-m n_epochs, --evaluate_lstm n_epochs
Compile, train and evaluate an LSTM network with
n_epochs epochs
-k, --print_keynumbers_logfiles
Print important numbers and stats about the logfiles
-f, --generate_plots_about_features
Generates different plots from the feature matrix
(look at main.py for details) and stores them in the folder
/Plots/Features
-l, --generate_plots_about_logfiles
Generates different plots from the logfiles (Look at
main.py for details) and stores them in the folder
/Plots/Logfiles (Note: Probably use in combination
with -n, i.e. without normalizing heartrate)
-u, --do_not_use_pre_tuned_hyperparameters
There are some hyperparameters that were tuned on
Euler and are used by default. If you want to tune
them manually/on your computer, use this flag
-s, --use_synthesized_data
Use synthesized data. Might not work with everything.
-n, --do_not_normalize_heartrate
Do not normalize heartrate (e.g. if you want plots or
values with real heartrate)
-d, --debugging Use only a small part of the data. Mostly for
debugging purposes
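For example, assuming the same working directory as above, typical invocations might look as follows (the classifier name, window sizes, and epoch count are illustrative values, not recommendations from the thesis):

python 4Ps/main.py -p all        # evaluate all classifiers without tuning
python 4Ps/main.py -t all        # evaluate all classifiers with hyperparameter tuning
python 4Ps/main.py -w 60 10 60   # train/test with 60s, 10s, and 60s windows
python 4Ps/main.py -m 100        # train and evaluate the LSTM for 100 epochs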