Special thanks and credit to Zach Carmichael for providing many components of this project's core code.
- No prior work compares reaction times with CYBORG (i.e., which works best?)
- What improves model performance most: reaction time or CYBORG?
- Can reaction time help CYBORG improve?
- What is the best way to use reaction time with deep learning models?
- In the loss term?
- In a regularization term?
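The loss-term vs. regularization-term question above can be sketched in a few lines. This is a hypothetical illustration, not the repo's actual API: the names `rt_weight` and `combined_loss`, and the exponential-decay weighting, are assumptions made for the example.

```python
import math

def rt_weight(reaction_time_s, tau=1.0):
    """Map a human reaction time (seconds) to a sample weight in (0, 1].
    Faster responses (more confident annotations) get higher weight.
    The exponential decay with time constant tau is an assumption."""
    return math.exp(-reaction_time_s / tau)

def combined_loss(class_loss, saliency_loss, reaction_time_s, alpha=0.5):
    """One option (RT inside the loss term): scale the human-saliency
    term by the RT weight, so quickly-annotated samples contribute more
    saliency supervision. alpha blends the two terms, as in CYBORG."""
    w = rt_weight(reaction_time_s)
    return alpha * class_loss + (1.0 - alpha) * w * saliency_loss
```

The alternative (RT as a standalone regularization term) would instead add a separate `w * penalty` term on top of an unweighted CYBORG loss; comparing the two is exactly the open question listed above.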
CYBORG: CYBORG: Blending Human Saliency Into the Loss Improves Deep Learning (WACV 2023)
Reaction times: Measuring Human Perception to Improve Handwritten Document Transcription (TPAMI 2021)
You can install the dependencies in a virtual environment:
python3 -m venv env
source env/bin/activate
pip3 install -r requirements.txt
You can run (and optionally set the log level) via the following:
CYBORG_SAL_LOG_LEVEL=INFO ./main.py ...
Run the following for help:
./main.py -h
General options:
./main.py \
-B DenseNet121 \
... \
--epochs 2 \
--gpus 1 \
--quick-test \
--batch-size 64 \
--hparam-tune \
--stochastic-weight-averaging
CYBORG:
./main.py \
-B DenseNet121 \
-L CYBORG \
-T original_data \
--cyborg-loss-alpha 0.5
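To make `--cyborg-loss-alpha` concrete, here is a minimal sketch of the CYBORG blending idea (WACV 2023): the total loss mixes the usual classification loss with a penalty on the distance between the model's class activation map and a human saliency map. The maps are flat lists here for simplicity, and the mean-squared-error saliency term is a stand-in; the paper's exact formulation may differ.

```python
def cyborg_loss(class_loss, model_cam, human_map, alpha=0.5):
    """Blend classification loss with a human-saliency penalty.
    alpha corresponds to the --cyborg-loss-alpha flag: alpha=1.0 is
    pure classification loss, alpha=0.0 is pure saliency agreement."""
    n = len(model_cam)
    # MSE between the model's activation map and the human saliency map
    saliency_loss = sum((m - h) ** 2 for m, h in zip(model_cam, human_map)) / n
    return alpha * class_loss + (1.0 - alpha) * saliency_loss
```

With `--cyborg-loss-alpha 0.5`, as in the command above, both terms contribute equally.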
CYBORG+REACTIONTIME:
./main.py \
-B DenseNet121 \
-L CYBORG+REACTIONTIME \
-T original_data \
--cyborg-loss-alpha 0.5
You can also run this with WandB; the integration follows the standard PyTorch Lightning setup.
You can track your experiments by passing the flag --use-wandb-logger true
to your run command.
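For reference, the standard PyTorch Lightning wiring that a flag like `--use-wandb-logger` typically enables looks like the following. This is a hedged sketch of the library's documented API, not this repo's code; the project name is a made-up placeholder.

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger

# Create a WandB-backed logger; "cyborg-reaction-time" is a hypothetical project name
logger = WandbLogger(project="cyborg-reaction-time")

# Hand the logger to the Trainer; metrics logged via self.log() in the
# LightningModule are then mirrored to the WandB dashboard
trainer = pl.Trainer(max_epochs=2, logger=logger)
# trainer.fit(model, datamodule=dm)  # model and dm come from the repo's main.py
```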