Instructions for running on New York University's Prince computer cluster.
- Clone the repository:
  `git clone https://github.com/wh629/c-bert.git`
- Perform the following commands:
  - `module purge`
  - `module load anaconda3/5.3.1`
  - `module load cuda/10.0.130`
  - `module load gcc/6.3.0`
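
If a later job fails to find CUDA or gcc, first confirm the modules are active; the `module load` commands above imply the environment-modules system, which also provides `module list`:

```bash
# Show the modules currently loaded in this shell session.
module list
```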
- In the cloned repository, create the anaconda environment `cbert` from `environment.yml`:
  `conda env create -f environment.yml`
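
Remember to activate the environment in any interactive session (the `.sbatch` scripts presumably handle activation themselves; `cbert` is the environment name given above):

```bash
conda activate cbert
```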
- In the repository, set up the following directories (a setup sketch follows this list):
  a. `data`
  b. `log`
  c. `results` (for cached data, place it in `results/cached_data/<model-name>/`)
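
A minimal sketch of the directory setup, run from the repository root; the `bert-base-uncased` and `meta_weights` paths match the steps below:

```bash
# Create the expected directory tree in one shot.
mkdir -p data log results/cached_data/bert-base-uncased results/meta_weights
```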
- Load data into `data`.
- For faster runs, load cached data into the `results/cached_data/bert-base-uncased/` folder.
- Load meta weights into `results/meta_weights/`.
- Train on SQuAD using either frozen embeddings or fine-tuning:
  a. Fill out `PROJECT=<Repository Directory>` in the desired `.sbatch` file (see the sketch after this step):
     - For frozen, use `sbatch baseline_SQuAD_frozen.sbatch`
     - For fine-tuning, use `sbatch baseline_SQuAD_finetune.sbatch`
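
A convenience sketch for setting `PROJECT` from the command line; the variable name comes from the `.sbatch` files, while the path here is only an example to replace with your clone's location (repeat for the other `.sbatch` files as needed):

```bash
# Rewrite the PROJECT= line in place; adjust the path to your checkout.
sed -i "s|^PROJECT=.*|PROJECT=$HOME/c-bert|" baseline_SQuAD_frozen.sbatch
```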
- Outputs will be found in `results` in the following sub-directories:
  a. `cached_data` - cached data as `.pt` files
  b. `logged/<model-name>/<task-name>` - model state dictionaries as `.pt` files
- Monitor the run using `log/baseline_SQuAD_<frozen/finetune>_run_log_<date>_<time>.log`.
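
The `<date>_<time>` part is fixed when the job starts, so a glob plus `tail -f` is an easy way to follow the newest log:

```bash
# Follow the most recently created SQuAD log as it is written.
tail -f "$(ls -t log/baseline_SQuAD_*_run_log_*.log | head -n 1)"
```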
- Train on TriviaQA using either frozen embeddings or fine-tuning, and evaluate continual learning:
  a. Fill out `PROJECT=<Repository Directory>` in the desired `.sbatch` file:
     - For frozen, use `sbatch baseline_TriviaQA_ContinualLearning_frozen.sbatch`
     - For fine-tuning, use `sbatch baseline_TriviaQA_ContinualLearning_finetune.sbatch`
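
Beyond the log files, SLURM's own tools report job state on the cluster:

```bash
# List your queued and running jobs.
squeue -u "$USER"
# Summarize today's jobs (state and exit codes), one row per allocation.
sacct -X --starttime today
```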
- Outputs will be found in `results` in the following sub-directories:
  a. `cached_data` - cached data as `.pt` files
  b. `json_results` - F1 scores for plotting as `.json` files
  c. `logged/<model-name>/<task-name>` - model state dictionaries as `.pt` files
  d. `plots` - plots of results as `.png` files
- Monitor the run using `log/baseline_TriviaQA_ContinualLearning_<frozen/finetune>_run_log_<date>_<time>.log`.
- Perform meta-learning with `sbatch Meta.sbatch`.
- Meta-learned weights can be found in `results/meta_weights/meta_meta_weights.pt`.
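
A quick sanity check that the file is loadable (assuming PyTorch is installed in the `cbert` environment, which the training scripts require anyway; if the file holds a state dictionary, this prints its entry count):

```bash
# Inspect the meta-learned weights without starting a training run.
python -c "import torch; sd = torch.load('results/meta_weights/meta_meta_weights.pt', map_location='cpu'); print(len(sd), 'entries')"
```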
- Monitor the run using `log/meta_meta_run_log_<date>_<time>.log`.
- Train on SQuAD with cBERT using either frozen embeddings or fine-tuning:
  a. Fill out `PROJECT=<Repository Directory>` in the desired `.sbatch` file:
     - For frozen, use `sbatch cBERT_SQuAD_frozen.sbatch`
     - For fine-tuning, use `sbatch cBERT_SQuAD_finetune.sbatch`
- Outputs will be found in `results` in the following sub-directories:
  a. `cached_data` - cached data as `.pt` files
  b. `logged/<model-name>/<task-name>` - model state dictionaries as `.pt` files
- Monitor the run using `log/cbert_SQuAD_<frozen/finetune>_run_log_<date>_<time>.log`.
- Train on TriviaQA with cBERT using either frozen embeddings or fine-tuning, and evaluate continual learning:
  a. Fill out `PROJECT=<Repository Directory>` in the desired `.sbatch` file:
     - For frozen, use `sbatch cBERT_TriviaQA_ContinualLearning_frozen.sbatch`
     - For fine-tuning, use `sbatch cBERT_TriviaQA_ContinualLearning_finetune.sbatch`
- Outputs will be found in `results` in the following sub-directories:
  a. `cached_data` - cached data as `.pt` files
  b. `json_results` - F1 scores for plotting as `.json` files
  c. `logged/<model-name>/<task-name>` - model state dictionaries as `.pt` files
  d. `plots` - plots of results as `.png` files
- Monitor the run using `log/cbert_TriviaQA_ContinualLearning_<frozen/finetune>_run_log_<date>_<time>.log`.