Cannot reproduce model #11

Open
hajepe opened this issue Jul 23, 2021 · 0 comments
hajepe commented Jul 23, 2021

Hi,

My goal is to reproduce the model using the settings defined in the config file imojie.json.
I ran the suggested command: python3 allennlp_script.py --param_path imojie/configs/imojie.json --s models/imojie --mode train_test
As for the data, since this was only a test run, I used the first 100 instances from the data file data/train/4cr_qpbo_extractions.tsv.
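For reference, a minimal way to produce such a subset, assuming one instance per line (the output filename below is just an example), is:

head -n 100 data/train/4cr_qpbo_extractions.tsv > data/train/4cr_qpbo_extractions_100.tsv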

The issue can be seen in the log below: training hangs after the validation pass, and the validation metrics (carb_auc, carb_f1, carb_sum) stay at zero, which seems off. Could you give some direction on what the problem might be?

INFO:allennlp.common.params:CURRENTLY DEFINED PARAMETERS: 
INFO:allennlp.common.params:trainer.optimizer.lr = 0.001
INFO:allennlp.common.registrable:instantiating registered subclass bert_adam of <class 'allennlp.training.optimizers.Optimizer'>
WARNING:pytorch_pretrained_bert.optimization:t_total value of -1 results in schedule not being applied
INFO:allennlp.common.params:trainer.num_serialized_models_to_keep = 2
INFO:allennlp.common.params:trainer.keep_serialized_model_every_num_seconds = None
INFO:allennlp.common.params:trainer.model_save_interval = None
INFO:allennlp.common.params:trainer.summary_interval = 100
INFO:allennlp.common.params:trainer.histogram_interval = None
INFO:allennlp.common.params:trainer.should_log_parameter_statistics = True
INFO:allennlp.common.params:trainer.should_log_learning_rate = False
INFO:allennlp.common.params:trainer.log_batch_size_period = None
WARNING:allennlp.training.trainer:You provided a validation dataset but patience was set to None, meaning that early stopping is disabled
INFO:allennlp.training.trainer:Beginning training.
INFO:allennlp.training.trainer:Epoch 0/7
INFO:allennlp.training.trainer:Peak CPU memory usage MB: 3613.788
INFO:allennlp.training.trainer:GPU 0 memory usage MB: 1639
INFO:allennlp.training.trainer:Training
  0%|          | 0/1 [00:00<?, ?it/s]INFO:imojie.dataset_readers.copy_seq2multiseq:Reading instances from lines in file at: data/train/4cr_qpbo_extractions.tsv
loss: 2.0916 ||: : 2it [00:06,  3.01s/it]                     
INFO:allennlp.training.trainer:Validating
  0%|          | 0/1 [00:00<?, ?it/s]INFO:imojie.dataset_readers.copy_seq2multiseq:Reading instances from lines in file at: data/dev/carb/extractions.tsv
carb_auc: 0.0000, carb_f1: 0.0000, carb_sum: 0.0000, loss: 0.0000 ||: : 2it [05:10, 154.74s/it]                     
carb_auc: 0.0000, carb_f1: 0.0000, carb_sum: 0.0000, loss: 0.0000 ||: : 3it [08:05, 164.27s/it]
carb_auc: 0.0000, carb_f1: 0.0000, carb_sum: 0.0000, loss: 0.0000 ||: : 5it [13:20, 160.18s/it]
carb_auc: 0.0000, carb_f1: 0.0000, carb_sum: 0.0000, loss: 0.0000 ||: : 7it [18:22, 155.49s/it]
carb_auc: 0.0000, carb_f1: 0.0000, carb_sum: 0.0000, loss: 0.0000 ||: : 8it [20:59, 155.89s/it]
carb_auc: 0.0000, carb_f1: 0.0000, carb_sum: 0.0000, loss: 0.0000 ||: : 14it [36:21, 149.70s/it]
carb_auc: 0.0000, carb_f1: 0.0000, carb_sum: 0.0000, loss: 0.0000 ||: : 15it [38:51, 149.89s/it]
carb_auc: 0.0000, carb_f1: 0.0000, carb_sum: 0.0000, loss: 0.0000 ||: : 19it [49:52, 162.80s/it]
carb_auc: 0.0000, carb_f1: 0.0000, carb_sum: 0.0000, loss: 0.0000 ||: : 20it [52:50, 158.55s/it]
Traceback (most recent call last):