Nuswide64bitsSymm.log
2022-03-14 23:35:20,494 config: Namespace(K=256, M=8, T=0.4, alpha=10, batch_size=128, checkpoint_root='./checkpoints/Nuswide64bitsSymm', dataset='NUSWIDE', device='cuda:2', download_cifar10=False, epoch_num=50, eval_interval=1, feat_dim=128, final_lr=1e-05, hp_beta=0.01, hp_gamma=0.5, hp_lambda=0.01, is_asym_dist=False, lr=0.01, lr_scaling=0.001, mode='debias', momentum=0.9, monitor_counter=10, notes='Nuswide64bitsSymm', num_workers=10, optimizer='SGD', pos_prior=0.15, protocal='I', queue_begin_epoch=10, seed=2021, start_lr=1e-05, topK=5000, trainable_layer_num=0, use_scheduler=True, use_writer=True, vgg_model_path=None, warmup_epoch_num=1).
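The `Namespace(...)` dump above is the format `argparse` produces when a parsed arguments object is logged. A minimal sketch of how a parser reproducing a few of these flags might look (flag names and defaults are copied from the dump; the parser structure itself is an assumption, since the actual training script is not part of this log):

```python
import argparse

def build_parser():
    # Hypothetical reconstruction: only a handful of the flags from the
    # logged Namespace are shown; defaults mirror the values in the dump.
    p = argparse.ArgumentParser()
    p.add_argument("--dataset", default="NUSWIDE")
    p.add_argument("--K", type=int, default=256)        # 256 = 2^8, per the config dump
    p.add_argument("--M", type=int, default=8)          # M * log2(K) = 64 bits
    p.add_argument("--epoch_num", type=int, default=50)
    p.add_argument("--lr", type=float, default=0.01)
    p.add_argument("--pos_prior", type=float, default=0.15)
    p.add_argument("--monitor_counter", type=int, default=10)
    p.add_argument("--is_asym_dist", action="store_true")  # False here: "Symm" run
    return p

args = build_parser().parse_args([])  # empty argv -> all defaults, as in the dump
```

Note that `M=8` codebooks of `K=256` entries each give 8 × log2(256) = 64 bits per code, matching the "64bits" in the run name.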
2022-03-14 23:35:20,494 prepare NUSWIDE dataset.
2022-03-14 23:36:12,153 setup model.
2022-03-14 23:36:25,402 define loss function.
2022-03-14 23:36:25,413 setup SGD optimizer.
2022-03-14 23:36:25,413 prepare monitor and evaluator.
2022-03-14 23:36:25,417 begin to train model.
2022-03-14 23:36:25,421 register queue.
2022-03-15 00:18:30,146 epoch 0: avg loss=1.658505, avg quantization error=0.017235.
2022-03-15 00:18:30,146 begin to evaluate model.
2022-03-15 00:28:03,917 compute mAP.
2022-03-15 00:28:19,559 val mAP=0.817394.
2022-03-15 00:28:19,560 save the best model, db_codes and db_targets.
2022-03-15 00:28:22,806 finish saving.
2022-03-15 01:10:49,932 epoch 1: avg loss=1.052546, avg quantization error=0.018049.
2022-03-15 01:10:49,932 begin to evaluate model.
2022-03-15 01:20:12,155 compute mAP.
2022-03-15 01:20:27,373 val mAP=0.813015.
2022-03-15 01:20:27,374 the monitor loses its patience to 9!.
2022-03-15 02:02:55,073 epoch 2: avg loss=1.029275, avg quantization error=0.018443.
2022-03-15 02:02:55,073 begin to evaluate model.
2022-03-15 02:12:17,870 compute mAP.
2022-03-15 02:12:32,954 val mAP=0.814839.
2022-03-15 02:12:32,955 the monitor loses its patience to 8!.
2022-03-15 02:54:50,131 epoch 3: avg loss=1.020122, avg quantization error=0.018656.
2022-03-15 02:54:50,132 begin to evaluate model.
2022-03-15 03:04:21,437 compute mAP.
2022-03-15 03:04:36,987 val mAP=0.815221.
2022-03-15 03:04:36,988 the monitor loses its patience to 7!.
2022-03-15 03:46:20,091 epoch 4: avg loss=1.012483, avg quantization error=0.018817.
2022-03-15 03:46:20,091 begin to evaluate model.
2022-03-15 03:55:45,252 compute mAP.
2022-03-15 03:55:59,106 val mAP=0.812984.
2022-03-15 03:55:59,107 the monitor loses its patience to 6!.
2022-03-15 04:37:43,438 epoch 5: avg loss=1.003793, avg quantization error=0.018902.
2022-03-15 04:37:43,439 begin to evaluate model.
2022-03-15 04:47:03,494 compute mAP.
2022-03-15 04:47:20,306 val mAP=0.813361.
2022-03-15 04:47:20,307 the monitor loses its patience to 5!.
2022-03-15 05:29:41,336 epoch 6: avg loss=0.998376, avg quantization error=0.018968.
2022-03-15 05:29:41,337 begin to evaluate model.
2022-03-15 05:39:13,714 compute mAP.
2022-03-15 05:39:29,141 val mAP=0.813420.
2022-03-15 05:39:29,142 the monitor loses its patience to 4!.
2022-03-15 06:20:56,038 epoch 7: avg loss=0.995274, avg quantization error=0.019047.
2022-03-15 06:20:56,038 begin to evaluate model.
2022-03-15 06:30:21,286 compute mAP.
2022-03-15 06:30:36,981 val mAP=0.814717.
2022-03-15 06:30:36,987 the monitor loses its patience to 3!.
2022-03-15 07:12:17,655 epoch 8: avg loss=0.996195, avg quantization error=0.019063.
2022-03-15 07:12:17,656 begin to evaluate model.
2022-03-15 07:21:46,018 compute mAP.
2022-03-15 07:22:00,896 val mAP=0.816775.
2022-03-15 07:22:00,897 the monitor loses its patience to 2!.
2022-03-15 07:58:22,077 epoch 9: avg loss=0.988990, avg quantization error=0.019073.
2022-03-15 07:58:22,077 begin to evaluate model.
2022-03-15 08:04:31,696 compute mAP.
2022-03-15 08:04:38,554 val mAP=0.818123.
2022-03-15 08:04:38,554 save the best model, db_codes and db_targets.
2022-03-15 08:04:41,985 finish saving.
2022-03-15 08:32:54,208 epoch 10: avg loss=4.624856, avg quantization error=0.018657.
2022-03-15 08:32:54,208 begin to evaluate model.
2022-03-15 08:39:03,152 compute mAP.
2022-03-15 08:39:10,031 val mAP=0.819370.
2022-03-15 08:39:10,031 save the best model, db_codes and db_targets.
2022-03-15 08:39:13,445 finish saving.
2022-03-15 09:07:17,900 epoch 11: avg loss=4.624338, avg quantization error=0.018457.
2022-03-15 09:07:17,900 begin to evaluate model.
2022-03-15 09:13:26,290 compute mAP.
2022-03-15 09:13:32,980 val mAP=0.820745.
2022-03-15 09:13:32,980 save the best model, db_codes and db_targets.
2022-03-15 09:13:36,767 finish saving.
2022-03-15 09:41:17,993 epoch 12: avg loss=4.617456, avg quantization error=0.018508.
2022-03-15 09:41:17,993 begin to evaluate model.
2022-03-15 09:47:26,184 compute mAP.
2022-03-15 09:47:32,792 val mAP=0.818770.
2022-03-15 09:47:32,792 the monitor loses its patience to 9!.
2022-03-15 10:16:13,660 epoch 13: avg loss=4.612738, avg quantization error=0.018621.
2022-03-15 10:16:13,661 begin to evaluate model.
2022-03-15 10:22:22,009 compute mAP.
2022-03-15 10:22:29,156 val mAP=0.822323.
2022-03-15 10:22:29,156 save the best model, db_codes and db_targets.
2022-03-15 10:22:32,308 finish saving.
2022-03-15 10:50:27,421 epoch 14: avg loss=4.608610, avg quantization error=0.018645.
2022-03-15 10:50:27,422 begin to evaluate model.
2022-03-15 10:56:44,105 compute mAP.
2022-03-15 10:56:51,217 val mAP=0.818611.
2022-03-15 10:56:51,218 the monitor loses its patience to 9!.
2022-03-15 11:25:43,623 epoch 15: avg loss=4.604973, avg quantization error=0.018694.
2022-03-15 11:25:43,623 begin to evaluate model.
2022-03-15 11:31:56,792 compute mAP.
2022-03-15 11:32:04,354 val mAP=0.818964.
2022-03-15 11:32:04,355 the monitor loses its patience to 8!.
2022-03-15 12:00:12,877 epoch 16: avg loss=4.601783, avg quantization error=0.018699.
2022-03-15 12:00:12,878 begin to evaluate model.
2022-03-15 12:06:24,611 compute mAP.
2022-03-15 12:06:31,869 val mAP=0.820823.
2022-03-15 12:06:31,870 the monitor loses its patience to 7!.
2022-03-15 12:35:08,815 epoch 17: avg loss=4.600727, avg quantization error=0.018713.
2022-03-15 12:35:08,815 begin to evaluate model.
2022-03-15 12:41:19,943 compute mAP.
2022-03-15 12:41:26,901 val mAP=0.821163.
2022-03-15 12:41:26,902 the monitor loses its patience to 6!.
2022-03-15 13:10:05,774 epoch 18: avg loss=4.595379, avg quantization error=0.018773.
2022-03-15 13:10:05,774 begin to evaluate model.
2022-03-15 13:16:17,865 compute mAP.
2022-03-15 13:16:25,030 val mAP=0.820703.
2022-03-15 13:16:25,030 the monitor loses its patience to 5!.
2022-03-15 13:45:27,754 epoch 19: avg loss=4.593961, avg quantization error=0.018806.
2022-03-15 13:45:27,754 begin to evaluate model.
2022-03-15 13:51:40,888 compute mAP.
2022-03-15 13:51:48,300 val mAP=0.820709.
2022-03-15 13:51:48,301 the monitor loses its patience to 4!.
2022-03-15 14:19:51,943 epoch 20: avg loss=4.588047, avg quantization error=0.018817.
2022-03-15 14:19:51,943 begin to evaluate model.
2022-03-15 14:26:00,414 compute mAP.
2022-03-15 14:26:07,443 val mAP=0.821409.
2022-03-15 14:26:07,444 the monitor loses its patience to 3!.
2022-03-15 14:55:16,700 epoch 21: avg loss=4.584672, avg quantization error=0.018870.
2022-03-15 14:55:16,701 begin to evaluate model.
2022-03-15 15:01:24,392 compute mAP.
2022-03-15 15:01:31,057 val mAP=0.817938.
2022-03-15 15:01:31,057 the monitor loses its patience to 2!.
2022-03-15 15:30:40,992 epoch 22: avg loss=4.582548, avg quantization error=0.018919.
2022-03-15 15:30:40,992 begin to evaluate model.
2022-03-15 15:36:50,464 compute mAP.
2022-03-15 15:36:57,165 val mAP=0.820351.
2022-03-15 15:36:57,166 the monitor loses its patience to 1!.
2022-03-15 16:06:03,675 epoch 23: avg loss=4.575794, avg quantization error=0.018964.
2022-03-15 16:06:03,675 begin to evaluate model.
2022-03-15 16:12:12,224 compute mAP.
2022-03-15 16:12:18,857 val mAP=0.819666.
2022-03-15 16:12:18,858 the monitor loses its patience to 0!.
2022-03-15 16:12:18,859 early stop.
2022-03-15 16:12:18,859 free the queue memory.
2022-03-15 16:12:18,859 finish training at epoch 23.
2022-03-15 16:12:18,869 finish training, now load the best model and codes.
2022-03-15 16:12:19,365 begin to test model.
2022-03-15 16:12:19,365 compute mAP.
2022-03-15 16:12:26,251 test mAP=0.822323.
2022-03-15 16:12:26,252 compute PR curve and P@top5000 curve.
2022-03-15 16:12:41,599 finish testing.
2022-03-15 16:12:41,599 finish all procedures.
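The "loses its patience to N!" lines and the final "early stop" follow a standard patience-based early-stopping pattern: the counter starts at `monitor_counter=10`, decrements on every epoch whose validation mAP does not beat the best so far, resets when a new best is saved, and stops training when it hits 0 (here, at epoch 23, with the best checkpoint from epoch 13 reloaded for the test mAP of 0.822323). A minimal sketch of such a monitor, assuming this countdown semantics (the actual monitor class is not shown in the log):

```python
class PatienceMonitor:
    """Patience-based early stopping: count down on non-improving epochs,
    reset to full patience whenever validation mAP reaches a new best."""

    def __init__(self, patience=10):
        self.patience = patience
        self.counter = patience
        self.best = float("-inf")

    def update(self, val_map):
        """Return True if val_map is a new best (i.e. the model should be saved)."""
        if val_map > self.best:
            self.best = val_map
            self.counter = self.patience  # reset, as after "save the best model"
            return True
        self.counter -= 1  # log reads: "the monitor loses its patience to {counter}!"
        return False

    @property
    def should_stop(self):
        return self.counter <= 0  # log reads: "early stop."
```

Feeding it the mAP trajectory from epochs 13 onward (0.822323 best, then ten non-improving epochs) reproduces the stop at epoch 23.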