Hello, I used the basenet.pth you provided, and after running test I can reproduce the accuracy reported in the paper on DUT-TE.
I then tried loading basenet.pth directly in train and fine-tuning from it. The training set is still DUT-TR, and the code is unchanged.
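For reference, this is roughly the loading step I mean; a minimal sketch assuming a BASNet-style train.py (the import path and checkpoint location are placeholders, not necessarily the repo's exact code):

```python
import torch
from model import BASNet  # assumed import path; adjust to the repo's layout

# Build the model and load the released weights with strict key matching,
# so any architecture mismatch fails loudly instead of silently leaving
# some layers randomly initialized.
net = BASNet(3, 1)
state = torch.load("saved_models/basenet.pth", map_location="cpu")
net.load_state_dict(state, strict=True)
net.train()  # continue in training mode for fine-tuning
```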
What puzzles me is that the initial loss is very high, as shown below. Is this reasonable? The train loss clearly still has room to fall, so the model is far from fitting the training set, yet according to the paper the loss should already be small after 400k iterations. Have I done something wrong?
[epoch: 1/100000, batch: 32/10553, ite: 4] train loss: 8.034277, tar: 0.670928 l0: 0.826161, l1: 0.863950, l2: 0.952028, l3: 0.988486, l4: 1.222528, l5: 1.460253, l6: 1.663608
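For context on how I read this log line: I assume the reported train loss accumulates the per-side-output losses printed next to it, in the usual deep-supervision style. A hedged sketch of that pattern (names and the exact loss term are illustrative, not necessarily the repo's actual loss code):

```python
import torch
import torch.nn as nn

bce = nn.BCELoss(reduction="mean")

def fused_loss(side_outputs, labels):
    """Illustrative deep-supervision loss: one BCE term per decoder side
    output (l0..l6 in the log above), summed into the reported train loss.
    One term (e.g. the loss on the final fused map) is typically also
    tracked separately, which I take to be the 'tar' column."""
    losses = [bce(d, labels) for d in side_outputs]
    total = torch.stack(losses).sum()
    return losses[0], total
```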