@mobeixiaoxin why is your latest comment not shown here (only in email)? I will re-post it here so it may help others:
@MaxChu719, first, thanks for your multi-scale code! I used the multi-scale code you published, and it reaches 90.8% on the validation set as in the paper, but the result on the test set is only 91.6%. I don't think there is any problem with the multi-scale testing code you published. I wonder if the gap is caused by the 92.3% result submitted in the author's paper using a different dataset split than the one on GitHub: the 92.3% model was trained using the trainval.json file, while the 90.33% model was trained with train.json. The biggest difference is the larger amount of data in the training set. Maybe that's why; I'm going to give it a try.
I have implemented multi-scale testing, and I have verified that the MPII validation-set accuracy is 90.75%.
I then went on to apply it to the test set, and the accuracy I got was only:
This is not the 92.3% reported in the paper. Below is the code I used for the multi-scale testing:
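The original snippet is not reproduced here, but for context, multi-scale testing for heatmap-based pose estimation is typically done by running the network on the image at several scales, resizing the predicted heatmaps back to a common resolution, and averaging them before taking the per-joint argmax. A minimal sketch of that scheme (not the author's actual code; `run_model`, `scales`, and `stride` are hypothetical names, and nearest-neighbour resizing stands in for `cv2.resize`):

```python
import numpy as np

def resize_nn(arr, out_h, out_w):
    """Nearest-neighbour resize of a 2-D array (stand-in for cv2.resize)."""
    ys = np.arange(out_h) * arr.shape[0] // out_h
    xs = np.arange(out_w) * arr.shape[1] // out_w
    return arr[np.ix_(ys, xs)]

def multi_scale_heatmaps(run_model, image, scales=(0.8, 1.0, 1.2), stride=4):
    """Average per-joint heatmaps predicted at several input scales.

    `run_model` is a hypothetical callable mapping an image of shape (H, W)
    to heatmaps of shape (K, H // stride, W // stride) for whatever input
    size it is given. Each scale's heatmaps are resized back to the base
    resolution, then averaged across scales.
    """
    H, W = image.shape[:2]
    out_h, out_w = H // stride, W // stride
    acc = None
    for s in scales:
        scaled = resize_nn(image, max(1, int(H * s)), max(1, int(W * s)))
        hms = run_model(scaled)  # (K, h, w) at this scale
        hms = np.stack([resize_nn(hm, out_h, out_w) for hm in hms])
        acc = hms if acc is None else acc + hms
    return acc / len(scales)
```

Predictions are then decoded from the averaged heatmaps exactly as in single-scale testing (argmax per joint, plus any sub-pixel refinement the codebase applies).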