Accuracy of the pruned model #17
Comments
Are you training on ImageNet? You could finetune the model after pruning.
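For context, fine-tuning after pruning usually just means continuing supervised training on the pruned network at a reduced learning rate so the remaining channels can compensate for the removed ones. A minimal sketch, assuming a `pruned_model`, a `train_loader`, and illustrative hyperparameters (none of these come from the repository):

```python
import torch.nn as nn
import torch.optim as optim

def finetune(pruned_model, train_loader, epochs=10, lr=1e-3, device="cuda"):
    """Continue training the pruned network; all hyperparameters are illustrative."""
    pruned_model.to(device).train()
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(pruned_model.parameters(), lr=lr,
                          momentum=0.9, weight_decay=1e-4)
    for _ in range(epochs):
        for images, targets in train_loader:
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(pruned_model(images), targets)
            loss.backward()
            optimizer.step()
```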
Hi, thanks for the fast response. Yes, I am training on the ImageNet dataset. I plotted the BN scaling factors from the sparsely trained model and found that the resulting values are actually not sparse. Is this normal, or could something be wrong? Thanks for the advice.
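As a sanity check, one way to inspect the sparsity is to plot the distribution of the BN scaling factors (the gamma parameters) directly. A minimal sketch, assuming a `vgg11_bn` model and a checkpoint saved under a `"state_dict"` key; the checkpoint path and format are assumptions, not necessarily the repository's exact layout:

```python
# Sketch: plot the distribution of BN scaling factors (gamma) after sparse training.
# The checkpoint path, the "state_dict" key, and the vgg11_bn architecture are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models
import matplotlib.pyplot as plt

model = models.vgg11_bn()
checkpoint = torch.load("checkpoint.pth.tar", map_location="cpu")  # hypothetical path
# Strip a possible DataParallel "module." prefix before loading.
state_dict = {k.replace("module.", ""): v for k, v in checkpoint["state_dict"].items()}
model.load_state_dict(state_dict)

# Gather |gamma| from every BatchNorm2d layer.
gammas = torch.cat([m.weight.data.abs().flatten()
                    for m in model.modules() if isinstance(m, nn.BatchNorm2d)])

print(f"{(gammas < 1e-2).float().mean().item():.1%} of scaling factors are below 1e-2")
plt.hist(gammas.numpy(), bins=100)
plt.xlabel("|gamma|")
plt.ylabel("count")
plt.savefig("bn_gammas.png")
```

If most of the mass sits well away from zero, the model has not really been trained to be channel-sparse, which would make threshold-based pruning very damaging.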
Yes, I could fine-tune the model. I am new to model pruning and have no sense of the expected performance. Is this normal practice? Is the accuracy drop after the first round of pruning too large?
I am not exactly sure of the performance of the pruned model. One reason for the low accuracy could be that the sparsity loss factor of 0.00001 is too small. You can try increasing it.
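For reference, in the network-slimming formulation the sparsity term is an L1 penalty, s * sum(|gamma|), on the BN scaling factors, and a common way to apply it is to add its subgradient to the BN gradients right after the backward pass. A minimal sketch of that update; the helper name and where exactly it is called in main.py are assumptions:

```python
import torch
import torch.nn as nn

def add_bn_sparsity_grad(model: nn.Module, s: float) -> None:
    """Add the subgradient of the L1 penalty s * sum(|gamma|) to every BN scaling
    factor. A larger s drives more scaling factors toward zero, which is what the
    channel-pruning step relies on."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.grad.data.add_(s * torch.sign(m.weight.data))

# In the training loop (sketch):
#   loss.backward()
#   add_bn_sparsity_grad(model, s=1e-4)  # e.g. try a value larger than 1e-5
#   optimizer.step()
```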
Thanks for the reply.
Hi,
I have followed the code here and run sparse training with the command below:
python main.py --arch vgg11_bn --s 0.00001 --save [PATH TO SAVE RESULTS] [IMAGENET]
After training, the accuracy is 71.4%, which is fine. However, the pruned model's accuracy is almost 0 with a 0.5 pruning ratio. When I decrease the pruning ratio to 0.2, the top-1 accuracy increases to 15%, which is still far below expectations. Could you please advise whether this is normal or whether something could be wrong?
I would like to do one-shot pruning and do not want to prune iteratively.
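For reference, a one-shot prune at a given ratio typically maps that ratio to a global threshold over all BN scaling factors and keeps only the channels above it. A minimal sketch of that selection step; the function name and the mask-based representation are assumptions and may differ from the repository's prune script:

```python
import torch
import torch.nn as nn

def bn_channel_masks(model: nn.Module, prune_ratio: float) -> dict:
    """Map a global pruning ratio to per-layer channel masks by thresholding the
    BN scaling factors: channels whose |gamma| falls below the prune_ratio
    quantile of all scaling factors are marked for removal."""
    gammas = torch.cat([m.weight.data.abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)

    masks = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            mask = m.weight.data.abs() > threshold
            masks[name] = mask
            print(f"{name}: keep {int(mask.sum())}/{mask.numel()} channels")
    return masks
```

Under this view, if the scaling factors are not actually sparse, a 0.5 ratio will remove channels that still carry useful signal, which would be consistent with the near-zero accuracy reported above before any fine-tuning.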
Thanks for your reply.
Best regards,