Network slimming for ANN on MNIST #19
Which pruning method are you using?
Network slimming. Sorry, I forgot to mention that. (I am using an ANN with nn.Linear layers and BatchNorm1d layers.)
For BatchNorm1d, you can refer to the Network Slimming code for ImageNet.
Thanks... but I wanted to know whether it is possible to write the code for the BatchNorm1d and nn.Linear layers in the MLP without using a mask.
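One mask-free approach is to slice the surviving units directly out of each Linear/BatchNorm1d pair. The sketch below is hypothetical, not the repo's code: `prune_linear_bn` and `keep_idx` are names I made up, and it only handles one pair (the *next* Linear layer's input columns would need the same slicing).

```python
import torch
import torch.nn as nn

def prune_linear_bn(linear, bn, keep_idx):
    """Copy only the output units listed in keep_idx into smaller layers.

    keep_idx indexes the output features of `linear`, which are also
    the features of `bn`. No mask is used; the weights are sliced.
    """
    keep_idx = torch.as_tensor(keep_idx, dtype=torch.long)
    new_linear = nn.Linear(linear.in_features, len(keep_idx))
    new_linear.weight.data = linear.weight.data[keep_idx].clone()
    new_linear.bias.data = linear.bias.data[keep_idx].clone()

    new_bn = nn.BatchNorm1d(len(keep_idx))
    new_bn.weight.data = bn.weight.data[keep_idx].clone()
    new_bn.bias.data = bn.bias.data[keep_idx].clone()
    new_bn.running_mean = bn.running_mean[keep_idx].clone()
    new_bn.running_var = bn.running_var[keep_idx].clone()
    # note: the following nn.Linear's weight *columns* must also be
    # sliced with the same keep_idx for the shapes to line up
    return new_linear, new_bn

# keep units whose BN scaling factor |gamma| exceeds a threshold
lin, bn = nn.Linear(784, 256), nn.BatchNorm1d(256)
keep = torch.nonzero(bn.weight.data.abs() > 0.5).flatten()
new_lin, new_bn = prune_linear_bn(lin, bn, keep)
```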
Also, in main_finetune.py, under args.refine, you load the pruned model in order to finetune it. Why doesn't that lead to a size mismatch? The model would have the same number of channels as vgg, but the vggprune checkpoint you are loading has fewer channels in each layer, right?
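The usual reason there is no mismatch is that the pruned checkpoint stores its own layer widths (a `cfg`), and the finetune script rebuilds the model at those smaller widths *before* calling `load_state_dict`. The toy MLP below illustrates the pattern; `build_mlp` and the cfg values are my own invention, not the repo's code.

```python
import torch
import torch.nn as nn

def build_mlp(cfg, in_dim=784, num_classes=10):
    # build a Linear + BatchNorm1d + ReLU stack at the widths in cfg
    layers, prev = [], in_dim
    for width in cfg:
        layers += [nn.Linear(prev, width), nn.BatchNorm1d(width), nn.ReLU()]
        prev = width
    layers.append(nn.Linear(prev, num_classes))
    return nn.Sequential(*layers)

# the "pruned" network is saved together with its cfg
pruned_cfg = [120, 60]            # fewer units than an original [256, 128]
pruned = build_mlp(pruned_cfg)
checkpoint = {'cfg': pruned_cfg, 'state_dict': pruned.state_dict()}

# finetune side: rebuild at the pruned widths, then load -- shapes
# match by construction, so there is no size mismatch
model = build_mlp(checkpoint['cfg'])
model.load_state_dict(checkpoint['state_dict'])
```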
@arvind1096 Can you point to which pruning methods you are using, or which files you are running?
Thanks for asking. I resolved that issue two days ago. However, while trying to write MLPprune.py using the thresholding code (the threshold 'thre') from vggprune.py, my network removes almost all of the connections. For instance, where I had 200K connections before pruning, only 36 connections remain after pruning the MLP.

Note: I am not actually removing anything from the MLP; I merely want to zero out the nodes whose BatchNorm scaling factors fall below the threshold, so that the dimension of each layer does not change. Hence, I have a binary network of 1s and 0s, where the 1s are the preserved connections and everything else is zero (only 36 ones remain in my case). I am doing a project that treats the MLP as a graph and forms an adjacency matrix from it. I don't remove the zeroed-out nodes, since that would change the total number of nodes and therefore the shape of the adjacency matrix.

If you could share/upload the code for the pruned MLP model, it would really help me progress! If possible, could you attach code for an MLPprune.py under "cifar/network-slimming/" so that I can get a new MLP with roughly half the number of 1s as before (assuming args.percent is 0.5)?
Can you give me the code for your MLP network or describe it in detail? I am still not sure about the MLP structure you mentioned.
How do I do the pruning step for BatchNorm1d layers in an ANN, using the weights directly rather than a mask?
If possible, sample code for the nn.Linear and BatchNorm1d layers would be really helpful!
When I use the same code for BatchNorm1d, I get:

```
Traceback (most recent call last):
  File "MLPprune.py", line 156, in
    end_mask = cfg_mask[layer_id_in_cfg]
IndexError: list index out of range
```
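That IndexError typically means the traversal encountered more BatchNorm1d layers than there are entries in `cfg_mask` (for example, when a final layer that should be skipped is still counted). A toy reproduction with a bounds guard; the variable names mirror vggprune.py, but the loop itself is invented:

```python
# cfg_mask holds one 0/1 mask per pruned BatchNorm1d layer. If the
# loop sees a third BN layer while only two masks exist, indexing
# past the end raises IndexError -- the guard below prevents that.
cfg_mask = [[1.0, 0.0, 1.0], [1.0, 1.0]]   # masks for two BN layers
layer_id_in_cfg = 0
for _ in range(3):                         # three BN layers encountered
    if layer_id_in_cfg < len(cfg_mask):    # bounds guard
        end_mask = cfg_mask[layer_id_in_cfg]
        layer_id_in_cfg += 1
    # else: no mask left for this layer (e.g. the classifier); skip it
```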