This repository has been archived by the owner on Sep 18, 2024. It is now read-only.
example request: load weight example for pruned model #3527
I tried the following way but got a RuntimeError:

```python
dummy_input = torch.randn([1, 4, 3, 16, 16])
m_speedup = ModelSpeedup(device_model, dummy_input, 'all_edge_sparsity_0.05-mask.pth', torch.device('cpu'))
m_speedup.speedup_model()
model = m_speedup  # note: this assigns the ModelSpeedup helper object, not the torch module it modified
model.load_state_dict(torch.load('models.all_edge_sparsity_0.05-prune.pth'))
```
The first way is using … Or just …

For the speedup error, see #3645.
What would you like to be added:
An example showing how to load the weights of a pruned model.

Why is this needed:
The current design is confusing. I use Pruner.export_model to save the model weights and masks, but I don't know how to put them back together for inference.
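To make the requested flow concrete, here is a minimal end-to-end sketch of the save-then-load pattern: write the weights and the masks to two separate files, then at inference time reload both and zero out the pruned entries before using the weights. The file names, dict layouts, and use of `pickle` are all invented for illustration — NNI's `Pruner.export_model` writes torch checkpoints, not pickles like these:

```python
import os
import pickle
import tempfile

# Hypothetical stand-ins for the two files a pruner might export:
# one with model weights, one with binary masks.
weights = {"conv.weight": [1.0, 2.0, 3.0, 4.0]}
masks = {"conv.weight": [1, 1, 0, 1]}

tmpdir = tempfile.mkdtemp()
weight_path = os.path.join(tmpdir, "model.pth")
mask_path = os.path.join(tmpdir, "mask.pth")

with open(weight_path, "wb") as f:
    pickle.dump(weights, f)
with open(mask_path, "wb") as f:
    pickle.dump(masks, f)

# At inference time: load both files and zero out pruned weights.
with open(weight_path, "rb") as f:
    loaded_w = pickle.load(f)
with open(mask_path, "rb") as f:
    loaded_m = pickle.load(f)

pruned = {
    name: [w * m for w, m in zip(vals, loaded_m[name])]
    for name, vals in loaded_w.items()
}
print(pruned)  # {'conv.weight': [1.0, 2.0, 0.0, 4.0]}
```

The design point the sketch illustrates: the weight file alone is not enough for inference, because the checkpoint still contains the original (unmasked) values; the mask file must be applied on top, or the model must first be rebuilt to the pruned shapes before loading.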