V2 Model Compression experiment result #4365
Conversation
@@ -55,6 +55,29 @@ User configuration for Level Pruner

.. autoclass:: nni.algorithms.compression.v2.pytorch.pruning.LevelPruner

Performance Test
Are all of these performance numbers obtained from our example code?
Yes. The seed was set, though some hyperparameters may have been changed.
Would it be better to also provide the hyperparameters that you used?
The hyperparameters have now been provided.
* - Pruned VGG-16
  - 93.74%
  - 14.98M
  - 313.46M
Why do the parameters and FLOPs not change in the pruned model?
This is a fine-grained algorithm, so it has no speedup procedure: the masks zero out individual weights, but the tensor shapes (and therefore the reported parameter count and FLOPs) stay the same.
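To make that concrete, here is a minimal sketch of a fine-grained run with the v2 API. The config_list keys, the `num_classes` choice, and the layout of the returned masks are assumptions for illustration based on the class referenced in the diff, not details confirmed in this PR.

```python
import torchvision.models as models
from nni.algorithms.compression.v2.pytorch.pruning import LevelPruner

# Hypothetical setup: VGG-16 with 10 classes, as in a CIFAR-10-style experiment.
model = models.vgg16(num_classes=10)
# Prune 80% of the weights in every Conv2d/Linear layer, element by element.
config_list = [{'sparsity': 0.8, 'op_types': ['Conv2d', 'Linear']}]

pruner = LevelPruner(model, config_list)
masked_model, masks = pruner.compress()

# Fine-grained pruning only zeroes individual weights via the masks; the tensor
# shapes are unchanged, so parameters and FLOPs are reported as before.
total = sum(m['weight'].numel() for m in masks.values())
kept = int(sum(m['weight'].sum().item() for m in masks.values()))
print(f'non-zero weights after masking: {kept}/{total}')
```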
  -
* - Pruned VGG-16
  - 87.24%
  - 0.64M
Why are the parameters reduced so much with 80% sparsity?
The pruning behavior can differ depending on the pruner's mode setting, which has three types: 'normal', 'global' and 'dependency_aware'. In the intermediate config_list we can see that it prunes roughly 0.8 of each layer.
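As a rough illustration of how that per-layer sparsity can be checked, here is a sketch that inspects the masks. The use of L1NormPruner, the `mode` argument, and the mask layout are assumptions for the example; which modes a given pruner accepts may differ.

```python
import torchvision.models as models
from nni.algorithms.compression.v2.pytorch.pruning import L1NormPruner

model = models.vgg16(num_classes=10)
config_list = [{'sparsity': 0.8, 'op_types': ['Conv2d']}]

# mode is pruner-dependent; per the discussion it can be 'normal', 'global'
# or 'dependency_aware'. 'normal' applies the target sparsity to each layer.
pruner = L1NormPruner(model, config_list, mode='normal')
_, masks = pruner.compress()

# Check the realized sparsity of every pruned layer: in 'normal' mode each
# layer should come out close to the configured 0.8.
for name, mask in masks.items():
    weight_mask = mask['weight']
    sparsity = 1 - weight_mask.sum().item() / weight_mask.numel()
    print(f'{name}: sparsity ~ {sparsity:.2f}')
```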
* - Pruned VGG-16
  - 76.94%
  -
  -
Why are the two numbers missing?
Because the results were not saved at the time; they have now been added.
* - Pruned VGG-16
  - 33.6% (wo FT)
  -
  -
What does "wo FT" mean? And there are also two missing numbers here.
"wo FT" meant "without finetune": at first I wanted to report the result before finetuning, but this entry now follows the normal workflow: prune -> speedup -> finetune.
These results have already been updated, but the difference is small; I think we may need a more significant benchmark...
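For readers following the thread, below is a sketch of that prune -> speedup -> finetune workflow with the v2 API. The choice of L1NormPruner, the `_unwrap_model` call, and the input shape are my assumptions about how the example code is wired, so treat it as an outline rather than the exact benchmark script.

```python
import torch
import torchvision.models as models
from nni.algorithms.compression.v2.pytorch.pruning import L1NormPruner
from nni.compression.pytorch import ModelSpeedup

model = models.vgg16(num_classes=10)
config_list = [{'sparsity': 0.8, 'op_types': ['Conv2d']}]

# 1. Prune: compute masks for the configured layers.
pruner = L1NormPruner(model, config_list)
_, masks = pruner.compress()
pruner._unwrap_model()  # detach the pruner wrappers before speedup

# 2. Speedup: physically remove the masked channels so the parameter count
#    and FLOPs actually drop (the step that LevelPruner does not have).
dummy_input = torch.rand(1, 3, 32, 32)  # CIFAR-10-sized input, assumed
ModelSpeedup(model, dummy_input, masks).speedup_model()

# 3. Finetune: run a normal training loop on the smaller model to recover
#    accuracy (omitted here).
```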
Description
Some simple tests to measure the performance of the v2 pruners.
Checklist
How to test