
V2 Model Compression experiment result #4365

Closed
wants to merge 6 commits

Conversation

Fiascolsy
Contributor

Description

Some simple tests to measure the performance of the v2 pruners.

Checklist

  • test case
  • doc

How to test

@@ -55,6 +55,29 @@ User configuration for Level Pruner

.. autoclass:: nni.algorithms.compression.v2.pytorch.pruning.LevelPruner

Performance Test
Contributor

Are all these performance numbers obtained from our example code?

Contributor Author

Yes, the seed was set, and some hyper-parameters may have been changed from the example defaults.

Contributor

Would it be better to also provide the hyper-parameters that you used?

Contributor Author

The hyper-parameters have now been provided.

* - Pruned VGG-16
- 93.74%
- 14.98M
- 313.46M
Contributor

Why do the parameters and FLOPs not change in the pruned model?

Contributor Author

This is a fine-grained algorithm: it masks individual weights and has no speedup procedure, so the parameter and FLOP counts do not change.
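
To illustrate, here is a minimal sketch (the `VGG16` constructor and the layer types in the config are placeholders for whatever the example actually uses): a fine-grained pruner such as `LevelPruner` only attaches element-wise masks, so every weight tensor keeps its original shape until a speedup step physically shrinks the model.

```python
import torch
from nni.algorithms.compression.v2.pytorch.pruning import LevelPruner

model = VGG16()  # placeholder for the CIFAR-10 VGG-16 used in the example

# Fine-grained pruning: mask 80% of the individual weights in conv/linear layers.
config_list = [{'sparsity': 0.8, 'op_types': ['Conv2d', 'Linear']}]
pruner = LevelPruner(model, config_list)
masked_model, masks = pruner.compress()

# compress() only produces element-wise masks; the weight tensors keep their
# original shapes, so the reported #params / FLOPs stay the same unless a
# structured pruner plus ModelSpeedup is used to actually remove structures.
for name, mask in masks.items():
    kept = mask['weight'].mean().item()
    print(f"{name}: mask shape {tuple(mask['weight'].shape)}, kept {kept:.2f}")
```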

-
* - Pruned VGG-16
- 87.24%
- 0.64M
Contributor

Why are the parameters reduced so much with 80% sparsity?

Contributor Author

The pruning behavior may differ depending on the mode setting, which has three types: 'normal', 'global' and 'dependency_aware'. In the intermediate config_list we can see that it does prune nearly 0.8 of each layer.
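
For reference, a sketch of how the mode setting and the per-layer config_list look with the v2 API (the pruner class, sparsity value and input shape here are illustrative, not the exact settings used for the table):

```python
from nni.algorithms.compression.v2.pytorch.pruning import L1NormPruner

# Request 80% sparsity on every Conv2d layer.
config_list = [{'sparsity': 0.8, 'op_types': ['Conv2d']}]

# mode controls how the sparsity budget is allocated: 'normal' prunes each
# layer independently to its own target, while 'global' and 'dependency_aware'
# (the other two modes mentioned above) allocate across layers / respect
# channel dependencies.
pruner = L1NormPruner(model, config_list, mode='normal')  # model: placeholder
masked_model, masks = pruner.compress()

# In 'normal' mode each layer ends up pruned to roughly 0.8, which matches
# the intermediate config_list mentioned in the comment above.
for name, mask in masks.items():
    sparsity = 1 - mask['weight'].mean().item()
    print(f"{name}: sparsity ~= {sparsity:.2f}")
```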

* - Pruned VGG-16
- 76.94%
-
-
Contributor

Why are the two numbers missing?

Contributor Author

Because the result was not saved at that time; it has now been added.

* - Pruned VGG-16
- 33.6% (wo FT)
-
-
Contributor

What does "wo FT" mean? And there are also two missing numbers here.

Contributor Author

At first I wanted to report the result 'without finetune' ("wo FT"), but it has now been replaced with the normal result,
following the usual workflow: prune -> speedup -> finetune.
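
For completeness, a minimal sketch of that prune -> speedup -> finetune workflow with the v2 API (the model, input shape and finetuning loop are placeholders, not the exact experiment script):

```python
import torch
from nni.algorithms.compression.v2.pytorch.pruning import L1NormPruner
from nni.compression.pytorch.speedup import ModelSpeedup

dummy_input = torch.rand(1, 3, 32, 32)  # CIFAR-10-shaped input

# 1. Prune: attach masks according to the config_list.
config_list = [{'sparsity': 0.8, 'op_types': ['Conv2d']}]
pruner = L1NormPruner(model, config_list)  # model: placeholder
_, masks = pruner.compress()

# 2. Speedup: remove the compression wrappers, then physically shrink the
#    layers so the parameter and FLOP numbers in the table actually drop.
pruner._unwrap_model()
ModelSpeedup(model, dummy_input, masks).speedup_model()

# 3. Finetune: an ordinary training loop on the now-smaller model (elided).
# train(model, optimizer, train_loader, epochs=...)
```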

@liuzhe-lz liuzhe-lz mentioned this pull request Dec 6, 2021
86 tasks
@liuzhe-lz liuzhe-lz marked this pull request as draft December 17, 2021 08:36
@J-shang J-shang marked this pull request as ready for review January 6, 2022 10:01
@J-shang
Contributor

J-shang commented Jan 6, 2022

These results have already been updated. But the differences between the results are small; I think maybe we need a more significant benchmark...

@J-shang J-shang mentioned this pull request Jan 10, 2022
51 tasks

4 participants