This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

Commit

fix comments
J-shang committed Nov 10, 2021
1 parent 4e83683 commit d15fa2d
Showing 4 changed files with 29 additions and 22 deletions.
6 changes: 4 additions & 2 deletions docs/en_US/Compression/v2_pruning_algo.rst
@@ -402,12 +402,12 @@ User configuration for Simulated Annealing Pruner
Auto Compress Pruner
--------------------

- For each round, AutoCompressPruner prune the model for the same sparsity to achive the overall sparsity:
+ For a total iteration number :math:`N`, AutoCompressPruner prunes the model that survived the previous iteration with a fixed per-iteration sparsity ratio (e.g., :math:`1-{(1-0.8)}^{(1/N)}`) to achieve the overall sparsity (e.g., :math:`0.8`):

.. code-block:: bash
1. Generate sparsities distribution using SimulatedAnnealingPruner
- 2. Perform ADMM-based structured pruning to generate pruning result for the next round.
+ 2. Perform ADMM-based pruning to generate the pruning result for the next iteration.
For more details, please refer to `AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates <https://arxiv.org/abs/1907.03141>`__.
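The per-iteration ratio arithmetic above can be sketched in plain Python (an illustrative sketch; `per_iteration_sparsity` and `remaining_after` are hypothetical helper names, not part of the NNI API):

```python
def per_iteration_sparsity(overall_sparsity: float, total_iteration: int) -> float:
    # Fixed sparsity ratio applied to the weights surviving each iteration so
    # that compounding over `total_iteration` rounds reaches `overall_sparsity`.
    return 1 - (1 - overall_sparsity) ** (1 / total_iteration)

def remaining_after(overall_sparsity: float, total_iteration: int) -> float:
    # Fraction of weights left after all iterations; each round prunes the
    # same ratio of the survivors of the previous round.
    ratio = per_iteration_sparsity(overall_sparsity, total_iteration)
    remaining = 1.0
    for _ in range(total_iteration):
        remaining *= 1 - ratio
    return remaining
```

For example, with an overall sparsity of 0.8 spread over 4 iterations, each iteration prunes roughly 33% of the weights remaining from the previous iteration.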

@@ -432,6 +432,8 @@ Usage
pruner.compress()
_, model, masks, _, _ = pruner.get_best_result()
The full script can be found :githublink:`here <examples/model_compress/pruning/v2/auto_compress_pruner.py>`.

User configuration for Auto Compress Pruner
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

@@ -52,7 +52,7 @@ class AutoCompressPruner(IterativePruner):
evaluator : Callable[[Module], float]
Evaluate the pruned model and give a score.
admm_params : Dict
- The parameters pass to the ADMMPruner.
+ The parameters passed to the ADMMPruner.
- trainer : Callable[[Module, Optimizer, Callable].
A callable function used to train model or just inference. Take model, optimizer, criterion as input.
@@ -68,7 +68,7 @@ class AutoCompressPruner(IterativePruner):
The epoch number for training model in each iteration.
sa_params : Dict
- The parameters pass to the SimulatedAnnealingPruner.
+ The parameters passed to the SimulatedAnnealingPruner.
- evaluator : Callable[[Module], float]. Required.
Evaluate the pruned model and give a score.
@@ -77,7 +77,7 @@ class AutoCompressPruner(IterativePruner):
- stop_temperature : float. Default: `20`.
Stop temperature of the simulated annealing process.
- cool_down_rate : float. Default: `0.9`.
- Cool down rate of the temperature.
+ Cooldown rate of the temperature.
- perturbation_magnitude : float. Default: `0.35`.
Initial perturbation magnitude to the sparsities. The magnitude decreases with current temperature.
- pruning_algorithm : str. Default: `'level'`.
@@ -86,15 +86,16 @@ class AutoCompressPruner(IterativePruner):
If the pruner corresponding to the chosen pruning_algorithm has extra parameters, put them as a dict to pass in.
log_dir : str
- The log directory use to saving the result, you can find the best result under this folder.
+ The log directory used to save the result; you can find the best result under this folder.
keep_intermediate_result : bool
If keeping the intermediate result, including intermediate model and masks during each iteration.
finetuner : Optional[Callable[[Module], None]]
- The finetuner handled all finetune logic, use a pytorch module as input, will be called in each iteration.
+ The finetuner handles all finetune logic, takes a pytorch module as input.
+ It will be called at the end of each iteration, usually for neutralizing the accuracy loss brought by the pruning in this iteration.
speed_up : bool
- If set True, speed up the model in each iteration.
+ If set True, speed up the model at the end of each iteration to make the pruned model compact.
dummy_input : Optional[torch.Tensor]
- If `speed_up` is True, `dummy_input` is required for trace the model in speed up.
+ If `speed_up` is True, `dummy_input` is required for tracing the model in speed up.
"""

def __init__(self, model: Module, config_list: List[Dict], total_iteration: int, admm_params: Dict,
@@ -25,10 +25,11 @@ class PruningScheduler(BasePruningScheduler):
Used to generate task for each iteration.
finetuner
The finetuner handled all finetune logic, use a pytorch module as input.
It will be called at the end of each iteration if reset_weight is False, will be called at the beginning of each iteration otherwise.
speed_up
- If set True, speed up the model in each iteration.
+ If set True, speed up the model at the end of each iteration to make the pruned model compact.
dummy_input
- If `speed_up` is True, `dummy_input` is required for trace the model in speed up.
+ If `speed_up` is True, `dummy_input` is required for tracing the model in speed up.
evaluator
Evaluate the pruned model and give a score.
If evaluator is None, the best result refers to the latest result.
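The per-iteration calling order documented above can be sketched in plain Python (an illustrative sketch of the documented behavior, not the real `PruningScheduler` implementation; `run_schedule`, `prune`, and `tasks` are hypothetical names):

```python
def run_schedule(tasks, prune, finetuner=None, evaluator=None, reset_weight=False):
    # `prune` stands in for one pruning pass: it takes the current task and the
    # model from the previous iteration and returns the pruned model.
    best_score, best_result, model = float("-inf"), None, None
    for task in tasks:
        if reset_weight and finetuner is not None and model is not None:
            finetuner(model)   # with weight reset, finetune at the beginning of the iteration
        model = prune(task, model)
        if not reset_weight and finetuner is not None:
            finetuner(model)   # otherwise finetune at the end of the iteration
        if evaluator is not None:
            score = evaluator(model)
            if score > best_score:
                best_score, best_result = score, model
        else:
            best_result = model  # no evaluator: the best result is the latest result
    return best_result
```

The sketch omits mask generation and speed-up; it only illustrates where the finetuner and evaluator hooks fire and how the best result is tracked.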
25 changes: 14 additions & 11 deletions nni/algorithms/compression/v2/pytorch/pruning/iterative_pruner.py
@@ -86,11 +86,12 @@ class LinearPruner(IterativePruner):
keep_intermediate_result : bool
If keeping the intermediate result, including intermediate model and masks during each iteration.
finetuner : Optional[Callable[[Module], None]]
- The finetuner handled all finetune logic, use a pytorch module as input, will be called in each iteration.
+ The finetuner handles all finetune logic, takes a pytorch module as input.
+ It will be called at the end of each iteration, usually for neutralizing the accuracy loss brought by the pruning in this iteration.
speed_up : bool
- If set True, speed up the model in each iteration.
+ If set True, speed up the model at the end of each iteration to make the pruned model compact.
dummy_input : Optional[torch.Tensor]
- If `speed_up` is True, `dummy_input` is required for trace the model in speed up.
+ If `speed_up` is True, `dummy_input` is required for tracing the model in speed up.
evaluator : Optional[Callable[[Module], float]]
Evaluate the pruned model and give a score.
If evaluator is None, the best result refers to the latest result.
@@ -131,11 +132,12 @@ class AGPPruner(IterativePruner):
keep_intermediate_result : bool
If keeping the intermediate result, including intermediate model and masks during each iteration.
finetuner : Optional[Callable[[Module], None]]
- The finetuner handled all finetune logic, use a pytorch module as input, will be called in each iteration.
+ The finetuner handles all finetune logic, takes a pytorch module as input.
+ It will be called at the end of each iteration, usually for neutralizing the accuracy loss brought by the pruning in this iteration.
speed_up : bool
- If set True, speed up the model in each iteration.
+ If set True, speed up the model at the end of each iteration to make the pruned model compact.
dummy_input : Optional[torch.Tensor]
- If `speed_up` is True, `dummy_input` is required for trace the model in speed up.
+ If `speed_up` is True, `dummy_input` is required for tracing the model in speed up.
evaluator : Optional[Callable[[Module], float]]
Evaluate the pruned model and give a score.
If evaluator is None, the best result refers to the latest result.
@@ -176,11 +178,12 @@ class LotteryTicketPruner(IterativePruner):
keep_intermediate_result : bool
If keeping the intermediate result, including intermediate model and masks during each iteration.
finetuner : Optional[Callable[[Module], None]]
- The finetuner handled all finetune logic, use a pytorch module as input, will be called in each iteration.
+ The finetuner handles all finetune logic, takes a pytorch module as input.
+ It will be called at the end of each iteration if reset_weight is False, and at the beginning of each iteration otherwise.
speed_up : bool
- If set True, speed up the model in each iteration.
+ If set True, speed up the model at the end of each iteration to make the pruned model compact.
dummy_input : Optional[torch.Tensor]
- If `speed_up` is True, `dummy_input` is required for trace the model in speed up.
+ If `speed_up` is True, `dummy_input` is required for tracing the model in speed up.
evaluator : Optional[Callable[[Module], float]]
Evaluate the pruned model and give a score.
If evaluator is None, the best result refers to the latest result.
@@ -236,9 +239,9 @@ class SimulatedAnnealingPruner(IterativePruner):
finetuner : Optional[Callable[[Module], None]]
The finetuner handled all finetune logic, use a pytorch module as input, will be called in each iteration.
speed_up : bool
- If set True, speed up the model in each iteration.
+ If set True, speed up the model at the end of each iteration to make the pruned model compact.
dummy_input : Optional[torch.Tensor]
- If `speed_up` is True, `dummy_input` is required for trace the model in speed up.
+ If `speed_up` is True, `dummy_input` is required for tracing the model in speed up.
"""

def __init__(self, model: Module, config_list: List[Dict], evaluator: Callable[[Module], float], start_temperature: float = 100,
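The temperature parameters documented above (`start_temperature`, `stop_temperature`, `cool_down_rate`, `perturbation_magnitude`) imply a geometric cooling loop. Below is a minimal sketch, under the assumption that the perturbation magnitude scales linearly with the current temperature (the docstring only says it decreases); `cooling_schedule` is a hypothetical helper name, not an NNI API:

```python
def cooling_schedule(start_temperature: float = 100.0, stop_temperature: float = 20.0,
                     cool_down_rate: float = 0.9, perturbation_magnitude: float = 0.35):
    # Yield (temperature, magnitude) for each simulated-annealing round until
    # the temperature cools below `stop_temperature`.
    t = start_temperature
    while t > stop_temperature:
        # Assumed scheme: the magnitude shrinks in proportion to the temperature.
        yield t, perturbation_magnitude * t / start_temperature
        t *= cool_down_rate
```

With the documented defaults this yields 16 rounds before the temperature drops below the stop value.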
